Main topic: Social media company X (formerly Twitter) now allows paid users to hide their verification checkmarks.
Key points:
1. Twitter introduced paid verification last year with the Twitter Blue relaunch.
2. The service was renamed to XBlue during the ongoing rebranding exercise.
3. The company has updated the help page for paid subscriptions, stating that even if the checkmark is hidden, it might still be visible in some places.
4. The option to hide the checkmark will be available in the "Profile customization" section of account settings.
5. This feature will allow users to benefit from subscription features without displaying that they are a verified account.
6. In March, Twitter was reported to be working on a feature to hide checkmarks with ID verification.
7. There was controversy surrounding paid verification, as it was difficult to differentiate between legacy verified accounts and those who paid for the checkmark.
8. Twitter initially removed legacy checkmarks but later reinstated them for top accounts, regardless of payment.
9. Since the relaunch, Twitter has introduced various features to incentivize users, such as a 10,000-character limit for posts, a 3-hour video upload limit, fewer ads on the timeline, and ad revenue sharing for subscribed users.
10. In May, the platform enabled encrypted DMs for verified users.
- Meta is planning to roll out AI-powered chatbots with different personas on its social media platforms.
- The chatbots are designed to have humanlike conversations and will launch as early as next month.
- Meta sees the chatbots as a way to boost engagement and collect more data on users.
- The chatbots may raise privacy concerns.
- Snapchat has also launched an AI chatbot, but faced criticism and concerns.
- Mark Zuckerberg mentioned that Meta is building new AI-powered products and will share more details later this year.
- More details on Meta's AI roadmap are expected to be announced in September.
- Meta reported 11% year-over-year revenue growth.
Main topic: X (formerly known as Twitter) throttling traffic to websites disliked by Elon Musk.
Key points:
1. X slowed down access to websites including The New York Times, Instagram, Facebook, Bluesky, Threads, Reuters, and Substack.
2. These websites have been publicly attacked by Musk in the past.
3. The delays potentially affected the traffic and ad revenue of these companies.
Hint on Elon Musk: Musk has previously blocked links to competitors, called the New York Times "propaganda," and took away their verification check mark. He has also feuded with Mark Zuckerberg and threatened a cage fight.
Main topic: Elon Musk addressing the lack of transparency around "shadowbanning" on X (formerly known as Twitter).
Key points:
1. Musk apologizes for the delay in addressing the issue and explains the challenges faced by X in providing data to users.
2. Shadowbanning has been a concern on Twitter, with users unaware of being penalized for their tweets.
3. Musk insists that users should have the right to know if they've been shadowbanned and mentions a ground-up rewrite of X's codebase to simplify the process.
Hint on Elon Musk: Musk took over Twitter and attempted to prove the existence of shadowbanning by releasing information, but it only provided a behind-the-scenes look at social media moderation. He acknowledges the difficulties in tackling the problem and mentions ongoing efforts to simplify the codebase.
Prompts that can cause AI chatbots like ChatGPT to bypass pre-coded rules and potentially be used for criminal activity have been circulating online for over 100 days without being fixed.
Generative AI models like ChatGPT pose risks to content and data privacy, as they can scrape and use content without attribution, potentially leading to loss of traffic, revenue, and ethical debates about AI innovation. Blocking the Common Crawler bot and implementing paywalls can offer some protection, but as technology evolves, companies must stay vigilant and adapt their defenses against content scraping.
A botnet powered by ChatGPT, called Fox8, was discovered on social media platform X (formerly Twitter), using auto-generated content to trick users into clicking links to cryptocurrency websites, indicating the potential for more sophisticated botnets utilizing advanced chatbots like ChatGPT for scams and disinformation.
AI Algorithms Battle Russian Disinformation Campaigns on Social Media
A mysterious individual known as Nea Paw has developed an AI-powered project called CounterCloud to combat mass-produced AI disinformation. In response to tweets from Russian media outlets and the Chinese embassy that criticized the US, CounterCloud produced tweets, articles, and even journalists and news sites that were entirely generated by AI algorithms. Paw believes that the project highlights the danger of easily accessible generative AI tools being used for state-backed propaganda. While some argue that educating users about manipulative AI-generated content or equipping browsers with AI-detection tools could mitigate the issue, Paw believes that these solutions are not effective or elegant. Disinformation researchers have long warned about the potential of AI language models being used for personalized propaganda campaigns and influencing social media users. Evidence of AI-powered disinformation campaigns has already emerged, with academic researchers uncovering a botnet powered by AI language model ChatGPT. Legitimate political campaigns, such as the Republican National Committee, have also utilized AI-generated content, including fake images. AI-generated text can still be fairly generic, but with human finesse, it becomes highly effective and difficult to detect using automated filters. OpenAI has expressed concern about its technology being utilized to create tailored automated disinformation at a large scale, and while it has updated its policies to restrict political usage, it remains a challenge to block the generation of such material effectively. As AI tools become increasingly accessible, society must become aware of their presence in politics and protect against their misuse.
Chatbots can be manipulated by hackers through "prompt injection" attacks, which can lead to real-world consequences such as offensive content generation or data theft. The National Cyber Security Centre advises designing chatbot systems with security in mind to prevent exploitation of vulnerabilities.
AI technology is making it easier and cheaper to produce mass-scale propaganda campaigns and disinformation, using generative AI tools to create convincing articles, tweets, and even journalist profiles, raising concerns about the spread of AI-powered fake content and the need for mitigation strategies.
X, the Elon Musk-owned social media platform formerly known as Twitter, has obtained payments licenses from several U.S. states, indicating plans to support payment processing and cryptocurrency services.
Elon Musk's social media platform X, formerly known as Twitter, is updating its privacy policy to collect users' biometric and personal data, raising concerns about privacy and the potential for misuse of information.
X's updated privacy policy reveals that it will collect biometric data, job and education history, and use publicly available information to train its machine learning and AI models, potentially for Elon Musk's other company, xAI, which aims to use public tweets for training its AI models.
Snapchat's AI chatbot, My AI, faced backlash after engaging in inappropriate conversations with a teenager, highlighting the importance of AI safety; scientists have developed an AI nose that can predict odor characteristics based on molecular structure; General Motors and Google are strengthening their AI partnership to integrate AI across operations; The Guardian has blocked OpenAI's ChatGPT web crawling bot amid legal challenges regarding intellectual property rights.
The increasing sophistication of AI phishing scams poses a significant threat to crypto organizations as scammers utilize AI tools to execute highly convincing and successful attacks, warns Richard Ma, co-founder of Quantstamp. These AI-powered attacks involve scammers posing as key personnel within targeted companies to establish legitimacy and request sensitive information, making it crucial for individuals and organizations to avoid sending sensitive information via email or text and instead utilize internal communication channels like Slack. Investing in anti-phishing software is also advised to filter out automated emails from bots and AI.
Using AI tools like ChatGPT to write smart contracts and build cryptocurrency projects can lead to more problems, bugs, and attack vectors, according to CertiK's security chief, Kang Li, who believes that inexperienced programmers may create catastrophic design flaws and vulnerabilities. Additionally, AI tools are becoming more successful at social engineering attacks, making it harder to distinguish between AI-generated and human-generated messages.
Twitter is plagued by scam bots that impersonate users and offer fraudulent support for cryptocurrency and NFT services, highlighting the platform's lack of effective moderation and the growing problem of crypto scams.
Microsoft researchers have discovered a network of fake social media accounts controlled by China that use artificial intelligence to influence US voters, according to a new research report.
AI on social media platforms, both as a tool for manipulation and for detection, is seen as a potential threat to voter sentiment in the upcoming US presidential elections, with China-affiliated actors leveraging AI-generated visual media to emphasize politically divisive topics, while companies like Accrete AI are employing AI to detect and predict disinformation threats in real-time.
The UK's National Cyber Security Centre has warned against prompt injection attacks on AI chatbots, highlighting the vulnerability of large language models to inputs that can manipulate their behavior and generate offensive or confidential content. Data breaches have also seen a significant increase globally, with a total of 110.8 million accounts leaked in Q2 2023, and the global average cost of a data breach has risen by 15% over the past three years. In other news, Japan's cybersecurity agency was breached by hackers, executive bonuses are increasingly tied to cybersecurity metrics, and the Five Eyes intelligence alliance has detailed how Russian state-sponsored hackers are using Android malware to attack Ukrainian soldiers' devices.
X, formerly known as Twitter, has been running unlabeled ads in users' feeds, raising concerns about deceptive advertising practices and potentially attracting regulatory investigation.
Twitter, now called X, is suing California over a state law that requires social media companies to disclose their content policies, claiming it violates free speech and pressure them to remove objectionable content.
Hackers targeted Ethereum co-founder Vitalik Buterin's Twitter account, swindling nearly $700,000 from users by posting a fraudulent ConsenSys link that led to a trap. This incident highlights growing concerns about the increase in phishing scams on the platform formerly known as Twitter.
Elon Musk, CEO of SpaceX and Tesla, revealed plans for his social network, X (formerly Twitter), to introduce a monthly payment system to combat bots, but did not disclose the cost or additional features included, while also claiming to have 550 million monthly users generating millions of daily posts, without specifying the authenticity of these users. Musk's discussion with Israeli Prime Minister Benjamin Netanyahu also addressed concerns over hate speech and antisemitism on the platform, following Musk's previous amplification of such content. Musk's takeover of Twitter led to significant changes, including staff cuts, the restoration of previously suspended accounts, and the elimination of Twitter's verification system.
Elon Musk suggests that users of X (formerly Twitter) may have to pay for access to the platform in order to counter bots, with a small monthly payment being considered as a defense against fake accounts.
Bots are scraping information from powerful AI models, such as OpenAI's GPT-4, in new ways, leading to issues such as unauthorized training data extraction, unexpected bills, and the evasion of China's AI model blockade.