- Meta Platforms, formerly known as Facebook, is exploring the development of artificial intelligence (AI) products to assist creators in connecting with their fans.
- CEO Mark Zuckerberg mentioned the potential use of AI agents or chatbots to facilitate interactions between creators and their audiences.
- The company aims to create experiences that enable people to connect with the creators they admire and help creators build and nurture their communities.
- The specific AI products and features that Meta Platforms plans to develop for this purpose were not disclosed.
- This move aligns with Meta's broader strategy of focusing on the creator economy and enhancing user experiences on its platforms.
Instagram is launching a new feature to protect users from unwanted images and videos in DMs. Key points:
1. Users can now only send one DM request to someone who doesn't follow them.
2. DM invites are limited to text only until the recipient accepts the request to chat.
3. The new feature aims to prevent users from receiving unwanted images or videos and reduce repeated messages.
4. The feature is particularly beneficial for women who often receive unsolicited nudes.
5. Instagram already has existing restrictions, such as the "Hidden Words" setting and the "Limits" feature, to protect users from abuse and unwanted contact.
6. The "Restrict" setting allows users to monitor bullies without blocking them.
7. Meta is also introducing new parental control tools for Instagram and Messenger.
Meta is introducing non-personalized content feeds on Facebook and Instagram for users in the European Union in order to comply with the Digital Services Act, allowing users to switch off AI-driven "personalization" features that track and profile individuals. The move comes ahead of the August 25 deadline and follows a similar announcement by TikTok.
Meta, the company behind Facebook, is taking a different approach from other AI developers by releasing its AI models for download and free commercial use, sparking a larger debate about access control to AI models and their potential risks and benefits.
- Meta, formerly known as Facebook, is allowing users to request the deletion of personal information used to train its generative AI models via a new opt-out tool on its website.
AI models pose a challenge to data privacy: according to experts, it is difficult to remove user data from a trained model without retraining or deleting the model entirely, putting the technology on a collision course with privacy regulations that have not kept pace.
X's updated privacy policy reveals that it will collect biometric data and job and education history, and will use publicly available information to train its machine learning and AI models, potentially benefiting Elon Musk's other company, xAI, which aims to train its models on public tweets.
Big Tech companies are using personal data to train their AI systems, raising concerns about privacy and control over our own information, as users have little say in how their data is being used and companies often define their own rules for data usage.
Tech company Voyager Labs, known for using AI to predict crime, is facing a privacy lawsuit from Meta (formerly Facebook), which claims that Voyager Labs created thousands of fake accounts on Facebook and Instagram to gather personal data, setting up a legal battle that pits AI's potential public-safety uses against individual privacy rights.
Meta, formerly known as Facebook, is reportedly developing a powerful new AI model to compete with OpenAI's GPT-4 and catch up in the Silicon Valley AI race.
Companies such as Rev, Instacart, and others are updating their privacy policies to allow the collection of user data for training AI models like speech-to-text and generative AI tools.