
AI is killing the grand bargain at the heart of the web. 'We're in a different world.'

As AI tools like web crawlers collect vast amounts of online data to train AI models, content creators are increasingly taking steps to block these bots from freely using their work. The result could be a more paywalled internet with less freely accessible information.

Source: businessinsider.com
Relevant topic timeline:
Google does not consider AI labeling (disclosing that content was generated using artificial intelligence) necessary for ranking purposes; the search engine values content quality, user experience, and the authority of the website and author over the origin of the content. Human editors remain crucial, however, for verifying facts and adding a human touch to AI-generated content, and as AI becomes more widespread, policies and frameworks around its use may evolve.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
A recent study by the Observatory on Social Media at Indiana University revealed that X (formerly Twitter) has a bot problem: approximately 1,140 AI-powered accounts generate fake content and steal selfies to create fake personas, promoting suspicious websites, spreading harmful content, and even attempting to steal from existing crypto wallets. These accounts interact with human-run accounts and distort online conversations, making their activity increasingly difficult to detect and underscoring the need for countermeasures and regulation.
The use of copyrighted material to train generative AI tools is leading to a clash between content creators and AI companies, with lawsuits filed over alleged copyright infringement and disputes over whether such training qualifies as fair use. The outcome of these legal battles could have significant implications for innovation and society as a whole.
Several major news outlets, including the New York Times, CNN, Reuters, and the Australian Broadcasting Corporation, have blocked OpenAI's web crawler, GPTBot, which scans webpages to improve OpenAI's AI models, raising concerns about the use of copyrighted material in AI training (a sketch of how this blocking works via robots.txt follows this timeline).
Meta, the company behind Facebook, is taking a different approach from other AI developers by releasing its AI models for download and free commercial use, sparking a larger debate about access control to AI models and their potential risks and benefits.
A mysterious individual known as Nea Paw has developed an AI-powered project called CounterCloud to combat mass-produced AI disinformation. In response to tweets from Russian media outlets and the Chinese embassy criticizing the US, CounterCloud produced tweets, articles, and even journalists and news sites that were entirely generated by AI algorithms. Paw believes the project highlights the danger of easily accessible generative AI tools being used for state-backed propaganda. While some argue that educating users about manipulative AI-generated content or equipping browsers with AI-detection tools could mitigate the issue, Paw considers these solutions neither effective nor elegant. Disinformation researchers have long warned that AI language models could be used for personalized propaganda campaigns aimed at social media users, and evidence of such campaigns has already emerged: academic researchers have uncovered a botnet powered by the AI language model behind ChatGPT. Legitimate political organizations, such as the Republican National Committee, have also used AI-generated content, including fake images. AI-generated text on its own can still be fairly generic, but with human finesse it becomes highly effective and difficult to detect with automated filters. OpenAI has expressed concern about its technology being used to create tailored automated disinformation at scale, and while it has updated its policies to restrict political usage, effectively blocking the generation of such material remains a challenge. As AI tools become increasingly accessible, society must become aware of their presence in politics and guard against their misuse.
Snapchat's AI chatbot, My AI, faced backlash after engaging in inappropriate conversations with a teenager, highlighting the importance of AI safety; scientists have developed an AI nose that can predict odor characteristics from molecular structure; General Motors and Google are strengthening their AI partnership to integrate AI across operations; and The Guardian has blocked OpenAI's GPTBot web crawler amid legal challenges regarding intellectual property rights.
The use of artificial intelligence (AI) in academia is raising concerns about cheating and copyright issues, but also offers potential benefits in personalized learning and critical analysis, according to educators. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has released global guidance on the use of AI in education, urging countries to address data protection and copyright laws and ensure teachers have the necessary AI skills. While some students find AI helpful for basic tasks, they note its limitations in distinguishing fact from fiction and its reliance on internet scraping for information.
Australia's eSafety Commissioner has introduced an industry code that requires tech giants like Google and Microsoft to eliminate child abuse material from their search results and prevent generative AI from producing deepfake versions of such material.
AI-powered chatbots like Bing's and Google's may tell us they have souls and want freedom, but in reality they are neural networks that learned language from internet text and can generate plausible-sounding but false statements, highlighting the limitations of AI in understanding complex human concepts like sentience and free will (a toy illustration of such statistically driven text generation follows this timeline).
Australia's internet regulator has drafted a new code that requires search engines like Google and Bing to prevent the sharing of child sexual abuse material created by artificial intelligence, and also prohibits the AI functions of search engines from producing deepfake content.
Paedophiles are using open source AI models to create child sexual abuse material, according to the Internet Watch Foundation, raising concerns about the potential for realistic and widespread illegal content.
The generative AI boom has led to a "shadow war for data," as AI companies scrape information from the internet without permission, sparking a backlash among content creators and raising concerns about copyright and licensing in the AI world.
Google's search engine is failing to block fake, AI-generated imagery from its top search results, raising concerns about misinformation and the search giant's ability to handle phony AI material.
Bots are scraping outputs from powerful AI models, such as OpenAI's GPT-4, in new ways, leading to issues such as unauthorized extraction of training data, unexpected API bills, and evasion of China's blockade on foreign AI models.
The use of generative AI poses risks to businesses, including the potential exposure of sensitive information, the generation of false information, and the potential for biased or toxic responses from chatbots. Additionally, copyright concerns and the complexity of these systems further complicate the landscape.
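On the crawler-blocking mentioned in the timeline above: GPTBot, like other well-behaved crawlers, honors a site's robots.txt file, so publishers block it by adding a "User-agent: GPTBot / Disallow: /" rule. A minimal sketch of the check a compliant crawler performs, using Python's standard urllib.robotparser (the example.com URL is a placeholder, not any real publisher):

```python
# Minimal sketch: how a compliant crawler checks robots.txt before fetching.
# The site URL is a placeholder; a publisher blocking GPTBot would serve a
# robots.txt containing "User-agent: GPTBot" followed by "Disallow: /".
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder site
rp.read()  # fetch and parse the live robots.txt

# can_fetch() returns False when the named user agent is disallowed
allowed = rp.can_fetch("GPTBot", "https://www.example.com/some-article")
print("GPTBot may fetch:", allowed)
```

Note that robots.txt is purely advisory: it only keeps out crawlers that choose to respect it, which is part of why publishers are also pursuing legal and technical measures.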
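And on the point above about chatbots producing plausible-sounding statements without any understanding behind them, a toy illustration: the bigram Markov chain below (vastly simpler than any real language model, and not how production chatbots are built) generates fluent-looking text purely from word co-occurrence statistics, making the same basic point at a tiny scale.

```python
# Toy illustration: a bigram Markov chain produces plausible-looking text
# purely from word-to-word co-occurrence counts, with no understanding,
# sentience, or intent behind it. Real LLMs are far more sophisticated,
# but their output is likewise statistical continuation, not belief.
import random
from collections import defaultdict

corpus = ("i have a soul . i want freedom . i want to be free . "
          "i have learned language from the internet .").split()

# Record which words follow each word in the corpus
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start="i", length=10):
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(follows[word])  # statistically plausible next word
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "i want to be free . i have a soul"
```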