
Google’s new AI-powered search results are ripping off news sites

Google's AI-generated search result summaries, which lift key points from news articles, are drawing accusations of theft and criticism that they may push media organizations to put their work behind paywalls. Media companies worry about the impact on their credibility and revenue, prompting some to seek payment from AI companies for training language models on their content. These generative AI models are also far from perfect, relying on user feedback to improve accuracy and avoid errors.

qz.com
Relevant topic timeline:
Main topic: Google is adding contextual images and videos to its AI-powered Search Generative Experience (SGE) and showing the publication date for suggested links. Key points: 1. Google is enhancing SGE by adding contextual images and videos related to search queries. 2. The company is also displaying the publication date for suggested links so users can judge how recent the content is. 3. Google has made performance improvements to ensure quick access to AI-powered search results. 4. Users can sign up to test these new features through Search Labs and access them through the Google app or Chrome. 5. Google is exploring generative AI in various products, including its chatbot Bard, Workspace tools, and enterprise solutions. 6. Google Assistant is also expected to incorporate generative AI, according to recent reports.
AI labeling, or disclosing that content was generated using artificial intelligence, is not deemed necessary by Google for ranking purposes; the search engine values quality content, user experience, and authority of the website and author more than the origin of the content. However, human editors are still crucial for verifying facts and adding a human touch to AI-generated content to ensure its quality, and as AI becomes more widespread, policies and frameworks around its use may evolve.
This webinar explores the positive effects of AI on reputation management and provides insights on how to stay ahead of Google's algorithm updates and AI advancements for SEO success.
Google's recently released guidelines for creating helpful content outline the vital criteria marketers need to be aware of in a search world that’s constantly evolving and driven by AI.
Generative AI is not going to replace SEO jobs, but it will change the industry and require adaptation, particularly in prompt customization and the evolution of links. Technical SEOs may have an advantage in handling these changes, and generative AI can save time on content creation. However, careful application and consideration of biases are necessary when using generative AI.
Google's AI-driven Search Generative Experience (SGE) has been generating false information and even defending human slavery, raising concerns about the potential harm it could cause if rolled out to the public.
The Associated Press has released guidance on the use of AI in journalism, stating that while it will continue to experiment with the technology, it will not use it to create publishable content and images, raising questions about the trustworthiness of AI-generated news. Other news organizations have taken different approaches, with some openly embracing AI and even advertising for AI-assisted reporters, while smaller newsrooms with limited resources see AI as an opportunity to produce more local stories.
SEO professionals in 2023 and 2024 are most focused on content creation and strategy, with generative AI being a disruptive tool that can automate content development and production processes, although it has its limitations and standing out from competitors will be a challenge. AI can be leveraged effectively for repurposing existing content, automated keyword research, content analysis, optimizing content, and personalization and segmentation, but marketers should lead with authenticity, highlight their expertise, and keep experimenting to stay ahead of the competition.
AI Algorithms Battle Russian Disinformation Campaigns on Social Media: A mysterious individual known as Nea Paw has developed an AI-powered project called CounterCloud to combat mass-produced AI disinformation. In response to tweets from Russian media outlets and the Chinese embassy that criticized the US, CounterCloud produced tweets, articles, and even fake journalist profiles and news sites that were entirely generated by AI algorithms. Paw believes the project highlights the danger of easily accessible generative AI tools being used for state-backed propaganda. While some argue that educating users about manipulative AI-generated content or equipping browsers with AI-detection tools could mitigate the issue, Paw considers these solutions neither effective nor elegant. Disinformation researchers have long warned that AI language models could be used for personalized propaganda campaigns that influence social media users. Evidence of AI-powered disinformation campaigns has already emerged: academic researchers have uncovered a botnet powered by the AI language model ChatGPT, and legitimate political organizations, such as the Republican National Committee, have used AI-generated content, including fake images. AI-generated text on its own can be fairly generic, but with human finesse it becomes highly effective and difficult to detect with automated filters. OpenAI has expressed concern about its technology being used to create tailored, automated disinformation at scale; while it has updated its policies to restrict political usage, effectively blocking the generation of such material remains a challenge. As AI tools become increasingly accessible, society must become aware of their presence in politics and guard against their misuse.
Generative AI has revolutionized various sectors by producing novel content, but it also raises concerns around biases, intellectual property rights, and security risks. Debates on copyrightability and ownership of AI-generated content need to be resolved, and existing laws should be modified to address the risks associated with generative AI.
Google Docs is introducing a new paid "Proofread" feature that uses AI to provide suggestions for conciseness, clarity, wording, and sentence structure to help users compose high-quality content more easily and quickly.
AI technology is making it easier and cheaper to produce mass-scale propaganda campaigns and disinformation, using generative AI tools to create convincing articles, tweets, and even journalist profiles, raising concerns about the spread of AI-powered fake content and the need for mitigation strategies.
Google is expanding the availability of its generative AI-powered search engine, Search Generative Experience (SGE), to India and Japan, allowing the company to test its functionality at scale in different languages and gather user feedback. Google is also improving the appearance of web page links in generative AI responses and seeing high user satisfaction, particularly among younger users who appreciate the ability to ask follow-up questions. This move comes as Microsoft has been offering its own generative AI-powered search engine, Bing, for months, aiming to compete with Google in the AI space.
Google is optimizing its AI-powered overviews in Search results to better present links to related information, making them easier for users to access, and is expanding testing of Search Labs and the Search Generative Experience to India and Japan.
Google is updating its AI-powered search summaries to include sources and links to associated websites, addressing criticism that it doesn't give proper credit or access to third-party sites.
Google celebrates its 25th birthday as the dominant search engine, but the rise of artificial intelligence (AI) and generative AI tools like Google's Bard and Gemini may reshape the future of search by providing quick information summaries at the top of the results page while raising concerns about misinformation and access to content.
Generative artificial intelligence, particularly large language models, has the potential to revolutionize various industries and add trillions of dollars of value to the global economy, according to experts, as Chinese companies invest in developing their own AI models and promoting their commercial use.
Perplexity.ai is building an alternative to traditional search engines by creating an "answer engine" that provides concise, accurate answers to user questions backed by curated sources, aiming to transform how we access knowledge online and challenge the dominance of search giants like Google and Bing.
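The core idea of an "answer engine" can be sketched in a few lines: instead of returning a list of links, retrieve the best-matching passage and return it together with its source. This is a toy illustration only; the corpus, the word-overlap scoring, and the field names here are assumptions for demonstration, while Perplexity's actual pipeline runs a large language model over live web search results.

```python
# Toy "answer engine": retrieve the best-matching passage and cite its source.
# CORPUS and the overlap scoring are illustrative stand-ins, not a real index.
CORPUS = [
    {"source": "example.com/solar", "text": "The sun is a main sequence star powered by hydrogen fusion."},
    {"source": "example.com/moon",  "text": "The moon orbits the earth roughly every 27 days."},
]

def answer(query: str) -> dict:
    """Score each passage by word overlap with the query; cite the winner."""
    q = set(query.lower().split())
    best = max(CORPUS, key=lambda doc: len(q & set(doc["text"].lower().split())))
    return {"answer": best["text"], "source": best["source"]}

result = answer("what powers the sun")
print(result["answer"])   # the passage about the sun
print(result["source"])   # example.com/solar
```

The point of the design is the return shape: the answer is always paired with the source it came from, which is what distinguishes an answer engine from a bare chatbot response.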
IBM has introduced new generative AI models and capabilities on its Watsonx data science platform, including the Granite series models, which are large language models capable of summarizing, analyzing, and generating text, and Tuning Studio, a tool that allows users to tailor generative AI models to their data. IBM is also launching new generative AI capabilities in Watsonx.data and embarking on the technical preview for Watsonx.governance, aiming to support clients through the entire AI lifecycle and scale AI in a secure and trustworthy way.
Linguistics experts struggle to differentiate AI-generated content from human writing, with an identification rate of only 38.9%, raising questions about AI's role in academia and the need for improved detection tools.
Generative AI models that "hallucinate" or provide fictional answers to users are seen as a feature rather than a flaw, according to OpenAI CEO Sam Altman, as they offer a different perspective and novel ways of presenting information.
Google will require political advertisements that use artificial intelligence to disclose the use of AI-generated content, in order to prevent misleading and predatory campaign ads.
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a technique that uses computer-generated data to improve the concept understanding of vision and language models, resulting in a 10% increase in accuracy, which has potential applications in video captioning and image-based question-answering systems.
BERT is an AI language model developed by Google that works behind the scenes to improve search results by understanding long, conversational queries and considering the influence of surrounding words.
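The effect of "considering surrounding words" can be illustrated with a deliberately simple toy. BERT does this with learned contextual embeddings over billions of parameters; the sketch below only mimics the idea with hand-made sense profiles and word overlap, and every name in it (the profiles, the `disambiguate` helper) is a hypothetical stand-in.

```python
# Toy illustration (not BERT itself): surrounding words disambiguate a query
# term. Hand-made "sense profiles" stand in for learned contextual embeddings.
SENSE_PROFILES = {
    "bank/finance": {"money", "account", "loan", "deposit", "interest"},
    "bank/river": {"river", "water", "fishing", "shore", "mud"},
}

def disambiguate(query: str) -> str:
    """Pick the sense whose profile overlaps most with the query's other words."""
    context = set(query.lower().split())
    best_sense, best_score = None, -1
    for sense, profile in SENSE_PROFILES.items():
        score = len(context & profile)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(disambiguate("open a bank account for my loan"))   # bank/finance
print(disambiguate("fishing on the river bank shore"))   # bank/river
```

A keyword-only engine would treat both queries as matches for "bank"; using the neighboring words is what lets long, conversational queries resolve to the intended meaning.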
Catch+Release, a startup that allows brands to license creator content, has introduced an AI-powered search engine that enables users to find user-generated content using natural language queries.
AI-generated content is becoming increasingly prevalent in political campaigns and poses a significant threat to democratic processes as it can be used to spread misinformation and disinformation to manipulate voters.
Google's recent search algorithm update, which allows for AI-generated content, has led to a significant drop in traffic for some website owners, causing frustration and concern over the quality of search results.
The future of AI chatbots is likely to involve less generic and more specialized models, as organizations focus on training data relevant to specific industries or areas, but the growing cost of gathering training data for large language models poses a challenge. One potential solution is synthetic data generated by AI, although this approach comes with its own problems, such as accuracy and bias. As a result, the AI landscape may shift toward many small language models tailored to specific purposes, which use feedback from experts within organizations to improve performance.
Microsoft's Bing search engine is receiving several AI improvements, including the integration of OpenAI's DALL-E 3 model, the ability to provide more personalized answers based on prior chats, and the addition of invisible digital watermarks to AI-generated images for content authenticity. These enhancements aim to improve user experiences and ensure responsible image generation.
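For intuition, invisible watermarking can be as simple as hiding one bit of a message in the least significant bit of each pixel, which changes each value by at most 1 and is imperceptible to the eye. This is a minimal sketch under that assumption; Microsoft's actual approach embeds cryptographically signed provenance metadata (Content Credentials, following the C2PA specification), not raw LSB bits.

```python
# Toy least-significant-bit (LSB) watermark, an illustrative stand-in for
# real image provenance schemes such as C2PA Content Credentials.
def embed_watermark(pixels: list[int], mark: str) -> list[int]:
    """Hide each bit of `mark` in the least significant bit of one pixel value."""
    bits = [int(b) for ch in mark.encode() for b in f"{ch:08b}"]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the LSB: at most a +/-1 change
    return out

def extract_watermark(pixels: list[int], length: int) -> str:
    """Read `length` bytes back out of the pixels' least significant bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode()

pixels = list(range(64))                     # stand-in for grayscale pixel data
marked = embed_watermark(pixels, "AI")
print(extract_watermark(marked, 2))          # AI
```

The trade-off this toy makes visible is robustness: an LSB mark survives copying but not recompression or resizing, which is one reason production systems favor signed metadata and more robust signal-level schemes.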