- The AI Agenda is a new newsletter from The Information that focuses on the fast-paced world of artificial intelligence.
- The newsletter aims to provide daily insights on how AI is transforming various industries and the challenges it poses for regulators and content publishers.
- It will feature analysis from top researchers, founders, and executives, as well as scoops on deals and funding for key AI startups.
- The newsletter will cover advancements in AI technology such as ChatGPT and AI-generated video, and explore their impact on society.
- The goal is to provide readers with a clear understanding of the latest developments in AI and what to expect in the future.
The main topic is the emergence of AI in 2022, particularly in the areas of image and text generation. The key points are:
1. AI models like DALL-E, Midjourney, and Stable Diffusion have revolutionized image generation.
2. ChatGPT has made significant breakthroughs in text generation.
3. The history of previous tech epochs shows that disruptive innovations often come from new entrants in the market.
4. Existing companies like Apple, Amazon, Meta, Google, and Microsoft are well-positioned to capitalize on the AI epoch.
5. Each company has its own approach to AI, with Apple focusing on local deployment, Amazon on cloud services, Meta on personalized content, Google on search, and Microsoft on productivity apps.
During Q2 earnings calls, companies across various sectors discussed their use of artificial intelligence (AI) and how it could benefit their businesses; according to Goldman Sachs analysts, the aim was to draw investors' attention away from lackluster Q2 results and toward AI's potential to boost future earnings and sales.
Google does not consider AI labeling, or disclosing that content was generated using artificial intelligence, necessary for ranking purposes; the search engine values content quality, user experience, and the authority of the website and author more than the content's origin. Human editors remain crucial, however, for verifying facts and adding a human touch to AI-generated content, and as AI becomes more widespread, policies and frameworks around its use may evolve.
The Alliance of Motion Picture and Television Producers has proposed guidelines for the use of artificial intelligence (AI) and data transparency in the entertainment industry, stating that AI-created material cannot be considered literary material or protected intellectual property, and ensuring that credit, rights, and compensation for AI-assisted scripts go to the original human writer or rewriter.
The use of AI in healthcare has the potential to improve efficiency and reduce costs, but it may also lead to a lack of human compassion and communication with patients, which is crucial in delivering sensitive news and fostering doctor-patient relationships.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
Despite a lack of trust, people tend to support the use of AI-enabled technologies, particularly in areas such as police surveillance, due to factors like perceived effectiveness and the fear of missing out, according to a study published in PLOS One.
Artificial intelligence (AI) is seen as a tool that can inspire and collaborate with human creatives in the movie and TV industry, but concerns remain about copyright and ethical issues, according to Greg Harrison, chief creative officer at MOCEAN. Although AI has potential for visual brainstorming and automation of non-creative tasks, it should be used cautiously and in a way that values human creativity and culture.
The Global Artificial Intelligence Journalism Index (GAIJI) has released its final results, measuring how media outlets use AI journalism technologies to produce, publish, and promote content; AI applications were most widely used in the Americas and Europe, with Al-Arabiya and Al-Jazeera among the leading Arab media outlets.
Researchers at Virginia Tech have used AI and natural language processing to analyze 10 years of broadcasts and tweets from CNN and Fox News, revealing a surge in partisan and inflammatory language that influences public debates on social media and reinforces existing views, potentially driving a wedge in public discourse.
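The summary does not detail the researchers' method, but a minimal, hypothetical sketch of lexicon-based trend analysis over transcripts conveys the general idea; the term list, sample data, and function names below are illustrative placeholders, not the Virginia Tech team's actual pipeline.

```python
# Hypothetical sketch: track the yearly rate of "inflammatory" terms in
# broadcast transcripts. The toy lexicon and sample data are placeholders.
from collections import defaultdict

INFLAMMATORY_TERMS = {"radical", "corrupt", "disgraceful", "traitor"}  # toy lexicon

def yearly_rates(transcripts):
    """transcripts: iterable of (year, text) pairs -> {year: hits per 1,000 words}."""
    hits = defaultdict(int)
    words = defaultdict(int)
    for year, text in transcripts:
        tokens = text.lower().split()
        words[year] += len(tokens)
        hits[year] += sum(tok.strip(".,!?") in INFLAMMATORY_TERMS for tok in tokens)
    return {year: 1000 * hits[year] / words[year] for year in words if words[year]}

sample = [
    (2012, "The senators debated the budget in measured terms."),
    (2021, "That corrupt, disgraceful deal was pushed by radical insiders."),
]
print(yearly_rates(sample))  # a rising rate suggests a surge in charged language
```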
Local journalism is facing challenges due to the decline of revenue from advertising and subscriptions, but artificial intelligence (AI) has the potential to save time and resources for newsrooms and unlock value in the industry by optimizing content and improving publishing processes. AI adoption is crucial for the future of local news and can shape its development while preserving the important institutional and local knowledge that newsrooms provide.
AI Algorithms Battle Russian Disinformation Campaigns on Social Media
A mysterious individual known as Nea Paw has developed an AI-powered project called CounterCloud to combat mass-produced AI disinformation. In response to tweets from Russian media outlets and the Chinese embassy criticizing the US, CounterCloud produced tweets, articles, and even fake journalist profiles and news sites, all generated entirely by AI algorithms. Paw believes the project highlights the danger of easily accessible generative AI tools being used for state-backed propaganda. While some argue that educating users about manipulative AI-generated content, or equipping browsers with AI-detection tools, could mitigate the issue, Paw considers these solutions neither effective nor elegant.

Disinformation researchers have long warned that AI language models could be used for personalized propaganda campaigns and to influence social media users. Evidence of AI-powered disinformation campaigns has already emerged, with academic researchers uncovering a botnet powered by the AI language model ChatGPT. Legitimate political organizations, such as the Republican National Committee, have also used AI-generated content, including fake images. AI-generated text can still be fairly generic, but with human finesse it becomes highly effective and difficult to detect with automated filters.

OpenAI has expressed concern about its technology being used to create tailored, automated disinformation at large scale, and while it has updated its policies to restrict political usage, effectively blocking the generation of such material remains a challenge. As AI tools become increasingly accessible, society must become aware of their presence in politics and guard against their misuse.
AI technology is making it easier and cheaper to produce mass-scale propaganda campaigns and disinformation, using generative AI tools to create convincing articles, tweets, and even journalist profiles, raising concerns about the spread of AI-powered fake content and the need for mitigation strategies.
Newspaper chain Gannett has suspended the use of an artificial intelligence tool for writing high school sports dispatches after it generated several flawed articles. The AI service, called LedeAI, produced reports that were mocked on social media for their repetitive language, lack of detail, and odd phrasing. Gannett has paused its use of the tool across all the local markets that had been using it and stated that it continues to evaluate vendors to ensure the highest journalistic standards. This incident follows other news outlets pausing the use of AI in reporting due to errors and concerns about ethical implications.
Google's AI-generated search result summaries, which draw key points from news articles, are facing criticism for potentially incentivizing media organizations to put their work behind paywalls, and have prompted accusations of theft. Media companies are concerned about the impact on their credibility and revenue, prompting some to seek payment from AI companies for training language models on their content. However, these generative AI models are not perfect and require user feedback to improve accuracy and avoid errors.
Dezeen, an online architecture and design resource, has outlined its policy on the use of artificial intelligence (AI) in text and image generation, stating that while they embrace new technology, they do not publish stories that use AI-generated text unless it is focused on AI and clearly labeled as such, and they favor publishing human-authored illustrations over AI-generated images.
Meta's future growth relies heavily on AI as it aims to optimize its advertising offerings and emerge as a leader in AI-enhanced digital advertising, despite facing regulatory concerns and competition in the fast-moving AI landscape.
AI-assisted content production can help scale a content strategy without sacrificing quality by implementing a system based on three key principles: human-AI collaboration, quality-enhancement processes, and reduced production time, allowing content creators to generate high-quality articles more efficiently.
The ongoing strike by writers and actors in Hollywood may lead to the acceleration of artificial intelligence (AI) in the industry, as studios and streaming services could exploit AI technologies to replace talent and meet their content needs.
The use of AI in the entertainment industry, such as body scans and generative AI systems, raises concerns about workers' rights, intellectual property, and the potential for broader use of AI in other industries, infringing on human connection and privacy.
Perplexity.ai is building an alternative to traditional search engines by creating an "answer engine" that provides concise, accurate answers to user questions backed by curated sources, aiming to transform how we access knowledge online and challenge the dominance of search giants like Google and Bing.
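The summary describes the general "answer engine" pattern: retrieve curated sources, then compose a concise, cited answer. A minimal, hypothetical sketch of that retrieve-then-answer loop follows; the corpus, keyword scoring, and answer template are illustrative placeholders, not Perplexity's actual system, which would use learned retrieval and a language model to compose text.

```python
# Hypothetical sketch of the retrieve-then-answer pattern behind an "answer
# engine": rank curated sources for a question, then build a cited answer.
SOURCES = [
    {"url": "https://example.org/ai-overview",
     "text": "Generative AI models produce text and images from prompts."},
    {"url": "https://example.org/search-history",
     "text": "Traditional search engines return ranked lists of links."},
]

def retrieve(question, sources, k=1):
    """Score sources by keyword overlap with the question; return the top k."""
    q_terms = set(question.lower().split())
    ranked = sorted(sources,
                    key=lambda s: -len(q_terms & set(s["text"].lower().split())))
    return ranked[:k]

def answer(question):
    """Compose a concise answer with citations from the retrieved sources."""
    top = retrieve(question, SOURCES)
    cites = ", ".join(s["url"] for s in top)
    return f"{top[0]['text']} (sources: {cites})"

print(answer("How do generative AI models produce images?"))
```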
AI on social media platforms is seen as a potential threat to voter sentiment in the upcoming US presidential elections, serving both as a tool for manipulation and as a means of detection: China-affiliated actors are leveraging AI-generated visual media to emphasize politically divisive topics, while companies like Accrete AI employ AI to detect and predict disinformation threats in real time.
Artificial intelligence (AI) poses a high risk to the integrity of the election process, as evidenced by the use of AI-generated content in politics today, and there is a need for stronger content moderation policies and proactive measures to combat the use of AI in coordinated disinformation campaigns.
Blogger Samantha North uses AI tools to generate ideas and elements of her blogs, but still values the importance of human expertise and experience in creating valuable content for her readers.
Google will require political advertisements that use artificial intelligence to disclose the use of AI-generated content, in order to prevent misleading and predatory campaign ads.
Sony Pictures Entertainment CEO, Tony Vinciquerra, believes that artificial intelligence (AI) is a valuable tool for writers and actors, dismissing concerns that AI will replace human creativity in the entertainment industry. He emphasizes that AI can enhance productivity and speed up production processes, but also acknowledges the need to find a common ground with unions concerned about job loss and intellectual property rights.
The iconic entertainment site The A.V. Club received backlash for publishing AI-generated articles that were found to be copied verbatim from IMDb, raising concerns about the use of AI in journalism and its potential impact on human jobs.
China is using artificial intelligence to manipulate public opinion in democratic countries and influence elections, particularly targeting Taiwan's upcoming presidential elections, by creating false narratives and misinformation campaigns. AI technology enables China to produce persuasive language and imagery, making disinformation campaigns more plausible and harder to detect. The reports from RAND and Microsoft highlight the increasing sophistication of China's cyber and influence operations, which utilize AI-generated content to spread misleading narratives and establish Chinese state media as an authoritative voice.
Wikipedia founder Jimmy Wales is not concerned about the threat of AI, stating that current models like ChatGPT "hallucinate far too much" and struggle with grounding and providing accurate information. However, he believes that AI will continue to improve and sees potential for using AI technology to develop useful tools for Wikipedia's community volunteers.
Artificial intelligence should not be used in journalism due to the potential for generating fake news, undermining the principles of journalism, and threatening the livelihood of human journalists.
The Royal Photographic Society conducted a survey among its members, revealing that 95% believe traditional photography is still necessary despite the advancement of AI-generated images, and 81% do not consider images created by AI as "real photography," expressing concerns about stolen content and potential increase in fake news.
Artificial intelligence (AI) has become the new focus of concern for tech-ethicists, surpassing social media and smartphones, with exaggerated claims of AI's potential to cause the extinction of the human race. These fear-mongering tactics and populist misinformation have garnered attention and book deals for some, but are lacking in nuance and overlook the potential benefits of AI.
English actor and broadcaster Stephen Fry expresses concerns over AI and its potential impact on the entertainment industry, citing examples of his own voice being duplicated for a documentary without his knowledge or consent, and warns that the technology could be used for more dangerous purposes such as generating explicit content or manipulating political speeches.
AI-generated content is becoming increasingly prevalent in political campaigns and poses a significant threat to democratic processes as it can be used to spread misinformation and disinformation to manipulate voters.
More than half of journalists surveyed expressed concerns about the ethical implications of AI in their work, although they acknowledged its time-saving benefits, highlighting the need for human oversight and the particular challenges faced by newsrooms in the Global South.
According to Fox News' latest roundup of AI technology developments, Amazon will require publishers who use AI-generated content to disclose their use of the technology, small businesses are set to benefit from AI and cloud technologies, and President Biden warned the UN about the potential risks of AI and the need for its governance.
AI technology has the potential to assist writers in generating powerful and moving prose, but it also raises complex ethical and artistic questions about the future of literature.
The New York Times is implementing enhanced reporter bios to foster trust with readers and highlight the human aspect of their work as misinformation and generative AI become more prevalent in the media landscape.