
AI Widespread in Newsrooms Despite Concerns Over Bias and Accuracy: Survey

  • Global survey by JournalismAI shows AI adoption is widespread in newsrooms despite concerns about bias and accuracy.

  • Over 75% of surveyed newsrooms use AI in news gathering, production, and distribution to increase efficiency and reach wider audiences.

  • However, over 60% of respondents worried about ethical implications such as biased coverage and misrepresentation of marginalized groups.

  • News distribution sees widest AI use for search optimization and tailoring content, while news production uses AI for translation, proofreading, headlines.

  • Only about one-third of newsrooms have an AI strategy, even as tech giants rapidly adopt AI despite legal and ethical issues.

theverge.com
Relevant topic timeline:
The main topic of the article is the backlash against AI companies that use unauthorized creative work to train their models: the controversy surrounding Prosecraft, a linguistic-analysis site that used data scraped from pirated books without permission; the debate over fair use and copyright infringement in relation to AI projects; and the growing concern among writers and artists about generative AI tools replacing human creative work, along with a push for individual control over how their work is used.
The main topic is the increasing use of AI in manipulative information campaigns online. Mandiant has observed AI-generated content in politically motivated online influence campaigns since 2019, and generative AI models make it easier to create convincing fake videos, images, text, and code. While the impact of these campaigns has been limited so far, AI's role in digital intrusions is expected to grow.
A debate has arisen over whether AI-generated content should be labeled as such, but Google does not require AI labeling because it values quality content regardless of its origin. Over 85% of marketers use AI in their content-production workflow. Google can detect AI-generated content, yet it weighs content quality, user intent, and the authority of the website and author more heavily than how the content was produced. Human editors are still needed to verify facts and supply the natural language that Google prioritizes, and policies and frameworks may evolve as AI becomes more prevalent.
AI ethics refers to the system of moral principles and professional practices that guides the development and use of artificial intelligence technology. Top concerns for marketers include job security, privacy, bias and discrimination, misinformation and disinformation, and intellectual property, and five steps can help teams and organizations maintain ethical AI practices.
The Alliance of Motion Picture and Television Producers has proposed guidelines for the usage of artificial intelligence (AI) and data transparency in the entertainment industry, stating that AI-created material cannot be considered literary or intellectually protected, and ensuring that credit, rights, and compensation for AI-generated scripts are given to the original human writer or reworker.
The use of AI in healthcare has the potential to improve efficiency and reduce costs, but it may also lead to a lack of human compassion and communication with patients, which is crucial in delivering sensitive news and fostering doctor-patient relationships.
Artificial intelligence (AI) is seen as a tool that can inspire and collaborate with human creatives in the movie and TV industry, but concerns remain about copyright and ethical issues, according to Greg Harrison, chief creative officer at MOCEAN. Although AI has potential for visual brainstorming and automation of non-creative tasks, it should be used cautiously and in a way that values human creativity and culture.
The Global Artificial Intelligence Journalism Index (GAIJI) has released its final results, providing a measure of the performance of media outlets using AI journalism technologies to produce, publish, and promote content, with the most AI applications used in the Americas and Europe and leading Arab media outlets including Al-Arabiya and Al-Jazeera.
Researchers at Virginia Tech have used AI and natural language processing to analyze 10 years of broadcasts and tweets from CNN and Fox News, revealing a surge in partisan and inflammatory language that influences public debates on social media and reinforces existing views, potentially driving a wedge in public discourse.
Local journalism is facing challenges due to the decline of revenue from advertising and subscriptions, but artificial intelligence (AI) has the potential to save time and resources for newsrooms and unlock value in the industry by optimizing content and improving publishing processes. AI adoption is crucial for the future of local news and can shape its development while preserving the important institutional and local knowledge that newsrooms provide.
Artificial intelligence (AI) tools can put human rights at risk, as highlighted by researchers from Amnesty International on the Me, Myself, and AI podcast, who discuss scenarios in which AI is used to track activists and make automated decisions that can lead to discrimination and inequality, emphasizing the need for human intervention and changes in public policy to address these issues.
AI technology is making it easier and cheaper to produce mass-scale propaganda campaigns and disinformation, using generative AI tools to create convincing articles, tweets, and even journalist profiles, raising concerns about the spread of AI-powered fake content and the need for mitigation strategies.
Newspaper chain Gannett has suspended the use of an artificial intelligence tool for writing high school sports dispatches after it generated several flawed articles. The AI service, called LedeAI, produced reports that were mocked on social media for their repetitive language, lack of detail, and odd phrasing. Gannett has paused its use of the tool across all the local markets that had been using it and stated that it continues to evaluate vendors to ensure the highest journalistic standards. This incident follows other news outlets pausing the use of AI in reporting due to errors and concerns about ethical implications.
The ongoing strike by writers and actors in Hollywood may lead to the acceleration of artificial intelligence (AI) in the industry, as studios and streaming services could exploit AI technologies to replace talent and meet their content needs.
The use of AI in the entertainment industry, such as body scans and generative AI systems, raises concerns about workers' rights, intellectual property, and the potential for broader use of AI in other industries, infringing on human connection and privacy.
The use of AI in radio broadcasting has sparked a debate among industry professionals, with some expressing concerns about job loss and identity theft, while others see it as a useful tool to enhance creativity and productivity.
AI on social media platforms, both as a tool for manipulation and for detection, is seen as a potential threat to voter sentiment in the upcoming US presidential elections, with China-affiliated actors leveraging AI-generated visual media to emphasize politically divisive topics, while companies like Accrete AI are employing AI to detect and predict disinformation threats in real-time.
Artificial intelligence (AI) poses a high risk to the integrity of the election process, as evidenced by the use of AI-generated content in politics today, and there is a need for stronger content moderation policies and proactive measures to combat the use of AI in coordinated disinformation campaigns.
News Corp CEO Robert Thomson criticizes the left-wing bias and inaccuracies produced by AI-generated content, highlighting the threat it poses to the news industry and its potential to distribute damaging content.
AI systems are becoming increasingly adept at turning text into realistic and believable speech, raising questions about the ethical implications and responsibilities associated with creating and using these AI voices.
The iconic entertainment site The A.V. Club received backlash for publishing AI-generated articles that were found to be copied verbatim from IMDb, raising concerns about the use of AI in journalism and its potential impact on human jobs.
Artificial intelligence should not be used in journalism due to the potential for generating fake news, undermining the principles of journalism, and threatening the livelihood of human journalists.
Artificial intelligence (AI) has become the new focus of concern for tech-ethicists, surpassing social media and smartphones, with exaggerated claims of AI's potential to cause the extinction of the human race. These fear-mongering tactics and populist misinformation have garnered attention and book deals for some, but are lacking in nuance and overlook the potential benefits of AI.
AI-generated content is becoming increasingly prevalent in political campaigns and poses a significant threat to democratic processes as it can be used to spread misinformation and disinformation to manipulate voters.
AI technology has the potential to assist writers in generating powerful and moving prose, but it also raises complex ethical and artistic questions about the future of literature.
The New York Times is implementing enhanced reporter bios to foster trust with readers and highlight the human aspect of their work as misinformation and generative AI become more prevalent in the media landscape.
AI tools in science are becoming increasingly prevalent and have the potential to be crucial in research, but scientists also have concerns about the impact of AI on research practices and the potential for biases and misinformation.
Artificial intelligence (AI) has the potential to facilitate deceptive practices such as deepfake videos and misleading ads, posing a threat to American democracy, according to experts who testified before the U.S. Senate Rules Committee.
The proliferation of fake news generated by AI algorithms poses a threat to media outlets and their ability to differentiate between true and false information, highlighting the need for human curation and the potential consequences of relying solely on algorithms.
Artificial intelligence in the world of journalism is expected to significantly evolve and impact the industry over the next decade, according to Phillip Reese, an associate professor of journalism at Sacramento State.
AI-generated content is causing concern among writers, as it is predicted to disrupt their livelihoods and impact their careers, with over 1.4 billion jobs expected to be affected by AI in the next three years. However, while AI may change the writing industry, it is unlikely to completely replace writers, instead augmenting their work and providing tools to enhance productivity, according to OpenAI's ChatGPT.
Hollywood writers have reached a groundbreaking agreement that establishes guidelines for the use of artificial intelligence (AI) in film and television, ensuring that writers have control over the technology and protecting their roles from being replaced by AI. This contract could serve as a model for other industries dealing with AI.
China's use of artificial intelligence (AI) to manipulate social media and shape global public opinion poses a growing threat to democracies, as generative AI allows for the creation of more effective and believable content at a lower cost, with implications for the 2024 elections.
AI-generated disinformation poses a significant threat to elections and democracies worldwide, as the line between fact and fiction becomes increasingly blurred.
Artificial intelligence technology is making its way into the entertainment industry, with writers now having the freedom to incorporate AI software into their creative process, raising questions about its usefulness and the ability to differentiate between human and machine-generated content.
The rise of false and misleading information on social media, exacerbated by advances in artificial intelligence, has created an authenticity crisis that is eroding trust in traditional news outlets and deepening social and political divisions.
Newspapers and other data owners are demanding payment from AI companies like OpenAI, which have freely used news stories to train their generative AI models, in order to access their content and increase traffic to their websites.
Virtual news anchors, powered by artificial intelligence (AI), are on the rise around the world, with countries like South Korea, India, Greece, Kuwait, and Taiwan introducing AI newsreaders, but it remains to be seen whether these virtual presenters are here to stay or if they are just a passing marketing gimmick.
The impact of AI on publishing is causing concerns regarding copyright, the quality of content, and ownership of AI-generated works, although some authors and industry players feel the threat is currently minimal due to the low quality of AI-written books. However, concerns remain about legal issues, such as copyright ownership and AI-generated content in translation.
The publishing industry is grappling with concerns about the impact of AI on book writing, including issues of copyright, low-quality computer-written books flooding the market, and potential legal disputes over ownership of AI-generated content. However, some authors and industry players believe that AI still has a long way to go in producing high-quality fiction, and there are areas of publishing, such as science and specialist books, where AI is more readily accepted.
The publishing industry is grappling with concerns about the impact of AI on copyright, as well as the quality and ownership of AI-generated content, although some authors and industry players believe that AI writing still has a long way to go before it can fully replace human authors.
The Israel-Hamas conflict is being exacerbated by artificial intelligence (AI), which is generating a flood of misinformation and propaganda on social media, making it difficult for users to discern what is real and what is fake. AI-generated images and videos are being used to spread agitative propaganda, deceive the public, and target specific groups. The rise of unregulated AI tools is an "experiment on ourselves," according to experts, and there is a lack of effective tools to quickly identify and combat AI-generated content. Social media platforms are struggling to keep up with the problem, leading to the widespread dissemination of false information.