The main topic of the article is Kickstarter's struggle to formulate a policy regarding the use of generative AI on its platform. The key points are:
1. Generative AI tools used on Kickstarter have been trained on publicly available content without giving credit or compensation to the original creators.
2. Kickstarter is requiring projects using AI tools to disclose relevant details about how the AI content will be used and which parts are original.
3. New projects involving the development of AI tech must detail the sources of training data and implement safeguards for content creators.
4. Kickstarter's new policy will go into effect on August 29 and will be enforced through a new set of questions during project submissions.
5. Projects that do not properly disclose their use of AI may be suspended.
6. Kickstarter has been considering changes in policy around generative AI since December and has faced challenges in moderating AI works.
The Alliance of Motion Picture and Television Producers has proposed guidelines for the use of artificial intelligence (AI) and data transparency in the entertainment industry. The proposal states that AI-created material cannot be considered literary material or protected intellectual property, and that credit, rights, and compensation for AI-generated scripts go to the original human writer or the writer who reworks them.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
Google will require political advertisements that use artificial intelligence to disclose the use of AI-generated content, in order to prevent misleading and predatory campaign ads.
Indie Game Studios' imprint Stronghold Games has sparked controversy by using generative AI in the production of expansions for their popular board game Terraforming Mars, which has already raised over $1.3 million on Kickstarter. The use of AI in the game's development has raised concerns about copyright infringement and artist compensation, but the company sees the technology as a cost-saving and time-saving tool that can revolutionize the industry.
The FryxGames CEO defends the use of AI in the Terraforming Mars project but announces that the company's next board game will not use AI, acknowledging the ethical and copyright concerns surrounding AI-generated artwork.
Getty Images is reaffirming its stance against AI-generated content by banning submissions created with Adobe's Firefly-powered generative AI tools, a move that contrasts with competitor Shutterstock's allowance of AI-generated content.
Amazon has introduced new guidelines requiring publishers to disclose the use of AI in content submitted to its Kindle Direct Publishing platform, in an effort to curb unauthorized AI-generated books and copyright infringement. Publishers must now inform Amazon about AI-generated content, but AI-assisted content does not need to be disclosed. Separately, high-profile authors have joined a class-action lawsuit against OpenAI, the maker of ChatGPT, for alleged copyright violations.
As AI technology progresses, creators are concerned about the potential misuse and exploitation of their work, which they fear will erode trust and pollute the digital public space with unreliable content.
Google is introducing a new policy to defend users of its generative AI systems on Google Cloud and Workspace platforms against intellectual property violation claims, covering both the use of copyrighted works for training AI and the output generated by the systems.
Artificial intelligence (AI) is increasingly being used to create fake audio and video content for political ads, raising concerns about the potential for misinformation and manipulation in elections. While some states have enacted laws against deepfake content, federal regulations are limited, and there are debates about the balance between regulation and free speech rights. Experts advise viewers to be skeptical of AI-generated content and look for inconsistencies in audio and visual cues to identify fakes. Larger ad firms are generally cautious about engaging in such practices, but anonymous individuals can easily create and disseminate deceptive content.
Free and cheap AI tools are enabling the creation of fake AI celebrities and content, leading to an increase in fraud and false endorsements, making it important for consumers to be cautious and vigilant when evaluating products and services.