Main topic: Copyright concerns and potential lawsuits surrounding generative AI tools.
Key points:
1. The New York Times may sue OpenAI for allegedly using its copyrighted content without permission or compensation.
2. Getty Images previously sued Stability AI for using its photos without a license to train its AI system.
3. OpenAI has begun acknowledging copyright issues and signed an agreement with the Associated Press to license its news archive.
### Summary
A federal judge in the US ruled that an AI-generated artwork is not eligible for copyright protection since it lacks human authorship.
### Facts
- The judge agreed with the US Copyright Office's rejection of a computer scientist's attempt to copyright an artwork generated by an AI model.
- The judge stated that copyright protection requires human authorship and that works created without human involvement have consistently been denied copyright protection.
- The ruling raises questions about how much human input is needed for generative AI works to qualify for copyright protection, and about the originality of artwork created by systems trained on copyrighted works.
- The US Copyright Office has issued guidance on AI-generated images created from text prompts, generally finding that they are not eligible for copyright protection.
- The agency has granted limited copyright protection to a graphic novel with AI-generated elements.
- The computer scientist plans to appeal the ruling.
Generative AI is starting to affect the animation and visual effects industry, with companies like Base Media exploring its potential, but concerns about job security and copyright infringement remain.
The Alliance of Motion Picture and Television Producers has proposed guidelines for the use of artificial intelligence (AI) and data transparency in the entertainment industry, stating that AI-generated material cannot be considered literary material or receive intellectual-property protection, and that credit, rights, and compensation for AI-generated scripts must go to the original human writer or reworker.
The use of copyrighted material to train generative AI tools is leading to a clash between content creators and AI companies, with lawsuits alleging copyright infringement and contesting claims of fair use. The outcome of these legal battles could have significant implications for innovation and society as a whole.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
Amazon's promotional art for the upcoming Fallout series appears to be AI-generated, sparking controversy and backlash from artists.
A federal judge in the US rejected an attempt to copyright an artwork created by an AI, ruling that copyright law only protects works of human creation. However, the judge also acknowledged that as AI becomes more involved in the creation process, challenging questions about human input and authorship will arise.
Dezeen, an online architecture and design resource, has outlined its policy on the use of artificial intelligence (AI) in text and image generation: while it embraces new technology, it does not publish stories that use AI-generated text unless the story is focused on AI and the text is clearly labeled as such, and it favors human-authored illustrations over AI-generated images.
Artists Kelly McKernan, Karla Ortiz, and Sarah Andersen are suing makers of AI tools that generate new imagery on command, claiming that their copyrights are being violated and their livelihoods threatened by the use of their work without consent. The lawsuit may set a precedent for how difficult it will be for creators to stop AI developers from profiting off their work, as the technology advances.
Artificial intelligence (AI) image generation tools, such as Midjourney and DALL·E 2, have gained popularity for their ability to create photorealistic images, artwork, and sketches with just a few text prompts. Other image generators like DreamStudio, Dream by WOMBO, and Canva offer unique features and styles for generating a wide range of images. However, copyright issues surrounding AI-generated images have led to ongoing lawsuits.
AI-generated images in Copy Magazine reveal the uncanny perfection of fashion photography and serve as a warning to break free from repeating past styles, prompting questions about ethics and copyright in AI image generation.
Adobe has released its Firefly AI tools, including AI art generators and color correction, for all subscribers to its Creative Cloud apps, allowing users to create deepfakes and modify images with text prompts.
The generative AI boom has led to a "shadow war for data," as AI companies scrape information from the internet without permission, sparking a backlash among content creators and raising concerns about copyright and licensing in the AI world.
The Royal Photographic Society conducted a survey among its members, revealing that 95% believe traditional photography is still necessary despite the advancement of AI-generated images, and 81% do not consider images created by AI to be "real photography," with respondents expressing concerns about stolen content and a potential increase in fake news.
Google's search engine is failing to block fake, AI-generated imagery from its top search results, raising concerns about misinformation and the search giant's ability to handle phony AI material.
The US Copyright Office has ruled for the third time that AI-generated art cannot be copyrighted, raising questions about whether AI-generated art is categorically excluded from copyright protection or whether human creators should be listed as the image's author. The office's position, which is based on existing copyright doctrine, has been criticized as unscalable and a potential quagmire, since it fails to consider that the creative choices made when using AI systems resemble those made by human photographers.
Getty Images has partnered with Nvidia to launch Generative AI by Getty Images, a tool that allows users to create images using Getty's library of licensed photos, offering full copyright indemnification for commercial use and providing realistic-looking human figures.
AI-generated images have the potential to create alternative history and misinformation, raising concerns about their impact on elections and people's ability to discern truth from manipulated visuals.
Microsoft is introducing a new AI-powered image generation tool called Paint Cocreator, which allows users to create digital images by describing them with text prompts. The tool generates three variations of artwork for users to choose from and includes content filtering to block inappropriate images.
Crowdfunding site BackerKit has announced a new policy that prohibits the use of solely AI-generated content on its platform, in response to concerns about content ownership and ethical data sourcing, following criticism of Terraforming Mars’ Kickstarter campaign that raised over $2 million using AI art.
AI-altered images of celebrities are being used to promote products without their consent, raising concerns about the misuse of artificial intelligence and the need for regulations to protect individuals from unauthorized AI-generated content.
Microsoft's Bing Image Creator, an AI-based tool, is being used by users to generate images of popular characters like Kirby flying planes into skyscrapers, raising concerns about the limitations of AI moderation.
Getty Images has developed an AI tool trained exclusively on licensed data, respecting artists' copyrights and ensuring creators are rewarded as the tool grows in popularity over time.
Microsoft Bing AI's new image-generating feature, powered by OpenAI's DALL-E 3, has allowed users to create images of beloved characters such as Disney's Mickey Mouse perpetrating the 9/11 terror attacks, raising concerns about copyright infringement and the ethics of AI-generated content.
AI-generated stickers are causing controversy as users create obscene and offensive images; Microsoft Bing's image-generation feature has produced pictures of celebrities and video game characters committing the 9/11 attacks; a person has been injured by a Cruise robotaxi; and a new report details the weaponization of AI by autocratic governments. Separately, artists are increasingly concerned about surviving in a market where AI replaces them, and an interview highlights how AI is aiding government censorship and fueling disinformation campaigns.
Generative AI tools, including Facebook's AI sticker generator, are being used to create controversial and inappropriate content, such as violent or risqué scenes involving politicians and fictional characters, raising concerns about the misuse of such technology.
Bing's Image Creator has implemented broad and strict trust-and-safety rules, but it applies them more aggressively than expected, potentially limiting creative expression and raising concerns about AI's impact on high-stakes contexts such as medicine and hiring.
The latest upgrade to Microsoft's Bing Image Creator, incorporating OpenAI's DALL-E 3, has sparked concerns about the potential misuse and implications of AI-generated images, including their use in political ads and nonconsensual imagery and their effect on creative industries.
Adobe has announced updates to its generative AI image creation service, Firefly, including a new model that is better at rendering humans and larger in size, as well as introducing new controls and features for users to enhance their workflows.
Adobe has announced updates to its AI image synthesis features, including the launch of Firefly 2, Firefly Design Model, and Firefly Vector Model, with improved image quality and new capabilities, such as text-to-vector image generation.
Adobe has announced upgrades to its Firefly family of generative AI tools, including improvements to image generation in Photoshop, the introduction of generative AI to Adobe Illustrator designs, and the addition of text prompt abilities to Adobe Express layouts. The new AI model offers better image quality and detail through training on more images, and users can steer generation with photography parameters.
The U.S. Space Force has temporarily banned the use of web-based generative AI due to security concerns, suspending the creation of text, images, and other media using government data until new guidelines are released, according to an internal memo.
Google is introducing a new policy to defend users of its generative AI systems on Google Cloud and Workspace platforms against intellectual property violation claims, covering both the use of copyrighted works for training AI and the output generated by the systems.
The rise of AI image generation tools has sparked debate within the creative community, with some artists embracing their use for inspiration and idea generation, while others question the potential oversimplification of art through technology. Many artists see AI as a powerful tool to enhance their creative process, but also acknowledge the need for a strong artistic voice and concept. However, legal issues surrounding ownership and copyright of AI-generated artwork still remain unresolved.
Microsoft, Adobe, and other major companies are pledging to add metadata to AI-generated images to indicate their machine-made nature using a special symbol, in an effort to combat misinformation and provide transparency about the origins of the images.
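To make the labeling idea concrete, here is a minimal, hypothetical sketch of embedding a provenance flag as a PNG text chunk with Pillow. It only illustrates the general concept of attaching machine-readable provenance metadata to an image; it is not the C2PA Content Credentials format these companies are actually adopting, and the `ai_generated` and `generator` keys are assumptions made for the example.

```python
# Illustrative sketch only: Pillow PNG text chunks, not the C2PA/Content Credentials
# standard referenced above. Keys "ai_generated" and "generator" are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Pretend this image came from a generative model.
img = Image.new("RGB", (256, 256), "white")

# Attach a simple provenance label as PNG text chunks.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical tool name
img.save("labeled.png", pnginfo=meta)

# A downstream consumer can read the label back.
with Image.open("labeled.png") as im:
    print(im.info.get("ai_generated"))  # -> "true"
```

Real provenance schemes such as C2PA go further than a plain text chunk, cryptographically signing the metadata so the label cannot simply be stripped or forged without detection.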
Google is adding a new feature to its search engine that allows users to generate images using text prompts, similar to Microsoft's Bing, but with strict content filtering to prevent misuse and offensive content.
Adobe is expanding its AI-powered Firefly tool across its Creative Cloud suite, causing concerns among creative professionals about the future role of designers and artists, as well as potential cannibalization of Adobe's consumer base. Adobe, however, believes AI will benefit creatives of all levels across the industry and enable the production of more visual content. Its text-to-media capability allows for faster idea generation and serves as a baseline for final products that still require human skill and expertise. Ultimately, the impact of AI on the design and art industry remains to be seen.