### Summary
A federal judge in the US ruled that an AI-generated artwork is not eligible for copyright protection since it lacks human authorship.
### Facts
- The judge agreed with the US Copyright Office's rejection of a computer scientist's attempt to copyright an artwork generated by an AI model.
- The judge stated that copyright protection requires human authorship and that works created without human involvement have consistently been denied copyright protection.
- The ruling raises questions about how much human input is needed for works created with generative AI to qualify for copyright protection, and about the originality of artwork produced by systems trained on copyrighted pieces.
- The US Copyright Office has issued guidance on AI-generated images created from text prompts, stating that they are generally not eligible for copyright protection.
- The agency has granted limited copyright protection to a graphic novel with AI-generated elements.
- The computer scientist plans to appeal the ruling.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
The use of copyrighted material to train generative AI tools is leading to a clash between content creators and AI companies, with lawsuits being filed over alleged copyright infringement and disputed claims of fair use. The outcome of these legal battles could have significant implications for innovation and society as a whole.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
The US Copyright Office has initiated a public comment period to explore the intersection of AI technology and copyright laws, including issues related to copyrighted materials used to train AI models, copyright protection for AI-generated content, liability for infringement, and the impact of AI mimicking human voices or styles. Comments can be submitted until November 15.
UK publishers have called on the prime minister to protect authors' intellectual property rights in relation to artificial intelligence systems, while OpenAI argues that authors suing it for using their work to train AI systems have misconceived the scope of US copyright law.
Artists Kelly McKernan, Karla Ortiz, and Sarah Andersen are suing makers of AI tools that generate new imagery on command, claiming that their copyrights are being violated and their livelihoods threatened by the use of their work without consent. The lawsuit may set a precedent for how difficult it will be for creators to stop AI developers from profiting off their work, as the technology advances.
Microsoft has announced its Copilot Copyright Commitment, assuring customers that they can use the output generated by its AI-powered Copilots without worrying about copyright claims, and the company will assume responsibility for any potential legal risks involved.
Microsoft will pay legal damages on behalf of customers using its artificial intelligence products if they are sued for copyright infringement for the output generated by such systems, as long as customers use the built-in "guardrails and content filters" to reduce the likelihood of generating infringing content.
Adobe has joined other companies in committing to safe AI development and has proposed a federal anti-impersonation law that would allow creators to seek damages from individuals using AI to impersonate them or their style for commercial purposes, which would make the impersonator, not the tool's vendor, the target of legal action.
Microsoft President Brad Smith supports the idea of federal licenses and a new regulatory agency for powerful AI platforms, in contrast to tech giants like Google, who prefer voluntary guidelines with limited penalties.
Microsoft will assume responsibility for potential legal risks arising from copyright infringement claims related to the use of its AI products and will provide indemnification coverage to customers.
Microsoft inadvertently exposed 38TB of personal data, including sensitive information, while uploading training data for AI models, raising concerns about the need for stronger security measures as AI usage becomes more widespread.
Microsoft is introducing Microsoft Copilot, an AI-powered companion that will provide assistance and improve productivity across Windows 11, Microsoft 365, Bing, and Edge, with capabilities such as natural language interactions, personalized search, and AI-powered shopping experiences. Copilot will roll out as part of the Windows 11 update on September 26 and will be available in various Microsoft products. Additionally, Microsoft is unveiling new Surface devices and announcing the general availability of Microsoft 365 Copilot and Microsoft 365 Chat for enterprise customers on November 1, 2023.
Microsoft announced that it will unify Copilot into a single AI assistant across all of its products, aiming to transform the relationship between technology and users in a new era of personal computing.
The Authors Guild, representing prominent fiction authors, has filed a lawsuit against OpenAI, alleging copyright infringement and the unauthorized use of their works to train AI models like ChatGPT, which generates summaries and analyses of their novels, interfering with their economic prospects. This case could determine the legality of using copyrighted material to train AI systems.
As AI technology progresses, creators are concerned about the potential misuse and exploitation of their work, leading to a loss of trust and a polluted digital public space filled with untrustworthy content.
Microsoft's recent updates focused on AI-driven features like Copilot and Bing Chat, but while these advancements are impressive, concerns over privacy outweigh the benefits.
Big tech firms, including Google and Microsoft, are racing to acquire content and data for training AI models, according to Microsoft CEO Satya Nadella, who highlighted this competition in testimony at an antitrust trial against Google. Microsoft has committed to assuming copyright liability for users of its AI-powered Copilot, addressing concerns about the use of copyrighted materials in training AI models.
Companies using generative AI technologies are taking different approaches to the intellectual property risks of copyright infringement: some vendors pledge to cover customers' legal fees and damages, while others shield themselves and leave customers responsible for potential liabilities. Terms of service vary among vendors; some commit to defending customers against copyright lawsuits, while others limit their liability or provide indemnity only under certain conditions.
Bing's Image Creator software has implemented broad and strict trust-and-safety rules, but it applies them more aggressively than users expect, potentially limiting creative expression and raising concerns about AI's impact in high-stakes contexts such as medicine and hiring.