Microsoft Offers to Cover Legal Costs for AI Copyright Issues If Customers Use Safeguards

  • Microsoft will pay legal damages for customers sued over AI copyright issues if they use Microsoft's guardrails
  • The offer is meant to cover potential risks from third-party copyright claims
  • Microsoft's products include functionality designed to reduce the likelihood of returning infringing content
  • Generative AI can create content without crediting original authors, raising infringement concerns
  • Microsoft is building its growth strategy around generative AI, such as ChatGPT, integrated across its products
  • Microsoft's copyright commitment extends existing IP coverage to Copilot and Bing Chat Enterprise
reuters.com
Relevant topic timeline:
Copyright concerns and potential lawsuits are mounting around generative AI tools. The New York Times may sue OpenAI for allegedly using its copyrighted content without permission or compensation, and Getty Images previously sued Stability AI for using its photos without a license to train its AI system. OpenAI has begun acknowledging copyright issues and signed an agreement with the Associated Press to license its news archive.
A federal judge in the US ruled that an AI-generated artwork is not eligible for copyright protection because it lacks human authorship, agreeing with the US Copyright Office's rejection of a computer scientist's attempt to copyright an artwork generated by an AI model. The judge stated that copyright protection requires human authorship and that works absent of human involvement have consistently been denied protection. The ruling raises questions about how much human input is needed for copyright protection of generative AI output and about the originality of artwork created by systems trained on copyrighted pieces. The US Copyright Office has issued guidance stating that AI-generated images produced from text prompts are generally not eligible for protection, though it has granted limited copyright protection to a graphic novel with AI-generated elements. The computer scientist plans to appeal the ruling.
Three artists, including concept artist Karla Ortiz, are suing AI art generators Stability AI, Midjourney, and DeviantArt for using their work to train generative AI systems without their consent, in a case that could test the boundaries of copyright law and impact the way AI systems are built. The artists argue that feeding copyrighted works into AI systems constitutes intellectual property theft, while AI companies claim fair use protection. The outcome could determine the legality of training large language models on copyrighted material.
A federal judge has ruled that works created by artificial intelligence (AI) are not covered by copyright, stating that copyright law is designed to incentivize human creativity, not non-human actors. This ruling has implications for the future role of AI in the music industry and the monetization of works created by AI tools.
Authors such as Zadie Smith, Stephen King, Rachel Cusk, and Elena Ferrante have discovered that their pirated works were used to train artificial intelligence tools by companies including Meta and Bloomberg, leading to concerns about copyright infringement and control of the technology.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
Artificial intelligence (AI) poses risks in the legal industry, including ethical dilemmas, reputational damage, and discrimination, according to legal technology experts. Instances of AI-generated content without proper human oversight could compromise the quality of legal representation and raise concerns about professional responsibility. Additionally, the Equal Employment Opportunity Commission (EEOC) recently settled a lawsuit involving discriminatory use of AI in the workplace, highlighting the potential for AI to discriminate. Maintaining trust and credibility is crucial in the reputation-reliant field of law, and disseminating AI-generated content without scrutiny may lead to reputational damage and legal consequences for lawyers or law firms. Other legal cases involving AI include allegations of copyright infringement.
Microsoft is unbundling its chat and video app Teams from its Office product and making it easier for competing products to work with its software in an attempt to address European Commission concerns, but rivals argue that it may not be enough to avoid a potential EU antitrust fine.
UK publishers have called on the prime minister to protect authors' intellectual property rights in relation to artificial intelligence systems, as OpenAI argues that authors suing them for using their work to train AI systems have misconceived the scope of US copyright law.
Microsoft has announced its Copilot Copyright Commitment, assuring customers that they can use the output generated by its AI-powered Copilots without worrying about copyright claims, and the company will assume responsibility for any potential legal risks involved.
Adobe has joined other companies in committing to safe AI development and has proposed a federal anti-impersonation law that would allow creators to seek damages from individuals using AI to impersonate them or their style for commercial purposes, which would make the impersonator, not the tool's vendor, the target of legal action.
Authors, including Michael Chabon, are filing class action lawsuits against Meta and OpenAI, alleging copyright infringement for using their books to train artificial intelligence systems without permission, seeking the destruction of AI systems trained on their works.
Microsoft will assume responsibility for potential legal risks arising from copyright infringement claims related to the use of its AI products and will provide indemnification coverage to customers.
The Authors Guild, representing prominent fiction authors, has filed a lawsuit against OpenAI, alleging copyright infringement and the unauthorized use of their works to train AI models like ChatGPT, which generates summaries and analyses of their novels, interfering with their economic prospects. This case could determine the legality of using copyrighted material to train AI systems.
As AI technology progresses, creators are concerned about the potential misuse and exploitation of their work, leading to a loss of trust and a polluted digital public space filled with untrustworthy content.
Information services company Thomson Reuters is suing Ross Intelligence for unlawfully copying content from its legal-research platform to train a competing AI-based platform, setting the stage for one of the first trials related to unauthorized data use for AI training.
Authors are having their books pirated and used by artificial intelligence systems without their consent, and lawsuits have been filed against companies such as Meta for feeding a massive book database into their AI systems without permission, threatening authors' livelihoods while the AI companies profit.
Microsoft CEO Satya Nadella testified against Google in an antitrust case, expressing concerns about Google's dominance in the search space and its potential to become even more pervasive with the integration of artificial intelligence. Meanwhile, the Department of Justice has filed a civil antitrust lawsuit against Google for monopolizing digital advertising technologies and breaching the Sherman Act, with allegations of subverting competition and protecting its monopoly through exclusive deals. These developments echo the Microsoft case from 25 years ago and raise questions about meaningful change in web search and AI-powered features for internet users.
Big tech firms, including Google and Microsoft, are engaged in a competition to acquire content and data for training AI models, according to Microsoft CEO Satya Nadella, who testified in an antitrust trial against Google and highlighted the race for content among tech firms. Microsoft has committed to assuming copyright liability for users of its AI-powered Copilot, addressing concerns about the use of copyrighted materials in training AI models.