
Research Shows Current AI Watermarking Techniques Are Vulnerable to Manipulation

  • Researchers found current AI watermarking techniques are easy to remove or fake.

  • Watermarks help identify AI-generated content to combat misuse; major tech companies are developing watermarking systems.

  • A University of Maryland team showed watermarks are trivial to evade, and can even be added to non-AI images to create false positives.

  • Other researchers reported similar findings: watermarks could be removed through simulated attacks.

  • Timing is crucial: improved watermarking is needed before the 2024 election, when deepfakes could spread misinformation.
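To see why pixel-level watermarks can be fragile, here is a toy sketch (not any vendor's actual scheme, and far simpler than SynthID or DALL-E 3's watermarking): a naive least-significant-bit watermark hidden in pixel values, erased by a single mild re-quantization of the kind lossy re-encoding or resizing routinely performs. All names and pixel values below are illustrative assumptions.

```python
# Toy illustration only: a naive LSB watermark and a trivial "attack".

def embed(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel value."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n):
    """Read back the first n least significant bits."""
    return [p & 1 for p in pixels[:n]]

def requantize(pixels):
    """A mild attack: snap each value to the nearest multiple of 4,
    roughly what lossy re-encoding or resizing does to pixel data."""
    return [min(255, 4 * round(p / 4)) for p in pixels]

watermark = [1, 0, 1, 1, 0, 1, 0, 0]
image = [120, 37, 200, 14, 90, 255, 63, 8]  # stand-in pixel values

marked = embed(image, watermark)
print(extract(marked, 8) == watermark)              # True: watermark reads back
print(extract(requantize(marked), 8) == watermark)  # False: one resave destroys it
```

Real schemes embed signals redundantly across many pixels or in frequency space to resist such transformations, but the research summarized above suggests that determined attackers can still strip or forge them.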

Source: engadget.com
Relevant topic timeline:
Google has announced a new tool, called SynthID, which embeds a digital "watermark" into AI-generated images, making it harder to spread fake images and disinformation.
US Senator Pete Ricketts is introducing a bill that would require watermarks on AI-generated content in order to provide transparency to consumers and differentiate between real and AI-generated information.
Adobe, IBM, Nvidia, and five other companies have endorsed President Joe Biden's voluntary artificial intelligence commitments, including watermarking AI-generated content, as part of an initiative aimed at preventing the misuse of AI's capabilities for harmful purposes.
Current watermarking methods for AI images are unreliable and easily evaded, according to a study by University of Maryland computer science professor Soheil Feizi and his coauthors.
Microsoft has integrated OpenAI's DALL-E 3 model into its Bing Image Creator and Chat services, adding an invisible watermark to AI-generated images, as experts warn of the risks of generative AI tools being used for disinformation; however, some researchers question the effectiveness of watermarking in combating deepfakes and misinformation.
Google has announced that it will protect users of generative AI systems on its Google Cloud and Workspace platforms from allegations of intellectual property infringement, aligning with other companies such as Microsoft and Adobe.
Google has pledged to protect users of its generative AI products from copyright violations, but it has faced criticism for excluding its Bard search tool from this initiative, raising questions about accountability and the protection of creative rights in the field of AI.