Anthropic Sets Policy Restricting Political Use of Claude Chatbot
• Anthropic announced a policy prohibiting the use of its Claude AI chatbot to impersonate political candidates or to run targeted political campaigns. Violations will draw warnings and, ultimately, service suspension.
• The policy aims to prevent AI from being used to mass-generate false or misleading political information. Other companies, including Meta and OpenAI, have imposed similar restrictions.
• Anthropic's acceptable use policy bars using its AI for political campaigning, lobbying, and voter suppression tactics, and the company red-teams its systems to catch misuse.
• For US users, Anthropic will redirect voting-information requests to TurboVote rather than serve potentially inaccurate AI-generated responses.
• The policy aligns with broader tech industry efforts to address challenges of AI in politics, like the FCC's ban on deepfake AI voices in robocalls.