Microsoft Unveils New 'Prompt Shields' to Protect AI Chatbots From Attacks
• Microsoft is introducing "Prompt Shields" to protect AI chatbots on Azure from direct and indirect attacks
• Direct attacks try to bypass chatbot rules by using keywords like "ignore previous instructions"
• Indirect attacks rely on external data like emails to exploit chatbots
• Prompt Shields will integrate with Azure's content filters to try to eliminate threats
• Microsoft is also using "spotlighting" techniques to help models distinguish risky prompts