Microsoft Unveils New Azure AI Safety Tools to Build Responsible Generative AI
• Announcing new Azure AI tools to help build secure, trustworthy generative AI apps Prompt Shields, Groundedness detection, Safety system messages, Safety evaluations, Risk monitoring
• Prompt Shields detect direct "jailbreak" and indirect prompt attacks before they impact models
• Groundedness detection identifies "hallucinations" - false information in model outputs
• Safety evaluations assess app vulnerability to attacks and risk of generating harmful content
• Risk monitoring provides insights into filter-blocked inputs/outputs to inform safety improvements