Microsoft Taps OpenAI to Build Security Copilot AI, Works to Refine Responses Before Public Release
- Microsoft built Security Copilot, one of its most important AI products, by tapping into OpenAI's GPT-4 model and cherry-picking good examples from its outputs.
- Early on, Microsoft pitched the capabilities to government customers, even though the AI exhibited issues such as hallucinations.
- To improve the product, Microsoft incorporated its own security data to help ground the AI's responses.
- Security Copilot is designed as a closed-loop system that learns from user feedback over time to provide more useful answers.
- Microsoft continues to adjust and improve Security Copilot's responses based on user interactions as it prepares to make the product generally available.