Microsoft unveils PyRIT framework to automate security 'red teaming' of AI systems.
-
Microsoft is releasing an open automation framework, PyRIT, to help security professionals red team generative AI systems.
-
Red teaming generative AI is more complex than traditional software due to the need to assess both security and responsible AI risks simultaneously.
-
PyRIT automates repetitive red teaming tasks and identifies risky areas for further manual review.
-
PyRIT has customizable components for targeting systems, encoding risks to test, scoring outputs, attacking, and tracking interactions.
-
Microsoft aims for PyRIT to help raise the security standards of generative AI across the industry.