Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.
According to Microsoft, the tools are designed to plug directly into existing AI development workflows without complex configuration, letting developers focus on building applications while the safety checks run in the background and monitor for risks as they emerge.
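To illustrate the kind of low-configuration integration Microsoft describes, here is a minimal Python sketch of a wrapper that screens a user prompt before it reaches a model and the model's reply before it reaches the user. The endpoint URL, the request and response fields, and the `call_model` callback are hypothetical placeholders for illustration, not Azure's actual API.

```python
import os
import requests

# Hypothetical safety-service endpoint; Azure's real API paths and payloads differ.
SAFETY_ENDPOINT = os.environ.get("SAFETY_ENDPOINT", "https://example.invalid/safety/screen")
SAFETY_KEY = os.environ.get("SAFETY_KEY", "")


def is_allowed(text: str, role: str) -> bool:
    """Ask the (hypothetical) safety service whether a prompt or reply should pass."""
    resp = requests.post(
        SAFETY_ENDPOINT,
        headers={"Authorization": f"Bearer {SAFETY_KEY}"},
        json={"text": text, "role": role},  # role is "prompt" or "response"
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("allowed", False)


def safe_chat(prompt: str, call_model) -> str:
    """Wrap any model call with pre- and post-screening, no extra configuration."""
    if not is_allowed(prompt, role="prompt"):
        return "Request blocked by safety policy."
    reply = call_model(prompt)  # call_model can be any hosted model's client
    if not is_allowed(reply, role="response"):
        return "Response withheld by safety policy."
    return reply
```

The point of the wrapper pattern is that the safety checks sit outside the model client itself, which is how a "background" service can screen traffic without developers reworking their application code.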
One of the key features is hallucination detection, which targets a common failure mode of applications built on large language models: responses that read as plausible but aren't backed by the underlying source material. By flagging those answers, the system reduces the risk that misleading information reaches end users.
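To make the idea concrete, the sketch below checks a model's answer against the source documents it was supposed to draw from and flags it when the answer is not supported. The `check_groundedness` helper, its endpoint, and the response fields are assumptions for illustration, not the actual Azure groundedness-detection API.

```python
import requests

# Hypothetical groundedness endpoint; the real Azure API shape is not reproduced here.
GROUNDEDNESS_ENDPOINT = "https://example.invalid/safety/groundedness"


def check_groundedness(answer: str, sources: list[str], api_key: str) -> bool:
    """Return True if the answer appears supported by the supplied source texts."""
    resp = requests.post(
        GROUNDEDNESS_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"answer": answer, "sources": sources},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume the service returns a JSON body like {"grounded": bool}.
    return resp.json().get("grounded", False)


sources = ["The store is open Monday through Friday, 9am to 5pm."]
answer = "The store is open seven days a week."  # plausible but unsupported
if not check_groundedness(answer, sources, api_key="YOUR_KEY"):
    print("Flagged: answer is not supported by the source material.")
```

The example mirrors the "plausible yet unsupported" case Microsoft describes: the answer is fluent and reasonable-sounding, but nothing in the provided sources backs it up, so it gets flagged rather than passed to the user.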
Microsoft also emphasized that the features are model-agnostic: Azure AI customers can apply them to any model hosted on the platform. That reach is part of the company's broader push for safer, more responsible AI adoption, particularly among organizations deploying AI at scale.