
Microsoft is launching a series of AI agents for its Security Copilot program designed to help professionals more easily protect their organizations against today's security threats. Announced on Monday, the lineup includes six agents built by Microsoft and five from third-party partners. All will be available for preview starting in April.
Integrated with the software giant’s security products, the six Microsoft-created agents aim to help security teams handle high-volume security and IT tasks. Taking their cues from Microsoft’s Zero Trust framework, these agents will also learn from user feedback and adapt to internal workflows.
The six Microsoft agents are described as follows:
- Phishing Triage Agent in Microsoft Defender: This agent prioritizes Microsoft Defender phishing alerts to distinguish real threats from false positives. It offers plain-language explanations for its decisions and improves its detection accuracy based on your feedback.
- Alert Triage Agents in Microsoft Purview: These agents prioritize Microsoft Purview data loss prevention and insider risk alerts, and refine their triage decisions based on your feedback.
- Conditional Access Optimization Agent in Microsoft Entra: This agent looks for new users and apps in Microsoft Entra that aren't covered by existing Conditional Access policies, flags the resulting security gaps, and suggests quick fixes to bring identity and authentication coverage up to date.
- Vulnerability Remediation Agent in Microsoft Intune: This agent for Microsoft Intune prioritizes security vulnerabilities, uncovers app and policy configuration issues, and suggests the right Windows patches to apply.
- Threat Intelligence Briefing Agent in Security Copilot: This agent works with Security Copilot to share relevant and urgent threat intelligence based on your organization’s environment and exposure to specific risks.
Next up are the five third-party agents, all of which will be available in Security Copilot.
- Privacy Breach Response Agent by OneTrust: This agent analyzes data breaches and offers guidelines on how your organization can meet regulatory requirements.
- Network Supervisor Agent by Aviatrix: This agent scans and analyzes security risks related to VPN, gateway, and Site2Cloud connection outages and failures.
- SecOps Tooling Agent by BlueVoyant: This agent looks at your security operations center and controls and provides advice on how to improve them.
- Alert Triage Agent by Tanium: This agent places security alerts within certain contexts to help you decide how to handle each one.
- Task Optimizer Agent by Fletch: This agent prioritizes the most critical security alerts so you can determine how to address each one.
Officially launched about a year ago, Microsoft Security Copilot uses AI to monitor and analyze security threats that could impact your organization. Like many AI tools, the product aims to automate as much of that work as possible, with the primary goal of freeing IT and security staffers from repetitive or time-consuming tasks. But this type of AI can also offer guidance to help staff determine how and where to focus their efforts, allowing them to respond to security threats more quickly and effectively.
Security Copilot is offered on a pay-as-you-go model, allowing organizations to start small and increase their usage as needed. Usage is billed monthly in Security Compute Units (SCUs) at $4 per SCU per hour. Assuming one SCU provisioned around the clock for a full month, Microsoft pegs the monthly cost at around $2,920.
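That $2,920 figure follows directly from the hourly rate. A quick sketch of the arithmetic (the averaged month length of 365/12 days is an assumption used to match Microsoft's estimate, not an official billing formula):

```python
# Back-of-the-envelope estimate of Security Copilot's pay-as-you-go cost,
# based on the rate cited in the article.
SCU_RATE_PER_HOUR = 4.00       # $4 per Security Compute Unit per hour
HOURS_PER_DAY = 24             # one SCU provisioned around the clock
AVG_DAYS_PER_MONTH = 365 / 12  # ~30.4 days (assumed averaging)

monthly_cost = SCU_RATE_PER_HOUR * HOURS_PER_DAY * AVG_DAYS_PER_MONTH
print(f"Estimated monthly cost for one always-on SCU: ${monthly_cost:,.0f}")
# prints "Estimated monthly cost for one always-on SCU: $2,920"
```

Actual bills scale with how many SCUs an organization provisions and for how long, which is the point of the pay-as-you-go model.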
“Today’s security professional has a perpetual onslaught of alerts and issues coming at them, often with limited context,” Kris Bondi, CEO and co-founder of security company Mimoto, told ZDNET. “While AI agents aren’t able to detect a threat, they should be able to help in responding to what has been found. An AI agent can be trained that when presented with specific cues to automatically execute a multi-step response. Removing some percentage of what security professionals must analyze would help what is currently an overwhelming list of tasks.”
However, today’s AI technology is prone to error. A tool like Security Copilot can fail to catch legitimate security threats and trigger false positives. That’s why human intervention is always needed. Plus, this security product remains relatively new, and many organizations are still trying to figure out how to adopt it.
“AI agents promise improved threat response, but results from baseline models haven’t been overwhelming, with many customers reporting that even high-tier solutions miss significant numbers of threats,” J. Stephen Kowski, Field CTO at SlashNext Email Security+, told ZDNET. “Microsoft’s Security Copilot shows promise, but adoption has been slower than expected due to lingering questions about data handling, required services, and licensing costs.”