Microsoft Expands Security Copilot with AI Agents
Mon | Mar 24, 2025 | 4:21 PM PDT

Microsoft announced a major expansion of its Security Copilot platform today, introducing a suite of AI agents designed to automate common security operations tasks and reduce the burden on cybersecurity professionals. The update also includes new protections for AI workloads across multi-cloud environments and tools to manage the risks of "shadow AI."

"Today, we're unveiling new capabilities in Microsoft Security Copilot, including new AI agents that help security teams reduce the mean time to resolve incidents by automating key workflows," Microsoft said in its official announcement. "These agents are designed to act on behalf of security professionals, handling routine tasks like triage and investigation, freeing up experts to focus on more strategic work."

While the announcement reflects growing momentum around AI-driven security operations, cybersecurity professionals remain cautious. Many see promise but also critical pitfalls if these agents are deployed without proper controls.

AI agents are assistants, not autonomous defenders

Kris Bondi, CEO and Co-Founder of Mimoto, said she believes AI agents can significantly help with the overwhelming volume of alerts most teams face—but only under the right conditions.

"While AI agents aren't able to detect a threat, they should be able to help in responding to what has been found," Bondi said. "It's important that these AI agents are monitored and have the ability to rollback any tasks executed… and to allow a human to be inserted into a process, if needed."

In her view, AI should act as a force multiplier—not a replacement—for human decision-making. The goal is to reduce the load, not remove humans from the loop.
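Those conditions map to a familiar engineering pattern: every automated step records an undo action, and anything above a risk threshold pauses for human approval. Below is a minimal sketch of that pattern in Python; the agent, actions, and threshold are hypothetical and not drawn from Security Copilot itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    """One reversible step taken by a security agent."""
    description: str
    execute: Callable[[], None]
    rollback: Callable[[], None]  # every action must be undoable
    risk: float                   # 0.0 (benign) .. 1.0 (destructive)

class SupervisedAgent:
    """Runs actions, pausing for a human above a risk threshold."""
    def __init__(self, approval_threshold: float = 0.5):
        self.approval_threshold = approval_threshold
        self.history: list[AgentAction] = []

    def run(self, action: AgentAction) -> bool:
        if action.risk >= self.approval_threshold:
            # Insert a human into the loop before anything risky executes.
            answer = input(f"Approve '{action.description}'? [y/N] ")
            if answer.strip().lower() != "y":
                return False
        action.execute()
        self.history.append(action)  # keep history so steps can be undone
        return True

    def rollback_all(self) -> None:
        """Undo executed actions in reverse order."""
        while self.history:
            self.history.pop().rollback()
```

The point of the design is that rollback is not an afterthought: an action that cannot describe its own undo step never enters the queue.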

False positives could make the problem worse

Some security professionals are concerned that AI agents, while designed to streamline operations, may actually introduce more work in the form of false positives and unreliable automation.

"Instead of reducing effort, they could actually increase workload through false positives and the need for human oversight," said J Stephen Kowski, Field CTO at SlashNext Email Security+.

Kowski acknowledged that the 24/7 monitoring capabilities of Microsoft Security Copilot sound promising in theory, but cautioned that in practice the baseline models miss threats and require extensive human validation.

"Results from baseline models haven't been overwhelming… even high-tier solutions miss significant numbers of threats," he added.

AI agents are identities—and they need governance

One of the most pressing concerns from industry leaders is that AI agents often operate as non-human identities (NHIs)—with broad system access but minimal oversight.

"The real risk isn't AI itself, but the fact that organizations don't manage these non-human identities with the same security controls as human users," said Guy Feinberg, Growth Product Manager at Oasis Security.

Feinberg warns that AI agents are vulnerable to the same manipulation tactics used against people—like social engineering and prompt injection—and should therefore be subject to least privilege access, activity monitoring, and identity governance policies.

"You can't stop attackers from manipulating AI, just like you can't stop them from phishing employees," Feinberg said. "The solution is better governance and security for all identities—human and non-human alike."

Automation without guardrails can amplify risk

As AI accelerates how quickly systems can act, it also raises the stakes when something goes wrong.

"An AI agent with broad privileges can do major damage if an attacker manages to hijack it," said Akhil Mittal, Senior Manager at Black Duck. "They save time and reduce manual errors, but that speed also raises the stakes."

Mittal highlights a growing concern in DevSecOps circles: AI-driven automation can accelerate the spread of malicious code or misconfigurations across environments in seconds. Without clear controls—like secret rotation, access limits, and detailed audit logs—organizations risk turning helpful automation into high-speed compromise vectors.
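One way to picture those controls is to wrap every automated step in an allowlist, a rate limit, and an append-only audit log, so that a hijacked agent is both constrained and visible. The following sketch is illustrative only; the allowlist, limits, and log format are assumptions rather than any real pipeline's configuration.

```python
import functools
import json
import time

AUDIT_LOG = "agent_audit.jsonl"   # append-only trail of agent activity
ALLOWED_TARGETS = {"staging"}     # access limit: the agent cannot touch prod
MAX_ACTIONS_PER_MINUTE = 5        # rate limit blunts a runaway or hijacked agent

_recent: list[float] = []

def guarded(fn):
    """Wrap an automation step with an allowlist, rate limit, and audit log."""
    @functools.wraps(fn)
    def wrapper(target: str, *args, **kwargs):
        now = time.time()
        _recent[:] = [t for t in _recent if now - t < 60]
        if target not in ALLOWED_TARGETS:
            raise PermissionError(f"target {target!r} is not allowlisted")
        if len(_recent) >= MAX_ACTIONS_PER_MINUTE:
            raise RuntimeError("rate limit hit; possible runaway agent")
        _recent.append(now)
        result = fn(target, *args, **kwargs)
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps({"action": fn.__name__,
                                "target": target, "ts": now}) + "\n")
        return result
    return wrapper

@guarded
def apply_config(target: str, change: str) -> None:
    print(f"applying {change!r} to {target}")

apply_config("staging", "rotate-api-key")  # allowed, rate-checked, and audited
```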

Security teams are interested but not yet sold

Despite Microsoft's integration of Security Copilot into its ecosystem, many security teams are still evaluating whether it makes sense for their operations.

"Adoption has been slower than expected due to lingering questions about data handling, required services, and licensing costs," said Kowski. "Most organizations continue to rely on specialized tools that provide clear value and proven protection."

The platform's natural language interface and deep Microsoft 365 / Defender integration are appealing. But until the AI models prove reliable in detecting and responding to real-world threats—especially sophisticated phishing and social engineering attacks—many organizations remain hesitant to fully embrace Copilot as a core security tool.

The bottom line: AI is here, but oversight is non-negotiable

Microsoft's expansion of Security Copilot with AI agents is a significant step in the evolution of autonomous security operations. With automation now playing a central role in modern defense strategies, the move reflects how AI is becoming essential—not optional—in dealing with growing volumes of threats.

Still, cybersecurity professionals agree: the deployment of AI agents must come with strict guardrails, thoughtful integration, and continuous human oversight.

As Feinberg put it: "Treat AI agents like human users. Assign them only the permissions they need and continuously monitor their activity."

Because in the end, AI will be manipulated. What matters is how well you prepare for it.

Follow SecureWorld News for more stories related to cybersecurity.
