How context can help cyber defenders win the AI arms race
As AI accelerates the pace of cyberattacks, defenders have one crucial advantage.
Given how quickly the latest exploits can be assembled with generative tools, there really aren't many options left for organisations that want to secure themselves, Mandy Andress, CISO of Elastic, told me when we spoke last month.
To stay ahead, businesses need to embrace AI to tip the scales back in their favour. But how exactly is AI impacting cybersecurity today, what can organisations do about it, and what's the role of humans? Here's what I learned.
Old threats, new speeds
AI agents that magically weave through networks or find exploits in hardened systems are the kind of ideas some might conjure when we talk about AI in cybersecurity. The reality, at least for now, is far more mundane.
The basics of cybersecurity haven't changed much, because threat actors are never short of low-hanging fruit. For organisations, this means taking care of the fundamentals: patching, enforcing least-privilege access, and adopting a zero-trust model, among others.
"Don't get distracted from the fundamentals would be my core piece of advice. The fundamentals that we talk about, whether certainly least privilege from an access perspective, changing, removing all the default configurations, changing default passwords – all these continue to be exploited by threat vectors, because we don't do a good job with the basics," said Andress.
That said, AI has massively sped up the cadence of cyberattacks. Andress noted that a working exploit for a newly disclosed CVE could potentially be assembled in a matter of minutes with an AI tool and unleashed in the wild. Meanwhile, AI agents can autonomously scrape public data about a target, draft highly tailored emails or voice scripts, and even run A/B tests at scale to see what gets the best response.
This means defenders must move much faster: traditional patching cycles, which often take days or even weeks to push updates out, are no longer viable. Instead, businesses need to deploy hardened systems or establish near-continuous patching across the organisation to stay safe.
Move fast, don't break things
With AI working so quickly, are humans, well, too slow? Turns out we're still needed, though expect some changes to how things are done. Within a Security Operations Centre (SOC), rote activities could well be automated, something Andress doesn't consider a bad thing. This could reduce manual, error‑prone work for SOC analysts and lower the traditionally high turnover.
The use of AI will free up humans for more complex investigations, interpretation of alerts, and strategy. Who knows, humans could also end up overseeing AI agents in the future. Much like most of the roles in cybersecurity today didn't exist in the early days of computers, the cybersecurity roles that will emerge in the years to come will probably be hard to imagine right now.
Andress puts it this way: "I don't see agents and AI fully replacing the software threat detection and response teams as we know them today. I do see a lot of what we often define as Level 1 cybersecurity analyst activities being automated."
She cautioned against adopting AI too hastily, however. As organisations incorporate AI agents into their workflows, they will need to assign permissions, open up API access, or hand over secrets so the agents can get things done. That represents a huge security risk, especially as agent use explodes in the enterprise.
AI agents must be managed properly and secured with appropriate guardrails to ensure they do not overstep their intended scope or decision-making boundaries, said Andress.
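The guardrails Andress describes can be as simple as an allowlist of tools per agent role, enforced before any call goes through. Here is a minimal sketch of that idea; the roles, tool names, and class are hypothetical illustrations, not any particular product's API:

```python
# Hypothetical sketch: scoping an AI agent's tool access with a per-role
# allowlist and an audit trail, so the agent cannot overstep its boundaries.

class GuardrailError(Exception):
    pass

class ScopedAgentTools:
    """Wraps tool calls so each agent role can only invoke approved tools."""

    # Illustrative roles and tool names
    ALLOWLIST = {
        "triage-agent": {"read_alerts", "enrich_ip"},       # read-only scope
        "response-agent": {"read_alerts", "isolate_host"},  # may act on hosts
    }

    def __init__(self, role):
        self.role = role
        self.audit_log = []  # every decision is recorded for review

    def call(self, tool, **kwargs):
        allowed = self.ALLOWLIST.get(self.role, set())
        if tool not in allowed:
            self.audit_log.append(("denied", tool))
            raise GuardrailError(f"{self.role} may not call {tool}")
        self.audit_log.append(("allowed", tool))
        return f"{tool} executed"  # placeholder for the real tool invocation

tools = ScopedAgentTools("triage-agent")
tools.call("read_alerts")       # permitted for this role
try:
    tools.call("isolate_host")  # outside scope: raises GuardrailError
except GuardrailError as err:
    print(err)
```

The key design point is that the boundary lives outside the agent: even if the model is tricked into requesting a destructive action, the wrapper denies it and leaves an audit record.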
Why context wins
It's easy to feel the deck is stacked against defenders, especially when even skilled analysts lack full visibility. But Andress believes that will change. Their biggest future asset? Context.
In a future where cybersecurity systems have access to diverse data sources, proper threat modelling becomes possible, rather than analysing one limited aspect of a system at a time. Defenders could then see attacks across the entire organisation, with individual alerts coalesced into distinct risk events presented within a single console.
"What we always lack in our organisations is the full context and visibility of what's happening, and then the ability to analyse and stitch all the pieces together. AI would be able to do that, and do that at machine speed, and give us insights and understanding of what's happening in our environments that we don't have today."
Put simply, organisations with real-time analysis of their environments could react much faster and be far more proactive about the controls and defences they need in place.
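One simple way to picture "coalescing alerts into risk events" is grouping alerts by affected host within a time window. This is an illustrative sketch only, not Elastic's implementation; the alert fields and window size are assumptions:

```python
# Illustrative sketch: merge individual alerts into per-host "risk events"
# by grouping alerts that occur within `window` seconds of each other.

from collections import defaultdict

def coalesce(alerts, window=300):
    """Group alerts by host, then merge temporally adjacent alerts
    (gap <= window seconds) into a single risk event."""
    by_host = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        by_host[alert["host"]].append(alert)

    events = []
    for host, items in by_host.items():
        current = [items[0]]
        for alert in items[1:]:
            if alert["ts"] - current[-1]["ts"] <= window:
                current.append(alert)          # same burst of activity
            else:
                events.append({"host": host, "alerts": current})
                current = [alert]              # start a new risk event
        events.append({"host": host, "alerts": current})
    return events

alerts = [
    {"host": "web-01", "ts": 0,   "rule": "suspicious login"},
    {"host": "web-01", "ts": 120, "rule": "privilege escalation"},
    {"host": "web-01", "ts": 900, "rule": "port scan"},
    {"host": "db-02",  "ts": 60,  "rule": "unusual query volume"},
]
events = coalesce(alerts)
# web-01 yields two risk events (the third alert falls outside the window);
# db-02 yields one.
```

An analyst reviewing three risk events instead of four raw alerts is a toy example, but scale it to thousands of alerts per day and the value of machine-speed correlation becomes obvious.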
Even outside of incident response, context can prevent blind spots and human tunnel vision. Andress shared an anecdote to illustrate how AI can surface deeper insights that humans might miss:
"One of our engineers was testing our natural language capability by asking questions through MCP. He asked: 'What devices do I have assigned to me?' He expected his laptops to show up. They did, but the query also listed his YubiKeys that were registered."
While this seems relatively minor, it illustrates a larger point. In the typical enterprise, connecting a user to hardware assets usually means digging through separate systems or logs. By pulling those disparate threads together instantly, the system provided a complete picture rather than just a partial one.
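The "stitching" in that anecdote amounts to joining records about one user across inventories that are normally queried separately. A minimal sketch, with entirely made-up data sources and a hypothetical lookup function:

```python
# Hypothetical sketch: answering "what devices are assigned to this user?"
# by combining two inventories that would normally be checked independently.

# Assumed, illustrative data sources
laptop_inventory = {
    "engineer@example.com": ["MacBook Pro 16", "ThinkPad X1"],
}
security_key_registry = {
    "engineer@example.com": ["YubiKey 5C", "YubiKey 5 NFC"],
}

def devices_for(user):
    """Return the complete device picture for a user in one query,
    rather than a partial view from a single system."""
    return {
        "laptops": laptop_inventory.get(user, []),
        "security_keys": security_key_registry.get(user, []),
    }

print(devices_for("engineer@example.com"))
```

The point is not the join itself, which is trivial here, but that a natural-language interface backed by all of the sources returns the full picture by default, including the YubiKeys a human querying only the laptop inventory would have missed.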
That's the contextual edge Andress is talking about. In an arms race defined by speed, not having to manually hunt for that information could make all the difference. The cybersecurity landscape will keep evolving, but if defenders can harness that context, the scales may yet tip in their favour.