For years, the central debate in security operations was a staffing question. Build an internal SOC or hand detection and response to a managed provider? That question had a clean logic to it. You weighed cost against control, expertise against overhead, and arrived somewhere defensible.
That debate is over. Not because it was resolved, but because AI made it irrelevant.
A new whitepaper authored by Oliver Rochford, Lead Analyst at Cyberfuturists and a former Research Director at Gartner, argues that organizations are now confronting a fundamentally different kind of decision. Produced in collaboration with Daylight Security, Security Operations at the Nexus of AI makes a case that will be uncomfortable for many security leaders: the most consequential choice they face today is not which tool to buy or which MDR provider to retain. It is who owns decisions when machines are making them, and most organizations are letting that question answer itself through vendor contracts and procurement processes rather than deliberate strategy.
Rochford draws on more than 4,000 engagements with security practitioners, vendors, and enterprise leaders to support a core thesis that cuts against much of the current AI hype. AI is not replacing human judgment in security operations. It is changing what human judgment is asked to do, when it is applied, and how much it can be trusted when it arrives. The difference between an organization that understands this and one that does not is not just operational. It is a governance gap with real liability implications.
Three Shifts Driving the Moment
The report identifies three structural shifts driving this moment. First, AI has changed the economics of managed detection and response in ways that benefit providers who built around AI from the beginning rather than bolting it onto legacy architecture. Expert analysts at AI-native MDRs can now deliver a quality of coverage that was previously priced out of reach for most organizations. Second, the question of who runs the SOC has given way to a more complex question: who owns the decisions when AI is making them? That shift has outpaced most evaluation frameworks, which still prioritize alert volume, response times, and analyst certification counts. Third, and perhaps most practically disruptive, the boundary between security tools and security services has collapsed. Buying an AI SOC platform is not a technology procurement decision. It is an operating model commitment, one that embeds workflow dependencies and governance assumptions that compound over time and become expensive to reverse.
Two Models, One Deliberate Choice
The report presents two primary operating models. In the first, an organization deploys AI SOC tools internally and owns configuration, tuning, and decision governance. This model suits mature security teams. The report suggests eleven or more analysts as a rough threshold, along with strong detection engineering capability and genuine appetite for the complexity of governing AI decisions. That last requirement deserves emphasis. Governing AI decisions is not the same skill set as running a SOC. Organizations that treat it as equivalent are likely to find out the hard way.
In the second model, an organization engages an AI-enabled MDR provider. The MDR owns day-to-day AI governance and bears accountability for AI-driven decisions made on the customer's behalf. The customer retains oversight rights but is largely purchasing the provider's judgment alongside the provider's technology. This model is better suited to organizations without dedicated SOC capacity, or those that prefer predictable costs over governance complexity. A third, hybrid path exists for organizations with uneven maturity across security domains or those actively building internal capability while maintaining coverage.
What distinguishes the report from most AI-in-security commentary is its insistence that the tool-versus-service framing is no longer the right frame. Both models now involve AI-driven triage, probabilistic verdict generation rather than deterministic rule-firing, and human oversight of machine-generated conclusions. The more useful question, in Rochford's framing, is who makes which decisions and who owns the outcomes. That question has direct implications for incident post-mortems, regulatory compliance, and contractual liability, yet it is almost entirely absent from how most organizations evaluate security vendors today.
The Visibility Problem
The report dedicates significant attention to where AI decisions happen and how visible they are. Alert suppression sits at the most dangerous intersection: lowest visibility, highest stakes. When an AI system suppresses an alert, no human sees the original signal. If that suppression is wrong, the failure surfaces during incident response. There is no intermediate moment of review. The report's standard for AI-enabled MDRs is that suppression decisions must be made visible, scored by confidence level, and auditable after the fact. Anything less, it argues, is a governance gap.
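The report does not prescribe an implementation, but the standard it sets, visible, confidence-scored, auditable suppression, can be made concrete. The sketch below is a minimal illustration in Python of what an auditable suppression gate could look like; the record fields, the threshold value, and the append-only JSONL log are illustrative assumptions, not details from the report or any vendor's product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json


@dataclass
class SuppressionRecord:
    """Audit record for one AI suppression decision (illustrative schema)."""
    alert_id: str
    verdict: str        # e.g. "benign", "duplicate", "expected_change"
    confidence: float   # 0.0-1.0 score attached to the verdict
    evidence: dict      # artifacts the model cited in support of the decision
    suppressed: bool
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def gate_alert(alert_id: str, verdict: str, confidence: float, evidence: dict,
               threshold: float = 0.95,
               audit_log: str = "suppressions.jsonl") -> bool:
    """Suppress only above a confidence threshold, and always write the record."""
    suppressed = verdict == "benign" and confidence >= threshold
    record = SuppressionRecord(alert_id, verdict, confidence, evidence, suppressed)
    with open(audit_log, "a") as fh:  # append-only log, reviewable after the fact
        fh.write(json.dumps(record.__dict__) + "\n")
    return suppressed  # False: the alert continues on to a human analyst
```

The point of the sketch is the invariant, not the code: every suppression leaves a scored, timestamped record that a reviewer can query after an incident, which is the minimum the report argues customers should demand.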
The cascade problem receives equally serious treatment. Even in environments where humans make final decisions, AI shapes the investigation before that decision is reached. The context AI assembles defines what an analyst sees. What AI omits may never enter the investigation at all. This is not a hypothetical concern. The anchoring effect is well established in cognitive science: analysts are more likely to confirm an AI assessment than to contradict it. AI-generated investigation summaries become the official record in post-incident reviews and regulatory disclosures. The report is precise about this: having a human in the loop does not constitute effective governance if the human is working within constraints defined by the AI itself.
What AI-Native MDR Looks Like in Practice
Daylight Security is cited in the report as an illustration of AI-native MDR design in practice. Rather than relying on predefined detection queries, Daylight builds a knowledge graph encoding organizational context, including assets, relationships, and behavioral norms, and uses that graph to evaluate every event. AI derives a verdict for each event. When confidence is high, events resolve automatically. When confidence falls below threshold, events surface to a human analyst with the full evidence package assembled, including observable artifacts that support the AI's classification rather than a confidence score alone. That distinction matters. It enables genuine analyst evaluation rather than simple ratification.
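The report describes this flow conceptually rather than as code, but the routing logic it implies is simple to state: a per-event verdict with a confidence score and supporting evidence, auto-resolution above a threshold, escalation with the full evidence package below it. The Python sketch below illustrates that logic only; the names, the threshold value, and the callable structure are assumptions for illustration, not Daylight's implementation.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Verdict:
    label: str            # e.g. "benign" or "suspicious"
    confidence: float     # 0.0-1.0
    evidence: list[str]   # observable artifacts supporting the classification


AUTO_RESOLVE_THRESHOLD = 0.90  # illustrative value, not taken from the report


def triage(event: dict,
           context_lookup: Callable[[dict], dict],
           classify: Callable[[dict, dict], Verdict],
           escalate: Callable[[dict, Verdict], None]) -> str:
    """Evaluate one event against organizational context and route it."""
    context = context_lookup(event)      # assets, relationships, behavioral norms
    verdict = classify(event, context)   # AI-derived verdict for this event

    if verdict.label == "benign" and verdict.confidence >= AUTO_RESOLVE_THRESHOLD:
        return "auto-resolved"           # high confidence: closes without review
    escalate(event, verdict)             # below threshold: analyst receives the
    return "escalated"                   # full evidence package, not just a score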
What the Report Asks of CISOs
For CISOs reading the report's recommendations, the practical ask is significant. They are being told to treat AI adoption as an operating model decision rather than a technology upgrade, to ask vendors not how many alerts they can handle but who is accountable when the AI is wrong, and to treat the AI supply chain as a first-class operational risk. That last point reflects a concern that extends beyond the MDR relationship itself. Many security platforms depend on third-party foundation models and cloud-based AI services. An MDR whose detection capability depends on a specific foundation model API carries exposure to pricing changes, deprecation risk, and upstream model updates that can alter behavior without notice. The customer inherits that exposure whether or not it appears in the contract.
The report also includes a detailed CISO evaluation checklist spanning decision ownership, explainability, failure behavior, AI supply chain resilience, and adaptability over time, along with a maturity alignment framework that maps organizational size and SOC capability to the operating model most likely to serve them well.
The underlying message is not that AI fails to deliver. The case studies and practitioner interviews woven through the report describe genuine capability gains: scope and depth of investigation that human teams could not sustain at scale, detection rules that previously required months of false-positive testing now deployable in days, and analyst time redirected from routine triage toward genuine judgment work. The message, rather, is that those gains do not arrive automatically. They require the right operating model, deliberately chosen and properly governed.
Most organizations, Rochford concludes, are not making that choice. They are inheriting it.
Written By Jake Smiths