The idea of an AI agent sitting inside your Security Operations Centre, autonomously investigating alerts, triaging incidents, and even executing response actions, is no longer the stuff of science fiction. It is happening right now, and the pace of adoption is accelerating.
Vendors across the cybersecurity industry are racing to embed agentic AI capabilities into their platforms. The promise is seductive: an AI that does not just flag suspicious activity but actively works through the investigation, correlates evidence, and takes action, all at machine speed. For security teams that are chronically understaffed and overwhelmed by alert volumes, this sounds like exactly what they need.
And in many ways, it is. But the conversation around AI agents in the SOC needs to be more nuanced than the marketing materials suggest. Because when you give an AI system autonomy to act on your security infrastructure, the quality of the data it operates on becomes the single most important factor in determining whether that agent helps you or harms you.
What AI Agents Actually Do
It is worth being precise about what we mean by AI agents in a security context. Unlike traditional automation, which follows predetermined playbooks and executes fixed sequences of actions, an AI agent is designed to reason its way through a problem. It takes in data, assesses the situation, decides what additional information it needs, gathers that information, and then recommends or executes a course of action.
In a SOC, this might look like an agent that receives an alert about a suspicious login attempt, checks the user’s authentication history, cross-references the source IP against threat intelligence feeds, examines whether the user’s endpoint shows signs of compromise, and then either escalates the alert to a human analyst or initiates an automated containment action such as disabling the account or isolating the device.
That workflow, performed by a human, might take thirty minutes to an hour depending on the complexity. An AI agent can do it in seconds. The efficiency gains are real and significant.
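As a rough illustration, the investigation loop described above can be sketched in a few lines of Python. Everything here is hypothetical: the helper functions stand in for real integrations (identity provider, threat intelligence feed, EDR), and the scoring thresholds are invented for the example.

```python
# Hypothetical sketch of an agent-style triage loop for a suspicious login
# alert. All data sources are stubbed; a real agent would query live systems.

def auth_history(user):
    # Stub: countries the user has recently authenticated from
    return {"alice": ["GB", "GB", "GB"]}.get(user, [])

def threat_intel(ip):
    # Stub: whether the source IP appears in a threat intelligence feed
    return ip in {"203.0.113.7"}

def endpoint_compromised(user):
    # Stub: EDR signal for the user's device
    return False

def investigate(alert):
    """Gather context, then recommend an action (no autonomous response here)."""
    findings = {
        "new_country": alert["country"] not in auth_history(alert["user"]),
        "bad_ip": threat_intel(alert["source_ip"]),
        "endpoint_flagged": endpoint_compromised(alert["user"]),
    }
    score = sum(findings.values())
    if score >= 2:
        return {"action": "recommend_containment", "findings": findings}
    if score == 1:
        return {"action": "escalate_to_analyst", "findings": findings}
    return {"action": "close_benign", "findings": findings}

alert = {"user": "alice", "source_ip": "203.0.113.7", "country": "FR"}
print(investigate(alert)["action"])  # two signals fire -> recommend_containment
```

Note that even this toy version only recommends containment rather than executing it, a distinction that matters later in this article.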
The Data Quality Problem
Here is where it gets complicated. An AI agent is only as reliable as the quality of the data it has access to. If the agent is querying a fragmented data landscape where logs from different sources use different schemas, where enrichment is patchy or inconsistent, and where critical context is missing because certain data sources were never onboarded, then the agent's reasoning is fundamentally compromised.
Consider a scenario where an AI agent investigates a potential lateral movement alert. If the endpoint data is in one format, the network data in another, and the identity logs in yet another, the agent either needs to spend processing time reconciling these formats on the fly (introducing latency and potential errors) or it operates on an incomplete picture. Neither outcome is acceptable when the agent has the autonomy to take action.
This is why normalised, enriched, and consistently structured data is not just a nice technical feature. It is a safety mechanism. When all your security telemetry follows a common schema such as the Open Cybersecurity Schema Framework (OCSF), an AI agent can correlate events across sources with confidence. It can reason across endpoints, networks, cloud workloads, and identity systems because the data speaks a common language. Remove that consistency and the agent is making decisions based on incomplete or misinterpreted information.
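The value of a common schema can be shown with a small sketch. The field names below are OCSF-inspired but deliberately simplified, and the two source formats are invented for illustration; the point is that once both events land in the same shape, correlation becomes a trivial key match.

```python
# Illustrative normalisation of two differently-shaped events into one common
# structure. Field names are OCSF-inspired but simplified, not the real schema.

def normalise_idp(raw):
    # Hypothetical identity-provider log shape
    return {"class": "authentication", "user": raw["actor"],
            "src_ip": raw["client_ip"], "time": raw["published"]}

def normalise_firewall(raw):
    # Hypothetical firewall log shape
    return {"class": "network_activity", "user": raw.get("userName"),
            "src_ip": raw["srcaddr"], "time": raw["ts"]}

events = [
    normalise_idp({"actor": "alice", "client_ip": "198.51.100.4",
                   "published": "2024-05-01T10:00:00Z"}),
    normalise_firewall({"userName": "alice", "srcaddr": "198.51.100.4",
                        "ts": "2024-05-01T10:00:05Z"}),
]

# With a shared schema, cross-source correlation is a simple field match.
correlated = {e["src_ip"] for e in events}
print(correlated)  # {'198.51.100.4'} - both sources agree on the same field
```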
Autonomy Requires Guardrails
The level of autonomy you grant an AI agent should be directly proportional to your confidence in the data it is working with and the controls you have in place around its actions.
This means building guardrails into the process. At the most basic level, this involves defining clear boundaries around what an AI agent can and cannot do without human approval. Perhaps it can investigate and enrich alerts autonomously, but containment actions such as isolating a host or disabling an account require a human to confirm. Perhaps it can close low-severity alerts that meet specific criteria, but anything above a defined threshold gets escalated.
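A guardrail policy of this kind can be expressed very compactly. The action names, severity scale, and thresholds below are all illustrative assumptions, not a reference design, but they show the shape of the control: autonomous actions are a small allowlist, and anything destructive waits for a human.

```python
# Sketch of a simple autonomy guardrail: the agent may enrich and close
# low-severity alerts on its own, but containment actions always require
# explicit human approval. Action names and thresholds are illustrative.

AUTONOMOUS_ACTIONS = {"enrich", "close_low_severity"}
APPROVAL_REQUIRED = {"isolate_host", "disable_account"}

def authorise(action, severity, human_approved=False):
    """Return the guardrail decision for a proposed agent action."""
    if action in AUTONOMOUS_ACTIONS and severity <= 3:
        return "allowed"
    if action in APPROVAL_REQUIRED:
        return "allowed" if human_approved else "pending_approval"
    return "escalate"

print(authorise("close_low_severity", severity=2))                 # allowed
print(authorise("isolate_host", severity=8))                       # pending_approval
print(authorise("isolate_host", severity=8, human_approved=True))  # allowed
```

In practice every call to a function like this would also be written to an audit log, which is the subject of the next paragraph.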
More sophisticated guardrails involve monitoring the agent’s own behaviour. Just as you would audit a human analyst’s decisions, you should be logging and reviewing the decisions your AI agents make. What data did they access? What reasoning did they follow? What action did they take, and was it proportionate? Without this audit trail, you have an autonomous system operating inside your security infrastructure with no accountability.
The Role of Workflow Automation
AI agents do not operate in isolation. Their effectiveness is amplified significantly when they are integrated into well-designed workflow automation platforms. When an AI agent identifies a threat that requires a coordinated response across multiple systems, the ability to trigger automated workflows that span ticketing, communication, remediation, and documentation turns a single detection into a fully orchestrated response.
The combination of AI-driven investigation with deterministic workflow automation creates a powerful operating model. The AI handles the reasoning and decision-making, while the automation platform handles the execution with precision and repeatability. Neither component works as well in isolation as they do together.
A Measured Approach
The temptation with any new technology is to rush to adoption. But AI agents in the SOC deserve a more measured approach. Organisations should start by ensuring their data architecture is sound. If your security telemetry is not normalised, enriched, and queryable from a single platform, then deploying AI agents on top of that foundation is premature.
Once the data layer is solid, begin with limited autonomy. Let the agents investigate and recommend before you let them act. Monitor their outputs closely, compare their conclusions with those of experienced analysts, and build confidence over time. Gradually expand their scope as trust is established through evidence, not assumption.
The organisations that get the most value from AI agents will be those that combine a strong data foundation, clear governance, well-defined guardrails, and a culture that views AI as a powerful augmentation to human expertise rather than a replacement for it.
Final Thoughts
AI agents in the SOC represent a genuine step change in how security operations can function. The speed, consistency, and scale they offer are real advantages that no security leader should ignore. But they are not magic, and they are not safe by default. The difference between a helpful AI agent and a dangerous one is not the sophistication of the model. It is the quality of the data it operates on and the rigour of the governance around it.
Get the data right. Build the guardrails. Then let the agents do what they do best.
HOOP Cyber builds the data foundations and automated workflows that make AI agents effective and safe. To learn more about our approach to AI-ready security operations, visit www.hoopcyber.com or contact our team.
If you have attended a cyber security conference, read a vendor whitepaper, or even glanced at LinkedIn in the past two years, you could be forgiven for thinking that artificial intelligence has already solved most of the problems facing security operations teams. The messaging is relentless: AI-powered threat detection. AI-driven incident response. AI that replaces entire tiers of your SOC.
Some of this is real. Some of it is aspirational. And some of it is, to put it diplomatically, marketing getting ahead of engineering.
For security practitioners, the challenge is separating what AI genuinely does well in security operations today from what remains a work in progress. Getting this distinction right matters, because organisations that invest based on hype rather than reality risk wasting budget, creating false confidence, and in the worst cases, introducing new risks into their environment.
What AI Does Well Right Now
Let’s start with the positives, because there are genuine areas where AI is delivering measurable value in security operations today.
Pattern recognition and anomaly detection. This is where AI has arguably made its most meaningful contribution to security operations so far. Machine learning models are exceptionally good at identifying patterns across large volumes of data and flagging deviations from established baselines. Whether it is detecting unusual network traffic, identifying anomalous user behaviour, or spotting indicators of compromise across endpoint telemetry, AI excels at the kind of large-scale pattern matching that would be impossible for human analysts to perform manually.
Natural language querying. The ability to search security data using natural language rather than writing complex query strings is a genuine quality-of-life improvement for analysts. When your data lake supports natural language search, an analyst can ask a question in plain English and receive structured results without needing to know KQL, DQL, or SQL. This lowers the barrier to effective investigation and makes data accessible to a wider range of team members.
Alert triage and prioritisation. AI models that have been properly trained on normalised, enriched data can significantly reduce the volume of alerts that require human attention. By scoring alerts based on contextual factors such as asset criticality, user behaviour history, and threat intelligence correlation, AI can surface the incidents that genuinely matter and suppress the noise. For SOC teams drowning in false positives, this is not a marginal improvement. It is transformational.
Automation of routine tasks. AI-enhanced workflow automation can handle repetitive, well-defined tasks such as enriching alerts with threat intelligence, gathering contextual information during an investigation, or executing standard response playbooks. When integrated with workflow automation platforms, AI can orchestrate multi-step processes that previously required significant analyst time.
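The triage-and-prioritisation idea described above reduces, at its simplest, to a contextual scoring function. The factors and weights below are invented purely for illustration; a production model would be trained rather than hand-weighted.

```python
# Illustrative alert-scoring function combining contextual factors.
# Weights and factors are invented for the example, not a product algorithm.

def score_alert(alert):
    score = 0.0
    score += {"low": 0, "medium": 2, "high": 4}[alert["asset_criticality"]]
    score += 3 if alert["threat_intel_match"] else 0
    score += 2 if alert["anomalous_for_user"] else 0
    return score

alerts = [
    {"id": 1, "asset_criticality": "high", "threat_intel_match": True,
     "anomalous_for_user": True},
    {"id": 2, "asset_criticality": "low", "threat_intel_match": False,
     "anomalous_for_user": False},
]

# Surface the highest-scoring alerts first; suppress those below a threshold.
ranked = sorted(alerts, key=score_alert, reverse=True)
queue = [a["id"] for a in ranked if score_alert(a) >= 3]
print(queue)  # [1] - the low-context alert is suppressed as noise
```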
Where the Hype Outpaces the Reality
Now for the harder conversation:
Contextual reasoning. AI models can identify that something is unusual, but they struggle with understanding why it matters in the specific context of your organisation. A login from an unusual location might be a compromised credential, or it might be a senior executive travelling for business. AI can flag the anomaly, but the contextual judgement required to assess its significance still largely depends on human expertise. Models are improving in this area, but we are not yet at the point where AI can reliably replace the institutional knowledge that experienced analysts bring.
Understanding business risk. Security does not exist in a vacuum. Every detection, every response action, and every decision to escalate or contain has implications for business operations. AI models do not understand your business priorities, your regulatory obligations, your commercial relationships, or the political dynamics within your organisation. They cannot weigh the operational impact of isolating a critical production server against the security risk of leaving it connected. That calculation requires human judgement informed by business context that no model currently possesses.
Replacing experienced analysts. Perhaps the most persistent and damaging piece of hype around AI in security is the suggestion that it can replace experienced human analysts. It cannot. What it can do, and this is genuinely valuable, is augment their capabilities, freeing them from repetitive work so they can focus on the complex investigations and strategic decisions that require creativity, intuition, and deep expertise. Framing AI as a replacement rather than an augmentation does a disservice to both the technology and the people.
The Uncomfortable Prerequisite
There is one factor that underpins every one of the genuine AI successes listed above: data quality. Every area where AI delivers real value in security operations depends on having clean, normalised, enriched, and consistently structured data. Pattern recognition fails when patterns are obscured by inconsistent schemas. Alert prioritisation is unreliable when enrichment data is incomplete. Natural language querying produces poor results when the underlying data is fragmented across siloed tools.
This is the uncomfortable prerequisite that many AI vendors gloss over in their marketing. Their models may be brilliant, but they need a solid data foundation to work properly. Organisations that invest in AI tools without first addressing their data architecture are likely to be disappointed with the results.
A Grounded Path Forward
None of this should be read as an argument against AI in security operations. Quite the opposite. AI is already delivering genuine value, and its capabilities will only improve. But the organisations that benefit most will be those that approach it with clear eyes and realistic expectations.
That means investing in data architecture first, ensuring that your security telemetry is normalised, enriched, and stored in formats that are accessible and cost-effective. It means deploying AI where it demonstrably adds value today, particularly in pattern recognition, alert triage, and task automation. It means keeping humans firmly in the loop for contextual reasoning, business risk assessment, and strategic decision-making. And it means being willing to challenge vendors who promise more than the technology can currently deliver.
The AI revolution in security operations is real, but it is a marathon, not a sprint. The practitioners who succeed will be those who build solid foundations, set realistic expectations, and measure success in outcomes rather than marketing claims.
HOOP Cyber helps organisations build the data foundations that make AI in security operations actually work. To explore how we can support your journey, visit www.hoopcyber.com and speak with our team.
Every cyber security vendor on the planet is talking about AI right now. Their marketing materials promise faster detection. Smarter triage. Automated response. Predictive threat intelligence. The pitch is often compelling and, in many cases, genuinely exciting.
But here is the question that too few organisations are asking: is your data actually ready for AI?
Because the uncomfortable truth is that AI in cybersecurity is only as good as the data it is built on. And for most organisations, that data is fragmented, inconsistently formatted, siloed across multiple tools, and buried in legacy SIEM architectures that were never designed for the kind of analysis that modern AI demands.
The Data Problem Nobody Wants to Talk About
Most security teams are drowning in data. Logs from endpoints, firewalls, cloud workloads, identity systems, email gateways, and dozens of other sources pour into the SOC every second of every day. The volume alone is staggering. But volume is not the same as value.
The real challenge is that this data arrives in different formats, uses different schemas, and often contains gaps, duplications, or inconsistencies that make meaningful correlation almost impossible without significant manual effort. When an analyst tries to investigate an incident, they are not just searching for a needle in a haystack. They are searching across multiple haystacks, each one constructed differently, with no common language between them.
Now imagine asking an AI model to work with that same data. Machine learning algorithms require clean, normalised, consistently structured inputs in order to identify patterns, detect anomalies, and make reliable predictions. Feed them fragmented, inconsistent data and you do not get intelligence. You get noise dressed up as insight.
Why the Security Data Lake Changes the Game
This is where the concept of a purpose-built security data lake becomes not just useful but essential. A well-architected security data lake, such as one built on Amazon Security Lake using the Open Cybersecurity Schema Framework (OCSF), solves the foundational data problem that sits beneath every AI ambition.
By normalising data at the point of ingestion, converting logs from dozens of disparate sources into a single, consistent schema, you create a unified dataset that is genuinely ready for advanced analytics. Every IP address, every user identity, every event type follows the same structure regardless of where it originated. This is not just a nice-to-have for tidy reporting. It is the prerequisite for any AI or machine learning model to function effectively in a security context.
Enrichment at the point of ingestion takes this further. When data is automatically tagged against frameworks such as MITRE ATT&CK or mapped to regulatory requirements such as NIS2 before it even reaches a dashboard, it arrives with context already attached. An AI model working with enriched, normalised data can immediately begin correlating events, identifying patterns, and surfacing insights that would take human analysts hours or days to piece together manually.
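Enrichment at ingestion can be as simple as a lookup applied before the event is stored. The detection names in this sketch are hypothetical, and the mapping is a tiny stand-in for the much richer logic a real pipeline would use; the ATT&CK technique IDs themselves are genuine.

```python
# Sketch of enrichment at ingestion: tagging events with MITRE ATT&CK
# technique IDs before storage. The detection-name-to-technique mapping
# here is a tiny illustrative lookup, not production detection logic.

ATTACK_TAGS = {
    "powershell_encoded_command": "T1059.001",  # Command and Scripting: PowerShell
    "rdp_lateral_login": "T1021.001",           # Remote Services: Remote Desktop
}

def enrich(event):
    """Return a copy of the event with an ATT&CK tag attached when known."""
    event = dict(event)
    tag = ATTACK_TAGS.get(event.get("detection"))
    if tag:
        event["attack_technique"] = tag
    return event

e = enrich({"host": "srv-01", "detection": "rdp_lateral_login"})
print(e["attack_technique"])  # T1021.001
```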
Architecture First, AI Second
The organisations that will get the most value from AI in their security operations are not necessarily those with the biggest budgets or the most advanced tooling. They are the ones that invest in getting their data architecture right first.
Think of it this way: you would not build a house by starting with the roof. Yet that is precisely what many organisations are doing when they rush to deploy AI-powered security tools on top of messy, fragmented data foundations. The tools might be brilliant in isolation, but they cannot compensate for an underlying architecture that fails to deliver clean, structured, queryable data at scale.
A data-centric approach to security operations recognises that the architecture is the enabler. When your data pipelines are modular and well-orchestrated, when your storage is efficient and cost-effective, when your search capability can federate queries across distributed data sources, you have built an environment where AI can thrive. Without that foundation, even the most sophisticated AI model is operating with one hand tied behind its back.
The Cost Equation Matters Too
There is a practical dimension to this as well. Traditional SIEM architectures charge based on data volume, which means that as organisations ingest more data to feed AI models, their costs escalate dramatically. This creates a perverse incentive to limit the data you collect, which directly undermines the effectiveness of any AI-driven analysis.
A security data lake model, where data is stored in compressed, efficient formats such as Parquet and queried using federated search capabilities, decouples storage costs from compute costs. You can retain vast volumes of telemetry for long periods without the financial penalty that legacy SIEMs impose. This means your AI models can draw on deeper, richer datasets, improving their accuracy and reliability over time.
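A back-of-envelope calculation makes the cost gap concrete. Every number below is invented for illustration (hypothetical per-GB prices and a hypothetical compression ratio), and the model is deliberately simplified, but the structural difference between paying per GB ingested and paying per GB-month stored is what matters.

```python
# Back-of-envelope comparison of ingest-priced SIEM licensing vs
# storage-priced data lake retention. All prices are invented for
# illustration only; they are not real vendor or cloud pricing.

daily_gb = 500          # telemetry volume per day
retention_days = 365

siem_per_gb_ingested = 2.50   # hypothetical per-GB ingest licence cost
lake_per_gb_month = 0.023     # hypothetical object-storage cost per GB-month
parquet_compression = 0.15    # hypothetical compressed-size ratio for Parquet

siem_cost = daily_gb * retention_days * siem_per_gb_ingested
stored_gb = daily_gb * retention_days * parquet_compression
# Simplification: treat the full year's data as held for all 12 months.
lake_cost = stored_gb * lake_per_gb_month * 12

print(f"SIEM ingest cost:  ${siem_cost:,.0f}")
print(f"Lake storage cost: ${lake_cost:,.0f}")
```

Even allowing for query compute on top of the storage figure, the shape of the equation changes: retaining more telemetry no longer scales cost linearly with ingestion.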
What CISOs Should Be Asking Right Now
If your board or executive team is asking about AI in cyber security, and they almost certainly are, the first question to address is not which AI tool to buy. It is whether your data architecture can support AI effectively.
Some practical questions to consider:
Is your security telemetry normalised to a common schema such as OCSF?
Are you enriching data at the point of ingestion, or relying on analysts to add context manually after the fact?
Can you search across all your data sources from a single interface?
Is your data architecture cost-effective enough to retain the volume of data that AI models need to be trained and tuned effectively?
Can you scale your data ingestion without scaling your costs proportionally?
If the answer to most of those questions is no, then the priority is clear: fix the data architecture before investing heavily in AI tools that will underperform without it.
Building for the Future
The good news is that building an AI-ready security data lake is not a theoretical exercise. Organisations are doing it right now, using services like Amazon Security Lake combined with specialist data engineering expertise to create modular, scalable, cost-effective data platforms that serve as the foundation for current operations and future AI capabilities.
The organisations that get this right will be the ones that move beyond the AI hype and into genuine, measurable improvements in detection speed, response accuracy, and operational efficiency. Those that skip the data architecture step will continue to struggle, regardless of how many AI-powered tools they deploy.
The future of security operations is undeniably AI-enabled. But the path to that future runs directly through your data architecture. Get the foundations right, and AI becomes a genuine force multiplier. Neglect them, and it remains an expensive promise that never quite delivers.
HOOP Cyber specialises in building AI-ready security data architectures powered by Amazon Security Lake. To find out how we can help your organisation lay the right foundations, contact us via www.hoopcyber.com and get in touch with our team.
Anthropic's announcement of Project Glasswing marks one of the most significant developments in cyber security this year. At the heart of the initiative is Claude Mythos Preview, a frontier AI model that has already identified thousands of previously unknown zero-day vulnerabilities across every major operating system and web browser. The oldest bug it surfaced had been sitting quietly in OpenBSD for 27 years.
The coalition behind the project reads like a who's who of the technology industry: Apple, Microsoft, Google, AWS, CrowdStrike, Palo Alto Networks, Cisco, Broadcom, NVIDIA, JPMorganChase, and the Linux Foundation. These are organisations that have built entire security empires on proprietary AI and threat intelligence, and they are now publicly acknowledging the need for a collaborative, model-driven approach to finding what their own tools have missed.
It is an inflection point for cyber security.
What Project Glasswing Means in Practice
Project Glasswing uses Claude Mythos Preview to systematically hunt for vulnerabilities across critical infrastructure before adversaries can find them. The model operates agentically, reading source code, forming hypotheses about potential flaws, running live tests to confirm or reject those hypotheses, and producing detailed bug reports with proof-of-concept exploits and reproduction steps.
The results so far have been striking. Thousands of zero-day vulnerabilities, many of them critical, have been discovered in software that had already undergone extensive human-led security review. That tells us something important: even well-resourced security programmes have blind spots, and AI-driven analysis is now capable of seeing what human reviewers have not.
Why This Matters for the Organisations HOOP Cyber Works With
At HOOP Cyber, we work with organisations across sectors to strengthen their cyber security posture, from strategy and governance through to technical assurance and threat intelligence. Project Glasswing reinforces a message we have been delivering to our clients for some time: the threat landscape is shifting faster than traditional security approaches can keep pace with, and AI is now central to both sides of the equation.
For the organisations we support, the implications of Glasswing are threefold. First, the volume and severity of newly disclosed vulnerabilities are about to increase significantly. Patching cycles, vulnerability management programmes, and risk assessment processes will need to adapt accordingly. Second, the sophistication of AI-augmented attacks will continue to grow. The same capabilities that allow Mythos Preview to find and fix vulnerabilities at scale could, in the wrong hands, be used to exploit them. Anthropic themselves have been candid about this, warning that frontier AI capabilities are likely to advance substantially in the coming months. Third, and perhaps most critically, this development highlights the importance of foundational cyber hygiene. AI-powered vulnerability discovery is only as effective as the visibility and asset management that underpins it.
The Visibility Gap: You Cannot Patch What You Cannot See
This is where Project Glasswing becomes particularly relevant for the organisations HOOP Cyber serves. The enterprises most at risk from what comes next are not necessarily those without mature security operations centres or enterprise patching pipelines. They are the organisations running operational technology networks where firmware has not been updated since the equipment was installed. Clinical environments where a connected infusion pump or imaging system sits outside every mobile device management policy ever written. Industrial floors where the programmable logic controller communicating with a SCADA system was never designed with a security model at all, because when it was built, nobody imagined it would one day be networked.
For those environments, the question was never just whether we can find the vulnerability. It has always been whether the asset register or security architecture even knows the device exists. You cannot patch what you cannot see. You cannot segment what you have not inventoried. You cannot respond to a compromise in an asset that is not visible.
How HOOP Cyber Is Helping Clients Prepare
At HOOP Cyber, we help organisations build the foundations that make initiatives like Glasswing actionable. That means working with our clients to achieve continuous, real-time visibility across IT, OT and IoT environments. It means ensuring asset registers are accurate and current, that network diagrams reflect reality rather than aspiration, and that vulnerability management processes are mature enough to act on the intelligence that AI-powered discovery will generate.
Project Glasswing is genuinely impressive, and the early results are real. But AI-powered vulnerability discovery at scale only closes the gap if defenders already know where to look. Our role at HOOP Cyber is to make sure they do.
Looking Ahead
The window of advantage that Glasswing provides is measured in months, not years. Anthropic have been transparent about this. As frontier AI capabilities proliferate, the organisations that will be best positioned are those that have already invested in the fundamentals: asset visibility, robust vulnerability management, and a security architecture that is designed for the reality of their environments rather than the idealised version of them.
HOOP Cyber stands ready to help organisations navigate this new chapter. If you would like to discuss what Project Glasswing means for your organisation and how to prepare, get in touch with our team via www.hoopcyber.com.
Attackers are using AI to craft highly personalised, convincing phishing campaigns at scale. This article examines what has changed, why conventional detection is struggling to keep up, and what a stronger response looks like.
Phishing has always been the path of least resistance for attackers. It requires no sophisticated exploit, no zero-day vulnerability and no costly infrastructure. It requires only that a human being be deceived into taking an action they should not. For decades, security professionals have responded with a combination of technical controls and awareness training, teaching people to spot the telltale signs of a phishing attempt: the odd sender address, the generic greeting, the clumsy grammar, the slightly-off branding.
Those signals are becoming unreliable. The arrival of accessible, powerful artificial intelligence has fundamentally changed what phishing looks like and how it is produced. Attacks that once required significant skill, time and manual effort can now be generated at scale, personalised to individual targets, written in flawless prose and timed with a precision that would previously have been impossible. The assumptions that underpin much of our current defence posture no longer hold.
Understanding what has changed, and what that means for both technical controls and human awareness programmes, is now an urgent priority for every security team.
What Has Actually Changed
To appreciate the scale of the shift, it helps to understand what producing a convincing phishing campaign used to require. A well-crafted spear phishing attack against a senior executive, for example, would have demanded considerable investment: researching the target, understanding their professional relationships, mimicking the writing style of a trusted colleague, getting the cultural and linguistic register exactly right. This kind of attack was effective but time-consuming, limiting how many targets a threat actor could realistically pursue.
Large language models (LLMs) have collapsed that time investment close to zero. An attacker with access to a commercially available AI tool, or one of the growing number of purpose-built malicious variants emerging in underground markets, can now:
Generate highly personalised phishing emails from nothing more than a name, job title and organisation scraped from LinkedIn or a company website.
Produce content in multiple languages, with native-level fluency, removing the linguistic errors that once served as a reliable detection signal.
Replicate the writing style, tone and vocabulary of a specific individual by training on their publicly available communications.
Rapidly iterate across thousands of variations of the same campaign, allowing attackers to evade signature-based detection systems that rely on identifying known patterns.
Generate convincing pretexts grounded in real, current events, pulling in timely context that makes the message appear plausible and relevant.
The practical result is that the volume, quality and personalisation of phishing attacks are all increasing simultaneously. This is not a marginal improvement for attackers. It is a step change.
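The evasion problem described above is easy to demonstrate. In this hedged sketch, "signatures" are just hashes of previously seen messages, and the phishing variants are invented examples; the point is that content generated fresh for each send never matches a signature derived from an earlier sample.

```python
# Why exact-match signatures fail against generated variants: each message
# hashes differently even when the intent is identical. Illustrative only.

import hashlib

def signature(text):
    """Fingerprint a message body the way a naive signature store might."""
    return hashlib.sha256(text.encode()).hexdigest()

# Signature database built from a previously identified phishing template
known_bad = {signature("Your invoice is overdue. Pay at http://example.test")}

variants = [
    "Your invoice is overdue. Pay at http://example.test",
    "The attached invoice is now past due; settle it at http://example.test",
    "Reminder: payment for invoice 4471 is outstanding - http://example.test",
]

caught = [v for v in variants if signature(v) in known_bad]
print(len(caught), "of", len(variants), "variants matched")  # 1 of 3
```

Real gateways use fuzzier matching than a raw hash, but the underlying weakness is the same: detection trained on historical samples struggles with content that is novel by construction.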
Why Conventional Defences Are Struggling
Traditional anti-phishing defences operate on a model that AI-generated content is increasingly designed to circumvent. Understanding the specific limitations of each layer helps clarify where the gaps now lie.
Email filtering and signature-based detection
Conventional email security gateways rely heavily on known indicators of compromise: blacklisted domains, recognised malicious URLs, known sender patterns and, increasingly, the linguistic fingerprints of previously identified phishing templates. AI-generated phishing content is novel by design. Because each campaign can be generated fresh, it does not match existing signatures. Polymorphic content that varies automatically across sends is particularly effective at evading filters trained on historical samples.
Security awareness training based on visual cues
The advice most employees have received about identifying phishing centres on observable signals: check the sender address, look for spelling mistakes, hover over links before clicking, be suspicious of urgent requests. These remain valid principles, but they are insufficient when the attack is grammatically perfect, sent from a convincingly spoofed or legitimately compromised account, and references accurate details about the recipient and their organisation. Training that focuses primarily on spotting imperfection will not prepare people for attacks that have no obvious imperfections.
Frequency-based anomaly detection
Some security tools look for anomalous volumes of similar messages as an indicator of a phishing campaign in progress. AI-generated campaigns can be deliberately low-volume and highly targeted, sending small numbers of uniquely crafted messages to specific individuals rather than blasting a single template to thousands of recipients. This targeted approach not only evades volume-based detection but also increases the likelihood of success, since the message is specifically tailored to its recipient.
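The blind spot is straightforward to illustrate. This sketch uses an invented threshold and invented messages: a naive counter-based detector catches the mass blast but never fires on a handful of uniquely crafted, targeted messages.

```python
# A naive volume-based detector: flag a campaign only when many near-identical
# messages arrive. Uniquely crafted, low-volume messages never trip it.
# The threshold and message bodies are illustrative.

from collections import Counter

THRESHOLD = 20  # minimum copies of the same message before alerting

def flag_campaigns(messages):
    """Return message bodies seen often enough to look like a campaign."""
    counts = Counter(messages)
    return [m for m, n in counts.items() if n >= THRESHOLD]

mass_blast = ["Click here to reset your password"] * 500
targeted = [f"Hi {name}, following up on the Q3 supplier review..."
            for name in ("Priya", "Tom", "Elena")]

print(len(flag_campaigns(mass_blast)))  # 1 - the blast is caught
print(len(flag_campaigns(targeted)))    # 0 - three unique messages sail through
```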
DMARC, DKIM and SPF
Email authentication protocols remain an important foundational control, but they address sender authentication, not content. A phishing email sent from a legitimately compromised account, or from a domain that closely resembles a trusted one, can pass authentication checks while still being entirely malicious. Authentication controls are necessary but not sufficient on their own.
The Business Email Compromise Dimension
Nowhere is the impact of AI-generated phishing more acutely felt than in business email compromise (BEC). BEC attacks, where an attacker impersonates a trusted internal or external party to authorise fraudulent financial transactions or extract sensitive information, have long been among the most financially damaging forms of cybercrime.
AI amplifies the threat in several ways. Voice cloning technology now allows attackers to supplement email-based deception with convincing audio messages, creating multi-channel attacks that are substantially more persuasive than written communication alone. The combination of a realistic email from a spoofed executive account, followed by a voicemail that sounds genuinely like that executive, represents a level of social engineering sophistication that was effectively inaccessible to most attackers only a few years ago.
AI also enables attackers to conduct far more thorough research on their targets before making contact. Organisational hierarchies, ongoing projects, financial processes, supplier relationships and individual communication styles can all be inferred from public information sources. The resulting attacks are contextually rich in a way that makes them extremely difficult for recipients to identify as fraudulent.
What a Stronger Response Looks Like
None of this means that phishing defence is a lost cause. It means that the approach needs to evolve, across both technical controls and human-centred programmes, to match the changed threat landscape.
AI-powered detection to counter AI-powered attacks
The most immediate technical priority is deploying detection capabilities that can keep pace with AI-generated content. Machine learning models trained specifically to identify AI-generated text, and to detect subtle semantic and behavioural anomalies that escape signature-based filters, are now available and maturing quickly. These tools do not rely on matching known patterns; they assess the statistical properties of content and the behavioural context in which it arrives. Layering AI-driven detection over existing email security infrastructure is increasingly straightforward and represents one of the highest-value investments a security team can make.
Behavioural and contextual signals over content-only analysis
Because content alone is now an unreliable indicator, detection needs to incorporate a wider range of signals. Who is the sender in relation to the recipient? Is this communication pattern consistent with past behaviour? Is the request being made consistent with established processes? Does the timing correlate with other unusual activity in the environment? This kind of contextual, behavioural analysis, particularly when applied at the point of data ingestion and enrichment rather than retrospectively, allows security teams to surface suspicious communications even when the content itself appears entirely legitimate.
Rethinking awareness training for the AI era
Security awareness training needs to shift its emphasis from spotting imperfection to developing scepticism and process discipline. The most resilient defence against AI-generated phishing is not a human who can identify a poorly written email; it is a human who knows never to bypass the established verification process for a financial transaction, regardless of how convincing the request appears. Training programmes that build genuine security culture, reinforce procedural controls and create psychological safety around questioning unusual requests will outperform those focused primarily on identifying attack indicators.
Simulations should also evolve. Running phishing simulations using AI-generated content, rather than the same recognisable templates that employees have seen many times before, provides a more accurate picture of actual susceptibility and a more realistic learning experience.
Zero trust principles applied to high-risk actions
For the categories of action that are most targeted by phishing, particularly financial authorisations, credential changes and sensitive data access, zero trust principles provide an important structural safeguard. Requiring out-of-band verification for significant financial instructions, enforcing multi-party approval for high-value transactions and applying step-up authentication for sensitive system access all reduce the potential impact of a successful phishing attack, even when the initial deception succeeds.
Rapid detection and response capability
Because no combination of preventive controls will eliminate phishing entirely, the speed and effectiveness of detection and response becomes a critical variable. Integrating phishing reporting workflows with security orchestration and response tooling, ensuring that reported emails are triaged and investigated quickly, and that indicators of compromise are actioned across the environment in near real time, all contribute to minimising the dwell time and blast radius of a successful attack.
The Strategic Imperative
AI-generated phishing is not a future threat to be monitored and prepared for at some later date. It is a present and growing reality that is already affecting organisations across every sector and geography. The defenders who are best positioned to manage it are those who recognise that the threat has qualitatively changed, not just grown in volume, and who are updating their defences accordingly.
That means investing in detection capabilities that match the sophistication of the attacks. It means building awareness programmes that develop judgement and culture rather than just pattern recognition. And it means ensuring that the data infrastructure underpinning security operations can provide the contextual, behavioural signals that make the difference between detecting a sophisticated attack early and discovering it far too late.
The attackers have adopted AI. The defence needs to do the same, thoughtfully, systematically and with a clear understanding of where human judgement remains irreplaceable.
HOOP Cyber specialises in data-centric security operations, helping organisations build the foundations for AI-ready SOC environments through Amazon Security Lake, SIEM modernisation and data normalisation services. Contact us to book a discovery call today.
How Amazon Security Lake moves beyond being a passive repository to becoming the active data backbone for agentic AI workflows in security operations, enabling autonomous investigation, enrichment and response at scale.
For most of its history, the security data lake has been a place where data goes to be stored. Logs arrive, are normalised and compressed, are indexed and retained, and wait patiently for the moment when an analyst or a scheduled query arrives to retrieve them. The value proposition has been clear and genuine: cheaper storage, longer retention, better cross-source correlation than a traditional SIEM can provide. But in architectural terms, the data lake has largely been a passive participant in security operations. Data flows in; queries flow out; humans do the thinking in between.
That model is changing. The emergence of agentic AI, systems that do not merely respond to queries but autonomously plan, reason, act and adapt across multi-step workflows, is beginning to reframe what a security data lake is for. Amazon Security Lake, built on the Open Cybersecurity Schema Framework (OCSF) and deeply integrated with the broader AWS ecosystem, is particularly well positioned to make this transition.
Understanding what that means in practice, and what it requires to get right, is one of the more important conversations in security architecture today.
What Agentic AI Actually Means in a Security Context
The term agentic AI is used with varying degrees of precision, so it is worth being clear about what it means and, just as importantly, what it does not mean in a security operations context.
A conventional AI model in security is a classifier or a scorer. It receives an input, applies a trained model to it and produces an output: a risk score, a category, a recommendation. It does this one event at a time, and it does only what it is directly asked to do. A human analyst reviews the output and decides what to do next.
An agentic AI system operates differently. Given a goal rather than a single input, it plans a sequence of actions to achieve that goal, executes those actions using available tools and data sources, evaluates the results, adjusts its approach based on what it finds and continues until the goal is achieved or it determines that human escalation is required. It is, in essence, an autonomous investigator that can run in parallel across many threads simultaneously, without waiting for human direction at each step.
In a security operations context, this might look like an agent that is triggered by an initial alert and then autonomously: queries the relevant log data in Amazon Security Lake to gather the full event timeline; enriches the findings with threat intelligence and asset context; correlates the activity with other events across the estate; checks whether similar patterns have been seen before; assesses the severity and likely intent; and either closes the alert with a documented rationale, escalates to a human analyst with a fully assembled evidence package, or initiates a predefined containment action.
The analyst who receives an escalation from an agentic system is not starting an investigation from scratch. They are reviewing the findings of an investigation that has already been substantially completed and deciding whether to act on it. That shift, from initiating to adjudicating, is where the operational leverage of agentic AI lies.
Why the Data Layer Is the Agent Layer
Agentic AI systems are only as capable as the data they can access and reason over. This is not a peripheral consideration; it is the central architectural constraint that determines whether an agentic security operations capability delivers its potential or falls short of it.
An AI agent tasked with investigating a suspicious authentication event needs to be able to query login history across identity systems, network flows, endpoint telemetry and cloud API activity, correlate those events across time, enrich them with user context and threat intelligence, and do all of this rapidly enough to be operationally useful. If the underlying data is fragmented across siloed stores, inconsistently formatted, sparsely enriched or slow to query, the agent cannot perform the investigation effectively. Its conclusions will be incomplete, its false positive rate will be higher than necessary, and its value to the security team will be diminished.
Amazon Security Lake addresses this constraint directly. By centralising security data in a single, queryable store, normalising it to the OCSF schema so that events from different sources can be compared and correlated consistently, and making it accessible through standard interfaces including Amazon Athena and Amazon OpenSearch, it provides the data foundation that agentic AI workflows require.
Three properties of Amazon Security Lake are particularly significant for agentic use cases.
Schema consistency through OCSF
OCSF provides a common language for security events. When authentication events, network connections, process executions and file operations all conform to the same schema, an AI agent can reason across them without needing to translate between source-specific formats at query time. This consistency is what makes cross-source correlation tractable for an autonomous system operating at speed. An agent that has to navigate inconsistent field names and varying data structures will make mistakes and take longer; one working with consistently structured data can move with confidence and precision.
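Because OCSF events share field names, cross-source correlation can reduce to grouping on a common attribute. The flattened dictionaries below are a deliberately simplified stand-in for real OCSF records (which nest fields such as the source endpoint), used only to illustrate why a shared schema makes the operation trivial.

```python
# Sketch: correlating events from different source classes that share
# OCSF-style field names. These flat dicts are a simplification of real
# OCSF records, which use nested structures.
from collections import defaultdict

events = [
    {"class_name": "Authentication",   "src_ip": "203.0.113.7",  "time": 100},
    {"class_name": "Network Activity", "src_ip": "203.0.113.7",  "time": 105},
    {"class_name": "Process Activity", "src_ip": "198.51.100.2", "time": 110},
]

def correlate_by(events: list, key: str) -> dict:
    """Group events by a shared attribute, regardless of source class."""
    groups = defaultdict(list)
    for ev in events:
        if ev.get(key) is not None:
            groups[ev[key]].append(ev["class_name"])
    return dict(groups)

# One grouping operation ties an authentication event to network activity
# from the same address; no per-source field translation is needed.
by_ip = correlate_by(events, "src_ip")
```

Without a common schema, the same correlation would require a translation layer per source, and every translation is a place where an autonomous system can silently go wrong.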
Query performance at scale
Agentic workflows are query intensive. An agent investigating a single alert may issue dozens of queries across multiple data sources in the course of a single investigation thread. If each query is slow, the cumulative latency makes real-time agentic response impractical. Amazon Security Lake’s use of Apache Parquet columnar storage, combined with partitioning strategies optimised for security query patterns, ensures that the query performance required to support agentic workflows is achievable without prohibitive cost.
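Partition pruning is what keeps those frequent queries cheap: a query that filters on the partition key never touches the other partitions' files. The toy model below illustrates the mechanism; the date-keyed layout is illustrative, not Security Lake's exact partitioning scheme.

```python
# Toy model of partition pruning: data is laid out by partition key, and a
# query filtering on that key skips every other partition entirely.
# The date-keyed layout here is illustrative, not Security Lake's exact scheme.

partitions = {
    "2024-06-01": [{"event": "login", "user": "alice"}] * 1000,
    "2024-06-02": [{"event": "login", "user": "bob"}] * 1000,
    "2024-06-03": [{"event": "login", "user": "alice"}] * 1000,
}

def query(partitions: dict, day: str, predicate) -> tuple:
    """Scan only the partition matching the day filter; count rows scanned."""
    scanned, hits = 0, []
    for part_day, rows in partitions.items():
        if part_day != day:   # partition pruned: these rows are never read
            continue
        for row in rows:
            scanned += 1
            if predicate(row):
                hits.append(row)
    return hits, scanned

# Only one of the three partitions is scanned: a third of the work.
hits, scanned = query(partitions, "2024-06-02", lambda r: r["user"] == "bob")
```

At real data volumes the same principle is the difference between a sub-second agent query and one that scans terabytes, which is why partitioning strategy is an agentic concern and not just a cost concern.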
Integration with the AWS AI ecosystem
Amazon Security Lake does not exist in isolation. Its native integration with Amazon Bedrock, AWS’s managed foundation model service, creates a direct pathway from the data layer to the AI reasoning layer. Bedrock agents can be configured to query Security Lake as a tool, pull relevant data as part of an investigation workflow, reason over the results using a foundation model and take further actions based on what they find. The same integration extends to Amazon GuardDuty, Amazon Detective and AWS Security Hub, creating a coherent, AWS-native agentic security architecture that does not require complex custom integration work to stand up.
The Architecture of an Agentic SOC on AWS
Translating the concept of an agentic SOC into a concrete architecture requires thinking about four interconnected layers: the data layer, the enrichment layer, the agent layer and the governance layer. Each one is necessary; none is sufficient alone.
The Data Layer: Amazon Security Lake
The data layer is the foundation. Amazon Security Lake aggregates security data from AWS-native sources including CloudTrail, VPC Flow Logs, Route 53 query logs and AWS Security Hub findings, as well as from third-party sources via the OCSF-compatible subscriber model. Data arrives, is normalised to OCSF, is stored in Parquet format in Amazon S3 and is made queryable via Athena or OpenSearch.
Getting this layer right means investing in comprehensive source coverage, consistent normalisation and enrichment at ingestion. Every gap in source coverage is a potential blind spot for the agents operating above it. Every inconsistency in normalisation is a potential source of error in agent reasoning. The quality of agentic security operations is determined here, at the data layer, before the agents begin their work.
The Enrichment Layer: Context at Ingestion
Enrichment transforms raw normalised events into contextually meaningful data. In an agentic architecture, enrichment at ingestion is substantially more valuable than enrichment at query time, because it means that every event arriving in the data store already carries the context an agent needs to reason about it, without requiring an additional lookup that adds latency and complexity to the investigation workflow.
Useful enrichment for agentic security operations includes threat intelligence tagging against known malicious indicators, asset classification that identifies what role a particular system plays and what data it holds, user behavioural context that establishes normal patterns against which anomalies can be assessed, and MITRE ATT&CK tactic and technique classification that situates individual events within the broader framework of attacker behaviour.
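A minimal sketch of what enrichment at ingestion looks like in code: every event is tagged with context before it is written to the store, so no later lookup is needed. The lookup tables, field names and tag values here are illustrative assumptions.

```python
# Sketch of enrichment at ingestion: each event is tagged with threat intel,
# asset and ATT&CK context before it lands in the data store. All lookup
# tables and field names are illustrative assumptions.

THREAT_INTEL = {"198.51.100.99"}                      # known-bad indicators
ASSET_CLASS = {"db-prod-01": "crown-jewel", "dev-vm-7": "low-value"}
ATTACK_MAP = {"credential_stuffing": "T1110"}         # MITRE ATT&CK technique IDs

def enrich(event: dict) -> dict:
    """Return a copy of the event carrying the context an agent will need."""
    event = dict(event)
    event["ti_match"] = event.get("src_ip") in THREAT_INTEL
    event["asset_class"] = ASSET_CLASS.get(event.get("hostname"), "unclassified")
    event["attack_technique"] = ATTACK_MAP.get(event.get("pattern"))
    return event

enriched = enrich({"src_ip": "198.51.100.99", "hostname": "db-prod-01",
                   "pattern": "credential_stuffing"})
```

An agent reading this event later can reason immediately that a known-bad address is attacking a crown-jewel asset with a recognised technique, without issuing three extra lookups mid-investigation.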
The Agent Layer: Amazon Bedrock Agents
Amazon Bedrock Agents provides the orchestration framework for building agentic workflows on AWS. Agents built on Bedrock can be given access to tools, which in a security operations context typically means API access to query Amazon Security Lake, retrieve threat intelligence, interact with ticketing and case management systems, call AWS Systems Manager for remediation actions and communicate findings to analysts via notification services.
Agent design in a security context requires careful thought about goal specification, tool access scope and decision boundaries. An agent tasked with investigating suspicious network activity needs to know what data sources to query, in what order, and what conditions should trigger escalation versus autonomous closure. These workflows can be built incrementally, starting with fully supervised agents that recommend actions for human approval, and progressively extending autonomy as confidence in the agent’s judgement is established.
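The decision-boundary and supervised-mode ideas can be sketched in a few lines. The thresholds and action names below are illustrative assumptions, not Bedrock configuration; the point is that high-impact actions degrade to recommendations while the agent is still earning trust.

```python
# Sketch of decision boundaries with a supervised mode: the agent always
# reaches a verdict, but high-impact actions become recommendations for a
# human until autonomy is extended. Thresholds and names are illustrative.

SUPERVISED = True          # start fully supervised; relax as confidence grows
AUTO_CLOSE_BELOW = 0.2     # below this, close with a documented rationale
ESCALATE_ABOVE = 0.7       # above this, containment is warranted

def decide(confidence_malicious: float) -> str:
    if confidence_malicious < AUTO_CLOSE_BELOW:
        return "close_with_rationale"
    if confidence_malicious > ESCALATE_ABOVE:
        # Containment is high impact: recommend in supervised mode, act otherwise.
        return "recommend_containment" if SUPERVISED else "contain"
    return "escalate_to_analyst"
```

Extending autonomy then becomes a deliberate configuration change (flipping the supervised flag for a specific workflow) rather than a property of the model, which keeps the decision auditable.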
The Governance Layer: Human Oversight by Design
The governance layer is not an afterthought. It is the set of controls, boundaries and audit mechanisms that determine what agents can do autonomously, what requires human authorisation and how every agent action is logged and attributable. In a regulatory environment where accountability for security decisions matters, the ability to demonstrate that autonomous actions were taken within defined, approved parameters, and to reconstruct the reasoning behind every agent decision, is not optional.
Amazon CloudTrail provides the audit backbone, logging every API call made by an agent across the AWS environment. Defined approval workflows ensure that high-impact actions, such as isolating a workload or revoking credentials, require explicit human authorisation regardless of the agent’s confidence in its assessment. And regular review of agent decision logs allows security teams to identify where agent judgement is consistently sound and where human oversight remains essential.
What Agentic Security Operations Changes in Practice
The practical impact of a well-implemented agentic SOC architecture on Amazon Security Lake is measurable across several dimensions.
Investigation throughput increases substantially. Where a human analyst can actively investigate one alert at a time, an agentic system can run parallel investigations across hundreds of alerts simultaneously, triage them, close the clear negatives with documented rationale and surface the genuine threats for human review. The volume of alerts that receive thorough investigation, rather than a superficial check driven by time pressure, rises dramatically.
Mean time to detect and mean time to respond both improve, because the investigation that previously began when an analyst picked up an alert has already been substantially completed by the time the alert reaches human review. The analyst is not starting cold; they are reviewing a detailed findings package and making a decision.
Coverage consistency improves. Human analysts, under pressure and working long shifts, inevitably apply varying levels of thoroughness to different alerts. An agentic system applies the same investigative rigour to every alert it handles, regardless of volume, time of day or analyst workload. The quality floor of investigation rises.
And the nature of analyst work changes. The proportion of time spent on mechanical data gathering and repetitive triage falls; the proportion spent on genuine judgement, complex investigation and adversary understanding rises. For many security professionals, this represents a more satisfying and professionally developmental working experience, with positive implications for retention in a market where experienced analysts are consistently in short supply.
The Realistic Starting Point
An agentic SOC built on Amazon Security Lake is not a single deployment event. It is an architectural evolution that can and should be approached incrementally, with each stage delivering operational value before the next is undertaken.
For organisations with Amazon Security Lake already in place, the natural starting point is identifying the investigation workflows that are most repetitive, most time-consuming and most clearly defined. These are the workflows most amenable to initial agentic automation: the processes where the steps are known, the data sources are clear and the decision criteria are explicit. Building an agent for a well-understood workflow, running it in supervised mode alongside human investigators, and validating its outputs against human judgement is how confidence in agentic systems is established.
For organisations that have not yet deployed Amazon Security Lake, the data foundation work and the agentic capability work can be planned together from the outset. Designing the enrichment pipeline, the OCSF normalisation strategy and the source coverage plan with agentic use cases in mind from the beginning avoids the rework that comes from retrofitting an agentic layer onto a data architecture that was not designed to support it.
In either case, the progression from passive data store to active decision engine does not happen by accident. It requires deliberate investment in the data layer, thoughtful agent design and a governance framework that earns organisational trust in autonomous security operations over time.
The Direction of Travel
The security operations centre of the next five years will look substantially different from the one of today. Not because human analysts will have been replaced, but because the proportion of investigative work that requires human initiation and human execution at every step will have fallen significantly. The most capable security teams will be those that have built the data foundations to support autonomous investigation, deployed agentic systems that can operate reliably within defined boundaries, and redirected human expertise towards the judgement calls, adversary understanding and strategic security work that AI cannot replicate.
Amazon Security Lake, as the normalised, enriched, centrally queryable data backbone of an AWS-native security architecture, is the logical starting point for that evolution. The organisations investing in it now, not merely as a data store but as the foundation of an agentic security capability, are building an operational advantage that will compound over time.
The data store is becoming a decision engine. The question for security leaders is not whether to make that transition, but how to make it with the rigour, the governance and the data quality that the opportunity demands.
A clear, accessible introduction to agentic AI for security audiences: what distinguishes an AI agent from conventional automation and machine learning tools, what it can do autonomously in a security context, and what the realistic near-term use cases look like.
Every significant technology shift in cybersecurity arrives with its own vocabulary, and that vocabulary tends to travel faster than the understanding it is supposed to convey. Cloud-native, zero trust, AI-powered: each of these terms became ubiquitous before most security professionals had a clear, shared sense of what they meant in practice, what they required to implement well, and where the gap between marketing language and operational reality actually lay.
Agentic AI is at exactly that stage right now. The term is appearing in vendor materials, conference agendas and analyst reports at an accelerating rate. Some of what is being described under that label represents a genuine and significant shift in how security operations can be conducted. Some of it is rebranded automation with a new coat of paint. And the distance between the two is consequential for any security leader trying to make informed decisions about where to invest.
This article is an attempt to cut through the noise. What does agentic AI actually mean, technically and operationally? How is it genuinely different from the automation and machine learning tools that security teams are already using? And, perhaps most importantly, what does it realistically look like when it is applied in a security operations context today, as opposed to in the aspirational future that vendors tend to describe?
Three Things That Are Not Agentic AI
The clearest way to define what agentic AI is may be to start with what it is not. Three categories of existing technology are frequently conflated with it, and the distinctions matter.
Automation and SOAR
Security Orchestration, Automation and Response (SOAR) platforms have been a fixture of mature SOC environments for several years. They are excellent at executing predefined workflows: if this alert type fires, run this playbook, query this source, send this notification, create this ticket. The key word is predefined. A SOAR playbook does what its author wrote it to do, in the order it was written, without deviation. It cannot reason about a situation it has not encountered before, adapt its approach based on intermediate findings, or decide that a different sequence of actions would be more appropriate given what it has discovered. It is fast, reliable and consistent, but it is not intelligent. It executes; it does not think.
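The "predefined" quality is easiest to see in code. The sketch below is an illustrative stand-in for a SOAR playbook, not any vendor's format; the step names are invented, and the point is that the sequence is fixed regardless of what any step finds.

```python
# Contrast sketch: a SOAR-style playbook is a fixed, ordered list of steps.
# Step names are illustrative; the essential property is that the sequence
# never adapts to what the steps discover.

PLAYBOOK = ["query_auth_logs", "check_threat_intel", "create_ticket", "notify_analyst"]

def run_playbook(alert: dict) -> list:
    """Execute every step, in order, unconditionally."""
    executed = []
    for step in PLAYBOOK:   # always the same steps, always the same order
        executed.append(f"{step}:{alert['id']}")
    return executed

trace = run_playbook({"id": "ALERT-42"})
```

Even if the threat intelligence check came back with a definitive answer at step two, this playbook would still execute steps three and four exactly as written. An agent, by contrast, could decide the remaining steps were unnecessary, or that entirely different ones were needed.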
Machine Learning Detection Models
ML-based detection tools, whether they are detecting anomalous network behaviour, scoring alerts for risk, or classifying email content, are pattern-recognition systems. They are trained on data, they learn to identify signals associated with particular outcomes, and they apply that learning to new inputs. They are genuinely valuable and represent a meaningful advance over purely rule-based detection. But an ML model, however sophisticated, produces an output and stops. It does not then go and gather more information based on that output, reason about what the information means, decide what to do next and then do it. It classifies; it does not investigate.
Chatbots and Conversational AI Assistants
The natural language interfaces now being built into many security platforms, allowing analysts to query data or summarise alerts in plain English, are a useful and increasingly capable category of tooling. But responding to a question, however intelligently, is a single-step interaction. The user asks, the system answers, and control returns to the user. An agentic system does not wait to be asked what to do next. It decides for itself, acts on that decision, evaluates the result and continues towards its goal. Conversational AI assists; it does not act autonomously.
What Actually Defines an AI Agent
An AI agent, properly defined, is a system that can pursue a goal through a sequence of self-directed actions, using available tools and information, adjusting its approach based on what it discovers along the way, without requiring human instruction at each step.
Four properties distinguish a genuine AI agent from the tools described above.
Goal-directed reasoning
An agent is given an objective, not a script. Rather than executing a fixed sequence of steps, it reasons about what actions are most likely to achieve its goal given its current state of knowledge, takes those actions, observes the results and updates its reasoning accordingly. This ability to plan under uncertainty and adapt as new information becomes available is what separates agentic behaviour from rule-based or playbook-driven automation.
Tool use
Agents are not limited to reasoning in the abstract. They can use tools: querying databases, calling APIs, retrieving documents, sending notifications, triggering actions in other systems. In a security context, the tools available to an agent might include the ability to query a security data lake, retrieve threat intelligence, look up asset information, create or update a case management ticket, run a containment action through a cloud management API, or page an analyst through an alerting system. The richness of an agent’s tool access determines the scope of what it can accomplish autonomously.
Multi-step operation
A single query and response is not agentic behaviour. What makes a system agentic is its ability to chain actions across multiple steps, where each step is informed by the results of the previous one. An agent investigating a suspicious process execution on an endpoint does not simply retrieve the process log and stop. It then queries what else that process communicated with, checks whether those destinations are associated with known threat infrastructure, looks at what other endpoints may have run the same process, assesses the timeline of activity and builds a picture of potential scope, all within a single autonomous investigation thread.
Autonomous decision-making within defined boundaries
An agent makes decisions. Not all possible decisions, and not without constraints, but within its defined operating parameters it chooses what to do next without being told. This is the property that creates both the operational leverage and the governance requirements that come with agentic AI. An agent that can decide to escalate, to close, to gather more data, or to initiate a containment action is doing something qualitatively different from a tool that presents options for a human to choose between.
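The four properties combine into a recognisable loop, sketched minimally below. The tools are stubbed with lambdas and the decision logic is deliberately naive; this is a toy illustration of the plan-act-observe-decide structure, not a production agent.

```python
# Minimal sketch of the four properties combined: a goal-directed loop that
# chooses its next tool based on current knowledge, observes the result, and
# decides autonomously within a step budget. Tools and logic are stand-ins.

def run_agent(alert: dict, tools: dict, max_steps: int = 10) -> dict:
    knowledge = {"alert": alert}
    for _ in range(max_steps):                        # multi-step operation
        if "ip_reputation" not in knowledge:          # goal-directed reasoning:
            knowledge["ip_reputation"] = tools["threat_intel"](alert["src_ip"])
        elif "login_history" not in knowledge:        # ...gather what's missing
            knowledge["login_history"] = tools["data_lake"](alert["user"])
        else:                                         # autonomous decision
            malicious = knowledge["ip_reputation"] == "bad"
            anomalous = alert["src_ip"] not in knowledge["login_history"]
            knowledge["verdict"] = "escalate" if (malicious or anomalous) else "close"
            return knowledge
    knowledge["verdict"] = "escalate"                 # budget exhausted: fail safe
    return knowledge

tools = {                                             # tool use: stubbed here
    "threat_intel": lambda ip: "bad" if ip == "203.0.113.7" else "clean",
    "data_lake": lambda user: ["198.51.100.1"],       # user's usual source IPs
}
result = run_agent({"src_ip": "203.0.113.7", "user": "alice"}, tools)
```

Note the fail-safe at the end: when the step budget runs out, the agent escalates rather than closing. Defining what the agent does at the edges of its competence is as much a part of agent design as the happy path.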
Why the SOC Is Particularly Well Suited to Agentic AI
Security operations is not an obvious first choice for autonomous AI systems. It involves high-stakes decisions, ambiguous information, adversarial conditions and significant consequences for errors in either direction. Missing a genuine threat is costly; acting incorrectly on a false positive can disrupt legitimate operations and erode trust in security tooling.
And yet the SOC is in many respects an excellent environment for agentic AI, precisely because its core challenge is one that agentic systems are well suited to address.
The fundamental problem in a security operations centre is the gap between the volume of events that need to be investigated and the human capacity available to investigate them. That gap is structural and growing. The attack surface expands continuously; the volume of security telemetry grows with it; the supply of experienced security analysts does not keep pace. The result is a sustained, systemic shortfall in investigative capacity that manifests as alert backlogs, superficial triage, delayed detection and analyst burnout.
Agentic AI addresses this gap directly by extending investigative capacity without the constraints of human bandwidth. An agent can run continuously, investigate in parallel across many alerts simultaneously, apply consistent rigour regardless of alert volume, and do so at a speed that human investigators cannot match. It does not get tired, does not become desensitised to repeated alert types and does not cut corners under time pressure.
It also operates in an environment that is, compared to many other domains, relatively well structured. Security data, particularly when normalised to a common schema such as OCSF, is amenable to systematic querying and correlation. The logic of a security investigation, while it requires judgement, follows recognisable patterns. And the tools available to security agents (data lakes, threat intelligence APIs, case management systems, cloud management interfaces) are increasingly well documented and accessible. These structural features make security operations a more tractable domain for agentic AI than many others.
Realistic Near-Term Use Cases
The most useful question for security leaders assessing agentic AI is not what is theoretically possible but what is actually deliverable today, in real operational environments, with the tools and data infrastructure that exist now. The answer is more substantial than the hype sometimes suggests, and more bounded than the most ambitious vendor claims imply.
Autonomous alert triage and investigation
This is the most mature and immediately deployable agentic use case in security operations. An agent receives an alert, queries the relevant security data to gather the full event context, enriches the findings with threat intelligence and asset information, correlates the activity with related events across the environment, assesses severity and likely intent, and produces a documented investigation finding. Alerts assessed as low risk are closed with a rationale; alerts assessed as significant are escalated to a human analyst with a complete evidence package already assembled.
The operational impact is immediate and measurable. Analysts receive fewer alerts that require their attention, and the alerts they do receive come with substantially more context than they would have had if the analyst had initiated the investigation themselves. Mean time to investigate falls; coverage consistency improves; the quality of human attention is directed where it adds most value.
Threat hunting on a continuous basis
Traditional threat hunting is a periodic, analyst-driven activity: a skilled analyst forms a hypothesis about attacker behaviour, constructs queries to test that hypothesis against historical data and pursues the investigation until they either find something or exhaust the hypothesis. It is valuable but resource-intensive, and in most organisations it happens far less frequently than security teams would like.
Agentic AI can transform threat hunting from a periodic activity into a continuous one. Agents can be tasked with running defined hunting hypotheses against the security data lake on an ongoing basis, pursuing any promising findings, escalating confirmed or probable discoveries and logging negative results for future reference. The breadth of hypothesis coverage that a single human hunter can achieve in a week, an agentic system can cover continuously, across a much larger dataset.
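A continuous hunting loop of this kind reduces, in outline, to a set of named hypotheses evaluated against each new batch of events, with negative results retained as empty findings. The hypotheses below are illustrative only:

```python
# Hypothetical continuous-hunting loop: hypotheses are (name, predicate) pairs
# evaluated against each event batch; names and conditions are illustrative.

HYPOTHESES = [
    ("rare_admin_tool", lambda e: e.get("process") == "psexec.exe"),
    ("offhours_login",  lambda e: e.get("class") == "authentication" and e.get("hour", 12) < 5),
]

def run_hunts(events):
    """Return findings per hypothesis; empty lists record negative results for future reference."""
    results = {name: [] for name, _ in HYPOTHESES}
    for e in events:
        for name, predicate in HYPOTHESES:
            if predicate(e):
                results[name].append(e)
    return results

batch = [
    {"class": "process", "process": "psexec.exe", "host": "srv-01"},
    {"class": "authentication", "hour": 3, "user": "carol"},
    {"class": "authentication", "hour": 14, "user": "dave"},
]
findings = run_hunts(batch)
```

Run on a schedule against the data lake, the same structure covers every hypothesis continuously rather than whenever an analyst has a free afternoon.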
Incident scoping and evidence gathering
When a significant security incident is confirmed, one of the most time-consuming early tasks is establishing its scope: which systems are affected, what data may have been accessed or exfiltrated, how the attacker moved through the environment and what the timeline of activity looks like. This work is critical for effective containment and recovery, but it is largely mechanical, involving systematic querying across multiple data sources to build a complete picture.
Agentic AI systems can run this scoping work in parallel with the human incident response team, covering a broader range of data sources more quickly than human investigators working sequentially. The result is a more complete and more rapidly assembled picture of incident scope, which directly improves the quality and speed of containment decisions.
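In outline, that parallel scoping work amounts to fanning one indicator out across several sources at once and merging the results. The source names and canned data below are placeholders for real queries:

```python
# Sketch of fanning scoping queries out across data sources in parallel;
# source names and the canned results are placeholders, not real APIs.
from concurrent.futures import ThreadPoolExecutor

def query_source(source, indicator):
    # Stand-in for a real query against one data source.
    fake_data = {
        "endpoint_logs": ["host-12", "host-31"],
        "vpn_logs": ["host-31"],
        "cloud_trail": [],
    }
    return source, fake_data.get(source, [])

def scope_incident(indicator, sources):
    """Query every source concurrently and merge the affected-host lists."""
    affected = set()
    with ThreadPoolExecutor() as pool:
        for source, hosts in pool.map(lambda s: query_source(s, indicator), sources):
            affected.update(hosts)
    return sorted(affected)

print(scope_incident("198.51.100.7", ["endpoint_logs", "vpn_logs", "cloud_trail"]))
# ['host-12', 'host-31']
```

Human investigators typically work these sources sequentially; running them concurrently is where most of the speed advantage comes from.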
Vulnerability context and prioritisation
Organisations routinely face vulnerability backlogs containing thousands of identified issues that cannot all be remediated immediately. Prioritising which vulnerabilities to address first requires combining the raw vulnerability data with contextual information about the affected assets: their exposure, their criticality, whether there is active exploitation of the vulnerability in the wild and whether there are compensating controls in place that reduce the effective risk.
Agents can automate the assembly of this contextual picture for each vulnerability, querying asset inventory systems, threat intelligence feeds and internal security controls data to produce a risk-adjusted prioritisation that reflects actual organisational exposure rather than generic severity scores alone. Security and IT teams receive a prioritised remediation list that is genuinely actionable rather than mechanically generated from CVSS scores.
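A toy version of that risk adjustment might look like the following; the multipliers are illustrative only, not a recommended weighting scheme:

```python
# Toy risk-adjusted scoring: CVSS as a base, modified by exposure, active
# exploitation, compensating controls, and asset criticality. All weights
# are illustrative assumptions, not a recommended scheme.

def risk_score(vuln):
    score = vuln["cvss"]
    if vuln["internet_exposed"]:
        score *= 1.5
    if vuln["actively_exploited"]:
        score *= 2.0
    if vuln["compensating_control"]:
        score *= 0.5
    return score * vuln["asset_criticality"]  # criticality in [0, 1]

backlog = [
    {"id": "CVE-A", "cvss": 9.8, "internet_exposed": False, "actively_exploited": False,
     "compensating_control": True,  "asset_criticality": 0.2},
    {"id": "CVE-B", "cvss": 7.5, "internet_exposed": True,  "actively_exploited": True,
     "compensating_control": False, "asset_criticality": 0.9},
]
ranked = sorted(backlog, key=risk_score, reverse=True)
print([v["id"] for v in ranked])  # ['CVE-B', 'CVE-A'] despite CVE-A's higher CVSS
```

The agent's contribution is not the arithmetic but the assembly: gathering the exposure, exploitation, and control context for thousands of findings is the part that does not scale by hand.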
Post-incident reporting and documentation
The documentation work that follows a security incident (assembling timelines, recording actions taken, producing reports for internal governance and external obligations) is both important and time-consuming. It is also largely a matter of organising and presenting information that already exists in logs, tickets and communications, rather than generating genuinely new analysis. Agentic systems can draft initial post-incident reports from the accumulated evidence and action records of an investigation, substantially reducing the time analysts spend on documentation and freeing them to focus on the lessons-learned analysis that genuinely benefits from human judgement.
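At its simplest, drafting the timeline portion of such a report is a matter of ordering evidence records that already exist. A minimal sketch, with illustrative field names:

```python
# Sketch: drafting an incident timeline from evidence records that already
# exist in logs and tickets; the record fields are illustrative.

records = [
    {"ts": "2026-01-10T14:03:00Z", "source": "edr",    "note": "Malicious process blocked"},
    {"ts": "2026-01-10T13:47:00Z", "source": "auth",   "note": "Suspicious login from new IP"},
    {"ts": "2026-01-10T14:10:00Z", "source": "ticket", "note": "Account disabled by responder"},
]

def draft_timeline(records):
    """Sort evidence chronologically and render one line per entry (ISO timestamps sort lexically)."""
    ordered = sorted(records, key=lambda r: r["ts"])
    return "\n".join(f"{r['ts']}  [{r['source']}]  {r['note']}" for r in ordered)

print(draft_timeline(records))
```

The draft still needs human review, but the mechanical collation is exactly the part an agent can take off the analyst's plate.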
The Governance Question That Cannot Be Avoided
Any serious introduction to agentic AI in security must address the governance dimension, because it is the dimension that most directly determines whether autonomous security operations create value or create risk.
An agent that can take actions has the potential to take wrong actions. It can close an alert that warranted investigation. It can escalate in a way that disrupts legitimate operations. In more autonomous configurations, it can trigger containment actions based on a misassessment of the situation. These are not hypothetical risks; they are the predictable failure modes of any system that makes decisions under uncertainty.
Managing them requires deliberate design rather than hopeful assumption. Specifically, it requires clear definition of the boundary between what an agent can do autonomously and what requires human authorisation. In practice, most organisations beginning their agentic AI journey set this boundary conservatively: agents can query, correlate, enrich and recommend, but any action that affects systems, users or data requires explicit human approval. As confidence in agent judgement is established through operational experience, that boundary can be extended thoughtfully.
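That boundary can be made concrete as a default-deny authorisation check in the orchestration layer. The action names below are illustrative:

```python
# Minimal autonomy-boundary check: read-only actions run autonomously,
# anything that affects systems requires human approval, and anything
# undefined is denied. Action names are illustrative assumptions.

AUTONOMOUS = {"query", "correlate", "enrich", "recommend"}
REQUIRES_APPROVAL = {"disable_account", "isolate_host", "block_ip"}

def authorise(action, approved_by=None):
    """Return the disposition for a requested agent action."""
    if action in AUTONOMOUS:
        return "execute"
    if action in REQUIRES_APPROVAL:
        return "execute" if approved_by else "await_approval"
    return "deny"  # default-deny anything outside the defined boundary

assert authorise("query") == "execute"
assert authorise("isolate_host") == "await_approval"
assert authorise("isolate_host", approved_by="analyst-7") == "execute"
assert authorise("format_disk") == "deny"
```

Extending the boundary as confidence grows then becomes an explicit, reviewable change to `REQUIRES_APPROVAL` rather than an implicit shift in behaviour.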
It also requires comprehensive audit capability. Every action taken by an agent, every query issued, every decision made and every rationale recorded should be logged in a way that allows it to be reviewed, understood and, if necessary, challenged. In a regulated environment, the ability to demonstrate that autonomous security actions were taken within approved parameters and for documented reasons is not optional.
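A minimal sketch of such an audit trail, with an assumed (not standardised) record shape:

```python
# Sketch of an append-only audit trail for agent decisions: every action is
# recorded with its parameters and rationale so it can be reviewed, understood
# and, if necessary, challenged later. The record shape is an assumption.
import datetime
import json

audit_log = []

def record(agent, action, rationale, params):
    """Append a timestamped, serialised decision record and return it."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "rationale": rationale,
        "params": params,
    }
    audit_log.append(json.dumps(entry))  # serialised for an append-only store
    return entry

record("triage-agent-1", "close_alert",
       "No corroborating events within 24h; source IP is a known scanner",
       {"alert_id": "A-4711"})
```

In a regulated environment the store behind `audit_log` would need to be tamper-evident, but the essential discipline is the same: no action without a logged rationale.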
Governance is not a constraint on agentic AI. It is the condition that makes agentic AI trustworthy enough to be genuinely useful.
Why It Matters Now
The question of timing is legitimate. Agentic AI has been discussed as a future capability for long enough that healthy scepticism about whether it has truly arrived is reasonable. The answer in 2026 is that it has, in a qualified but meaningful sense.
The foundation model capabilities that underpin agentic reasoning have matured substantially over the past two years. The infrastructure for connecting agents to tools and data sources, including platforms such as Amazon Bedrock Agents, has moved from experimental to production ready. The security data platforms needed to give agents something useful to work with, including Amazon Security Lake and its OCSF-normalised data model, are deployed and operating in real enterprise environments. And a growing body of early-adopter experience is producing the operational knowledge needed to implement agentic security systems with realistic expectations and appropriate governance.
None of this means that every organisation should deploy fully autonomous agentic security operations immediately. It means that the building blocks are sufficiently mature that security leaders who begin the journey now, starting with well-defined, well-governed use cases and expanding incrementally, will be substantially better positioned in two or three years than those who wait for the technology to mature further before engaging with it.
The SOC of the near future will be one in which autonomous agents handle the systematic, repeatable, data-intensive work of security investigation, and human analysts focus their irreplaceable capabilities on the judgement, creativity and adversary understanding that no AI system can replicate. Getting there requires starting somewhere. And the starting point is understanding, clearly and practically, what agentic AI is and what it can realistically do.
That understanding is now available. The next step is deciding what to do with it.
HOOP Cyber specialises in data-centric security operations, helping organisations build the foundations for AI-ready SOC environments through Amazon Security Lake, SIEM modernisation and data normalisation services. Contact us to book a discovery call today.
In an industry that is never short of announcements, genuinely transformative moments can be easy to miss. The news that HOOP Cyber, one of the UK’s most innovative specialist security data operations firms, has joined the FSP Consulting Services Group is one that deserves to be heard clearly.
To explore what it means, Lisa Ventura MBE FCIIS sat down with Simon Johnson, CEO and Founder of HOOP Cyber, for a wide-ranging conversation about the partnership, the rationale behind it, and what lies ahead for customers and the broader industry.
The Problem HOOP Cyber Was Built to Solve
When Simon Johnson founded HOOP Cyber three years ago, he had a clear and pressing problem in his sights. Having spent years working across security operations, SIEM, and threat intelligence, he had seen the same challenge play out time and again at organisations of all sizes: security data volumes had grown exponentially, but the tools and architectures used to manage them had not kept pace.
“My SIEM was fantastic and it was great ten years ago when I had 100 gigabytes of data, but now I’ve got two or three terabytes of data a day and I’m struggling.” – Simon Johnson, CEO and Founder, HOOP Cyber.
For many security teams, what had once been a quick and efficient way to interrogate data had become, in Simon’s words, “a genuine data engineering millstone around the neck.” Storing, querying, and drawing insight from vast quantities of security data had become slower, more complex, and significantly more costly.
HOOP Cyber was built specifically to address this reality, developing a set of approaches centred on what the firm calls modernised security operations. At its core is the conviction that cybersecurity is, fundamentally, a data problem. The company’s flagship offering, HOOP Lake, powered by Amazon Security Lake, transforms the way security teams can stream, store, search, enrich, and comply across their entire estate of security events, using standardised schemas such as OCSF and natural language query capabilities to make security data genuinely accessible and actionable.
Why FSP? Why Now?
Simon spoke candidly about the journey that led him here, and the qualities that made FSP the right partner at the right moment.
As HOOP Cyber grew, it found itself working with some very large customers. Delivering outstanding outcomes at that scale required significant capacity and capability that, as a focused specialist firm, was not easy to build alone. Simon began actively exploring how to deliver at the pace and scale his clients needed, while preserving the values and culture that had made HOOP Cyber distinctive.
FSP Consulting Services Group is a 450-person cyber consulting business headquartered in Reading, UK. It has earned a strong industry reputation not only for technical excellence across enterprise transformation and cyber security, but also as an outstanding place to work, a quality Simon found genuinely lived out by the people he encountered day to day.
“Culture is absolutely key to any successful environment. I think I definitely felt like the culture is a perfect fit.” – Simon Johnson, CEO and Founder, HOOP Cyber.
Beyond the culture question, FSP offered the capacity and multi-domain capability to take HOOP Cyber’s work further than it could travel alone. Where HOOP had historically operated within the specific domain of security operations, FSP opens the door to a broader set of disciplines including data engineering, AI adoption, and enterprise transformation, creating a proposition that can support clients across and beyond the CISO’s remit.
What This Means for Existing Customers
For those already working with HOOP Cyber, the message from Simon is very reassuring. The core focus remains unchanged: building modernised security operations with a data-centric approach. What changes is the scale at which HOOP can deliver, and the breadth of expertise it can bring to bear.
Key Points for Existing HOOP Cyber Clients:
The same specialist focus on security data operations and SIEM modernisation continues
Delivery can now happen at significantly greater scale, backed by a 450-person consultancy
Clients gain access to FSP’s broader capabilities across cyber, data engineering, and AI adoption
Services extend across the UK and, in time, across EMEA and the US
FSP’s vendor-agnostic approach means clients receive the right solution for their specific environment
As Simon put it simply, for existing clients “nothing changes. Hopefully it’s only just a bit more of the same good stuff.” The expanded platform means that HOOP Cyber can now help clients not only with the technical mechanics of managing security data, but also with how they securely adopt AI, understand the business case and return on investment behind AI investments, and navigate the intersection of data, security, and digital transformation.
The AWS Partnership: A Strategic Differentiator
One of the most significant aspects of HOOP Cyber’s heritage that it carries into the FSP Group is its strategic partnership with Amazon Web Services. This is, in Simon’s view, one of the most compelling elements of the combined offering.
AWS has been a strong advocate for HOOP Cyber’s growth, particularly in the areas of security data modernisation and Amazon Security Lake. FSP, for its part, had been looking to deepen its AWS capabilities to serve the many clients who have substantial investments in AWS environments or who run SaaS applications within AWS infrastructure.
The result is a focused ambition: to build a world-class AWS partnership that can offer FSP clients a completely new level of experience around the adoption of AWS solutions, cloud migrations, and the broader AWS ecosystem. It is a natural extension of the data-centric security work HOOP Cyber has always done, now with the scale and reach of a larger enterprise consultancy behind it.
The Future of Security Data Operations
The conversation with Lisa Ventura MBE FCIIS closed with a forward-looking question: what does the future of security data operations actually look like? Simon offered a grounded and thoughtful perspective, shaped by years of working at the coalface of the problem.
The fundamental challenge will not change. Security remains a data problem. The questions that will increasingly preoccupy security teams are architectural and strategic: should data be centralised or left distributed? How can data lake solutions such as Amazon Security Lake enable aggregation without the cost and complexity of traditional approaches? How can open schemas like OCSF make data simpler and faster to query? Can federated search and query capabilities decouple data storage from data interrogation?
“It’s critical to be able to have access to the right data, to ask that data the right question at the right time.” – Simon Johnson, CEO and Founder, HOOP Cyber.
AI will inevitably play an increasingly prominent role, and Simon acknowledged this candidly, noting that the industry will “continue to see a splattering of AI here, there, and everywhere.” Detection as code, doing more with less, and smarter automation are themes that will persist. But underneath all of it, the data problem endures, and that is precisely where HOOP Cyber and FSP are focused.
A Purposeful, People-Centred Approach
Throughout the conversation, not only was the strategic logic of the partnership emphasised, but also the spirit in which it had been forged. This is a combination built on shared values, a genuine alignment of culture, and a belief that the best outcomes for clients come from organisations where people are genuinely invested in each other’s success.
As Lisa observed in closing, this is precisely the kind of purposeful, people-centred approach to cybersecurity that the industry needs considerably more of. The joining of HOOP Cyber and FSP Consulting Services Group is not simply a corporate transaction; it is a signal of intent, and one that the security data operations space would do well to pay close attention to.
Find Out More
Whether you are an existing HOOP Cyber client, a prospective partner, or simply following the evolution of security data operations, this is a chapter worth watching closely.
1. Introduction
These Terms and Conditions of Use (“Terms”) govern your access to and use of the website located at www.hoopcyber.com (the “Website”), which is owned and operated by HOOP Cyber Ltd (“HOOP Cyber”, “we”, “us”, or “our”). HOOP Cyber is a cyber data engineering consultancy, now part of the FSP group, providing data-driven security operations solutions to organisations worldwide.
By accessing or using this Website, you agree to be bound by these Terms in their entirety. If you do not accept these Terms, you must cease using the Website immediately. These Terms apply to all visitors, users, and others who access the Website.
We reserve the right to amend these Terms at any time. Any changes will be posted on this page with an updated effective date. Your continued use of the Website following the posting of revised Terms constitutes your acceptance of those changes. We therefore encourage you to review this page periodically.
2. About HOOP Cyber
HOOP Cyber Ltd is registered in England and Wales. Our primary areas of expertise include Security Operations (SecOps) architecture modernisation, SIEM deployment and optimisation, Amazon Security Lake services, data source mapping, cost optimisation services, and security maturity assessment. We partner closely with Amazon Web Services (AWS) and CrowdStrike to deliver cutting-edge security solutions to our clients.
HOOP Cyber operates as part of the FSP group of companies. References to “HOOP Cyber” throughout these Terms include all relevant associated entities within the FSP group where applicable.
3. Acceptable Use of the Website
You agree to use the Website only for lawful purposes and in a manner that does not infringe the rights of, restrict, or inhibit anyone else’s use and enjoyment of the Website. Prohibited behaviour includes, but is not limited to:
Using the Website in any manner that could disable, overburden, damage, or impair it, or interfere with any other party’s use of the Website.
Transmitting any material that is defamatory, offensive, or otherwise objectionable.
Attempting to gain unauthorised access to any part of the Website, the server on which it is stored, or any server, computer, or database connected to the Website.
Conducting or facilitating any attack on the Website, including a denial-of-service attack or distributed denial-of-service attack.
Using automated tools, bots, scrapers, or other means to access, harvest, or collect information from the Website without our express written consent.
Transmitting any unsolicited or unauthorised advertising or promotional material.
Knowingly transmitting data, sending or uploading any material that contains viruses, Trojans, worms, spyware, adware, or any other harmful programs or code designed to adversely affect the operation of any computer software or hardware.
Any breach of acceptable use may result in the immediate suspension of your access to the Website and may be reported to the relevant law enforcement authorities.
4. Intellectual Property Rights
All content published and made available on this Website, including but not limited to text, graphics, logos, icons, images, datasheets, downloadable resources, and the overall compilation of the Website, is the property of HOOP Cyber Ltd or its content suppliers and is protected by applicable intellectual property laws, including UK copyright law.
You may access and use the content on this Website for your own personal and non-commercial use only, provided that you keep all copyright and proprietary notices intact. You must not copy, reproduce, republish, download, post, broadcast, transmit, or otherwise use the content on the Website for commercial purposes without obtaining a licence from us or our licensors to do so.
The HOOP Cyber name, FSP name, associated logos, and product names including “HOOP Lake” are trade marks of HOOP Cyber Ltd and/or FSP. Nothing on the Website grants any licence or right to use these trade marks.
Third-party trade marks, logos, and service names referenced on the Website, including those of Amazon Web Services (AWS) and CrowdStrike, remain the property of their respective owners and are used for identification purposes only. Nothing in these Terms grants any rights in respect of those third-party marks.
5. Third-Party Links and Resources
The Website may contain links to third-party websites, social media platforms (including LinkedIn), and other external resources. These links are provided for your information and convenience only. We have no control over the content of those websites and accept no responsibility for them, including any loss or damage that may arise from your use of them.
The inclusion of any link does not imply endorsement by HOOP Cyber of the linked site or any association with its operators. You should review the terms and conditions and privacy policies of any third-party websites that you visit.
6. Disclaimer of Warranties
The Website and its content are provided on an “as is” and “as available” basis without any warranties of any kind, whether express or implied. To the fullest extent permitted by law, HOOP Cyber disclaims all representations and warranties, including but not limited to:
That the Website will be uninterrupted, timely, secure, or error-free.
That any information or content on the Website is accurate, complete, reliable, or up to date.
That defects in the Website will be corrected.
That the Website or the servers that make it available are free of viruses or other harmful components.
Nothing in these Terms affects your statutory rights as a consumer under applicable UK law.
7. Limitation of Liability
To the fullest extent permitted by applicable law, HOOP Cyber Ltd and its directors, employees, agents, or partners shall not be liable for any indirect, incidental, special, consequential, or punitive loss or damages arising out of or in connection with your access to, or use of, or inability to access or use, the Website or any content contained therein.
This includes, without limitation, any loss of profits, loss of business, loss of data, loss of goodwill, loss of anticipated savings, or any other economic or consequential loss. This limitation applies regardless of whether such loss was foreseeable or whether we had been advised of its possibility.
Nothing in these Terms excludes or limits HOOP Cyber’s liability for death or personal injury caused by its negligence, fraud or fraudulent misrepresentation, or any other liability that cannot be excluded or limited under English law.
8. Cybersecurity Information and Content
The Website contains information relating to cybersecurity, security operations, data engineering, and related technology topics. This content is provided for general informational and educational purposes only and does not constitute professional security advice, consultancy, or services. The content reflects the views and expertise of HOOP Cyber at the time of publication and may not be current at the time of your access.
Whilst HOOP Cyber endeavours to ensure that all information on the Website is accurate and up to date, we make no representation or warranty regarding its completeness or accuracy. You should not act upon any information on the Website without first seeking appropriate professional advice tailored to your specific circumstances.
Any datasheets, white papers, or other resources made available for download on the Website remain the intellectual property of HOOP Cyber Ltd and are provided solely for your personal, non-commercial reference. You must not redistribute, resell, or sub-licence such materials without prior written consent from HOOP Cyber.
9. Privacy and Data Protection
Your use of the Website is also governed by our Privacy Policy, which is incorporated into these Terms by reference and available at www.hoopcyber.com/privacy-policy. The Privacy Policy sets out how HOOP Cyber collects, uses, and protects any personal data that you provide to us or that we collect through your use of the Website, including through the use of cookies, log files, and analytics tools.
HOOP Cyber is committed to complying with its obligations under the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018. By using the Website, you consent to such processing and you warrant that all data provided by you is accurate.
10. Cookies
The Website uses cookies and similar tracking technologies to improve your browsing experience, analyse usage patterns, and assist with site administration. By continuing to use the Website without adjusting your browser settings, you are agreeing to our use of cookies in accordance with our Privacy Policy.
We use Google Analytics to help us understand how visitors engage with the Website. This service may use cookies and pixels to collect non-personally identifiable information about your use of the Website. You may opt out of Google Analytics tracking at any time by visiting the Google Analytics Opt-Out page and installing the relevant browser add-on. Further information about managing cookies can be found in your browser settings or at www.allaboutcookies.org.
11. User Submissions and Contact Forms
If you submit any information to us via the Website’s contact form, including your name, email address, or enquiry details, you agree that HOOP Cyber may use this information to respond to your enquiry and, where you have consented, to send you relevant communications about our services. Any personal data submitted will be handled in accordance with our Privacy Policy.
You warrant that any information you submit is accurate and not misleading, and that you have the right to submit such information. You must not use the Website’s contact facilities to transmit any commercial or unsolicited communications, or to harass or harm others.
12. Website Availability and Maintenance
Whilst HOOP Cyber endeavours to maintain the availability of the Website, we do not guarantee that it will be available at all times, nor that it will be free from errors, interruptions, or delays. We reserve the right to suspend, withdraw, or restrict access to all or part of the Website for operational, technical, or business reasons, with or without notice.
We may update, amend, or modify the content of the Website at any time without prior notice. We are not obligated to update any information on the Website, and no representation is made that the information provided will remain current.
13. Careers and Recruitment Information
The Website includes a careers section where HOOP Cyber may advertise job opportunities. Any information you submit in connection with a job application will be handled in accordance with our Privacy Policy and applicable data protection legislation. We do not accept unsolicited approaches from recruitment agencies unless we have an established agency agreement in place.
Job descriptions and role requirements are provided for information purposes and may be subject to change. Posting of a vacancy does not constitute a guarantee of employment or an offer of a contract of employment.
14. Governing Law and Jurisdiction
These Terms and any dispute or claim arising out of or in connection with them or their subject matter (including non-contractual disputes or claims) shall be governed by and construed in accordance with the laws of England and Wales.
You irrevocably agree that the courts of England and Wales shall have exclusive jurisdiction to settle any dispute or claim arising out of or in connection with these Terms or their subject matter, save that HOOP Cyber reserves the right to bring proceedings in any jurisdiction where a breach of these Terms has occurred or is occurring.
15. Severability
If any provision of these Terms is found by a court of competent jurisdiction to be invalid, unlawful, or unenforceable, that provision shall be severed from the remainder of these Terms, which shall continue to be valid and enforceable to the fullest extent permitted by law.
16. Entire Agreement
These Terms, together with our Privacy Policy and any other policies published on the Website, constitute the entire agreement between you and HOOP Cyber in relation to your use of the Website. They supersede all previous agreements, representations, and understandings between us.
No failure or delay by HOOP Cyber in exercising any right or remedy provided under these Terms or by law shall constitute a waiver of that right or remedy or prevent or restrict its further exercise.
17. Contact Information
If you have any questions, concerns, or requests relating to these Terms and Conditions, please contact us using the details below:
We’re thrilled to announce that HOOP Cyber has joined FSP Consulting Services Group, an award-winning enterprise transformation and cyber security consultancy. This marks an exciting evolution for our team, our technology, and most importantly, the value we deliver to our clients.
Why This Matters
Since launching HOOP Cyber, we’ve been on a mission to solve the security data challenge. We’ve helped organisations modernise their security operations through Amazon Security Lake, transform unwieldy SIEM costs into efficient data lake architectures, and build automated data pipelines that give security analysts time to think rather than just time to react.
Joining FSP accelerates everything we’ve been building. FSP’s comprehensive capabilities across cyber security, cloud, data, and AI complement our specialist security data operations expertise. Their award-winning culture and numerous #1 and top rankings by Best Companies™ and Great Place to Work™ align perfectly with our commitment to building sustainable, people-centred security operations.
What This Means for Our Clients
If you’re a HOOP Cyber client, you’ll continue to work with the team you know whilst gaining access to FSP’s broader portfolio of services. Need help with broader cyber strategy alongside your security data lake implementation? FSP’s Virtual CISO and Cyber Strategy teams can help. Looking to integrate your security data operations with wider cloud transformation initiatives? FSP’s cloud and platform engineering expertise is now available to you. Want managed services to support your security operations long-term? FSP’s comprehensive managed services portfolio has you covered.
Your HOOP Cyber projects continue uninterrupted, with the same expertise and dedication you’ve experienced, now backed by FSP’s extensive resources and capabilities.
What This Means for Our Partners
Our strategic partnerships with AWS, Query, Tenzir, Cyware, Tines, DataBee, and Silent Push remain central to how we deliver value. These partnerships become even stronger within FSP’s broader technology ecosystem, creating new opportunities for integrated solutions that address the full spectrum of security and digital transformation challenges.
What This Means for Our Team
For the HOOP Cyber team, joining FSP means joining an organisation that genuinely lives its values of “people first, purpose led, performance driven”. FSP’s investment in professional development through the FSP Academy, their commitment to work-life balance, and their recognition as a world-class workplace create an environment where our team can continue to grow, innovate, and deliver exceptional outcomes for clients.
Looking Forward
The security landscape continues to evolve rapidly. Data volumes are exploding, threats are becoming more sophisticated, and organisations need security operations that are both effective and sustainable. By combining HOOP Cyber’s specialist security data operations expertise with FSP’s comprehensive capabilities, we’re better positioned than ever to help organisations navigate these challenges.
We remain committed to the principles that have guided HOOP Cyber from the start: solving real problems with pragmatic solutions, respecting the humans who operate security systems, and building architectures that are sustainable, cost-effective, and genuinely improve security outcomes.
Thank you to our clients, partners, and supporters who have been part of the HOOP Cyber journey so far. We’re excited about this next chapter and the enhanced value we can deliver together with FSP.
For any questions about what this means for your projects or partnerships, please reach out to us at the usual contact details. We’re here, we’re excited, and we’re ready to do even better work together.