How to Actually Improve Your Incident Response (In the Age of AI)

Tools change, vendors pivot, and the ‘AI-powered’ flavor of the month will eventually be replaced by the next big thing, but the logic of a breach remains the same. And the fundamentals of a world-class investigator? Those are timeless.

However, fundamentals in this field are broader and deeper than the word suggests. To be effective, you have to master a massive surface area: enterprise environments, telemetry, attacker behavior, tooling and engineering, and the analysis and forensic tradecraft required to tie it all together under pressure. If you understand what normal looks like, and you’ve seen what malicious looks like, you’ll be able to investigate with confidence and without wasted time. That level of confidence comes only from experience, not from a dashboard.

Upskilling in this new era isn’t just about learning a new interface or becoming a “prompt engineer.” It’s about reclaiming the hours AI saves you to sharpen your high-level intuition, adversarial mindset, and strategic decision-making.

To actually improve your IR skills today, you have to look past the AI-generated summaries and understand the underlying mechanics of an attack. Here is how to evolve your craft.

Key Takeaways

  • Strong incident response still depends on analyst fundamentals, not just better tools.
  • You need visibility before you need detections: telemetry gaps break investigations.
  • Tool fluency helps, but investigation skill comes from judgment, exposure, and tradecraft.
  • The fastest way to improve is to get closer to realistic environments, attacks, and evidence.
  • AI raises the ceiling for capable analysts, but it does not replace foundational skill.

So how do we get there?

Understand the Environment You’re Defending

The first real complexity for an analyst is the enterprise environment itself. Not everyone comes in with an IT admin background, or has spent time with Active Directory, domain controllers, service accounts, group policies, and the rest of it. But that’s exactly the terrain an attacker is operating on. Everything revolves around accounts, privileges, and the ability to access resources or execute processes to move toward an objective.

If you don’t know what normal looks like in your environment, detecting malicious activity becomes guesswork. And the opposite problem is just as bad: if everything looks malicious, you’ll drown in noise.
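One way to make “what normal looks like” operational is a simple frequency baseline: count which (user, process) combinations your telemetry has seen before, and flag the ones it hasn’t. A minimal Python sketch – the field names and example accounts are illustrative, not from any particular product:

```python
from collections import Counter

def build_baseline(events):
    # Count how often each (user, process) pair shows up in historical telemetry.
    return Counter((e["user"], e["process"]) for e in events)

def flag_first_seen(baseline, new_events, min_seen=5):
    # Flag events whose (user, process) pair is rare or unseen in the baseline.
    return [e for e in new_events
            if baseline[(e["user"], e["process"])] < min_seen]

# Hypothetical history: a backup service account that always runs robocopy.
history = [{"user": "svc_backup", "process": "robocopy.exe"}] * 20
baseline = build_baseline(history)

suspicious = flag_first_seen(baseline, [
    {"user": "svc_backup", "process": "robocopy.exe"},    # seen 20x: normal
    {"user": "svc_backup", "process": "powershell.exe"},  # never seen: flagged
])
print(suspicious)
```

A real baseline needs far more dimensions (parent process, time of day, host role), but the principle scales: anomaly detection starts with knowing what the environment usually does.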

You Can’t Detect What You Can’t See

Many times we can’t even determine what happened and how, because environments simply don’t generate the telemetry needed to answer the question. Network, endpoint, cloud, identity – there’s a lot of ground to cover, and the default settings rarely cut it.

This is where an experienced analyst or engineer earns their pay before anything has even happened. You need to know what knobs to turn to enhance visibility, create detections, and make malicious activity visible.

A couple of examples that pay for themselves the first time you need them:

  • Process creation auditing with full command lines (Windows Event ID 4688 plus command-line logging)
  • PowerShell Script Block Logging and module logging
  • Sysmon with a well-maintained configuration covering process, network, and registry events
  • Unified audit logging in your cloud and identity platforms

IT teams often push back on this because it generates more data and more cost. Fair. But every experienced analyst will tell you the same thing: without it, you’re typically unable to answer basic questions about what happened – especially if the time frame in question extends beyond the typical two-week retention period of EDR telemetry.

From there it’s about ingesting that data into the tooling of your choice – open source or commercial, it doesn’t really matter. Each has trade-offs, but in an ideal world it shouldn’t matter to the analyst. You know what to look for; you figure out the how in whatever tool is in front of you. Don’t let the tool guide you; you guide the tool. And when you hit a tool’s gaps, that’s not failure – it’s a sign you’re competent enough to trust your process (and file an RFE with the vendor…).

Think Like the Attacker

Understanding the environment is the foundation. The next layer is understanding what attackers actually do in it.

I’m a huge proponent of thinking like the attacker – meaning every defender should understand what it takes to execute an attack, use a specific tool, and walk through the same decisions an operator has to make. Only then can you anticipate where to look and what to look for.

[Image: ransomware lifecycle – “Understanding the adversary: How ransomware attacks happen,” IBM Security]

The patterns repeat themselves once you’ve seen them a few times. Attackers abuse LOLBINs (living-off-the-land binaries) to blend in and avoid introducing malware. They run discovery commands to map the environment. They go after credentials – NTLM hashes, session tokens, Kerberos tickets – and use what they get for privilege escalation and lateral movement. They drop persistence (you’ll literally see the same three types in almost every case – run keys, scheduled tasks, startup scripts), hide files, spawn processes, and eventually try to stage and exfiltrate data.

None of that is exotic. But you have to have seen it to recognize it quickly.

Tool Fluency Is Not Investigation Skill

This is the hardest lesson to accept early in your career, because tool fluency feels like progress. You learn Sentinel’s KQL quirks, you memorize Cybereason’s process tree view, you get faster pivoting between them. That’s real skill, and it matters. But there is more to it, and that again requires knowledge and critical thinking: don’t trust your tools blindly.

I once saw a case where Microsoft Defender fired a generic ransomware alert – no context, vague details, nothing to pivot on. The analyst had to peel the onion layer by layer, and what it came down to was a handful of failed TCP handshakes to an IP that had been reported as malicious. Nothing to do with ransomware. Just probing – and there is always probing going on, which should never trigger a ransomware incident.

The skill wasn’t in the tool. The skill was knowing what you’re actually looking for, and being flexible enough about how you look that the tool becomes almost incidental. Use different tools on purpose. You’ll understand which ones are the right ones for the job.

The shortcut to that kind of intuition is exposure. If you’ve seen an attack – better yet, if you’ve run one yourself in a lab – you know what it leaves behind, and you have a real sense of how to detect and analyze it. It also helps that frameworks such as Sigma and Yara are generic detection languages that you should be able to apply virtually anywhere these days for logs or binary code detection. Add the ability to pull malicious code off the wire or extract what matters out of a piece of malware or shellcode, and you’ve got a tool belt that travels with you regardless of which SIEM or EDR is in front of you.
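To illustrate why a generic detection language travels so well, here’s a toy Sigma-style matcher in Python. Real Sigma rules are YAML with richer modifiers and condition logic; this sketch reduces a rule to a field-to-substring selection, using made-up event values in the shape of a Sysmon registry event:

```python
# A minimal Sigma-style matcher: a "rule" is a dict of field -> required substring.
# Real Sigma supports modifiers (contains, endswith, ...) and boolean conditions.
def matches(selection, event):
    return all(needle.lower() in str(event.get(field, "")).lower()
               for field, needle in selection.items())

# Toy rule in the spirit of a run-key persistence detection.
run_key_rule = {
    "EventID": "13",
    "TargetObject": r"\CurrentVersion\Run",
}

event = {
    "EventID": 13,
    "TargetObject": r"HKU\S-1-5-21\Software\Microsoft\Windows\CurrentVersion\Run\upd",
    "Details": r"C:\Users\Public\upd.exe",
}
print(matches(run_key_rule, event))  # True
```

The point isn’t this matcher – it’s that the *selection* is portable. The same field/value logic compiles to KQL, SPL, or an ELK query, so your detection knowledge survives a tooling change.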

Tool fluency is not investigation skill. AI can accelerate analysis, but it cannot replace analyst judgment built through real exposure.


Get as Close to the Scene as You Can

So with all that said: the way to actually improve your incident response is to get as close to the scene as possible – a (simulated) corporate environment where you do attacker things and deploy the tools and visibility you need to collect, process, and analyze the evidence.

You can build a surprising amount of this yourself. Spin up a few Windows hosts, stand up a domain, install and configure Sysmon, ship logs into Splunk or ELK, then add a Kali VM and run an attack with a simple C2 framework – PowerShell Empire, Sliver, take your pick. You’ll learn more in a weekend of doing that than in years of clicking through the same alerts in a SOC.

Tip: Here is a free Build Your Lab tutorial that takes you from bare bones to a SOC-grade attack-and-defense lab, including:

  • 1 domain controller
  • 2 Windows endpoints
  • Sysmon with a known config
  • Windows event forwarding or agent-based log shipping
  • Splunk/ELK & Velociraptor
  • C2 Attack: PowerShell, scheduled task persistence, lateral movement, exfil staging
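Once Sysmon logs start flowing in a lab like this, you’ll spend a lot of time pulling fields out of Event ID 1 (process creation) records. A small parser sketch for the XML form of those events – the sample record is trimmed down, and real events carry many more fields:

```python
import xml.etree.ElementTree as ET

# Windows event XML lives in this namespace; findall needs it spelled out.
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def parse_process_create(xml_text):
    # Pull the fields an analyst pivots on out of a process-creation record.
    root = ET.fromstring(xml_text)
    data = {d.get("Name"): d.text
            for d in root.findall(".//e:EventData/e:Data", NS)}
    return {k: data.get(k) for k in ("Image", "CommandLine", "ParentImage", "User")}

# Trimmed sample record: Word spawning cmd.exe, a classic maldoc tell.
sample = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <EventData>
    <Data Name="Image">C:\\Windows\\System32\\cmd.exe</Data>
    <Data Name="CommandLine">cmd.exe /c whoami</Data>
    <Data Name="ParentImage">C:\\Program Files\\Microsoft Office\\WINWORD.EXE</Data>
    <Data Name="User">CORP\\jdoe</Data>
  </EventData>
</Event>"""

print(parse_process_create(sample))
```

Writing a parser like this once teaches you more about what Sysmon actually records than any dashboard will.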

Where Experience Actually Shows Up

Most cases I’ve seen go sideways went sideways before the investigation even started.

The friction point is almost always the same: the intersection between IT admins and analysts, where data has to come off systems and environments in a way nobody on either side has practiced. A remote host in a branch office. A Microsoft 365 tenant nobody has pulled unified audit logs from before. A Linux server the admin hasn’t touched in two years. A laptop on the other side of the world that the user is still working on.

Knowing which tools to reach for in each of those situations is its own skill. So is scoping the collection. What do you actually need? Sometimes it’s a full disk image. Sometimes it’s a lightweight triage collection, logs, or a memory capture. Get it wrong in one direction and you’re staring at terabytes you can’t move, can’t process, and can’t finish triaging before the client wants an update. Get it wrong in the other direction and you’ll have gaps in your analysis and basic questions you can’t answer.

And then there’s the part nobody talks about: the transfer and processing step during data collection, where large datasets quietly fail, time out, or corrupt themselves halfway through. That eats hours you don’t have.
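One cheap defense against silent corruption in transit is hashing everything before it moves and re-verifying on arrival. A minimal manifest sketch in Python – the file name is illustrative, and the “corruption” is simulated by overwriting the file:

```python
import hashlib
import tempfile
from pathlib import Path

def make_manifest(paths):
    # Hash each evidence file so corruption in transit is provable, not a hunch.
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def verify_manifest(manifest):
    # Re-hash after transfer; return the files that no longer match.
    return [p for p, digest in manifest.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != digest]

# Demo with a temp file standing in for a collected triage package.
tmp = Path(tempfile.mkdtemp())
(tmp / "triage.zip").write_bytes(b"evidence bytes")
manifest = make_manifest([tmp / "triage.zip"])

(tmp / "triage.zip").write_bytes(b"evidence byteX")  # simulate corruption
print(verify_manifest(manifest))  # the corrupted file is reported
```

The same manifest doubles as chain-of-custody documentation: who collected what, and proof it arrived unchanged.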

In a mature org, there is a collection plan and a processing pipeline already sorted out, because they’ve done it before – or at least in environments that looked roughly like this one. In an immature org, you figure it all out live, on the call, while the clock runs.

Then you get to the investigation itself. Artifact processing. Correlation across sources. Knowing where to look next. Knowing when to stop pulling a thread because it’s not going anywhere. That part is genuinely hard to teach in a classroom. It comes from working cases and realistic investigation scenarios.

How AI is Changing the Incident Response Lifecycle

So how does any of this hold up in the age of AI?

Pretty well, actually – because all of the above is the foundation everything else rests on. If you haven’t built that foundation through real incidents or lab simulations, how are you going to evaluate what an AI agent is proposing, or sanity-check what your agentic SOC stack is doing? A lot of the rote work will be handled for you. But you’ll still have to think outside the box, fill the gaps in your stack, decide what’s worth investigating versus ignoring – and do it at a faster cadence than today.

As one example, look at the proposed future SOC architecture outlined in Revolutionizing Security Operations: The Path Toward AI-Augmented SOCs.

[Image: Future SOC Architecture]

Whether you see that as promising or concerning, the direction is clear: security operations are moving toward environments that are faster, more automated, and more complex than what many teams deal with today. That does not reduce the need for analyst skill. It raises the bar for it.

In that kind of SOC, foundational capability matters even more. Analysts still need to understand the environment, know what telemetry matters, recognize attacker behavior, and apply sound judgment when the tooling is incomplete, wrong, or moving faster than they can fully verify in the moment.

Yes, there are companies building AI that can perform at an L3 analyst level with high confidence. That is real, and it is happening now.

The human doesn’t disappear. We adapt. AI handling more of the rote work raises the floor on what a human responder is capable of delivering. Embrace it, adapt, and make sure you’re bringing it.

Latest proof as of this writing (April 2026):

Increase people and capacity
Plan for repurposing existing staff (within the security org, but also — and especially — within engineering teams) and/or onboarding additional headcount and contractor capacity to handle the anticipated increases in triage, remediation, and incidents, while protecting experienced staff from burnout, especially as the first wave of Glasswing patches hits.

The “AI Vulnerability Storm” — Building a “Mythos-ready” Security Program

What to Do Next

Take action. Build and test this in a lab. Run LLMs and agentic workflows against realistic activity and figure out where they actually help, where they improve your analysis, and where they break your investigation. Every hour spent hands-on is worth more than just reading about it.

If you have the time and resources, build that environment yourself. If not, the answer is still the same: train on realistic investigations that let you work with the telemetry, artifacts, and decisions that matter. Every team should have the chance to train in realistic environments, because you do not want the first real attack to be where that learning begins.

