Column

Reinventing and scaling the SOC with AI: helping humans, not replacing them

When it comes to cybersecurity, there are no rules.

You can’t write rules that will differentiate good guys from bad guys on the Internet. That’s because the bad guys keep changing tactics, learning from their mistakes, and getting smart. You can’t write rules that will filter out all the malicious or phishing emails. You can’t write rules that will filter out malware in email attachments, or block fake websites, or say, “This is a safe packet payload, and this is a dangerous packet payload.”

Well, a minor correction: You can write rules, but they generate too many false positives and too many false negatives to work well enough to replace security operations center (SOC) analysts, particularly the Tier 1 analysts who perform triage. Fortunately, the goal isn’t to replace SOC analysts, but to help them be better, faster, and more effective at their jobs.

Think about bomb-detection dogs. Their job isn’t to replace human explosives experts. Instead, dogs are used to augment humans – to do a job faster, at less cost, and with greater accuracy than humans could do alone.

We don’t have malware-detecting dogs (yet), but artificial intelligence techniques, such as machine learning, can learn to protect users, organizations, devices, applications and networks. Applied AI can work better than any rule-based system in the SOC.

Before exploring how AI can solve the SOC problem, let’s review some of the trickier aspects of the challenge, chief among them the SOC’s inability to scale.

The SOC Problem is a People Problem

Want two words to define the SOC problem? Economic inefficiency. That’s according to Malcolm Harkins, Chief Security and Trust Officer at Cylance.

“Over the past couple of decades of security operations, we’ve produced a growing need for labor. Thus, we have created the labor shortage because the existing controls have been insufficient and ineffective,” Harkins says. “In the SOC that’s resulted in a level of alert fatigue and what I call decision maker dementia for the executives who are pulled in too many different directions with competing priorities.”

Alert fatigue? Decision maker dementia? Yes, says Harkins: “SOC executives can’t figure out how to scale. We’ve not focused on the economic inefficiencies we have created with the approach we have taken to security. With the SOC revolution done the right way, we can gain the economic efficiencies back.”

A related problem is scaling the SOC horizontally by adding more operations centers around the world to gain 24-hour follow-the-sun coverage, says Rishi Bhargava, Co-Founder of Demisto. “Distributed SOCs create problems because major incidents can’t be handled by one person, because the investigation can’t continue when that analyst goes home. So, how do you do the hand-off? How does the collaboration happen?”

A third issue is skill sets: keeping analysts trained and retained despite the alert fatigue. Greg Kiker, Global Senior Cybersecurity Consultant for Atos SE, explains, “You’ve got Tier 1 analysts, who you need the most of a lot of the time in a SOC, going through alerts. However, Tier 1 analysts don’t want to stay Tier 1 analysts for long. They’re in shortage; they want a lot of money for what they do, and the turnover rate and training demands are huge.”

Greg Martin, CEO and Co-Founder of JASK, agrees: “This is exactly the problem with security operations! We’re built on a flawed model. The Tier 1, Tier 2, Tier 3 security analyst model is outdated and needs to change. It doesn’t keep up with the current state of the threats we are facing.”

“We have technology with which we can automate handling alerts and doing triage,” Martin continues. “We no longer need level 1 SOC analysts. We need to give this work to machines and put our humans into higher-level roles within the SOC.”

Enter the Bomb-Sniffing Dogs

It won’t be with a bark, but rather with a beep: Artificial intelligence, particularly machine learning, Big Data analytics, and behavioral analytics, can and should sniff out anomalies. Not based on rules, but on patterns: Something looks wrong. This application is acting like it’s been hacked. This user isn’t behaving in the normal way. This wireline device has changed IP address, and that shouldn’t happen, so maybe it’s being spoofed. This server might have a bot on it.
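To make the rules-versus-patterns distinction concrete, here is a minimal sketch of unsupervised anomaly detection. It trains a model on historical “normal” behavior and flags outliers, rather than matching hand-written rules. The feature set, thresholds, and use of scikit-learn’s IsolationForest are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch: pattern-based (not rule-based) anomaly detection.
# Features and data are hypothetical; IsolationForest stands in for whatever
# behavioral-analytics model a real SOC platform might use.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user, per-hour features:
# [logins, megabytes_out, distinct_dest_ips, failed_auths]
baseline = np.array([
    [3, 12.0, 4, 0],
    [5, 20.5, 6, 1],
    [2,  8.3, 3, 0],
    [4, 15.1, 5, 0],
    # ...weeks of "normal" history would go here
])

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline)  # learn what "normal" looks like; no rules are written

new_activity = np.array([[40, 900.0, 55, 12]])   # sudden spike in volume and destinations
score = detector.decision_function(new_activity)  # lower score = more anomalous
verdict = detector.predict(new_activity)          # -1 = anomaly, 1 = normal

if verdict[0] == -1:
    print(f"Escalate to an analyst: anomaly score {score[0]:.3f}")
```

The point of the sketch is the workflow, not the model: the machine surfaces “something looks wrong” candidates, and the human analyst makes the judgment call.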

How does one add AI, and AI-based automation, to a security operations system? You start by understanding and reducing the problem, explains Cylance’s Harkins.

“First and foremost, automation has to start with a complete look at your control architecture. Unless you have the right set of preventative controls you’re going to have more response and reaction. So, you’ve got to do that because that will then lower the number of things that you need to deal with, in other words, reduce the number of alerts.”

Harkins continues, “Even though you’ve reduced the potential for harm, you must recognize that you can’t eliminate all risk. You’ve simply cleared the clutter. Now you can start figuring out how to instrument your network, and then expand that instrumentation for additional coverage where your prevention capabilities don’t fully take a foothold.”

Atos’ Kiker stresses the need to have openness everywhere. “When we choose AI and other technologies they have to be open now. They have to provide APIs. They have to work together. This is a huge baseline that’s needed to start in automation. Not every technology is open.”

“I think security is getting to openness,” agrees Demisto’s Bhargava, but he worries about interoperability and orchestration between automation and AI tools, and about ensuring they can work from a single policy playbook. “If you build a playbook, don’t lock the playbook within your silo. In a lot of environments where automation is being deployed, these playbooks are written in Python. Bad idea. Can you take a Python script from one tool and run it in another? No. Do it in exchangeable formats. For the industry, but especially for enterprises, openness should be part of the criteria — otherwise they are locked in.”
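A minimal sketch of what “exchangeable formats” might mean in practice: the playbook is declarative data (serializable as JSON or YAML) that any orchestration engine could interpret, rather than a Python script that only runs inside one vendor’s tool. The step names and structure below are hypothetical, not a real standard.

```python
# Hypothetical declarative playbook: the logic lives in portable data,
# and each orchestration tool supplies its own executor for the action names.
import json

phishing_playbook = {
    "name": "suspected_phishing_triage",
    "steps": [
        {"action": "extract_indicators", "input": "email"},
        {"action": "reputation_lookup",  "input": "indicators"},
        {"action": "quarantine_message", "when": "reputation == 'malicious'"},
        {"action": "notify_analyst",     "when": "reputation == 'unknown'"},
    ],
}

# The same document can be exported, shared, and loaded by another engine --
# unlike a tool-specific script, it carries no execution logic of its own.
print(json.dumps(phishing_playbook, indent=2))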

The Elephant in the Room

“AI is real. It’s here. It’s powerful technology. So let’s address the elephant in the room, which is AI,” said Cylance’s Harkins. “AI’s not going to solve world hunger today but when applied to tasks like identifying malware or automating tedious SOC jobs, it’s a very powerful tool.”

Don’t be left out, Harkins warns: “Companies that are not investing in AI are simply going to be left behind. Where we see AI really accelerating and providing value in the early days – because it is very much the early days of us using AI in cybersecurity – is around human augmentation.”

The right applications of AI won’t replace humans, Harkins says, but make the humans significantly more efficient and effective. “When we use AI like that, maybe the humans will be doing new roles in the SOC that they weren’t doing before – because they didn’t have time before. That is a really good thing. That is not a threatening thing. That is progress and that is what we need today.”

Don’t underestimate the power of human augmentation, observed JASK’s Chief Technology Officer, J.J. Guy, citing Kasparov’s Law:

Weak human + machine + better process is superior to strong human + machine + inferior process.

“It took 20 years for IBM to develop Deep Blue’s chess-playing ability to where it could beat Garry Kasparov,” recalls Guy. “However, what emerged is that a mediocre chess player paired with Deep Blue was able to beat Garry Kasparov long before Deep Blue could do that by itself. A human using intuition and judgment, aided by the artificial intelligence, blew Garry Kasparov out of the water.”

The goal of AI in cybersecurity isn’t to beat humans, but to help them, Bhargava adds. “You’re not trying to use AI to beat the smartest analyst in your SOC. That’s not the goal. But can you use the AI help and the automation help to get that baseline a little bit above so that the baseline work is done by these tools? Yes. You escalate the analyst and then you take that baseline above in the next year and then you move it up. So, this is how you make progress.”

Not Going to the Dogs

Bomb-detection dogs make the humans much more efficient at finding explosives because dogs have brains that can be trained, and possess olfactory sensors that people lack. Sure, dogs occasionally get distracted (squirrel!), and their canine brains don’t work the same way as human brains. That’s okay. Dogs don’t need to be like humans. They need to be good at being bomb-sniffing dogs – and therefore, augment human soldiers or security agents to be much more effective at detecting threats.

That’s the best use of AI to help scale the SOC. After all, dogs don’t know rules. They just know that something smells like something they’ve been trained to detect. Bark!
