
Why you should adopt automation in your SOC

Paul-Arthur Jonville

The Security Operations Center (SOC) is indispensable to today’s organizations. Every day, it weathers a barrage of attacks, raises alerts when anomalies are detected, and coordinates the response to these events to keep the company safe and running.

This situation would be manageable if only one or two incidents were happening at a time. That isn’t the case: the average enterprise faces thousands of attacks per day. Because SOCs often lack automation, there’s no way around human intervention in time-critical tasks like triage and incident investigation; someone has to work through each event manually.

As a result, SOC analysts spend more of each day on these often repetitive tasks, which leads to exhaustion in their ranks. On top of this, attackers keep growing more sophisticated, using innovation to strengthen and multiply their attacks. SOCs, already under considerable stress, will have to deal with an ever-increasing number of events in the future, potentially even harder to detect and combat.

We think that good automation is the best way to address these issues. By using automation (malware analysis, data enrichment, incident response), SOCs could cover a broader attack surface and reduce repetitive activities. However, people often raise the potential hazards of automation: overconfidence in Artificial Intelligence (AI) and Machine Learning (ML) capabilities, the fear that it will replace humans, poor implementation, and the question of how organizations keep growing talent.

Our point is that not all automation is good automation. Here, we promote appropriate automation: targeted and well planned, delivered through a Security Orchestration, Automation, and Response (SOAR) platform. It’s not about pushing humans out of SOCs but empowering them.

Automation wrongly inspires anxiety because of misconceptions.

At first glance, the advent of automation in societies creates anxiety.

The OECD suggests that automation and AI could eradicate or substantially transform nearly half of all existing jobs within the next two decades.

Fear about job security is widespread across every sector of the economy. For example, according to studies conducted by the ILOS Institute across Europe, 56% of Europeans surveyed think that automation will bring more significant changes to the world of work in the foreseeable future than any other technology that triggered the previous revolutions.

In short, mentioning the word automation creates anxiety. People often think about how automation would impact their jobs instead of how it could improve their roles.

Indeed, there’s anxiety about the coming AI revolution. Will it be like the former technology revolutions, creating millions of jobs, elevating the general wealth produced by humanity, and lifting millions of people out of poverty? Or will it bring job scarcity and an extreme divide between high-end jobs and low wages? How large will the creative destruction be? Will machines spread unchecked into cognitive fields that were untouched until now?

Just as digital word processing ended typists’ jobs, people fear that endless automation will do the same to theirs.

In summary, the coming revolution is inspiring anxiety across every sector of the economy, including some higher-end jobs that are sometimes more prone to automation than low-end ones. Cybersecurity is one of them.

Although automation could bring a much-hoped-for relief for cyber professionals, questions are arising.

That is why, according to some, automation in cybersecurity could carry more risks than benefits.

While AI-enabled tools and machine learning have a lot of potential, they come with significant risks. Cybercriminals and other threat actors may use the same methods to boost their attacks, as automation is already used to make DDoS or phishing campaigns spread faster, or may subvert the automated systems that businesses employ.

Because these technologies aren’t yet mature or well understood by most IT departments, there’s room for misconfiguration and for disruptions caused by overlapping tools.

Integrating and managing automated systems may also add costs in the short term, even if they help reduce costs in the long run. Inappropriate expectations and complacency can lead to catastrophe. Even though security automation handles tasks independently, it still has to be instructed and guided by human intervention.

Human control must be adjusted to how the organization wants the automation procedure handled. And since the degree of automation can change, analysts will have to exercise caution in decision-making processes even when they are automated.

Ultimately, many organizations still prefer to respond manually to each arising threat, trying to adapt to each one. They pile up tools and manual processes, short-sightedly failing to see the burden hardening with each process laboriously implemented.

Our answer is that this posture misunderstands the purpose of automation as we envision it.

What good automation is and why you need to implement it.

Selective automation is the best help analysts could receive.

One common misconception concerns the nature and scope of the automation in question. Today, we know that automation cannot and should not be used for every task, and we are not promoting such a posture. To us, the point is to remove some duties from humans, not all of them, because some are better performed by the human brain.

Cognitive skills remain largely resistant to automation; only repetitive, basic tasks should be targeted. AI/ML is still far from being able to replace humans, whose cognitive qualities are challenging to replicate. Communication, creativity, and critical thinking are the distinctive qualities that enable humans to grow and prosper.

These qualities are especially precious in cybersecurity. Although analysts face attacks from a growing variety of tools, they ultimately face real humans with the same qualities and flaws. That is why human-like intelligence is needed: to decipher patterns and signatures and predict attackers’ next moves.

By and large, humans should take AI/ML for what they are: performance enhancers. Humans can analyze data within a broad context and think critically, but they aren’t perfect. Given the enormous amount of data analysts must ingest, errors inevitably occur. Automating and streamlining detection and response helps reduce those risks while relieving teams of their daily burden.

On top of that, security professionals can and should apply their expertise instead of spending time on labor-intensive manual tasks. These tasks are not only repetitive but also unrewarding, which adds to the burden on tier-1 analysts and increases potential turnover. Some tasks are, of course, better done by machines, and humans should be rid of them. It’s about choosing which tasks fall into which bucket: AI/ML or human.
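To make the split concrete, here is a minimal, hypothetical sketch (not any real SOAR product’s API) of a task that belongs firmly in the machine bucket: hashing an email attachment and checking it against a blocklist of known-bad digests, escalating only the unknowns to a human. The blocklist here holds the published SHA-256 of the EICAR test file as a stand-in for real threat intelligence.

```python
import hashlib

# Illustrative blocklist of known-bad SHA-256 digests.
# This entry is the well-known hash of the EICAR anti-malware test file.
KNOWN_BAD_HASHES = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def triage_attachment(payload: bytes) -> str:
    """Return a triage verdict for an attachment: the kind of repetitive
    lookup a machine performs in milliseconds, leaving the analyst free
    for the judgment calls that follow."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "quarantine"       # known-bad: safe to auto-remediate
    return "escalate-to-analyst"  # unknown: human judgment takes over
```

The machine handles the lookup; the human handles everything the lookup cannot decide.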

Automation and simplistic detection configurations are a bad combination. Attackers are becoming more and more creative. They adapt to every evolution and, more often than not, stay ahead. They craft phishing messages sophisticated enough to pass through default email filters, and they are increasingly able to create malware invisible to traditional security scanners. Tools configured with basic detection rules cannot keep pace with modern, rapidly evolving threats.

Barriers to automation aren’t insuperable; appropriate tools can help overcome them.

In most cases, security tools have to be configured to align with the particular environment in which they are deployed. Writing the detection and response rules that drive security automation requires engineering resources that are in short supply.

From there, businesses have two options: hire more engineers or adopt security automation tools that lower the barrier of who can configure them. The former is not ideal, since it consumes a lot of resources.

The latter, by contrast, means relying on an agnostic solution to manage the tool stack so teams can avoid switching between platforms, using different consoles, or transferring data from one tool to another. Once the initial configuration is done, teams can begin automating their tasks without custom code. The skill barrier is thus lowered.

But even if the approach is no-code at first sight, teams need to be able to grow talent. Every analyst, junior or senior, should be able to take ownership of and understand what’s behind every workflow. This increases transparency and trust in automation while allowing continuous learning on the go.

However, one common pitfall is partial automation: using tools to automate alerting but relying on a manual approach to remediation. Breaches spread quickly, and needing hours to contain one invalidates all the benefits of automated detection. Automated response is just as crucial as automated alerts. SOARs are the most appropriate tools to alleviate this burden on security teams.

Combining the benefits of an agnostic solution with those of a SOAR solution yields the capacity to create processes that cover security from threat detection all the way to remediation.
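What such an end-to-end flow looks like can be sketched as a playbook that chains enrichment and remediation behind a single detection event. This is a hedged illustration: every name, function, and value below is hypothetical, not a real SOAR vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source_ip: str
    kind: str
    notes: list = field(default_factory=list)

def enrich(alert: Alert) -> Alert:
    # Hypothetical enrichment step: annotate the alert with
    # threat-intel context before any decision is made.
    alert.notes.append(f"reputation lookup for {alert.source_ip}")
    return alert

def remediate(alert: Alert) -> str:
    # Hypothetical automated response: block the source and open a ticket,
    # instead of waiting hours for a manual intervention.
    return f"blocked {alert.source_ip}; ticket opened for {alert.kind}"

def playbook(alert: Alert) -> str:
    """Detection-to-remediation in one automated flow; the analyst
    reviews the outcome rather than performing each step by hand."""
    return remediate(enrich(alert))
```

The value is in the chaining: the same event that triggered the alert drives the enrichment and the response, with no console-hopping in between.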

As such, good automation is a force enhancer, not a force replacer.

Conclusion

Although threats are rapidly growing and evolving, some organizations still don’t take the path to automation. Anxiety about job security, fears of flawed implementation, and a lack of adequate human resources to operate automation solutions make teams reluctant to automate even their most basic tasks.

But choosing the right tool to automate can and must be encouraged. Good automation exists; we believe in it. The goal is not automation per se but automation that enhances. Good automation shouldn’t create anxiety. On the contrary, it should alleviate analysts’ burden and empower them. Transparency and trust should be the main focus of automation: knowing what’s behind it and how it works is the best guarantee of adoption. Humans need to be empowered, not replaced.

Start automating today

Sign up for Mindflow to get started with enterprise hyperautomation.

