Oct 4, 2021
Paul-Arthur Jonville
The Security Operations Center (SOC) is indispensable to any organization today. It manages a barrage of attacks daily, issues alerts when abnormalities are detected, and coordinates responses to these events to keep the company safe and running.
This situation would be manageable if only one or two incidents occurred at a time. This isn't the case. The average enterprise faces thousands of attacks per day. Because SOCs often lack automation, there's no way around human intervention in time-critical tasks like triage and incident investigation; someone has to work through each event manually.
As a result, SOC analysts spend more of each day managing these repetitive tasks, leading to exhaustion in their ranks. On top of this, cyber attackers are sophisticated, innovating to strengthen and multiply their attacks. SOCs, already under considerable stress, will have to deal with an ever-increasing number of events, potentially even more challenging to detect and combat.
We think good automation is the best way to address these issues. By automating tasks such as malware analysis, data enrichment, and incident response, SOCs could cover a broader attack surface and reduce repetitive activities. However, people often raise the potential hazards of automation: overconfidence in artificial intelligence (AI) and machine learning (ML) capabilities, the fear that it could replace humans, poor implementation, and the question of how organizations keep growing talent.
Not all automation is good automation. Here, we promote appropriate, targeted, and well-planned automation through security orchestration, automation, and response (SOAR). It's not about removing humans from SOCs but about empowering them.
Automation wrongly inspires anxiety because of misconceptions.
At first glance, the advent of automation in society creates anxiety.
The OECD suggests that automation and AI will eradicate or substantially transform nearly half of all jobs in the next two decades.
Fear about job sustainability is widespread across all sectors of the economy. For example, according to studies conducted by the ILOS Institute across Europe, 56% of Europeans surveyed think that automation will bring greater changes to the world of work in the foreseeable future than any of the technologies that triggered previous revolutions.
In short, mentioning the word automation creates anxiety. People often think about how automation would impact their jobs instead of how it could improve their roles.
Indeed, there's anxiety about the coming AI revolution. Will it be like former technology revolutions, creating millions of jobs, elevating the general wealth produced by humanity, and lifting millions out of poverty? Or will it bring scarcity and an extreme division between high-end jobs and low wages? How large will the creative destruction be? Will machines recklessly spread into cognitive fields that have so far remained untouched?
Just as digital word processing ended typists' jobs, people fear the effects of ever-expanding automation.
In summary, the coming revolution inspires anxiety across every sector of the economy, especially in some higher-end professions, which are sometimes more prone to automation than lower-end ones. Cybersecurity is one of them.
Although automation could bring much-hoped-for relief to cyber professionals, questions arise.
Automation could carry more risks than benefits in cybersecurity
While AI-enabled tools and machine learning have a lot of potential, they come with significant risks. Cybercriminals and other threat actors may use the same methods to boost their attacks, as automation is already used to scale DDoS and phishing campaigns, or to subvert the automated systems businesses employ.
Because most IT departments have yet to mature or understand these technologies, overlapping tools can cause misconfiguration and disruptions.
Integration and management of automated systems may also add costs in the short term, even if they help reduce costs in the long run. Inappropriate expectations and complacency can lead to catastrophe. Indeed, even though security automation handles tasks independently, humans must still instruct and supervise those tasks.
Human control must be adjusted and customized to how the organization wants the automation process handled. Since the degree of automation is adjustable, analysts must exercise caution with decision-making processes, even when they are automated.
Ultimately, many organizations still prefer to respond manually to each threat as it arises, trying to adapt to each one. They pile up tools and manual processes, short-sightedly failing to see the burden harden with each process laboriously implemented.
This posture is misguided because it misunderstands the purpose of automation as we envision it.
What's a good automation strategy?
Selective automation is the best help analysts could receive
One common misconception concerns the nature and reach of the proposed automation. We're aware that automation cannot and shouldn't be used for every task, and we are not promoting such a posture. The point is to remove some duties from humans, not all of them: the human brain still performs better at many.
Higher cognitive skills remain largely resistant to automation; only repetitive, basic tasks should be targeted. AI/ML is still far from able to replace humans, whose cognitive qualities are challenging to replicate. Communication, creativity, and critical thinking are the distinguishing qualities that enable humans to grow and prosper.
These qualities are incredibly precious in cybersecurity. Although analysts face attacks carried out with various tools, they ultimately face real humans with the same qualities and flaws. That is why human intelligence is needed to decipher patterns and signatures and predict attackers' next moves.
Humans should generally take AI/ML for what they are: performance enhancers. Humans can analyze data within a broad context and think critically. But they are not perfect: amid the enormous volumes of data analysts must ingest, errors inevitably occur. Automating and streamlining detection and response helps reduce those risks and relieves teams of their daily burden.
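To make the idea concrete, here is a minimal sketch of the kind of repetitive task worth automating: enriching an alert's indicators with threat-intelligence context so analysts don't copy them between consoles by hand. All names, fields, and the lookup table are invented for illustration.

```python
# Hypothetical sketch of automated alert enrichment.
# Field names ("iocs", "reputation") and data are illustrative only.

def enrich_alert(alert, threat_intel):
    """Attach reputation data for each indicator found in the alert."""
    enriched = dict(alert)
    enriched["indicators"] = [
        {
            "value": ioc,
            # Indicators absent from the feed default to "unknown".
            "reputation": threat_intel.get(ioc, "unknown"),
        }
        for ioc in alert.get("iocs", [])
    ]
    return enriched

# Toy lookup table standing in for a real threat-intelligence feed.
intel = {"198.51.100.7": "malicious", "example.org": "benign"}

alert = {"id": "A-1", "iocs": ["198.51.100.7", "example.org", "203.0.113.9"]}
result = enrich_alert(alert, intel)
print([i["reputation"] for i in result["indicators"]])
# → ['malicious', 'benign', 'unknown']
```

The step is mechanical and error-prone when done manually, which is exactly the bucket of work that belongs to the machine.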
In addition, security professionals can and should apply their expertise instead of spending time on labor-intensive manual tasks. These tasks are repetitive and unrewarding, adding to the burden on tier-1 analysts and thus increasing potential turnover. Moreover, some tasks are better done by machines, and humans should be relieved of them. It's about choosing which tasks fall into which bucket: AI/ML or human.
Automation and simplistic detection configurations are a terrible combination. Attackers are becoming increasingly creative; they adapt to change and, more often than not, stay ahead. They craft sophisticated phishing messages that pass through default email filters, and they have become increasingly adept at creating malware invisible to traditional security scanners. Tools configured with basic detection rules cannot keep pace with modern, rapidly evolving threats.
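The contrast can be sketched in a few lines: a single-keyword rule is trivially evaded, while even a crude combination of weak signals catches more. The keywords, allow-list, and threshold below are made up for the example, not a recommended configuration.

```python
# Illustrative contrast between a simplistic detection rule and a
# multi-signal one. All signals and thresholds are assumptions.

def naive_filter(message: str) -> bool:
    """Flags mail only when one known phrase appears — trivially evaded."""
    return "password reset" in message.lower()

def layered_filter(message: str, sender_domain: str, has_link: bool) -> bool:
    """Combines several weak signals into a score before deciding."""
    score = 0
    if any(kw in message.lower() for kw in ("password", "urgent", "verify")):
        score += 1
    if sender_domain not in {"corp.example.com"}:  # hypothetical allow-list
        score += 1
    if has_link:
        score += 1
    return score >= 2  # threshold chosen arbitrarily for the sketch

msg = "URGENT: verify your account"
print(naive_filter(msg))                              # → False (evaded)
print(layered_filter(msg, "evil.example.net", True))  # → True
```

A rewording is enough to slip past the naive rule, whereas the layered rule still fires because the other signals remain.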
The right tools can overcome automation barriers
In most cases, security tools must be configured to fit the particular environment in which they are deployed. Writing the detection and response rules that drive security automation requires engineering resources, which are usually limited.
From there, businesses have two options: hiring more engineers or adopting security automation tools that lower the barrier to who can configure them. The former is not ideal, since it consumes considerable resources.
The latter, by contrast, means relying on an agnostic solution to manage the tool stack, enabling teams to avoid switching between platforms, juggling different consoles, or transferring data from one tool to another. Once the initial configuration is done, teams can automate tasks without custom code. Thus, the skill barrier is lowered.
But even if the approach is no-code at first sight, teams need to be able to grow talent. Every analyst, junior or senior, should be able to pick it up and understand what's behind every workflow. This increases transparency and trust in automation, besides allowing continuous learning on the go.
However, one common pitfall is to automate alerting, or partially automate it with tools, while relying on a manual approach to remediation. Breaches spread quickly, and needing hours to handle one invalidates all the benefits of automated detection. Automated response is just as crucial as automated alerting. SOARs are the most appropriate tools to alleviate this burden on security teams.
Combining the benefits of an agnostic solution with those of a SOAR solution makes it possible to build processes that cover security from the detection of threats to their remediation.
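Such an end-to-end process might look like the playbook sketched below: a verdict drives the response, and destructive actions still pass through a human-approval gate, keeping the analyst in control. Every function and field name here is hypothetical, not an actual SOAR API.

```python
# Hypothetical detection-to-remediation playbook sketch.
# run_playbook, "ioc", and the action names are invented for illustration.

def run_playbook(alert, threat_intel, approve):
    """Pick a response action; high-impact actions need human approval."""
    verdict = threat_intel.get(alert["ioc"], "unknown")
    if verdict == "malicious":
        action = "isolate_host"
    elif verdict == "unknown":
        action = "open_ticket"   # escalate to an analyst for triage
    else:
        action = "close_alert"
    # Destructive steps still require a human decision.
    if action == "isolate_host" and not approve(alert):
        action = "open_ticket"
    return action

intel = {"198.51.100.7": "malicious"}
always_yes = lambda alert: True
print(run_playbook({"ioc": "198.51.100.7"}, intel, always_yes))  # → isolate_host
print(run_playbook({"ioc": "10.0.0.5"}, intel, always_yes))      # → open_ticket
```

The `approve` callback is where the adjustable degree of human control discussed earlier lives: swapping it out changes how much the playbook does on its own.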
Good automation is a force enhancer, not a force replacer.
Conclusion
Although threats are rapidly growing and evolving, some organizations still don't take the path to automation. Teams are reluctant to automate even their most basic tasks because they are anxious about the sustainability of their jobs, fear flawed implementation, and lack adequate human resources to operate automation solutions.
But choosing the right tool for automation can and must be encouraged. Good automation exists; we believe in it. The goal is not automation per se but automation that enhances. Good automation shouldn't create anxiety; on the contrary, it should alleviate analysts' burden and empower them. Transparency and trust should be the main focus of automation: knowing what's behind it and how it works is the best guarantee of adoption. Humans need to be empowered, not replaced.