Morey Haber, Chief Technology Officer and Chief Information Security Officer at BeyondTrust, highlights the potential dangers of hacking back against cyber criminals
Just a few months ago, a bill was introduced in the U.S. Congress that would allow organisations to take offensive action against intruders on their IT networks. The bill, known as the Active Cyber Defense Certainty Act, or ACDC for short, would allow victims to launch proactive cyber offensives against threat actors identified as attacking them or their organisation, in order to disrupt or cripple those actors’ operations and deter future attacks.
This bill is another sign that as sophisticated nation-state cyber threats have become a reality, and attacks on critical infrastructure have become commonplace, the concept of offensive cyber security – also known as “hacking back” – has gained more mainstream traction. However, as every security professional should know, this approach is not a trivial exercise, and it can be riddled with flaws and potential risks.
The risks of hacking back
In most countries, current regulations forbid firms and individuals from hacking back, much as the Wassenaar Arrangement restricts the sale of munitions. Only a few select government agencies have the authority to hunt down suspected cyber criminals in this way. This is because an offensive approach carries risks that may be unacceptable, or even illegal, if it is not executed with extreme precision, just like a physical military strike.
The main problem with launching an offensive attack is ensuring it is not a mistake. A full-fledged cyber offensive could have effects comparable in scale to those of a conventional war or even a nuclear strike. For instance, a company might accidentally target critical infrastructure (CI) because it believes the attack it is experiencing originates there. What if that CI had been “owned” and used to launch distributed denial-of-service (DDoS) attacks, as with the Mirai botnet, and someone decided to hack back against the CI itself? A mistaken cyber offensive could poison a water supply or cause a massive loss of power. What’s more, the ramifications of an inappropriate “punch” back could trigger an escalation that many organisations are not prepared to deal with, either technically or legally.
The dangers of automation
Next, consider the growing use of artificial intelligence (AI), particularly in IT security automation and orchestration. AI is built on machine learning algorithms: programmes that learn from examples and derive results from statistics or other models. While AI has no innate concept of good or bad, it can be given parameters to differentiate between “good” and “bad” behaviours or desired outcomes. The problem is that AI, like a young child, can learn bad behaviour and initiate a very undesirable response, much like a temper tantrum. If AI is allowed to attack back automatically, a cyber warfare scenario could quickly escalate beyond control.
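The escalation risk can be sketched in a few lines. The toy model below is entirely hypothetical (the policy, the multiplier and the numbers are illustrative, not drawn from any real system): it shows what happens when a fully automated “attack back” policy answers every perceived attack with a slightly stronger response, and a single false positive starts the loop.

```python
# Toy model (hypothetical numbers): an automated retaliation policy
# with no human in the loop. Each perceived attack is answered with
# a response 50% stronger than the perceived hit.

def respond(perceived_severity: float) -> float:
    """Automated policy: retaliate a little harder than the perceived hit."""
    return perceived_severity * 1.5

severity = 1.0  # one misclassified benign event starts the exchange
for step in range(10):
    severity = respond(severity)

print(round(severity, 1))  # 57.7 -- nearly 60x the original event
```

After only ten automated exchanges, the response is almost sixty times the severity of the benign event that triggered it, which is the runaway behaviour a human reviewer would normally interrupt.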
For a more practical example, consider video streaming. The desired result is clear: multicast packets to every target subscribing to the stream. If an in-line network device corrupts those packets due to a hardware or software fault, or another attack, the received packets could be malformed. AI could easily misinterpret these malformed packets as an attack, or as the attempted exploitation of a vulnerability. Crazy as it may seem, this is essentially what signature-based intrusion detection system (IDS) solutions do today. If intrusion prevention system (IPS) engines are empowered to take automated action on that data, the result could be to terminate the stream or, worse, to attack back against the source. If you think this is far-fetched, consider how researchers deliberately fooled an AI engine into classifying a turtle as a gun.
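A minimal sketch makes the point concrete. The signature name and matching rule below are hypothetical, but the mechanism is the one signature-based IDS engines use: a pattern match on packet structure cannot tell a corrupted-in-transit packet from a crafted malicious one, so any automated response keyed to the signature fires in both cases.

```python
# Hypothetical signature table: a rule fires on any RTP-style packet
# whose header is truncated or whose version bits are not 2.
SIGNATURES = {
    "malformed_rtp_header": lambda pkt: len(pkt) < 12 or pkt[0] >> 6 != 2,
}

def classify(packet: bytes) -> list[str]:
    """Return the names of all signatures the packet matches."""
    return [name for name, match in SIGNATURES.items() if match(packet)]

# A well-formed RTP-style packet: version bits = 2, full 12-byte header.
good = bytes([0x80]) + bytes(11)
# The same packet after an in-line fault flips the version bits to 0.
corrupted = bytes([0x00]) + bytes(11)

print(classify(good))       # []
print(classify(corrupted))  # ['malformed_rtp_header']
```

The corrupted stream looks identical to an attack from the signature’s point of view; an IPS empowered to act on that match has no way to know the “attacker” is a faulty switch.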
The proportional response
The potential outcomes of the above scenarios show why even conventional warfare is locked down, and why some highly sensitive areas of decision-making are probably always best left to humans—imperfect as we may be.
While automation in many forms is helping IT and IT security solve problems of scalability and efficiency, its use should be handled with care. For automation technologies and platforms governed by AI, where the reasoning behind a given response may not even be explainable, the level of caution should be higher still. Actions and reactions can rapidly spiral out of control, and AI combined with automation could make the situation substantially worse.
Organisations would do well to leave the offensive approach to governments and their cyber security programmes. Instead of risking the potential harm that comes with an offensive cyber security posture, they should avoid the hype and focus on defensive IT security technologies that address the three pillars of cyber security protection – identity, privilege and asset management. These form the basis of any cyber security programme and can mitigate risk better than any counter-offensive strategy.