Enhancing Defense with Deception

In previous posts, I have discussed the use of security automation and intelligence sharing to decrease the time to detect and the time to respond to cyberattacks. Today, I turn my attention to the other side of the equation: slowing the attack through deception. You may recall my previous discussions of Boyd’s OODA loop theory. In his seminal presentation, Boyd argues that to win, one must get inside the adversary’s OODA loop [1]. Interrupting the adversary’s OODA loop can cause confusion and disorder for the opponent, and changing the situation faster than the attacker can comprehend the changes gives the defender an advantage. Deception provides a means to operate within the opponent’s OODA loop and disrupt his situational awareness.

What is Deception?

Deception is not about deceiving the enemy. Deception is about causing the enemy to deceive himself. In other words, with deception, you cause the opponent to adjust his orientation based on false observations. The art of deception is to show your opponent something he wishes to see and wishes to believe. If you create a false impression, you can seduce the enemy into deceiving himself.

The British Q-Boats

The British Navy used this concept to counter the successes of the German U-boats in WWI. The British showed the Germans something the Germans expected to see, causing the Germans to deceive themselves. During WWI, the German Navy was having great success with its submarines, especially in sinking merchant ships. In response, the British developed what they called Q-boats: heavily armed vessels disguised as merchant ships. These “merchant ships” would entice the German U-boats to surface and attack. In today’s jargon, we would call these “honey boats.” Once a U-boat surfaced, the Q-boat maintained the deception. Part of the British crew, known as the “panic party,” would appear to abandon ship. Once the U-boat was within range, the Q-boat would reveal its hidden guns and open fire on the German sub.

Of course, there are a couple of other important lessons in this story:

  1. Be willing to lose that which you use for enticement. In other words, assume the bear will eat the honey. The Germans destroyed many of the Q-boats.
  2. Once the opponent discovers the deception, the opponent will adjust. Assume the attacker is sophisticated and will change tactics, which requires you to maintain situational awareness and adjust your methods.

Honeypots: Today’s Q-Boats

The concept of battlefield deception applies to cyberspace. Cyberspace provides great potential for the practice of deception in cyber defense operations. In the cyber realm, combatants can construct and move deceptive terrain with ease. Companies can use the deceptive practice of honeypots and honeynets to divert attackers from valuable assets. A honeypot is designed as a decoy to entice attackers, like the British Q-boats. In addition to slowing the attacker, honeypots allow defenders to gain knowledge of the attackers’ tactics, techniques, and procedures. Engaging the attacker early and maintaining deception with a honeypot allows the defender to collect and record details about the attacker’s attempts to compromise the system. Honeypots provide a versatile approach to network defense. Defenders can deploy preventative and reactive honeypots in various network environments and situations. Honeypots can also provide efficiencies over traditional intrusion detection systems: because honeypots play no role in normal operations, they generate far less logging data, or noise, than traditional intrusion detection.
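To make the idea concrete, the sketch below is a minimal low-interaction honeypot: a TCP listener on an otherwise unused port that accepts connections, records the source address and any bytes the attacker sends, and never participates in legitimate traffic. The port number and log file name are arbitrary placeholders, and a purpose-built honeypot framework would offer far richer service emulation; this is only an illustration of the logging-decoy concept.

```python
import logging
import socket

# Minimal low-interaction honeypot sketch: listen on a decoy port,
# log every connection and any bytes the attacker sends, then move on.
# Port 2222 and the log file name are placeholders, not recommendations.
logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

HOST, PORT = "0.0.0.0", 2222

def run_honeypot():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        while True:
            conn, addr = server.accept()
            with conn:
                logging.info("connection from %s:%d", addr[0], addr[1])
                conn.settimeout(5)
                try:
                    data = conn.recv(4096)  # capture the attacker's first bytes
                    if data:
                        logging.info("payload from %s: %r", addr[0], data)
                except socket.timeout:
                    logging.info("no payload from %s", addr[0])

if __name__ == "__main__":
    run_honeypot()
```

Because no legitimate service listens on the decoy port, every entry in the resulting log is worth investigating, which is the source of the noise advantage over traditional intrusion detection described above.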

Deploying and maintaining honeypots requires expertise and ongoing effort to ensure the honeypot remains relevant. For a deception operation to be effective, it must present and maintain a plausible story to the attacker, much like the “panic parties” on the British Q-boats. The architecture of a honeypot must prevent the attacker from using a compromised system within the honeypot to attack legitimate systems. The architecture must also give the defender the ability to detect and capture every action the attacker takes while in the honeypot. Finally, the honeypot must be realistic enough that, once it lures the attacker in, it continues to deceive the attacker.

A Canary in a Coalmine

Deception tactics are not limited to honeypots and honeynets. Defenders can deploy many types of deception, which can provide an early-warning system of possible intrusions. Defenders can create a wide range of fake entities, including files, database entries, and passwords, which only a malicious attacker should access. Defensive systems monitor the fake entities and alert on any interactions with the bogus resources. Several of these methods are simple to implement and require no new technology. Organizations should consider combining techniques into a deception framework. The use of multiple techniques increases the effectiveness of the deception.
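As a simple illustration of the monitoring side, the sketch below polls the last-access time of a small set of decoy files and raises an alert when any of them is touched. The file paths, polling interval, and alert mechanism are all placeholders; a production deployment would more likely rely on the operating system’s audit facilities rather than polling access times, which some filesystems update lazily.

```python
import logging
import os
import time

# Sketch of a decoy-file monitor: poll the access time of each honey file
# and alert when anyone touches one. The paths are illustrative, and a real
# deployment would use OS auditing (e.g., auditd or Windows SACLs) instead
# of polling atime, which filesystems mounted with relatime update lazily.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

HONEY_FILES = ["/srv/files/payroll_2024.xlsx",
               "/srv/files/passwords_backup.txt"]

def watch(interval=30):
    last_seen = {path: os.stat(path).st_atime for path in HONEY_FILES}
    while True:
        time.sleep(interval)
        for path in HONEY_FILES:
            atime = os.stat(path).st_atime
            if atime != last_seen[path]:
                last_seen[path] = atime
                logging.warning("ALERT: honey file accessed: %s", path)

if __name__ == "__main__":
    watch()
```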

Honeytokens are fake records within a database used to alert on suspicious activity. Fake records, such as customer records with fake names, are placed in the database, and any attempt to access these records is considered suspicious. Honey files are files that have potentially interesting, but fake, content. The files should look realistic and be enticing to attackers. Defenders spread the honey files across file servers within the organization and then monitor for access to these files. Attackers also use metadata, or information about the data, during reconnaissance and attacks. Defenders can place false information within the metadata, such as file creation and modification times, to confuse the attacker.
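To illustrate the honeytoken idea, the sketch below assumes a hypothetical application that looks up customer records. A small set of planted record IDs acts as tripwires, and any request for one of those IDs raises an alert before the query runs. The IDs, table layout, and alert call are all illustrative, not part of any particular product.

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# IDs of planted fake customer records; values are illustrative only.
HONEYTOKEN_IDS = {90001, 90002, 90003}

def get_customer(conn, customer_id, requester):
    """Look up a customer record, alerting if a honeytoken ID is requested."""
    if customer_id in HONEYTOKEN_IDS:
        # No legitimate workflow references these records, so any hit is suspicious.
        logging.warning("ALERT: honeytoken record %d requested by %s",
                        customer_id, requester)
    cur = conn.execute("SELECT id, name, email FROM customers WHERE id = ?",
                       (customer_id,))
    return cur.fetchone()

if __name__ == "__main__":
    # Minimal self-contained demonstration using an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    conn.execute("INSERT INTO customers VALUES (90001, 'Jane Doe', 'jane@example.com')")
    print(get_customer(conn, 90001, requester="app-server-3"))
```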

Challenges with Deception

As with all approaches to cybersecurity, the use of fake entities for deception comes with challenges. The fake entities can affect performance and will require resources. False alarms, or false positives, can occur when an employee interacts with a fake entity on the system. However, an employee’s interaction with a honeytoken may itself indicate an insider threat. Perhaps the biggest challenge with fake entities is creating them: to be effective, they must look realistic to the attacker.

Conclusion

Defenders should consider using deception as a critical component of their defensive posture since attackers have repeatedly demonstrated the ability to subvert traditional defenses. Conventional defensive tactics focus on detecting and preventing the attacker’s actions, while deception focuses on manipulating the attacker’s perceptions. Deception can manipulate the attacker’s thinking and cause him to act in a way beneficial to the defender. Deception can also cause the attacker to expend resources and force him to reveal his techniques and capabilities. In addition to detecting intruders, deception methods can provide an effective means of identifying an insider threat.

Cyber defenders can use many defensive methods to deter or delay attackers. Advanced defense methods, including deception, can raise the cost of an attack and slow the attack. Deception can accomplish two major objectives by disrupting the attacker’s OODA loop. First, deception can confuse and slow the attacker, causing the attacker to expend resources. Second, deception can reveal the attacker’s capabilities and techniques.

About the author: Donnie Wendt is an information security professional focused on designing and engineering security controls and monitoring solutions. Also, Donnie is an adjunct professor of cybersecurity at Utica College. Donnie is currently pursuing a Doctorate of Science in Computer Science with a research focus on security automation and orchestration.

References

[1] Boyd, J. R. (1996). The essence of winning and losing (C. Spinney, C. Richards, & G. Richards, Eds.). Retrieved from http://dnipogo.org/john-r-boyd/