What Can Cognitive Security Learn From The B-17 Flying Fortress?
- CSI

- Nov 18

When are people the most vulnerable to a social engineering attack?
When are they most likely to make a mistake?
If we pause to consider these two questions, we will likely arrive at a similar list of answers: when people are tired, distracted, or confuse one thing for another.
In the early years of World War II, the U.S. Army Air Corps lost over 400 aircraft in under two years, not to enemy engagements but to “pilot error”. Pilot error included incidents such as landing an aircraft without ever deploying the landing gear, running out of fuel, or even flying into the ground.
Exasperated by the loss of both aircraft and lives, Dr. Paul Fitts and his colleague Dr. Alphonse Chapanis refused to accept the simple explanation of “pilot error” as the root cause of these tragedies. Their first clue was that the accidents were not random: they followed consistent patterns and usually occurred when pilots were distracted, fatigued, or scared.
In other words, these incidents occurred when pilots were operating automatically rather than deliberately.
If this sounds familiar, it should. For the past 30 years, “human error” has consistently ranked among the top causes of data loss. As with World War II-era pilots, these losses are not caused by an adversary. They are errors such as misaddressing an email, misconfiguring a system in a way that exposes data to the open web, or simply sharing information with an unintended audience. Just as treating “pilot error” as a root cause left aviation design flaws hidden, treating “human error” as a root cause of data breaches prevents security researchers from discovering and remediating the true root causes of modern data losses.
In August of 2019, a story broke about an Australian football team sending sensitive information about player contracts, payments, and other financial matters to a rival team. The cause appears to have been a misaddressed email, a common occurrence when similar names (belonging to different organizations) are confused by an email client's auto-complete function. This type of error is among the most common causes of non-malicious data breaches.
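One layer of protection against this failure mode is a pre-send check that flags recipients whose domain is not trusted but looks deceptively similar to one that is. The sketch below is illustrative only: the function names, the trusted-domain list, and the similarity cutoff are all invented for this example, not taken from any real email client.

```python
import difflib

def domain_of(address: str) -> str:
    """Return the domain part of an email address, lower-cased."""
    return address.rsplit("@", 1)[-1].lower()

def flag_risky_recipients(recipients, trusted_domains):
    """Return (address, look-alike trusted domains) for each recipient
    whose domain is NOT trusted. A close match to a trusted domain
    suggests an auto-complete confusion rather than a deliberate choice."""
    risky = []
    for addr in recipients:
        dom = domain_of(addr)
        if dom in trusted_domains:
            continue
        # Hypothetical heuristic: string similarity to a trusted domain.
        lookalikes = difflib.get_close_matches(dom, trusted_domains, cutoff=0.6)
        risky.append((addr, lookalikes))
    return risky

# Illustrative scenario: the club's own domain is trusted, the rival's is not.
trusted = {"ourclub.example.com"}
outgoing = ["cfo@ourclub.example.com", "cfo@rivalclub.example.com"]
print(flag_risky_recipients(outgoing, trusted))
```

A client built this way could require an explicit confirmation step before sending to any flagged address, turning a silent slip into a deliberate decision.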
Creating a centralized repository of non-malicious data breach causes is one of the lowest-hanging opportunities for reducing non-adversarial data losses. Such a repository would allow security and human factors researchers to identify trends and help direct root cause analysis investigations into the most egregious primary and contributing factors.
Claiming that a breach occurred because of human error fails to account for either the system in which that failure occurred or the context that contributed to the action leading to the breach. A well-designed system should not allow a human error to lead to a data breach. Dr. James Reason argued that adding layers of protection is one approach to preventing an error from becoming a safety (or security) incident. Of course, every layer will have unavoidable gaps, which emerge as a tradeoff in any system. These gaps may be analogized to the holes in a slice of Swiss cheese: errors become incidents only when the holes align in such a way that an error passes through every layer. The key to prevention, as Ira Winkler argues, is to design systems so that it becomes nearly impossible for a user to cause irrecoverable harm.
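Dr. Reason's layered model can be made concrete with a little arithmetic: if each independent layer catches an error with some probability, an error becomes an incident only when it slips through the hole in every layer. A minimal sketch, with per-layer catch rates that are invented purely for illustration:

```python
def incident_probability(catch_probs):
    """Probability an error slips past ALL layers, assuming each layer
    independently catches it with the given probability (Swiss cheese
    model: an incident requires the holes in every slice to align)."""
    p = 1.0
    for catch in catch_probs:
        p *= (1.0 - catch)  # chance the error passes this layer's hole
    return p

# One layer catching 90% of errors still lets roughly 1 in 10 through,
# while three such imperfect layers together let through only ~1 in 1000.
print(incident_probability([0.9]))
print(incident_probability([0.9, 0.9, 0.9]))
```

The point is not the specific numbers but the shape of the curve: several imperfect, independent defenses outperform one "perfect" one, which is why removing any single layer rarely looks dangerous in isolation.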

After analyzing the findings of over 450 aviation accidents, Dr. Fitts and Dr. Chapanis recommended several changes to the design of aircraft instrumentation and controls. Addressing this “designer error” has saved thousands of lives in the decades since their changes were introduced. One of the most impactful was modifying the shape of the levers controlling the landing gear and the wing flaps. Using “shape coding” to make controls feel different when grasped meant pilots knew instantly which action they were taking, even when distracted or not looking at the control itself.
What would be the impact on phishing if hyperlinks were likewise “shape coded”?
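One possible textual analogue, sketched below, is to render a hyperlink's actual host in a perceptually distinct form, so that a look-alike destination cannot pass for a trusted one at a glance. The bracketed-uppercase rendering convention here is invented for illustration; it is not a real browser or client feature.

```python
from urllib.parse import urlparse

def shape_coded(url: str) -> str:
    """Render a URL with its actual host set off in brackets and caps,
    an invented textual analogue of shape coding: the part that matters
    is made perceptually distinct, like a differently shaped lever."""
    host = urlparse(url).netloc
    return url.replace(host, f"[{host.upper()}]", 1)

# A deceptive host no longer blends into the rest of the link text.
print(shape_coded("https://example.com.evil.test/login"))
```

As with the gear and flap levers, the goal is that the distinguishing feature registers automatically, without requiring the user to stop and inspect the control.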
The additional capabilities end-users are set to gain from AI mean that the potential impact of “human errors” is about to explode. It is now more critical than ever to understand how to prevent such catastrophes. The analysis Dr. Fitts and Dr. Chapanis conducted would not have been possible had the investigation data not been available. We now have the opportunity to proactively prevent AI-amplified, user-initiated loss by creating a human-error incident repository. Centralizing this data would enable researchers to track patterns, prioritize the most damaging failure modes, and design more resilient systems, preventing this source of loss now and for generations to come.
