In cybersecurity, we often talk about safety, security, and privacy. Yet not only do we not fully understand what these terms mean and often use them interchangeably (this is particularly true for the pairs safety/security and safety/privacy), we also often interpret them incorrectly. On top of that, there is considerable heterogeneity in how different people and different organisations perceive safety, security, and privacy. When thinking about the distinction between safety and security, I really like to use the rainy day analogy. Essentially, this analogy suggests that safety is a desired characteristic of the environment and security is the means to achieve it. On a rainy day, we want to stay dry, so this characteristic of "dryness" is similar to "safety". If staying dry is a bit like staying safe, then an umbrella is like security. We use an umbrella to stay dry, so the umbrella represents the means of achieving dryness, just as security represents the means of achieving safety. With privacy, the situation is even worse, as it is not clear what it actually is, and scholars still grapple with the term and its various definitions.
But the problem is not just that we are generally confused about the terminology. The problem is that what feels private to one person is highly invasive to another; what is safe for one organization is completely unsafe for another. Hence, the definition of "good enough" security measures also varies from individual to individual and from organization to organization. Coming back to the analogy above: for some people, being dry means that not a single drop should fall on their hair, shoes, or clothes, while others are fine with getting their shoes wet. Naturally, the umbrellas these different people select will vary. So what psychological reasons contribute to this confusion?
Intertemporal Effects
Let us consider the example of social media. Many people do not really understand how social media can be used in ways that are adverse to their interests. They may know what is going on (or guess what might happen), but they may not seem to care about their personal privacy. One reason people do not seem to care enough about their personal privacy is the intertemporal effect of personal privacy (which, as noted above, is a problematic concept in itself). These effects produce behavioural patterns similar to those we observe in perceptions of data breaches: the harm is not visible in the immediate feedback we receive. The link between privacy, trust, and behaviour changes when people are "online" because danger is not correctly perceived, relative to time, at the moment of Internet or digital technology use.
There have been many studies of human behaviour online that measure people's perceptions of privacy. In many cases, when asked about privacy online, people reveal that they care but often forget about it as they engage in online activities. The impact (harm) is often not readily apparent, and by the time people notice the damage, or realize its potential, it is too late. Interestingly, most of the time people only realize that they have been victims of a cyberattack when many other people report issues (i.e., when many other people are affected by the same threat). It also has to do with the fact that cybercriminals often target data about you (e.g., identity data) at a time when you are least likely to care or pay attention to what is going on around you. For example, if you quickly need to send a work-related email, you may well use an insecure Internet connection. Do you know that you should not do it? Of course you do. But at the moment when you need to get that email out, it is very unlikely to matter to you. As a result, you only start seriously thinking about your behaviour, potential threats, and how to rectify the situation when you lose important data or access to your account as a result of this very simple mistake - connecting to a dodgy network.
Essentially, this happens because we tend to value use today more than we value use tomorrow - it is the pull of "easy to use" that, in effect, makes it too easy for us to engage and potentially compromise our safety and privacy. Paradoxically, behavioural intent can be influenced and manipulated through online engagement, as seen in extreme forms of impact such as radicalization. Yet it can also be exerted through subtler online tools, such as online advertising.
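To make the "value use today more than tomorrow" point concrete, here is a minimal sketch of intertemporal discounting. It is not a model from this post: the payoff numbers, the delay, and the daily discount factor are illustrative assumptions chosen only to show how a small immediate convenience can outweigh a much larger, but delayed, privacy harm in the moment of decision.

```python
# Illustrative sketch of intertemporal discounting (all numbers are assumptions,
# not data from the post): a small benefit now vs. a larger harm months later.

def discounted_value(value: float, delay_days: int, daily_discount: float = 0.97) -> float:
    """Exponentially discount a future value back to 'today'."""
    return value * (daily_discount ** delay_days)

convenience_now = 1.0       # perceived benefit of sending the email right now
privacy_harm_later = 10.0   # perceived harm of a breach noticed much later
delay = 180                 # days until the harm becomes apparent

perceived_harm_today = discounted_value(privacy_harm_later, delay)

print(f"Benefit of acting now:     {convenience_now:.2f}")
print(f"Harm, discounted to today: {perceived_harm_today:.2f}")
print("Risky behaviour feels 'worth it':", convenience_now > perceived_harm_today)
```

Under these assumed numbers the discounted harm shrinks to a fraction of the immediate benefit, which is one way to read why the insecure-connection shortcut feels reasonable at the time.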
This tendency is further exacerbated by location-based e-commerce services that may shape social and environmental factors to make the system more engaging, "user-friendly", and usable, but at the same time affect trust, privacy, and safety. The ability of humans to abstract and distance themselves from the cause and effect of adverse events (including online events) is perhaps linked to deeper psychoanalytical characteristics shared by all people, such as the psychological defense mechanisms we use to cope with anxiety (e.g., repression of memories or denial).
Sensory Gating
Another cause of our perception bias could be the filtering out of non-essential information, which psychologists call "sensory gating". These innate behaviours have consequences for many areas, including cybersecurity. An obvious example of sensory gating is perceptions of global warming and climate change. Many people appreciate that climate change is an important issue; yet, on a daily basis, many of us go to a shop and purchase groceries that have been sourced from distant countries with huge carbon footprints, packed in plastic, and so on, rather than selecting local, sustainable produce in environmentally friendly packaging.
These behaviours are part of a range of psychological defense mechanisms that are a natural condition of being human. They have been studied extensively, and psychological research has developed several therapies to address them. For example, rational-emotive behavior therapy (REBT) can be used to encourage the positive emotional states that arise when we interpret our experiences in ways that allow us to feel good about ourselves. Defense mechanisms can distort, deny, or falsify perceptions of reality; they are not buried in the subconscious. REBT seeks to remove defense mechanisms that affect levels of personal happiness and trap negative emotions.
The cognitive behavioral therapy (CBT) approach focuses on the thoughts, beliefs, and attitudes that affect feelings and behaviour. While CBT is among the most common treatments for mental disorders, its foundations lie in investigating the link between thoughts and feelings, and it works by shifting the emphasis from negative to positive perceptions. In particular, encouraging thoughts that address the causes of feelings can change behaviour by reducing negative feelings. These psychological techniques are grounded in theoretical foundations and have been shown to affect human behaviour.
Note that such techniques can be used not only to improve people's ability to deal with cyber risks, but also to trick them into engaging in risky behaviour online. Many social engineering techniques are based on similar principles. Consider, for example, the Cambridge Analytica case, in which millions of personal data records were used to create individual data profiles, which were then used to manipulate people's behaviour by exploiting the very principle of substituting negativity with positivity.
Takeaways
There is much about safety, security, and privacy that we, as a society and as individuals, do not understand very well. This lack of understanding often affects business culture, causing organizations to misinterpret and misuse these concepts and, ultimately, invest in inadequate protection mechanisms that either produce no effect at all or may even harm organizational cybersecurity. To avoid this, the psychology of cybersecurity should become a key factor in designing and building security systems in and around organizations.
#cybersecurity #psychology #humanfactor #infrastructure #cyberrisks #cyberthreats #cyberattack #risk #infosec #security #vulnerability #informationsecurity
This post was originally written by Ganna Pogrebna for the CyberBits blog in 2020