Ganna Pogrebna

Diversity and Inclusivity in Cyber Security: The Case of Technology and People

Updated: Nov 27, 2022



Like many systems and aspects of our lives, cyber security often suffers from a lack of diversity and inclusivity. This concerns both technology and people. Below, we discuss how a lack of diversity and inclusion can reduce the efficiency and undermine the resilience of cyber security systems.


Technological Diversity


When we consider technological diversity, we need to remind ourselves that businesses often create issues in their own IT systems by locking them to a handful of software and hardware suppliers. As a result, only a small range of hardware and software solutions is used - i.e., the same standard is applied across the entire organisation, resulting in low diversity. This may sound familiar: when you start working in a new organisation and ask for a particular piece of hardware or software, you often hear that the organisation does not support it (e.g., as a developer you may need a Mac to fulfil your duties, but you are given a so-called "IBM-compatible" PC instead). Such lack of diversity creates additional "scale" vulnerabilities in equipment and software, because if one machine or piece of software is compromised in a cyber attack, the threat can propagate very quickly through the entire cyber system of the business. This happens because low-diversity systems have less variation in configuration, less heterogeneity and, as a result, less complexity for attackers to deal with.

Technological diversity is particularly important when businesses are trying to assess the risk of a cyber attack and calculate its cost. If an attack hits all of your systems and they are a critical resource, then it potentially represents a significant cost to the business. Diversity helps reduce the attack surface through heterogeneity of the types and systems in use. It also supports resilience: if a diverse system is attacked, multiple back-ups allow everything to be put back on track very quickly. If the call centre of an insurance company operating 10,000 machines on the same system succumbs to a cyber attack, everything could potentially fail at once, leaving customers unable to access services for several days. This is also quite likely to have consequences for business reputation and even revenue.
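To make this intuition concrete, here is a minimal, illustrative sketch (not a real risk model) of how a software monoculture widens the blast radius of a single exploit: it simply measures what fraction of a fleet runs the version that a hypothetical exploit targets. The fleet compositions and version names below are invented for illustration.

```python
from collections import Counter

def exposed_fraction(fleet, vulnerable_version):
    """Fraction of machines running the version a single exploit targets."""
    counts = Counter(fleet)
    return counts[vulnerable_version] / len(fleet)

# Hypothetical fleets: a monoculture vs. a deliberately diverse estate.
monoculture = ["os-a-v1"] * 10_000
diverse = ["os-a-v1"] * 4_000 + ["os-a-v2"] * 3_000 + ["os-b-v5"] * 3_000

print(exposed_fraction(monoculture, "os-a-v1"))  # 1.0 - every machine is exposed
print(exposed_fraction(diverse, "os-a-v1"))      # 0.4 - a much smaller blast radius
```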





Cloud operators, such as Google, Amazon, and Microsoft, deliberately design their architectures with diversity in mind, with multiple versions of many different components functioning in the same system at the same time. This is partly due to the size and scale of their operations: many technology providers have legacy systems and roll out updates gradually. Therefore, they often have different versions of hardware and software accumulated over time, which complicates servicing but increases resilience. Legacy systems persist because updating to a new version or standard may not always work. Hence, rather than potentially paralysing all old systems with a new update, different parts of the system are tested and upgraded incrementally. While this creates obvious headaches for IT support staff, ultimately it yields a good cyber security resilience design. There can be millions of machines in these large cloud data centres, and the upgrade paths can be very fast. However, installing a mix of tools for cyber security reasons rather than in response to end-user needs is quite costly and often depends on available resources.
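The incremental upgrade pattern described above can be sketched in a few lines. This is a simplified, hypothetical illustration of a staged (wave-by-wave) rollout that halts if a wave fails its health checks; real cloud rollout systems are far more sophisticated, and the function and parameter names here are assumptions.

```python
def staged_rollout(machines, upgrade, health_check, wave_size=100):
    """Upgrade a fleet in small waves; halt if any wave fails its health checks."""
    for start in range(0, len(machines), wave_size):
        wave = machines[start:start + wave_size]
        for machine in wave:
            upgrade(machine)          # apply the new version to this wave only
        if not all(health_check(m) for m in wave):
            # Stop early so a bad update never reaches the whole estate.
            return f"halted after {start + len(wave)} machines"
    return "rollout complete"
```

In this sketch, upgrade and health_check stand in for whatever deployment and monitoring hooks an organisation actually uses.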


Consider, for example, the transfer of the UK's vehicle driving licensing system to an online version. The transfer was very successful, as the new online system replaced many old features of the "physical" licensing system - from printed card licences to various registration documents - which were now combined and accessible electronically in one coherent digital resource. Yet, despite the initial success, when the system later required an upgrade, the upgrade failed and froze the service for a considerable amount of time. Clearly, even with essential maintenance downtime, such service disruptions can lead to complaints from customers unable to access and use critical systems, not to mention the reputational costs that may result. The lesson here is that running multiple versions for diversity and designing resilience into the system, with multiple back-ups where possible, is good practice.





A good example of a diverse technological system comes from the essential healthcare sector. In healthcare, large hospital operations use big computer servers for many systems, such as appointments and patient records, but often run separate systems for x-rays, MRIs, and so on, along with many different hardware and software solutions. Furthermore, all critical information (e.g., patient records, clinical data, etc.) often has a "physical" (paper) twin, so that in case of an attack it is always possible to restore the valuable data.


Human Inclusivity and Diversity in Cyber Security

Human inclusivity and diversity is also an important problem. Recently, several articles and blogs have highlighted the lack of women in cyber security, but other minority groups are equally underrepresented. Yet probably the most important problem in cyber security is that usually only technologically savvy people are asked about cyber security in organisations. And this is not always a good thing. The usual assumption is that cyber security is a boring technical issue, and that non-technical professionals do not have anything to contribute to cyber security decision making. However, if you ask people outside the cyber security team, as well as outside the IT team, to contribute to the cyber security discussion, you will find that many people have things to say and feel that they can provide a range of great ideas.


Colourful Teams

In terms of monetary investment, a company can spend millions on cyber security tools to monitor the technology infrastructure of applications, networks, operating systems, and traffic data. Examples of common cyber security defence practices and tools include (a small illustrative example follows the list):

  • Code reviews

  • Static code analysis

  • Dynamic code analysis testing

  • Functional testing

  • Non-functional testing

  • Automatic vulnerability scans

  • Manual scans

  • Security information and event management (SIEM) systems for threat detection and incident response

  • Incident log management, analysis, and response.
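To make the last category less abstract, here is a minimal, assumed sketch of SIEM-style log analysis: it scans a hypothetical authentication log for repeated failed logins and flags source addresses that exceed a threshold. The file name, log format, and threshold are invented for illustration and are not taken from any particular product.

```python
import re
from collections import Counter

# Assumed log format, e.g. "2024-01-05 10:22:31 FAILED LOGIN user=alice src=203.0.113.7"
FAILED_LOGIN = re.compile(r"FAILED LOGIN .* src=(\S+)")

def flag_suspicious_sources(log_path, threshold=10):
    """Count failed logins per source address and flag anything over the threshold."""
    failures = Counter()
    with open(log_path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    return {src: count for src, count in failures.items() if count >= threshold}

# Example usage with an assumed file name:
# print(flag_suspicious_sources("auth.log", threshold=5))
```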


Yet, it is equally important to invest in people and to form different groups within your organisation to address cyber security problems. It is often helpful to have Red, Blue, and Purple teams tackling cyber security.


Red Teams: a red team is an internal group that explicitly challenges a company's strategy, products, and preconceived notions. It frames a problem from the perspective of an adversary or a sceptic, to find gaps in plans and to avoid blunders. The term comes from the Cold War practice in which US officers adopted a Soviet, "red" perspective (Moscow did the same thing and called it a "blue team"). US officers would "think red" to attempt to defeat US plans and systems, rather than mirror-imaging US thinking onto the Soviets. For example, the US Navy used a red team to try to defeat its own submarine force using Soviet concepts and technology. Today, red teams are used to double-check important assumptions and to overcome groupthink - the tendency of groups to converge on unchallenged, poor-quality decisions.


Blue Teams: a blue team is a group of individuals who analyse information systems to ensure security, identify security flaws, verify the effectiveness of each security measure, and make certain that all security measures will continue to be effective after implementation. In preparation for a computer security incident, the blue team performs hardening on all operating systems throughout the organisation. If an incident does occur, the blue team acts to identify, contain, eradicate, and recover from it, seeking to learn lessons from the event for future response management.

Purple Teams: purple teams are, ideally, superfluous groups that exist to ensure and maximise the effectiveness of the red and blue teams. They do this by integrating the defensive tactics and controls from the blue team with the threats and vulnerabilities found by the red team into a single narrative that makes the most of both efforts. When done properly, the synergy of all three teams is incredibly beneficial, though ideally this collaboration happens naturally.


The important thing to remember about these colourful teams is that they also need to be diverse and inclusive, and consist of people who are not all tech experts. Such teams will think outside the box and will be able to achieve the best cyber security outcomes for the organisation.



Takeaways


Diversity and inclusivity in cyber security systems are important concepts for both the technological and the human aspects of resilient operations. Diversity in technology increases complexity for adversaries and makes critical organisational assets less accessible to cyber criminals. Diversity and inclusivity in human teams provide extra bandwidth, ensuring that organisations think about cyber security and cyber threats in creative, out-of-the-box ways.



This post was originally written by Ganna Pogrebna for the CyberBits blog in 2020.
