Ganna Pogrebna

Should We Use AI and Machine Decision Making in Cybersecurity?

Updated: Nov 27, 2022



The field of cybersecurity increasingly works together with other fields, including, for example, ML automation, the legal profession, ethicists, social scientists, and social media specialists. Yet these fields are themselves being impacted and disrupted by automation. One would think that cybersecurity should be one of the most protected professions in the world right now: it is in increasingly high demand as threat vectors multiply and the volume of cyberattacks spreads into all corporate, social, and government spheres. Yet, rather paradoxically, the economic realities that followed from COVID's second-order effects tell us that this is not quite the case, as many organisations are cutting both cybersecurity expenditure and R&D in cyber threat detection. It is therefore useful to understand whether, and to what extent, automation could and should be used for cybersecurity purposes.


Internet of Things Changes Your Cybersecurity Planning Approach


Today, most organizations treat cybersecurity as a cost, a daunting proposition that they will only implement as a last resort. Why? Because very often they do not see the benefit (until it is too late, anyway). But business models are changing because of the impact of cybersecurity. To take one example, Philips Lighting, now called Signify, has moved from selling "light bulbs" to urban areas to selling "lighting-as-a-service" to smart cities. This "as-a-service" addition to many business models is the key issue in many domains: revenues move from manufacturing goods to selling updates and support over the longer term. Employment and employees will change to support this. But cybersecurity also becomes the fundamental platform enabling "everything-as-a-service", because you must be able to talk to these devices through their total lifecycle, which includes management, updates, and billing for device use.


What the IoT offers compared to earlier machine-to-machine (M2M) connections is (potentially) quite valuable: these new business models sit on top of it. This is evolving across all industry sectors, becoming health-as-a-service, lighting-as-a-service, cars-as-a-service, and so on. Uber, Zipcar, and others, for example, developed further as-a-service models, known as the uberization effect. This opened up assets and services through direct contact between buyers and the owners of those objects, facilities, or work services. The physicality of an object takes on a different dimension in the digital world of cyberspace. The ability to interact with different people, different public and private networks, and multiple vendors to deliver different services is the next wave. Under these circumstances, many questions arise:

  • How do you validate who you are working with?


  • How do you isolate data so that it is harder to identify the person (protecting personal as well as company data)? (A minimal sketch of one such technique follows this list.)


  • How do you build entire systems using tech from different vendors who may not talk with each other, may be competitors, and will not share critical information with other people/businesses in the system?


  • How do you build a secure system that may connect to third-party technology or external networks and systems whose configuration you may not know or have access to, but which have access to your enterprise system as part of connection services (such as supply chain business-to-business (B2B) or business-to-customer (B2C) services connected to many suppliers and external companies)?


  • How do you build security for bring-your-own-device (BYOD) or, in current COVID realities, use-your-own-device/work-from-home options?

  • How do you build a framework where different vendors and systems work and cooperate securely?

  • How do you formally prove that this device or system is trustworthy to receive or transfer your data?


  • How do you know that a particular device or system is able to manage your data without having to know everything about an individual user or the information context behind their data?
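
As one illustration of the data-isolation question above, here is a minimal Python sketch (not drawn from any particular product; the key name and record fields are invented) of pseudonymising a customer identifier with a keyed hash before a record is shared with a third-party vendor:

```python
import hmac
import hashlib

# Hypothetical secret held by the data owner; in practice this would
# live in a key-management service, never in source code.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e-mail, customer ID) with a keyed hash.

    Unlike a plain hash, a keyed HMAC cannot be reversed by brute-forcing
    common identifiers without the key, so shared data is harder to link
    back to an individual.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"customer": "alice@example.com", "amount": 42.50}
shared_record = {**record, "customer": pseudonymize(record["customer"])}
print(shared_record)
```

The design point is that the recipient can still join records on the pseudonym, but cannot recover the underlying identity without the key.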




Why Use Automation for Cybersecurity?

Automation is useful in assisting with, and replacing, some cybersecurity tasks, but creativity and experience remain the key factors when it comes to tackling things that cannot easily be programmed and automated. Under these circumstances, not only cybersecurity specialists but also staff and even customers may become human sensors of cybersecurity threats. This concept of the "human-as-a-cybersecurity-sensor" goes a long way, as humans better understand the creativity of the criminal mind and can detect issues where machines fail.


Yet there are areas where machines are useful. One domain where machines are quite successful is where cybersecurity attack vectors can be relatively easily anticipated. Another is where the cybersecurity of a system is so complex that it is difficult for a human expert to monitor. Think, for example, of increasingly complex behaviours with multiple patterns running across many system end-points, such as ATM-attached networks. Such systems may be better monitored by automation, which can work continuously, 24 hours a day, all year round, without rest.
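
To make this concrete, here is a minimal sketch of that kind of tireless baseline monitoring, written in Python with an invented `EndpointMonitor` class; real deployments would use far richer features than a simple event-count z-score:

```python
from collections import deque
from statistics import mean, stdev

class EndpointMonitor:
    """Flag unusual activity volumes on a single end-point (e.g. an ATM).

    Keeps a rolling window of recent per-interval event counts and raises
    an alert when a new count sits far outside the recent baseline. This
    is exactly the dull, always-on checking that automation does well.
    """

    def __init__(self, window: int = 288, threshold: float = 4.0):
        self.history = deque(maxlen=window)  # e.g. 288 five-minute buckets = 24h
        self.threshold = threshold           # alert at N standard deviations

    def observe(self, count: int) -> bool:
        alert = False
        if len(self.history) >= 30:          # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(count - mu) / sigma > self.threshold:
                alert = True
        self.history.append(count)
        return alert

monitor = EndpointMonitor()
for count in [12, 14, 11, 13, 12] * 10 + [95]:   # a sudden burst at the end
    if monitor.observe(count):
        print(f"ALERT: {count} events in one interval, far above baseline")
```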



We Are Not the Only Smart People in the Game

Yet it is very important to note that AI solutions suffer from considerable problems. One of the biggest issues, of course, is that AI and machine intelligence are used not only by the "good folks" but also by adversaries. For example, if we use AI to construct a map of smart honeypots (decoys that look like a real computer unit, system, or entity, complete with applications and data, fooling cybercriminals into thinking they are a legitimate target), adversaries are quite likely to use the same, if not "better trained", AI to detect where these honeypots might be located. Furthermore, new forms of cybersecurity AI automation are being discovered that hackers use to monitor and mimic human activity in a new phase of cyberattacks.
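
For readers unfamiliar with honeypots, the sketch below shows the idea at its most minimal: a Python listener on an otherwise unused port that logs whoever connects. The port number and "login" banner are illustrative assumptions; production honeypots emulate entire services:

```python
import socket
import datetime

# A minimal low-interaction honeypot sketch: listen on a port that should
# see no legitimate traffic and log whoever makes contact.
HOST, PORT = "0.0.0.0", 2323   # hypothetical choice: a telnet-like port

def run_honeypot() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        while True:
            conn, (ip, port) = server.accept()
            with conn:
                stamp = datetime.datetime.now().isoformat()
                print(f"{stamp} contact from {ip}:{port}")
                conn.sendall(b"login: ")   # a tiny bit of bait
                payload = conn.recv(1024)  # whatever the scanner sends
                print(f"{stamp} payload: {payload!r}")

if __name__ == "__main__":
    run_honeypot()
```

Since every connection to such a decoy is suspicious by construction, the logs are high-signal; the arms race described above is about adversarial AI learning to recognise and avoid exactly these decoys.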

The Dangers of a Machine Making Decisions in Cybersecurity

From a defender's perspective, suppose you invest in AI for defence, and after a period of time it blocks access to customers as part of an automated action, causing significant financial losses for those customers and costing millions to resolve, even though it may have saved hundreds of millions up to that point. How do the company's executive board members respond? Very often, they will look at each individual incident rather than the overall picture, as the potential reputational risk takes precedence over other considerations. The response may involve operational issues such as fixing the algorithmic rules, but in other cases the incident may point to an underlying weakness in the use of AI.
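
One way to frame this dilemma is as a simple expected-cost comparison. The figures in the Python snippet below are entirely invented, but they illustrate why judging each incident in isolation can diverge from the aggregate picture:

```python
# A back-of-the-envelope model of the board's dilemma above: compare the
# total cost of running the automated blocker against not running it.
# All figures are assumptions for illustration only.

prevented_losses = 300_000_000  # attack losses avoided over the period
incident_cost    = 5_000_000    # one bad automated block: refunds, fixes
incident_count   = 3            # false-positive lockouts in the period
reputation_cost  = 20_000_000   # assumed long-tail churn per incident

with_ai    = incident_count * (incident_cost + reputation_cost)
without_ai = prevented_losses

print(f"Cost with automation:    {with_ai:>13,}")
print(f"Cost without automation: {without_ai:>13,}")
print("Automation still pays off" if with_ai < without_ai
      else "Automation costs more than it saves")
```

A board reacting to each incident sees only the 25-million-per-event line; the aggregate view sees the 300 million that never left the building.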


This is a recurring issue when using statistical methods to "learn" automated rules in AI. You need to train the machine learning algorithms, but very often AI algorithms develop rules that evolve over time to do things that are not in alignment with the original policies in place at the time of set-up. A well-known example is the training of self-driving vehicles, where the algorithms may not be fit for purpose in all scenarios, which may result in damage or even loss of life. This may come from a combination of (i) poorly defined data dimensions in the learning model, leaving it unable to respond adequately to achieve its objective (an objective could be "do not hit an object"); (ii) a feature of learning from training data that fails to recognise a state change in its environment; and (iii) a failure in its ability to automate sensory feedback control appropriately for the given objective function. These examples, along with many others, are new phenomena emerging from the field of machine learning and artificial intelligence that create new risks. These risks may be closely linked to cyberattacks and cyber defence, where the machine learning data or algorithms may be compromised; alternatively, the ML algorithm itself may be used to carry out a cyberattack.
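
Failure mode (ii), a model that does not notice its environment has changed, is at least partly detectable in practice. Below is a minimal drift check, assuming scipy and numpy are available and using synthetic data, that compares the training distribution of a feature with what the model now sees in production:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values the model was trained on (synthetic stand-in for some
# score or measurement the classifier relies on).
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)

# The same feature observed in production after the environment changed:
# the mean has drifted, so the learned rules may no longer apply.
live_feature = rng.normal(loc=0.8, scale=1.0, size=5_000)

# A two-sample Kolmogorov-Smirnov test asks whether the two samples
# plausibly come from the same distribution.
statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): retrain/review")
else:
    print("No significant drift detected")
```

Such checks do not fix a misaligned objective, but they flag when the assumptions behind the learned rules have stopped holding.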


There are also situations and domains where the use of AI is simply inadequate. Think, for example, about security disruptions using fake news. It is incredibly difficult to employ automation to prevent fake news because of the semantic nuances involved. Automation will eventually be able to work through these complex challenges, but it remains a matter of equilibrium between constantly evolving threats and responses. This is a struggle that is unlikely to disappear any time soon.





Takeaways


To sum up, some degree of automation in cybersecurity may be beneficial in well-researched, highly predictable domains, where plenty of historical data is available and the context is either irrelevant or can easily be modelled. Yet in many circumstances we need to rely on the human ability to detect threats. This is why training your inner "human-as-a-cybersecurity-sensor" ability, and keeping it in check, is incredibly important.




This post was originally written by Ganna Pogrebna for the CyberBits blog in 2020
