Identifying AI security threats




As artificial intelligence (AI) deployments continue to expand, securing AI becomes vital: AI applications face multiple sophisticated threats.

Advances in artificial intelligence (AI) are fostering a wide range of automation services and autonomous systems; meanwhile, security threats against AI are also evolving rapidly. To help developers build secure AI-based services and systems, the European Telecommunications Standards Institute (ETSI) Industry Specification Group on Securing Artificial Intelligence (ISG SAI) has analysed the problem of securing AI and the corresponding mitigations, and has recently released two group reports.

The machine learning (ML) lifecycle is captured in the ETSI ISG SAI GR-004 report, as shown in Fig. 1, with the implementation and deployment stages studied in particular for their security implications. The report also identifies and characterizes four attack types to raise awareness of potential security threats. Existing mitigations are analysed and summarized in the ETSI ISG SAI GR-005 report; they are classified as either model enhancement or model-agnostic, according to whether or not the deployed mitigation modifies the AI model itself. Service developers and system deployers can therefore define a mitigation strategy suited to their specific application scenarios.


Figure 1: Typical machine learning lifecycle, ETSI SAI GR-004. (Source: ETSI)

Poisoning attack

One of the identified attack types, the poisoning attack, manipulates training data to degrade the performance of AI-based services. Such attacks can either degrade the overall performance of the system or cause specific, intended misclassifications. Poisoning attacks have a history dating back to early spam-filter services: AI-enabled anti-spam filters continuously learned from email recipients’ reactions to refine the filtering function, as recipients marked normal e-mails as spam or restored e-mails misclassified as spam back to normal.

Attackers exploit this learning process by modifying email content. For example, carefully inserting crafted, normal-looking content into spam emails misleads the anti-spam filter into treating that content as an indicator of spam. Later, legitimate emails containing the same content may be classified as spam, and filtering performance can degrade to the point that users disable the anti-spam service.
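To make the mechanism concrete, the following toy sketch (not taken from the ETSI reports; the emails and phrases are invented for illustration) trains a naive Bayes spam filter with scikit-learn, once on clean feedback and once on feedback poisoned with spam that deliberately contains ordinary business phrases. After poisoning, a legitimate email using those phrases is flagged as spam.

```python
# Toy illustration of data poisoning against a naive Bayes spam filter.
# The emails and phrases are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean_training = [
    ("win a free prize now", "spam"),
    ("cheap pills limited offer", "spam"),
    ("quarterly report attached for review", "ham"),
    ("meeting agenda for tomorrow", "ham"),
]
# The attacker seeds new spam with ordinary business wording so the filter
# starts to associate those benign phrases with the spam class.
poisoned_training = clean_training + [
    ("free prize quarterly report meeting agenda", "spam"),
    ("limited offer meeting agenda quarterly report", "spam"),
]

def train(samples):
    texts, labels = zip(*samples)
    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)
    return vectorizer, model

legitimate_email = ["please see the quarterly report and meeting agenda"]

for name, data in [("clean", clean_training), ("poisoned", poisoned_training)]:
    vectorizer, model = train(data)
    print(name, "->", model.predict(vectorizer.transform(legitimate_email))[0])
# clean -> ham
# poisoned -> spam  (the benign phrases now carry spam weight)
```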

To mitigate this kind of attack, data quality is key. As shown in Fig. 2 (from the ETSI ISG SAI GR-005 report), available mitigation approaches against poisoning attacks include enhancing data quality across the data supply chain, sanitizing training datasets, and blocking poisoning during the training process with techniques such as gradient shaping.


Figure 2: Mitigation approaches against poisoning attacks, ETSI ISG SAI GR-005. (Source: ETSI)

In the spam-filter example, two mitigation approaches, data sanitization and blocking poisoning, can be applied together. Instead of feeding received emails and recipients’ reactions into the learning process on arrival, they are preserved and fed in periodically as batches, a variant of blocking poisoning that slows down the process. For the batch collected in each period, data sanitization techniques can then filter out suspicious content before it is learned, as in the sketch below.
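The short sketch below shows one way these two defences compose (the class name and the sanitization heuristic are illustrative assumptions, not from the report): feedback is buffered, messages that received conflicting labels within the batch are dropped as a simple sanitization step, and only then is the batch passed to a retraining routine.

```python
# Sketch of buffered, sanitized retraining for a spam filter.
# Names and the sanitization heuristic are illustrative assumptions.

BATCH_PERIOD = 1000  # retrain only after this many feedback events

class BufferedFeedbackTrainer:
    def __init__(self, retrain_fn):
        self.retrain_fn = retrain_fn  # callable(list[tuple[str, str]]) -> new model
        self.buffer = []              # (email_text, recipient_label) awaiting training

    def on_feedback(self, email_text, recipient_label):
        # Blocking poisoning by slowing the loop: store feedback, don't learn yet.
        self.buffer.append((email_text, recipient_label))
        if len(self.buffer) < BATCH_PERIOD:
            return None

        # Data sanitization: discard messages that received conflicting labels
        # within the batch, a simple proxy for manipulated or noisy feedback.
        labels_seen = {}
        for text, label in self.buffer:
            labels_seen.setdefault(text, set()).add(label)
        clean_batch = [(t, y) for t, y in self.buffer if len(labels_seen[t]) == 1]

        self.buffer.clear()
        return self.retrain_fn(clean_batch)  # periodic batch update on sanitized data

# Usage: trainer = BufferedFeedbackTrainer(retrain_fn=update_spam_model)
# and call trainer.on_feedback(text, label) for every recipient reaction.
```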

Evasion attack

Another identified attack type, the evasion attack, is well known to the cybersecurity community in the form of malware obfuscation. Here, an adversary uses manipulated inputs to evade a deployed model at inference time, causing the model to produce an incorrect classification.

For many years, malware authors have tried to avoid detection by signature-based virus engines by obfuscating their malware, mainly through encryption. As automated malware detectors based on static analysis have evolved from simple signature-based approaches to more complex heuristic techniques employing machine learning, the need for detectors that are robust to obfuscation has never been greater.

Android usage statistics from Google show there are over 2 billion active devices in use globally each month, with 82 billion apps and games downloaded every year. With momentum building for its use in IoT-connected devices and vehicles, Android implementations are a frequent and increasing target of malware attacks. 

Obfuscation is a challenging problem for current detection systems, with Android malware authors regularly using techniques such as encryption, reflection and reference renaming. These aim to disguise and camouflage malicious functionality in an app, tricking a model into classifying it as benign. For example, obfuscation is almost universally employed to hide API usage, and the use of encryption algorithms is five times higher in malicious apps than in benign ones. Furthermore, an analysis of 76 malware families found that almost 80% of apps use at least one obfuscation technique.

The problem is compounded by the fact that many benign apps are obfuscated to protect intellectual property. More recently, there has been growing research interest in using adversarial learning to create more sophisticated obfuscation techniques, such as metamorphic obfuscation, in which the obfuscated malware retains the functionality of the original.

There are several approaches to mitigating the obfuscation attack. First, one can employ adversarial training, augmenting the original training data with obfuscated versions of the same malicious and benign apps and annotating each sample with two labels: malicious or benign, and obfuscated or unobfuscated. An issue with adversarial training is that it reduces the model’s ability to generalise to unseen unobfuscated samples during deployment. One approach to overcoming this is to use a Discriminative Adversarial Network (DAN) that contains two classification branches with different cost functions (Fig. 3).


Figure 3: Discriminative adversarial architecture with conventional learning on the upper classification branch for malware detection and adversarial unlearning of obfuscation. (Source: Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy, pp. 353-364)

The DAN employs the adversarial learning aspect of a Generative Adversarial Network (GAN), but instead of training a generative model, two discriminators are trained: one for malware and another for obfuscation. The cost function for malware detection is minimised using standard gradient descent to maximise classification accuracy. In contrast, the obfuscation branch’s cost function is maximised using stochastic gradient ascent, resulting in a classifier whose accuracy on obfuscation is no better than chance. The motivation is that the DAN learns features that are inherently useful for malware detection while remaining ignorant of obfuscation; such features can generalise better to unseen unobfuscated samples.
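A minimal PyTorch sketch of this idea follows (it is not the implementation from the cited paper; the layer sizes and loss weighting are assumptions). A shared feature extractor feeds a malware head trained by ordinary gradient descent and an obfuscation head whose gradient is reversed on the backward pass, one standard way to realise the gradient-ascent branch, so that the shared features support malware detection while carrying little information about obfuscation.

```python
# Sketch of a discriminative adversarial network with a gradient-reversal layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, so minimising the obfuscation loss downstream maximises it upstream."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DAN(nn.Module):
    def __init__(self, n_features, n_hidden=128, lam=1.0):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.malware_head = nn.Linear(n_hidden, 2)      # malicious vs benign
        self.obfuscation_head = nn.Linear(n_hidden, 2)  # obfuscated vs unobfuscated

    def forward(self, x):
        z = self.features(x)
        return (self.malware_head(z),
                self.obfuscation_head(GradReverse.apply(z, self.lam)))

def train_step(model, optimiser, x, y_malware, y_obfuscation):
    # Both heads minimise cross-entropy, but the reversed gradient pushes the
    # shared features toward chance-level obfuscation prediction.
    optimiser.zero_grad()
    logits_m, logits_o = model(x)
    loss = F.cross_entropy(logits_m, y_malware) + F.cross_entropy(logits_o, y_obfuscation)
    loss.backward()
    optimiser.step()
    return loss.item()
```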

Finally, whilst there has been a great deal of recent research into adversarial attacks and mitigations, the number of real-world use cases has been small, with many viewing the area as an academic exercise that provides greater insight into how deep learning works (or breaks). On the other hand, the ever-increasing use of machine learning in cybersecurity tools such as anti-virus engines, software and web-application vulnerability detection, and host and network intrusion detection places cybersecurity on the front line between adversarial attackers and defenders of artificial intelligence.

Securing AI is essential to the development and deployment of AI-based services and systems, and more study is needed. As an initial step, ETSI ISG SAI started this work in 2019 and has published two group reports: a problem statement on securing AI and a mitigation strategy report. More results from ETSI ISG SAI can be expected in the near future.


Paul Miller is currently Deputy Director of the Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast. From 1989 to 2018, he was a Research Fellow and Scientist in Northern Ireland and Australia. Dr Miller’s main research focus is the application of artificial intelligence to cyber and physical security; his specific areas of expertise are probabilistic modelling, deep learning neural networks, graph mining and evidential reasoning networks. He is an active member and participant of ETSI ISG SAI. He received B.Sc. (Hons) and Ph.D. degrees in pure and applied physics from Queen’s University Belfast in 1985 and 1989, respectively.
Hsiao-Ying Lin serves as the rapporteur of the Mitigation Strategy Report within ETSI ISG SAI. She works at Huawei as a senior researcher and, in her spare time, serves as the AI/ML column editor for IEEE Computer magazine. Her current research interests include trustworthy AI, data security, and security issues in the automotive domain. She received MS and PhD degrees in computer science from National Chiao Tung University, Taiwan, in 2005 and 2010, respectively.





