Contributions

Presentations at four academic conferences

In recent months, our colleagues at Fraunhofer AISEC presented four papers at academic conferences. Click on the titles below to learn more about the individual contributions.

This paper was presented at the DYNAMICS workshop on December 7, 2020, at the Annual Computer Security Applications Conference (ACSAC).

The paper can be downloaded here.

Authors: Philip Sperl and Konstantin Böttinger

Abstract: Neural Networks (NNs) are vulnerable to adversarial examples. Such inputs differ only slightly from their benign counterparts yet provoke misclassifications of the attacked NNs. The perturbations required to craft the examples are often negligible and even imperceptible to humans. To protect deep learning-based systems from such attacks, several countermeasures have been proposed, with adversarial training still being considered the most effective. Here, NNs are iteratively retrained using adversarial examples, forming a computationally expensive and time-consuming process that often leads to a performance decrease. To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call entropic retraining. Based on an information-theoretically inspired analysis, entropic retraining mimics the effects of adversarial training without the need for the laborious generation of adversarial examples. We empirically show that entropic retraining leads to a significant increase in NNs’ security and robustness while only relying on the given original data. With our prototype implementation we validate and show the effectiveness of our approach for various NN architectures and data sets.
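
The abstract does not spell out the training objective, so the following sketch is only a rough illustration of the general idea: retraining on the original data with an additional entropy-based regularizer on hidden activations. The model, the `activation_entropy` term, and the weighting `lam` are hypothetical placeholders, not the method from the paper.

```python
# Hypothetical sketch of an entropy-regularized retraining loop (PyTorch).
# The actual entropic-retraining objective is defined in the paper; here we
# simply add an activation-entropy term to the standard loss as a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier whose hidden activations we regularize."""
    def __init__(self, in_dim=784, hidden=128, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        h = F.relu(self.fc1(x))          # hidden (dense-layer) activations
        return self.fc2(h), h

def activation_entropy(h):
    # Shannon entropy of the normalized activation distribution per sample
    # (an assumption; the paper derives its own information-theoretic term).
    p = F.softmax(h, dim=1)
    return -(p * torch.log(p + 1e-12)).sum(dim=1).mean()

def entropic_retraining_step(model, optimizer, x, y, lam=0.1):
    optimizer.zero_grad()
    logits, hidden = model(x)
    loss = F.cross_entropy(logits, y) + lam * activation_entropy(hidden)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = SmallNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 784)             # stand-in for the original training data
    y = torch.randint(0, 10, (32,))
    print(entropic_retraining_step(model, opt, x, y))
```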

The second paper was also presented at the Annual Computer Security Applications Conference (ACSAC) 2020.

Authors: Karla Markert, Romain Parracone, Philip Sperl and Konstantin Böttinger

Abstract: Security of automatic speech recognition (ASR) is becoming ever more important as such systems increasingly influence our daily life, notably through virtual assistants. Most of today’s ASR systems are based on neural networks and their vulnerability to adversarial examples has become a great matter of research interest. In parallel, the research for neural networks in the image domain has progressed, including methods for explaining their predictions. New concepts, referred to as attribution methods, have been developed to visualize regions in the input domain that strongly influence the image’s classification.  In this paper, we apply two visualization techniques to the ASR system Deepspeech and show significant visual differences between benign data and adversarial examples. With our approach we make first steps towards explaining ASR systems, enabling the understanding of their decision process.
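
To give a feel for what gradient-based attribution on audio input looks like, the sketch below computes a simple saliency map for a toy audio model. It only stands in for the visualization techniques applied to Deepspeech in the paper; the model and pooling are made up for illustration.

```python
# Minimal sketch of a gradient-based attribution (saliency) map for an audio
# model. The tiny Conv1d "ASR" model below is a placeholder, not Deepspeech.
import torch
import torch.nn as nn

class ToyAudioModel(nn.Module):
    def __init__(self, classes=29):                   # e.g. characters incl. blank
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=11, stride=2)
        self.head = nn.Linear(16, classes)

    def forward(self, wav):                            # wav: (batch, samples)
        h = torch.relu(self.conv(wav.unsqueeze(1)))    # (batch, 16, frames)
        return self.head(h.mean(dim=2))                # crude pooled logits

def saliency(model, wav, target_class):
    """Gradient of the target logit w.r.t. the raw waveform (|d logit / d x|)."""
    wav = wav.clone().requires_grad_(True)
    logits = model(wav)
    logits[:, target_class].sum().backward()
    return wav.grad.abs()                              # high values = influential samples

model = ToyAudioModel()
wav = torch.randn(1, 16000)                            # one second of fake 16 kHz audio
attr = saliency(model, wav, target_class=0)
print(attr.shape, attr.max())
```

Regions of the waveform with large attribution values are the ones the model relied on most; the paper compares such maps for benign and adversarial inputs.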

This paper was presented at the 4th ACM Computer Science in Cars Symposium (ACM CSCS 2020).

Authors: Karla Markert, Donika Mirdita and Konstantin Böttinger

Abstract: Voice control systems in vehicles offer great advantages for drivers, in particular more comfort and increased safety while driving. As they are continuously enhanced, they are planned to provide convenient access to the networked home via external interfaces. At the same time, this far-reaching control enables new attack vectors and opens doors for cyber criminals. Any attack on the voice control system concerns the safety of the car as well as the confidentiality and integrity of the user’s private data. For this reason, the analysis of targeted attacks on automatic speech recognition (ASR) systems, which extract the information necessary for voice control, is of great interest. The literature so far has only dealt with attacks on English ASR systems. Since most drivers interact with the voice control system in their mother tongue, it is important to study language-specific characteristics in the generation of so-called adversarial examples: manipulated audio data that trick ASR systems. In this paper, we provide a short overview of recent literature to discuss the language bias towards English in current research. Our preliminary findings underline that there are differences in the vulnerability of a German and an English ASR system.
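
For readers unfamiliar with adversarial audio examples, the sketch below illustrates the basic principle with a single targeted gradient step (FGSM-style) on a toy model. Real attacks on ASR systems, in any language, optimize iteratively over the full transcription loss (e.g. CTC), so this is only a simplified, hypothetical stand-in.

```python
# Simplified illustration of crafting an adversarial audio example with one
# targeted signed-gradient step. The toy classifier and the target label are
# placeholders; production ASR attacks are iterative and far more involved.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAudioClassifier(nn.Module):
    def __init__(self, classes=10):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=9, stride=4)
        self.head = nn.Linear(8, classes)

    def forward(self, wav):
        h = torch.relu(self.conv(wav.unsqueeze(1)))
        return self.head(h.mean(dim=2))

def targeted_fgsm(model, wav, target, eps=0.002):
    """One signed-gradient step toward the attacker-chosen target label."""
    wav = wav.clone().requires_grad_(True)
    loss = F.cross_entropy(model(wav), target)
    loss.backward()
    # Step against the gradient to lower the loss for the target label while
    # keeping the perturbation small (and ideally inaudible).
    return (wav - eps * wav.grad.sign()).detach().clamp(-1.0, 1.0)

model = ToyAudioClassifier()
wav = torch.rand(1, 16000) * 2 - 1                  # fake waveform in [-1, 1]
adv = targeted_fgsm(model, wav, target=torch.tensor([3]))
print((adv - wav).abs().max())                      # perturbation stays within eps
```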

The fourth paper had already been presented in September at the IEEE European Symposium on Security and Privacy 2020.

The paper can be downloaded here.

Authors: Philip Sperl, Ching-Yu Kao, Peng Chen, Xiao Lei and Konstantin Böttinger

Abstract: In this paper, we present a novel end-to-end framework to detect such attacks during classification without influencing the target model’s performance. Inspired by recent research in neuron-coverage-guided testing, we show that dense layers of DNNs carry security-sensitive information. With a secondary DNN we analyze the activation patterns of the dense layers during classification runtime, which enables effective and real-time detection of adversarial examples. This approach has the advantage of leaving the already trained target model and its classification accuracy unchanged. Protecting vulnerable DNNs with such detection capabilities significantly improves robustness against state-of-the-art attacks. Our prototype implementation successfully detects adversarial examples in image, natural language, and audio processing. Thereby, we cover a variety of target DNNs, including Long Short-Term Memory (LSTM) architectures. In addition to effectively defending against state-of-the-art attacks, our approach generalizes between different sets of adversarial examples. Thus, our method most likely enables us to detect even future, yet unknown attacks.
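
The sketch below illustrates the core idea of analyzing dense-layer activations with a secondary DNN: the target model stays frozen, its dense-layer activations are recorded at classification time, and a small binary classifier flags inputs as benign or adversarial. The models, layer sizes, and detector architecture are illustrative assumptions, not the configuration used in the paper.

```python
# Rough sketch of dense-layer analysis: record the dense-layer activations of
# the (frozen) target model during classification and feed them to a secondary
# binary classifier that flags adversarial inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 64)
        self.out = nn.Linear(64, 10)

    def forward(self, x):
        a1 = F.relu(self.fc1(x))
        a2 = F.relu(self.fc2(a1))
        # Return the prediction *and* the dense-layer activations the detector uses.
        return self.out(a2), torch.cat([a1, a2], dim=1)

class Detector(nn.Module):
    """Secondary DNN: benign (0) vs. adversarial (1) from activation patterns."""
    def __init__(self, feat_dim=256 + 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, feats):
        return self.net(feats)

target, detector = TargetModel(), Detector()
x = torch.randn(4, 784)                       # stand-in inputs at classification time
with torch.no_grad():                         # the target model itself stays unchanged
    logits, feats = target(x)
alarm = detector(feats).argmax(dim=1)         # 1 = flagged as adversarial
print(logits.argmax(dim=1), alarm)
```

In practice the detector would be trained on activation patterns collected from benign inputs and known adversarial examples, which is what lets the approach leave the target model untouched.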

 

DLA: Dense-Layer-Analysis for Adversarial Example Detection

Abstract

In recent years, Deep Neural Networks (DNNs) have achieved remarkable results and even shown superhuman capabilities in a broad range of domains. This has led people to trust DNN classifications even in security-sensitive environments like autonomous driving. Despite their impressive achievements, DNNs are known to be vulnerable to adversarial examples. Such inputs contain small perturbations that intentionally fool the attacked model. In this paper, we present a novel end-to-end framework to detect such attacks without influencing the target model’s performance. Inspired by research in neuron-coverage-guided testing, we show that dense layers of DNNs carry security-sensitive information. With a secondary DNN we analyze the activation patterns of the dense layers during classification run-time, which enables effective and real-time detection of adversarial examples. Our prototype implementation successfully detects adversarial examples in image, natural language, and audio processing. Thereby, we cover a variety of target DNN architectures. In addition to effectively defending against state-of-the-art attacks, our approach generalizes between different sets of adversarial examples. Our experiments indicate that we are able to detect future, yet unknown, attacks. Finally, we show that even during white-box adaptive attacks our method cannot be easily bypassed.

Authors

Philip Sperl, Ching-Yu Kao, Peng Chen, Xiao Lei, Konstantin Böttinger (Fraunhofer AISEC)

Conference

IEEE European Symposium on Security and Privacy 2020, September 7-11, 2020, virtual
