Contributions

Call for Papers: Symposium on Security and Privacy in Speech Communication

Call for papers to be presented at the

1st Symposium on Security and Privacy in Speech Communication

Online, November 10-12, 2021

 

The first edition of the SPSC Symposium aims to lay the building blocks needed to address the question of how researchers and practitioners might bridge the gap between social perceptions and their technical counterparts with respect to what it means for our voices and speech to be secure and private.

The symposium brings together researchers and practitioners across multiple disciplines: signal processing, cryptography, security, human-computer interaction, law, and anthropology. By integrating different disciplinary perspectives on speech-enabled technology and applications, the SPSC Symposium opens opportunities to collect and merge input regarding technical and social practices, and to develop a deeper understanding of the situated ethics at play. The SPSC Symposium thus addresses inherently interdisciplinary topics.

For more details, see the CFP.


Topics of Interest:
Topics regarding the technical perspective include but are not limited to:
  • Speech Communication
  • Cyber security
  • Machine Learning
  • Natural Language Processing
Topics regarding the societal view include but are not limited to:
  • Human-Computer Interfaces (Speech as Medium)
  • Ethics & Law
  • Digital Humanities
We welcome contributions on related topics, as well as progress reports, project disseminations, theoretical discussions, and work in progress. There is also a dedicated PhD track. In addition, guests from academia, industry, and public institutions, as well as interested students, are welcome to attend the conference without making a contribution of their own. All accepted submissions will appear in the conference proceedings published in the ISCA Archive.

Submission:
Papers intended for the SPSC Symposium should be up to four pages of text. An optional fifth page can be used for references only. Paper submissions must conform to the format defined in the paper preparation guidelines and as detailed in the author’s kit. Papers must be submitted via the online paper submission system. The working language of the conference is English, and papers must be written in English.

Reviews:
All submissions share the same registration deadline (with one week for submission updates afterwards). Each submission receives at least three single-blind reviews; we aim to obtain feedback from interdisciplinary experts for every submission.

Important dates:
Paper submission opens:    April 10, 2021
Paper submission deadline: June 30, 2021
Author notification:       September 5, 2021
Final paper submission:    October 5, 2021
SPSC Symposium:            November 10-12, 2021

Contact:
For further details, contact mail@spsc-symposium2021.de.

Presentations at Four Academic Conferences

Our colleagues at Fraunhofer AISEC have presented four papers at academic conferences in recent months. Click on the titles below to learn more about the individual contributions.

This paper was presented at the DYNAMICS workshop on December 7, 2020, held at the Annual Computer Security Applications Conference (ACSAC).

The paper can be downloaded here.

Authors: Philip Sperl and Konstantin Böttinger

Abstract: Neural Networks (NNs) are vulnerable to adversarial examples. Such inputs differ only slightly from their benign counterparts yet provoke misclassifications of the attacked NNs. The perturbations required to craft the examples are often negligible and even imperceptible to humans. To protect deep learning-based systems from such attacks, several countermeasures have been proposed, with adversarial training still being considered the most effective. Here, NNs are iteratively retrained using adversarial examples, a computationally expensive and time-consuming process that often leads to a performance decrease. To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call entropic retraining. Based on an information-theoretically inspired analysis, entropic retraining mimics the effects of adversarial training without the need for the laborious generation of adversarial examples. We empirically show that entropic retraining leads to a significant increase in NNs’ security and robustness while relying only on the given original data. With our prototype implementation we validate and show the effectiveness of our approach for various NN architectures and data sets.
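As a rough illustration only, the following minimal PyTorch sketch shows how an entropy-based penalty could be added to an ordinary training loop that sees nothing but the original data. This is not the authors' entropic retraining; the toy model SmallNet, the helper retrain_with_entropy_term, and the weight lam are assumptions made purely for the example.

```python
# Illustrative only: entropy-regularized retraining on the original data.
# This is NOT the paper's "entropic retraining"; model, loss weight, and
# hyperparameters are assumptions made for this sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """Toy classifier, used only to make the sketch self-contained."""

    def __init__(self, in_dim: int = 20, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def retrain_with_entropy_term(model, loader, epochs=3, lam=0.1, lr=1e-3):
    """Retrain on the given data, adding an output-entropy penalty to the loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            logits = model(x)
            ce = F.cross_entropy(logits, y)
            probs = F.softmax(logits, dim=1)
            # Shannon entropy of the predictive distribution, averaged over the batch.
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
            loss = ce + lam * entropy  # joint objective, no adversarial examples needed
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


if __name__ == "__main__":
    x = torch.randn(256, 20)
    y = torch.randint(0, 3, (256,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=32, shuffle=True
    )
    retrain_with_entropy_term(SmallNet(), loader)
```

The sketch is meant to show the data flow only: no adversarial examples are generated, and the extra term merely penalizes high output entropy during retraining on the given data.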

The second paper was also presented at the Annual Computer Security Applications Conference (ACSAC) 2020.

Authors: Karla Markert, Romain Parracone, Philip Sperl, and Konstantin Böttinger.

Abstract: Security of automatic speech recognition (ASR) is becoming ever more important as such systems increasingly influence our daily life, notably through virtual assistants. Most of today’s ASR systems are based on neural networks, and their vulnerability to adversarial examples has become a great matter of research interest. In parallel, research on neural networks in the image domain has progressed, including methods for explaining their predictions. New concepts, referred to as attribution methods, have been developed to visualize regions in the input domain that strongly influence the image’s classification. In this paper, we apply two visualization techniques to the ASR system Deepspeech and show significant visual differences between benign data and adversarial examples. With our approach we take first steps towards explaining ASR systems, enabling an understanding of their decision process.
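To make the notion of attribution maps a bit more concrete, here is a minimal PyTorch sketch of a plain gradient-based saliency map, one of the simplest attribution methods, applied to a toy audio classifier. It does not reproduce the visualization techniques from the paper and does not involve Deepspeech; ToyAudioClassifier and the random "adversarial" perturbation are placeholders chosen only so the example runs.

```python
# Illustrative only: a plain gradient-based saliency map for a toy audio model.
# Not the paper's attribution pipeline; the model and data are placeholders.
import torch
import torch.nn as nn


class ToyAudioClassifier(nn.Module):
    """Stand-in for an ASR/audio model so the sketch runs end to end."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=9, stride=4)
        self.head = nn.Linear(8, num_classes)

    def forward(self, x):  # x: (batch, samples)
        h = torch.relu(self.conv(x.unsqueeze(1)))
        return self.head(h.mean(dim=2))


def saliency(model, audio):
    """Return |d max-logit / d input| per sample: a basic attribution map."""
    audio = audio.clone().detach().requires_grad_(True)
    logits = model(audio)
    logits.max(dim=1).values.sum().backward()
    return audio.grad.abs()


if __name__ == "__main__":
    model = ToyAudioClassifier()
    benign = torch.randn(1, 16000)  # one second of placeholder audio
    perturbed = benign + 0.01 * torch.randn_like(benign)  # stand-in "adversarial" input
    # Comparing the two maps (e.g. by plotting them) is where visual differences
    # between benign and adversarial inputs would show up.
    print(saliency(model, benign).shape, saliency(model, perturbed).shape)
```

In the paper's setting, the interesting signal is how such maps differ visually between benign recordings and actual adversarial examples.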

This paper was presented at the 4th ACM Computer Science in Cars Symposium (ACM CSCS 2020).

Authors: Karla Markert, Donika Mirdita, and Konstantin Böttinger

Abstract: Voice control systems in vehicles offer great advantages for drivers, in particular more comfort and increased safety while driving. As they are continuously enhanced, they are planned to provide convenient access to the networked home via external interfaces. At the same time, this far-reaching control enables new attack vectors and opens doors for cyber criminals. Any attack on the voice control systems concerns the safety of the car as well as the confidentiality and integrity of the user’s private data. For this reason, the analysis of targeted attacks on automatic speech recognition (ASR) systems, which extract the information necessary for voice control systems, is of great interest. The literature so far has only dealt with attacks on English ASR systems. Since most drivers interact with the voice control system in their mother tongue, it is important to study language-specific characteristics in the generation of so-called adversarial examples: manipulated audio data that trick ASR systems. In this paper, we provide a short overview of recent literature to discuss the language bias towards English in current research. Our preliminary findings underline that there are differences in the vulnerability of a German and an English ASR system.

The fourth paper was already presented in September at the IEEE European Symposium on Security and Privacy 2020.

The paper can be downloaded here.

Authors: Philip Sperl, Ching-Yu Kao, Peng Chen, Xiao Lei, and Konstantin Boettinger

Abstract: In this paper, we present a novel end-to-end framework to detect adversarial attacks during classification without influencing the target model’s performance. Inspired by recent research in neuron-coverage guided testing, we show that dense layers of DNNs carry security-sensitive information. With a secondary DNN we analyze the activation patterns of the dense layers during classification runtime, which enables effective and real-time detection of adversarial examples. This approach has the advantage of leaving the already trained target model and its classification accuracy unchanged. Protecting vulnerable DNNs with such detection capabilities significantly improves robustness against state-of-the-art attacks. Our prototype implementation successfully detects adversarial examples in image, natural language, and audio processing. Thereby, we cover a variety of target DNNs, including Long Short-Term Memory (LSTM) architectures. In addition to effectively defending against state-of-the-art attacks, our approach generalizes between different sets of adversarial examples. Thus, our method most likely enables us to detect even future, yet unknown attacks.
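A minimal PyTorch sketch of the general idea described above, under several assumptions: the target model, its layer names, and the detector architecture below are invented for illustration and are not the paper's actual setup. A forward hook captures the dense-layer activations of a frozen target model, and a small secondary network classifies those activations.

```python
# Illustrative only: activation-based detection with a secondary classifier.
# The target model, layer names, and detector below are assumptions, not the
# architecture used in the paper.
import torch
import torch.nn as nn


class Target(nn.Module):
    """Toy target model; `dense` stands in for a security-sensitive dense layer."""

    def __init__(self):
        super().__init__()
        self.dense = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
        self.out = nn.Linear(64, 10)

    def forward(self, x):
        return self.out(self.dense(x))


target = Target().eval()  # the already-trained model stays unchanged
activations = {}


def hook(_module, _inputs, output):
    # Capture the dense-layer activations on every forward pass.
    activations["dense"] = output.detach()


target.dense.register_forward_hook(hook)

# Secondary network that labels activation patterns as benign (0) or adversarial (1).
detector = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))


def detect(x):
    """Run the frozen target, then classify its dense-layer activations."""
    with torch.no_grad():
        _ = target(x)  # fills `activations` via the hook
    return detector(activations["dense"]).argmax(dim=1)


if __name__ == "__main__":
    print(detect(torch.randn(4, 20)))  # detector is untrained here: data flow only
```

In practice the secondary network would be trained on activations collected from known benign and adversarial inputs; the sketch leaves it untrained and only shows how the activations reach the detector while the target model itself stays untouched.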