Seminar on Auditory Interfaces

Half-day seminar on auditory interfaces


To register for this event you must be logged in as a member of ASIP-NET.

Time and place

February 15, 2012, 12:30-17:15
Meeting room 1, 2nd floor, Building 101A
Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

The seminar is free of charge but limited to 105 participants.



Lecture Slides

Download lecture slides from the file archive



The use of brain scanning techniques such as EEG enables direct quantification of responses to auditory stimuli. This facilitates the design of auditory equipment and interactive devices.

The seminar is intended for everyone interested in new auditory interface techniques.






Welcome by Jan Larsen, DTU Informatics, Denmark


Auditory Brain-Computer Interfaces

Dr. Michael Tangermann

Dept. of Machine Learning, Berlin Institute of Technology, Germany


Utilizing the auditory sense for the control of a Brain-Computer Interface (BCI) is a relatively recent research area. Compared to the strong visual event-related potentials (ERPs) of the EEG, auditory stimuli typically elicit much weaker ERP responses, which was a drawback for early auditory BCI approaches.

During the last two years, however, the field has improved markedly thanks to new experimental paradigms that exploit the ability of users to spatially attend to target stimuli. Combined with modern machine learning methods, the single-trial analysis of ongoing brain activity has opened up a growing number of application areas, e.g. communication and control for patients with reduced deliberate motor control (including gaze control), or the online monitoring of (spatial) auditory attention as a tool for the neurosciences and for the development of assistive technology.

In my talk, I will first introduce the basics of auditory ERPs and then present state-of-the-art research on online spelling with auditory BCI systems.
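The single-trial ERP analysis mentioned above can be illustrated with a minimal synthetic sketch. Everything here is an assumption for illustration only: the P300-like template, the noise level, and the simple template-matching classifier are not the actual methods used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 100          # trials per class, samples per epoch
t = np.linspace(0, 0.8, n_samples)      # 0.8 s epoch

# Hypothetical ERP: target epochs carry a P300-like positive deflection
p300 = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

noise = lambda: rng.normal(0, 1.0, (n_trials, n_samples))
targets = noise() + p300                # attended (target) epochs
nontargets = noise()                    # unattended epochs

# Build a difference-wave template from the first half of the trials
train = slice(0, n_trials // 2)
test = slice(n_trials // 2, n_trials)
template = targets[train].mean(axis=0) - nontargets[train].mean(axis=0)

def score(epochs):
    # Project each single-trial epoch onto the template
    return epochs @ template

# Classify held-out single trials: scores above the midpoint count as "target"
threshold = 0.5 * (score(targets[train]).mean() + score(nontargets[train]).mean())
acc = 0.5 * ((score(targets[test]) > threshold).mean()
             + (score(nontargets[test]) <= threshold).mean())
print(f"single-trial accuracy: {acc:.2f}")
```

With a strong synthetic ERP the held-out accuracy is high; real auditory ERPs are far weaker, which is exactly why the machine learning methods discussed in the talk matter.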




The Mismatch Negativity: An Objective Index of Sound-Discrimination Accuracy

Prof. Dr. Risto Näätänen

Institute of Psychology, University of Tartu, Tartu, Estonia

Center of Functionally Integrative Neuroscience, University of Aarhus, Aarhus, Denmark

Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland


The mismatch negativity (MMN) is an electric brain response which is automatically (task-independently) elicited by any discriminable change in a repetitive sound or sound pattern, as long as the memory trace of the previous stimuli lasts. When this change is made smaller in magnitude, the MMN is attenuated in amplitude, eventually vanishing at around the discrimination threshold. Therefore the MMN provides a unique objective measure of a subject's or patient's discrimination accuracy. Furthermore, with the MMN, these discrimination thresholds can be determined separately for the different auditory attributes. Moreover, the individual's ability to discriminate even complex sound stimuli and patterns, such as different phonemes, can be measured by using the MMN.
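The MMN is conventionally extracted as a deviant-minus-standard difference wave in an oddball paradigm. The sketch below does this on synthetic data; the deflection shape, its ~150 ms latency, and the noise level are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_std, n_dev, n_samples = 800, 100, 120       # many standards, few deviants
t = np.linspace(0, 0.4, n_samples)            # 0.4 s post-stimulus epochs

# Hypothetical MMN: a negative deflection ~150 ms after a deviant sound
mmn_shape = -np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))

standards = rng.normal(0, 1.0, (n_std, n_samples))
deviants = rng.normal(0, 1.0, (n_dev, n_samples)) + mmn_shape

# Difference wave: averaged deviant ERP minus averaged standard ERP
diff_wave = deviants.mean(axis=0) - standards.mean(axis=0)

peak_idx = diff_wave.argmin()
print(f"MMN peak at {t[peak_idx] * 1000:.0f} ms, amplitude {diff_wave[peak_idx]:.2f}")
```

Shrinking `mmn_shape` toward zero in this sketch mirrors the attenuation described above: the difference wave sinks into the noise floor as the change approaches the discrimination threshold.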

Several studies have shown that training-induced improvements in different kinds of auditory discrimination abilities are accompanied by increased MMN amplitudes, reflecting learning-related plastic changes in the auditory cortex. In fact, some recent data suggest that the improved discrimination in the course of training might even be preceded by an MMN enhancement.

In stroke patients with aphasia, the MMN may index the gradual recovery of auditory discrimination abilities as time from stroke onset elapses. In addition, in cochlear-prosthesis patients, the MMN can similarly index the gradual recovery of different auditory discrimination functions.

Furthermore, the MMN can also reflect the plastic changes occurring when an individual is exposed to a certain language environment, most typically when a newborn is exposed to his/her mother tongue, but such MMN changes reflecting the emergence of new phonetic categories also occur when an adult learns a foreign language.

Finally, the MMN can also be used as an index of the duration of sensory memory in audition. These studies have shown that this duration (of the order of 10 sec in young individuals) gets shorter with normal aging, being very short in patients with degenerative brain diseases such as Alzheimer's disease. These results suggest that the MMN could be used as an index of general brain plasticity.


Coffee break and networking


The Auditory Domain as an Interface for Social Interaction

Dr. Ivana Konvalinka

Center of Functionally Integrative Neuroscience, University of Aarhus, Aarhus, Denmark

Abstract: Social interaction is a complex phenomenon, and its underlying behavioural and neural mechanisms remain largely unknown. Previous research has mainly studied individuals in isolation rather than immersed in a social context, measuring their brain activity as they responded to static computerized "social" stimuli. Only a small number of recent studies have explored what goes on in the brains and bodies of two or more people as they engage in a true interaction. Musical paradigms have been quite useful in the design of such experiments, providing both controlled and realistic settings.

In this talk, I will show how coupling people through the auditory domain can be an ideal paradigm for studying the mechanisms of real-time social interaction. Moreover, I will present a study of simultaneous EEG recordings from two interacting participants engaged in a musical task, revealing both intra- and inter-personal brain rhythms that drive the interaction. I will argue that multi-person EEG recordings can better reveal signatures of social cognition than single-person neurophysiological recordings.
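One common measure of inter-brain coupling in such dual-EEG studies is the phase-locking value (PLV) between the two participants' band-limited signals. The sketch below computes a PLV on synthetic coupled oscillations; the 10 Hz rhythm, the jitter level, and the coupling itself are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur = 250, 10                      # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f = 10.0                               # alpha-band rhythm, purely illustrative

# Two "participants": coupled oscillations with independent phase jitter
phase = 2 * np.pi * f * t
sig_a = np.sin(phase + 0.2 * rng.normal(size=t.size))
sig_b = np.sin(phase + 0.3 + 0.2 * rng.normal(size=t.size))

def inst_phase(x):
    """Instantaneous phase via the analytic signal (FFT-based Hilbert transform)."""
    spec = np.fft.fft(x)
    h = np.zeros(x.size)
    h[0] = 1
    h[1:x.size // 2] = 2               # keep only positive frequencies, doubled
    h[x.size // 2] = 1                 # Nyquist bin (even-length signal)
    return np.angle(np.fft.ifft(spec * h))

# PLV: mean resultant length of the phase difference (1 = perfect locking)
plv = np.abs(np.mean(np.exp(1j * (inst_phase(sig_a) - inst_phase(sig_b)))))
print(f"PLV = {plv:.2f}")
```

For uncoupled signals the PLV falls toward zero, so contrasting interacting and non-interacting conditions with such a measure is one way to expose the inter-personal rhythms the talk describes.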


Panel discussion


The seminar is arranged in cooperation with:


Danish Sound Technology Network

Signal Processing Chapter of the IEEE Denmark Section


