Inaugural Seminar

Inaugural ASIP-NET Seminar on Speech Enhancement and Noise Reduction Techniques


To register for this event you must be logged in as a member of ASIP-NET.

Time and place

October 26, 2006, 13.00-17.00
Oticon Headquarters, Kongebakken 9, 2765 Smørum, Denmark

Submit seminar survey form

Download lecture slides from the file archive



Welcome and introduction to ASIP-NET


Fundamentals of Independent Component Analysis
Professor Erkki Oja, Helsinki University, Finland

Independent Component Analysis (ICA) is a computational technique for revealing hidden factors that underlie sets of measurements or signals. ICA assumes a statistical model in which the observed multivariate data, typically given as a large database of samples, are linear or nonlinear mixtures of unknown latent variables; the mixing coefficients are also unknown. The latent variables are nongaussian and mutually independent, and they are called the independent components of the observed data. ICA recovers these independent components, also called sources or factors, from the data alone. Thus ICA can be seen as an extension of Principal Component Analysis and Factor Analysis. ICA is a much richer technique, however, capable of finding the sources when these classical methods fail completely.

In many cases, the measurements are given as a set of parallel signals or time series. Typical examples are mixtures of simultaneous sounds or human voices that have been picked up by several microphones, brain signal measurements from multiple EEG sensors, several radio signals arriving at a portable phone, or multiple parallel time series obtained from some industrial process. The term blind source separation is used to characterize this problem.

The lecture will first cover the basic idea of demixing in the case of a linear mixing model and then take a look at some recent applications.
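The linear mixing model and blind demixing can be made concrete with a minimal sketch: two synthetic nongaussian sources are mixed by a random matrix and then recovered with a kurtosis-based, FastICA-style fixed-point iteration. All signals and parameters here are invented for illustration; this is not the algorithm presented in the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two nongaussian, mutually independent latent sources
s = np.vstack([
    rng.laplace(size=n),       # super-gaussian source
    rng.uniform(-1, 1, n),     # sub-gaussian source
])
A = rng.standard_normal((2, 2))    # unknown mixing matrix
x = A @ s                          # observed mixtures

# Whiten the mixtures: zero mean, identity covariance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = np.diag(1.0 / np.sqrt(d)) @ E.T @ x

# Kurtosis-based fixed-point iteration with deflation
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(100):
        u = w @ z
        w = (z * u**3).mean(axis=1) - 3.0 * w   # fixed point for g(u) = u^3
        for j in range(i):                      # deflate previously found components
            w -= (w @ W[j]) * W[j]
        w /= np.linalg.norm(w)
    W[i] = w

s_hat = W @ z    # recovered sources (up to permutation, sign and scaling)
```

Each row of `s_hat` should match one of the original sources up to sign and scale, which decorrelation-based methods such as PCA cannot achieve here.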


Binaural Noise Reduction for Hearing Aids
Dr. Simon Doclo, Katholieke Universiteit Leuven, Belgium

Noise reduction algorithms in hearing aids are crucial for hearing impaired persons to improve speech intelligibility in background noise. Multi-microphone systems are able to exploit both spatial and spectral information and are hence preferred to single-microphone systems. Commonly used multi-microphone noise reduction techniques for hearing aids are based on adaptive beamforming, computational auditory scene analysis, or multi-channel Wiener filtering.

With increasing communication possibilities and processing power, hearing aids are evolving into true binaural processors, in which the hearing aids on both ears cooperate with each other. In addition to reducing background noise and limiting speech distortion, another important objective of a binaural algorithm is to preserve the listener's impression of the auditory environment in order to exploit the natural binaural hearing advantage. This can be achieved by preserving the binaural cues, i.e. the interaural time and level differences, of the speech and the noise components.
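The interaural cues mentioned above can be illustrated with a small sketch that estimates the time difference from the lag of the cross-correlation maximum and the level difference from the energy ratio. The signals, delay and attenuation are arbitrary example values, not data from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
src = rng.standard_normal(2048)       # stand-in for a source signal
delay = 8                             # ITD in samples (0.5 ms at 16 kHz)
left = src
right = 0.7 * np.roll(src, delay)     # delayed and attenuated at the other ear

# Interaural time difference: lag of the cross-correlation maximum
lags = np.arange(-32, 33)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
itd = int(lags[np.argmax(xcorr)])     # recovers the 8-sample delay

# Interaural level difference in dB
ild_db = 10.0 * np.log10(np.sum(left**2) / np.sum(right**2))   # ~3.1 dB
```

A binaural noise reduction algorithm that distorts these two quantities will shift the perceived location of sources, which is why cue preservation appears explicitly in the cost functions discussed in the talk.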

In this presentation we first give an overview of several existing binaural noise reduction algorithms and then focus on a particular algorithm based on multi-channel Wiener filtering. In addition to significantly suppressing background noise, this algorithm perfectly preserves the binaural cues of the target speech component. In contrast, the binaural cues of the noise component are typically distorted. In order to preserve the binaural cues of all components, the underlying cost function is extended with terms related to the interaural transfer functions of the speech and the noise components. Both the physical and the perceptual evaluation of this algorithm, in terms of speech intelligibility and localisation performance, will be discussed.
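The core multi-channel Wiener filtering step can be sketched for a single frequency bin (this is the basic filter, not the binaural, cue-preserving extension discussed in the talk; the acoustic transfer functions and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
d = np.array([1.0, 0.6 + 0.3j])       # assumed acoustic transfer functions to 2 mics
phi_s = 1.0                           # speech power in this frequency bin

def cgauss(shape, scale):
    """Circular complex Gaussian samples."""
    return scale * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

s = cgauss(n, np.sqrt(phi_s / 2))     # speech DFT coefficients over many frames
v = cgauss((2, n), 0.5)               # uncorrelated sensor noise
x = np.outer(d, s) + v                # microphone signals

# Multi-channel Wiener filter: w = Rxx^{-1} r_xs,
# estimating the speech component at the reference microphone (mic 0)
Rxx = x @ x.conj().T / n
r_xs = x @ (d[0] * s).conj() / n
w = np.linalg.solve(Rxx, r_xs)
s_hat = w.conj() @ x                  # enhanced output
```

The output SNR exceeds the input SNR at the reference microphone because the filter exploits the spatial structure of the speech across both microphones.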


Coffee break and networking


ASIP-NET web portal


SII optimized noise reduction
Dr. Magnus Nørgaard, Widex A/S, Denmark

With the "Widex Inteo" hearing aid, Widex has introduced a noise reduction system based on optimization of the Speech Intelligibility Index (SII). The SII is calculated according to an established ANSI standard and is a measure that is highly correlated with the intelligibility of speech under a variety of adverse listening conditions. Earlier noise reduction systems for hearing aids were developed according to quite simple statistical methods and aimed at increasing comfort for the user without deteriorating speech intelligibility. The SII-based noise reduction represents a major leap forward in complexity: an on-line optimization constantly maximizes the SII with respect to the estimated speech and noise spectra, the user's hearing loss, and the compression scheme used to compensate for the hearing loss.
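The flavour of SII-driven processing can be sketched with a heavily simplified audibility-weighted sum. The real ANSI S3.5 procedure uses standardised bands, importance functions and additional correction terms (and the Widex system adds hearing-loss and compression models); all numbers below are invented for the example.

```python
import numpy as np

# Hypothetical band levels (dB) and importance weights for 5 bands
band_importance = np.array([0.10, 0.20, 0.30, 0.25, 0.15])   # sums to 1
speech_db = np.array([55, 60, 62, 58, 50])
noise_db = np.array([52, 55, 50, 45, 40])

def sii_like(speech_db, noise_db):
    """Simplified SII: importance-weighted band audibility."""
    snr = np.clip(speech_db - noise_db, -15, 15)
    audibility = (snr + 15) / 30          # maps [-15, +15] dB to [0, 1]
    return float(np.sum(band_importance * audibility))

before = sii_like(speech_db, noise_db)        # -> ~0.82
after = sii_like(speech_db, noise_db - 6)     # e.g. 6 dB of noise suppression
```

An SII-optimizing system would choose its per-band gains to maximize a quantity of this kind on-line, rather than applying a fixed suppression rule.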


Speech Enhancement - Statistical Models and Auditory Perception
Professor Rainer Martin, Ruhr-University Bochum, Germany

Speech enhancement algorithms are of great importance to numerous applications. These include hearing instruments, hands-free voice communication systems and last but not least mobile phones. The most prominent tasks of these algorithms are to improve the perceived quality of the signal and to reduce the listening effort. In many application scenarios, these objectives require a reduction of the level of background noise.

Advanced noise reduction algorithms are based on models of the source signal, the acoustic channel, and the (human) receiver. Noise reduction in the short-time Fourier domain has become popular because it provides much flexibility in terms of these modeling requirements and leads to efficient implementations. However, the statistical properties of Fourier domain signals and their implications for auditory perception have neither been fully explored nor exploited.
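The Fourier-domain approach can be sketched in a few lines. For brevity this uses a whole-signal DFT instead of a short-time transform, assumes the noise variance is known, and lets a pure tone stand in for speech; a real system would estimate the noise statistics frame by frame.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)             # stand-in for a speech signal
noise = 0.5 * rng.standard_normal(fs)
noisy = clean + noise

X = np.fft.rfft(noisy)
noise_var = 0.25 * fs                           # E{|N(k)|^2} for this white noise
gain = np.maximum(1.0 - noise_var / np.abs(X)**2, 0.05)   # Wiener-style gain, floored
enhanced = np.fft.irfft(gain * X, n=fs)
```

The gain floor limits musical-noise artifacts at the cost of residual noise, one of the trade-offs that more accurate statistical models aim to improve.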

In this talk, we present recent results in statistical modeling of discrete Fourier domain coefficients and show that accurate statistical models go along with interesting insights into the perceptual properties of the resulting algorithms. Babble noise, for instance, presents a challenging case for which more accurate models are of great importance for improving signal quality. In this presentation we will discuss both single channel and dual-channel processing models and will present audio examples. 
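One aspect of this theme, that a Gaussian density can be too crude a model for Fourier coefficients of speech, can be illustrated with a kurtosis check on synthetic data. The Laplacian below is merely one commonly used super-gaussian alternative; no real speech data is involved.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

def excess_kurtosis(x):
    """Fourth-moment heaviness of the tails; 0 for a Gaussian."""
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)

gaussian = rng.standard_normal(n)     # classical Gaussian coefficient model
laplacian = rng.laplace(size=n)       # a super-gaussian alternative

kg = excess_kurtosis(gaussian)        # ~0: no heavy tails
kl = excess_kurtosis(laplacian)       # ~3: pronounced heavy tails
```

Estimators derived under a heavy-tailed prior suppress noise more aggressively in low-energy bins while sparing the strong peaks, which is where the perceptual differences discussed in the talk arise.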


Wrap up


The seminar is supported by the Signal Processing Chapter of the IEEE Denmark Section



Member Comments

Jan Larsen Monday, 30.10.2006 14:43
The presentations will be posted as soon as possible
Paul-Frederik Bach Friday, 27.10.2006 15:37
Thank you for an interesting seminar.
Will the 4 presentations be posted on this web-site?

Paul-Frederik Bach
