
Adaptive Noise Cancellation

Aarti Singh (1/ECE/97)
Dept. of Electronics & Communication
Netaji Subhas Institute of Technology

Certificate

This is to certify that Aarti Singh, a student of VIIIth semester B.E. (Electronics and Communication), carried out the Project on "Adaptive Noise Cancellation" under my guidance during a period of four months, February to May 2001. It is also stated that this Project was carried out by her independently and that it has not been submitted before.

Prof. M.P. Tripathi
(Advisor)

Prof. Raj Senani
(HOD-ECE)

Acknowledgement

I take this opportunity to express my regards and sincere thanks to my advisor and guide, Prof. M. P. Tripathi, without whose support this project would not have been possible. His constant encouragement and moral support gave me the motivation to carry out the project successfully. I am also indebted to Dr. Harish Parthasarthy for his valuable and timely guidance; the discussions with him helped a lot in developing an in-depth understanding of the topics involved. I would also like to thank Sh. Bhagat, the DSP Lab In-charge, who helped me with the lab facilities whenever I needed them.

Aarti Singh

Abstract

This Project involves the study of the principles of Adaptive Noise Cancellation (ANC) and its applications. Adaptive Noise Cancellation is an alternative technique of estimating signals corrupted by additive noise or interference. Its advantage lies in that, with no a priori estimates of signal or noise, levels of noise rejection are attainable that would be difficult or impossible to achieve by other signal processing methods of removing noise. Its cost, inevitably, is that it needs two inputs: a primary input containing the corrupted signal, and a reference input containing noise correlated in some unknown way with the primary noise. The reference input is adaptively filtered and subtracted from the primary input to obtain the signal estimate. Adaptive filtering before subtraction allows the treatment of inputs that are deterministic or stochastic, stationary or time-variable.

The effect of uncorrelated noises in the primary and reference inputs, and of the presence of signal components in the reference input, on ANC performance is investigated. It is shown that in the absence of uncorrelated noises, and when the reference is free of signal, the noise in the primary input can be essentially eliminated without signal distortion. A configuration of the adaptive noise canceller that does not require a reference input, and is very useful in many applications, is also presented.

Various applications of the ANC are studied, including an in-depth quantitative analysis of its use in canceling sinusoidal interferences as a notch filter, for bias or low-frequency drift removal, and as an adaptive line enhancer. Other applications treated qualitatively include the use of the ANC without a reference input for canceling periodic interference, the adaptive self-tuning filter, antenna sidelobe interference canceling, and the cancellation of noise in speech signals. Computer simulations for all cases are carried out using Matlab software, and experimental results are presented that illustrate the usefulness of the adaptive noise canceling technique.

I. Introduction

The usual method of estimating a signal corrupted by additive noise is to pass it through a filter that tends to suppress the noise while leaving the signal relatively unchanged, i.e. direct filtering.

[Figure: direct filtering - the corrupted input s + n is passed through a filter to produce the signal estimate.]
The design of such filters is the domain of optimal filtering, which originated with the pioneering work of Wiener and was extended and enhanced by Kalman, Bucy and others. Filters used for direct filtering can be either fixed or adaptive:
1. Fixed filters - The design of fixed filters requires a priori knowledge of both the signal and the noise; i.e. if we know the signal and noise beforehand, we can design a filter that passes the frequencies contained in the signal and rejects the frequency band occupied by the noise.
2. Adaptive filters - Adaptive filters, on the other hand, have the ability to adjust their own impulse response to filter out the correlated signal in the input. They require little or no a priori knowledge of the signal and noise characteristics. (If the signal is narrowband and the noise broadband, which is usually the case, or vice versa, no a priori information is needed; otherwise they require a desired response that is correlated in some sense with the signal to be estimated.) Moreover, adaptive filters have the capability of adaptively tracking the signal under non-stationary conditions.

Noise Cancellation is a variation of optimal filtering that involves producing an estimate of the noise by filtering the reference input and then subtracting this noise estimate from the primary input containing both signal and noise.

[Figure: noise cancellation - the primary input s + n and the filtered reference noise n0 are combined in a subtractor to give the output ŝ = s + (n − n̂).]
It makes use of an auxiliary or reference input which contains a correlated estimate of the noise to be cancelled. The reference can be obtained by placing one or more sensors in the noise field where the signal is absent or its strength is weak enough. Subtracting noise from a received signal involves the risk of distorting the signal and, if done improperly, it may lead to an increase in the noise level. This requires that the noise estimate n̂ be an exact replica of n. If the relationship between n and n̂ were known, or if the characteristics of the channels transmitting noise from the noise source to the primary and reference inputs were known, it would be possible to make n̂ a close estimate of n by designing a fixed filter. However, since the characteristics of the transmission paths are unknown and unpredictable, filtering and subtraction are controlled by an adaptive process. Hence an adaptive filter is used that is capable of adjusting its own impulse response to minimize an error signal, which depends on the filter output. The adjustment of the filter weights, and hence the impulse response, is governed by an adaptive algorithm. With adaptive control, noise reduction can be accomplished with little risk of distorting the signal. In fact, adaptive noise canceling makes possible the attainment of noise rejection levels that are difficult or impossible to achieve by direct filtering.

The error signal to be used depends on the application. The criterion may be minimization of the mean square error, of the temporal average of the least squares error, etc. Different algorithms are used for each minimization criterion, e.g. the Least Mean Squares (LMS) algorithm and the Recursive Least Squares (RLS) algorithm. To understand the concept of adaptive noise cancellation, we use the minimum mean-square error criterion. The steady-state performance of adaptive filters based on the MMSE criterion closely approximates that of fixed Wiener filters. Hence, Wiener filter theory (App. I) provides a convenient method of mathematically analyzing statistical noise canceling problems. From now on, throughout the discussion (unless otherwise stated), we study the adaptive filter performance after it has converged to the optimal solution in terms of unconstrained Wiener filters, and we use the LMS adaptive algorithm (App. IV), which is based on the Wiener approach.
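As a concrete illustration of such an algorithm (a minimal Python sketch rather than the Matlab used for the simulations in this project; all names here are chosen for the example), one LMS iteration updates each weight in proportion to the error times the corresponding input sample:

```python
# One step of the LMS algorithm: w <- w + 2*mu*e*x (illustrative sketch).

def lms_step(w, x_buf, d, mu):
    """One LMS iteration: return updated weights and the error sample.

    w     : list of filter weights
    x_buf : the most recent reference samples, x_buf[0] being the newest
    d     : desired-response (primary input) sample
    mu    : adaptation step size
    """
    y = sum(wi * xi for wi, xi in zip(w, x_buf))   # filter output
    e = d - y                                      # error = system output
    w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x_buf)]
    return w, e

# A one-weight example: with a constant reference x = 1 and desired
# response d = 2, the weight converges to 2 and the error to zero.
w = [0.0]
for _ in range(200):
    w, e = lms_step(w, [1.0], 2.0, 0.1)
```

With the step size 0.1 the weight error shrinks geometrically by the factor 1 − 2µ = 0.8 per iteration, which is the scalar version of the convergence behaviour discussed in App. IV.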

II. Adaptive Noise Cancellation – Principles

[Figure: the signal source feeds the primary input s + n; the noise source feeds the reference input n0, which passes through the adaptive filter to give the filter output n̂; the subtractor forms the output signal ŝ, which also serves as the error ε driving the adaptive filter.]
Fig. 1 Adaptive Noise Canceller

As shown in the figure, an Adaptive Noise Canceller (ANC) has two inputs – primary and reference. The primary input receives a signal s from the signal source that is corrupted by the presence of noise n uncorrelated with the signal. The reference input receives a noise n0 uncorrelated with the signal but correlated in some way with the noise n. The noise n0 passes through a filter to produce an output n̂ that is a close estimate of the primary input noise. This noise estimate is subtracted from the corrupted signal to produce the signal estimate ŝ, the ANC system output.

In noise canceling systems a practical objective is to produce a system output ŝ = s + n − n̂ that is a best fit in the least squares sense to the signal s. This objective is accomplished by feeding the system output back to the adaptive filter and adjusting the filter through an LMS adaptive algorithm to minimize the total system output power. In other words, the system output serves as the error signal for the adaptive process.

Assume that s, n, n0 and the filter output n̂ are statistically stationary and have zero means. The signal s is uncorrelated with n and n0, and n0 is correlated with n. Then

ŝ = s + n − n̂
⇒ ŝ² = s² + (n − n̂)² + 2s(n − n̂)

Taking expectations of both sides, and noting that s is uncorrelated with n and n̂,

E[ŝ²] = E[s²] + E[(n − n̂)²] + 2E[s(n − n̂)] = E[s²] + E[(n − n̂)²]

The signal power E[s²] is unaffected as the filter is adjusted to minimize E[ŝ²], so

min E[ŝ²] = E[s²] + min E[(n − n̂)²]

Thus, when the filter is adjusted to minimize the total output power E[ŝ²], the residual noise power E[(n − n̂)²] is also minimized. Since the signal power in the output remains constant, minimizing the total output power maximizes the output signal-to-noise ratio. Since

(ŝ − s) = (n − n̂),

this is equivalent to causing the output ŝ to be a best least squares estimate of the signal s.
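The minimization described above can be demonstrated numerically. The following sketch (a Python stand-in for the project's Matlab simulations; the 2-tap noise path h is an assumption made only for this demo) builds the two-input canceller of Fig. 1 and shows the LMS weights converging to the noise path, leaving the output close to s:

```python
import math
import random

random.seed(0)

# Two-input adaptive noise canceller (sketch of the system in Fig. 1).
# Primary input d_j = s_j + n_j, where the noise n_j is the reference
# noise x_j passed through an assumed 2-tap path h (standing in for the
# unknown channel between the noise source and the primary sensor).
N, L, mu = 5000, 2, 0.01
s = [math.sin(2.0 * math.pi * 0.05 * j) for j in range(N)]
x = [random.gauss(0.0, 1.0) for _ in range(N)]
h = [0.8, 0.4]
n = [h[0] * x[j] + (h[1] * x[j - 1] if j > 0 else 0.0) for j in range(N)]
d = [s[j] + n[j] for j in range(N)]

w = [0.0] * L
out = []
for j in range(L, N):
    xb = [x[j - i] for i in range(L)]            # recent reference samples
    y = sum(wi * xi for wi, xi in zip(w, xb))    # noise estimate n-hat
    e = d[j] - y                                 # system output s-hat = error
    w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, xb)]
    out.append(e)

# After convergence the weights model the noise path h, and the residual
# (s-hat - s) power is small compared with the noise power E[n^2] ~ 0.8.
tail = out[-1000:]
residual = sum((tail[k] - s[N - 1000 + k]) ** 2 for k in range(1000)) / 1000.0
```

The small residual that remains is the misadjustment caused by the signal s acting as gradient noise in the weight updates, consistent with the LMS analysis in App. IV.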

IIA. Effect of uncorrelated noise in primary and reference inputs

As seen in the previous section, the adaptive noise canceller works on the principle of correlation cancellation: the ANC output contains the primary input signal with the component whose correlated estimate is available at the reference input removed. Thus the ANC is capable of removing only that noise which is correlated with the reference input. The presence of uncorrelated noises in the primary and reference inputs degrades the performance of the ANC, so it is important to study their effect.

Uncorrelated noise in primary input

[Figure: single-channel ANC with an additional uncorrelated noise m0 summed into the primary input, so that d = s + n + m0; the reference reaching the adaptive filter is the noise n passed through a channel H(z).]
Fig. 2 ANC with uncorrelated noise m0 in primary input

The figure shows a single-channel adaptive noise canceller with an uncorrelated noise m0 present in the primary input. The primary input thus consists of a signal and two noises, m0 and n. The reference input consists of n ∗ h(j), where h(j) is the impulse response of the channel whose transfer function is H(z). The noises n and n ∗ h(j) have a common origin and hence are correlated with each other, but both are uncorrelated with s. The desired response d is thus s + m0 + n. Assuming that the adaptive process has converged to the minimum mean square solution, the adaptive filter is equivalent to a Wiener filter. The optimal unconstrained transfer function of the adaptive filter is given by (App. I)

W∗(z) = δxd(z) / δxx(z)

The spectrum of the filter's input, δxx(z), can be expressed as

δxx(z) = δnn(z)|H(z)|²

where δnn(z) is the power spectrum of the noise n. The cross power spectrum between the filter's input and the desired response depends only on the mutually correlated primary and reference components and is given by

δxd(z) = δnn(z)H(z⁻¹)

The Wiener transfer function is thus

W∗(z) = δnn(z)H(z⁻¹) / (δnn(z)|H(z)|²) = 1/H(z)

Note that W∗(z) is independent of the primary signal spectrum δss(z) and the primary uncorrelated noise spectrum δm0m0(z). This result is intuitively satisfying, since the filter equalizes the effect of the channel transfer function H(z), producing an exact estimate of the noise n. Thus the correlated noise n is perfectly nulled at the noise canceller output. However, the primary uncorrelated noise m0 remains uncancelled and propagates directly to the output.
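This behaviour can be checked numerically. In the following sketch (Python; the channel H(z) = 1 − 0.5z⁻¹ and all parameter values are assumptions made for the demo, with the signal set to zero to isolate the noise paths) the converged LMS weights approximate the expansion 1/H(z) = 1 + 0.5z⁻¹ + 0.25z⁻² + …, the correlated noise n is nulled, and only the uncorrelated m0 survives at the output:

```python
import random

random.seed(1)

# Sketch of the Fig. 2 situation with s = 0.
# Primary input: n + m0.  Reference input: n passed through H(z) = 1 - 0.5 z^-1.
# The converged filter should approximate W*(z) = 1/H(z).
N, L, mu = 20000, 8, 0.002
n = [random.gauss(0.0, 1.0) for _ in range(N)]
m0 = [random.gauss(0.0, 0.3) for _ in range(N)]
ref = [n[j] - (0.5 * n[j - 1] if j > 0 else 0.0) for j in range(N)]
d = [n[j] + m0[j] for j in range(N)]

w = [0.0] * L
out = []
for j in range(L, N):
    xb = [ref[j - i] for i in range(L)]
    y = sum(wi * xi for wi, xi in zip(w, xb))    # estimate of n
    e = d[j] - y                                 # canceller output
    w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, xb)]
    out.append(e)

# The correlated noise n (unit power) is nulled; the uncorrelated m0
# (power 0.09) propagates to the output essentially unchanged.
tail = out[-2000:]
p_out = sum(v * v for v in tail) / len(tail)
```

The leading weights settle near 1 and 0.5, the first terms of the geometric expansion of 1/H(z), and the output power drops to roughly the power of m0 alone.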

Uncorrelated noise in the reference input

[Figure: single-channel ANC with an uncorrelated noise m1 added to the reference input after the channel H(z); the primary input is s + n and the adaptive filter input is m1 + n ∗ h(j).]
Fig. 3 ANC with uncorrelated noise in reference input

The figure shows an adaptive noise canceller with an uncorrelated noise m1 in the reference input. The adaptive filter input x is now m1 + n ∗ h(j). The filter's input spectrum is thus

δxx(z) = δm1m1(z) + δnn(z)|H(z)|²

The Wiener transfer function now becomes

W∗(z) = δnn(z)H(z⁻¹) / (δm1m1(z) + δnn(z)|H(z)|²)

We see that the filter transfer function can no longer equalize the effect of the channel, and the filter output is only an approximate estimate of the primary noise n.

Effect of primary and reference uncorrelated noises on ANC performance

The performance of the single-channel noise canceller in the presence of uncorrelated noises (m0 in the primary input and m1 in the reference input simultaneously) can be evaluated in terms of the ratio of the signal-to-noise density ratio at the output, ρout(z), to the signal-to-noise density ratio at the primary input, ρpri(z). Factoring out the signal power spectrum yields

ρout(z)/ρpri(z) = (primary noise spectrum)/(output noise spectrum) = (δnn(z) + δm0m0(z)) / δnout(z)

The canceller's output noise power spectrum δnout(z) is a sum of three components:
1. due to propagation of m0 directly to the output;
2. due to propagation of m1 to the output through the transfer function −W∗(z);
3. due to propagation of n to the output through the transfer function 1 − H(z)W∗(z).

The output noise spectrum is thus

δnout(z) = δm0m0(z) + δm1m1(z)|W∗(z)|² + δnn(z)|1 − H(z)W∗(z)|²

We define the ratios of the spectra of the uncorrelated to the spectra of the correlated noises at the primary and reference inputs as

Rprin(z) ≜ δm0m0(z)/δnn(z)   and   Rrefn(z) ≜ δm1m1(z)/(δnn(z)|H(z)|²)

respectively. Since W∗(z) = (1/H(z)) · 1/(Rrefn(z) + 1), the output noise spectrum can be expressed accordingly as

δnout(z) = δm0m0(z) + (δm1m1(z)/|H(z)|²) · 1/(Rrefn(z) + 1)² + δnn(z) · (Rrefn(z)/(Rrefn(z) + 1))²
         = δnn(z) · [Rprin(z) + Rrefn(z)/(Rrefn(z) + 1)]

The ratio of the output to the primary input signal-to-noise density ratios can now be written as

ρout(z)/ρpri(z) = (Rprin(z) + 1)(Rrefn(z) + 1) / (Rprin(z) + Rprin(z)Rrefn(z) + Rrefn(z))

This expression is a general representation of the ideal noise canceller performance in the presence of correlated and uncorrelated noises. It allows one to estimate the level of noise reduction to be expected with an ideal noise canceling system. It is apparent from the above equation that the ability of a noise canceling system to reduce noise is limited by the uncorrelated-to-correlated noise density ratios at the primary and reference inputs. The smaller Rprin(z) and Rrefn(z) are, the greater is the ratio of signal-to-noise density ratios at the output and the primary input, ρout(z)/ρpri(z), and the more effective the action of the canceller. The desirability of low levels of uncorrelated noise in both primary and reference inputs is thus emphasized.
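The performance expression above is easily evaluated at a single frequency. The following small helper (names are illustrative; note that the predicted improvement grows without bound as both ratios tend to zero) makes the limiting role of the two density ratios concrete:

```python
# rho_out/rho_pri for an ideal noise canceller, given the
# uncorrelated-to-correlated noise density ratios Rprin and Rrefn
# evaluated at one frequency.
def snr_improvement(r_prin, r_refn):
    """Return rho_out / rho_pri (both ratios must not be zero together)."""
    return ((r_prin + 1.0) * (r_refn + 1.0)) / (
        r_prin + r_prin * r_refn + r_refn)

# Weak uncorrelated noises (1% of the correlated noise at each input)
# allow a large SNR improvement; equal-power uncorrelated noises do not.
hi_gain = snr_improvement(0.01, 0.01)
lo_gain = snr_improvement(1.0, 1.0)
```

With 1% uncorrelated noise at each input the improvement is roughly 50-fold, while with uncorrelated noise as strong as the correlated noise it collapses to about 4/3.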

IIB. Effect of Signal Components in the reference input

Often, low-level signal components may be present in the reference input. The adaptive noise canceller is a correlation canceller, as mentioned previously, and hence the presence of signal components in the reference input will cause some cancellation of the signal as well. This also degrades the ANC system performance. Since the reference input is usually obtained from points in the noise field where the signal strength is small, it becomes essential to investigate whether the signal distortion due to reference signal components can outweigh the improvement in the signal-to-noise ratio provided by the ANC.

[Figure: ANC in which the signal sj reaches the reference input through a channel J(z) and the noise nj reaches it through H(z); the primary input is dj, the adaptive filter W∗(z) produces yj, and the subtractor output/error is εj.]
Fig. 4 ANC with signal components in reference input

The figure shows an adaptive noise canceller whose reference input contains signal components, propagated through a channel with transfer function J(z). If the spectra of the signal and noise are δss(z) and δnn(z) respectively, the signal-to-noise density ratio at the primary input is

ρpri(z) ≜ δss(z)/δnn(z)

The spectrum of the signal component in the reference input is

δssref(z) = δss(z)|J(z)|²

and that of the noise component is

δnnref(z) = δnn(z)|H(z)|²

Therefore, the signal-to-noise density ratio at the reference input is

ρref(z) = δss(z)|J(z)|² / (δnn(z)|H(z)|²)

The spectrum of the reference input x can be written as

δxx(z) = δss(z)|J(z)|² + δnn(z)|H(z)|²

and the cross spectrum between the reference input x and the primary input d is given by

δxd(z) = δss(z)J(z⁻¹) + δnn(z)H(z⁻¹)

When the adaptive process has converged, the unconstrained Wiener filter transfer function is thus

W∗(z) = (δss(z)J(z⁻¹) + δnn(z)H(z⁻¹)) / (δss(z)|J(z)|² + δnn(z)|H(z)|²)

We now evaluate expressions for the output signal-to-noise density ratio and the signal distortion, and then compare them to see whether the effects of signal distortion are significant enough to render the improvement in SNR useless.

Signal distortion D(z): When signal components are present in the reference input, some signal distortion will occur, and its extent will depend on the amount of signal propagated through the adaptive filter. The transfer function of the propagation path through the filter is

−J(z)W∗(z) = −J(z) · (δss(z)J(z⁻¹) + δnn(z)H(z⁻¹)) / (δss(z)|J(z)|² + δnn(z)|H(z)|²)

When |J(z)| is small, i.e. the signal components coupled to the reference input are small, this function can be approximated as

−J(z)W∗(z) ≅ −J(z)/H(z)

The spectrum of the signal component propagated to the noise canceller output through the adaptive filter is thus approximately

δss(z)|J(z)/H(z)|²

Hence, defining the signal distortion D(z) as the ratio of the spectrum of the signal components in the output propagated through the adaptive filter to the spectrum of the signal components in the primary input, we have

D(z) ≜ δss(z)|J(z)W∗(z)|² / δss(z) = |J(z)W∗(z)|²

When J(z) is small, this reduces to

D(z) ≅ |J(z)/H(z)|²

From the expressions for the SNR at the primary and reference inputs,

D(z) ≅ ρref(z)/ρpri(z)

This result shows that the relative strengths of the signal-to-noise density ratios at the primary and reference inputs govern the amount of signal distortion introduced. The higher the SNR at the reference input, i.e. the larger the signal components present in the reference, the higher is the distortion. Low distortion results from a high signal-to-noise density ratio at the primary input and a low signal-to-noise density ratio at the reference input.

Output signal-to-noise density ratio: In this case, the signal propagates to the noise canceller output via the transfer function 1 − J(z)W∗(z), while the noise propagates through the transfer function 1 − H(z)W∗(z). The spectrum of the signal component in the output is thus

δssout(z) = δss(z)|1 − J(z)W∗(z)|² = δss(z) · |δnn(z)H(z⁻¹)[H(z) − J(z)]|² / (δss(z)|J(z)|² + δnn(z)|H(z)|²)²

and that of the noise component is, similarly,

δnnout(z) = δnn(z)|1 − H(z)W∗(z)|² = δnn(z) · |δss(z)J(z⁻¹)[J(z) − H(z)]|² / (δss(z)|J(z)|² + δnn(z)|H(z)|²)²

The output signal-to-noise density ratio is thus

ρout(z) = δssout(z)/δnnout(z) = (δss(z)/δnn(z)) · |δnn(z)H(z⁻¹)|² / |δss(z)J(z⁻¹)|² = δnn(z)|H(z)|² / (δss(z)|J(z)|²)

From the expression for the signal-to-noise density ratio at the reference input, this can be written as

ρout(z) = 1/ρref(z)

This shows that the signal-to-noise density ratio at the noise canceller output is simply the reciprocal, at all frequencies, of the signal-to-noise density ratio at the reference input; i.e. the lower the signal components in the reference, the higher is the signal-to-noise density ratio in the output.

Output noise: When |J(z)| is small, the expression for the output noise spectrum reduces to

δnnout(z) ≅ δnn(z) · |δss(z)J(z⁻¹) / (δnn(z)H(z⁻¹))|²

In terms of the signal-to-noise density ratios at the reference and primary inputs,

δnnout(z) ≅ δnn(z) · ρref(z) · ρpri(z)

The dependence of the output noise on these three factors is explained as follows:
1. The first factor, δnn(z), implies that the output noise spectrum depends on the input noise spectrum, which is obvious.
2. The second factor implies that if the signal-to-noise density ratio at the reference input is low, the output noise will be low; i.e. the smaller the signal components in the reference input, the more perfectly the noise will be cancelled.
3. The third factor implies that if the signal-to-noise density ratio in the primary input is low, the filter will be trained most effectively to cancel the noise rather than the signal, and consequently the output noise will be low.

IIC. Use of ANC without a reference signal

An important and attractive use of the ANC is its use without a reference signal. This is possible when one of the signal and noise is narrowband and the other broadband. It is particularly useful in applications where it is difficult or impossible to obtain a reference signal.

[Figure: ANC without a reference input - the input sj + nj is applied directly as the primary input dj and, through a delay, as the reference input xj; the adaptive filter W∗(z) output is the narrowband output, and the subtractor output is the broadband output.]
Fig. 5 ANC without reference input

In the case where the signal is narrowband and the noise broadband, or the signal broadband and the noise narrowband, a delayed version of the input signal can be used as the reference input. This is because a broadband signal, unlike a narrowband one, is not correlated with its previous sample values. We only need to ensure that the delay introduced is greater than the decorrelation time of the broadband signal and less than the decorrelation time of the narrowband signal, i.e.

τd(BB) < delay < τd(NB)

This concept is applied to a number of problems:
1. canceling periodic interference without an external reference source;
2. the adaptive self-tuning filter;
3. the Adaptive Line Enhancer.
These applications are discussed later.
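The delay condition can be verified empirically. The sketch below (Python; the lag of 10 samples and the two test sequences are demo choices) estimates the normalized autocorrelation of a white (broadband) sequence and a sinusoidal (narrowband) sequence at the same lag, showing that the former has decorrelated while the latter has not:

```python
import math
import random

random.seed(3)

# At a lag of 10 samples, a white (broadband) sequence is essentially
# uncorrelated with itself, while a sinusoid (narrowband) remains
# strongly correlated - which is what makes a delayed reference usable.
N, lag = 50000, 10
bb = [random.gauss(0.0, 1.0) for _ in range(N)]
nb = [math.sin(2.0 * math.pi * 0.01 * j) for j in range(N)]

def norm_corr(x, k):
    """Normalized autocorrelation of the sequence x at lag k."""
    num = sum(x[j] * x[j - k] for j in range(k, len(x)))
    den = sum(v * v for v in x)
    return num / den

r_bb = norm_corr(bb, lag)   # near 0: already decorrelated
r_nb = norm_corr(nb, lag)   # near cos(2*pi*0.01*10) ~ 0.81
```

Any delay in this regime satisfies τd(BB) < delay < τd(NB) for these two sequences.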

III. Applications

IIIA. Adaptive Noise Canceling applied to sinusoidal interferences

The elimination of a sinusoidal interference corrupting a signal is typically accomplished by explicitly measuring the frequency of the interference and implementing a fixed notch filter tuned to that frequency. A very narrow notch is usually desired in order to filter out the interference without distorting the signal. However, if the interference frequency is not precisely known, and if the notch is very narrow, the center of the notch may not fall exactly over the interference. This may lead to cancellation of some other frequency components of the signal, i.e. distortion of the signal, while leaving the interference intact; it may, in fact, increase the noise level. Also, there are many applications where the interfering sinusoid drifts slowly in frequency. A fixed notch cannot work here at all unless it is made wide enough to cover the range of the drift, with consequent distortion of the signal. In situations such as these it is often necessary to measure, in some way, the frequency of the interference and then implement a notch filter at that frequency. However, estimating the frequency of several sinusoids embedded in the signal can require a great deal of calculation.

When an auxiliary reference input for the interference is available, an alternative technique for eliminating sinusoidal interference is the adaptive noise canceller. The reference is adaptively filtered to match the interfering sinusoids as closely as possible, allowing them to be filtered out. The advantages of this type of notch filter are:
1. It makes explicit measurement of the interfering frequency unnecessary.
2. The adaptive filter converges to a dynamic solution in which the time-varying weights implement a tunable notch filter that tracks the exact frequency of the interference under non-stationary conditions or frequency drifts.
3. It offers easy control of bandwidth, as is shown below.
4. An almost infinite null is achievable at the interfering frequency, due to the close and adjustable spacing of the poles and zeros.
5. Elimination of multiple sinusoids is possible by the formation of multiple notches, each adaptively tracking the corresponding frequency.

ANC as a single-frequency notch filter: To understand the operation of an adaptive noise canceller as a notch filter, we consider the case of a single-frequency noise canceller with two adaptive weights.

[Figure: single-frequency canceller - the primary input dj and the reference Ccos(ω0t + φ) are synchronously sampled with period T; the reference and its 90° shifted version form the tap inputs x1j = Ccos(ω0jT + φ) and x2j = Csin(ω0jT + φ), which are weighted by w1j and w2j and summed to give the adaptive filter output yj; the error εj (the noise canceller output) drives the LMS algorithm.]
Fig. 6 Single-frequency adaptive noise canceller

The primary input consists of the signal corrupted by a sinusoidal interference of frequency ω0. The reference input is assumed to be of the form Ccos(ω0t + φ), where C and φ are arbitrary; i.e. the reference input contains the same frequency as the interference, while its magnitude and phase may be arbitrary. The primary and reference inputs are sampled at the frequency Ω = 2π/T rad/s. The two tap inputs are obtained by sampling the reference input directly and by sampling a 90° shifted version of the reference, as shown in the figure above. To observe the notching operation of the noise canceller, we derive an expression for the transfer function of the system from the primary input to the ANC output. For this purpose, a flow graph representation of the noise canceller system using the LMS algorithm is constructed.

[Flow graph: the error εj (point C) is multiplied by each tap input x1j and x2j (points D and I), integrated by 2µ/(z − 1) to form the weights w1j and w2j (point E), multiplied again by the tap inputs to give y1j and y2j (points F and J), and summed to form the filter output yj (point G), which is subtracted from the primary input dj (point A) at the summing junction (point B).]

Fig. 7 Flow diagram representation

The LMS weight update equations are

w1,j+1 = w1j + 2µεj x1j
w2,j+1 = w2j + 2µεj x2j

The sampled tap-weight inputs are x1j = Ccos(ω0jT + φ) and x2j = Csin(ω0jT + φ).

The first step in the analysis is to obtain the isolated impulse response from the error εj, point C, to the filter output, point G, with the feedback loop from point G to point B broken. Let an impulse of unit amplitude be applied at point C at discrete time j = k; that is,

εj = δ(j − k)

where δ(j − k) is a unit impulse. The response at point D is then

εj x1j = Ccos(ω0kT + φ) δ(j − k)

which is the input impulse scaled in amplitude by the instantaneous value of x1j at j = k. The signal flow path from point D to point E is that of a digital integrator with transfer function 2µ/(z − 1) and impulse response 2µu(j − 1), where u(j) is a unit-step function. Convolving 2µu(j − 1) with εj x1j yields the response at point E:

w1j = 2µCcos(ω0kT + φ),  j ≥ k + 1

When this scaled and delayed step function is multiplied by x1j, the response at point F is obtained:

y1j = 2µC²cos(ω0jT + φ)cos(ω0kT + φ),  j ≥ k + 1

The corresponding response at J may be obtained in a similar manner:

y2j = 2µC²sin(ω0jT + φ)sin(ω0kT + φ),  j ≥ k + 1

Combining these equations, we obtain the response at the filter output G:

yj = 2µC²u(j − k − 1)cos ω0T(j − k)

We now derive the linear transfer function of the noise canceller. When the time k is set equal to zero, the unit impulse response of the linear time-invariant signal-flow path from C to G is

yj = 2µC²u(j − 1)cos ω0jT

and the transfer function of this path is

G(z) = 2µC²[z(z − cos ω0T)/(z² − 2z cos ω0T + 1) − 1] = 2µC²(z cos ω0T − 1)/(z² − 2z cos ω0T + 1)

This can be expressed in terms of the radian sampling frequency Ω = 2π/T as

G(z) = 2µC²(z cos(2πω0Ω⁻¹) − 1)/(z² − 2z cos(2πω0Ω⁻¹) + 1)

If the feedback loop from G to B is now closed, the transfer function H(z) from the primary input A to the noise canceller output C is obtained from the feedback formula:

H(z) = 1/(1 + G(z)) = (z² − 2z cos(2πω0Ω⁻¹) + 1)/(z² − 2(1 − µC²)z cos(2πω0Ω⁻¹) + 1 − 2µC²)

This equation shows that the single-frequency noise canceller has the properties of a notch filter at the reference frequency ω0. The zeros of the transfer function are located in the z-plane at

z = exp(±i2πω0Ω⁻¹)

i.e. on the unit circle at angles of ±2πω0Ω⁻¹ rad. The poles are located at

z = (1 − µC²)cos(2πω0Ω⁻¹) ± i[(1 − 2µC²) − (1 − µC²)²cos²(2πω0Ω⁻¹)]^(1/2)

The poles are inside the unit circle at a radial distance (1 − 2µC²)^(1/2), approximately equal to 1 − µC², from the origin, and at angles of

±arccos[(1 − µC²)(1 − 2µC²)^(-1/2) cos(2πω0Ω⁻¹)]

For slow adaptation, that is, small values of µC², these angles depend on the factor

(1 − µC²)/(1 − 2µC²)^(1/2) = [(1 − 2µC² + µ²C⁴)/(1 − 2µC²)]^(1/2) ≅ 1 + (1/2)µ²C⁴ + …

which differs only slightly from unity. The result is that, in practical instances, the angles of the poles are almost identical to those of the zeros.

[Figure: z-plane pole-zero plot - zeros on the unit circle at angles ±2πω0/Ω, poles just inside the circle; each pole-zero segment is approximately µC², and the half-power points straddle each zero.]

Fig. 8 Location of poles and zeros

Since the zeros lie on the unit circle, the depth of the notch is infinite at the frequency ω = ω0. The closeness of the poles to the zeros determines the sharpness of the notch.

Corresponding poles and zeros are separated by a distance approximately equal to µC². The arc length along the unit circle (centered at the position of a zero) spanning the distance between the half-power points is approximately 2µC². This length corresponds to a notch bandwidth of

BW = µC²Ω/π = 2µC²/T

The Q of the notch is determined by the ratio of the center frequency to the bandwidth:

Q = ω0/BW = ω0π/(µC²Ω)

The single-frequency noise canceller is, therefore, equivalent to a stable notch filter when the input is a pure cosine wave. The depth of the null achievable is generally superior to that of a fixed digital or analog filter, because the adaptive process maintains the null exactly at the reference frequency.
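The derived notch transfer function can be evaluated directly. In this sketch (Python; the values of ω0 and µC² are arbitrary demo choices) the gain is zero at ω0, close to 1/√2 at an offset of µC²/T (the predicted half-power point), and close to one far from the notch:

```python
import cmath
import math

# Magnitude of the derived notch transfer function
#   H(z) = (z^2 - 2 z cos(w0 T) + 1) / (z^2 - 2(1 - muC2) z cos(w0 T) + 1 - 2 muC2)
# evaluated on the unit circle z = e^{i w T}; muC2 stands for mu*C^2.
def notch_gain(w, w0, T, muC2):
    z = cmath.exp(1j * w * T)
    c = math.cos(w0 * T)
    num = z * z - 2.0 * z * c + 1.0
    den = z * z - 2.0 * (1.0 - muC2) * z * c + (1.0 - 2.0 * muC2)
    return abs(num / den)

T, w0, muC2 = 1.0, 0.3 * math.pi, 0.01
g_notch = notch_gain(w0, w0, T, muC2)             # zero: infinite-depth null
g_half = notch_gain(w0 + muC2 / T, w0, T, muC2)   # ~0.707: half-power point
g_far = notch_gain(0.6 * math.pi, w0, T, muC2)    # ~1: outside the notch
```

This numerically confirms the bandwidth estimate BW = 2µC²/T: the gain falls to about 1/√2 at half that bandwidth on either side of ω0.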

Multiple-frequency notch filter: This discussion can be readily extended to the case of a multiple-frequency noise canceller. The formation of multiple notches is achieved by using an adaptive filter with multiple weights. Two weights are required for each sinusoid to achieve the necessary filter gain and phase. Uncorrelated broadband noise superposed on the reference input creates a need for additional weights. Suppose the reference is a sum of M sinusoids:

xj = Σ_{m=1}^{M} Cm cos(ωm jT + θm)

At the ith tap-weight input of the transversal tapped-delay-line filter of order N,

xij = Σ_{m=1}^{M} Cm cos(ωm[j − i + 1]T + θm) = Σ_{m=1}^{M} Cm cos(ωm jT + θim),  i = 1 … N

where θim = θm − ωm(i − 1)T. The filter output at the ith tap-weight, yij, is given by

yij = wij xij

Proceeding as before, with a unit impulse applied to the error at time k, we obtain a similar equation for wij:

wij = 2µ Σ_{m=1}^{M} Cm cos(ωm kT + θim),  j ≥ k + 1

∴ yij = 2µ [Σ_{m=1}^{M} Cm cos(ωm kT + θim)] [Σ_{n=1}^{M} Cn cos(ωn jT + θin)],  j ≥ k + 1

The overall filter output is

yj = Σ_{i=1}^{N} yij

Substituting, setting the time of application of the impulse k to zero (the input being a unit impulse), and taking the z-transform of both sides gives the transfer function G(z) as

Y(z) = G(z) = 2µ Σ_{i=1}^{N} Σ_{m=1}^{M} Σ_{n=1}^{M} Cm cos θim · Cn z⁻¹[cos(ωnT + θin) − z⁻¹cos θin] / (1 − 2z⁻¹cos ωnT + z⁻²)

The denominator of G(z) is of the form

Π_{n=1}^{M} (1 − 2z⁻¹cos ωnT + z⁻²)

Therefore, the poles of G(z) are located at

z = exp(±iωnT),  n = 1 … M

i.e. a pole pair lies at each of the reference frequencies. Since the poles of G(z) are the zeros of H(z), the overall system function has zeros at all the reference frequencies; i.e. a notch is formed at each of the reference sinusoidal frequencies.

IIIB. Bias or Low-frequency Drift Canceling using Adaptive Noise Canceller

The use of a bias weight in an adaptive filter to cancel low-frequency drift in the primary input is a special case of notch filtering, with the notch at zero frequency. A bias weight is incorporated to cancel a dc level or bias, and hence it is fed with a reference input set to the constant value one. The value of the weight is updated so as to match the dc level to be cancelled. Because there is no need to match the phase of the signal, only one weight is needed.

[Figure: bias canceller - the primary input dj = sj + drift enters a subtractor; the single bias weight wj, fed with the constant reference 1, produces yj, and the error εj drives the LMS algorithm.]

Fig. 9 ANC as bias/low-frequency drift canceller

The transfer function from the primary input to the noise canceller output is now derived. The output of the adaptive filter is

yj = wj · 1 = wj

The bias weight is updated according to the LMS update equation

wj+1 = wj + 2µ(εj · 1)

⇒ yj+1 = yj + 2µ(dj − yj) = (1 − 2µ)yj + 2µdj

Taking the z-transform of both sides yields the steady-state solution

Y(z) = 2µ/(z − (1 − 2µ)) · D(z)

The z-transform of the error signal is

E(z) = D(z) − Y(z) = (z − 1)/(z − (1 − 2µ)) · D(z)

The transfer function is now z-1 E(z) H(z) = D(z) = z - (1 - 2µ) This shows that the bias-weight filter is a high pass filter with a zero on the unit circle at zero frequency and a pole on the real axis at a distance 2µ to the left of the zero. The smaller the µ, the closer is the location of the pole and the zero, and hence the notch is precisely at zero frequency i.e. only dc level is removed. The single-weight noise canceller acting as a high-pass filter is capable of removing not only a constant bias but also slowly varying drift in the primary input. If the bias level drifts and this drift is slow enough, the bias weight adjusts adaptively to track and cancel the drift. Using a bias weight alongwith the normal weights in an ANC can accomplish bias or drift removal simultaneously with cancellation of periodic or stochastic interference.
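A minimal sketch of the single-bias-weight canceller described above (the signal, drift rate, and step size are illustrative assumptions). The reference input is the constant 1, so the weight itself is the drift estimate.

```python
import numpy as np

n = 5000
j = np.arange(n)
sig = np.sin(2 * np.pi * 0.05 * j)        # signal of interest
drift = 2.0 + 0.0002 * j                  # dc bias plus slow linear drift
d = sig + drift                           # primary input

mu, w = 0.005, 0.0
out = np.zeros(n)
for i in range(n):
    y = w * 1.0          # bias weight times the constant reference input
    e = d[i] - y         # canceller output
    w += 2 * mu * e      # LMS update with reference input equal to 1
    out[i] = e

# The high-pass H(z) = (z-1)/(z-(1-2mu)) passes the sinusoid and removes the drift
print(np.mean(out[-1000:]), np.mean(d[-1000:]))
```

After the transient, the output mean sits near zero (up to a small tracking lag proportional to the drift slope divided by 2µ), while the sinusoid passes essentially unchanged.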

IIIC. Canceling Periodic Interference without an External Reference Source

Fig. 10 Cancellation of periodic interference

There are a number of circumstances where a broadband signal is corrupted by periodic interference and no external reference input free of the signal is available. This is the case for playback of speech or music in the presence of tape hum or turntable rumble. Adaptive noise canceling can be applied to reduce or eliminate such interference by introducing a fixed delay ∆ in the reference input, which is drawn directly from the primary input. The delay chosen must be of sufficient length to cause the broadband signal components in the reference input to become decorrelated from those in the primary input. The interference components, because of their periodic nature, remain correlated with each other.
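The delayed-reference scheme can be sketched as follows (a toy example, not from the report: a 50 Hz "hum" in white noise, with the delay, filter length, and step size chosen arbitrarily). The reference is simply the primary input delayed by ∆ samples.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 1000, 30000
broadband = rng.normal(0, 1.0, n)                       # signal of interest
hum = 1.5 * np.sin(2 * np.pi * 50 * np.arange(n) / fs)  # periodic interference
d = broadband + hum                                     # primary input

delta, L, mu = 25, 32, 0.001   # delay decorrelates the white noise, not the hum
w = np.zeros(L)
out = np.zeros(n)
for j in range(delta + L, n):
    x = d[j - delta : j - delta - L : -1]   # delayed-reference tap vector
    y = w @ x                               # predicted periodic component
    e = d[j] - y                            # broadband output
    w += 2 * mu * e * x
    out[j] = e

print(np.var(d[-5000:]), np.var(out[-5000:]))
```

The output variance drops from roughly the sum of signal and hum powers down toward the broadband power alone, since the filter can predict only the periodic component across the delay.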

IIID. Adaptive Self-tuning filter The noise canceller without a reference input can be used for another important application. In many instances where an input signal consisting of mixed periodic and broadband components is available, the periodic rather than the broadband components are of interest. If the system output is taken from the adaptive filter in an adaptive noise canceller, the result is an adaptive self-tuning filter capable of extracting a periodic signal from broadband noise. The configuration for the adaptive self-tuning filter is shown below:

Fig. 11 ANC as self-tuning filter

With a sum of sinusoidal signals in broadband stochastic interference, the adaptive filter develops sharp resonance peaks at the frequencies of all the spectral line components of the periodic portion of the primary input.

IIIE. ANC as Adaptive Line Enhancer

The use of ANC as a self-tuning filter suggests its application as an Adaptive Line Enhancer (ALE) for the detection of extremely low-level sine waves in noise. The adaptive self-tuning filter, whose ability to separate the periodic and stochastic components of a signal was illustrated above (where the components were of comparable level), can serve as an adaptive line enhancer that improves the detectability of narrowband signals in the presence of broadband noise. The configuration of ANC without a reference input, as discussed previously, is used here.

Fig. 12 ANC as Adaptive Line Enhancer

The input consists of signal plus noise. The output is the digital Fourier transform of the filter's impulse response. Detection is accomplished when a spectral peak is evident above the background noise.

Operation of the adaptive line enhancer can be understood intuitively as follows. The delay causes decorrelation between the noise components of the input data in the two channels, while introducing a simple phase difference between the sinusoidal components. The adaptive filter responds by forming a transfer function equivalent to that of a narrow-band filter centered at the frequency of the sinusoidal components. The noise component of the delayed input is rejected, while the phase difference of the sinusoidal components is readjusted so that they cancel each other at the summing junction, producing a minimum error signal composed of the noise component of the instantaneous input data alone. The advantages of adaptive line enhancing over digital Fourier analysis include its effective application over a wide range of input signal and noise parameters with little a priori information. It is capable of estimating and tracking instantaneous frequencies and hence is especially advantageous in applications where the sine wave is frequency modulated.

We now analyze its steady-state behavior with a stationary input consisting of multiple sinusoids in uncorrelated noise. Using the method of undetermined coefficients, the L×L Wiener-Hopf matrix equation describing the steady-state impulse response of an L-weight ALE with arbitrary delay or "prediction distance" ∆ may be transformed into a set of 2N coupled linear equations, where N is the number of sinusoids in the ALE input. This set of equations, which decouples as the adaptive filter becomes longer, provides a useful description of the interaction between the sinusoids introduced by the finite length of the filter. Using the Wiener-Hopf model for the ALE response, the L×L matrix equation can be written in component form as

  Σ_{k=0}^{L-1} φxx(l − k) w*(k) = φxx(l + ∆),   0 ≤ l ≤ L−1

where φxx is the autocorrelation of the input and w*(k) are the optimal weights. When x(j) consists of N sinusoids in white noise,

  φxx(k) = σ0² δ(k) + Σ_{n=1}^{N} σn² cos ωn k

where δ(k) is the Kronecker delta function, σ0² is the white noise power, σn² is the power in the nth sinusoid, and ωn represents the frequencies of the sinusoids. To avoid the computational complexity involved in taking the matrix inverse, we use the method of undetermined coefficients. Since the ALE adaptive filter is expected to respond by forming peaks at the input frequencies, we assume the following solution for w*(k):

  w*(k) = Σ_{n=1}^{2N} An e^{jωn k}

where, for notational convenience, ωn+N is defined as −ωn (n = 1…N); the ωn+N are thus the negative frequency components of the input sinusoids. Substituting for φxx(l),

and equating coefficients of exp(jωr l) leads to the following 2N equations in the 2N coefficients A1, …, A2N:

  Ar + Σ_{n=1, n≠r}^{2N} γrn An = e^{jωr ∆} / (L + 2σ0²/σr²),   r = 1, 2, …, 2N

where σ²n+N is defined as σn², and γrn is given by

  γrn = 1/(L + 2σ0²/σr²) · (1 − e^{j(ωn−ωr)L}) / (1 − e^{j(ωn−ωr)})

The solution for the An completely determines w*(k).

When the input to the ALE consists of N sinusoids and additive white noise, the mean steady-state impulse response of the ALE can be expressed as a weighted sum of the positive and negative frequency components of the input sinusoids. It is seen that the coefficients γrn that couple the An together are proportional to the L-point Fourier transform of exp(jωn k) evaluated at ωr. From the form of γrn, An+N = An*, which shows that the w*(k) are real. Since γrn is of sinc type, it can be neglected when L becomes large or when ωn − ωr is an integral multiple of 2π/L. Further, as L becomes large, the ratio of the pth sidelobe peak to the main lobe (at ωn − ωr = 0) is given approximately by 1/((p + 1/2)π), so even if ωn lies within the first few sidelobes of ωr, the associated γrn can be neglected. As γrn → 0 for all n and r (i.e. as L becomes large), the An uncouple and are given to a good approximation by

  An = e^{jωn ∆} / (L + 2σ0²/σn²),   n = 1…2N

Therefore, as γrn → 0, the ALE for N sinusoids adapts to a linear superposition of N independent ALEs, each adapted to a single sinusoid in white noise. The frequency response of the steady-state ALE can now simply be expressed as

  H*(ω) = Σ_{k=0}^{L-1} w*(k) e^{−jω(k+1)}

      = Σ_{n=1}^{2N} An e^{−jω} (1 − e^{j(ωn−ω)L}) / (1 − e^{j(ωn−ω)})

      ≅ Σ_{n=1}^{N} e^{j(ωn∆−ω)}/(L + 2σ0²/σn²) · (1 − e^{j(ωn−ω)L})/(1 − e^{j(ωn−ω)}) + Σ_{n=1}^{N} e^{−j(ωn∆+ω)}/(L + 2σ0²/σn²) · (1 − e^{−j(ωn+ω)L})/(1 − e^{−j(ωn+ω)})

This corresponds to a sum of bandpass filters (centered at ±ωn), each having a peak value given by (L/2)SNRn/((L/2)SNRn + 1), where SNRn = σn²/σ0². As L → ∞, all of the peak values approach unity and the ALE becomes a superposition of perfectly resolved bandpass filters, each with unit gain at its center frequency. Caution must nevertheless be exercised in choosing L, because as L is increased, the weight-vector noise also increases.

The ALE thus provides an alternative to spectral analytic techniques and has the advantage of not requiring a priori information and also adaptively tracking the sinusoidal frequencies.
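The decoupled approximation can be checked against an exact solution of the Wiener-Hopf system. The sketch below (illustrative parameters: one sinusoid at ω0 = 0.2π with power 0.5 in unit-power white noise, L = 64, ∆ = 1) builds R and p from the autocorrelation given above, solves for w*, and compares the response magnitude at ω0 with the predicted peak (L/2)SNR/((L/2)SNR + 1).

```python
import numpy as np

L, delta = 64, 1
omega0 = 0.2 * np.pi        # sinusoid frequency (rad/sample)
sig2_s, sig2_0 = 0.5, 1.0   # sinusoid power sigma_n^2 and noise power sigma_0^2

# Autocorrelation phi(k) = sigma_0^2 delta(k) + sigma_n^2 cos(omega0 k)
k = np.arange(L)
phi = sig2_s * np.cos(omega0 * k)
phi[0] += sig2_0

R = np.array([[phi[abs(i - j)] for j in range(L)] for i in range(L)])
p = sig2_s * np.cos(omega0 * (k + delta))   # phi(l + delta); the delta term vanishes for delta >= 1
w = np.linalg.solve(R, p)                   # exact steady-state ALE weights

# Response magnitude at the sinusoid frequency vs. the predicted bandpass peak
H = np.sum(w * np.exp(-1j * omega0 * (k + 1)))
snr = sig2_s / sig2_0
peak_pred = (L / 2) * snr / ((L / 2) * snr + 1)
print(abs(H), peak_pred)
```

For these parameters the negative-frequency branch contributes only a small sidelobe-level term, so the exact peak agrees closely with the decoupled prediction.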

IIIF. Canceling Antenna Sidelobe Interference

Strong unwanted signals incident on the sidelobes of an antenna array can severely interfere with the reception of weaker signals in the main beam. The conventional method of reducing such interference, adaptive beamforming, is often complicated and expensive to implement. When the number of spatially discrete interference sources is small, adaptive noise canceling can provide a simpler and less expensive method of dealing with the problem.

Fig. 13 ANC applied to a receiving array

The reference is obtained by steering the reference sensor in the direction of the interference.

Conclusion

Adaptive Noise Cancellation is an alternative way of canceling noise present in a corrupted signal. The principal advantages of the method are its adaptive capability, its low output noise, and its low signal distortion. The adaptive capability allows the processing of inputs whose properties are unknown and in some cases non-stationary. Output noise and signal distortion are generally lower than can be achieved with conventional optimal filter configurations. This project indicates the wide range of applications in which adaptive noise canceling can be used. The simulation results verify the advantages of adaptive noise cancellation: in each instance, canceling was accomplished with little signal distortion even though the frequencies of the signal and interference overlapped. This establishes the usefulness of adaptive noise cancellation and its diverse applications.

Scope for further work: In this project, only the Least-Mean-Squares Algorithm has been used. Other adaptive algorithms can be studied and their suitability for application to Adaptive Noise Cancellation compared. Other algorithms that can be used include Recursive Least Squares, Normalised LMS, Variable Step-size algorithm etc. Moreover, this project does not consider the effect of finite-length filters and the causal approximation. The effects due to these practical constraints can be studied.

Appendix

I. Wiener Filter Theory

Wiener proposed a solution to the continuous-time linear filtering problem and derived the Wiener-Hopf integral equation. The discrete-time equivalent of this integral equation is called the 'normal equation'. The solution of these two equations defines the Wiener filter. We concentrate here on the discrete-time case only, formulating the Wiener filter for the general case of complex-valued time series. The discussion is limited to: 1) a filter impulse response of finite duration; 2) a single-input, single-output filter.

Statement of the optimum filtering problem:

Fig. A Linear Transversal Filter

Order of filter = number of delay elements in the filter = M − 1. The impulse response of the transversal filter is {hk} = {wk*}, k = 1, 2, …, M. The filter output is related to the filter input and the impulse response by the convolution sum

  d(n|un) = Σ_{k=1}^{M} wk* u(n−k+1)

The signal output of the filter at time n, d(n|un), is the estimate of the desired response d(n) given knowledge of the tap inputs. The estimation problem is solved by designing the filter so that the difference between d(n) and d(n|un) is made as 'small' as possible in a statistical sense:

  estimation error e(n) = d(n) − d(n|un)

In Wiener theory the minimum mean-squared error criterion is used to optimize the filter. Specifically, the tap weights are chosen so as to minimize the index of performance J(w), the mean-squared error (MSE):

  J(w) = E[e(n)e*(n)]

By minimizing J(w), we obtain the best, or optimum, linear filter in the minimum mean-square sense.

Error Performance Surface

Let the M×1 tap-weight vector be wT = [w1, w2, …, wM] and the M×1 input vector be uT(n) = [u(n), u(n−1), …, u(n−M+1)]. Then the filter output is

  d(n|un) = wHu(n)

where H denotes the Hermitian transpose. The estimation error between the desired response d(n) and the filter output is

  e(n) = d(n) − wHu(n),   e*(n) = d*(n) − uH(n)w

Hence the mean-squared error is

  J(w) = E[e(n)e*(n)] = E[(d(n) − wHu(n))(d*(n) − uH(n)w)]

Expanding, and recognizing that the tap-weight vector w is constant,

  J(w) = E[d(n)d*(n)] − wHE[u(n)d*(n)] − E[d(n)uH(n)]w + wHE[u(n)uH(n)]w

We make the following assumptions:
- the discrete-time stochastic process represented by the tap inputs u(n), u(n−1), … is weakly stationary;
- the mean value of the process is zero;
- the tap-input vector u(n) and the desired response d(n) are jointly stationary.

We can now identify:
1. E[d(n)d*(n)] = σd², the variance of the desired response, assuming d(n) has zero mean.
2. E[u(n)d*(n)] = p, the M×1 cross-correlation vector between the tap-input vector and the desired response, i.e. pT = [p(0), p(−1), …, p(1−M)], where p(1−k) = E[u(n−k+1)d*(n)], k = 1, 2, …, M.
3. E[d(n)uH(n)] = pH.
4. E[u(n)uH(n)] = R, the M×M correlation matrix of the tap-input vector.

  ⇒ J(w) = σd² − pHw − wHp + wHRw

When u(n) and d(n) are jointly stationary, the mean-squared error J(w) is precisely a second-order function of the tap-weight vector w. The dependence of J(w) on the elements of w, i.e. the tap weights w1, w2, …, wM, can therefore be visualized as a bowl-shaped surface with a unique minimum. This is the error-performance surface of the transversal filter.

Fig. B Error surface for M = 2 and filter weights w1 = a0 and w2 = a1

Optimum Solution: The requirement now is to design the filter so that it operates at this 'bottom' or minimum point, where J(w) attains its minimum value Jmin and w its optimum value w0. The resultant transversal filter is said to be optimum in the mean-squared sense. To obtain w0, we apply the condition for minimum J(w), i.e. dJ(w)/dw = 0. Using

  d(σd²)/dw = 0   (for a weakly stationary process the variance is constant)
  d(pHw)/dw = 0
  d(wHp)/dw = 2p
  d(wHRw)/dw = 2Rw

the gradient vector ∇, the derivative of the mean-squared error J(w) with respect to the tap weights, is

  ∇ = dJ(w)/dw = −2p + 2Rw    …(1)

At the bottom of the error-performance surface, i.e. for the optimal case, this gradient is equal to 0 and J(w) is a minimum.

  ⇒ Rw0 = p    …(2)

This is the discrete form of the Wiener-Hopf equation, also called the normal equation. Its solution gives

  w0 = R⁻¹p

Computation of the optimum tap-weight vector w0 requires knowledge of two quantities:
1. the correlation matrix R of the tap-input vector u(n);
2. the cross-correlation vector p between the tap input u(n) and the desired response d(n).
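A small numerical instance of the normal equation (the matrix and vector values are arbitrary illustrations):

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])       # correlation matrix of the tap inputs
p = np.array([0.5, 0.25])        # cross-correlation with the desired response

w0 = np.linalg.solve(R, p)       # Wiener solution w0 = R^-1 p  ->  [0.5, 0.0]
grad = -2 * p + 2 * R @ w0       # gradient (1) evaluated at w0
print(w0, grad)                  # the gradient vanishes at the optimum
```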

The curve obtained by plotting the mean-squared error versus the number of iterations n is called the learning curve.

II. Adaptive Filters Adaptive filters are digital filters with an impulse response, or transfer-function, that can be adjusted or changed over time to match desired system characteristics. Unlike fixed filters, which have a fixed impulse response, adaptive filters do not require complete a priori knowledge of the statistics of the signals to be filtered. Adaptive filters require little or no a priori knowledge and moreover, have the capability of adaptively tracking the signal under non-stationary circumstances. For an adaptive filter operating in a stationary environment, the error-performance surface has a constant shape as well as orientation. When, however, the adaptive filter operates in a non-stationary environment, the bottom of the error-performance surface continually moves, while the orientation and curvature of the surface may be changing too. Therefore, when the inputs are non-stationary, the adaptive filter has the task of not only seeking the bottom of the error performance surface, but also continually tracking it.

III. Steepest Descent Algorithm

An adaptive filter is required to find a solution for its tap-weight vector that satisfies the normal equation. Solving this equation by analytical means presents serious computational difficulties, especially when the filter contains a large number of tap weights and when the data rate is high. An alternative procedure is the method of steepest descent, one of the oldest methods of optimization:
1. Initial values w(0) are chosen arbitrarily, i.e. an initial guess as to where the minimum point of the error-performance surface may be located; typically w(0) = null vector.
2. Using this, we compute the gradient vector, defined as the gradient of the mean-squared error J(n) with respect to w(n) at time n (the nth iteration).
3. We compute the next guess at the tap-weight vector by changing the present guess in a direction opposite to that of the gradient vector.
4. Go back to step 2 and repeat the process.

  w(n+1) = w(n) + (1/2)µ[−∇(n)],   where µ is a positive real-valued constant

From equation (1), ∇(n) = −2p + 2Rw(n). For the application of the steepest-descent algorithm, we assume that the correlation matrix R and the cross-correlation vector p are known.

  ⇒ w(n+1) = w(n) + µ[p − Rw(n)],   n = 0, 1, 2, …    …(3)

We observe that the parameter µ controls the size of the incremental correction applied to the tap-weight vector as we proceed from one iteration to the next; µ is therefore referred to as the step-size parameter or weighting constant. Equation (3) is the mathematical formulation of the steepest-descent algorithm, also referred to as the deterministic gradient algorithm.
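Equation (3) can be exercised on a small numerical example (R, p, and µ are illustrative choices):

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])
w_opt = np.linalg.solve(R, p)            # optimum from the normal equation

mu = 0.5                                 # step size; eigenvalues of R are 0.5 and 1.5
w = np.zeros(2)                          # initial guess: the null vector
for _ in range(200):
    w = w + mu * (p - R @ w)             # equation (3)

print(w, w_opt)                          # the iterates converge to w_opt
```

Here µ = 0.5 lies well inside the stability bound 0 < µ < 2/λmax ≈ 1.33 for this R, so every natural mode decays geometrically.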

Feedback Model of the Steepest-descent Algorithm:

Fig. C Signal-flow graph representation of the steepest-descent algorithm

Stability of the steepest-descent algorithm: Since the steepest-descent algorithm involves feedback, it is subject to the possibility of becoming unstable. From the feedback model, we observe that the stability of the algorithm is determined by two factors: 1) the step-size parameter µ, and 2) the correlation matrix R of the tap-input vector u(n), as these two parameters completely control the transfer function of the feedback loop.

Condition for stability: Define the weight-error vector

  c(n) = w(n) − w0

where w0 is the optimum value of the tap-weight vector as defined by the normal equation. Eliminating the cross-correlation vector p in equation (3) and rewriting the result in terms of the weight-error vector gives

  c(n+1) = (I − µR)c(n)

Using the unitary-similarity transformation, we may express the correlation matrix R as

  R = QΛQH   (refer Appendix B)

  ⇒ c(n+1) = (I − µQΛQH)c(n)

Premultiplying both sides by QH and using the property of the unitary matrix Q that QH = Q⁻¹, we get

  QHc(n+1) = (I − µΛ)QHc(n)

We now define a new set of coordinates

  v(n) = QHc(n) = QH[w(n) − w0]

Accordingly, we may write

  v(n+1) = (I − µΛ)v(n)

The initial value of v(n) is v(0) = QH[w(0) − w0]. Assuming that the initial tap-weight vector w(0) is zero, this reduces to v(0) = −QHw0. For the kth natural mode of the steepest-descent algorithm, we thus have

  vk(n+1) = (1 − µλk)vk(n),   k = 1, 2, …, M    …(4)

where λk is the kth eigenvalue of the correlation matrix R. This equation is represented by the following scalar-valued feedback model:

Fig. D Signal-flow graph of the kth mode of the steepest-descent algorithm

Equation (4) is a homogeneous difference equation of the first order. Assuming that vk(n) has the initial value vk(0), we readily obtain the solution

  vk(n) = (1 − µλk)ⁿ vk(0),   k = 1, 2, …, M    …(5)

Since all eigenvalues of the correlation matrix R are positive and real, the response vk(n) exhibits no oscillations. For stability or convergence of the steepest-descent algorithm, the magnitude of the geometric ratio of the above geometric series must be less than 1 for all k:

  −1 < 1 − µλk < 1   ∀k

Provided this condition is satisfied, as the number of iterations n approaches infinity, all natural modes of the steepest-descent algorithm die out, irrespective of the initial conditions. This is equivalent to saying that the tap-weight vector w(n) approaches the optimum solution w0 as n approaches infinity. Therefore, the necessary and sufficient condition for the convergence or stability of the steepest-descent algorithm is that the step-size parameter µ satisfy

  0 < µ < 2/λmax

where λmax is the largest eigenvalue of the correlation matrix R.

Convergence rate of the steepest-descent algorithm: Assuming that the magnitude of 1 − µλk is less than 1, i.e. the stability criterion is met, equation (5) shows that the kth natural mode decays geometrically with time n. An exponential envelope of time constant τk can be fitted to this geometric series by taking the unit of time to be the duration of one iteration cycle and choosing τk such that

  1 − µλk = exp(−1/τk)   ⇒   τk = −1 / ln(1 − µλk)    …(6)

The time constant τk defines the time required for the amplitude of the kth natural mode vk(n) to decay to 1/e of its initial value vk(0). For slow adaptation, i.e. small µ,

  τk ≈ 1/(µλk)
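A quick numerical check of equation (6) and its small-µ approximation (the values of µ and λk are arbitrary):

```python
import numpy as np

mu, lam = 0.01, 2.0
tau_exact = -1.0 / np.log(1 - mu * lam)   # equation (6)
tau_approx = 1.0 / (mu * lam)             # slow-adaptation approximation
print(tau_exact, tau_approx)
```

For µλk = 0.02 the exact time constant is about 49.5 iterations against the approximate 50, so the approximation is already within a couple of percent.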