Non Stationary Noise Removal from Speech Signals using Variable Step Size Strategy


K. Prameela, M. Ajay Kumar, Mohammad Zia-Ur-Rahman and Dr. B. V. Rama Mohana Rao
Dept. of E.C.E., Narasaraopeta Engg. College, Narasaraopeta-522 601, India
E-mail: mdzr_5@ieee.org

Abstract— The aim of this paper is to implement various adaptive noise cancellers (ANCs) for speech enhancement based on the gradient descent approach, namely the least-mean-square (LMS) algorithm, and then to extend them with a variable step size strategy. In practical applications of the LMS algorithm, a key parameter is the step size. As is well known, if the step size is large, the convergence rate of the LMS algorithm is rapid, but the steady-state mean square error (MSE) increases; if the step size is small, the steady-state MSE is small, but convergence is slow. Thus, the step size provides a trade-off between the convergence rate and the steady-state MSE of the LMS algorithm. An intuitive way to improve the performance of the LMS algorithm is to make the step size variable rather than fixed: choose large step size values during the initial convergence of the LMS algorithm and use small values as the system approaches steady state. This yields the class of variable step size LMS (VSSLMS) algorithms, with which both a fast convergence rate and a small steady-state MSE can be obtained. Using this approach, several forms of VSSLMS algorithms are implemented: the robust variable step-size LMS (RVSSLMS) algorithm, which provides fast convergence at early stages of adaptation, and the modified robust variable step-size LMS (MRVSSLMS) algorithm. The performance of these algorithms is compared with the conventional LMS and Kwong's VSSLMS algorithms, and all of them are applied to a speech enhancement application. Simulation results confirm that the implemented RVSSLMS and MRVSSLMS algorithms are superior to the conventional algorithms in terms of convergence rate and signal-to-noise ratio improvement (SNRI).

Keywords— Adaptive filtering, LMS algorithm, Noise Cancellation, Speech Processing, Variable Step Size.

I. INTRODUCTION

In a real-time environment, speech signals are corrupted by several forms of noise, such as competing speakers, background noise, and car noise, and they are also subject to distortion caused by communication channels; examples are room reverberation and low-quality microphones. In all such situations the extraction of high-resolution signals is a key task, and this is where filtering comes into the picture. Filtering techniques are broadly classified as non-adaptive and adaptive. In practice the statistical nature of speech signals is non-stationary, so non-adaptive filtering may not be suitable.

Speech enhancement improves signal quality by suppressing noise and reducing distortion. It has many applications, for example mobile communications, robust speech recognition, low-quality audio devices, and hearing aids, and many approaches to it have been reported in the literature. In recent years, adaptive filtering has become one of the most effective and popular approaches to speech enhancement. Adaptive filters can detect time-varying potentials and track the dynamic variations of signals. Moreover, they modify their behavior according to the input signal.
Therefore, they can detect shape variations in the ensemble and thus obtain a better signal estimate. The first adaptive noise cancelling system was designed and built at Stanford University in 1965 by two students, as part of a term paper project for a course in adaptive systems given by the Electrical Engineering Department. Since 1965, adaptive noise cancelling has been successfully applied to a number of applications.

Several methods have been reported in the literature to enhance the performance of speech processing systems; some of the most important are Wiener filtering, LMS filtering [1], spectral subtraction [2]-[3], and thresholding [4]-[5]. LMS-based adaptive filters in particular have been widely used for speech enhancement [6]-[8]. In a recent study, however, a steady-state convergence analysis of the LMS algorithm with deterministic reference inputs showed that the steady-state weight vector is biased, so the adaptive estimate does not approach the Wiener solution. To handle this drawback, another strategy was considered for estimating the coefficients of the linear expansion, namely the block LMS (BLMS) algorithm [9], in which the coefficient vector is updated only once per block, based on a block gradient estimate. A major advantage of the block, or transform-domain, LMS algorithm is that the input signals are approximately uncorrelated. Recently, Jamal Ghasemi et al. [10] proposed a new approach to speech enhancement based on eigenvalue spectral subtraction; the authors of [12] describe the usefulness of speech coding in voice banking; and [11] presents a new method for voicing detection and pitch estimation based on spectral analysis of the speech multi-scale product. In practice, LMS is often replaced with its normalized version, NLMS.

In practical applications of LMS filtering, a key parameter is the step size. If the step size is large, convergence is rapid but the steady-state mean square error (MSE) increases; if it is small, the steady-state MSE is small but convergence is slow. The step size therefore provides a trade-off between the convergence rate and the steady-state MSE of the LMS algorithm. The performance of the LMS algorithm may be improved by making the step size variable rather than fixed; the resulting approach is known as the variable step size LMS (VSSLMS) algorithm [13], with which both a fast convergence rate and a small steady-state MSE can be obtained. Many VSSLMS algorithms have been proposed in recent years [14]-[17]. Recently, Karthik et al. [18] demonstrated speech enhancement using VSSLMS algorithms, and Rahman et al. [19], [20] presented speech filtering using a variable step size least-mean-fourth based treatment and unbiased and normalized adaptive filtering techniques.

In this paper, we consider the problem of noise cancellation in speech signals by effectively modifying and extending the framework of [1], using the VSSLMS algorithms of [14]-[17]. For that, we carried out simulations on various real-time speech signals contaminated with real noise. The simulation results show that the VSSLMS-based algorithms compare favorably with their LMS counterpart in eliminating noise from speech signals.
II. ADAPTIVE ALGORITHMS

A. Basic Adaptive Filter Structure

Figure 1 shows an adaptive filter whose primary input is a noisy speech signal, the clean speech s1 plus additive noise n1, and whose reference input is a noise n2 that is correlated in some way with n1. If the filter output is y and the filter error is e = (s1 + n1) - y, then

e² = (s1 + n1)² - 2y(s1 + n1) + y² = (n1 - y)² + s1² + 2 s1 n1 - 2 y s1.   (1)

Since the signal and noise are uncorrelated, the mean-squared error (MSE) is

E[e²] = E[(n1 - y)²] + E[s1²].   (2)

Minimizing the MSE therefore results in a filter error output that is the best least-squares estimate of the signal s1. The adaptive filter extracts the signal, or equivalently eliminates the noise, by iteratively minimizing the MSE between the primary and reference inputs.

Figure 1: Adaptive Filter Structure.

B. Conventional LMS Algorithm

The LMS algorithm estimates the gradient vector from instantaneous values. It adjusts the filter tap weights so that e(n) is minimized in the mean-square sense. The conventional LMS algorithm is a stochastic implementation of the steepest descent algorithm: it simply replaces the cost function ξ(n) = E[e²(n)] by its instantaneous coarse estimate. The error is

e(n) = d(n) - wᵀ(n) Φ(n),   (3)

where Φ(n) is the input data vector. The coefficient update equation is

w(n+1) = w(n) + μ Φ(n) e(n),   (4)

where μ is an appropriate step size, chosen as 0 < μ < 2/tr(R) for convergence of the algorithm.

C. Kwong's VSSLMS Algorithm

The LMS-type adaptive algorithm is a gradient search algorithm that computes a set of weights W_k seeking to minimize E[(d_k - X_kᵀ W_k)²]. The algorithm is of the form W_{k+1} = W_k + μ_k X_k ε_k, where ε_k = d_k - X_kᵀ W_k and μ_k is the step size. In the standard LMS algorithm μ_k is a constant. In an earlier variable step size approach, μ_k is time-varying with its value determined by the number of sign changes of an error-surface gradient estimate; the variable step size (VSS) algorithm considered here instead adjusts μ_k from the prediction error:

μ'_{k+1} = α μ_k + γ ε_k²,  with 0 < α < 1 and γ > 0, and

μ_{k+1} = μ_max if μ'_{k+1} > μ_max;  μ_min if μ'_{k+1} < μ_min;  μ'_{k+1} otherwise,   (5)

where 0 < μ_min < μ_max. The initial step size μ_0 is usually taken to be μ_max, although the algorithm is not sensitive to this choice. The step size μ_k is always positive and is controlled by the size of the prediction error and the parameters α and γ. Intuitively, a large prediction error increases the step size to provide faster tracking, while a decreasing prediction error decreases the step size to reduce the misadjustment. The constant μ_max is chosen to ensure that the mean-square error (MSE) of the algorithm remains bounded; a sufficient condition is

μ_max ≤ 2 / (3 tr(R)).   (6)

μ_min is chosen to provide a minimum level of tracking ability; usually μ_min is near the value of μ that would be chosen for the fixed step size (FSS) algorithm. α must be chosen in the range (0, 1) to provide exponential forgetting.
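To make the mechanics of Sections II-B and II-C concrete, the following minimal Python sketch implements the adaptive noise canceller of Figure 1 with the LMS update (3)-(4) and, optionally, Kwong's step-size recursion (5). The filter length and all parameter values are illustrative assumptions, not values taken from the paper, and the signal names are hypothetical.

```python
import numpy as np

def anc_vsslms(primary, reference, M=32, mu0=0.01, alpha=0.97,
               gamma=4.8e-4, mu_min=1e-5, mu_max=0.05,
               variable_step=True):
    """Adaptive noise canceller; returns the error e(n), which is the
    cleaned-speech estimate (see Figure 1)."""
    N = len(primary)
    w = np.zeros(M)        # filter tap weights w(n)
    mu = mu0               # step size (held fixed when variable_step=False)
    e = np.zeros(N)
    for n in range(M, N):
        x = reference[n - M + 1:n + 1][::-1]   # input vector Phi(n), newest first
        y = w @ x                              # filter output
        e[n] = primary[n] - y                  # error, Eq. (3)
        w = w + mu * e[n] * x                  # LMS weight update, Eq. (4)
        if variable_step:                      # Kwong's VSS recursion, Eq. (5)
            mu = np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max)
    return e
```

With variable_step=False the same loop reduces to the conventional fixed-step LMS of Section II-B; a call such as e = anc_vsslms(noisy_speech, noise_ref) then yields the enhanced speech estimate.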
D. Robust Variable Step-Size LMS (RVSSLMS) Algorithm

A number of time-varying step-size algorithms have been proposed to enhance the performance of the conventional LMS algorithm. Simulation results comparing this algorithm to other variable step-size algorithms clearly indicate its superior performance in stationary environments; in non-stationary environments it performs as well as other variable step-size algorithms, providing performance equivalent to that of the regular LMS algorithm [17]. The adaptation step size is adjusted using the energy of the instantaneous error. The weight update recursion is

w(n+1) = w(n) + μ(n) e(n) X(n),

and the step-size update equation is

μ(n+1) = α μ(n) + γ e²(n),   (7)

where 0 < α < 1 and γ > 0, and μ(n+1) is set to μ_min or μ_max when it falls below or above these lower and upper bounds, respectively. The constant μ_max is normally selected near the point of instability of the conventional LMS algorithm to provide the maximum possible convergence speed; its value is chosen as a compromise between the desired level of steady-state misadjustment and the required tracking capability of the algorithm. The parameter γ controls both the convergence time and the level of misadjustment. At early stages of adaptation the error is large, so the step size increases and convergence is fast; as the error decreases, the step size decreases, yielding smaller misadjustment near the optimum. However, using the instantaneous error energy to sense the state of the adaptation process does not perform as well as expected in the presence of measurement noise. The output error of the identification system is

e(n) = d(n) - Xᵀ(n) W(n),   (8)

where the desired signal is

d(n) = Xᵀ(n) W*(n) + ξ(n),   (9)

ξ(n) is a zero-mean independent disturbance, and W*(n) is the time-varying optimal weight vector. Substituting (8) and (9) into the step-size recursion gives

μ(n+1) = α μ(n) + γ Vᵀ(n) X(n) Xᵀ(n) V(n) + γ ξ²(n) - 2γ ξ(n) Vᵀ(n) X(n),   (10)

where V(n) = W(n) - W*(n) is the weight error vector. The input autocorrelation matrix, defined as R = E{X(n) Xᵀ(n)}, can be expressed as R = QΛQᵀ, where Λ is the matrix of eigenvalues and Q is the modal matrix of R. Using Ṽ(n) = Qᵀ V(n) and X′(n) = Qᵀ X(n), the statistical behavior of μ(n+1) is determined as

E{μ(n+1)} = α E{μ(n)} + γ (E{ξ²(n)} + E{Ṽᵀ(n) Λ Ṽ(n)}),

where we have used the common independence assumption between Ṽ(n) and X′(n). Clearly, the term E{Ṽᵀ(n) Λ Ṽ(n)} reflects the proximity of the adaptive system to the optimal solution, and μ(n+1) is adjusted accordingly. However, due to the presence of E{ξ²(n)}, the step-size update is not an accurate reflection of the state of adaptation before or after convergence, which reduces the efficiency of the algorithm significantly: close to the optimum, μ(n) will still be large because of the noise term E{ξ²(n)}. To remove this noise sensitivity, the step size is instead driven by the correlation between successive errors and can be rewritten as

μ(n+1) = α μ(n) + γ [E{Vᵀ(n) X(n) Xᵀ(n-1) V(n-1)}]²,   (11)

so that the update of μ(n) depends on how far we are from the optimum and is not affected by the independent disturbance noise. In practice this expectation is replaced by a time average p(n) = β p(n-1) + (1-β) e(n) e(n-1) of the error correlation [17]. Finally, the algorithm involves two additional update equations compared with the standard LMS algorithm, so the added complexity is six multiplications per iteration. These multiplications can be reduced to shifts if the parameters α, β, and γ are chosen as powers of 2.
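The RVSSLMS step-size logic can be isolated in a few lines. The sketch below is a hedged interpretation of [17] as summarized above: it drives μ(n) with the time-averaged error correlation p(n) rather than the instantaneous error energy, consistent with Eq. (11). Parameter values are illustrative assumptions.

```python
import numpy as np

def rvsslms_step(mu, p, e_n, e_prev, alpha=0.97, beta=0.99,
                 gamma=4.8e-4, mu_min=1e-5, mu_max=0.05):
    """One RVSSLMS step-size update; embed it in the LMS loop shown
    earlier in place of Kwong's recursion."""
    p = beta * p + (1.0 - beta) * e_n * e_prev   # averaged error correlation
    mu = np.clip(alpha * mu + gamma * p ** 2,    # robust step-size update,
                 mu_min, mu_max)                 # bounded as in Eq. (7)
    return mu, p
```

Because p(n) averages e(n)e(n-1) rather than e²(n), the zero-mean disturbance ξ(n) averages out of the step-size control, which is precisely the robustness argued above.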
E. Modified Robust Variable Step-Size LMS (MRVSSLMS) Algorithm

Building on the step-size frameworks of the LMS, Kwong's, and RVSSLMS algorithms, the MRVSS step size is given by

μ(n+1) = μ_max if α μ(n) + γ p²(n) > μ_max;  μ_min if α μ(n) + γ p²(n) < μ_min;  α μ(n) + γ p²(n) otherwise,   (12)

p(n+1) = (1 - β(n)) p(n) + β(n) e(n) e(n-1),   (13)

β(n+1) = β_max if η β(n) + λ e²(n) > β_max;  β_min if η β(n) + λ e²(n) < β_min;  η β(n) + λ e²(n) otherwise,   (14)

where 0 < α < 1, γ > 0, 0 < η < 1, λ > 0, 0 < μ_min < μ_max, and 0 < β_min < β_max < 1. Here p(n) is the time average of the error correlation e(n)e(n-1), and β(n) is a time average of the squared error that controls the sensitivity of p(n) to the instantaneous error correlation. The upper bound μ_max satisfies the mean-square stability condition, and the lower bound μ_min guarantees that the excess MSE remains below a tolerable level; the parameter β(n) must lie strictly between zero and one. When the algorithm has converged, the instantaneous error power is very small, the error correlation is insensitive to the instantaneous error, and the accuracy of the error-correlation estimate is enhanced. If the system changes suddenly, the instantaneous error power increases, which enlarges both the error correlation function and the instantaneous error correlation, so the algorithm tracks well. In short, MRVSS combines good tracking ability with good noise immunity, inheriting the advantages of the algorithms proposed in [15] and [17]. Using these strategies, different adaptive noise cancellers are implemented to remove diverse forms of noise from speech signals.
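A compact sketch of the MRVSS recursions (12)-(14) follows; it is intended to replace the step-size line in the LMS loop shown earlier, and all parameter values are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def mrvss_step(mu, p, beta, e_n, e_prev, alpha=0.97, gamma=4.8e-4,
               eta=0.99, lam=1e-4, mu_min=1e-5, mu_max=0.05,
               beta_min=0.01, beta_max=0.99):
    """One joint update of the step size mu(n), the averaged error
    correlation p(n), and its sensitivity weight beta(n)."""
    mu = np.clip(alpha * mu + gamma * p ** 2, mu_min, mu_max)        # Eq. (12)
    p = (1.0 - beta) * p + beta * e_n * e_prev                       # Eq. (13)
    beta = np.clip(eta * beta + lam * e_n ** 2, beta_min, beta_max)  # Eq. (14)
    return mu, p, beta
```

The time-varying β(n) is the design choice that distinguishes MRVSS from RVSSLMS: near convergence β(n) shrinks, so p(n) relies on its history, while a sudden error burst raises β(n) and lets p(n) react quickly.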
III. SIMULATION RESULTS

To show that the RVSSLMS and MRVSSLMS algorithms are appropriate for speech enhancement, we used real speech signals with noise. In the figures, the number of samples is plotted on the x-axis and amplitude on the y-axis. To test convergence performance, we simulated a sudden noise spike at the 4000th sample; from the resulting figure it is clear that the performance of the implemented RVSSLMS and MRVSSLMS algorithms is better than that of the conventional LMS and Kwong's VSSLMS algorithms. To demonstrate the filtering, we considered five speech samples contaminated with various real noises, namely high voltage murmuring and crane noise. For comparison we also considered random noise removal, since the noise added to a speech signal transmitted through free space is generally random in nature. The noisy speech signal is given as input to the adaptive filter structure shown in Figure 1, and a signal somewhat correlated with the noise is given as the reference. As the number of iterations increases, the error decreases and a clean signal can be extracted from the filter output. These simulation results are shown in Figures 3 and 4. To evaluate the performance of the algorithms, the SNRI is measured and tabulated in Tables I, II, and III; a sketch of the SNRI computation is given after Table III.

IV. CONCLUSION

In this paper the problem of noise removal from speech signals using variable step size based adaptive filtering is presented. The same formats for representing the data and the filter coefficients as used for the LMS algorithm were chosen, so the steps related to the filtering remain unchanged. The proposed treatment, however, exploits the modifications in the weight update formula to its advantage and thus increases speed over the respective LMS-based realizations. Our simulations confirm that the MRVSSLMS and RVSSLMS algorithms outperform the conventional LMS and Kwong's VSSLMS algorithms in terms of SNR improvement and convergence rate. Hence these algorithms are acceptable for all practical purposes.

Figure 3: Typical filtering results of high voltage murmuring removal. (a) Speech signal with real noise; (b) recovered signal using the LMS algorithm; (c) recovered signal using Kwong's VSSLMS algorithm; (d) recovered signal using the RVSSLMS algorithm; (e) recovered signal using the MRVSSLMS algorithm.

Figure 4: Typical filtering results of crane noise removal. (a) Speech signal with real noise; (b) recovered signal using the LMS algorithm; (c) recovered signal using Kwong's VSSLMS algorithm; (d) recovered signal using the RVSSLMS algorithm; (e) recovered signal using the MRVSSLMS algorithm.

Table I: SNR contrast for random noise removal (values in dB; "After" is the SNR after filtering, "Imp." the improvement over the unfiltered SNR).

| Sl. No | Sample | Before filtering | LMS After | LMS Imp. | Kwong's VSSLMS After | Kwong's VSSLMS Imp. | RVSSLMS After | RVSSLMS Imp. | MRVSSLMS After | MRVSSLMS Imp. |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | I | 0.7523 | 5.9077 | 5.1553 | 6.5145 | 5.7621 | 9.0738 | 8.3214 | 10.1066 | 9.3542 |
| 2 | II | -2.1468 | 4.1468 | 6.6975 | 5.7103 | 8.2610 | 6.6617 | 9.2154 | 7.9232 | 10.4730 |
| 3 | III | -4.1554 | 1.4826 | 5.6380 | 1.539 | 5.6944 | 3.1546 | 7.3100 | 4.7609 | 8.9163 |
| 4 | IV | -3.6941 | 1.9213 | 5.6154 | 2.0417 | 5.7358 | 3.5682 | 7.2623 | 5.1431 | 8.8372 |
| 5 | V | -5.6992 | 0.5443 | 6.2435 | 2.3337 | 8.0329 | 2.6920 | 8.3912 | 3.8539 | 9.5531 |
| | Average improvement | | | 5.8699 | | 6.6972 | | 8.1000 | | 9.4269 |

Table II: SNR contrast for high voltage murmuring removal (values in dB).

| Sl. No | Sample | Before filtering | LMS After | LMS Imp. | Kwong's VSSLMS After | Kwong's VSSLMS Imp. | RVSSLMS After | RVSSLMS Imp. | MRVSSLMS After | MRVSSLMS Imp. |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | I | -1.5937 | 2.0034 | 3.5971 | 3.0735 | 4.6672 | 4.2078 | 5.8015 | 4.6311 | 6.2248 |
| 2 | II | 0.0705 | 1.7646 | 1.6940 | 1.9657 | 1.8951 | 5.9283 | 5.8577 | 6.5044 | 6.4338 |
| 3 | III | 2.6032 | 4.3508 | 1.7476 | 5.5225 | 2.9193 | 7.4302 | 4.8270 | 7.9161 | 5.3129 |
| 4 | IV | 3.0644 | 4.9673 | 1.9029 | 6.6277 | 3.5633 | 7.4096 | 4.3452 | 8.5129 | 5.4485 |
| 5 | V | 0.9671 | 2.8560 | 1.8888 | 3.0758 | 2.1086 | 7.1156 | 6.1484 | 7.9817 | 7.0145 |
| | Average improvement | | | 2.1660 | | 3.0307 | | 5.3959 | | 6.0869 |

Table III: SNR contrast for crane noise removal (values in dB).

| Sl. No | Sample | Before filtering | LMS After | LMS Imp. | Kwong's VSSLMS After | Kwong's VSSLMS Imp. | RVSSLMS After | RVSSLMS Imp. | MRVSSLMS After | MRVSSLMS Imp. |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | I | 0.5244 | 3.2108 | 2.6863 | 4.1024 | 3.5770 | 4.2822 | 3.7577 | 4.6914 | 4.1669 |
| 2 | II | -1.8459 | 3.2714 | 5.1173 | 5.7327 | 7.5786 | 6.0373 | 7.8832 | 6.7004 | 8.5463 |
| 3 | III | -2.1790 | 3.3691 | 5.5481 | 4.2556 | 6.4346 | 4.3284 | 6.5074 | 4.9409 | 7.1199 |
| 4 | IV | -1.6394 | 2.3560 | 3.9954 | 4.2422 | 5.8816 | 4.4689 | 6.1083 | 5.1134 | 6.7528 |
| 5 | V | -3.6823 | 0.7695 | 4.4518 | 4.9700 | 8.6523 | 5.8311 | 9.5134 | 6.7282 | 10.4109 |
| | Average improvement | | | 4.3597 | | 6.4250 | | 6.7540 | | 7.3993 |
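For reference, the SNRI figures in Tables I-III follow the usual definition: output SNR after filtering minus input SNR before filtering, in dB. The sketch below assumes a time-aligned clean reference s, noisy input d, and filter output e; this is the conventional definition, not code from the paper.

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in dB of a signal relative to a noise component."""
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

def snr_improvement(s, d, e):
    """SNRI = SNR after filtering minus SNR before filtering (dB)."""
    return snr_db(s, e - s) - snr_db(s, d - s)
```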
REFERENCES

[1] B. Widrow, J. Glover, J. M. McCool, J. Kaunitz, C. S. Williams, R. H. Hearn, J. R. Zeidler, E. Dong, and R. Goodlin, "Adaptive noise cancelling: Principles and applications," Proc. IEEE, vol. 63, pp. 1692-1716, Dec. 1975.
[2] B. L. Sim, Y. C. Tong, J. S. Chang, and C. T. Tan, "A parametric formulation of the generalized spectral subtraction method," IEEE Trans. on Speech and Audio Processing, vol. 6, pp. 328-337, 1998.
[3] I. Y. Soon, S. N. Koh, and C. K. Yeo, "Noisy speech enhancement using discrete cosine transform," Speech Communication, vol. 24, pp. 249-257, 1998.
[4] H. Sheikhzadeh and H. R. Abutalebi, "An improved wavelet-based speech enhancement system," Proc. of Eurospeech, 2001.
[5] S. Salahuddin, S. Z. Al Islam, M. K. Hasan, and M. R. Khan, "Soft thresholding for DCT speech enhancement," Electronics Letters, vol. 38, no. 24, pp. 1605-1607, 2002.
[6] J. Homer, "Quantifying the convergence speed of LMS adaptive filter with autoregressive inputs," Electronics Letters, vol. 36, no. 6, pp. 585-586, March 2000.
[7] H. C. Y. Gu, K. Tang, and W. Du, "Modifier formula on mean square convergence of LMS algorithm," Electronics Letters, vol. 38, no. 19, pp. 1147-1148, Sep. 2002.
[8] M. Chakraborty and H. Sakai, "Convergence analysis of a complex LMS algorithm with tonal reference signals," IEEE Trans. on Speech and Audio Processing, vol. 13, no. 2, pp. 286-292, March 2005.
[9] S. Olmos, L. Sornmo, and P. Laguna, "Block adaptive filter with deterministic reference inputs for event-related signals: BLMS and BRLS," IEEE Trans. Signal Processing, vol. 50, pp. 1102-1112, May 2002.
[10] Jamal Ghasemi and Mohammad Reza Karami Mollaei, "A new approach for speech enhancement based on eigenvalue spectral subtraction," Signal Processing: An International Journal, vol. 3, issue 4, pp. 34-41.
[11] Mohamed Anouar Ben Messaoud, Aïcha Bouzid, and Noureddine Ellouze, "A new method for pitch tracking and voicing decision based on spectral multi-scale analysis," Signal Processing: An International Journal, vol. 3, issue 5, pp. 144-152.
[12] M. Satya Sai Ram, P. Siddaiah, and M. Madhavi Latha, "Usefulness of speech coding in voice banking," Signal Processing: An International Journal, vol. 3, issue 4, pp. 42-52.
[13] Yonggang Zhang, Ning Li, Jonathon A. Chambers, and Yanling Hao, "New gradient-based variable step size LMS algorithms," EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 529480, 9 pages, doi:10.1155/2008/529480.
[14] S. Karni and G. Zeng, "A new convergence factor for adaptive filters," IEEE Transactions on Circuits and Systems, vol. 36, no. 7, pp. 1011-1012, 1989.
[15] R. H. Kwong and E. W. Johnson, "A variable step-size LMS algorithm," IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1633-1642, 1992.
[16] V. J. Mathews and Z. Xie, "A stochastic gradient adaptive filter with gradient adaptive step-size," IEEE Transactions on Signal Processing, vol. 41, no. 6, pp. 2075-2087, 1993.
[17] T. Aboulnasr and K. Mayyas, "A robust variable step-size LMS-type algorithm: Analysis and simulations," IEEE Transactions on Signal Processing, vol. 45, no. 3, pp. 631-639, 1997.
[18] G. V. S. Karthik, M. Ajay Kumar, and Md. Zia Ur Rahman, "Speech enhancement using gradient based variable step size adaptive filtering techniques," International Journal of Computer Science & Emerging Technologies, vol. 2, issue 1, pp. 168-177, February 2011.
[19] Md. Zia Ur Rahman, K. Murali Krishna, G. V. S. Karthik, M. John Joseph, and M. Ajay Kumar, "Non stationary noise cancellation in speech signals using an efficient variable step size higher order filter," International Journal of Research and Reviews in Computer Science, vol. 2, no. 1, 2011.
[20] Md. Zia Ur Rahman et al., "Filtering non stationary noise in speech signals using computationally efficient unbiased and normalized algorithm," International Journal on Computer Science and Engineering, vol. 3, no. 3, pp. 1106-1113, March 2011.
