Combined Adaptive Filter with LMS-Based Algorithms
Božo Krstajić, Ljubiša Stanković, and Zdravko Uskoković
Abstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them. As a criterion for comparison of the considered algorithms in the proposed filter, we take the ratio between the bias and the variance of the weighting coefficients. Simulation results confirm the advantages of the proposed adaptive filter.
Keywords: Adaptive filter, LMS algorithm, Combined algorithm, Bias and variance trade-off
1. Introduction
Adaptive filters have been applied in signal processing and control, as well as in many practical problems [1, 2]. The performance of an adaptive filter depends mainly on the algorithm used for updating the filter weighting coefficients. The most commonly used adaptive systems are those based on the Least Mean Square (LMS) adaptive algorithm and its modifications (LMS-based algorithms).
The LMS algorithm is simple to implement and robust in a number of applications [1–3]. However, since it does not always converge in an acceptable manner, there have been many attempts to improve its performance by appropriate modifications: the sign algorithm (SA) [8], the geometric mean LMS (GLMS) [5], and the variable step-size LMS (VS LMS) [6, 7].
Each of the LMS-based algorithms has at least one parameter that should be defined prior to the adaptation procedure (the step for the LMS and the SA; the step and the smoothing coefficient for the GLMS; various parameters affecting the step for the VS LMS). These parameters crucially influence the filter output during the two adaptation phases: transient and steady state. The choice of these parameters is mostly based on a trade-off between the quality of the algorithm performance in the two adaptation phases.
We propose a possible approach to improving the performance of LMS-based adaptive filters. Namely, we combine several LMS-based FIR filters with different parameters, and provide a criterion for choosing the most suitable algorithm in different adaptation phases. This method may be applied to all LMS-based algorithms, although here we consider only several of them.
The paper is organized as follows. An overview of the considered LMS-based algorithms is given in Section 2. Section 3 proposes the criterion for evaluation and combination of adaptive algorithms. An illustration of the combined adaptive filter is given in Section 4, and simulation results are presented in Section 5.
2. LMS-based algorithms
Let us define the input signal vector X_k = [x(k) x(k−1) ... x(k−N+1)]^T and the vector of weighting coefficients as W_k = [W_0(k) W_1(k) ... W_{N−1}(k)]^T. The weighting coefficients vector should be calculated according to:

W_{k+1} = W_k + 2μ E{e_k X_k},   (1)

where μ is the algorithm step, E{·} is an estimate of the expected value, e_k = d_k − W_k^T X_k is the error at the instant k, and d_k is a reference signal. Depending on the estimation of the expected value in (1), one defines various forms of adaptive algorithms: the LMS, with E{e_k X_k} = e_k X_k; the GLMS, with E{e_k X_k} = a Σ_{i=0}^{k} (1−a)^i e_{k−i} X_{k−i}, 0 < a ≤ 1; and the SA, with E{e_k X_k} = X_k sign(e_k). The VS LMS has the same form as the LMS, but during the adaptation the step μ(k) is changed [6, 7].
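To make the update rules concrete, here is a minimal sketch of the update (1) with the three estimates of E{e_k X_k} listed above. The helper names, the recursive form of the GLMS average, and the default parameter values are our own illustrative choices, not from the paper:

    import numpy as np

    def lms_gradient(e_k, x_k):
        # LMS: instantaneous estimate, E{e_k X_k} = e_k X_k
        return e_k * x_k

    def sa_gradient(e_k, x_k):
        # SA: E{e_k X_k} = X_k sign(e_k)
        return x_k * np.sign(e_k)

    def make_glms_gradient(n, a=0.1):
        # GLMS: a * sum_{i=0..k} (1-a)^i e_{k-i} X_{k-i}, maintained as a
        # running geometric average (n: filter length, a: smoothing coefficient).
        state = np.zeros(n)
        def gradient(e_k, x_k):
            state[:] = (1.0 - a) * state + a * e_k * x_k
            return state
        return gradient

    def lms_family_update(w, x_k, d_k, mu, gradient):
        # One iteration of (1): W_{k+1} = W_k + 2*mu*E{e_k X_k}
        e_k = d_k - w @ x_k
        return w + 2.0 * mu * gradient(e_k, x_k), e_k

The VS LMS would use lms_gradient with a step mu that varies from iteration to iteration.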
The considered adaptive filtering problem consists in trying to adjust a set of weighting coefficients so that the system output, y_k = W_k^T X_k, tracks a reference signal, assumed as d_k = W_k^{*T} X_k + n_k, where n_k is a zero-mean Gaussian noise with the variance σ_n^2, and W_k^* is the optimal weight vector (Wiener vector). Two cases will be considered: W_k^* = W^* is a constant (stationary case), and W_k^* is time-varying (nonstationary case). In the nonstationary case the unknown system parameters (i.e., the optimal vector W_k^*) are time-variant. It is often assumed that the variation of W_k^* may be modeled as W_{k+1}^* = W_k^* + Z_k, where Z_k is a zero-mean random perturbation, independent of X_k and n_k, with the autocorrelation matrix G = E{Z_k Z_k^T} = σ_Z^2 I. Note that the analysis for the stationary case follows directly for σ_Z^2 = 0. The weighting coefficient vector converges to the Wiener one if the condition from [1, 2] is satisfied.
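A small helper, under the model just described, that synthesizes the input vectors, the (possibly random-walk) optimal vector, and the noisy reference; the function name, dimensions, and seed handling are illustrative assumptions:

    import numpy as np

    def make_system(T, N=4, sigma_n2=0.01, sigma_z2=0.0, seed=0):
        # Generates X_k, W*_k (a random walk when sigma_z2 > 0, constant
        # otherwise) and the reference d_k = W*_k^T X_k + n_k.
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(T + N)              # scalar input stream
        w_opt = rng.standard_normal(N)              # initial Wiener vector
        X, D, W_opt = [], [], []
        for k in range(T):
            x_k = x[k:k + N][::-1]                  # newest-to-oldest window
            n_k = np.sqrt(sigma_n2) * rng.standard_normal()
            X.append(x_k)
            W_opt.append(w_opt.copy())
            D.append(w_opt @ x_k + n_k)
            if sigma_z2 > 0:                        # nonstationary case
                w_opt = w_opt + np.sqrt(sigma_z2) * rng.standard_normal(N)
        return np.array(X), np.array(D), np.array(W_opt)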
Define the weighting coefficient misalignment [1–3], V_k = W_k − W_k^*. It is due to both the effects of gradient noise (weighting coefficient variations around the average value) and of the weighting vector lag (difference between the average and the optimal value) [3]. It can be expressed as:

V_k = (W_k − E{W_k}) + (E{W_k} − W_k^*).   (2)
According to (2), the i-th element of V_k is:

V_i(k) = (E{W_i(k)} − W_i^*(k)) + (W_i(k) − E{W_i(k)}) = bias(W_i(k)) + ρ_i(k),   (3)

where bias(W_i(k)) is the weighting coefficient bias and ρ_i(k) is a zero-mean random variable with the variance σ^2. The variance depends on the type of LMS-based algorithm, as well as on the external noise variance σ_n^2. Thus, if the noise variance is constant or slowly varying, σ^2 is time-invariant for a particular LMS-based algorithm. In that sense, in the analysis that follows we will assume that σ^2 depends only on the algorithm type, i.e., on its parameters.
An important performance measure for an adaptive filter is the mean square deviation (MSD) of its weighting coefficients. For the adaptive filters, it is given by [3]: MSD = lim_{k→∞} E{V_k^T V_k}.
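In simulations the MSD is approached empirically. Below is a sketch of the per-run squared misalignment, whose average over independent runs estimates E{V_k^T V_k} and which, as we read the later sections, underlies the reported AMSD figures; the function name is ours:

    import numpy as np

    def squared_misalignment(w_history, w_opt_history):
        # ||V_k||^2 = ||W_k - W*_k||^2 per iteration, for one realization;
        # average these curves over Monte Carlo runs to estimate E{V_k^T V_k}.
        v = np.asarray(w_history) - np.asarray(w_opt_history)
        return np.sum(v * v, axis=1)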
3. Combined adaptive filter
The basic idea of the combined adaptive filter lies in the parallel implementation of two or more adaptive LMS-based algorithms, with the choice of the best among them in each iteration [9]. The choice of the most appropriate algorithm, in each iteration, reduces to the choice of the best value for the weighting coefficients. The best weighting coefficient is the one that is, at a given instant, the closest to the corresponding value of the Wiener vector.
Let W_i(k, q) be the i-th weighting coefficient of an LMS-based algorithm with the chosen parameter q at an instant k. Note that one may now treat all the algorithms in a unified way (LMS: q ≡ μ; GLMS: q ≡ a; SA: q ≡ μ). The behavior of an LMS-based algorithm crucially depends on q. In each iteration there is an optimal value q_opt, producing the best performance of the adaptive algorithm. Analyze now a combined adaptive filter, with several LMS-based algorithms of the same type, but with different parameters q.
The weighting coefficients are random variables distributed around W_i^*(k), with bias(W_i(k, q)) and the variance σ_q^2, related by [4, 9]:

|W_i(k, q) − W_i^*(k) − bias(W_i(k, q))| ≤ κσ_q,   (4)

where (4) holds with the probability P(κ), dependent on κ. For example, for κ = 2 and a Gaussian distribution, P(κ) = 0.95 (the two-sigma rule).
Define the confidence intervals for W_i(k, q) [4, 9]:

D_i(k) = [W_i(k, q) − 2κσ_q, W_i(k, q) + 2κσ_q].   (5)

Then, from (4) and (5) we conclude that, as long as bias(W_i(k, q)) is small with respect to κσ_q, W_i^*(k) lies inside D_i(k). This means that, for a small bias, the confidence intervals of different LMS-based algorithms of the same type intersect. On the other hand, when the bias becomes large, the central positions of the intervals are far apart, and they do not intersect.
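Whether two such intervals intersect reduces to comparing the distance between their centers with the sum of their half-widths; a one-function sketch (this anticipates check (6) of the algorithm below):

    def intervals_intersect(w_m, sigma_m, w_l, sigma_l, kappa):
        # Intervals D_i of half-width 2*kappa*sigma intersect iff the distance
        # between their centers is at most the sum of the half-widths.
        return abs(w_m - w_l) <= 2.0 * kappa * (sigma_m + sigma_l)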
Since we do not have a priori information about bias(W_i(k, q)), we will use a specific statistical approach to get the criterion for the choice of the adaptive algorithm, i.e., for the value of q.
The criterion follows from the trade-off condition that the bias and the variance are of the same order of magnitude, i.e., bias(W_i(k, q)) ≅ κσ_q [4].
The proposed combined algorithm (CA) can now be summarized in the following steps:
Step 1. Calculate W_i(k, q) for the algorithms with different q's from the predefined set Q = {q_1, q_2, ..., q_L}.
Step 2. Estimate the variance σ_q^2 for each considered algorithm.
Step 3. Check if the D_i(k) intersect for the considered algorithms. Start from the algorithm with the largest value of variance, and go toward the ones with smaller values of variance. According to (4), (5) and the trade-off criterion, this check reduces to checking whether

|W_i(k, q_m) − W_i(k, q_l)| ≤ 2κ(σ_{q_m} + σ_{q_l})   (6)

is satisfied, where q_m, q_l ∈ Q, and the following relation holds: ∀q_h: σ_{q_m}^2 > σ_{q_h}^2 > σ_{q_l}^2 ⇒ q_h ∉ Q (i.e., q_m and q_l are adjacent in the variance ordering).
If no D_i(k) intersect (large bias), choose the algorithm with the largest value of variance. If the D_i(k) intersect, the bias is already small, so check a new pair of weighting coefficients or, if that was the last pair, just choose the algorithm with the smallest variance. The first two intervals that do not intersect mean that the proposed trade-off criterion is achieved, and the algorithm with the larger variance is chosen (a sketch of this selection is given after Step 4 below).
Step 4. Go to the next instant of time.
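A compact sketch of the per-coefficient selection in Steps 3–4 follows. It assumes the per-algorithm variances were already estimated in Step 2 and that the candidates are ordered from the largest to the smallest variance; the function name and the data layout are our own:

    def choose_coefficient(w_candidates, sigmas, kappa):
        # w_candidates: the i-th weighting coefficient from each parallel
        # algorithm; sigmas sorted so that sigmas[0] >= sigmas[1] >= ...
        # Returns the index of the algorithm whose coefficient the CA adopts.
        for m in range(len(w_candidates) - 1):
            l = m + 1
            # Trade-off check (6): do the confidence intervals D_i intersect?
            if (abs(w_candidates[m] - w_candidates[l])
                    > 2.0 * kappa * (sigmas[m] + sigmas[l])):
                # First non-intersecting pair: bias dominates, so take the
                # larger-variance (faster-tracking) algorithm.
                return m
        # All intervals intersect: bias is small, take the smallest variance.
        return len(w_candidates) - 1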
The smallest number of elements of the set Q is L = 2. In that case, one of the q's should provide good tracking of rapid variations (the largest variance), while the other should provide a small variance in the steady state. Observe that by adding a few more q's between these two extremes, one may slightly improve the transient behavior of the algorithm.
Note that the only unknown values in (6) are the variances. In our simulations we estimate σ_q as in [4]:

σ_q = median(|W_i(k) − W_i(k−1)|; k = 1, 2, ..., L) / (0.6745·√2).   (7)
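A direct transcription of (7), assuming the first several values of one coefficient trajectory are available; the 0.6745 factor is the standard constant mapping the median absolute value of a zero-mean Gaussian sequence to its standard deviation, and √2 accounts for taking differences of successive samples:

    import numpy as np

    def estimate_sigma_q(w_i_history):
        # w_i_history: successive values W_i(0), W_i(1), ..., W_i(L) of one
        # weighting coefficient during the initial iterations.
        diffs = np.abs(np.diff(np.asarray(w_i_history)))
        return np.median(diffs) / (0.6745 * np.sqrt(2.0))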
The alternative way is to estimate σ_n^2 as:

σ_n^2 ≈ (1/T) Σ_{i=1}^{T} e_i^2, for x(i) = 0.   (8)

Expressions relating σ_n^2 and σ_q^2 in the steady state, for different types of LMS-based algorithms, are known from the literature. For the standard LMS algorithm in the steady state, they are related by σ_q^2 = q σ_n^2 [3]. Note that any other estimation of σ_q^2 is valid for the proposed filter.
The complexity of the CA depends on the constituent algorithms (Step 1) and on the decision algorithm (Step 3). Calculation of the weighting coefficients for the parallel algorithms does not increase the calculation time, since it is performed by a parallel hardware realization, thus increasing the hardware requirements instead. The variance estimations (Step 2) contribute negligibly to the algorithm complexity, because they are performed only at the very beginning of the adaptation and use separate hardware. A simple analysis shows that the CA increases the number of operations by, at most, N(L−1) additions and N(L−1) IF decisions, and needs some additional hardware with respect to the constituent algorithms.
4. Illustration of combined adaptive filter
Consider a system identification by the combination of two LMS algorithms with different steps. Here, the parameter q is μ, i.e., Q = {q_1, q_2} = {μ, μ/10}. The unknown system has four time-invariant coefficients, and the FIR filters are with N = 4. We give the average mean square deviation (AMSD) for both individual algorithms, as well as for their combination, Fig. 1(a). Results are obtained by averaging over 100 independent runs (the Monte Carlo method), with μ = 0.1. The reference d_k is corrupted by a zero-mean uncorrelated Gaussian noise with σ_n^2 = 0.01 and SNR = 15 dB, and κ is 1.75. In the first 30 iterations the variance was estimated according to (7), and the CA picked the weighting coefficients calculated by the LMS with μ.
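Putting the pieces together, a minimal end-to-end sketch of this experiment (two parallel LMS branches plus the selection rule) might look as follows. It reuses the choose_coefficient helper from Section 3 and input/reference data such as produced by the make_system helper above; the warm-up handling is our simplification of the 30-iteration variance-estimation phase:

    import numpy as np

    def run_combined_lms(X, D, mus=(0.1, 0.01), kappa=1.75, warmup=30):
        # mus ordered so that branch variances decrease (large step first).
        N = X.shape[1]
        W = np.zeros((len(mus), N))          # one weight vector per branch
        hist = [[] for _ in mus]             # one coefficient tracked for (7)
        sigmas = np.ones(len(mus))
        w_ca = np.zeros(N)
        out = []
        for k, (x_k, d_k) in enumerate(zip(X, D)):
            for j, mu in enumerate(mus):     # Step 1: parallel updates (1)
                e = d_k - W[j] @ x_k
                W[j] += 2.0 * mu * e * x_k
                if k <= warmup:
                    hist[j].append(W[j, 0])
            if k == warmup:                  # Step 2: variance estimates via (7)
                for j in range(len(mus)):
                    d = np.abs(np.diff(np.asarray(hist[j])))
                    sigmas[j] = np.median(d) / (0.6745 * np.sqrt(2.0))
            if k < warmup:
                w_ca = W[0].copy()           # warm-up: take the large-step LMS
            else:                            # Step 3: per-coefficient choice
                for i in range(N):
                    m = choose_coefficient(W[:, i], sigmas, kappa)
                    w_ca[i] = W[m, i]
            out.append(w_ca.copy())          # Step 4: next instant of time
        return np.array(out)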
As presented in Fig. 1(a), the CA first uses the LMS with μ and then, in the steady state, the LMS with μ/10. Note the region between the 200th and the 400th iteration, where the algorithm can take the LMS with either step size, in different realizations. Here, the performance of the CA could be improved by increasing the number of parallel LMS algorithms with steps between these two extremes. Observe also that, in the steady state, the CA does not ideally pick the LMS with the smaller step. The reason is the statistical nature of the approach.
The combined adaptive filter achieves even better performance if the individual algorithms, instead of starting an iteration with the coefficient values taken from their own previous iteration, take the ones chosen by the CA. Namely, if the CA chooses, in the k-th iteration, the weighting coefficient vector W_k^P, then each individual algorithm calculates its weighting coefficients in the (k+1)-th iteration according to:

W_{k+1} = W_k^P + 2μ E{e_k X_k}.   (9)
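In the sketch above, this modification amounts to resetting every branch to the CA's choice before the next update; a hedged reading of (9):

    def apply_feedback(W, w_ca):
        # Modification (9): every parallel branch starts the next iteration
        # from the coefficients chosen by the CA, not from its own state.
        for j in range(W.shape[0]):
            W[j] = w_ca
        return W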
Fig. 1. Average MSD for considered algorithms.
Fig. 2. Average MSD for considered algorithms.
Fig. 1(b) shows this improvement, applied to the previous example. In order to clearly compare the obtained results, for each simulation we calculated the AMSD. For the first LMS (μ) it was AMSD = 0.02865, for the second LMS (μ/10) it was AMSD = 0.20723, for the CA (CoLMS) it was AMSD = 0.02720, and for the CA with the modification (9) it was AMSD = 0.02371.
5. Simulation results
The proposed combined adaptive filter with various types of LMS-based algorithms is implemented for the stationary and nonstationary cases in a system identification setup. The performance of the combined filter is compared with that of the individual algorithms that compose the particular combination.
In all simulations presented here, the reference d_k is corrupted by a zero-mean uncorrelated Gaussian noise with σ_n^2 = 0.1 and SNR = 15 dB. Results are obtained by averaging over 100 independent runs, with N = 4, as in the previous section.
(a) Time-varying optimal weighting vector: The proposed idea may be applied to the SA algorithms in the nonstationary case. In the simulation, the combined filter is composed of three SA adaptive filters with different steps, i.e., Q = {μ, μ/2, μ/8}, μ = 0.2. The optimal vector is generated according to the presented model with σ_Z^2 = 0.001, and with κ = 2. In the first 30 iterations the variance was estimated according to (7), and the CA takes the coefficients of the SA with μ (SA1).
Figure 2(a) shows the AMSD characteristics for each algorithm. In the steady state the CA does not ideally follow the SA3 with μ/8, because of the nonstationary nature of the problem and the relatively small difference between the coefficient variances of the SA2 and the SA3. However, this does not affect the overall performance of the proposed algorithm. The AMSD for each considered algorithm was: AMSD = 0.4129 (SA1, μ), AMSD = 0.4257 (SA2, μ/2), AMSD = 1.6011 (SA3, μ/8), and AMSD = 0.2696 (Comb).
(b) Comparison with the VS LMS algorithm [6]: In this simulation we take the improved CA (9) from Section 4, and compare its performance with the VS LMS algorithm [6] in the case of abrupt changes of the optimal vector. Since the considered VS LMS algorithm [6] updates its step size for each weighting coefficient individually, the comparison of these two algorithms is meaningful. All the parameters for the improved CA are the same as in Section 4. For the VS LMS algorithm [6], the relevant parameter values are the counter of sign change m_0 = 11, and the counter of sign continuity m_1 = 7. Figure 2(b) shows the AMSD for the compared algorithms, where one can observe the favorable properties of the CA, especially after the abrupt changes. Note that the abrupt changes are generated by multiplying all the system coefficients by −1 at the 2000th iteration (Fig. 2(b)). The AMSD for the VS LMS was AMSD = 0.0425, while its value for the CA (CoLMS) was AMSD = 0.0323.
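For orientation only, here is a hypothetical paraphrase of the per-coefficient step control described for [6]: the step shrinks after m_0 consecutive sign changes of the gradient component and grows after m_1 consecutive equal signs. The exact rule, thresholds, and bounds should be taken from [6]; everything below is our assumption:

    def vs_step_update(mu_i, sign_history, m0=11, m1=7,
                       mu_min=1e-4, mu_max=0.5):
        # sign_history: recent signs (+1/-1) of e_k * x(k-i) for coefficient i.
        recent = sign_history[-max(m0, m1):]
        alternating = len(recent) >= m0 and all(
            a != b for a, b in zip(recent[-m0:], recent[-m0 + 1:]))
        constant = len(recent) >= m1 and len(set(recent[-m1:])) == 1
        if alternating:                      # near the optimum: reduce step
            mu_i = max(mu_i / 2.0, mu_min)
        elif constant:                       # consistent drift: enlarge step
            mu_i = min(mu_i * 2.0, mu_max)
        return mu_i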
For a complete comparison of these algorithms we now consider their computational complexity, expressed by the respective increase in the number of operations with respect to the LMS algorithm. The CA increases the number of required operations by N additions and N IF decisions. For the VS LMS algorithm, the respective increase is: 3N multiplications, N additions, and at least 2N IF decisions. These values show the advantage of the CA with respect to the computational complexity.
6. Conclusion
A combination of the LMS-based algorithms is proposed, resulting in an adaptive system that takes on the favorable properties of these algorithms in tracking parameter variations. In the course of the adaptation procedure it chooses the better algorithms, all the way to the steady state, when it takes the algorithm with the smallest variance of the weighting coefficient deviations from the optimal value.
Acknowledgement. This work is supported by the Volkswagen Stiftung, Federal Republic of Germany.
References
[1] Widrow, B.; Stearns, S. D.: Adaptive Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1985.
[2] Alexander, S. T.: Adaptive Signal Processing – Theory and Applications. Springer-Verlag, New York, 1986.
[3] Widrow, B.; McCool, J. M.; Larimore, M. G.; Johnson, C. R.: Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter. Proc. IEEE 64 (1976), 1151–1161.
[4] Stankovic, L. J.; Katkovnik, V.: Algorithm for the Instantaneous Frequency Estimation Using Time-Frequency Distributions with Variable Window Width. IEEE SP Letters 5 (1998), 224–227.
[5] Krstajic, B.; Uskokovic, Z.; Stankovic, L. J.: GLMS Adaptive Algorithm in Linear Prediction. Proc. CCECE'97, Vol. I, pp. 114–117, Canada, 1997.
[6] Harris, R. W.; Chabries, D. M.; Bishop, F. A.: A Variable Step (VS) Adaptive Filter Algorithm. IEEE Trans. ASSP 34 (1986), 309–316.
[7] Aboulnasr, T.; Mayyas, K.: A Robust Variable Step-Size LMS-Type Algorithm: Analysis and Simulations. IEEE Trans. SP 45 (1997), 631–639.
[8] Mathews, V. J.; Cho, S. H.: Improved Convergence Analysis of Stochastic Gradient Adaptive Filters Using the Sign Algorithm. IEEE Trans. ASSP 35 (1987), 450–454.
[9] Krstajic, B.; Stankovic, L. J.; Uskokovic, Z.; Djurovic, I.: Combined Adaptive System for Identification of Unknown Systems with Varying Parameters in a Noisy Environment. Proc. IEEE ICECS'99, Paphos, Cyprus, Sept. 1999.