Performance Analysis of Orthogonal Gradient Sign Algorithm Using Spline-based Hammerstein Model for Smart Application

Suchada Sitjongsataporn1,* and Sethakarn Prongnuch2

1Department of Electronic Engineering, Mahanakorn Institute of Innovation (MII), Faculty of Engineering and Technology, Mahanakorn University of Technology, 140 Cheumsamphan Rd., Nongchok, Bangkok, Thailand
2Department of Robotics Engineering, Faculty of Industrial Technology, Suan Sunandha Rajabhat University, 1 U-Thong Nok Rd., Dusit, Bangkok, Thailand
E-mail: ssuchada@mut.ac.th; sethakarn.pr@ssru.ac.th
*Corresponding Author

Received 30 July 2021; Accepted 20 January 2022; Publication 17 March 2022

Abstract

This paper presents a spline-based Hammerstein model for adaptive filtering based on a sign algorithm combined with the normalised orthogonal gradient algorithm. The spline-based Hammerstein architecture consists of an interpolated spline-based adaptive lookup table as the nonlinear part and an adaptive finite impulse response (FIR) filter as the linear part. The Hammerstein spline adaptive filter (HSAF) is a nonlinear filter for nonlinear systems whose advantages include low computational cost and high performance. The adaptive lookup table and the spline control points are derived with an orthogonal gradient-based mechanism. Performance analysis in terms of convergence properties and mean square behaviour under the mean square error (MSE) constraint is proven using the Taylor series expansion of the estimation error in the form of the excess MSE. Experimental results indicate that the proposed algorithm is robust and provides better performance than other models based on the conventional least mean square Hammerstein spline adaptive filtering algorithm.

Keywords: Hammerstein model, spline adaptive filtering, sign algorithm, orthogonal gradient adaptive algorithm, nonlinear systems.

1 Introduction

Recently, nonlinear system identification has attracted interest for solving system modelling problems. A class of spline-based adaptive filtering (SAF) structures [1–4] has been presented. SAF is a type of nonlinear spline adaptive filter detailed in [2], whose adaptive learning performance for nonlinear system modelling is presented in [3]. A normalised version of least mean square for SAF has been proposed in [4, 5] for nonlinear system identification.

The SAF architecture has been adapted in several fields of engineering, such as nonlinear system identification using infinite impulse response (IIR) structures [6, 7], identification against impulsive noise [8], system identification [9] and multilayer feedforward networks [10]. SAF based on IIR nonlinear filtering is proposed to solve Wiener nonlinear system identification in [6]. A set-membership framework with a least-M estimate approach has been presented as a combined method [8] to achieve effective suppression of, and fast convergence under, impulsive noise. A normalised version of the orthogonal gradient-based adaptive algorithm applied to SAF [9] has been presented for system identification. An adaptive spline activation function based on neural networks [10] is presented to solve real-time data processing problems.

The Hammerstein model based on SAF (HSAF) is a kind of nonlinear model applied in several nonlinear systems [11, 12], such as the Hammerstein uniform cubic SAF [11] and digital cancellation in full-duplex devices [12]. HSAF based on the stochastic gradient descent algorithm with the normalised least mean square (NLMS) algorithm has been presented in [13].

The Hammerstein spline-based adaptive filter (HSAF) architecture consists of an interpolated spline-based adaptive lookup table as the nonlinear part and an adaptive finite impulse response (FIR) filter as the linear part. HSAF has been derived with a memoryless function based on a uniform cubic spline function [11], and the results reveal that the filter adapts properly during the learning process using the gradient approach. For a self-interference canceller, an HSAF algorithm applied to full-duplex devices has been proposed to reduce the complexity compared with the common solution [12]. Performance results from an in-band full-duplex prototype demonstrated that it can obtain the same performance with lower computational complexity.

For fast convergence, the orthogonal gradient-based adaptive algorithm (OGA) has been verified in [14, 15]. Based on orthogonal projection, the author in [15] derived a greedy approach for the convergence analysis. Given the advantages of Hammerstein spline-based filtering and the fast convergence of OGA-based methods, the coefficient adaptation process can be applied during learning with low computational complexity.

This paper is organised as follows. The system model and the proposed Hammerstein spline-based adaptive filtering based on the sign algorithm and the normalised orthogonal gradient-based adaptive algorithm are described in Section 2. Convergence properties in terms of stability and mean square analysis are derived in Section 3. Numerical results are shown in Section 4. The conclusion is summarised in Section 5. In this paper, vectors are denoted by boldface lowercase letters and matrices by boldface capital letters. In addition, ⌊·⌋, |·| and (·)^T stand for the floor operator, absolute value and transpose operator, respectively.

2 Proposed Spline-based Hammerstein Algorithm and System Model

2.1 Spline Interpolation

General types of piece-wise polynomials in spline interpolation are applied to interpolate the input under smoothness and continuity constraints [1]. That means a nonlinear system can be modelled with low-order piece-wise polynomials through a set of adaptive control points in the system.

Therefore, the spline curve is a combination of spline segments between knots. The particular area between the i-th and (i+1)-th knot is called the 'i-th span'. The local abscissa u(k) and the span index i(k) are given by

u(k) = x(k)/Δx − ⌊x(k)/Δx⌋, (1)
i(k) = ⌊x(k)/Δx⌋ + (Q−1)/2, (2)

where x(k) is the input signal, Δx is the uniform space between successive knots and Q is the number of knots.
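As an illustration, the span parameters of Eqs. (1)–(2) can be computed as follows; this is a minimal sketch in which Δx = 0.2 matches the initialisation used later, while Q = 23 is an illustrative knot count, not a value from the paper.

```python
import numpy as np

# Sketch of Eqs. (1)-(2): local abscissa u(k) and span index i(k) of a
# uniform spline lookup table. Q = 23 is an illustrative knot count;
# dx = 0.2 matches the initialisation Delta_x = 0.2 used later.
def span_parameters(x_k, dx=0.2, Q=23):
    z = x_k / dx
    u = z - np.floor(z)                  # fractional part, Eq. (1)
    i = int(np.floor(z)) + (Q - 1) // 2  # span index, Eq. (2)
    return u, i

u, i = span_parameters(0.33)
# 0.33/0.2 = 1.65, so u = 0.65 and i = 1 + 11 = 12
```

The offset (Q−1)/2 simply centres the lookup table so that negative inputs map to valid span indices.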

The spline interpolation output is given in matrix notation as [2]

s_i(k) = u^T(k) C q_i(k), (3)

where the output s_i(k) is selected by the span index i at the present iteration k, with 0 ≤ i ≤ Q−1. The real-valued control-point vector is q(k) ∈ ℝ^(Q×1), C ∈ ℝ^((P+1)×(P+1)) is the spline basis matrix, and u(k) ∈ ℝ^((P+1)×1) is the local abscissa vector, where

q(k) = [q_0 q_1 … q_(Q−1)]^T, (4)
u(k) = [u^P(k) u^(P−1)(k) … u(k) 1]^T, (5)

where P denotes the piece-wise degree of the spline curve.
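A minimal sketch of Eq. (3) with the cubic B-spline basis of Eq. (62); the control-point values are illustrative. A useful sanity check is the partition-of-unity property of the B-spline basis: with all four control points equal to a constant, the output equals that constant for any u.

```python
import numpy as np

# Sketch of Eq. (3): s_i(k) = u^T(k) C q_i(k) for a cubic spline (P = 3),
# using the B-spline basis matrix C_B of Eq. (62). q_span holds the four
# control points of the current span; the values used are illustrative.
C_B = (1.0 / 6.0) * np.array([[-1.,  3., -3., 1.],
                              [ 3., -6.,  3., 0.],
                              [-3.,  0.,  3., 0.],
                              [ 1.,  4.,  1., 0.]])

def spline_output(u, q_span):
    u_vec = np.array([u**3, u**2, u, 1.0])  # Eq. (5) with P = 3
    return u_vec @ C_B @ q_span             # Eq. (3)

# partition of unity: constant control points reproduce the constant
s = spline_output(0.3, np.ones(4))
```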


Figure 1 Spline-based Hammerstein model, where P=3.

2.2 Spline-based Hammerstein Model

The spline-interpolated adaptive lookup table and FIR filter architecture is depicted in Figure 1, where x(k) is the input signal and y(k) is the output signal, given by

y(k) = w^T(k) s_i(k), (6)

where w(k) ∈ ℝ^(M×1) is the coefficient vector of the linear FIR filter,

w(k) = [w_0(k) w_1(k) … w_(M−1)(k)]^T, (7)

where M is the number of taps of the coefficient vector.

For estimating the unknown parameters, the error signal e(k) is computed from the adaptive linear filter w(k) and the spline control points q(k) as

e(k) = d(k) − y(k) = d(k) − w^T(k−1) s_i(k), (8)

where d(k) denotes the desired signal.

2.3 Proposed Orthogonal Gradient Sign Algorithm for Spline-based Hammerstein Adaptive Filtering

Estimating the weights w(k) and q(k) so as to minimise the error e(k) can be done with a gradient descent-based algorithm; the cost function minimises the squared error as [3, 11]

J(w,q) = (1/2) min_{w,q} {|e(k)|²}, (9)

where e(k) is given in (8).

Differentiating the cost function in (9) with respect to (w.r.t.) w(k) and q(k), we get

∇_w J = ∂J(w,q)/∂w(k) = −s_i(k) e(k), (10)
∇_q J = ∂J(w,q)/∂q(k) = −u^T(k) C w(k) e(k). (11)

Consequently, the proposed coefficient vector w(k) based on the sign version of the normalised orthogonal gradient adaptive (SNOGA) algorithm is given by

w(k) = w(k−1) + μ_w d_w(k), (12)

where μ_w is the step-size and d_w(k) is the direction vector of the linear filter w(k),

d_w(k) = λ_w(k) d_w(k−1) − g_w(k), (13)

where λ_w(k) is the forgetting factor for w(k).

The gradient vector g_w(k) of w(k) is accumulated from the partial derivative of the cost function in (10), with the error replaced by its sign, as

g_w(k) = λ_w(k) g_w(k−1) + ∂J(w,q)/∂w(k), (14)
g_w(k) = λ_w(k) g_w(k−1) − s_i(k) sgn{e(k)}, (15)

where sgn{·} is the sign operator.

Therefore, the proposed control points q(k) based on the sign version of NOGA are similarly updated by

q(k) = q(k−1) + μ_q d_q(k), (16)

where μ_q is the step-size and d_q(k) is the direction vector of the control points q(k),

d_q(k) = λ_q(k) d_q(k−1) − g_q(k), (17)

where λ_q(k) is the forgetting factor for q(k).

The gradient vector g_q(k) of q(k) is obtained similarly as

g_q(k) = λ_q(k) g_q(k−1) + ∂J(w,q)/∂q(k), (18)
g_q(k) = λ_q(k) g_q(k−1) − u^T(k) C w(k) sgn{e(k)}. (19)

As shown in [14, 15], the forgetting factors λ_w(k) of w(k) and λ_q(k) of q(k) rely on the orthogonal projection of the gradient vectors g_w(k−1), g_q(k−1) onto the previous direction vectors d_w(k−1), d_q(k−1) as

λ_w(k) = d_w^T(k−1) g_w(k−1) / (d_w^T(k−1) d_w(k−1)), (20)
λ_q(k) = d_q^T(k−1) g_q(k−1) / (d_q^T(k−1) d_q(k−1)). (21)
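The projection in Eqs. (20)–(21) has a simple geometric reading: with λ chosen this way, the new direction λ d(k−1) − g is exactly orthogonal to the previous direction. A small numerical check with illustrative random vectors:

```python
import numpy as np

# Sketch of Eq. (20): the forgetting factor is the projection coefficient of
# the gradient onto the previous direction, so lam*d_prev - g is orthogonal
# to d_prev. The vectors here are random illustrative values.
rng = np.random.default_rng(2)
d_prev = rng.standard_normal(7)
g = rng.standard_normal(7)

lam = (d_prev @ g) / (d_prev @ d_prev)  # Eq. (20)
d_new = lam * d_prev - g                # direction recursion, Eq. (13)
residual = d_prev @ d_new               # vanishes by construction
```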

According to the estimation of the memory model [1], the spline control points are independent of the linear filter w(k), so the memory is applied after the nonlinearity.

During the adaptation process, the adaptive linear filter w(k) and the spline control points q(k) are updated recursively in parallel, as summarised in Table 1.

Table 1 Proposed sign normalised orthogonal gradient adaptive algorithm for Hammerstein spline-based adaptive filtering (SNOGA-HSAF)

Initialise: w(0) = δ_w [1 0 … 0]^T, q(0) = [1 0 … 0]^T,
d_w(0) = d_q(0) = g_w(0) = g_q(0) = [1 0 … 0]^T, Δx = 0.2

For k = 1, 2, …
  u(k) = x(k)/Δx − ⌊x(k)/Δx⌋
  i(k) = ⌊x(k)/Δx⌋ + (Q−1)/2
  u(k) = [u^P(k) u^(P−1)(k) … u(k) 1]^T
  s_i(k) = u^T(k) C q_i(k)
  y(k) = w^T(k) s_i(k)
  e(k) = d(k) − w^T(k−1) s_i(k)
  λ_w(k) = d_w^T(k−1) g_w(k−1) / (d_w^T(k−1) d_w(k−1))
  λ_q(k) = d_q^T(k−1) g_q(k−1) / (d_q^T(k−1) d_q(k−1))
  g_w(k) = λ_w(k) g_w(k−1) − s_i(k) sgn{e(k)}
  d_w(k) = λ_w(k) d_w(k−1) − g_w(k)
  w(k) = w(k−1) + μ_w d_w(k)
  g_q(k) = λ_q(k) g_q(k−1) − u^T(k) C w(k) sgn{e(k)}
  d_q(k) = λ_q(k) d_q(k−1) − g_q(k)
  q(k) = q(k−1) + μ_q d_q(k)
end
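The SNOGA-HSAF recursion summarised above can be sketched end-to-end as below. This is a minimal illustrative implementation, not the authors' code: the filter sizes and step-sizes follow Section 4, but the input, the desired signal (a hypothetical linear target w_ref), the ramp initialisation of the control points and the simplified control-point gradient (restricted to the newest sample) are assumptions of this sketch, and the forgetting factors are clipped for numerical safety.

```python
import numpy as np

# Illustrative sketch of the SNOGA-HSAF recursion. Filter sizes and
# step-sizes follow Section 4; the data, the target system w_ref and the
# simplified q-gradient (only the newest sample) are sketch assumptions.
rng = np.random.default_rng(0)
M, Q, P, dx = 7, 23, 3, 0.2
mu_w, mu_q = 1.35e-3, 1.25e-3

C = (1.0 / 6.0) * np.array([[-1.,  3., -3., 1.],
                            [ 3., -6.,  3., 0.],
                            [-3.,  0.,  3., 0.],
                            [ 1.,  4.,  1., 0.]])   # B-spline matrix, Eq. (62)

w = np.zeros(M); w[0] = 1e-3          # w(0) = delta_w [1 0 ... 0]^T
q = np.linspace(-1.0, 1.0, Q)         # control points, initialised as a ramp
d_w, g_w = np.zeros(M), np.zeros(M)
d_q, g_q = np.zeros(P + 1), np.zeros(P + 1)
s_buf = np.zeros(M)                   # last M nonlinearity outputs
w_ref = rng.standard_normal(M)        # hypothetical unknown linear subsystem

errors = []
for k in range(2000):
    x = rng.uniform(-1.0, 1.0)
    z = x / dx
    u = z - np.floor(z)                              # Eq. (1)
    i = int(np.floor(z)) + (Q - 1) // 2              # Eq. (2)
    u_vec = np.array([u**3, u**2, u, 1.0])           # Eq. (5)
    s = u_vec @ C @ q[i:i + P + 1]                   # Eq. (3)
    s_buf = np.concatenate(([s], s_buf[:-1]))
    d_k = w_ref @ s_buf                              # toy desired signal
    e = d_k - w @ s_buf                              # Eq. (8)
    errors.append(e)

    # forgetting factors, Eqs. (20)-(21), clipped for numerical safety
    lam_w = np.clip((d_w @ g_w) / (d_w @ d_w + 1e-12), -1.0, 1.0)
    lam_q = np.clip((d_q @ g_q) / (d_q @ d_q + 1e-12), -1.0, 1.0)

    g_w = lam_w * g_w - s_buf * np.sign(e)           # Eq. (15)
    d_w = lam_w * d_w - g_w                          # Eq. (13)
    w = w + mu_w * d_w                               # Eq. (12)

    phi = (C.T @ u_vec) * w[0]                       # simplified q-gradient
    g_q = lam_q * g_q - phi * np.sign(e)             # Eq. (19), sketch form
    d_q = lam_q * d_q - g_q                          # Eq. (17)
    q[i:i + P + 1] = q[i:i + P + 1] + mu_q * d_q     # Eq. (16), current span
```

Note that only the four control points of the active span are touched per iteration, which is what keeps the per-sample cost of the nonlinear part independent of Q.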

3 Convergence Properties

In order to obtain optimal performance, minimising the filter error requires keeping the adaptive learning rate of the algorithm within a stable range.

3.1 Stability

Let us introduce the approximate form of the iterative orthogonal gradient-based sign algorithm for the adaptive FIR spline-based Hammerstein filter w(k) as

w(k) = w(k−1) + μ_w s_i(k) sgn{e(k)}, (22)

where sgn{e(k)} denotes the sign of the error.

The convergence property of the adaptive FIR filter w(k) in (22) can be analysed through the Taylor series expansion of the estimation error e(k+1) as

e(k+1) ≈ e(k) + [∂e(k)/∂w(k)]^T Δw(k), (23)

where [∂e(k)/∂w(k)]^T Δw(k) is the first-order term of the Taylor series expansion and Δw(k) is the increment of w(k),

Δw(k) = w(k) − w(k−1) ≈ μ_w s_i(k) sgn{e(k)}, (24)

and the a priori estimation error e(k) is defined as

e(k) = d(k) − w^T(k−1) s_i(k). (25)

Differentiating e(k) in (25) with respect to w(k), we arrive at

∂e(k)/∂w(k) = −s_i(k), (26)

and substituting (24) and (26) into (23), we get

e(k+1) ≈ e(k) − μ_w s_i^T(k) s_i(k) sgn{e(k)}. (27)

Taking the absolute value of both sides of (27), simple manipulation gives

|e(k+1)| ≈ |e(k)| |1 − μ_w s_i^T(k) s_i(k)|. (28)

Assuming that |e(k+1)| < |e(k)| in order to achieve convergence, we require

|1 − μ_w s_i^T(k) s_i(k)| < 1, (29)

which indicates the bound on the learning rate (step-size) μ_w of w(k) as

0 < μ_w < 1 / (s_i^T(k) s_i(k)). (30)

It is seen that all quantities in (30) are positive.
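As a quick numerical reading of the bound (30): for a given regressor s_i(k), any step-size below 1/(s_i^T s_i) keeps the contraction factor of (28) inside the unit interval. The regressor values below are made up for the example.

```python
import numpy as np

# Illustrative check of Eq. (30): step-sizes below 1/(s^T s) keep the factor
# |1 - mu_w * s^T s| of Eq. (28) strictly below one. The regressor values
# are invented for the example.
s = np.array([0.4, -0.2, 0.7, 0.1, -0.5, 0.3, 0.2])
energy = s @ s                 # s^T s = 1.08 here
bound = 1.0 / energy           # upper bound of Eq. (30)
factors = [abs(1.0 - mu * energy)
           for mu in (0.25 * bound, 0.5 * bound, 0.99 * bound)]
```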

Correspondingly, the approximate form of the adaptive control points q(k) under the sign orthogonal gradient algorithm is updated by

q(k) = q(k−1) + μ_q u^T(k) C w(k) sgn{e(k)}. (31)

So we can determine a bound on the choice of μ_q through the first-order Taylor series expansion of the error e(k) related to the adaptive control points q(k) as

e(k+1) ≈ e(k) + [∂e(k)/∂q(k)]^T Δq(k), (32)

where [∂e(k)/∂q(k)]^T Δq(k) is the first-order term and Δq(k) is the difference between the present and previous q(k) in (31),

Δq(k) = q(k) − q(k−1) ≈ μ_q u^T(k) C w(k) sgn{e(k)}. (33)

Substituting s_i(k) from (3) into (25), we have

e(k) = d(k) − w^T(k−1) {u^T(k) C q(k)}. (34)

Taking the derivative of e(k) in (34) with respect to q(k) gives

∂e(k)/∂q(k) = −u^T(k) C w(k−1). (35)

After simple manipulations, the estimation error can be obtained as

e(k+1) ≈ e(k) − μ_q Ψ^T(k) Ψ(k) sgn{e(k)}, (36)
Ψ(k) = u^T(k) C w(k−1). (37)

By requiring convergence of the error in (36), as in (28)–(29), we have

|1 − μ_q Ψ^T(k) Ψ(k)| < 1, (38)

which imposes a bound on the learning rate μ_q as

0 < μ_q < 1 / (Ψ^T(k) Ψ(k)). (39)

3.2 Mean Square Analysis

The purpose of this section is to derive the mean square error performance of the orthogonal gradient-based sign algorithm for spline-based Hammerstein filtering at steady state. The mean square analysis treats first the adaptive FIR linear filter w(k) and then the spline control points q(k).

The excess mean square error (EMSE) is considered through the following parameters. A posteriori error e_p(k) is imposed on the system error. The a posteriori error e_pw(k) is determined when the adaptive FIR linear filter w(k) is updated while the spline control points q(k) are fixed. Similarly, the a posteriori error e_pq(k) is imposed on the adaptive control points q(k) while the linear filter w(k) is fixed.

For the mathematical derivation, the following assumptions are introduced.

Assumption 1: The estimated error vector ζ_w(k) involved with the adaptive linear filter w(k) satisfies the independent and identically distributed (i.i.d.) condition with finite variance and zero mean.

The estimated error vector ζ_w(k) is updated with the adaptive linear filter w(k) as

ζ_w(k+1) = ζ_w(k) − Δw(k), (40)

and Δw(k) is given by

Δw(k) ≈ μ_w s_i(k) sgn{e_pw(k)}, (41)

where e_pw(k) denotes the a posteriori error related to w(k).

Assumption 2: We assume that

E{‖ζ_w(k+1)‖²} ≈ E{‖ζ_w(k)‖²}, ∀k.

Following Assumption 2, we can consider the error energies in (40) by squaring and taking expectations on both sides of (40):

‖ζ_w(k+1)‖² = ‖ζ_w(k)‖² − 2 μ_w ζ_w^T(k) s_i(k) sgn{e_pw(k)} + μ_w² ‖s_i(k)‖² |e_pw(k)|², (42)

and we obtain

2 s_i^T(k) ζ_w(k) sgn{e_pw(k)} = μ_w s_i^T(k) s_i(k) |e_pw(k)|². (43)

We assume that the a posteriori error e_pw(k) associated with the estimated error vector ζ_w(k) of the adaptive FIR linear filter w(k) is

e_pw(k) = ξ_w(k) + ζ_w(k), (44)

where

ξ_w(k) = s_i^T(k) ζ_w(k). (45)

Taking the expectation of the left side of (43) and using (44), we have

E{ξ_w(k) sgn{e_pw(k)}} = E{ξ_w(k) sgn{ξ_w(k) + ζ_w(k)}} ≈ E{ξ_w²(k)}, (46)

where ‖ζ_w(k)‖ ≪ 1.

Note that the error at steady state is very small, so that

E{|e_pw(k)|²} = E{|ξ_w(k) + ζ_w(k)|²} ≈ E{|ξ_w²(k) + 2 ξ_w(k) ζ_w(k) + ζ_w²(k)|}. (47)

Substituting (45) and (47) into (43), we have

E{ξ_w²(k)} = [μ_w s_i^T(k) s_i(k) / (2 − μ_w s_i^T(k) s_i(k))] E{|ζ_w²(k) + 2 ξ_w(k) ζ_w(k)|}. (48)

Therefore, the EMSE ε_w^ex of the adaptive FIR linear filter w(k) is described by

ε_w^ex = E{ξ_w²(k)} ≈ (1/2) μ_w s_i^T(k) s_i(k) E{ν_w}, (49)

where E{ν_w} = E{|ζ_w²(k) + 2 ξ_w(k) ζ_w(k)|} and μ_w ≪ 1.
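Equation (49) says the steady-state excess MSE of the linear part grows linearly with the step-size μ_w, the usual trade-off between convergence speed and misadjustment. A small numerical illustration, with made-up values for the regressor energy and E{ν_w}:

```python
# Illustrative reading of Eq. (49): the EMSE of the linear part scales
# linearly with mu_w. The regressor energy s^T s and E{nu_w} are made-up
# values, not measured quantities from the paper.
def emse_w(mu_w, s_energy, nu_w):
    return 0.5 * mu_w * s_energy * nu_w   # Eq. (49)

small = emse_w(1e-3, 1.08, 0.1)
large = emse_w(2e-3, 1.08, 0.1)           # doubling mu_w doubles the EMSE
```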

Assumption 3: The estimated error vector ζ_q(k) associated with the spline control points q(k) satisfies the i.i.d. condition with finite variance and zero mean.

In a similar manner, the estimated error vector ζ_q(k) associated with the spline control points q(k) is determined by

ζ_q(k+1) = ζ_q(k) − Δq(k), (50)

and Δq(k) is given by

Δq(k) ≈ μ_q ϕ(k) sgn{e_pq(k)}, (51)

where e_pq(k) denotes the a posteriori error related to q(k) and

ϕ(k) = u^T(k) C w(k). (52)

Assumption 4: We consider that

E{‖ζ_q(k+1)‖²} ≈ E{‖ζ_q(k)‖²}, ∀k.

Following Assumption 4, the expectation of the error energies in (50) is expressed as

‖ζ_q(k+1)‖² = ‖ζ_q(k)‖² − 2 μ_q ζ_q^T(k) ϕ(k) sgn{e_pq(k)} + μ_q² ‖ϕ(k)‖² |e_pq(k)|², (53)

and we obtain

2 ζ_q^T(k) ϕ(k) sgn{e_pq(k)} = μ_q ϕ^T(k) ϕ(k) |e_pq(k)|², (54)

where the a posteriori error e_pq(k) related to the estimated error vector ζ_q(k) of the spline control points q(k) is

e_pq(k) = ξ_q(k) + ζ_q(k), (55)

where

ξ_q(k) = ζ_q^T(k) ϕ(k). (56)

In order to approximate the error, we take the expectation of the left side of (54) using (55):

E{ξ_q(k) sgn{e_pq(k)}} = E{ξ_q(k) sgn{ξ_q(k) + ζ_q(k)}} ≈ E{ξ_q²(k)}, (57)

where ‖ζ_q(k)‖ ≪ 1.

Assuming the error is very small at steady state, we obtain

E{|e_pq(k)|²} = E{|ξ_q(k) + ζ_q(k)|²} ≈ E{|ξ_q²(k) + 2 ξ_q(k) ζ_q(k) + ζ_q²(k)|}. (58)

Substituting (56) and (58) into (54), we obtain

E{ξ_q²(k)} = [μ_q ϕ^T(k) ϕ(k) / (2 − μ_q ϕ^T(k) ϕ(k))] E{|ζ_q²(k) + 2 ξ_q(k) ζ_q(k)|}. (59)

Therefore, the EMSE ε_q^ex of the spline control points q(k) can be obtained as

ε_q^ex = E{ξ_q²(k)} ≈ (1/2) μ_q ϕ^T(k) ϕ(k) E{ν_q}, (60)

where E{ν_q} = E{|ζ_q²(k) + 2 ξ_q(k) ζ_q(k)|} and μ_q ≪ 1.

4 Experiment and Simulation Results

For the simulation, the coloured input signal is generated from white Gaussian noise as [11]

x(k) = α x(k−1) + √(1 − α²) ε(k), (61)

where α is the correlation level between adjacent samples, 0 < α < 1, and ε(k) is white Gaussian noise with unit variance. The experiments use α = 0.10, 0.65 and a signal-to-noise ratio of 35 dB. The spline basis matrix C_B, called the 'B-spline matrix', is used as follows [2]:

C_B = (1/6) [ −1   3  −3   1
               3  −6   3   0
              −3   0   3   0
               1   4   1   0 ]. (62)
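The coloured input of Eq. (61) is an AR(1) process whose √(1−α²) scaling keeps the output variance equal to that of the driving noise. A sketch with the paper's α = 0.65 and an illustrative sequence length:

```python
import numpy as np

# Sketch of Eq. (61): x(k) = alpha*x(k-1) + sqrt(1 - alpha^2)*eps(k),
# driven by unit-variance white Gaussian noise. alpha = 0.65 follows the
# experiments; the sequence length is illustrative.
def coloured_input(n, alpha, rng):
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = alpha * x[k - 1] + np.sqrt(1.0 - alpha**2) * eps[k]
    return x

rng = np.random.default_rng(1)
x = coloured_input(50_000, 0.65, rng)
# stationary variance stays near 1; lag-1 correlation stays near alpha
```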

For the experiments, short ECG input signals recorded from a 25-year-old male were used to study the motion artifact effect on ECG signals [16, 17]. The ECG signals from the motion artifact contaminated ECG database [18] consist of two types of recordings from a male performing different physical activities: standing and walking. The recording information is as follows: the sampling rate is 500 Hz with 16-bit resolution. A motion artifact is a kind of noise arising from motion of the electrode placed on the skin, which can produce large-amplitude signals during the ECG test.

These experiments demonstrate the performance of the proposed sign normalised orthogonal gradient algorithm (SNOGA) for Hammerstein spline-based adaptive filtering (HSAF) against white Gaussian noise on the motion artifact contaminated ECG input signals, compared with the LMS algorithm [6] and the NLMS algorithm [19].


Figure 2 Motion artifacts on ECG input signal from a healthy male standing [18].


Figure 3 MSE trajectories of SNOGA-HSAF compared with NLMS-HSAF [19] and LMS-HSAF [6] using the ECG input shown in Figure 2 and x(k) = αx(k−1) + √(1−α²)ε(k), where α = 0.10, C_B in (62) and SNR = 35 dB.


Figure 4 MSE trajectories of SNOGA-HSAF, NLMS-HSAF [19] and LMS-HSAF [6] using the ECG input shown in Figure 2 and x(k) = αx(k−1) + √(1−α²)ε(k), where α = 0.65, C_B in (62) and SNR = 35 dB.


Figure 5 Motion artifacts on ECG input signal from a healthy male walking [18].


Figure 6 MSE trends of SNOGA-HSAF, NLMS-HSAF [19] and LMS-HSAF [6] using the ECG input shown in Figure 5 and x(k) = αx(k−1) + √(1−α²)ε(k), where α = 0.10, C_B in (62) and SNR = 35 dB.


Figure 7 MSE trends of SNOGA-HSAF, NLMS-HSAF [19] and LMS-HSAF [6] using the ECG input shown in Figure 5 and x(k) = αx(k−1) + √(1−α²)ε(k), where α = 0.65, C_B in (62) and SNR = 35 dB.

The initial parameters of the HSAF adaptive linear FIR filter are δ_w = 1×10⁻³ and a filter length of M = 7 taps. The other initial parameters of the proposed SNOGA-HSAF algorithm are as follows: μ_w = 1.35×10⁻³, μ_q = 1.25×10⁻³, λ_w(0) = λ_q(0) = 1.25×10⁻⁴.

For the first experiment, the contaminated ECG input signal recorded when standing is shown in Figure 2. MSE learning curves are presented for the different values α = 0.10, 0.65 and SNR = 35 dB. Figures 3 and 4 depict the MSE curves of SNOGA-HSAF, NLMS-HSAF and LMS-HSAF at α = 0.10, 0.65. It is seen that the MSE trajectories of the proposed SNOGA-HSAF and NLMS-HSAF algorithms approach the steady state quickly, and the proposed SNOGA-HSAF algorithm converges rapidly in comparison with the conventional LMS-HSAF algorithm.

For the second experiment, the contaminated ECG input signal recorded when walking is shown in Figure 5. Figures 6 and 7 depict the MSE learning curves of the proposed SNOGA-HSAF, NLMS-HSAF and LMS-HSAF at α = 0.10, 0.65. It is confirmed that the MSE curves of the proposed SNOGA-HSAF and NLMS-HSAF algorithms approach the steady state quickly, and the proposed SNOGA-HSAF algorithm converges rapidly compared with the conventional LMS-HSAF algorithm.

5 Conclusion

A spline-based Hammerstein adaptive filter based on the sign normalised orthogonal gradient adaptive algorithm (SNOGA-HSAF) has been proposed, and its derivation for Hammerstein spline-based adaptive filtering has been described. The minimum mean square error is used as the measurement model, and the proposed SNOGA-HSAF algorithm has been investigated using the MMSE criterion. The performance analysis of the proposed algorithm has been proven in the form of the excess mean square error. The simulation experiments use motion artifact ECG input signals recorded when standing and walking. Experimental results show that the proposed SNOGA algorithm consistently exceeds the standard LMS and NLMS algorithms based on the HSAF approach.

Our future research will enhance conventional adaptive filters for real-time dynamic systems. The Hammerstein model is of growing interest in adaptive signal processing and data analysis, and this study is expected to be useful for adaptive filtering in smart applications.

References

[1] S. Kalluri, G.R. Arce. General Class of Nonlinear Normalized Adaptive Filtering Algorithms. IEEE Transactions on Signal Processing, 48(8): 2262–2272, 1999.

[2] M. Scarpiniti, D. Comminiello, R. Parisi and A. Uncini. Nonlinear Spline Adaptive Filtering. Signal Processing, 93(4): 772–783, 2013.

[3] M. Scarpiniti, D. Comminiello, R. Parisi and A. Uncini. Spline Adaptive Filters: Theory and Applications, Adaptive Learning Methods for Nonlinear System Modeling, ELSEVIER, 47–69, 2018.

[4] S. Guan and Z. Li. Normalised Spline Adaptive Filtering Algorithm for Nonlinear System Identification. Neural Processing Letters, 46(2), 595–607, 2017.

[5] M. Scarpiniti, D. Comminiello and A. Uncini. Convex Combination of Spline Adaptive Filters. In Proceedings of IEEE European Signal Processing Conference (EUSIPCO), 2019.

[6] M. Scarpiniti, D. Comminiello, R. Parisi and A. Uncini. Nonlinear System Identification using IIR Spline Adaptive Filters. Signal Processing, 108, 30–35, 2015.

[7] M. Scarpiniti, D. Comminiello, R. Parisi and A. Uncini. Novel Cascade Spline Architectures for the Identification of Nonlinear Systems. IEEE Transactions on Circuits and Systems I: Regular Papers, 62(7), 1825–1835, July 2015.

[8] C. Liu and Z. Zhang. Set-membership Normalised Least M-estimate Spline Adaptive Filtering Algorithm in Impulsive Noise. Electronics Letters, 54(6), 393–395, 2018.

[9] S. Sitjongsataporn and T. Wiangtong. Spline Adaptive Filtering based on Normalised Orthogonal Gradient Adaptive Algorithm. In Proceedings of IEEE International Conference on Engineering, Applied Sciences and Technology (ICEAST), pp. 575–578, 2019.

[10] S. Guarnieri, F. Piazza and A. Uncini. Multilayer Feedforward Networks with Adaptive Spline Activation Function. IEEE Transactions on Neural Network, 10(3): 672–683, 1999.

[11] M. Scarpiniti, D. Comminiello, R. Parisi and A. Uncini. Hammerstein Uniform Cubic Spline Adaptive Filtering: Learning and Convergence Properties. Signal Processing, 100: 112–123, 2014.

[12] P. P. Campo, D. Korpi, L. Anttila and M. Valkama. Nonlinear Digital Cancellation in Full-duplex Devices using Spline-based Hammerstein Model. In Proceedings of IEEE Globecom Workshops (GC Wkshps), 2018.

[13] S. Prongnuch and S. Sitjongsataporn. Stability and Steady-State Performance of Hammerstein Spline Adaptive Filter Based on Stochastic Gradient Algorithm. International Journal of Intelligent Engineering and Systems (IJIES), 13(3), 112–123, 2020.

[14] P.S.R. Diniz. Adaptive Filtering: Algorithms and Practical Implementation. Springer, 2008.

[15] S. Sitjongsataporn. Convergence Analysis of Greedy Normalised Orthogonal Gradient Adaptive Algorithm. In Proceedings of IEEE International Symposium on Communications and Information Technologies (ISCIT), Bangkok, Thailand, pp. 345–348, 2018.

[16] A. L. Goldberger, L. Amaral, L Glass, J. M. Hausdorff, P. Ivanov, R. Mark, J. Mietus, G. Moody, C. Peng, H. Stanley. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation, 101(23): e215–e220, 2000.

[17] V. Behravan, N. E. Glover, R. Farry, M. Shoaib, P. Y. Chiang. Rate-Adaptive Compressed-Sensing and Sparsity Variance of Biomedical Signals. In Proceedings of IEEE International Conference on Wearable and Implantable Body Sensor Networks (BSN), 2015.

[18] PhysioBank ATM. Retrieved from https://archive.physionet.org/cgi-bin/atm/ATM [Online]. Accessed on March 22, 2021.

[19] S. Prongnuch, S. Sitjongsataporn, T. Wiangtong. Hammerstein Spline Adaptive Filtering based on Normalised Least Mean Square Algorithm. In Proceedings of IEEE International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Taipei, Taiwan, 2019.

Biographies


Suchada Sitjongsataporn received the B.Eng. (first-class honours) and D.Eng. degrees in Electronic Engineering from Mahanakorn University of Technology, Bangkok, Thailand, in 2002 and 2009. She has worked as a lecturer at the Department of Electronic Engineering, Mahanakorn University of Technology since 2002. Currently, she is an Associate Professor and the Associate Dean for Research at the Faculty of Engineering and Technology, Mahanakorn University of Technology. Her research interests are mathematical and statistical models in the areas of adaptive signal processing for communications, networking, embedded systems, and image and video processing.


Sethakarn Prongnuch received his B.Eng. degree in Computer Engineering from the Rajamangala University of Technology Phra Nakhon, Bangkok, Thailand, in 2011, and the M.Eng. and D.Eng. degrees in Computer Engineering from the Mahanakorn University of Technology, Bangkok, Thailand, in 2013 and 2019, respectively. He has worked as a lecturer at the Department of Robotics Engineering, Faculty of Industrial Technology, Suan Sunandha Rajabhat University, Bangkok, Thailand, since 2013. His research interests include computer architectures and systems, embedded systems, and heterogeneous system architecture.
