1 Introduction

In modern communication systems, such as digital mobile systems and digital TV systems, the transmitted symbols are distorted by inter-symbol interference (ISI) [21] caused by multipath propagation [17]. Several equalization algorithms have been developed to compensate for the ISI with the assistance of training sequences [10]. In contrast to these training-based approaches, blind equalization algorithms [20, 29] eliminate or reduce the ISI introduced by nonideal transmission channels without requiring training sequences. Among these blind algorithms [6–12], the constant modulus algorithm (CMA) [6, 16], the multi-modulus algorithm (MMA) [13], and the soft decision-directed algorithm (SDDA) [12] are the best-known representatives. Many improved versions [1, 7–27] of these three approaches have since been developed to further improve the equalization performance [32].

A traditional approach to blind equalizer design assumes that there is no impulsive noise in the physical channel. However, experimental measurements have shown that the ambient noise in many physical channels is impulsive. For example, man-made noise [7], low-frequency atmospheric noise, and switching transients all exhibit a typical impulsive characteristic. More specifically, unlike Gaussian noise, impulsive noise has a heavy-tailed probability density function (PDF). Three PDFs are typically used to model impulsive noise: the Gaussian mixture model (GMM) [6], the generalized Gaussian distribution (GGD) [16], and the \(\alpha \)-stable distribution [12, 13]. Unlike GMM and GGD noise, \(\alpha \)-stable noise has the important characteristic that its moments of order higher than \(\alpha \) are infinite, and this is the case considered in this study. Unfortunately, the performance of most existing blind equalization algorithms degrades significantly under such noise because they are derived from second-order statistics (SOS), which are infinite under \(\alpha \)-stable noise. In other words, the iterative procedure of most existing SOS-based algorithms diverges because the \(\alpha \)-stable noise causes numerical overflow.

In addition to the SOS, a statistic of order higher than 2 is called a higher-order statistic (HOS), a statistic of order lower than 2 is called a lower-order statistic (LOS), and an LOS of fractional order is called a fractional LOS (FLOS). Compared with SOS and HOS, the FLOS is less affected by impulsive noise. Therefore, to improve equalization performance, some extended and hybrid CMAs based on FLOS have been proposed in recent years. In [24], a robust CMA is formulated based on FLOS (FLOS_CMA) to handle impulsive noise. However, the residual ISI of [24] may not be sufficiently low for the system to achieve satisfactory performance. To reduce the residual ISI of [24], a concurrent structure is designed in [15], where a decision-directed equalizer cooperates with the FLOS_CMA equalizer under the control of a nonlinear link. Moreover, the normalization method [31] has been applied to equalization, where the gradient of the cost function is normalized by the samples and an iterative algorithm is designed. In [28], the influence of impulsive noise on the stepsize of CMA is considered and mitigated by a sigmoid function.

Beyond the CMAs, some recent works related to MMA and SDDA have been proposed. In [14], the problem of blind source separation and equalization for multi-input multi-output (MIMO) [30] systems under heavy-tailed noise is investigated, and an improved MMA based on FLOS is proposed. The MMA under impulsive noise is also considered in [13], where a modified iterative algorithm with an approximately quadratic convergence rate is designed to promptly update the equalizer taps. Moreover, a robust SDDA is derived by minimizing the constellation matching error [12], and it can effectively compensate for the ISI and suppress the influence of impulsive noise. However, the smaller the value of \(\alpha \), the larger the amplitude of the impulsive noise, and the order of the FLOS used by the above algorithms may exceed \(\alpha \) under large impulsive noise. As a result, the performance of [17], [28], and [29] degrades due to numerical overflow in the case of large impulsive noise. Therefore, a blind equalizer with a higher impulsive noise suppression ability is needed to avoid numerical overflow under large impulsive noise.

It is important to notice that two facts about the above equalizers are revealed in [1, 19–27]: 1) the random variable function (RVF) of FLOS is a power function, which is essentially a nonlinear function; 2) the normalization method is an efficient way to reduce the influence of impulsive noise. According to the first fact, a more appropriate RVF can be selected to improve the blind equalizer. Meanwhile, the normalization technique can cooperate with an equalizer based on the new RVF to further reduce the influence of large impulsive noise.

Inspired by the above two facts, a robust equalizer is developed in this paper. The main contributions are summarized as follows. Firstly, a new LOS that is less affected by impulsive noise is constructed based on the logarithm function. Secondly, a new cost function is formulated by minimizing the mean square error related to the new LOS, and an effective equalization algorithm is derived to compensate for the ISI. Thirdly, to further enhance the robustness of the proposed algorithm, the iterative update is normalized by the received samples. Finally, numerical simulation results are provided to demonstrate the superior performance of the proposed algorithm under both Gaussian and impulsive noise.

The rest of this paper is organized as follows. Section 2 briefly describes the background of the blind equalization technique. The classical CMA is reviewed in Sect. 3. In Sect. 4, a robust equalization algorithm is proposed based on the new LOS. The simulation results are presented in Sect. 5, the discussion is given in Sect. 6, and Sect. 7 is devoted to the conclusions.

2 Problem Formulation

It is assumed that the practical communication system contains a linear time-invariant channel. Hence, the received signal at symbol index n is formulated as:

$$\begin{aligned} x\left( n \right) = \sum \limits _{i = 0}^{L - 1} {h\left( i \right) s\left( {n - i} \right) } + v\left( n \right) \ \end{aligned}$$
(1)

where \(s\left( n \right) \) represents the transmitted symbol sequence, which is assumed to be independent and identically distributed and takes values from the symbol set S, \(\mathbf{{h}} = {\left[ {{h_0},{h_1}, \ldots ,{h_{L - 1}}} \right] ^T}\) is the channel impulse response, \({\left( \bullet \right) ^T}\) denotes the transpose, L represents the order of the channel, and \(v\left( n \right) \) denotes the impulsive noise. In some recent works [13, 14], as in this study, the \(\alpha \)-stable distribution is employed to model the impulsive noise. The \(\alpha \)-stable distribution \({S_\alpha }\left( {\beta ,\gamma ,\delta } \right) \) is determined by four parameters: \(\alpha \) is the characteristic exponent that determines the tail of the PDF (the smaller the value of \(\alpha \), the larger the impulsive amplitude of the noise), \(\beta \in \left( { - 1,1} \right) \) is the skewness that represents the asymmetry about the distribution center, \(\gamma \in \left( {0, + \infty } \right) \) is the dispersion coefficient that indicates the dispersion level of the distribution, and \(\delta \) is the location parameter that denotes the mean of the distribution.
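As a rough illustration of this signal model, the sketch below draws symmetric (\(\beta = 0\), \(\delta = 0\)) \(\alpha \)-stable noise with the Chambers–Mallows–Stuck method and applies Eq. (1). The function names are illustrative and not taken from the paper.

```python
import numpy as np

def symmetric_alpha_stable(alpha, gamma, size, rng):
    # Chambers-Mallows-Stuck sampler for S_alpha(0, gamma, 0):
    # V ~ Uniform(-pi/2, pi/2), W ~ Exponential(1).
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 2.0:
        X = 2.0 * np.sqrt(W) * np.sin(V)  # reduces to Gaussian (variance 2*gamma)
    else:
        X = (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
             * (np.cos(V - alpha * V) / W) ** ((1.0 - alpha) / alpha))
    return gamma ** (1.0 / alpha) * X     # scale by the dispersion

def channel_output(s, h, alpha, gamma, rng):
    # Eq. (1): x(n) = sum_i h(i) s(n - i) + v(n); the complex noise is
    # built from two independent real alpha-stable components.
    x = np.convolve(s, h)[: len(s)]
    v = (symmetric_alpha_stable(alpha, gamma, len(s), rng)
         + 1j * symmetric_alpha_stable(alpha, gamma, len(s), rng))
    return x + v
```

For \(\alpha < 2\) the empirical variance of such samples grows without bound as the sample size increases, which is exactly the behavior that breaks SOS-based equalizers.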

To compensate for the ISI caused by the multipath channel, the output of the equalizer is given by:

$$\begin{aligned} y\left( n \right) = \sum \limits _{k = 0}^{K - 1} {{w^*}\left( k \right) x\left( {n - k} \right) } = \mathbf{{w}}_{}^H\mathbf{{x}}\left( n \right) \end{aligned}$$
(2)

where \({\left( \bullet \right) ^*}\) and \(\left( \bullet \right) _{}^H\) represent the complex conjugation and Hermitian transpose, respectively, K is the order of the equalizer, \(\mathbf{{w}}\) denotes the weight vector of the equalizer, and the sliding-window vector of the received data is denoted by \(\mathbf{{x}}\left( n \right) \).
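A minimal sketch of Eq. (2), assuming zero-padding before the first sample (the paper does not specify the boundary handling):

```python
import numpy as np

def equalizer_output(w, x, n):
    # Eq. (2): y(n) = w^H x(n) with the sliding window
    # x(n) = [x(n), x(n-1), ..., x(n-K+1)]^T.
    K = len(w)
    window = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(K)],
                      dtype=complex)
    return np.vdot(w, window)  # np.vdot conjugates its first argument: w^H x
```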

3 The Classical CMA Algorithm

The CMA, which is based on the minimum mean square error (MMSE) criterion and offers high simplicity and stability, has seen extensive development and application under Gaussian noise. The cost function of the CMA is given by:

$$\begin{aligned} J_C^{} = E\left[ {{{\left( {{{\left| {y\left( n \right) } \right| }^2} - {R_2}} \right) }^2}} \right] \end{aligned}$$
(3)

where \(E\left( \bullet \right) \) represents the expectation operator, and \({R_2} = {{E\left[ {{{\left| {s\left( n \right) } \right| }^4}} \right] } \big / {E\left[ {{{\left| {s\left( n \right) } \right| }^2}} \right] }} \) is a dispersion constant determined by the second- and fourth-order moments of the transmitted symbol sequence. By minimizing Eq. (3) through the gradient descent approach (GDA), the iterative formula of CMA is given below:

$$\begin{aligned} \mathbf{{w}}_C^{}\left( {n + 1} \right) = \mathbf{{w}}_C^{}\left( n \right) + \left( { - \mu _{C} \nabla J_C^{}} \right) \end{aligned}$$
(4)

where \({\mu _C}\) denotes the stepsize of CMA and is a small positive constant, and \(\nabla J_C^{}\) is the gradient of \(J_C^{}\) with respect to \(\mathbf{{w}}_C^H\left( n \right) \).

Let:

$$\begin{aligned} r_C^{}\left( n \right) = {\left| {y\left( n \right) } \right| ^2} - {R_2} \end{aligned}$$
(5)

with

$$\begin{aligned} k_C^{}\left( n \right) = {y^*}\left( n \right) \end{aligned}$$
(6)

Substituting Eqs. (5) and (6) into Eq. (4) yields:

$$\begin{aligned} \begin{array}{ll} \mathbf{{w}}_C^{}\left( {n + 1} \right) &{}= \mathbf{{w}}_C^{}\left( n \right) - {\mu _C}{y^*}\left( n \right) \left( {{{\left| {y\left( n \right) } \right| }^2} - {R_2}} \right) \mathbf{{x}}\left( n \right) \\ \mathrm{{ }} &{}= \mathbf{{w}}_C^{}\left( n \right) - {\mu _C}{r_C}\left( n \right) {k_C}\left( n \right) \mathbf{{x}}\left( n \right) \\ \mathrm{{ }} &{}= \mathbf{{w}}_C^{}\left( n \right) - {\mu _C}{e_C}\left( n \right) \mathbf{{x}}\left( n \right) \end{array} \end{aligned}$$
(7)

where \( {e_C}\left( n \right) = {r_C}\left( n \right) {k_C}\left( n \right) \).
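The update of Eq. (7) can be sketched as a single stochastic-gradient step (illustrative function name; constant factors are absorbed into the stepsize as in the text):

```python
import numpy as np

def cma_update(w, x_win, R2, mu):
    # One stochastic-gradient step of the classical CMA, Eq. (7).
    y = np.vdot(w, x_win)          # equalizer output, Eq. (2)
    r = abs(y) ** 2 - R2           # modulus error r_C(n), Eq. (5)
    e = r * np.conj(y)             # e_C(n) = r_C(n) k_C(n)
    return w - mu * e * x_win
```

When the output modulus already matches \(R_2\), the error term vanishes and the weights stay put; a single impulse-hit sample, in contrast, scales the step by roughly \(|y|^3 \|\mathbf{x}\|\), which is the overflow mechanism discussed below.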

It is apparent that the CMA is derived based on SOS. From the perspective of statistics, SOS can completely represent the statistical characteristics of samples that obey the complex normal distribution. Therefore, several SOS-based equalization algorithms have been developed under the assumption of Gaussian noise. However, SOS and HOS are infinite under \(\alpha \)-stable noise, so these algorithms fail to converge when the samples are contaminated by \(\alpha \)-stable noise.

4 Proposed Algorithm

To achieve higher equalization performance, several cost functions have been constructed based on the FLOS. However, the FLOS of order higher than \(\alpha \) tends to infinity under \(\alpha \)-stable noise. Therefore, the performance of FLOS-based equalization algorithms relies heavily on the accuracy of the estimated characteristic exponent. Meanwhile, there is no efficient way to determine the characteristic exponent \(\alpha \) in a real scenario. Hence, it is desirable to construct a new LOS that is almost independent of the accuracy of the estimated characteristic exponent.

It is acknowledged that the logarithm function has a similar trend to the power function, which is the RVF of FLOS and SOS. Fortunately, the value of \(\ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) \) (\(\kappa \) and p are positive constants, and their values are fixed at 1 for a fair comparison) is smaller than that of \(\left| {y\left( n \right) } \right| \) and \({\left| {y\left( n \right) } \right| ^2}\) when the samples are contaminated by large \(\alpha \)-stable noise. This suggests that large \(\alpha \)-stable noise influences \(\ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) \) less than it influences \(\left| {y\left( n \right) } \right| \) and \({\left| {y\left( n \right) } \right| ^2}\). Meanwhile, it can be inferred that \(\ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) \) is less sensitive to the accuracy of the estimated characteristic exponent. Furthermore, the values of \(\ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) - \ln \left( {{R_p} + \kappa } \right) \) (where \( {R_p} = {{E\left[ {{{\left| {s\left( n \right) } \right| }^{2p}}} \right] } \big / {E\left[ {{{\left| {s\left( n \right) } \right| }^p}} \right] }}\) ), \(\left| {y\left( n \right) } \right| - {R_1}\), and \({\left| {y\left( n \right) } \right| ^2} - {R_2}\) are all very small under small \(\alpha \)-stable noise. This indicates that an algorithm based on the logarithm function has the same good steady-state performance as CMA and FLOS_CMA under small \(\alpha \)-stable noise. Therefore, to achieve better equalization performance, the logarithm function instead of the power function is employed to construct the LOS.
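The claim about the logarithmic RVF can be checked numerically: for a clean output (\(|y| = 1\)) and an impulse-hit output (\(|y| = 10^4\)), with \(p = \kappa = 1\), the logarithmic statistic grows only from about 0.69 to about 9.2, while \(|y|^2\) grows to \(10^8\):

```python
import numpy as np

p, kappa = 1, 1                          # both fixed at 1 as in the text
for mag in (1.0, 1e4):                   # clean sample vs. impulse-hit sample
    log_stat = np.log(mag ** p + kappa)  # ln(|y|^p + kappa)
    print(f"|y| = {mag:g}: ln(|y|^p + kappa) = {log_stat:.2f}, |y|^2 = {mag ** 2:g}")
```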

Referring to the logarithm function and the MMSE criterion, the cost function is designed as:

$$\begin{aligned} {J_L} = E\left[ {{{\left( {\ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) - \ln \left( {{R_p} + \kappa } \right) } \right) }^2}} \right] \end{aligned}$$
(8)

According to the GDA, the iteration formula of Eq. (8) is given by:

$$\begin{aligned} \mathbf{{w}}_L^{}\left( {n + 1} \right) = \mathbf{{w}}_L^{}\left( n \right) + \left( { - {\mu _L}\nabla {J_L}} \right) \end{aligned}$$
(9)

where \(\mu _L^{}\) represents the stepsize of the proposed algorithm. The gradient of Eq. (8) with respect to \(\mathbf{{w}}_L^H\left( n \right) \) is denoted by \(\nabla {J_L}\), which is expressed as:

$$\begin{aligned} \begin{array}{ll} \nabla {J_L} &{}= \frac{{\partial {J_L}}}{{\partial \mathbf{{w}}_L^H\left( n \right) }}\\ \mathrm{{ }}&{}= \frac{{\ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) - \ln \left( {{R_p} + \kappa } \right) }}{{{{\left| {y\left( n \right) } \right| }^p} + \kappa }}{y^*}\left( n \right) {\left| {y\left( n \right) } \right| ^{p - 2}}{} \mathbf{{x}}\left( n \right) \end{array} \end{aligned}$$
(10)

Let:

$$\begin{aligned} {k_L}\left( n \right) = \frac{{{y^*}\left( n \right) {{\left| {y\left( n \right) } \right| }^{p - 2}}}}{{{{\left| {y\left( n \right) } \right| }^p} + \kappa }} \end{aligned}$$
(11)

with

$$\begin{aligned} {r_L}\left( n \right) = \ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) - \ln \left( {{R_p} + \kappa } \right) \end{aligned}$$
(12)

Substituting Eqs. (10)–(12) into Eq. (9) yields:

$$\begin{aligned} \begin{array}{ll} \mathbf{{w}}_L^{}\left( {n + 1} \right) &{}= \mathbf{{w}}_L^{}\left( n \right) - {\mu _L}\frac{{\ln \left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) - \ln \left( {{R_p} + \kappa } \right) }}{{{{\left| {y\left( n \right) } \right| }^p} + \kappa }}{y^*}\left( n \right) {\left| {y\left( n \right) } \right| ^{p - 2}}{} \mathbf{{x}}\left( n \right) \\ \mathrm{{ }} &{}= \mathbf{{w}}_L^{}\left( n \right) - {\mu _L}{r_L}\left( n \right) {k_L}\left( n \right) \mathbf{{x}}\left( n \right) \\ \mathrm{{ }} &{}= \mathbf{{w}}_L^{}\left( n \right) - {\mu _L}{e_L}\left( n \right) \mathbf{{x}}\left( n \right) \end{array} \end{aligned}$$
(13)

where \({e_L}\left( n \right) = {r_L}\left( n \right) {k_L}\left( n \right) \).

It is readily seen that the modulus of the equalizer output is quite large when there is large \(\alpha \)-stable noise in the received data, and Eq. (3) further magnifies the impact of this noise. Moreover, \({e_C}\left( n \right) \) equals the product of Eqs. (5) and (6), whose modulus is then extremely large. Consequently, large \(\alpha \)-stable noise leads to numerical overflow for the classical CMA.

Unlike the classical CMA, Eq. (13) introduces an adaptively normalized factor \( {{{{\left| {y\left( n \right) } \right| }^{p - 2}}} \big / {\left( {{{\left| {y\left( n \right) } \right| }^p} + \kappa } \right) }} \) (usually, the value of p is no larger than 2, so this factor approximates 0 when the samples are contaminated by large impulsive noise), which can efficiently suppress large \(\alpha \)-stable noise. To further improve the robustness of the proposed algorithm, the gradient of Eq. (8) is normalized by the \(\ell _2^{}\)-norm of the sliding-window vector \(\mathbf{{x}}\left( n \right) \). Therefore, the iterative formula of the proposed algorithm is represented as:

$$\begin{aligned} \mathbf{{w}}_L^{}\left( {n + 1} \right) = \mathbf{{w}}_L^{}\left( n \right) - {\mu _L}{e_L}\left( n \right) \frac{{\mathbf{{x}}\left( n \right) }}{{{{\left\| {\mathbf{{x}}\left( n \right) } \right\| }_2}}} \end{aligned}$$
(14)

where \({\left\| \bullet \right\| _2}\) is the \(\ell _2^{}\)-norm of a vector.

Hence, when the received samples are contaminated by large \(\alpha \)-stable noise, the gradient of the proposed algorithm is close to 0. As a result, the weight vector of the proposed algorithm almost does not change under large \(\alpha \)-stable noise. Therefore, the overflow of numerical calculations of the proposed algorithm can be efficiently addressed under large \(\alpha \)-stable noise.
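One step of the normalized update in Eq. (14) might be sketched as follows (illustrative function name; constant factors are absorbed into the stepsize, and the degenerate case \(y(n) = 0\) is not handled):

```python
import numpy as np

def proposed_update(w, x_win, Rp, mu, p=1.0, kappa=1.0):
    # One step of the normalized log-statistic update, Eq. (14).
    y = np.vdot(w, x_win)
    r = np.log(abs(y) ** p + kappa) - np.log(Rp + kappa)          # Eq. (12)
    k = np.conj(y) * abs(y) ** (p - 2) / (abs(y) ** p + kappa)    # Eq. (11)
    e = r * k                                                     # e_L(n)
    return w - mu * e * x_win / np.linalg.norm(x_win)             # Eq. (14)
```

Because \(r_L(n)\) grows only logarithmically, \(k_L(n)\) shrinks like \(|y|^{-1}\) (for \(p = 1\)), and \(\mathbf{x}(n)\) is divided by its own norm, an impulse-hit sample moves the weights by a vanishingly small amount instead of overflowing.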

5 Numerical Simulations

In this section, simulations are conducted to investigate the performance of the equalization methods described in the previous sections. The residual ISI is frequently used to measure the performance of the equalizer, which is given by:

$$\begin{aligned} ISI = \frac{{\sum \limits _{n = 0}^{K + L - 1} {{{\left| {C\left( n \right) } \right| }^2}} - \left| C \right| _{\max }^2 }}{{\left| C \right| _{\max }^2}} \end{aligned}$$
(15)

where \(C\left( n \right) \) denotes the n-th element of the combined channel-equalizer response \(\mathbf{{C}}\left( n \right) \), and \(\left| C \right| _{\max }^2\) denotes its maximum squared modulus. Here, \(\mathbf{{C}}\left( n \right) \) is defined as:

$$\begin{aligned} \mathbf{{C}}\left( n \right) = \mathbf{{h}}\left( n \right) \otimes \mathbf{{w}}\left( n \right) \end{aligned}$$
(16)

where \(\otimes \) denotes convolution operation [20, 21].
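A possible implementation of the residual ISI metric of Eqs. (15) and (16) (the function name is illustrative):

```python
import numpy as np

def residual_isi_db(h, w):
    # Eqs. (15)-(16): combined response C = h (*) w, then
    # ISI = (sum_n |C(n)|^2 - max|C|^2) / max|C|^2, expressed in dB.
    c2 = np.abs(np.convolve(h, w)) ** 2
    return 10 * np.log10((c2.sum() - c2.max()) / c2.max())
```

For a perfectly equalized channel the numerator is 0 and the metric tends to \(-\infty \) dB, so lower values mean better equalization.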

Gaussian noise is a special case of \(\alpha \)-stable noise with \(\alpha = 2\). For the case of Gaussian noise, the conventional signal-to-noise ratio (SNR) is defined as:

$$\begin{aligned} SNR = 10\lg \left( {\frac{{E\left[ {{{\left| {s\left( n \right) } \right| }^2}} \right] }}{{\sigma _n^2}}} \right) \end{aligned}$$
(17)

where \(\sigma _n^2\) represents the variance of Gaussian noise.

However, in the case of \(\alpha < 2\), SOS and HOS are infinite for the \(\alpha \)-stable distribution. Hence, the variance \(\sigma _n^2\) of the \(\alpha \)-stable distribution does not exist when \(\alpha < 2\), so the conventional SNR measurement cannot be applied to \(\alpha \)-stable noise. Fortunately, the dispersion coefficient \(\gamma \in \left( {0, + \infty } \right) \) indicates the dispersion level of the distribution and is related to the power of the noise. A generalized signal-to-noise ratio (GSNR) [2–21] is therefore defined as:

$$\begin{aligned} GSNR = 10\lg \left( {\frac{{E\left[ {{{\left| {s\left( n \right) } \right| }^2}} \right] }}{\gamma }} \right) \end{aligned}$$
(18)

In our simulation, the expectation \(E\left[ {{{\left| {s\left( n \right) } \right| }^2}} \right] \) in (17) and (18) is replaced by the time average \( \frac{1}{N}\sum \limits _{n = 1}^N {{{\left| {s\left( n \right) } \right| }^2}}\), and the Gaussian and \(\alpha \)-stable noises are generated by the stblrnd function of MATLAB.
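For a target GSNR, Eq. (18) can be inverted to obtain the dispersion \(\gamma \) to use when generating the noise; a sketch with the time-average power described above (the function name is illustrative):

```python
import numpy as np

def dispersion_for_gsnr(s, gsnr_db):
    # Invert Eq. (18): gamma = E[|s(n)|^2] / 10^(GSNR/10),
    # with the expectation replaced by its time average.
    signal_power = np.mean(np.abs(s) ** 2)
    return signal_power / 10 ** (gsnr_db / 10)
```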

For each experiment, this study uses an independent and identically distributed transmitted sequence drawn from a 4QAM constellation. Moreover, a ten-tap ISI channel with the following complex tap coefficients is used:

$$\begin{aligned} \begin{array}{l} \mathbf{{h}} = \mathrm{{ }}[0.6172 + 0.493j,0.371 + 0.420j,0.173 + 0.150j,0.076 + 0.050j,\\ 0.026 + 0.017j,0.0002 + 0.0003j,0.0006 + 0.0010j,0.0013 + 0.002j,\\ 0.002 + 0.002j,0.002 + 0.0023j] \end{array} \end{aligned}$$
(19)

Additionally, the order of the equalizer is 6, and the equalizer is initialized with a unit center tap to avoid local convergence. The SNR/GSNR is fixed at 15 dB. The performance comparison of FLOS_CMA [24], VMNP_CMA [4], SVS_CMA [28], RDDA [12], MMA [13], and the proposed algorithm is shown in the following figures. The parameters of the competitive methods are set as follows: \({\tilde{p}}{{ = \alpha } \big / 2}\) for [24], \(\xi = 1.6\) and \(\zeta = 2\) for [28], \({\hat{p}} = 2\) for [4], \({\tilde{C}} = 0.4\) and \(\tilde{\sigma }_{}^2 = 10_{}^{ - 5}\) for [12], and \({\mathop {p}\limits ^{\smile }} = 1\) for [13]. It should be noted that these parameter settings are consistent with the related references for a fair comparison. The values of the parameters of the proposed algorithm (\(p = 1\) and \(\kappa = 1\)) are determined by a preliminary experiment, as shown in Fig. 2. Moreover, the other settings of the simulation are given in the following for better comparison of all algorithms.

The smaller the value of \(\alpha \), the larger the impulsive amplitude of the noise. Hence, the value of \(\alpha \) is set to 1.2, 1.6, and 2 corresponding to large, moderate, and small impulsive noise, respectively. Under small impulsive noise, a large stepsize can be adopted to accelerate the convergence of the equalization algorithm. However, under large impulsive noise, a large stepsize will make the equalization algorithm fail to converge. To make different equalization algorithms work stably, different stepsizes are selected for an equalizer under different cases of impulsive noise. The stepsizes of the six equalization algorithms are listed in Table 1.

Table 1 The stepsize of different algorithms

5.1 Example 1: ISI

In this experiment, the residual ISI versus the number of iterations under different cases of \(\alpha \)-stable noise is shown in Fig. 1. It can be seen from Fig. 1a that, under Gaussian noise (\(\alpha =2\)), VMNP_CMA, SVS_CMA, FLOS_CMA, RDDA, MMA, and the proposed algorithm converge successfully after about \(1.5 \times 10^{4}\) iterations. However, the residual ISI of VMNP_CMA, MMA, RDDA, and the proposed algorithm approximates \(-9\) dB, \(-12\) dB, \(-19\) dB, and \(-20\) dB, respectively. This indicates that the proposed algorithm compensates for the ISI under Gaussian noise better than the competitive methods without additional iterations.

Different from Fig. 1a, under moderate impulsive noise, the increase in the residual ISI of the six equalization algorithms ranges from 3 dB to 14 dB, as shown in Fig. 1b. More exactly, the residual ISI of the FLOS_CMA increases by about 14 dB, while that of the proposed algorithm increases by only about 3 dB. Unlike Fig. 1a, b, under large impulsive noise, only RDDA and the proposed algorithm converge successfully, and the residual ISI of the proposed algorithm is about 10 dB lower than that of RDDA. This can be attributed to two reasons: 1) the cost function of the proposed algorithm is superior to those of the competitive methods; 2) the normalization method further improves the robustness of the proposed algorithm to impulsive noise.

Fig. 1
figure 1

The ISI versus the number of iterations. a With \(\alpha = 2\). b With \(\alpha = 1.6\). c With \(\alpha =1.2\)

5.2 Example 2: SER

In this experiment, the symbol error rate (SER) is used to measure the performance of different equalizers.

The SER of the proposed algorithm versus the value of \(\kappa \) under different impulsive noise levels is shown in Fig. 2, which is used to determine the optimal parameter of the proposed algorithm. It can be seen from Fig. 2 that the SER of the proposed algorithm is lower when \(\kappa \in \left[ {0.3,1.8} \right] \). Hence, the effective range of the proposed algorithm is \(\kappa \in \left[ {0.3,1.8} \right] \). Besides, the values of \(\kappa \) and p are both chosen as 1 for a fair comparison.

Fig. 2
figure 2

The SER versus the value of \(\kappa \)

It can be seen from Fig. 3a that all six equalization algorithms work well under Gaussian noise. However, as shown in Fig. 3b, c, as the impulsive amplitude becomes larger, the performance of the five competitive methods degrades, especially that of VMNP_CMA, MMA, and SVS_CMA, due to their poor impulsive noise suppression capability. The performance of the proposed algorithm decreases by 13 dB and 17 dB under moderate and large impulsive noise, respectively, compared to its performance under Gaussian noise. Nevertheless, it remains the best among all compared algorithms under the same impulsive noise. This explicitly reveals that the proposed algorithm is more robust than RDDA and the other competitive algorithms under impulsive noise and further corroborates its high noise suppression capability.

Fig. 3
figure 3

SER versus the input SNR/GSNR. a With \(\alpha \) = 2. b With \(\alpha \) = 1.6. c With \(\alpha \) = 1.2

5.3 Example 3: Constellation of Equalizer Output

In this experiment, under different cases of impulsive noise, the constellations of different equalizer outputs are shown as follows.

It can be seen from Figs. 4, 5, and 6 that the performance of the six equalization algorithms decreases as the impulsive amplitude of the noise becomes larger. Fortunately, the performance degradation of the proposed algorithm is minimal even under large impulsive noise owing to its high impulsive noise suppression capability.

Fig. 4
figure 4

The constellation of different equalizer outputs (\(\alpha = 2\))

Fig. 5
figure 5

The constellation of different equalizer outputs (\(\alpha = 1.6\))

Fig. 6
figure 6

The constellation of different equalizer outputs (\(\alpha = 1.2\))

6 Discussion

The performance degradation of the competitive equalization algorithms is due to under-suppression of the impulsive noise. For example, the iterative formula of SVS_CMA is the product of \({y^*}\left( n \right) \), \(f\left( {{{\left| {y\left( n \right) } \right| }^2} - {R_2}} \right) \), and \(\mathbf{{x}}\left( n \right) \), where \(f\left( \bullet \right) \) denotes the sigmoid function. Apparently, \({y^*}\left( n \right) \), \(f\left( {{{\left| {y\left( n \right) } \right| }^2} - {R_2}} \right) \), and \(\mathbf{{x}}\left( n \right) \) are all affected by impulsive noise. Although \(f\left( {{{\left| {y\left( n \right) } \right| }^2} - {R_2}} \right) \) approaches a positive constant when the samples are contaminated by large impulsive noise, the SVS_CMA may still fail to converge due to numerical overflow caused by the impulsive noise components of \({y^*}\left( n \right) \) and \(\mathbf{{x}}\left( n \right) \). As to FLOS_CMA, \(\left| {y\left( n \right) } \right| _{}^{p - 2}\) is a key component because its value approximates 0 when the samples are contaminated by large impulsive noise. However, there is no efficient way to determine the optimal p for FLOS_CMA in real applications. Additionally, a very small moment order p for FLOS_CMA leads to another problem: over-suppression of the impulsive noise. Also, under large impulsive noise, more samples are required to build a reliable FLOS estimate.

Different from FLOS_CMA, the RDDA has superior performance to other competitive methods owing to its good cost function. More specifically, the constellation matching error is handled by a nonlinear function, which is a composite of the power function, exponent function, and logarithm function. However, under large impulsive noise, the residual ISI and SER of RDDA are higher than those of the proposed algorithm.

The more suitable the nonlinear function, the better the resulting statistic, and hence the better the equalization algorithm. Also, the robustness of the proposed algorithm is further improved by the normalization method. Hence, it can be concluded that the performance gain of the new algorithm is attributed to both the better cost function and the normalization method.

7 Conclusion

In this paper, a new LOS of the equalizer output is constructed based on a logarithm function. Meanwhile, an unconstrained optimization problem is formulated based on the new LOS and the MMSE criterion and solved by the GDA. The robustness of the proposed algorithm is further improved by the normalization method. Also, numerical simulation results are provided to demonstrate the superior performance of the proposed algorithm under both Gaussian and impulsive noise.