Abstract
The primary evaluation criteria for image watermarking methods are imperceptibility, robustness, and capacity. However, these three factors mutually depend on and constrain one another, making it essential to investigate how to strike a balance among them. Statistical watermarking methods that leverage the statistical characteristics of the carrier and human visual features hold promise in achieving such an equilibrium among imperceptibility, robustness, and capacity. Nevertheless, statistical watermarking methods face certain issues, including poor robustness of the embedding domain and a shortage of decoding methods. In this paper, we introduce a novel approach that enhances robustness of the embedding domain by utilizing local pseudo-Zernike moments (PZMs) magnitudes in the nonsubsampled shearlet transform (NSST) domain. We then design a two-component Student's-t mixture model (SMM) to represent the NSST-PZMs magnitudes, taking into account their "peak-tail" characteristics. Finally, by applying the SMM in conjunction with the maximum-likelihood criterion, we derive a closed-form decoding expression. Our extensive experimental results indicate that when embedding a 1024-bit watermark sequence, our method exhibits outstanding performance, with a peak signal-to-noise ratio exceeding 58 dB and a bit error rate lower than 0.033 on the SIPI, CVG-UGR, and BOSSbase datasets. Compared with several advanced methods, our approach demonstrates significant advantages in terms of imperceptibility and robustness.
1 Introduction
In light of the widespread use of the internet and multimedia systems, digital data is increasingly susceptible to unauthorized distribution and misuse, resulting in a significant infringement of the intellectual property rights of data creators. To tackle this issue, digital watermarking technology [2, 11, 18, 23, 33, 43, 45, 49] has emerged and found extensive applications in the discipline of information security. The application domains of digital watermarking encompass various fields such as image, video, audio, and more. This paper, in particular, concentrates on digital image watermarking (DIW) technology [15, 22, 24, 26, 31, 32, 41, 48]. DIW technology is characterized by three key performance metrics: invisibility, robustness, and watermark capacity. These metrics are interconnected and impose mutual constraints. Typically, DIW technology comprises two primary components: watermark embedding and watermark detection. Watermark embedding is responsible for the incorporation of watermark signals into an image, while watermark detection is employed to ascertain the presence of watermark signals within an image.
During the process of watermark embedding, two main tasks need to be accomplished. The first task is to establish a robust embedding domain. Typically, there are three embedding domains, namely the spatial domain [21, 27, 42, 44], the frequency domain [1, 13, 14, 20, 58], and the hybrid domain [34, 47, 55, 56, 59]. Among these domains, the hybrid domain, especially methods based on singular value decomposition, is considered to be the most robust. However, hybrid domain methods based on singular value decomposition face the issue of false positives, which requires a solution [17]. The second task is to determine the embedding rules, with additive \(Y = X + \alpha w\) [4, 5, 9] and multiplicative \(Y = X + \alpha Xw\) [7, 8, 12, 35, 36] rules being commonly used. In these rules, \(X\) and \(Y\) represent the original data and the watermarked data, \(w\) is the watermark sequence, and \(\alpha\) is the embedding strength. Increasing \(\alpha\) can enhance robustness, but its upper limit is constrained by the requirements of the human visual system (HVS). As the multiplicative rule adapts to image content, it has gained popularity in the field of watermarking.
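For concreteness, the two rules can be written as a short numerical sketch; the function names and test data below are illustrative only and not part of any specific scheme.

```python
import numpy as np

def embed_additive(x, w, alpha):
    # Additive rule: Y = X + alpha * w
    return x + alpha * w

def embed_multiplicative(x, w, alpha):
    # Multiplicative rule: Y = X + alpha * X * w = X * (1 + alpha * w)
    return x * (1.0 + alpha * w)

# Toy example: embed one +/-1 bit per coefficient with a small strength
rng = np.random.default_rng(0)
x = rng.random(8)                      # host coefficients (placeholder data)
w = rng.choice([-1, 1], size=8)        # watermark bits
print(embed_additive(x, w, 0.05))
print(embed_multiplicative(x, w, 0.05))
```

The multiplicative rule scales the perturbation with the coefficient magnitude, which is why it adapts naturally to image content.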
Watermark detection methods can be broadly categorized into two main types: correlation-based watermark detection techniques and statistical watermark detection techniques. The principle of correlation-based watermark detection is to determine whether the host image contains a watermark signal by analyzing the correlation between the extracted watermark signal and the original watermark signal. This method performs well when the carrier image follows a Gaussian distribution. However, previous studies have shown that in many cases the carrier image does not obey a Gaussian distribution, either in the spatial domain or in the transform domain, which limits the effectiveness of correlation-based watermark detection. In contrast, statistical watermark detection techniques, in which detection is performed through the collaboration of statistical models and decision rules, are not constrained by this problem. The accuracy of this kind of detection depends on how accurately the statistical model captures the object being modeled, namely the watermark embedding domain.
The study of achieving a balance between invisibility, robustness, and watermark capacity holds significant importance in the field of watermarking technology. Statistical watermarking techniques that are based on human visual characteristics and carrier statistical characteristics have the potential to address this balance effectively. Numerous watermarking methods based on statistical models have been proposed. In [14], a multiplicative watermark scheme is developed in the contourlet domain. The contourlet coefficients are modeled using the t-location scale (TLS) distribution. The authors use the Kolmogorov–Smirnov (K-S) test to demonstrate the efficiency of modeling contourlet coefficients with the TLS. In [1], a non-additive scheme is proposed, modeling discrete shearlet transform (DST) coefficients using the Laplace distribution. DST serves as the embedding domain in this scheme. In [35], a blind multichannel multiplicative watermarking scheme in the sparse domain is presented. An effective closed-form watermark detector is designed using the multivariate Cauchy distribution. Experimental results and theoretical analysis confirm the effectiveness of this watermark detector. The work in [36] introduces a low-complexity watermarking detector based on the stable family of Cauchy distributions. The study investigates the robustness of this detector against a variety of attacks such as noise, filtering, and compression, showing it to outperform the generalized Gaussian detector. Amini et al. [7] present a new color image watermarking scheme and an associated sparse domain detector. This scheme detects the presence of a watermark by establishing a binary hypothesis test. Experimental results indicate that this detector performs well in terms of robustness. In [8], a scaling watermarking scheme is proposed. This scheme embeds the watermark into low-frequency wavelet coefficients and effectively describes wavelet coefficients using a Gaussian mixture model (GMM). A maximum-likelihood (ML) watermark detector is proposed based on the GMM. In [10], a uniformly most powerful watermark detector for detecting weak signals in the wavelet domain is proposed. The detector models the noise distribution using the Bessel K-form (BKF) probability density function (PDF). The authors extensively discuss the BKF detector and assess the practical performance of the proposed detector through receiver operating characteristic (ROC) analysis. The work in [37] tackles the blind watermark detection problem in the contourlet domain. It models the contourlet domain coefficients using the normal inverse Gaussian (NIG) distribution and designs an optimal blind watermark detector in the contourlet domain. The experiments demonstrate the robustness of this detector against attacks such as JPEG compression and Gaussian noise. Niu et al. [29] introduce a statistical color image watermarking method based on the Cauchy-Rayleigh distribution and the local quaternion polar complex exponential transform (QPCET). They derive a closed-form expression for the watermark detector using the Cauchy-Rayleigh distribution.
Although numerous statistical-based watermark detectors have been proposed, they primarily serve the purpose of determining whether an image contains watermark information. Some researchers have realized that decoders that can extract watermark information are of more practical value, so they have developed several watermark decoders based on statistical methods. Akhaee et al. [3] propose a blind image watermarking method that is not influenced by the main signal's distribution and is applicable to any distribution in the transform domain. This method divides the host signal into two small blocks, with one block embedding the watermark while the other block remains unchanged. Watermark extraction is achieved by analyzing the ratio of the sum of samples in each block. In [38], a watermark decoder based on the contourlet domain, which is also the embedding domain, is introduced. This method employs the NIG distribution to model contourlet domain coefficients and designs a watermark extraction method based on the NIG distribution using ML criteria. Experimental results show the robustness of this decoder against several attacks, including noise, rotation, cropping, filtering, and compression. Amini et al. [6] propose a blind multi-bit watermark decoder based on a wavelet domain vector hidden Markov model (HMM). They derive a closed-form expression for the bit error rate (BER) and validate the correctness of this expression through Monte Carlo simulation experiments. In [39], a multiplicative image watermarking scheme in the contourlet domain is introduced. Although contourlet coefficients are not Gaussian distributed within subbands, the authors argue that the local distribution closely resembles a Gaussian distribution. Therefore, they use a bivariate Gaussian (BVG) distribution to model the distribution of contourlet coefficients. Based on the BVG, they design an optimal blind watermark decoder in the contourlet domain. Wang et al. [51] propose a blind image watermark decoder based on the discrete nonsubsampled shearlet transform (DNST) domain. They utilize a vector Gaussian-Cauchy mixture model to model the singular values in the DNST domain. Additionally, several decoding methods have been introduced in the nonsubsampled contourlet transform (NSCT) domain [53], the contourlet domain [52], and the nonsubsampled shearlet transform (NSST) domain [28]. Liu et al. [25] present a color image watermarking method based on the quaternion polar harmonic transform (QPHT) and an ML decoder. They model the QPHT magnitudes using the BKF distribution. In [50], a statistical watermarking approach based on local radial harmonic Fourier moments (RHFMs) magnitudes and the beta exponential distribution model is introduced. A decoder for multiplicative image watermarks is derived using the beta exponential distribution and the ML criterion. Xia et al. [54] offer a robust multiplicative watermark decoder based on fitting the polar harmonic Fourier moments (PHFMs) magnitudes with the Weibull distribution. They establish a statistical watermark decoder using the Weibull distribution as the prior for PHFMs magnitudes. In [57], a blind multiplicative watermark system in the curvelet domain is introduced. The authors employ the NIG distribution to fit the curvelet coefficients and design a watermark decoder using the NIG distribution and ML criteria.
The aforementioned methods provide valuable insights for the design of statistical model-based decoders. However, they suffer from several drawbacks, such as low robustness of the watermark embedding domain, weak model performance, and inferior decoding accuracy. To address these drawbacks, this paper proposes a watermark decoder utilizing the Student's-t mixture model (SMM) to model the low-order pseudo-Zernike moments (PZMs) magnitudes in the nonsubsampled shearlet transform (NSST) domain. The primary contributions of this study are outlined as follows:
-
To enhance the robustness of watermark embedding domain and the visual quality of watermarked images, NSST domain local PZMs magnitudes are designed, leveraging the decomposition characteristics of NSST and the robustness of low-order PZMs.
-
To ensure accuracy in modeling the NSST-PZMs magnitudes, a two-component SMM is designed based on the analysis of the statistical characteristics of NSST-PZMs magnitudes.
-
A closed-form decoder expression is derived based on SMM, which is the prior of the NSST-PZMs magnitudes, and maximum-likelihood (ML) criteria.
The remaining sections of this paper are organized as follows: Sect. 2 offers an introduction to local NSST-PZMs magnitudes, conducts an analysis of the robustness and statistical properties of NSST-PZMs magnitudes, and introduces the utilization of SMM for modeling NSST-PZMs magnitudes. Section 3 details the watermark embedding process and provides an in-depth explanation of the decoder. In Sect. 4, the performance of the proposed method is analyzed from the perspectives of invisibility, robustness, and watermark capacity, and it is compared with some advanced methods. The paper concludes in Sect. 5.
2 Preliminaries
2.1 Pseudo-Zernike Moments (PZMs)
Pseudo-Zernike moments (PZMs) [46] are adopted in our scheme because they have some important properties, i.e., their magnitudes are invariant to image rotation and flipping. Several properties of PZMs are worth noting: firstly, low-order PZMs are capable of representing image features, while higher-order PZMs excel at representing image details; secondly, low-order PZMs exhibit better robustness than higher-order PZMs; and lastly, the number of magnitudes generated by PZMs is related to the order, with a \(T\)-order PZMs yielding \((T + 1) \times (2T + 1)\) magnitudes. This leads to the conclusion that PZMs cannot simultaneously possess strong robustness and high image representation capability.
When constructing an embedding domain, robustness, imperceptibility, and watermark capacity must all be considered. Considering that low-order PZMs exhibit strong robustness and that the number of invariants generated by PZMs is related to their order, a local decomposition strategy is proposed to ensure the generation of sufficient invariants while exploiting the robustness of low-order PZMs. Specifically, the "non-overlapping equal-sized" strategy is used to segment the host image into multiple local blocks, and low-order PZMs are applied to these local blocks, as shown in Fig. 1. In this study, the block size is set to \(8 \times 8\), and the order of PZMs is set to 5.
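As a concrete illustration of this local strategy, the sketch below computes low-order PZM magnitudes on a single 8 × 8 block using the standard pseudo-Zernike radial polynomial from [46]; the unit-disk mapping and the normalization are our own assumptions and may differ in detail from the authors' implementation.

```python
import numpy as np
from math import factorial

def pzm_radial(n, m, rho):
    # Pseudo-Zernike radial polynomial R_nm(rho) (Teh & Chin [46])
    m = abs(m)
    r = np.zeros_like(rho)
    for s in range(n - m + 1):
        c = ((-1) ** s) * factorial(2 * n + 1 - s) / (
            factorial(s) * factorial(n + m + 1 - s) * factorial(n - m - s))
        r += c * rho ** (n - s)
    return r

def pzm_magnitudes(block, order=5):
    # Magnitudes |A_nm| for all valid pairs (n, m) with |m| <= n <= order,
    # computed on one block mapped onto the unit disk; pixels outside the
    # disk are ignored, and A_nm = (n+1)/pi * sum f * conj(V_nm).
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    xn = (2 * x - w + 1) / w          # map pixel centers to [-1, 1]
    yn = (2 * y - h + 1) / h
    rho = np.sqrt(xn ** 2 + yn ** 2)
    theta = np.arctan2(yn, xn)
    inside = rho <= 1.0
    mags = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            basis = pzm_radial(n, m, rho) * np.exp(-1j * m * theta)
            a_nm = (n + 1) / np.pi * np.sum(block[inside] * basis[inside])
            mags.append(abs(a_nm))
    return np.array(mags)

# Example: 5-order PZM magnitudes of one 8 x 8 block
block = np.random.rand(8, 8)
print(pzm_magnitudes(block, order=5)[:10])
```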
Taking Couple, Airplane, Boat, Lena, and Peppers as examples, Fig. 2 displays the original images and the images reconstructed from 5-order PZMs. As shown in Fig. 2, the quality of the images reconstructed using PZMs alone is extremely low and fails to meet the requirements of the HVS. This is because low-order PZMs can only reconstruct the coarse features of an image and are unable to reconstruct its details.
2.2 Nonsubsampled Shearlet Transform (NSST) Domain Local PZMs Magnitudes
The shearlet transform (ST), compared to other multiscale methods used for image representation like pyramids, wavelets, and curvelets, demonstrates superior effectiveness in capturing features of the host image in various directions. Unfortunately, the standard ST lacks translation invariance due to the downsampling operation, making it susceptible to exhibiting pseudo-Gibbs artifacts during image fusion. To tackle this issue, Easley et al. [13] introduced a nonsubsampled version of the ST, known as the nonsubsampled shearlet transform (NSST). Since the low-level NSST decomposition preserves critical image feature information, we incorporated NSST into this study to filter out less important image data. The NSST decomposition level is set to 2.
Based on the features of NSST and PZMs, we propose a construction strategy for the NSST domain local PZMs magnitudes. This strategy aims to leverage the robustness of low-order PZMs, generate sufficient invariants to accommodate watermarks, and ensure image reconstruction quality. Taking a 512 × 512 image as an example, the generation process of the NSST-PZMs magnitudes is as follows:
First, a two-level NSST decomposition is applied to the host image, resulting in high-frequency subbands at two scales, with each scale containing high-frequency subbands in four different directions.
Next, the high-frequency subbands are segmented using a “non-overlapping equal-sized” strategy, with a block size of 8 × 8.
Finally, a 5-order PZMs calculation is executed on each frequency domain block to yield the locally NSST-PZMs magnitude.
The essence of this strategy lies in utilizing NSST to filter out redundant information in the image, thereby reducing the amount of information required for PZMs reconstruction. This effectively ensures the quality of image reconstruction while fully leveraging the robustness of low-order PZMs and generating sufficient invariants. Taking a high-frequency subband obtained by NSST decomposition as an example, Fig. 3 shows the local NSST-PZMs magnitude generation process. Each high-frequency subband can generate 4096 local NSST-PZMs magnitude blocks, and assembling these blocks in an orderly manner yields the NSST-PZMs magnitude image.
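The pipeline can be sketched as follows; `nsst_decompose` is only a placeholder standing in for an actual NSST implementation (none ships with common Python libraries), and `pzm_magnitudes` is the per-block routine sketched in Sect. 2.1, so what matters here is the block structure rather than the numerical values.

```python
import numpy as np

def nsst_decompose(image, levels=2, directions=4):
    # Placeholder for a real NSST decomposition (e.g., a ShearLab/FFST port);
    # returns one list of directional high-frequency subbands per scale.
    return [[np.random.randn(*image.shape) for _ in range(directions)]
            for _ in range(levels)]

def local_nsst_pzm_magnitudes(image, block=8, order=5, pzm_magnitudes=None):
    # Sketch of the local NSST-PZMs magnitude construction (Sect. 2.2):
    # 2-level NSST, pick one scale-2 subband, split into 8x8 blocks,
    # and compute low-order PZM magnitudes per block.
    subbands = nsst_decompose(image, levels=2, directions=4)
    scale2 = subbands[1]
    target = max(scale2, key=np.var)   # subband later used for embedding
    h, w = target.shape
    mags = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            blk = target[r:r + block, c:c + block]
            mags.append(pzm_magnitudes(blk, order) if pzm_magnitudes
                        else np.abs(blk).ravel())
    return np.array(mags)              # 4096 blocks for a 512 x 512 image

mags = local_nsst_pzm_magnitudes(np.random.rand(512, 512))
print(mags.shape)
```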
Taking Couple, Airplane, Boat, Lena, and Peppers as examples, Fig. 4 displays the original images and the images reconstructed from the NSST-PZMs magnitudes. Comparing the reconstructed images in Fig. 4 to those in Fig. 2, it is clear that the reconstruction quality in Fig. 4 is higher, making it challenging for the HVS to discern any differences between the reconstructed images and the original images.
2.3 Robustness Analysis of Local NSST-PZMs Magnitudes
We employ the normalization error to measure the robustness of NSST-PZMs magnitudes, and its calculation formula is as shown in (1):
where \(E\) represents the result of the normalization error calculation, with a smaller \(E\) indicating stronger robustness. \(M \times N\) represents the size of the sample. \(f(a,b)\) represents the original image, and \(f^{\prime}(a,b)\) is the image after the attack. \(X(a,b)\) denotes the absolute difference between \(f(a,b)\) and \(f^{\prime}(a,b)\). \(X_{\min }\) represents the minimum value of \(X(a,b)\), and \(X_{\max }\) denotes the maximum value. A total of 2000 images are randomly selected from the BOSSbase dataset as test images, and the spatial domain, NSST domain, PZMs magnitude domain, and NSST-PZMs magnitude domain serve as the test domains. Here, the NSST-PZMs magnitude domain is generated from the high-frequency subband with the highest variance in Scale 2 after the 2-level NSST decomposition, which is the target region for hiding watermarks. The results of the normalization error experiments are summarized in Table 1. It is clear from Table 1 that, in comparison to the spatial domain, NSST domain, and PZMs domain, the normalization error of the NSST-PZMs domain is smaller. This indicates the greater resistance to attacks achieved by NSST-PZMs.
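Since Eq. (1) is not reproduced above, the sketch below assumes the normalization error is the mean of the min–max normalized absolute differences, which is consistent with the quantities \(X(a,b)\), \(X_{\min }\), \(X_{\max }\), and \(M \times N\) defined in the text; treat it as an illustration rather than the authors' exact formula.

```python
import numpy as np

def normalization_error(f, f_attacked):
    # Assumed form of Eq. (1): mean of min-max normalized |f - f'|
    x = np.abs(f.astype(float) - f_attacked.astype(float))
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:                 # identical domains -> no error
        return 0.0
    return np.mean((x - x_min) / (x_max - x_min))

# Toy example: compare a domain with a slightly noisy copy of itself
f = np.random.rand(64, 64)
print(normalization_error(f, f + 0.01 * np.random.randn(64, 64)))
```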
The NSST-PZMs magnitude image is composed of 4096 NSST-PZMs magnitude blocks, which are organized in a specific order. Based on the order and repetition of PZMs, the magnitude at the corresponding order and repetition is calculated for each NSST-PZMs magnitude block. To assess the robustness of NSST-PZMs magnitude, the attacked NSST-PZMs magnitude is compared with the unattacked NSST-PZMs magnitude for judging its performance under various attacks. The results of the robustness assessment experiment for NSST-PZMs magnitudes, conducted with 500 images, are presented in Fig. 5. As indicated by Fig. 5, the magnitudes at the partial order and repetition exhibit only slight fluctuations of less than 0.2. This observation suggests that NSST-PZMs magnitude demonstrates robustness against various attacks.
2.4 Statistical Characteristics Analysis of NSST-PZMs Magnitudes
The test subjects are generated from the high-frequency subband with the maximum variance within the second scale of NSST. The statistical characteristics of NSST-PZMs magnitudes are analyzed using frequency histograms and kurtosis. We select several hundred images, and the frequency histograms and corresponding kurtosis values of six images are illustrated in Fig. 6. It is important to note that the kurtosis of a Gaussian distribution is 3. According to Fig. 6, NSST-PZMs magnitudes exhibit non-Gaussian characteristics with a sharp peak and heavy tail.
2.5 Statistical Modeling of NSST-PZMs Magnitudes
One of the key aspects of building a high-quality decoder is the design of an effective and simple statistical model. In this study, “effective” means the model can accurately capture NSST-PZMs magnitudes, while “simple” implies that the model has a limited number of parameters. The student’s-t distribution features a shape-adjustable parameter, \(\nu\), which imparts a high degree of flexibility to the distribution. The student’s-t distribution approximates the normal distribution when \(\nu \to + \infty\), but has heavy tails when \(\nu \to 0\). Given the characteristics of NSST-PZMs magnitudes, with a sharp peak and heavy tail, the student’s-t mixture model (SMM) [30] is introduced. SMM comprises multiple student’s-t distributions. The PDF of SMM is represented as follows:
where \(K\) represents the number of components, \(\eta_{k}\) is the weight of the \(k\)-th component, and \(\sum\limits_{k = 1}^{K} {\eta_{k} } = 1\). \(\nu_{k}\) denotes the degrees of freedom of the \(k\)-th component, \(d\) is the dimension, \(\Sigma_{k}\) is the covariance matrix of the \(k\)-th component, and \(\mu_{k}\) is the mean of the \(k\)-th component. Given the "sharp peak and heavy tail" characteristics of NSST-PZMs magnitudes, \(K\) is set to 2, so the SMM consists of two Student's-t distributions. This choice allows one component to model the "sharp peak" while the other models the "heavy tail."
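The mixture density itself is not reproduced above; for reference, the standard form of the Student's-t mixture from Peel and McLachlan [30], to which the parameters just defined correspond, is

```latex
f(\mathbf{x}) = \sum_{k=1}^{K} \eta_{k}\,
  \frac{\Gamma\!\left(\tfrac{\nu_{k}+d}{2}\right)}
       {\Gamma\!\left(\tfrac{\nu_{k}}{2}\right)\,(\nu_{k}\pi)^{d/2}\,|\Sigma_{k}|^{1/2}}
  \left[ 1 + \frac{(\mathbf{x}-\mu_{k})^{\top}\Sigma_{k}^{-1}(\mathbf{x}-\mu_{k})}{\nu_{k}} \right]^{-\tfrac{\nu_{k}+d}{2}},
  \qquad \sum_{k=1}^{K} \eta_{k} = 1 .
```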
Six different distributions are selected for analysis, including the Gamma distribution, Rayleigh distribution, Exponential distribution, Gaussian distribution, Weibull distribution, and SMM. The NSST-PZMs magnitude images, generated from the high-frequency subband of scale 2 after NSST 2-level decomposition of Lena, Barbara, and Couple images, are used as test objects. The K-S test is employed to measure the goodness of fit of these distributions to the test objects, where a smaller K-S value indicates a better fit to the data. The K-S experimental results are summarized in Table 2. In comparison to the other distributions, the SMM displays the smallest K-S value, suggesting that the SMM is particularly well-suited for modeling NSST-PZMs magnitudes. This conclusion can be extended to other natural images as well.
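A minimal way to reproduce this kind of comparison with SciPy is sketched below; the stand-in data and the maximum-likelihood fits are illustrative, and the SMM itself has no closed-form `fit` and is instead estimated with EM, as sketched a little further below.

```python
import numpy as np
from scipy import stats

# One-sample K-S statistic for several candidate models fitted by ML;
# smaller values indicate a better fit to the magnitude sample.
data = np.abs(np.random.standard_t(df=3, size=4000))   # stand-in magnitudes
for name in ["gamma", "rayleigh", "expon", "norm", "weibull_min"]:
    dist = getattr(stats, name)
    params = dist.fit(data)
    ks = stats.kstest(data, name, args=params).statistic
    print(f"{name:12s} K-S = {ks:.4f}")
```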
Furthermore, using the Peppers image as an example, the NSST-PZMs magnitude images generated from scale 2 are taken as the target objects. Figure 7 displays the fit of the Gamma distribution, Rayleigh distribution, Exponential distribution, Gaussian distribution, Weibull distribution, and SMM to the target objects. Maximum likelihood estimation is employed to estimate the parameters of the Gamma, Rayleigh, Exponential, Gaussian, and Weibull distributions, while the parameters of the SMM are estimated using the Expectation–Maximization (EM) method [30]. As depicted in Fig. 7, in comparison to the other distributions, the SMM exhibits the closest fit to the target objects. This further demonstrates that the SMM is suitable for modeling NSST-PZMs magnitudes.
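A compact EM fit for a two-component SMM on one-dimensional magnitude samples is sketched below, following the structure of the EM updates in Peel and McLachlan [30]; for brevity the degrees of freedom are held fixed instead of being updated numerically, and the initialization is deliberately crude.

```python
import numpy as np
from scipy.stats import t as student_t

def fit_smm_em(x, nu=(3.0, 3.0), n_iter=100):
    # Simplified EM for a two-component 1-D Student's-t mixture (SMM);
    # the full algorithm in [30] also updates nu via a nonlinear equation.
    x = np.asarray(x, dtype=float)
    mu = np.array([np.percentile(x, 30), np.percentile(x, 70)])
    sigma = np.array([x.std(), x.std()])
    eta = np.array([0.5, 0.5])
    nu = np.asarray(nu, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibilities r and latent scale weights u
        pdf = np.stack([eta[k] * student_t.pdf(x, df=nu[k], loc=mu[k], scale=sigma[k])
                        for k in range(2)])                      # shape (2, N)
        r = pdf / np.maximum(pdf.sum(axis=0, keepdims=True), 1e-300)
        delta = ((x[None, :] - mu[:, None]) / sigma[:, None]) ** 2
        u = (nu[:, None] + 1.0) / (nu[:, None] + delta)
        # M-step: weights, means, and scales
        eta = r.mean(axis=1)
        mu = (r * u * x[None, :]).sum(axis=1) / (r * u).sum(axis=1)
        var = (r * u * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / r.sum(axis=1)
        sigma = np.sqrt(np.maximum(var, 1e-12))
    return eta, mu, sigma, nu

# Example: heavy-tailed synthetic magnitudes
rng = np.random.default_rng(0)
sample = np.abs(rng.standard_t(df=3, size=5000))
print(fit_smm_em(sample))
```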
3 Proposed Method
3.1 Digital Watermark Embedding
Let \(I = \{ f(i,j),0 \le i < M,0 \le j < N\}\) represent the carrier image, and let \({\mathbf{w}} = \{ w_{1} , \ldots ,w_{l} , \ldots ,w_{L} \}\) be a binary watermark sequence of length \(L\) generated using a pseudo-random sequence (PRS), where \(w_{l} \in \{ - 1, + 1\}\). Figure 8 illustrates the process of watermark embedding, and the detailed steps are as follows:
Step 1: The 2-level NSST decomposition is applied to \(I\), resulting in four high-frequency subbands at each of the two scales. The high-frequency subband with the highest variance at scale 2 is selected as the target subband \(Sub_{tar}\).
Step 2: \(Sub_{tar}\) is divided into \(B_{N}\) NSST domain blocks using a “non-overlapping and equal-sized” strategy, \(B_{N} \ge L\). The entropy values of these \(B_{N}\) blocks are calculated. The entropy-based method is utilized to select the optimal watermark embedding locations because the HVS is less sensitive to changes in regions with higher entropy.
Step 3: \(L\) high-entropy blocks are selected, and these blocks undergo PZMs. As a result, \(L\) target NSST-PZMs magnitude blocks, \({\mathbf{B}} = \{ B_{1} , \ldots ,B_{l} , \ldots ,B_{L} \}\), are obtained.
Step 4: The watermark sequence is embedded into \({\mathbf{B}}\) using the multiplicative rule \(y_{i} = x_{i} (1 + \alpha w_{l} )\),
where \(x_{i}\) represents the original NSST-PZMs magnitude; \(w_{l}\) is the \(l\)-th watermark bit from \({\mathbf{w}}\); \(y_{i}\) denotes the NSST-PZMs magnitude containing the watermark; and \(\alpha\) is the embedding strength, \(0 < \alpha \le 1\), determined by the embedding domain variance and the watermark-to-document ratio (WDR). The rule for \(\alpha\) is as follows:
In this research, a single target NSST-PZMs magnitude block is used to embed only one watermark bit. This means that one watermark bit is combined with the \((T + 1) \times (2T + 1)\) magnitudes within the target block, with the aim of improving watermark retention.
Step 5: The \(L\) NSST-PZMs magnitude blocks containing watermarks are reconstructed using the inverse PZMs. As a result, \(L\) NSST domain blocks containing watermarks are obtained. These \(L\) watermarked NSST domain blocks are then combined with the \(\left( {B_{N} - L} \right)\) original NSST domain blocks to perform NSST reconstruction, resulting in the watermarked image \(I_{w} = \{ f_{w} (i,j),0 \le i < M,0 \le j < N\}\).
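The following minimal sketch ties Steps 2–4 together: blocks are ranked by a histogram-based Shannon entropy (the exact entropy definition used by the authors may differ), and one ±1 bit is embedded multiplicatively into every magnitude of each selected block. The value α = 0.01 used here corresponds to WDR = −40 dB under the common convention WDR = 20·log10(α) for multiplicative embedding; the paper's exact rule for α is not reproduced.

```python
import numpy as np

def block_entropy(block, bins=256):
    # Histogram-based Shannon entropy of one NSST-domain block
    hist, _ = np.histogram(block, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def select_high_entropy_blocks(subband, L, block=8):
    # Steps 2/3: rank non-overlapping blocks by entropy, keep the L busiest
    h, w = subband.shape
    coords = [(r, c) for r in range(0, h, block) for c in range(0, w, block)]
    ent = [block_entropy(subband[r:r + block, c:c + block]) for r, c in coords]
    return [coords[i] for i in np.argsort(ent)[::-1][:L]]

def embed_bits(magnitude_blocks, bits, alpha=0.01):
    # Step 4: y_i = x_i * (1 + alpha * w_l), one bit per magnitude block
    return [x * (1.0 + alpha * w) for x, w in zip(magnitude_blocks, bits)]

# Toy run: pick 4 target blocks from a random "subband" and embed 4 bits
rng = np.random.default_rng(1)
subband = rng.standard_normal((64, 64))
targets = select_high_entropy_blocks(subband, L=4)
mag_blocks = [np.abs(rng.standard_t(3, size=66)) for _ in targets]  # stand-in PZM magnitudes
bits = rng.choice([-1, 1], size=len(targets))
watermarked_blocks = embed_bits(mag_blocks, bits)
```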
3.2 Digital Watermark Extraction
Essentially, the task of a decoder is to extract the intended signal from a noisy context. In this research, the decoder needs to extract \({\mathbf{w}}\) from the NSST-PZMs magnitudes. An ML decoder based on the SMM is proposed to extract the watermark from \(I_{w} = \{ f_{w} (i,j),0 \le i < M,0 \le j < N\}\). Figure 9 illustrates the process of watermark decoding, and the details are as follows:
Firstly, the target subband \(Sub_{w\_tar}\) is selected from the 2-level NSST decomposition of \(I_{w}\) based on the highest variance at scale two. \(Sub_{w\_tar}\) is divided into \(B_{N}\) equally-sized non-overlapping blocks.
Next, \(L\) high-entropy blocks from \(B_{N}\) are chosen, and PZMs are applied to them to identify \(L\) NSST-PZMs magnitude blocks for decoding \(w\).
Furthermore, all \(B_{N}\) blocks undergo PZMs, resulting in \(B_{N}\) NSST-PZMs magnitude blocks. The SMM parameters are then trained on the coefficients of these \(B_{N}\) NSST-PZMs magnitude blocks using the EM method.
Finally, the decoder extracts the \(l\) th watermark bit from the \(l\) th target block, \(l = 1,2, \ldots ,L\). The derivation process for the decoder expression \(SMM\_ML\) is as follows:
The watermark sequence created by the PRS consists of "+1" and "−1," with each watermark bit occurring with equal probability. Based on these characteristics, the hypothesis that the embedded bit is "+1" is denoted \(H_{1}\), and the hypothesis that the embedded bit is "−1" is denoted \(H_{0}\).
where \(x_{i}\) represents the NSST-PZMs magnitude without the watermark; \(y_{i}\) is the NSST-PZMs magnitude with the watermark; \(\alpha\) is the embedding strength; and \(i \in B_{l}\), where \(B_{l}\) is the \(l\)-th watermark embedding block. Based on the ML criterion, (6) is represented as:
Taking the logarithm of (7) and collecting the terms on one side, we obtain:
Substitute \(f_{{\mathbf{y}}} (y_{i} |w_{l} = \pm 1) = \frac{1}{1 \pm \alpha }f_{{\mathbf{x}}} (\frac{{y_{i} }}{1 \pm \alpha })\) into (8) to obtain the detailed version of \(SMM\_ML_{l} ({\mathbf{y}})\):
According to (9), the criterion for defining the operation of the decoder is as follows:
where \(\hat{w}_{l}\) denotes the \(l\)-th decoded watermark bit. According to (10), if \(Z_{l} ({\mathbf{y}}) \ge T_{l}\), the embedded watermark bit in the target domain is "+1"; otherwise, it is "−1." The expression for \(Z_{l} ({\mathbf{y}})\) is
The expression for \(T_{l}\) is
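Since the closed-form expressions are not reproduced above, the sketch below shows one standard realization of the test implied by (6)–(12): the log-likelihood ratio of the two scaled hypotheses under the fitted SMM prior is compared with a threshold absorbing the Jacobian terms. The grouping of terms into \(Z_{l}\) and \(T_{l}\) is our assumption and may differ from the paper's exact expressions.

```python
import numpy as np
from scipy.stats import t as student_t

def smm_logpdf(x, eta, mu, sigma, nu):
    # Log-density of the fitted two-component SMM (1-D sketch)
    comp = np.stack([eta[k] * student_t.pdf(x, df=nu[k], loc=mu[k], scale=sigma[k])
                     for k in range(len(eta))])
    return np.log(np.maximum(comp.sum(axis=0), 1e-300))

def decode_bit(y_block, alpha, smm_params):
    # ML decoding of one bit: assumed form
    #   Z_l = sum_i [ln f_x(y_i/(1+a)) - ln f_x(y_i/(1-a))]
    #   T_l = |B_l| * ln((1+a)/(1-a))
    y = np.asarray(y_block, dtype=float)
    z = np.sum(smm_logpdf(y / (1 + alpha), *smm_params)
               - smm_logpdf(y / (1 - alpha), *smm_params))
    t_l = y.size * np.log((1 + alpha) / (1 - alpha))
    return 1 if z >= t_l else -1

# Example with a toy (not fitted) SMM prior and alpha = 0.01; with a prior
# actually fitted to the magnitudes, the two bits would typically be recovered.
params = (np.array([0.6, 0.4]), np.array([0.5, 2.0]),
          np.array([0.3, 1.0]), np.array([3.0, 3.0]))
rng = np.random.default_rng(2)
x = np.abs(rng.standard_t(3, size=66))
print(decode_bit(x * (1 + 0.01 * 1), 0.01, params),
      decode_bit(x * (1 + 0.01 * -1), 0.01, params))
```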
4 Experimental Results
We conducted an extensive performance evaluation of the proposed method, considering imperceptibility, robustness, and watermark capacity. The evaluation also includes a comparison with other existing methods. For this experiment, we used 512 × 512 grayscale images obtained from three different datasets: the SIPI dataset (http://sipi.usc.edu/database/), the CVG-UGR dataset (http://decsai.ugr.es/cvg/dbimagenes/g512.php), and the BOSSbase dataset (http://agents.fel.cvut.cz/boss/). The watermark sequence utilized in these experiments comprises "+1" or "−1," with each watermark bit having an equal probability in the sequence. The experimental settings were as follows: NSST level set to 2, PZMs order set to 5, block size of 8 × 8, and a WDR of −40 dB.
4.1 Performance Evaluation
4.1.1 Imperceptibility Analysis
We conducted an evaluation of the proposed method, focusing on its imperceptibility. In the first phase, we assessed the imperceptibility of the proposed method from an objective standpoint using two widely recognized metrics: peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). A PSNR exceeding 40 dB is typically considered to meet the requirements of the HVS [19], while an SSIM value close to 1 indicates minimal differences between the watermarked image and the original image. Twenty groups of watermark sequences of lengths 128-bit, 256-bit, 512-bit, 1024-bit, and 2048-bit were created. We performed the evaluation on the SIPI, CVG-UGR, and BOSSbase datasets. The results of the evaluation are presented in Tables 3 and 4. As observed in these tables, the proposed method attains \(PSNR > 51dB\) and \(SSIM > 0.99\), demonstrating its effectiveness in imperceptibility.
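For reference, PSNR is computed as below for 8-bit images; SSIM was obtained with a standard implementation (e.g., `skimage.metrics.structural_similarity`), and the perturbation used here is synthetic rather than the actual watermark signal.

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    # PSNR in dB; values above 40 dB are generally imperceptible to the HVS
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

img = np.random.randint(0, 256, (512, 512)).astype(float)
marked = img + np.random.normal(0.0, 0.3, img.shape)   # synthetic, watermark-like change
print(psnr(img, marked))
```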
The imperceptibility of the proposed method was further evaluated from a subjective perspective, focusing on the observation of differences between the watermark-containing image and the original image through the HVS. Test images, including Lena, Barbara, Peppers, Airplane, and Couple, were utilized, and a 2048-bit watermark sequence was created. In Fig. 10, we can observe the original image, the watermark-containing image, and the difference image. To make the modifications made during watermark embedding more visible, the pixels of the difference image were magnified by a factor of 30. As indicated in Fig. 10, without the assistance of the 30 × magnification, it is challenging for the HVS to perceive the modification position, highlighting the strong imperceptibility of the proposed method.
The method we have developed ensures that the PSNR exceeds 40 dB for several reasons: a. information in high-entropy regions of an image is usually more chaotic, and the HVS is less sensitive to minor changes in these areas; b. we implemented an adaptive, image-content-based multiplicative embedding method, allowing for the flexible adjustment of \(\alpha\); c. our use of NSST helps filter out detailed information, effectively compensating for the limitations of low-order PZMs in reconstructing image details.
4.1.2 Robustness Analysis
The robustness of the proposed method was evaluated in three main aspects. Firstly, the evaluation used three datasets: SIPI, CVG-UGR, and BOSSbase, and for each dataset, 1024-bit watermark sequences were randomly generated in 20 groups. The robustness is quantified using the BER, which is the ratio of the number of incorrectly decoded bits to the total number of bits. Various attacks were introduced in the test, including JPEG compression (60), Gaussian filtering (7 × 7), Rotation (1.5°), Scaling (1.5), Additive white Gaussian noise (AWGN) (35), Salt and pepper noise (0.07), Gamma correction (0.75), and Translations (H5, V3). Table 5 presents the results of the BER experiments conducted under these multiple attacks. It is worth noting that the BER of the decoder remains consistently below 0.033, even when subjected to these diverse attacks.
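The BER used throughout the robustness tests is simply the fraction of wrongly decoded bits, as in the sketch below.

```python
import numpy as np

def ber(w_true, w_decoded):
    # Bit error rate: fraction of incorrectly decoded watermark bits
    w_true, w_decoded = np.asarray(w_true), np.asarray(w_decoded)
    return float(np.mean(w_true != w_decoded))

print(ber([1, -1, 1, 1], [1, -1, -1, 1]))   # -> 0.25
```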
Second, the proposed method was applied in practice to evaluate its robustness. We embedded a watermark into Lena using the proposed method and then attacked the watermarked Lena with different types of noise. We then used the proposed decoder to decode the watermark from the attacked watermarked Lena. Figure 11 shows the watermark images obtained by decoding. According to Fig. 11, the decoder recovers the watermark image from the attacked watermarked image and ensures that the key information of the watermark remains discernible.
In addition, we evaluated the robustness of our algorithm in real-world scenarios. We randomly selected 500 grayscale images of size 512 × 512 from the SIPI, CVG-UGR, and BOSSbase databases as test images. Five groups of watermark sequences were generated for each of the 128-bit, 512-bit, and 2048-bit lengths. Following our proposed watermark embedding method, these watermark sequences were embedded into the test images. The watermarked images were then uploaded to 8 different social platforms: Weibo, Zhihu, QQ, WeChat, Bilibili, Baidu Tieba, Hupu, and LOFTER, and subsequently downloaded. Afterwards, the watermarks within the images were extracted using our proposed decoding algorithm. The robustness of the method is assessed by the BER. We also calculated the PSNR between the original image and the watermarked image downloaded from each social media platform. Table 6 shows the experimental data for BER and PSNR. According to Table 6, when the embedded watermark sequence is 2048-bit, the PSNR exceeds 40 dB, and the BER is below 0.015. In the past, images uploaded to social media platforms often suffered from compression attacks because the platforms needed to save storage space and speed up image loading. However, with the intensification of competition among social platforms and increasing emphasis on user experience, they no longer compress uploaded images excessively, and some may not compress them at all. This is one of the key reasons our algorithm performed well in this test.
4.1.3 Watermark Capacity
We assessed the performance of the proposed method with respect to watermark capacity, recognizing that embedding a substantial amount of watermark information necessitates extensive modifications to the original image. These modifications inevitably impact the image quality, leading to a conflict between watermark capacity and imperceptibility. To evaluate the method, we embedded watermark sequences of varying lengths into images and examined whether the PSNR of the watermarked images exceeded 40 dB. This assessment allowed us to gauge the performance of the proposed method in terms of watermark capacity. We selected 500 test images and used watermark sequences with lengths of 500-bit, 1000-bit, 1500-bit, 2000-bit, 2500-bit, and 3000-bit, with each watermark sequence created in 20 sets. Figure 12 illustrates the PSNR results at different watermark capacities. It is important to note that in our method, to enhance watermark retention, all magnitude coefficients in a target block are modified when embedding a single watermark bit. As shown in Fig. 12, benefiting from the local decomposition strategy and the NSST-PZMs magnitude domain, our method maintains PSNR > 40 dB even when embedding a 3000-bit watermark sequence.
4.2 Comparisons with State-of-The-Art Schemes
We conducted a comparative analysis of our method with methods [6, 16, 26, 38, 40, 50, 52,53,54], and [56] in terms of robustness and imperceptibility when embedding watermark sequences of the same length. Methods [6, 38, 40, 50, 52, 53], and [54] are watermarking methods based on statistical models, while methods [16, 26], and [56] are watermarking methods based on other techniques.
In our comparative analysis, we first compared our method with [6, 50, 52], and [53] in terms of imperceptibility and robustness. To conduct this comparison, we selected 200 images and generated 20 groups of watermark sequences of lengths 128-bit, 256-bit, 512-bit, 1024-bit, and 2048-bit. The results of the imperceptibility comparison are presented in Table 7. In terms of robustness, we compared our method with [6, 50, 52], and [53] against various attacks, including JPEG (10, 20, 30, 40, 50, 60, 70, 80), AWGN (5, 10, 15, 20, 25, 30, 35), Median filtering and Gaussian filtering (3 × 3, 5 × 5, 7 × 7, 9 × 9), Salt and pepper noise (0.01, 0.03, 0.05, 0.07, 0.1), Gamma correction (2, 1.5, 0.9, 0.75), Rotation (− 2, − 1.5, − 1, − 0.5, 0, 0.5, 1, 1.5, 2), and Scaling (0.8, 0.9, 1.1, 1.2, 1.5). Since the results under scaling attacks are not provided in [50] and [52], to ensure the fairness of the comparison, we only compared with methods [6] and [53] for this attack. We refer to methods [6, 50, 52], and [53] as VB-Gaussian, Beta-exponential, Cauchy-HMT, and VB-Cauchy, respectively. The results of the BER comparison are depicted in Fig. 13. According to the experimental results in Fig. 13 and Table 7, our proposed method outperforms [6, 50, 52], and [53] in terms of robustness and imperceptibility.
According to the results in Table 7, conclusions can also be drawn regarding watermark capacity. The performance in terms of watermark capacity is typically assessed through imperceptibility evaluations such as PSNR. It is important to clarify that watermark capacity refers to the number of watermark bits that can be accommodated by the carrier under conditions satisfying visual perception (PSNR > 40 dB). In other words, a method with a higher PSNR when embedding the same number of watermark bits has better watermark capacity performance, since its PSNR would only drop to the level of the other methods after embedding additional bits. In Table 7, compared to methods [6, 50, 52], and [53], the proposed method demonstrates higher PSNR when embedding the same number of watermark bits, indicating an advantage in watermark capacity.
Additionally, we performed a comparative analysis with the method [54], using a 512-bit watermark sequence created in 50 sets. We randomly selected 100 images from the BOSSbase dataset for this evaluation. Table 8 presents the results of the robustness comparison against [54]. For assessing imperceptibility, we utilized the test images: Couple, Barbara, Lena, and Peppers. The results of the imperceptibility comparison are given in Table 9. Based on the outcomes in Tables 8 and 9, under the same watermark capacity, our method outperforms [54] in both robustness and imperceptibility.
We also conducted a comparative analysis with methods [26, 38, 40], and [56] in terms of imperceptibility and robustness. In addition to ensuring the same test images, we generated 50 sets of watermark sequences for testing. Table 10 provides the results of the imperceptibility comparison, while Table 11 presents the BER comparison results in the absence of attacks. Based on the data in Tables 10 and 11, our method outperforms [26, 38, 40], and [56] in both imperceptibility and robustness.
Fang et al. [16] introduced a pioneering learning-based watermarking algorithm. Their method includes a new two-stage deep neural network and two newly designed templates, and uses deep neural networks to enhance resistance to digital editing attacks. We selected 400 test images from the SIPI, CVG-UGR, and BOSSbase datasets and randomly generated 20 sets of 256-bit watermark sequences. Table 12 displays the comparison of the average BER under various attacks with [16]. As indicated in Table 12, compared to [16], the proposed method shows an advantage in resisting conventional attacks; however, its performance against geometric attacks is inferior to [16]. This discrepancy arises from the characteristics of learning-based watermarking technology, which employs feature learning and data augmentation techniques to train models. The trained models are thus well equipped to handle geometric attacks of varying types and intensities.
The results presented in Tables 7–11, as well as Fig. 13, collectively demonstrate that when embedding watermark sequences of the same length, our method outperforms [6, 26, 38, 40, 50, 52,53,54, 56] in terms of robustness and imperceptibility. According to Table 12, the proposed method outperforms [16] in resisting conventional attacks but is less effective than [16] against geometric attacks. The superior performance of the proposed method can be attributed to several key factors: a. leveraging the robustness of low-order PZMs and the decomposition characteristics of NSST to construct local NSST-PZMs magnitudes; b. the strategy of embedding each watermark bit across an entire block, which helps enhance robustness; c. the design of the two-component SMM based on the characteristics of NSST-PZMs magnitudes, allowing for accurate modeling; d. the closed-form decoder expression based on the SMM and the ML criterion. It is important to note that all scaling results are obtained under the assumption that prior knowledge of the scaling factor is available.
5 Conclusion
In this paper, a watermark decoder that uses the SMM to model the NSST-PZMs magnitudes is proposed. During watermark embedding, the robustness of low-order PZMs and the decomposition characteristics of NSST are utilized to construct local NSST-PZMs magnitudes as the embedding domain. The watermarks are embedded into these local NSST-PZMs magnitudes using a multiplicative rule. A two-component SMM is designed to describe the NSST-PZMs magnitudes based on their "peak-tail" characteristics. During decoding, a closed-form decoder expression is derived based on the SMM and the ML criterion. According to the experimental results, when embedding a 1024-bit watermark, the proposed method achieves PSNR > 58 dB and BER < 0.033 on the SIPI, CVG-UGR, and BOSSbase datasets. When compared with existing decoding methods embedding watermarks of the same capacity, the proposed method demonstrates superior performance in imperceptibility and robustness. We are actively working to mitigate the limitation of requiring prior knowledge when confronted with scaling attacks. In future work, we will extend this method to color images. We will also explore designing a new watermarking method in combination with deep learning techniques.
Data Availability
The datasets of this study are available from http://sipi.usc.edu/database/, http://decsai.ugr.es/cvg/dbimagenes/g512.php, and http://agents.fel.cvut.cz/boss/. The simulated datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
B. Ahmaderaghi, F. Kurugollu, J.M. Rincon et al., Blind image watermark detection algorithm based on discrete shearlet transform using statistical decision theory. IEEE Trans. Comput. Imaging 4(1), 46–59 (2018)
S.B.B. Ahmadi, G. Zhang, M. Rabbani et al., An intelligent and blind dual color image watermarking for authentication and copyright protection. Appl. Intell. 51, 1701–1732 (2021)
M.A. Akhaee, S.S.M. Ebrahim, Scaling-based watermarking with universally optimum decoder. Multimed. Tools Appl. 74, 5995–6018 (2015)
M. Amini, M.O. Ahma, M.N.S. Swamy, Digital watermark extraction in wavelet domain using hidden Markov mode. Multimed. Tools Appl. 76, 3731–3749 (2017)
M. Amini, M.O. Ahma, M.N.S. Swamy, A new locally optimum watermark detection using vector-based hidden Markov model in wavelet domain. Signal Process. 137, 213–222 (2017)
M. Amini, M.O. Ahmad, M.N.S. Swamy, A robust multibit multiplicative watermark decoder using vector-based hidden Markov model in wavelet domain. IEEE Trans. Circuits Syst. Video Technol. 28(2), 402–413 (2018)
M. Amini, H. Sadreazami, M.O. Ahmad et al., A channel-dependent statistical watermark detector for color images. IEEE Trans. Multimed. 21(1), 65–73 (2018)
M. Amirmazlaghani, M. Rezghi, H. Amindavar, A novel robust scaling image watermarking scheme based on Gaussian Mixture Model. Expert Syst. Appl. 42(4), 1960–1971 (2015)
P. Bhinder, N. Jindal, K. Singh, An improved robust image-adaptive watermarking with two watermarks using statistical decoder. Multimed. Tools Appl. 79, 183–217 (2020)
Y. Bian, S. Liang, Locally optimal detection of image watermarks in the wavelet domain using Bessel K form distribution. IEEE Trans. Image Process. 22(6), 2372–2384 (2013)
Z.G. Chen, L.X. Li, H.P. Peng et al., A novel digital watermarking based on general non-negative matrix factorization. IEEE Trans. Multimed. 20(8), 1973–1986 (2018)
L. Dong, Q. Yan, Y. Lv et al., Full band watermarking in DCT domain with Weibull model. Multimed. Tools Appl. 76, 1983–2000 (2017)
G. Easley, D. Labate, W.Q. Lim, Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 25(1), 25–46 (2008)
S. Etemad, M. Amirmazlaghani, A new multiplicative watermark detector in the contourlet domain using t location-scale distribution. Pattern Recognit. 77, 99–112 (2018)
O. Evsutin, K. Dzhanashia, Watermarking schemes for digital images: robustness overview. Signal Process.-Image Commun. 100, 116523 (2022)
H. Fang, D. Chen, Q. Huang et al., Deep template-based watermarking. IEEE Trans. Circuits Syst. Video Technol. 31(4), 1436–1451 (2021)
H.T. Hu, L.Y. Hsu, H.H. Chou, An improved SVD-based blind color image watermarking algorithm with mixed modulation incorporated. Inf. Sci. 519, 161–182 (2020)
W.N. Huan, S. Li, Z.X. Qian et al., Exploring stable coefficients on joint sub-bands for robust video watermarking in DT CWT domain. IEEE Trans. Circuits Syst. Video Technol. 32(4), 1955–1965 (2021)
Y. Huang, B.N. Niu et al., Enhancing image watermarking with adaptive embedding parameter and PSNR guarantee. IEEE Trans. Multimed. 21(10), 2447–2460 (2019)
N.K. Kalantari, S.M. Ahadi, M. Vafadust, A robust image watermarking in the ridgelet domain using universally optimum decoder. IEEE Trans. Circuits Syst. Video Technol. 20(3), 396–406 (2009)
S. Kumar, B.K. Singh, Entropy based spatial domain image watermarking and its performance analysis. Multimed. Tools Appl. 80(6), 9315–9331 (2021)
D.C. Liu, Q.T. Su, Z.H. Yuan et al., A blind color digital image watermarking method based on image correction and eigenvalue decomposition. Signal Process.-Image Commun. 95, 116292 (2021)
X.L. Liu, C.C. Lin, S.M. Yuan, Blind dual watermarking for color images’ authentication and copyright protection. IEEE Trans. Circuits Syst. Video Technol. 28(5), 1047–1055 (2016)
Y. Liu, S.Y. Tang, R. Liu et al., Secure and robust digital image watermarking scheme using logistic and RSA encryption. Expert Syst. Appl. 97, 95–105 (2018)
Y.N. Liu, S.S. Zhang, J. Yang, Color image watermark decoder by modeling quaternion polar harmonic transform with BKF distribution. Signal Process.-Image Commun. 88, 115946 (2020)
D.K. Mahto, A.K. Singh, Firefly optimization-based dual watermarking for colour images with improved capacity. Multimed. Tools Appl. 83(22), 61539–61557 (2022). https://doi.org/10.1007/s11042-022-13795-y
S. Mehraj, S. Mushtaq, S.A. Parah et al., Spatial domain-based robust watermarking framework for cultural images. IEEE Access 10, 117248–117260 (2022)
P.P. Niu, X. Shen, Y.F. Song et al., Locally optimum watermark decoder in NSST domain using RSS-based Cauchy distribution. Multimed. Tools Appl. 79, 33071–33101 (2020)
P.P. Niu, L. Wang, J.L. Tian et al., A statistical color image watermarking scheme using local QPCET and Cauchy-Rayleigh distribution. Circuits Syst. Signal Process. 40, 4516–4545 (2021)
D. Peel, G.J. McLachlan, Robust mixture modelling using the t distribution. Stat. Comput. 10, 339–348 (2000)
D. Rajani, P.R. Kumar, An optimized blind watermarking scheme based on principal component analysis in redundant discrete wavelet domain. Signal Process. 172, 107556 (2020)
L. Rakhmawati, W. Wirawan, S. Suwadi et al., Blind robust image watermarking based on adaptive embedding strength and distribution of quantified coefficients. Expert Syst. Appl. 187, 115906 (2022)
A. Ray, S. Roy, Recent trends in image watermarking techniques for copyright protection: a survey. Int. J. Multimed. Inf. Retr. 9(4), 249–270 (2020)
S. Roy, A.K. Pal, A hybrid domain color image watermarking based on DWT–SVD. Iran. J. Sci. Technol.-Trans. Electr. Eng. 43, 201–217 (2019)
H. Sadreazami, M.O. Ahmad, M.N.S. Swamy, A robust multiplicative watermark detector for color images in sparse domain. IEEE Trans. Circuits Syst. II-Express Briefs 62(12), 1159–1163 (2015)
H. Sadreazami, M.O. Ahmad, M.N.S. Swamy, A study of multiplicative watermark detection in the contourlet domain using alpha-stable distributions. IEEE Trans. Image Process. 23(10), 4348–4360 (2014)
H. Sadreazami, M.O. Ahmad, M.N.S. Swamy, Optimum multiplicative watermark detector in contourlet domain using the normal inverse Gaussian distribution, in 2015 IEEE international symposium on circuits and systems, (2015) pp. 1050–1053.
H. Sadreazami, M.O. Ahmad, M.N.S. Swamy, Multiplicative watermark decoder in contourlet domain using the normal inverse Gaussian distribution. IEEE Trans. Multimed. 18(2), 196–207 (2015)
H. Sadreazami, M. Amini, A robust image watermarking scheme using local statistical distribution in the contourlet domain. IEEE Trans. Circuits Syst. II-Express Briefs 66(1), 151–155 (2018)
A. Shaik, V. Masilamani, A robust multiplicative watermarking technique for digital images in curvelet domain using normal inverse Gaussian distribution. Multimed. Tools Appl. 82(6), 9223–9241 (2023)
R.S. Sinha, A. Basu, A. Chattopadhyay, On the implementation of a copyright protection scheme using digital image watermarking. Multimed. Tools Appl. 79, 13125–13138 (2020)
Q.T. Su, B.J. Chen, Robust color image watermarking technique in the spatial domain. Soft. Comput. 22, 91–106 (2018)
Q.T. Su, D.C. Liu, Y.H. Sun, A robust adaptive blind color image watermarking for resisting geometric attacks. Inf. Sci. 606, 194–212 (2022)
Q.T. Su, D.C. Liu, Z.H. Yuan et al., New rapid and robust color image watermarking technique in spatial domain. IEEE Access 7, 30398–30409 (2019)
W.W. Sun, J.T. Zhou, Y.M. Li et al., Robust high-capacity watermarking over online social network shared images. IEEE Trans. Circuits Syst. Video Technol. 31(3), 1208–1221 (2020)
C.H. Teh, R.T. Chin, On image analysis by the methods of moments. IEEE Trans. Pattern Anal. Mach. Intell. 10(4), 496–513 (1988)
R. Thanki, A. Kothari, D. Trivedi, Hybrid and blind watermarking scheme in DCuT–RDWT domain. J. Inf. Secur. Appl. 46, 231–249 (2019)
W.B. Wan, J. Wang, Y.M. Zhang et al., A comprehensive survey on robust image watermarking. Neurocomputing 488, 226–247 (2022)
B.W. Wang, J.W. Shi, W.S. Wang et al., Image copyright protection based on blockchain and zero-watermark. IEEE Trans. Netw. Sci. Eng. 9(4), 2188–2199 (2022)
X.Y. Wang, J. Tian, J.L. Tian et al., Statistical image watermarking using local RHFMs magnitudes and beta exponential distribution. J. Vis. Commun. Image Represent. 77, 103123 (2021)
X.Y. Wang, T.T. Wen, X. Shen et al., A new watermark decoder in DNST domain using singular values and Gaussian-Cauchy mixture-based vector HMT. Inf. Sci. 535, 81–106 (2020)
X.Y. Wang, T.T. Wen, L. Wang et al., Contourlet domain locally optimum image watermark decoder using Cauchy mixtures based vector HMT model. Signal Process.-Image Commun. 88, 115972 (2020)
X.Y. Wang, S.Y. Zhang, L. Wang et al., Locally optimum image watermark decoder by modeling NSCT domain difference coefficients with vector based Cauchy distribution. J. Vis. Commun. Image Represent. 62, 309–329 (2019)
Z.Q. Xia, C.P. Wang, Y.W. Li et al., Geometrical attacks resilient statistical watermark decoder using polar harmonic Fourier moments. J. Frankl. Inst.-Eng. Appl. Math. 360(7), 4493–4518 (2023)
Z. Zainol, J.S. Teh, M. Alawida et al., Hybrid SVD-based image watermarking schemes: a review. IEEE Access 9, 32931–32968 (2021)
A. Zear, P.K. Singh, Secure and robust color image dual watermarking based on LWT-DCT-SVD. Multimed. Tools Appl. 81(19), 26721–26738 (2022)
C. Zhang, L.L. Cheng, Z.D. Qiu et al., Multipurpose watermarking based on multiscale curvelet transform. IEEE Trans. Inf. Forensic Secur. 3(4), 611–619 (2008)
L. Zhang, D. Wei, Image watermarking based on matrix decomposition and gyrator transform in invariant integer wavelet domain. Signal Process. 169, 107421 (2020)
P.J. Zheng, Y.H. Zhang, A robust image watermarking scheme in hybrid transform domains resisting to rotation attacks. Multimed. Tools Appl. 79(25), 18343–18365 (2020)
Acknowledgements
The authors gratefully acknowledge the anonymous reviewers who read the manuscript and provided many helpful suggestions. We sincerely acknowledge the guidance provided by Prof. XiangYang Wang of Liaoning Normal University on the manuscript.
Funding
This research was funded by the Young Science and Technology Star Project of Dalian, China [grant number 2022RQ092] and the Basic Scientific Research Project for Colleges and Universities from Department of Education of Liaoning Province, China [grant number JYTQN2023480].
Ethics declarations
Conflict of interest
The authors declare no conflict of interest regarding this research work.