1 Introduction
In imaging science, acquired images usually suffer from undesirable degradation caused by limitations of the external environment, electronic equipment and human factors. Among these, contamination by Poisson noise is a widely studied issue, which commonly occurs in medical imaging (Sarder and Nehorai, 2006), single-photon emission computed tomography (Bardsley and Goldes, 2011) and various other applications. Therefore, Poisson noise removal is an important and urgent task.
To solve the above problem, Setzer et al. (2010) proposed a classical Poissonian image restoration model based on total variation (TV) regularization, namely
(1)
\[ \underset{u}{\min }\| \nabla u{\| _{1}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
where ∇ denotes the gradient operator, λ is a positive tuning parameter that controls the data fidelity term, K denotes a nonnegative linear compact operator, and u and f represent the original image and the positive, bounded observed version, respectively. To optimize the minimization problem (1), the studies in (Setzer et al., 2010; Liu and Huang, 2012; Pham et al., 2020) adopted the alternating split Bregman iteration to avoid an inner loop. Besides, several effective numerical algorithms have been proposed in recent years, such as the scaled gradient projection method (Bonettini et al., 2009) and the alternating direction algorithm (Figueiredo and Bioucas-Dias, 2010).
Generally speaking, TV-based models have an edge-preserving property. That being said, the TV penalty often over-divides homogeneous regions, making them appear piecewise constant. To compensate for this drawback, many efficient remedies have emerged, such as the high-order TV (HOTV; Chan et al., 2000; Lysaker et al., 2003), nonlocal TV (Gilboa and Osher, 2008), and total generalized variation (Bredies et al., 2010; Liu, 2016, 2021) regularized schemes. Devoted to removing Poisson noise, the HOTV-based model formally reads as
(2)
\[ \underset{u}{\min }{\big\| {\nabla ^{2}}u\big\| _{1}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
with ${\nabla ^{2}}$ denoting the second-order gradient operator. The merit of the high-order strategy is that it eliminates the staircase effect caused by the TV model. Lately, another innovation for conquering this unexpected distortion is the fractional-order TV (FOTV; Chowdhury et al., 2020). Unfortunately, the images resulting from the aforesaid techniques sometimes suffer from over-smoothed contours or residual noise.
As opposed to the convex models mentioned above, Nikolova et al. (2010) illustrated that nonconvex regularizers are superior to convex ones in preserving shapes. As a matter of fact, nonconvex TV (Nikolova et al., 2010; Chartrand, 2007) possesses some excellent features of TV, but it may cause even more serious staircase artifacts. In our opinion, nonconvex HOTV (Adam and Paramesran, 2019; Oh et al., 2013) is comparatively more appropriate for processing degraded images. Recently, the work (Lv et al., 2016) considered TV with overlapping group sparsity (OGS-TV) for deblurring Poisson noisy images. The experiments there show that this approach can alleviate blocky aspects to a certain extent; however, the staircase effect is still present in the recovered images.
For the purpose of simultaneously overcoming the staircase effect and retaining sharp jump discontinuities, as well as improving the accuracy of image restoration, this paper focuses on a novel hybrid-regularizer strategy for image deblurring under Poisson noise. The proposed model closely incorporates the advantages of the OGS-TV and nonconvex HOTV regularizers. Mathematically, the resulting optimization model is formulated as follows:
(3)
\[ \underset{u}{\min }\phi (\nabla u)+\alpha {\big\| {\nabla ^{2}}u\big\| _{p}^{p}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
where $\phi (\cdot )$ is defined by the overlapping group sparsity, and $\| \cdot {\| _{p}^{p}}$ is a nonconvex ${\ell _{p}}$ quasi-norm with $0<p<1$. The parameter $\alpha >0$ controls the tradeoff between the first and second regularizers.
The main ideas and contributions of the current article are summarized as follows. First, a novel hybrid regularization model, which combines the strengths of OGS-TV and nonconvex HOTV, is explored for deblurring Poissonian images. The inclusion of two regularizers contributes to accurate restoration, preserving sharp edges while suppressing staircase artifacts. Second, to handle the resulting nonconvex minimization problem, we design an efficient alternating direction method of multipliers, integrating it with the popular variable splitting technique, the majorization-minimization (MM) method and the iteratively reweighted ${\ell _{1}}$ algorithm. Lastly, numerous numerical experiments, in comparison with several state-of-the-art denoising techniques, are presented to illustrate the superior performance of the newly developed scheme.
The remainder of this paper is organized as follows. In Section 2, we briefly present some mathematical notation and necessary definitions, as well as the algorithms related to our research. The detailed solution steps of the proposed minimization problem are described in Section 3, together with the employed numerical techniques and a remark on computational complexity. Subsequently, in Section 4 we conduct several numerical experiments to compare different models and demonstrate our improvements. Finally, Section 5 concludes this paper.
2 Preliminaries
In this section, we outline some necessary background knowledge, which is tailored to the subsequent numerical computations.
2.1 Notation
Suppose that $\Omega \subset {\mathbb{R}^{n}}$ is an open, bounded domain. Then the TV of $u\in {L^{1}}(\Omega )$ is formulated as
(4)
\[ \mathrm{TV}(u)={\int _{\Omega }}|\nabla u|\mathrm{d}x=\sup \bigg\{{\int _{\Omega }}u\mathrm{div}(\psi )\mathrm{d}x:\psi \in {C_{c}^{1}}\big(\Omega ;{\mathbb{R}^{n}}\big),|\psi |\leqslant 1\bigg\},\]
with div being the divergence operator. As for the high-order version, it takes the form of
(5)
\[ \mathrm{HTV}(u)={\int _{\Omega }}\big|{\nabla ^{2}}u\big|\mathrm{d}x=\sup \Bigg\{{\int _{\Omega }}{\sum \limits_{i,j=1}^{n}}u{\partial _{j}}{\partial _{i}}{\psi ^{ij}}\mathrm{d}x:\psi \in {C_{c}^{2}}\big(\Omega ;{\mathbb{R}^{n\times n}}\big),|\psi |\leqslant 1\Bigg\},\]
where $|\psi (x)|=\sqrt{{\textstyle\sum _{i,j=1}^{n}}{({\psi ^{ij}})^{2}}}$, and ${C_{c}^{2}}(\Omega ;{\mathbb{R}^{n\times n}})$ is the set of twice continuously differentiable matrix-valued functions with compact support in Ω.
We are now in a position to give the discrete setting. For notational convenience, we assume that the size of an image is ${n_{1}}\times {n_{2}}$, and define the function space $U={C_{c}^{2}}(\Omega ;{\mathbb{R}^{n\times n}})$. Following the finite difference method, the first-order forward and backward difference operators are respectively characterized by
\[\begin{aligned}{}& {\big({\partial _{x}^{+}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i+1,j}}-{u_{i,j}},\hspace{1em}& 1\leqslant i<{n_{1}},\\ {} 0,\hspace{1em}& i={n_{1}},\end{array}\right.\\ {} & {\big({\partial _{y}^{+}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i,j+1}}-{u_{i,j}},\hspace{1em}& 1\leqslant j<{n_{2}},\\ {} 0,\hspace{1em}& j={n_{2}},\end{array}\right.\\ {} & {\big({\partial _{x}^{-}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i,j}}-{u_{i-1,j}},\hspace{1em}& 1<i<{n_{1}},\\ {} {u_{1,j}},\hspace{1em}& i=1,\\ {} -{u_{{n_{1}}-1,j}},\hspace{1em}& i={n_{1}},\end{array}\right.\\ {} & {\big({\partial _{y}^{-}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i,j}}-{u_{i,j-1}},\hspace{1em}& 1<j<{n_{2}},\\ {} {u_{i,1}},\hspace{1em}& j=1,\\ {} -{u_{i,{n_{2}}-1}},\hspace{1em}& j={n_{2}}.\end{array}\right.\end{aligned}\]
Consequently, the discrete gradient operator enjoys the expression
(6)
\[ {(\nabla u)_{i,j}}={\big({\big({\partial _{x}^{+}}u\big)_{i,j}},{\big({\partial _{y}^{+}}u\big)_{i,j}}\big)^{T}},\]
and the second-order counterpart is defined by
(7)
\[ {\big({\nabla ^{2}}u\big)_{i,j}}=\left(\begin{array}{c@{\hskip4.0pt}c}{\partial _{x}^{-}}{({\partial _{x}^{+}}u)_{i,j}}& {\partial _{x}^{+}}{({\partial _{y}^{+}}u)_{i,j}}\\ {} {\partial _{y}^{-}}{({\partial _{x}^{-}}u)_{i,j}}& {\partial _{y}^{-}}{({\partial _{y}^{+}}u)_{i,j}}\end{array}\right).\]
Furthermore, the discrete forms of the first-order and second-order divergence operators on the space U are described as
\[\begin{aligned}{}& {(\mathrm{div}u)_{i,j}}={\big({\partial _{x}^{-}}u\big)_{i,j}}+{\big({\partial _{y}^{-}}u\big)_{i,j}},\\ {} & {\big({\mathrm{div}^{2}}u\big)_{i,j}}={\partial _{x}^{+}}{\big({\partial _{x}^{-}}u\big)_{i,j}}+{\partial _{x}^{-}}{\big({\partial _{y}^{-}}u\big)_{i,j}}+{\partial _{y}^{+}}{\big({\partial _{x}^{+}}u\big)_{i,j}}+{\partial _{y}^{+}}{\big({\partial _{y}^{-}}u\big)_{i,j}}.\end{aligned}\]
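To make the discrete setting concrete, the following NumPy sketch implements the forward and backward difference operators with the boundary conditions stated above, together with the discrete gradient (6) and the first-order divergence; the function names are our own illustrative choices, and only the first-order operators are shown.

```python
import numpy as np

def dx_plus(u):
    # Forward difference in x: u_{i+1,j} - u_{i,j}, zero on the last row.
    d = np.zeros_like(u)
    d[:-1, :] = u[1:, :] - u[:-1, :]
    return d

def dy_plus(u):
    # Forward difference in y: u_{i,j+1} - u_{i,j}, zero on the last column.
    d = np.zeros_like(u)
    d[:, :-1] = u[:, 1:] - u[:, :-1]
    return d

def dx_minus(u):
    # Backward difference in x with the boundary rules of Section 2.1.
    d = np.empty_like(u)
    d[1:-1, :] = u[1:-1, :] - u[:-2, :]
    d[0, :] = u[0, :]          # i = 1
    d[-1, :] = -u[-2, :]       # i = n1
    return d

def dy_minus(u):
    # Backward difference in y with the boundary rules of Section 2.1.
    d = np.empty_like(u)
    d[:, 1:-1] = u[:, 1:-1] - u[:, :-2]
    d[:, 0] = u[:, 0]          # j = 1
    d[:, -1] = -u[:, -2]       # j = n2
    return d

def grad(u):
    # Discrete gradient (6): a pair of difference images.
    return dx_plus(u), dy_plus(u)

def div(p, q):
    # First-order divergence; the negative adjoint of grad under these rules.
    return dx_minus(p) + dy_minus(q)
```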
2.2 OGS-TV Method
As stated in Selesnick and Chen (2013), an L-point group of the vector s is represented as
(8)
\[ {s_{i,L}}=({s_{i}},{s_{i+1}},\dots ,{s_{i+L-1}})\in {\mathbb{R}^{L}},\]
where L denotes the size of the group, and ${s_{i,L}}$ is a block composed of L contiguous components of s beginning at index i. On the basis of this notation, the work (Peyre and Fadili, 2011) puts forward a general 1D group sparsity regularizer
(9)
\[ \varphi (s)=\sum \limits_{i}\| {s_{i,L}}{\| _{2}}=\sum \limits_{i}{\Bigg({\sum \limits_{l=0}^{L-1}}|{s_{i+l}}{|^{2}}\Bigg)^{1/2}}.\]
Note that if $L=1$, $\varphi (s)$ in (9) degenerates into the commonly used 1D TV functional, while for $L>1$, $\varphi (s)$ promotes block sparsity (Bayram, 2011).
Regarding the case of a 2D image $u\in {\mathbb{R}^{m\times 1}}$ with $m={n_{1}}\times {n_{2}}$, the associated $L\times L$-point group can be defined as the following square matrix. That is,
(10)
\[ {\tilde{u}_{i,j,L}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{u_{i-{m_{1}},j-{m_{1}}}}\hspace{1em}& {u_{i-{m_{1}},j-{m_{1}}+1}}\hspace{1em}& \cdots \hspace{1em}& {u_{i-{m_{1}},j+{m_{2}}}}\\ {} {u_{i-{m_{1}}+1,j-{m_{1}}}}\hspace{1em}& {u_{i-{m_{1}}+1,j-{m_{1}}+1}}\hspace{1em}& \cdots \hspace{1em}& {u_{i-{m_{1}}+1,j+{m_{2}}}}\\ {} \vdots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \\ {} {u_{i+{m_{2}},j-{m_{1}}}}\hspace{1em}& {u_{i+{m_{2}},j-{m_{1}}+1}}\hspace{1em}& \cdots \hspace{1em}& {u_{i+{m_{2}},j+{m_{2}}}}\end{array}\right],\]
where ${m_{1}}=\lceil \frac{L-1}{2}\rfloor $ and ${m_{2}}=\lceil \frac{L}{2}\rfloor $, with $\lceil \cdot \rfloor $ being the rounding operation. By stacking the L columns of the $L\times L$ matrix shown in (10), we obtain a vector ${u_{i,j,L}}$ such that ${u_{i,j,L}}={\tilde{u}_{i,j,L}}(:)$. Thus, the OGS functional in a 2D arrangement reads as
(11)
\[ \varphi (u)={\sum \limits_{i=1}^{{n_{1}}}}{\sum \limits_{j=1}^{{n_{2}}}}\| {u_{i,j,L}}{\| _{2}}.\]
Consequently, the function $\phi (\nabla u)$ acting as a regularizer in (3) can be defined as
(12)
\[ \phi (\nabla u)=\varphi \big({\partial _{x}^{+}}u\big)+\varphi \big({\partial _{y}^{+}}u\big).\]
It follows from the above formula that if $L=1$, $\phi (\nabla u)$ is the anisotropic version of the TV functional. Otherwise, when $L>1$, this regularizer is referred to as the OGS-TV.
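As an illustration, the 2D OGS functional (11) can be evaluated by sliding an $L\times L$ window over the image and accumulating the group norms. The following NumPy sketch mirrors this definition; the names are our own, and we assume the rounding convention ${m_{1}}=\lfloor (L-1)/2\rfloor $, ${m_{2}}=\lfloor L/2\rfloor $ so that each group has $L\times L$ entries.

```python
import numpy as np

def ogs_penalty(u, L=3):
    """2D OGS functional (11): sum of the l2 norms of all overlapping L-x-L groups."""
    m1, m2 = (L - 1) // 2, L // 2              # assumed rounding convention
    n1, n2 = u.shape
    up = np.pad(u, ((m1, m2), (m1, m2)))       # zero-pad so border groups exist
    total = 0.0
    for i in range(n1):
        for j in range(n2):
            group = up[i:i + L, j:j + L]       # the block \tilde{u}_{i,j,L} in (10)
            total += np.linalg.norm(group)     # ||u_{i,j,L}||_2
    return total

def ogs_tv(u, L=3):
    """OGS-TV regularizer (12): phi(grad u) = varphi(dx+ u) + varphi(dy+ u)."""
    ux = np.diff(u, axis=0, append=u[-1:, :])  # forward differences, zero at the border
    uy = np.diff(u, axis=1, append=u[:, -1:])
    return ogs_penalty(ux, L) + ogs_penalty(uy, L)
```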
2.3 MM Method
The MM method (Hunter and Lange, 2004; Figueiredo et al., 2007) is frequently applied to cope with minimization problems. Rather than directly handling a complex cost function $P(u)$, this technique achieves computational efficiency by solving a sequence of more tractable surrogate problems $Q(u,{u^{k}})$ $(k=0,1,2,\dots )$. In fact, an MM iterative approach that minimizes $P(u)$ can be set as the model
(13)
\[ {u^{k+1}}=\mathrm{arg}\underset{u}{\min }Q\big(u,{u^{k}}\big),\hspace{1em}k=0,1,2,\dots ,\]
which requires that $Q(u,{u^{k}})$ is not less than $P(u)$, and that $Q(u,u)$ equals $P(u)$.
For example, consider an optimization scheme of the form
(14)
\[ \underset{u}{\min }\bigg\{P(u)=\frac{1}{2}\| u-{u_{0}}{\| _{2}^{2}}+\lambda \varphi (u)\bigg\},\]
with $\lambda >0$, and φ denoting the penalty function given by (11).
To obtain an effective strategy for solving the minimization (14) by the MM method, we need to find a majorizer of $P(u)$. Owing to the quadratic form of the first term in (14), it suffices to find a majorizer of $\varphi (u)$. The arithmetic–geometric mean inequality gives
(15)
\[ \frac{1}{2\| t{\| _{2}}}\| u{\| _{2}^{2}}+\frac{1}{2}\| t{\| _{2}}\geqslant \| u{\| _{2}},\]
where $t\ne 0$. We remark that equality in (15) holds if and only if $t=u$. Substituting each group of $\varphi (u)$ into (15), and then summing up, yields a majorizer of $\varphi (u)$ as
(16)
\[ R(u,t)=\frac{1}{2}{\sum \limits_{i=1}^{{n_{1}}}}{\sum \limits_{j=1}^{{n_{2}}}}\bigg[\frac{1}{\| {t_{i,j,L}}{\| _{2}}}\| {u_{i,j,L}}{\| _{2}^{2}}+\| {t_{i,j,L}}{\| _{2}}\bigg],\]
with $R(u,t)\geqslant \varphi (u)$ and $R(t,t)=\varphi (t)$, where (16) requires $\| {t_{i,j,L}}{\| _{2}}\ne 0$ for all i, j. By an uncomplicated calculation, it can be reformulated as
(17)
\[ R(u,t)=\frac{1}{2}{u^{T}}\Lambda (t)u+C,\]
where C is a constant depending only on t, and each element of the diagonal matrix $\Lambda (t)$ is expressed by
(18)
\[ {\big[\Lambda (t)\big]_{d,d}}={\sum \limits_{i=-{m_{1}}}^{{m_{2}}}}{\sum \limits_{j=-{m_{1}}}^{{m_{2}}}}{\Bigg(\hspace{0.1667em}{\sum \limits_{{k_{1}}=-{m_{1}}}^{{m_{2}}}}{\sum \limits_{{k_{2}}=-{m_{1}}}^{{m_{2}}}}|{t_{d-i+{k_{1}},d-j+{k_{2}}}}{|^{2}}\Bigg)^{-1/2}},\]
with the subscript $d=1,2,\dots ,m$. So as to minimize $P(u)$, an iterative algorithm using the MM approach is thus defined by
(19)
\[ {u^{k+1}}=\mathrm{arg}\underset{u}{\min }\bigg\{\frac{1}{2}\| u-{u_{0}}{\| _{2}^{2}}+\frac{\lambda }{2}{u^{T}}\Lambda \big({u^{k}}\big)u\bigg\}={\big(I+\lambda \Lambda \big({u^{k}}\big)\big)^{-1}}{u_{0}}.\]
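As a minimal illustration of the MM iteration (19): the diagonal of $\Lambda (t)$ in (18) amounts to two $L\times L$ sliding-window sums, realized below with scipy.ndimage.uniform_filter. The function names and the small ε guard are our own choices, and odd L is assumed so that the window is centred.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lambda_diag(t, L=3, eps=1e-10):
    """Diagonal of Lambda(t) in (18) via two L-x-L window sums."""
    # Inner sum over (k1, k2): the energy of each group. uniform_filter returns
    # the window mean, so multiply by L*L to recover the sum.
    energy = uniform_filter(np.abs(t) ** 2, size=L, mode="constant") * (L * L)
    # Outer sum over (i, j): accumulate inverse group norms around each pixel.
    return uniform_filter((energy + eps) ** (-0.5), size=L, mode="constant") * (L * L)

def mm_denoise(u0, lam, L=3, iters=5):
    """MM iteration (19) for problem (14); five inner iterations are the setting
    used later in the experiments of Section 4."""
    u = u0.copy()
    for _ in range(iters):
        # Lambda is diagonal, so (I + lam*Lambda)^{-1} u0 is a pointwise division.
        u = u0 / (1.0 + lam * lambda_diag(u, L))
    return u
```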
3 Numerical Algorithm
Our task in this section is to solve the proposed hybrid model (3), which combines the OGS-TV and nonconvex HOTV functionals, for image restoration.
Among the typical algorithms that we are aware of for ${\ell _{p}}$-norm based nonconvex optimization, the iteratively reweighted ${\ell _{1}}$ algorithm (Candes et al., 2008; Ochs et al., 2015) is generally an ideal treatment. Applying this method to the minimization (3) reduces it to the following approximation
(21)
\[ \underset{u}{\min }\phi (\nabla u)+\alpha {\omega ^{k}}{\big\| {\nabla ^{2}}u\big\| _{1}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
where the weight ω is computed at the k-th iteration by ${\omega ^{k}}=p/{(|{\nabla ^{2}}{u^{k}}|+\epsilon )^{1-p}}$, with ϵ denoting a small positive number that prevents the denominator from vanishing.
To deal with the non-linearity and non-differentiability of the constructed model, we then resort to the classical variable splitting technique. The objective function (21) can thus be transformed into a constrained optimization problem by introducing three auxiliary variables v, w and z. Namely,
(22)
\[ \underset{v,w,z,u}{\min }\phi (v)+\alpha {\omega ^{k}}\| w{\| _{1}}+\lambda \langle 1,z-f\log z\rangle \hspace{1em}\text{s.t.}\hspace{2.5pt}v=\nabla u,\hspace{0.2778em}w={\nabla ^{2}}u,\hspace{0.2778em}z=Ku.\]
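For concreteness, the reweighting step in (21) is a simple elementwise operation; a hedged sketch follows, where the value of ε and the names are illustrative.

```python
import numpy as np

def irl1_weight(hess_u, p=0.1, eps=1e-8):
    """IRL1 weight of (21): w^k = p / (|grad^2 u^k| + eps)^(1 - p).

    Computed elementwise on the second-order differences of u^k; large
    curvature receives a small weight, which mimics the nonconvex l_p penalty."""
    return p / (np.abs(hess_u) + eps) ** (1.0 - p)
```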
The above minimization problem can be effectively solved by the alternating direction method of multipliers (Gabay and Mercier, 1976; Bertsekas and Tsitsiklis, 1997). Consequently, we define the corresponding augmented Lagrangian function
(23)
\[\begin{aligned}{}& \mathcal{L}(v,w,z,u;{\mu _{1}},{\mu _{2}},{\mu _{3}})\\ {} & \hspace{1em}=\phi (v)+\alpha {\omega ^{k}}\| w{\| _{1}}+\lambda \langle 1,z-f\log z\rangle \\ {} & \hspace{2em}-\langle {\mu _{1}},v-\nabla u\rangle +\frac{{\gamma _{1}}}{2}\| v-\nabla u{\| _{2}^{2}}-\big\langle {\mu _{2}},w-{\nabla ^{2}}u\big\rangle \\ {} & \hspace{2em}+\frac{{\gamma _{2}}}{2}\big\| w-{\nabla ^{2}}u{\big\| _{2}^{2}}-\langle {\mu _{3}},z-Ku\rangle +\frac{{\gamma _{3}}}{2}\| z-Ku{\| _{2}^{2}},\end{aligned}\]
where ${\mu _{1}},{\mu _{2}},{\mu _{3}}$ stand for the Lagrange multipliers, and the three positive penalty parameters ${\gamma _{1}},{\gamma _{2}},{\gamma _{3}}$ weight the quadratic penalty terms.
Obtaining the optimal solution of (22) is equivalent to seeking a saddle point of $\mathcal{L}$. Since it is technically difficult to solve for all variables simultaneously, this is done by alternately minimizing $\mathcal{L}(v,w,z,u;{\mu _{1}},{\mu _{2}},{\mu _{3}})$ with respect to v, w, z and u,
(24)
\[\begin{aligned}{}& \big({v^{k+1}},{w^{k+1}},{z^{k+1}},{u^{k+1}}\big)\\ {} & \hspace{1em}=\mathrm{arg}\underset{v,w,z,u}{\min }\phi (v)+\alpha {\omega ^{k}}\| w{\| _{1}}+\lambda \langle 1,z-f\log z\rangle \\ {} & \hspace{2em}-\big\langle {\mu _{1}^{k}},v-\nabla u\big\rangle +\frac{{\gamma _{1}}}{2}\| v-\nabla u{\| _{2}^{2}}-\big\langle {\mu _{2}^{k}},w-{\nabla ^{2}}u\big\rangle \\ {} & \hspace{2em}+\frac{{\gamma _{2}}}{2}\big\| w-{\nabla ^{2}}u{\big\| _{2}^{2}}-\big\langle {\mu _{3}^{k}},z-Ku\big\rangle +\frac{{\gamma _{3}}}{2}\| z-Ku{\| _{2}^{2}},\end{aligned}\]
followed by updating the Lagrange multipliers ${\mu _{1}^{k+1}}$, ${\mu _{2}^{k+1}}$ and ${\mu _{3}^{k+1}}$ via
(25)
\[ \left\{\begin{array}{l}{\mu _{1}^{k+1}}={\mu _{1}^{k}}+{\gamma _{1}}\big(\nabla {u^{k+1}}-{v^{k+1}}\big),\\ {} {\mu _{2}^{k+1}}={\mu _{2}^{k}}+{\gamma _{2}}\big({\nabla ^{2}}{u^{k+1}}-{w^{k+1}}\big),\\ {} {\mu _{3}^{k+1}}={\mu _{3}^{k}}+{\gamma _{3}}\big(K{u^{k+1}}-{z^{k+1}}\big).\end{array}\right.\]
More precisely, the optimization problem (24) can be effectively handled through the decoupled decomposition
(26)
\[ \left\{\begin{array}{l}{v^{k+1}}=\mathrm{arg}\underset{v}{\min }\phi (v)+\frac{{\gamma _{1}}}{2}{\big\| v-\nabla {u^{k}}\big\| _{2}^{2}}-\big\langle {\mu _{1}^{k}},v-\nabla {u^{k}}\big\rangle ,\\ {} {w^{k+1}}=\mathrm{arg}\underset{w}{\min }\alpha {\omega ^{k}}\| w{\| _{1}}+\frac{{\gamma _{2}}}{2}{\big\| w-{\nabla ^{2}}{u^{k}}\big\| _{2}^{2}}-\big\langle {\mu _{2}^{k}},w-{\nabla ^{2}}{u^{k}}\big\rangle ,\\ {} {z^{k+1}}=\mathrm{arg}\underset{z}{\min }\lambda \langle 1,z-f\log z\rangle +\frac{{\gamma _{3}}}{2}{\big\| z-K{u^{k}}\big\| _{2}^{2}}-\big\langle {\mu _{3}^{k}},z-K{u^{k}}\big\rangle ,\\ {} {u^{k+1}}=\mathrm{arg}\underset{u}{\min }\frac{{\gamma _{1}}}{2}{\big\| {v^{k+1}}-\nabla u\big\| _{2}^{2}}-\big\langle {\mu _{1}^{k}},{v^{k+1}}-\nabla u\big\rangle +\frac{{\gamma _{2}}}{2}{\big\| {w^{k+1}}-{\nabla ^{2}}u\big\| _{2}^{2}}\\ {} \hspace{34.14322pt}-\big\langle {\mu _{2}^{k}},{w^{k+1}}-{\nabla ^{2}}u\big\rangle +\displaystyle \frac{{\gamma _{3}}}{2}{\big\| {z^{k+1}}-Ku\big\| _{2}^{2}}-\big\langle {\mu _{3}^{k}},{z^{k+1}}-Ku\big\rangle .\end{array}\right.\]
In what follows, we settle each subproblem in detail one by one. At first, fixing the variables $\{w,z,u\}$, the minimization of the energy $\mathcal{L}$ with respect to v is of the form
(27)
\[ {v^{k+1}}=\mathrm{arg}\underset{v}{\min }\phi (v)+\frac{{\gamma _{1}}}{2}{\bigg\| v-\bigg(\nabla {u^{k}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg)\bigg\| _{2}^{2}}.\]
This, together with the OGS and MM techniques described in Section 2, leads to the converted formulation
(28)
\[ {v^{k+1}}=\mathrm{arg}\underset{v}{\min }\frac{{\gamma _{1}}}{2}{\bigg\| v-\bigg(\nabla {u^{k}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg)\bigg\| _{2}^{2}}+R\big(v,{v^{k}}\big).\]
Based on the preliminaries described above, the solution of v is then given by
(29)
\[ {v^{k+1}}={\bigg(I+\frac{1}{{\gamma _{1}}}\Lambda \big({v^{k}}\big)\bigg)^{-1}}\bigg(\nabla {u^{k}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg),\]
which is applied for a few inner iterations in the manner of (19).
Subsequently, we aim to figure out the subproblem pertaining to w. The second expression in (26) equivalently admits the following more concise formula
(30)
\[ {w^{k+1}}=\mathrm{arg}\underset{w}{\min }\alpha {\omega ^{k}}\| w{\| _{1}}+\frac{{\gamma _{2}}}{2}{\bigg\| w-\bigg({\nabla ^{2}}{u^{k}}+\frac{{\mu _{2}^{k}}}{{\gamma _{2}}}\bigg)\bigg\| _{2}^{2}}.\]
For notational simplicity, we denote ${X^{k}}={\nabla ^{2}}{u^{k}}+{\mu _{2}^{k}}/{\gamma _{2}}$. The solution of (30) is given explicitly by soft thresholding with the shrink operator. Concretely, it takes the form
(31)
\[ {w^{k+1}}=\mathrm{shrink}\bigg({X^{k}},\frac{\alpha {\omega ^{k}}}{{\gamma _{2}}}\bigg),\]
with the 1D shrinkage formula defined by
(32)
\[ \mathrm{shrink}(x,\xi )=\mathrm{sign}(x)\max \big(|x|-\xi ,0\big).\]
Thereafter, the concerned subproblem with respect to z corresponds to the equivalent minimization problem
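A minimal sketch of the w-update (31)–(32) follows; the array names are illustrative.

```python
import numpy as np

def shrink(x, xi):
    """Elementwise soft-thresholding operator (32); xi may be a scalar
    or an array of thresholds (here alpha * w^k / gamma2)."""
    return np.sign(x) * np.maximum(np.abs(x) - xi, 0.0)

# Usage for (31), assuming hess_u stacks the second-order differences of u^k:
#   X = hess_u + mu2 / gamma2
#   w_next = shrink(X, alpha * weight / gamma2)
```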
(33)
\[ {z^{k+1}}=\mathrm{arg}\underset{z}{\min }\lambda \langle 1,z-f\log z\rangle +\frac{{\gamma _{3}}}{2}{\bigg\| z-\bigg(K{u^{k}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg)\bigg\| _{2}^{2}}.\]
According to the variational method, this results in the first-order optimality condition
(34)
\[ \lambda \bigg(1-\frac{f}{z}\bigg)+{\gamma _{3}}\bigg(z-K{u^{k}}-\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg)=0.\]
More explicitly, by the root formula for quadratic equations, we acquire the following closed-form solution
(35)
\[ {z^{k+1}}=\frac{1}{2}\Bigg(K{u^{k}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}-\frac{\lambda }{{\gamma _{3}}}+\sqrt{{\bigg(K{u^{k}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}-\frac{\lambda }{{\gamma _{3}}}\bigg)^{2}}+\frac{4\lambda f}{{\gamma _{3}}}}\Bigg),\]
where all operations are understood componentwise. The last step of the alternating method is to minimize with respect to u. Deducing from the fourth equation in (26), we thus obtain
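The closed form (35) translates directly into code; a small sketch with illustrative names:

```python
import numpy as np

def z_update(Ku, mu3, f, lam, gamma3):
    """z-update (35): the positive root of the quadratic optimality condition (34),
    applied componentwise."""
    s = Ku + mu3 / gamma3 - lam / gamma3
    return 0.5 * (s + np.sqrt(s * s + 4.0 * lam * f / gamma3))
```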
(36)
\[\begin{aligned}{}{u^{k+1}}& =\mathrm{arg}\underset{u}{\min }\frac{{\gamma _{1}}}{2}{\bigg\| \nabla u-{v^{k+1}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg\| _{2}^{2}}+\frac{{\gamma _{2}}}{2}{\bigg\| {\nabla ^{2}}u-{w^{k+1}}+\frac{{\mu _{2}^{k}}}{{\gamma _{2}}}\bigg\| _{2}^{2}}\\ {} & \hspace{1em}+\frac{{\gamma _{3}}}{2}{\bigg\| Ku-{z^{k+1}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg\| _{2}^{2}}.\end{aligned}\]
Considering that the subproblem (36) is a simple quadratic problem, the corresponding Euler–Lagrange equation can be derived as
(37)
\[\begin{aligned}{}0& ={\gamma _{1}}{\nabla ^{\ast }}\bigg(\nabla {u^{k+1}}-{v^{k+1}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg)+{\gamma _{2}}{\big({\nabla ^{2}}\big)^{\ast }}\bigg({\nabla ^{2}}{u^{k+1}}-{w^{k+1}}+\frac{{\mu _{2}^{k}}}{{\gamma _{2}}}\bigg)\\ {} & \hspace{1em}+{\gamma _{3}}{K^{\ast }}\bigg(K{u^{k+1}}-{z^{k+1}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg),\end{aligned}\]
where ∗ indicates the adjoint operator. After a simple rearrangement, the above formulation can be represented as
(38)
\[\begin{aligned}{}& \big[{\gamma _{1}}{\nabla ^{\ast }}\nabla +{\gamma _{2}}{\big({\nabla ^{2}}\big)^{\ast }}{\nabla ^{2}}+{\gamma _{3}}{K^{\ast }}K\big]{u^{k+1}}\\ {} & \hspace{1em}={\nabla ^{\ast }}\big({\gamma _{1}}{v^{k+1}}-{\mu _{1}^{k}}\big)+{\big({\nabla ^{2}}\big)^{\ast }}\big({\gamma _{2}}{w^{k+1}}-{\mu _{2}^{k}}\big)+{K^{\ast }}\big({\gamma _{3}}{z^{k+1}}-{\mu _{3}^{k}}\big),\end{aligned}\]
with the periodic boundary condition for u. Note that the operators ${K^{\ast }}K$, ${\nabla ^{\ast }}\nabla $ and ${({\nabla ^{2}})^{\ast }}{\nabla ^{2}}$ all have block circulant structure under this boundary condition, so ${u^{k+1}}$ can be computed efficiently using the fast Fourier transform (FFT) and its inverse. For convenience, let ${\upsilon _{1}^{k+1}}={\gamma _{1}}{v^{k+1}}-{\mu _{1}^{k}}$, ${\upsilon _{2}^{k+1}}={\gamma _{2}}{w^{k+1}}-{\mu _{2}^{k}}$ and ${\upsilon _{3}^{k+1}}={\gamma _{3}}{z^{k+1}}-{\mu _{3}^{k}}$. Therefore, we have
(39)
\[ {u^{k+1}}={\mathcal{F}^{-1}}\bigg(\frac{\mathcal{F}{(\nabla )^{\ast }}\circ \mathcal{F}({\upsilon _{1}^{k+1}})+\mathcal{F}{({\nabla ^{2}})^{\ast }}\circ \mathcal{F}({\upsilon _{2}^{k+1}})+\mathcal{F}{(K)^{\ast }}\circ \mathcal{F}({\upsilon _{3}^{k+1}})}{{\gamma _{1}}\mathcal{F}{(\nabla )^{\ast }}\circ \mathcal{F}(\nabla )+{\gamma _{2}}\mathcal{F}{({\nabla ^{2}})^{\ast }}\circ \mathcal{F}({\nabla ^{2}})+{\gamma _{3}}\mathcal{F}{(K)^{\ast }}\circ \mathcal{F}(K)}\bigg),\]
where $\mathcal{F}$ stands for the FFT operator and "∘" is the componentwise multiplication operator. Alternatively, the discrete cosine transform can be employed to solve (38) under the Neumann boundary condition.
To recap, assembling the above solving steps, we arrive at the following alternating direction method of multipliers (ADMM, Algorithm 1) for the proposed optimization problem (21).
Remark 1.
Algorithm 1 consists of four subproblems, so it is worth discussing the computational complexity of each. First, the v-subproblem can be executed in $O({n_{1}}{n_{2}})$ time for an ${n_{1}}\times {n_{2}}$ image. Second, the solution of the w-subproblem is given exactly by shrinkage, so its cost is also linear in ${n_{1}}{n_{2}}$. As for the z-subproblem, its closed-form solution shows that the complexity is $O({n_{1}}{n_{2}})$. Lastly, the u-subproblem is computed by the FFT and its inverse, and hence requires $O({n_{1}}{n_{2}}\log ({n_{1}}{n_{2}}))$ arithmetic operations.
4 Experimental Results
In this section, we present experiments on standard test images, corrupted by different blurs and noise intensities, to demonstrate the performance of the proposed model for Poissonian image restoration. Note that Poisson noise is simulated using the MATLAB library function "poissrnd(I)". Our introduced strategy is compared with the classical TV, FOTV, HOTV and OGS-TV models in terms of visual quality and reconstruction accuracy. All experiments are performed under Windows 7 and MATLAB R2011b on a PC with an Intel(R) Core(TM) i5-6500U CPU at 3.20 GHz and 4 GB of RAM.
During the simulations, we apply the widely used peak signal-to-noise ratio (PSNR) criterion as the measure of restored image quality, defined by
(40)
\[ \mathrm{PSNR}=10{\log _{10}}\bigg(\frac{{P^{2}}}{\mathrm{MSE}}\bigg),\hspace{1em}\text{with MSE}=\frac{{\textstyle\textstyle\sum _{i=1}^{{n_{1}}}}{\textstyle\textstyle\sum _{j=1}^{{n_{2}}}}{({u_{i,j}}-{\tilde{u}_{i,j}})^{2}}}{{n_{1}}\times {n_{2}}},\]
where P denotes the peak value of the image, u is the original image, $\tilde{u}$ is the reconstructed one, and ${n_{1}}\times {n_{2}}$ is the image size. For a fairer comparison, the perceptual quality of the recovery is measured by the structural similarity (SSIM, Wang et al., 2004) index
(41)
\[ \mathrm{SSIM}=\frac{(2{\mu _{u}}{\mu _{\tilde{u}}}+{C_{1}})(2{\sigma _{u\tilde{u}}}+{C_{2}})}{({\mu _{u}^{2}}+{\mu _{\tilde{u}}^{2}}+{C_{1}})({\sigma _{u}^{2}}+{\sigma _{\tilde{u}}^{2}}+{C_{2}})},\]
with ${\mu _{u}}$ and ${\mu _{\tilde{u}}}$ being the means of u and $\tilde{u}$, respectively, ${\sigma _{u}}$, ${\sigma _{\tilde{u}}}$ their standard deviations, ${\sigma _{u\tilde{u}}}$ the covariance between u and $\tilde{u}$, and ${C_{1}}$, ${C_{2}}$ two small constants that avoid instability in (41). Moreover, we also adopt the feature similarity (FSIM, Zhang et al., 2011) criterion to evaluate the feature-preserving ability of the different methods.
It is noteworthy that the common parameters for the subsequent experiments are set to $L=3$ and $p=0.1$, and the number of MM iterations is fixed to 5. The three penalty parameters ${\gamma _{1}}$, ${\gamma _{2}}$ and ${\gamma _{3}}$ are updated by the rule stated in Chan et al. (2011), with the initial value 0.1. In addition, we set $\alpha =3$ for image denoising and $\alpha =5$ for image deblurring. With regard to the stopping condition, each algorithm is terminated by the relative-change criterion
(42)
\[ \frac{\| {u^{k+1}}-{u^{k}}\| _{2}}{\| {u^{k}}\| _{2}}\leqslant \mathit{tol},\]
for a prescribed small tolerance tol.
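For reference, the PSNR in (40) is straightforward to compute; a small sketch follows, where the peak value P defaults to the maximum of the original image as an assumption.

```python
import numpy as np

def psnr(u, u_rec, peak=None):
    """PSNR (40): 10 * log10(P^2 / MSE)."""
    u = np.asarray(u, dtype=float)
    mse = np.mean((u - np.asarray(u_rec, dtype=float)) ** 2)
    P = u.max() if peak is None else peak  # assumed default for P
    return 10.0 * np.log10(P * P / mse)
```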
In the first simulation, we compare the results of the five methods on two test images for denoising. The first original image, Peppers, of size $256\times 256$ pixels, is shown in the upper left of Fig. 1. Its noisy version, displayed in Fig. 1(b), is obtained by adding Poisson noise to the clean image with the peak value $P=90$. Figures 1(c)–(g) present the visual comparison of the TV, FOTV, HOTV and OGS-TV models and our proposed scheme. Our model is implemented with the parameter $\lambda =65$. For a more convincing illustration, Fig. 2 presents zoomed-in regions of the restorations. Besides, the same experiment is performed on the image Sailboats ($512\times 512$), as shown in Fig. 3, with the adjusted parameter $\lambda =60$. Meanwhile, the quantitative PSNR, SSIM and FSIM values and the CPU time (in seconds) are listed in Table 1.
Fig. 1
Restoration results for the image Peppers ($P=90$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Fig. 2
Zoomed-in results for the image Peppers ($P=90$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Fig. 3
Restoration results for the image Sailboats ($P=90$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Table 1
Comparison of the performance via five different methods.
Model | Peppers ($P=90$) | Sailboats ($P=90$) |
Iter | Time (s) | PSNR | SSIM | FSIM | Iter | Time (s) | PSNR | SSIM | FSIM | |
TV | 36 | 1.3103 | 29.9055 | 0.9568 | 0.9653 | 38 | 4.8038 | 30.8020 | 0.9563 | 0.9787 |
FOTV | 38 | 1.4198 | 29.9683 | 0.9577 | 0.9665 | 37 | 4.8965 | 30.5041 | 0.9526 | 0.9769 |
HOTV | 76 | 2.3989 | 30.3281 | 0.9624 | 0.9692 | 79 | 9.3371 | 30.9519 | 0.9570 | 0.9785 |
OGS-TV | 41 | 1.6928 | 30.4509 | 0.9613 | 0.9709 | 22 | 3.4210 | 31.3271 | 0.9598 | 0.9811 |
Ours | 31 | 1.8225 | 30.8580 | 0.9640 | 0.9730 | 33 | 7.8308 | 31.7135 | 0.9633 | 0.9836 |
The second experiment further verifies the denoising ability and effectiveness of the proposed approach. For this purpose, we increase the Poisson noise density by setting a smaller peak value. Two grayscale images, Bird and Boats, are depicted in Figs. 4(a) and 5(a); the Bird image, of size $256\times 256$, contains rich contours and details, while the other has dimensions of $512\times 512$ pixels and a complex background. Figures 4(b) and 5(b) show the contaminated images with $P=60$. The remaining parts of Figs. 4 and 5 are the corresponding restoration results of the above five techniques on the two images, respectively. It is worth mentioning that, in this case, the proposed strategy is computed with the same settings as in the Sailboats simulation. Moreover, Table 2 reports the detailed comparisons regarding the PSNR, SSIM and FSIM values and the CPU time.
Fig. 4
Restoration results for the image Bird ($P=60$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Fig. 5
Restoration results for the image Boats ($P=60$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
As is evident from both the visual results and the objective data, the TV-based approach preserves object boundaries well, but it produces serious staircase artifacts in smooth regions. On the contrary, these artifacts disappear when the HOTV regularizer is utilized, though it loses important structural features owing to excessive smoothing of the edges. The images restored by the FOTV model exhibit no staircasing but show a little residual noise. The OGS-TV model still produces slight piecewise-constant artifacts while denoising. As can be observed, the proposed model achieves better image quality than the other models, keeping sharp and clean edges with fewer blocky effects.
The third experiment tests the ability to restore blurred and noisy images. The standard test images Saturn and Lady, which are of size $256\times 256$ pixels and consist of homogeneous regions and abundant features, are chosen for this example. Their degraded versions, shown in Figs. 6(b) and 7(b), are corrupted by Gaussian blur with a kernel size of $7\times 7$ pixels and standard deviation 2, followed by Poisson noise with $P=255$. In this case, the weighting parameter is set to $\lambda =300$ for the former and $\lambda =370$ for the latter. The recovered results and comparisons of the different models are shown in Figs. 6(c)–(g), 7(c)–(g) and Table 3.
Table 2
Comparison of the performance via five different methods.
Model | Bird ($P=60$) | Boats ($P=60$) |
Iter | Time (s) | PSNR | SSIM | FSIM | Iter | Time (s) | PSNR | SSIM | FSIM | |
TV | 52 | 1.8598 | 28.1826 | 0.9622 | 0.9672 | 45 | 5.8043 | 28.4419 | 0.9527 | 0.9806 |
FOTV | 53 | 1.9209 | 28.2058 | 0.9582 | 0.9698 | 50 | 6.5016 | 28.3214 | 0.9520 | 0.9813 |
HOTV | 84 | 2.7963 | 28.2452 | 0.9620 | 0.9703 | 75 | 9.2308 | 28.6667 | 0.9556 | 0.9840 |
OGS-TV | 44 | 1.9079 | 28.4524 | 0.9641 | 0.9715 | 34 | 5.5546 | 28.7160 | 0.9558 | 0.9807 |
Ours | 35 | 2.1098 | 28.9880 | 0.9673 | 0.9746 | 35 | 8.3198 | 29.3100 | 0.9605 | 0.9848 |
Fig. 6
Restoration results for the image Saturn ($P=255$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
In the last experiment, to further evaluate the performance of the proposed model for Poissonian image restoration, a classical motion blur is considered. Here we take the images Moon ($358\times 537$) and Lena ($512\times 512$), which contain flat zones and neat contours, as depicted in Figs. 8(a) and 9(a), as the objects for joint denoising and deblurring. Figures 8(b) and 9(b) show the contaminated images, blurred by motion blur with motion distance $l=10$ and angle $\theta =45$, and corrupted by Poisson noise with the noise level $P=200$. We set the parameter $\lambda =360$ for the Moon image, and $\lambda =300$ for the Lena image. Figures 8 and 9 present the visual restorations by the five methods, and the PSNR, SSIM and FSIM values of the reconstructions are listed in Table 4.
Fig. 7
Restoration results for the image Lady ($P=255$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Table 3
Comparison of the performance via five different methods.
Model | Saturn ($P=255$) | Lady ($P=255$) |
Iter | Time (s) | PSNR | SSIM | FSIM | Iter | Time (s) | PSNR | SSIM | FSIM | |
TV | 133 | 4.6795 | 33.0938 | 0.9416 | 0.9367 | 85 | 3.0500 | 28.9949 | 0.8478 | 0.8586 |
FOTV | 73 | 2.6083 | 33.3967 | 0.9453 | 0.9551 | 44 | 1.5527 | 28.2643 | 0.8419 | 0.8833 |
HOTV | 105 | 3.4096 | 33.1280 | 0.9468 | 0.9561 | 97 | 3.1181 | 28.6701 | 0.8593 | 0.8884 |
OGS-TV | 41 | 1.6945 | 33.4080 | 0.9470 | 0.9543 | 43 | 1.8380 | 29.1683 | 0.8595 | 0.8881 |
Ours | 37 | 2.2526 | 33.6515 | 0.9473 | 0.9563 | 30 | 1.9765 | 29.5221 | 0.8656 | 0.9012 |
Fig. 8
Restoration results for the image Moon ($P=200$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Fig. 9
Restoration results for the image Lena ($P=200$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
All in all, the staircase-like and fuzzy effects caused by the TV-based model inevitably degrade the image quality. The HOTV model avoids the staircase shortcoming, but the resulting images often contain inaccurate edges and blurred contours. Even though the FOTV and OGS-TV models reduce the staircase artifacts to some extent, they still exhibit residual noise or staircase distortion. In contrast, our novel approach, which combines the advantages of the nonconvex second-order regularizer and the OGS-TV functional, performs prominently in comparison with the other existing popular techniques. Our scheme not only makes the artifacts substantially disappear, but also renders the edges sharper. Quantitatively, this results in restorations possessing the best PSNR, SSIM and FSIM values in Tables 3 and 4.
Table 4
Comparison of the performance via five different methods.
Model | Moon ($P=200$) | Lena ($P=200$) |
Iter | Time (s) | PSNR | SSIM | FSIM | Iter | Time (s) | PSNR | SSIM | FSIM | |
TV | 80 | 10.3288 | 33.3848 | 0.8623 | 0.8877 | 62 | 8.0070 | 28.6305 | 0.8349 | 0.9298 |
FOTV | 64 | 8.8045 | 33.2102 | 0.8915 | 0.9084 | 42 | 5.4066 | 28.2719 | 0.8363 | 0.9364 |
HOTV | 128 | 11.8345 | 33.6125 | 0.8743 | 0.9115 | 92 | 10.5619 | 28.7451 | 0.8497 | 0.9450 |
OGS-TV | 59 | 7.0703 | 33.8428 | 0.9079 | 0.8986 | 36 | 5.3908 | 28.8995 | 0.8474 | 0.9414 |
Ours | 36 | 6.7205 | 34.1525 | 0.9121 | 0.9140 | 33 | 7.8365 | 29.1134 | 0.8540 | 0.9492 |
5 Conclusion
In this paper, we have proposed a novel hybrid nonconvex variational model to reconstruct images degraded by blur and Poisson noise. The introduced scheme closely integrates the strengths of the OGS-TV and nonconvex HOTV regularizers, and this combination helps to improve the accuracy of image restoration. To efficiently obtain the optimal solution of the minimization problem, we developed a modified alternating direction method of multipliers by combining the iteratively reweighted ${\ell _{1}}$ algorithm and the majorization-minimization method. Lastly, numerous simulation examples, in comparison with other widely applied regularization models, have demonstrated that the presented strategy reduces staircase-like artifacts, preserves edge features, and restores Poissonian images more accurately.