Informatica

Restoration of Poissonian Images Using Nonconvex Regularizer with Overlapping Group Sparsity
Volume 33, Issue 3 (2022), pp. 573–592
Xinwu Liu, Wenhui Lian

https://doi.org/10.15388/22-INFOR480
Pub. online: 24 March 2022      Type: Research Article      Open Access

Received: 1 April 2021
Accepted: 1 March 2022
Published: 24 March 2022

Abstract

Aimed at the accurate restoration of Poissonian images with neat edges and no staircase effect, this article develops a novel hybrid nonconvex double-regularizer model. The proposed scheme combines the advantages of total variation with overlapping group sparsity and a nonconvex high-order total variation prior. The overlapping group sparsity is adopted to globally suppress staircase artifacts, while the nonconvex high-order regularization locally preserves significant image features and edge details. Computationally, an efficient alternating direction method of multipliers, combined with the iteratively reweighted ${\ell _{1}}$ algorithm and the majorization-minimization method, is employed to solve the optimization problem iteratively. Finally, exhaustive simulation experiments on the recovery of Poissonian images, compared against several state-of-the-art restoration strategies, demonstrate the excellent performance of our model in terms of both visual effects and accuracy evaluation.

1 Introduction

In imaging science, acquired images usually suffer from undesirable degradation caused by limitations of the external environment, electronic equipment and human factors. Among these, contamination by Poisson noise is a widely studied issue, which commonly occurs in medical imaging (Sarder and Nehorai, 2006), single photon emission computed tomography (Bardsley and Goldes, 2011) and various other applications. Therefore, Poisson noise removal is an important and urgent task.
To solve the above problem, Setzer et al. (2010) proposed a classical Poissonian image restoration model based on the total variation (TV) regularization as
(1)
\[ \underset{u}{\min }\| \nabla u{\| _{1}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
where ∇ denotes the gradient operator, λ is a positive tuning parameter that controls the data fidelity term, K is a nonnegative linear compact operator, and u and f represent the original image and the positive, bounded observed version, respectively. To solve the minimization problem (1), the studies in (Setzer et al., 2010; Liu and Huang, 2012; Pham et al., 2020) adopted the alternating split Bregman iteration to avoid an inner loop. Besides, several other efficient numerical algorithms have been proposed in recent years, such as the scaled gradient projection method (Bonettini et al., 2009) and the alternating direction algorithm (Figueiredo and Bioucas-Dias, 2010).
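The data fidelity term $\langle 1,Ku-f\log Ku\rangle $ in (1) is, up to a constant independent of u, the negative Poisson log-likelihood. A minimal NumPy sketch (the function name is our own illustration):

```python
import numpy as np

def poisson_fidelity(Ku, f):
    # <1, Ku - f*log(Ku)>: negative Poisson log-likelihood up to an
    # additive constant; requires Ku strictly positive
    Ku = np.asarray(Ku, dtype=float)
    f = np.asarray(f, dtype=float)
    return float(np.sum(Ku - f * np.log(Ku)))
```

For fixed $f>0$, the term is minimized pixelwise at $Ku=f$, which is why it acts as the data-fitting part of the model.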
Generally speaking, TV-based models preserve edges well during restoration. That said, the TV penalty tends to over-segment homogeneous regions so that they appear piecewise constant. To compensate for this drawback, many efficient alternatives have emerged, such as the high-order TV (HOTV, Chan et al., 2000; Lysaker et al., 2003), nonlocal TV (Gilboa and Osher, 2008), and total generalized variation (Bredies et al., 2010; Liu, 2016, 2021) regularized schemes. For removing Poisson noise, the HOTV-based model formally reads as
(2)
\[ \underset{u}{\min }{\big\| {\nabla ^{2}}u\big\| _{1}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
with ${\nabla ^{2}}$ denoting the second-order gradient operator. The merit of the high-order strategy is its ability to eliminate the staircase effect caused by the TV model. Another recent approach to overcoming this distortion is the fractional-order TV (FOTV, Chowdhury et al., 2020). Unfortunately, images produced by the aforementioned techniques sometimes suffer from over-smoothed contours or residual noise.
In contrast to the convex models mentioned above, Nikolova et al. (2010) illustrated that nonconvex regularizers are superior to convex ones at preserving shapes. In fact, nonconvex TV (Nikolova et al., 2010; Chartrand, 2007) retains some excellent features of TV, but it may cause even more serious staircase artifacts. In our view, nonconvex HOTV (Adam and Paramesran, 2019; Oh et al., 2013) is comparatively more appropriate for processing degraded images. Recently, Lv et al. (2016) considered TV with overlapping group sparsity (OGS-TV) for deblurring Poisson noisy images. Their experiments show that this approach can alleviate blocky artifacts to a certain extent; however, the staircase effect is still present in the recovered image.
To overcome the staircase effect and obtain sharp jump discontinuities simultaneously, as well as to improve restoration accuracy, this paper focuses on a novel hybrid regularizer strategy for image deblurring under Poisson noise. The proposed model incorporates the advantages of the OGS-TV and nonconvex HOTV regularizers. Mathematically, the resulting optimization model is formulated as follows
(3)
\[ \underset{u}{\min }\phi (\nabla u)+\alpha {\big\| {\nabla ^{2}}u\big\| _{p}^{p}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
where $\phi (\cdot )$ is defined by the overlapping group sparsity, and $\| \cdot {\| _{p}^{p}}$ is a nonconvex ${\ell _{p}}$ norm with $0<p<1$. The parameter $\alpha >0$ controls the tradeoff between the first and second regularizers.
The main ideas and contributions of this article are summarized as follows. First, a novel hybrid regularizer model, which combines the strengths of OGS-TV and nonconvex HOTV, is explored for deblurring Poissonian images. The inclusion of double regularizers contributes to accurate restoration, preserving sharp edges while suppressing staircase artifacts. Second, to handle the resulting nonconvex minimization problem, we design an efficient alternating direction method of multipliers, integrating it with the popular variable splitting method, the majorization-minimization (MM) method and the iteratively reweighted ${\ell _{1}}$ algorithm. Lastly, numerous numerical experiments, in comparison with several state-of-the-art denoising techniques, are presented to illustrate the superior performance of the newly developed scheme.
The remainder of this paper is organized as follows. In Section 2, we briefly present some mathematical notation and necessary definitions, as well as the algorithms related to our research. The detailed solution steps of the proposed minimization are described in Section 3, together with some efficient methods and a remark. Subsequently, in Section 4 we conduct several numerical experiments to compare different models and demonstrate our improvements. Finally, Section 5 concludes the paper.

2 Preliminaries

In this section, we outline some necessary background knowledge, tailored to the subsequent numerical computations.

2.1 Notation

Suppose that $\Omega \subset {\mathbb{R}^{n}}$ is an open, bounded domain. Then the TV of $u\in {L^{1}}(\Omega )$ is formulated as
(4)
\[ \mathrm{TV}(u)={\int _{\Omega }}|\nabla u|\mathrm{d}x=\sup \bigg\{{\int _{\Omega }}u\mathrm{div}(\psi )\mathrm{d}x:\psi \in {C_{c}^{1}}\big(\Omega ;{\mathbb{R}^{n}}\big),|\psi |\leqslant 1\bigg\},\]
with div being the divergence operator. As for the high-order version, it takes the form of
(5)
\[ \mathrm{HTV}(u)={\int _{\Omega }}\big|{\nabla ^{2}}u\big|\mathrm{d}x=\sup \Bigg\{{\int _{\Omega }}{\sum \limits_{i,j=1}^{n}}u{\partial _{j}}{\partial _{i}}{\psi ^{ij}}\mathrm{d}x:\psi \in {C_{c}^{2}}\big(\Omega ;{\mathbb{R}^{n\times n}}\big),|\psi |\leqslant 1\Bigg\},\]
where $|\psi (x)|=\sqrt{{\textstyle\sum _{i,j=1}^{n}}{({\psi ^{ij}})^{2}}}$, and ${C_{c}^{2}}(\Omega ;{\mathbb{R}^{n\times n}})$ is the set of twice continuously differentiable vector-valued functions with compact support in Ω.
We are now in a position to give the discrete setting. For notational convenience, we assume that the image size is ${n_{1}}\times {n_{2}}$, and define the function space $U={C_{c}^{2}}(\Omega ;{\mathbb{R}^{n\times n}})$. Following the finite difference method, the first-order forward and backward difference operators are respectively defined by
\[\begin{aligned}{}& {\big({\partial _{x}^{+}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i+1,j}}-{u_{i,j}},\hspace{1em}& 1\leqslant i<{n_{1}},\\ {} 0,\hspace{1em}& i={n_{1}},\end{array}\right.\\ {} & {\big({\partial _{y}^{+}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i,j+1}}-{u_{i,j}},\hspace{1em}& 1\leqslant j<{n_{2}},\\ {} 0,\hspace{1em}& j={n_{2}},\end{array}\right.\\ {} & {\big({\partial _{x}^{-}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i,j}}-{u_{i-1,j}},\hspace{1em}& 1<i<{n_{1}},\\ {} {u_{1,j}},\hspace{1em}& i=1,\\ {} -{u_{{n_{1}}-1,j}},\hspace{1em}& i={n_{1}},\end{array}\right.\\ {} & {\big({\partial _{y}^{-}}u\big)_{i,j}}=\left\{\begin{array}{l@{\hskip4.0pt}l}{u_{i,j}}-{u_{i,j-1}},\hspace{1em}& 1<j<{n_{2}},\\ {} {u_{i,1}},\hspace{1em}& j=1,\\ {} -{u_{i,{n_{2}}-1}},\hspace{1em}& j={n_{2}}.\end{array}\right.\end{aligned}\]
Consequently, the discrete gradient operator enjoys the following expression
(6)
\[ {(\nabla u)_{i,j}}={\big({\big({\partial _{x}^{+}}u\big)_{i,j}},{\big({\partial _{y}^{+}}u\big)_{i,j}}\big)^{T}},\]
and the counterpart of second-order operator is thus defined by
(7)
\[ {\big({\nabla ^{2}}u\big)_{i,j}}=\left(\begin{array}{c@{\hskip4.0pt}c}{\partial _{x}^{-}}{({\partial _{x}^{+}}u)_{i,j}}& {\partial _{x}^{+}}{({\partial _{y}^{+}}u)_{i,j}}\\ {} {\partial _{y}^{-}}{({\partial _{x}^{-}}u)_{i,j}}& {\partial _{y}^{-}}{({\partial _{y}^{+}}u)_{i,j}}\end{array}\right).\]
Furthermore, the discrete forms of the first-order and second-order divergence operators on the space U are described as
\[\begin{aligned}{}& {(\mathrm{div}u)_{i,j}}={\big({\partial _{x}^{-}}u\big)_{i,j}}+{\big({\partial _{y}^{-}}u\big)_{i,j}},\\ {} & {\big({\mathrm{div}^{2}}u\big)_{i,j}}={\partial _{x}^{+}}{\big({\partial _{x}^{-}}u\big)_{i,j}}+{\partial _{x}^{-}}{\big({\partial _{y}^{-}}u\big)_{i,j}}+{\partial _{y}^{+}}{\big({\partial _{x}^{+}}u\big)_{i,j}}+{\partial _{y}^{+}}{\big({\partial _{y}^{-}}u\big)_{i,j}}.\end{aligned}\]
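Under the boundary conventions above, each backward operator is the negative adjoint of the corresponding forward one, which is what makes the divergence formulas consistent with the gradient (6). A NumPy sketch of the first-order operators (function names are ours):

```python
import numpy as np

def dx_plus(u):
    # forward difference along the first axis, zero at the last row (i = n1)
    d = np.zeros_like(u)
    d[:-1, :] = u[1:, :] - u[:-1, :]
    return d

def dx_minus(u):
    # backward difference with the paper's boundary convention:
    # u_{1,j} at i = 1 and -u_{n1-1,j} at i = n1
    d = np.zeros_like(u)
    d[1:-1, :] = u[1:-1, :] - u[:-2, :]
    d[0, :] = u[0, :]
    d[-1, :] = -u[-2, :]
    return d

def dy_plus(u):
    # same operators along the second axis, by transposition
    return dx_plus(u.T).T

def dy_minus(u):
    return dx_minus(u.T).T

def div(px, py):
    # first-order discrete divergence: dx^- p1 + dy^- p2
    return dx_minus(px) + dy_minus(py)
```

With these conventions one can check numerically that $\langle \nabla u,\psi \rangle =-\langle u,\mathrm{div}\hspace{0.1667em}\psi \rangle $ holds exactly on the discrete grid.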

2.2 OGS-TV Method

As stated in Selesnick and Chen (2013), an L-point group of the vector s is represented as
(8)
\[ {s_{i,L}}=[{s_{i}},{s_{i+1}},\dots ,{s_{i+L-1}}]\in {\mathbb{R}^{L}},\]
where L denotes the group size, and ${s_{i,L}}$ is a block composed of L consecutive components of s starting at index i. Based on this notation, Peyre and Fadili (2011) put forward a general 1D group sparsity regularizer
(9)
\[ \varphi (s)=\sum \limits_{i}\| {s_{i,L}}{\| _{2}}=\sum \limits_{i}{\Bigg({\sum \limits_{l=0}^{L-1}}|{s_{i+l}}{|^{2}}\Bigg)^{1/2}}.\]
Note that if $L=1$, $\varphi (s)$ in (9) reduces to the commonly used 1D TV functional, while for $L>1$, $\varphi (s)$ promotes block sparsity (Bayram, 2011).
For a 2D image $u\in {\mathbb{R}^{m\times 1}}$ with $m={n_{1}}\times {n_{2}}$, the associated $L\times L$-point group can be defined as the following square matrix:
(10)
\[ {\tilde{u}_{i,j,L}}=\left[\begin{array}{c@{\hskip4.0pt}c@{\hskip4.0pt}c@{\hskip4.0pt}c}{u_{i-{m_{1}},j-{m_{1}}}}\hspace{1em}& {u_{i-{m_{1}},j-{m_{1}}+1}}\hspace{1em}& \cdots \hspace{1em}& {u_{i-{m_{1}},j+{m_{2}}}}\\ {} {u_{i-{m_{1}}+1,j-{m_{1}}}}\hspace{1em}& {u_{i-{m_{1}}+1,j-{m_{1}}+1}}\hspace{1em}& \cdots \hspace{1em}& {u_{i-{m_{1}}+1,j+{m_{2}}}}\\ {} \vdots \hspace{1em}& \vdots \hspace{1em}& \ddots \hspace{1em}& \vdots \\ {} {u_{i+{m_{2}},j-{m_{1}}}}\hspace{1em}& {u_{i+{m_{2}},j-{m_{1}}+1}}\hspace{1em}& \cdots \hspace{1em}& {u_{i+{m_{2}},j+{m_{2}}}}\end{array}\right],\]
where ${m_{1}}=\lceil \frac{L-1}{2}\rfloor $ and ${m_{2}}=\lceil \frac{L}{2}\rfloor $, with $\lceil \cdot \rfloor $ denoting rounding down to the nearest integer (so that ${m_{1}}+{m_{2}}+1=L$). By stacking the L columns of the $L\times L$ matrix in (10), we obtain a vector ${u_{i,j,L}}={\tilde{u}_{i,j,L}}(:)$. Thus, the OGS functional in the 2D setting reads
(11)
\[ \varphi (u)=\sum \limits_{i}\sum \limits_{j}\| {u_{i,j,L}}{\| _{2}}.\]
Consequently, the function $\phi (\nabla u)$ acting as a regularizer in (3) can be defined as
(12)
\[ \phi (\nabla u)=\varphi ({\nabla _{x}}u)+\varphi ({\nabla _{y}}u).\]
It follows from the above formula that if $L=1$, $\phi (\nabla u)$ is the anisotropic TV functional; when $L>1$, this regularizer is referred to as OGS-TV.
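The OGS-TV functional (11)-(12) can be sketched directly in NumPy. For simplicity the groups below are anchored at $(i,j)$ with zero padding rather than centred via ${m_{1}},{m_{2}}$ as in (10), and the forward differences use the zero boundary of (6); function names are ours:

```python
import numpy as np

def ogs_norm(u, L=3):
    # varphi(u) in (11): sum over every pixel of the l2 norm of its
    # LxL group (anchored at (i, j), zero padded at the border)
    n1, n2 = u.shape
    up = np.pad(u, ((0, L - 1), (0, L - 1)))
    total = 0.0
    for i in range(n1):
        for j in range(n2):
            total += np.linalg.norm(up[i:i + L, j:j + L])
    return total

def ogs_tv(u, L=3):
    # phi(grad u) = varphi(dx u) + varphi(dy u), cf. (12)
    gx = np.diff(u, axis=0, append=u[-1:, :])  # forward diff, zero last row
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return ogs_norm(gx, L) + ogs_norm(gy, L)
```

As a sanity check, with $L=1$ the functional degenerates to the ordinary anisotropic $\ell _{1}$ sum, matching the remark after (9).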

2.3 MM Method

The MM method (Hunter and Lange, 2004; Figueiredo et al., 2007) is frequently applied to minimization problems. Rather than directly handling a complex cost function $P(u)$, this technique gains computational efficiency by solving a sequence of more tractable surrogate problems $Q(u,{u^{k}})$ $(k=0,1,2,\dots )$. Specifically, an MM iteration that minimizes $P(u)$ can be written as
(13)
\[ {u^{k+1}}=\mathrm{arg}\underset{u}{\min }Q\big(u,{u^{k}}\big),\]
which requires that $Q(u,{u^{k}})\geqslant P(u)$ for all u, with equality $Q(u,u)=P(u)$.
For example, consider an optimization problem of the form
(14)
\[ \underset{u}{\min }\bigg\{P(u)=\frac{1}{2}\| u-{u_{0}}{\| _{2}^{2}}+\lambda \varphi (u)\bigg\},\]
with $\lambda >0$ and φ the penalty function given by (11).
To derive an effective MM strategy for the minimization (14), we need a majorizer of $P(u)$. Owing to the quadratic form of the first term in (14), it suffices to find a majorizer of $\varphi (u)$. The arithmetic-geometric mean inequality gives
(15)
\[ \frac{1}{2\| t{\| _{2}}}\| u{\| _{2}^{2}}+\frac{1}{2}\| t{\| _{2}}\geqslant \| u{\| _{2}},\]
where $u,t\ne 0$. Equality in (15) holds if and only if $t=u$. Substituting each group of $\varphi (u)$ into (15) and summing up yields a majorizer of $\varphi (u)$:
(16)
\[ R(u,t)=\frac{1}{2}{\sum \limits_{i=1}^{{n_{1}}}}{\sum \limits_{j=1}^{{n_{2}}}}\bigg[\frac{1}{\| {t_{i,j,L}}{\| _{2}}}\| {u_{i,j,L}}{\| _{2}^{2}}+\| {t_{i,j,L}}{\| _{2}}\bigg],\]
with $R(u,t)\geqslant \varphi (u)$ and $R(t,t)=\varphi (t)$. For all i, j, (16) requires $\| {t_{i,j,L}}{\| _{2}}\ne 0$. By a straightforward calculation, it can be reformulated as
(17)
\[ R(u,t)=\frac{1}{2}{u^{T}}\Lambda (t)u+C,\]
where C is a constant related to t, and each element in the diagonal matrix $\Lambda (t)$ is expressed by
(18)
\[ {\big[\Lambda (t)\big]_{d,d}}={\sum \limits_{i=-{m_{1}}}^{{m_{2}}}}{\sum \limits_{j=-{m_{1}}}^{{m_{2}}}}{\Bigg(\hspace{0.1667em}{\sum \limits_{{k_{1}}=-{m_{1}}}^{{m_{2}}}}{\sum \limits_{{k_{2}}=-{m_{1}}}^{{m_{2}}}}|{t_{d-i+{k_{1}},d-j+{k_{2}}}}{|^{2}}\Bigg)^{-1/2}},\]
with the subscript $d=1,2,\dots ,m$.
So as to minimize $P(u)$, an iterative algorithm using the MM approach is defined by
(19)
\[ {u^{k+1}}=\mathrm{arg}\underset{u}{\min }\frac{1}{2}\| u-{u_{0}}{\| _{2}^{2}}+\frac{\lambda }{2}{u^{T}}\Lambda \big({u^{k}}\big)u.\]
After some manipulations, we can easily get the final solution
(20)
\[ {u^{k+1}}={\big(I+\lambda \Lambda \big({u^{k}}\big)\big)^{-1}}{u_{0}},\]
where I denotes the identity matrix of the same size as $\Lambda ({u^{k}})$.
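The MM scheme (19)-(20) can be sketched in a few lines of NumPy: since $\Lambda ({u^{k}})$ is diagonal, the update (20) is an elementwise division. As above, groups are anchored rather than centred, and the small guard constant `eps` protecting against zero group norms is our own addition:

```python
import numpy as np

def group_norms(t, L):
    # ||t_{i,j,L}||_2 for every group anchor (i, j), zero padded
    n1, n2 = t.shape
    tp = np.pad(t, ((0, L - 1), (0, L - 1)))
    g = np.zeros((n1, n2))
    for i in range(n1):
        for j in range(n2):
            g[i, j] = np.linalg.norm(tp[i:i + L, j:j + L])
    return g

def mm_denoise(u0, lam, L=3, iters=5, eps=1e-10):
    # MM iteration (19)-(20): u^{k+1} = (I + lam*Lambda(u^k))^{-1} u0,
    # where Lambda is diagonal, so the inverse is elementwise
    n1, n2 = u0.shape
    u = u0.copy()
    for _ in range(iters):
        w = 1.0 / (group_norms(u, L) + eps)   # one weight per group
        # each pixel accumulates the weights of every group containing it
        acc = np.zeros((n1 + L - 1, n2 + L - 1))
        for i in range(n1):
            for j in range(n2):
                acc[i:i + L, j:j + L] += w[i, j]
        Lam = acc[:n1, :n2]
        u = u0 / (1.0 + lam * Lam)
    return u
```

Because every diagonal entry of $\Lambda $ is nonnegative, each iterate shrinks the input towards zero, as expected of a proximal-type denoising step.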

3 Numerical Algorithm

In this section, our main task is to solve the proposed hybrid model combining the OGS-TV and nonconvex HOTV functionals for image restoration.
For solving the ${\ell _{p}}$-norm based nonconvex optimization problem, among the typical algorithms we are aware of, the iteratively reweighted ${\ell _{1}}$ algorithm (Candes et al., 2008; Ochs et al., 2015) is generally an ideal choice. Applying this method to the above minimization leads to the following approximation
(21)
\[ \underset{u}{\min }\phi (\nabla u)+\alpha {\omega ^{k}}{\big\| {\nabla ^{2}}u\big\| _{1}}+\lambda \langle 1,Ku-f\log Ku\rangle ,\]
where the weight ω is computed at the k-th iteration as ${\omega ^{k}}=p/{(|{\nabla ^{2}}{u^{k}}|+\epsilon )^{1-p}}$, with ϵ a small positive number that prevents the denominator from vanishing.
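The reweighting rule is a one-liner; a hedged NumPy sketch (function name and default ϵ are ours):

```python
import numpy as np

def irl1_weight(hess_u, p=0.1, eps=1e-8):
    # omega^k = p / (|grad^2 u^k| + eps)^(1 - p): pixels with large
    # second derivatives (edges) receive small weights, so the next
    # weighted l1 problem penalizes them less
    return p / (np.abs(hess_u) + eps) ** (1.0 - p)
```

This is what lets the nonconvex ${\ell _{p}}$ penalty behave edge-adaptively while each inner problem stays convex.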
To handle the non-linearity and non-differentiability of the constructed model, we resort to the classical variable splitting technique. The objective function (21) can thus be transformed into a constrained optimization problem by introducing three auxiliary variables v, w and z. Namely,
(22)
\[ \underset{v,w,z,u}{\min }\phi (v)+\alpha {\omega ^{k}}\| w{\| _{1}}+\lambda \langle 1,z-f\log z\rangle ,\hspace{2.5pt}\mathrm{s}.\mathrm{t}.\hspace{2.5pt}v=\nabla u,\hspace{2.5pt}w={\nabla ^{2}}u,\hspace{2.5pt}z=Ku.\]
The above minimization problem can be effectively solved by the alternating direction method of multipliers (Gabay and Mercier, 1976; Bertsekas and Tsitsiklis, 1997). Accordingly, we define the corresponding augmented Lagrangian function
(23)
\[\begin{aligned}{}& \mathcal{L}(v,w,z,u;{\mu _{1}},{\mu _{2}},{\mu _{3}})\\ {} & \hspace{1em}=\phi (v)+\alpha {\omega ^{k}}\| w{\| _{1}}+\lambda \langle 1,z-f\log z\rangle \\ {} & \hspace{2em}-\langle {\mu _{1}},v-\nabla u\rangle +\frac{{\gamma _{1}}}{2}\| v-\nabla u{\| _{2}^{2}}-\big\langle {\mu _{2}},w-{\nabla ^{2}}u\big\rangle \\ {} & \hspace{2em}+\frac{{\gamma _{2}}}{2}\big\| w-{\nabla ^{2}}u{\big\| _{2}^{2}}-\langle {\mu _{3}},z-Ku\rangle +\frac{{\gamma _{3}}}{2}\| z-Ku{\| _{2}^{2}},\end{aligned}\]
where ${\mu _{1}},{\mu _{2}},{\mu _{3}}$ stand for the Lagrange multipliers, and the three positive penalty parameters ${\gamma _{1}},{\gamma _{2}},{\gamma _{3}}$ weight the quadratic penalty terms.
Obtaining the optimal solution of (22) is equivalent to finding a saddle point of $\mathcal{L}$. Since minimizing over all variables simultaneously is technically difficult, we alternately minimize $\mathcal{L}(v,w,z,u;{\mu _{1}},{\mu _{2}},{\mu _{3}})$ with respect to v, w, z and u,
(24)
\[\begin{aligned}{}& \big({v^{k+1}},{w^{k+1}},{z^{k+1}},{u^{k+1}}\big)\\ {} & \hspace{1em}=\mathrm{arg}\underset{v,w,z,u}{\min }\phi (v)+\alpha {\omega ^{k}}\| w{\| _{1}}+\lambda \langle 1,z-f\log z\rangle \\ {} & \hspace{2em}-\big\langle {\mu _{1}^{k}},v-\nabla u\big\rangle +\frac{{\gamma _{1}}}{2}\| v-\nabla u{\| _{2}^{2}}-\big\langle {\mu _{2}^{k}},w-{\nabla ^{2}}u\big\rangle \\ {} & \hspace{2em}+\frac{{\gamma _{2}}}{2}\big\| w-{\nabla ^{2}}u{\big\| _{2}^{2}}-\big\langle {\mu _{3}^{k}},z-Ku\big\rangle +\frac{{\gamma _{3}}}{2}\| z-Ku{\| _{2}^{2}},\end{aligned}\]
with the Lagrange multipliers ${\mu _{1}^{k+1}}$, ${\mu _{2}^{k+1}}$ and ${\mu _{3}^{k+1}}$ updated by
(25)
\[ \left\{\begin{array}{l}{\mu _{1}^{k+1}}={\mu _{1}^{k}}+{\gamma _{1}}\big(\nabla {u^{k+1}}-{v^{k+1}}\big),\\ {} {\mu _{2}^{k+1}}={\mu _{2}^{k}}+{\gamma _{2}}\big({\nabla ^{2}}{u^{k+1}}-{w^{k+1}}\big),\\ {} {\mu _{3}^{k+1}}={\mu _{3}^{k}}+{\gamma _{3}}\big(K{u^{k+1}}-{z^{k+1}}\big).\end{array}\right.\]
More precisely, the optimization problem (24) splits into the following decoupled subproblems
(26)
\[ \left\{\begin{array}{l}{v^{k+1}}=\mathrm{arg}\underset{v}{\min }\phi (v)+\frac{{\gamma _{1}}}{2}{\big\| v-\nabla {u^{k}}\big\| _{2}^{2}}-\big\langle {\mu _{1}^{k}},v-\nabla {u^{k}}\big\rangle ,\\ {} {w^{k+1}}=\mathrm{arg}\underset{w}{\min }\alpha {\omega ^{k}}\| w{\| _{1}}+\frac{{\gamma _{2}}}{2}{\big\| w-{\nabla ^{2}}{u^{k}}\big\| _{2}^{2}}-\big\langle {\mu _{2}^{k}},w-{\nabla ^{2}}{u^{k}}\big\rangle ,\\ {} {z^{k+1}}=\mathrm{arg}\underset{z}{\min }\lambda \langle 1,z-f\log z\rangle +\frac{{\gamma _{3}}}{2}{\big\| z-K{u^{k}}\big\| _{2}^{2}}-\big\langle {\mu _{3}^{k}},z-K{u^{k}}\big\rangle ,\\ {} {u^{k+1}}=\mathrm{arg}\underset{u}{\min }\frac{{\gamma _{1}}}{2}{\big\| {v^{k+1}}-\nabla u\big\| _{2}^{2}}-\big\langle {\mu _{1}^{k}},{v^{k+1}}-\nabla u\big\rangle +\frac{{\gamma _{2}}}{2}{\big\| {w^{k+1}}-{\nabla ^{2}}u\big\| _{2}^{2}}\\ {} \hspace{34.14322pt}-\big\langle {\mu _{2}^{k}},{w^{k+1}}-{\nabla ^{2}}u\big\rangle +\displaystyle \frac{{\gamma _{3}}}{2}{\big\| {z^{k+1}}-Ku\big\| _{2}^{2}}-\big\langle {\mu _{3}^{k}},{z^{k+1}}-Ku\big\rangle .\end{array}\right.\]
In what follows, we solve each subproblem in turn. First, fixing the variables $\{w,z,u\}$, the minimization of $\mathcal{L}$ with respect to v takes the form
(27)
\[ {v^{k+1}}=\mathrm{arg}\underset{v}{\min }\phi (v)+\frac{{\gamma _{1}}}{2}{\bigg\| v-\bigg(\nabla {u^{k}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg)\bigg\| _{2}^{2}}.\]
Together with the definitions of the OGS and MM algorithms, this leads to the reformulation
(28)
\[ {v^{k+1}}=\mathrm{arg}\underset{v}{\min }\frac{1}{2}{v^{T}}\Lambda \big({v^{k}}\big)v+\frac{{\gamma _{1}}}{2}{\bigg\| v-\bigg(\nabla {u^{k}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg)\bigg\| _{2}^{2}}.\]
Based on the preliminaries above, the solution for v is given by
(29)
\[ {v^{k+1}}={\bigg(I+\frac{1}{{\gamma _{1}}}\Lambda \big({v^{k}}\big)\bigg)^{-1}}\bigg(\nabla {u^{k}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg).\]
Subsequently, we turn to the subproblem for w. The second expression in (26) is equivalent to the more concise formula
(30)
\[ {w^{k+1}}=\mathrm{arg}\underset{w}{\min }\alpha {\omega ^{k}}\| w{\| _{1}}+\frac{{\gamma _{2}}}{2}{\bigg\| w-\bigg({\nabla ^{2}}{u^{k}}+\frac{{\mu _{2}^{k}}}{{\gamma _{2}}}\bigg)\bigg\| _{2}^{2}}.\]
For notational simplicity, denote ${X^{k}}={\nabla ^{2}}{u^{k}}+{\mu _{2}^{k}}/{\gamma _{2}}$. Problem (30) is solved explicitly by soft thresholding with the shrink operator; concretely, it takes the form
(31)
\[ {w^{k+1}}=\mathrm{shrink}\bigg({X^{k}},\frac{\alpha {\omega ^{k}}}{{\gamma _{2}}}\bigg)=\max \bigg\{\big|{X^{k}}\big|-\frac{\alpha {\omega ^{k}}}{{\gamma _{2}}},0\bigg\}\cdot \mathrm{sgn}\big({X^{k}}\big),\]
with the 1D shrinkage formula being defined by
(32)
\[ \mathrm{shrink}(t,\rho )=\max \big(|t|-\rho ,0\big)\cdot \mathrm{sgn}(t).\]
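The shrink operator (32) translates directly into vectorized NumPy:

```python
import numpy as np

def shrink(t, rho):
    # soft thresholding (32): max(|t| - rho, 0) * sgn(t), applied
    # componentwise; this is exactly the w-update (31)
    return np.maximum(np.abs(t) - rho, 0.0) * np.sign(t)
```

Entries with magnitude below the threshold ρ are set to zero, which is what produces the sparsity of w.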
Thereafter, the subproblem for z corresponds to the equivalent minimization problem
(33)
\[ {z^{k+1}}=\mathrm{arg}\underset{z}{\min }\lambda \langle 1,z-f\log z\rangle +\frac{{\gamma _{3}}}{2}{\bigg\| z-\bigg(K{u^{k}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg)\bigg\| _{2}^{2}}.\]
By the variational method, this yields the first-order optimality condition
(34)
\[ 0=\lambda \bigg(1-\frac{f}{{z^{k+1}}}\bigg)+{\gamma _{3}}\bigg({z^{k+1}}-K{u^{k}}-\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg).\]
More explicitly, using the quadratic root formula, we obtain the closed-form solution
(35)
\[ {z^{k+1}}=\frac{K{u^{k}}-\frac{\lambda }{{\gamma _{3}}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}}{2}+\sqrt{{\bigg(\frac{K{u^{k}}-\frac{\lambda }{{\gamma _{3}}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}}{2}\bigg)^{2}}+\frac{\lambda }{{\gamma _{3}}}f}.\]
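The closed form (35) is the positive root of the quadratic obtained by multiplying (34) by z; it can be verified directly. A componentwise NumPy sketch (function name is ours):

```python
import numpy as np

def z_update(Ku, f, mu3, lam, gamma3):
    # positive root (35) of the stationarity condition (34);
    # all operations are componentwise on images
    half = (Ku - lam / gamma3 + mu3 / gamma3) / 2.0
    return half + np.sqrt(half ** 2 + (lam / gamma3) * f)
```

Since $f>0$, the square root strictly exceeds $|half|$, so the update always returns a positive z, as the log term in (33) requires.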
The last step of the alternating method is to minimize with respect to u. From the fourth equation in (26), we obtain
(36)
\[\begin{aligned}{}{u^{k+1}}& =\mathrm{arg}\underset{u}{\min }\frac{{\gamma _{1}}}{2}{\bigg\| \nabla u-{v^{k+1}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg\| _{2}^{2}}+\frac{{\gamma _{2}}}{2}{\bigg\| {\nabla ^{2}}u-{w^{k+1}}+\frac{{\mu _{2}^{k}}}{{\gamma _{2}}}\bigg\| _{2}^{2}}\\ {} & \hspace{1em}+\frac{{\gamma _{3}}}{2}{\bigg\| Ku-{z^{k+1}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg\| _{2}^{2}}.\end{aligned}\]
Since subproblem (36) is a simple quadratic problem, its Euler–Lagrange equation can be derived as
(37)
\[\begin{aligned}{}0& ={\gamma _{1}}{\nabla ^{\ast }}\bigg(\nabla {u^{k+1}}-{v^{k+1}}+\frac{{\mu _{1}^{k}}}{{\gamma _{1}}}\bigg)+{\gamma _{2}}{\big({\nabla ^{2}}\big)^{\ast }}\bigg({\nabla ^{2}}{u^{k+1}}-{w^{k+1}}+\frac{{\mu _{2}^{k}}}{{\gamma _{2}}}\bigg)\\ {} & \hspace{1em}+{\gamma _{3}}{K^{\ast }}\bigg(K{u^{k+1}}-{z^{k+1}}+\frac{{\mu _{3}^{k}}}{{\gamma _{3}}}\bigg),\end{aligned}\]
where ∗ indicates the conjugate (adjoint) operator. Rearranging terms, the above formulation can be written as
(38)
\[\begin{aligned}{}& \big[{\gamma _{1}}{\nabla ^{\ast }}\nabla +{\gamma _{2}}{\big({\nabla ^{2}}\big)^{\ast }}{\nabla ^{2}}+{\gamma _{3}}{K^{\ast }}K\big]{u^{k+1}}\\ {} & \hspace{1em}={\nabla ^{\ast }}\big({\gamma _{1}}{v^{k+1}}-{\mu _{1}^{k}}\big)+{\big({\nabla ^{2}}\big)^{\ast }}\big({\gamma _{2}}{w^{k+1}}-{\mu _{2}^{k}}\big)+{K^{\ast }}\big({\gamma _{3}}{z^{k+1}}-{\mu _{3}^{k}}\big),\end{aligned}\]
with the periodic boundary condition for u. Note that the operators ${K^{\ast }}K$, ${\nabla ^{\ast }}\nabla $ and ${({\nabla ^{2}})^{\ast }}{\nabla ^{2}}$ all have block circulant structure. Hence, ${u^{k+1}}$ can be computed efficiently using the fast Fourier transform (FFT) and its inverse. For convenience, let ${\upsilon _{1}^{k+1}}={\gamma _{1}}{v^{k+1}}-{\mu _{1}^{k}}$, ${\upsilon _{2}^{k+1}}={\gamma _{2}}{w^{k+1}}-{\mu _{2}^{k}}$ and ${\upsilon _{3}^{k+1}}={\gamma _{3}}{z^{k+1}}-{\mu _{3}^{k}}$. Therefore, we have
(39)
\[ {u^{k+1}}={\mathcal{F}^{-1}}\bigg(\frac{\mathcal{F}{(\nabla )^{\ast }}\circ \mathcal{F}({\upsilon _{1}^{k+1}})+\mathcal{F}{({\nabla ^{2}})^{\ast }}\circ \mathcal{F}({\upsilon _{2}^{k+1}})+\mathcal{F}{(K)^{\ast }}\circ \mathcal{F}({\upsilon _{3}^{k+1}})}{{\gamma _{1}}\mathcal{F}{(\nabla )^{\ast }}\circ \mathcal{F}(\nabla )+{\gamma _{2}}\mathcal{F}{({\nabla ^{2}})^{\ast }}\circ \mathcal{F}({\nabla ^{2}})+{\gamma _{3}}\mathcal{F}{(K)^{\ast }}\circ \mathcal{F}(K)}\bigg),\]
where $\mathcal{F}$ stands for the FFT operator and “∘” denotes componentwise multiplication. Alternatively, the discrete cosine transform can be employed to solve (38) under the Neumann boundary condition.
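The Fourier-domain solve can be sketched as follows. Under periodic boundaries, ${\nabla ^{\ast }}\nabla $ has the symbol $|\mathcal{F}({\partial _{x}}){|^{2}}+|\mathcal{F}({\partial _{y}}){|^{2}}$, and the Hessian term ${({\nabla ^{2}})^{\ast }}{\nabla ^{2}}$ has exactly its square. The `otf` helper (a psf2otf analogue) and the scalar right-hand-side interface are our own simplifications of (39):

```python
import numpy as np

def otf(psf, shape):
    # embed a small kernel in an image-sized array, circularly shift
    # its centre to the origin, then FFT: the transfer function of the
    # corresponding periodic (block-circulant) operator
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=ax)
    return np.fft.fft2(pad)

def u_update(rhs, g1, g2, g3, K_psf=np.ones((1, 1))):
    # Fourier solve of (38): `rhs` is the assembled right-hand side.
    # Needs g3 > 0 and a blur kernel with nonzero mean so that the
    # denominator never vanishes (the difference symbols are 0 at DC).
    shape = rhs.shape
    Dx = otf(np.array([[1.0, -1.0]]), shape)
    Dy = otf(np.array([[1.0], [-1.0]]), shape)
    Kf = otf(K_psf, shape)
    lap = np.abs(Dx) ** 2 + np.abs(Dy) ** 2        # symbol of grad* grad
    denom = g1 * lap + g2 * lap ** 2 + g3 * np.abs(Kf) ** 2
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

With ${\gamma _{1}}={\gamma _{2}}=0$ and K the identity the system reduces to $u=\text{rhs}$, a convenient sanity check.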
To recap, putting the above steps together, we arrive at the following alternating direction method of multipliers (ADMM) for the proposed optimization problem (21).
Algorithm 1
ADMM for solving the problem (21)
Remark 1.
Algorithm 1 consists of four subproblems, and it is worth discussing the computational complexity of each. First, the v-subproblem can be executed in linear time, $O({n_{1}}{n_{2}})$, for an ${n_{1}}\times {n_{2}}$ image. Second, the solution of the w-subproblem is given by shrinkage, so its cost is also linear in ${n_{1}}{n_{2}}$. The z-subproblem likewise has complexity $O({n_{1}}{n_{2}})$. Lastly, the u-subproblem is computed via the FFT and its inverse, and thus needs $O({n_{1}}{n_{2}}\log ({n_{1}}{n_{2}}))$ arithmetic operations.

4 Experimental Results

In this section, we test some standard images, corrupted by different blurs and noise intensities, to demonstrate the performance of the proposed model for Poissonian image restoration. Note that Poisson noise is simulated using the MATLAB library function “poissrnd(I)”. Our strategy is compared with the classical TV, FOTV, HOTV and OGS-TV models in terms of visual effects and reconstruction accuracy. All experiments are performed under Windows 7 and MATLAB R2011b on a PC with an Intel(R) Core(TM) i5-6500U CPU at 3.20 GHz and 4 GB of RAM.
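The same degradation can be reproduced outside MATLAB with NumPy's Poisson sampler. The scaling by the peak value P (smaller P means stronger noise, as in the experiments below) follows the usual convention; the helper name is our own:

```python
import numpy as np

def add_poisson_noise(img, peak, seed=None):
    # scale the clean image so its maximum equals `peak`, draw Poisson
    # counts per pixel (the analogue of MATLAB's poissrnd), then scale
    # back to the original range
    rng = np.random.default_rng(seed)
    img = np.asarray(img, dtype=float)
    scale = peak / img.max()
    return rng.poisson(img * scale) / scale
```

For large P the relative noise vanishes (Poisson variance equals the mean), which is why lowering P from 90 to 60 in the later experiments increases the noise density.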
During the simulations, we use the widely adopted peak signal-to-noise ratio (PSNR) criterion as a measure of restored image quality, defined by
(40)
\[ \mathrm{PSNR}=10{\log _{10}}\bigg(\frac{{P^{2}}}{\mathrm{MSE}}\bigg),\hspace{1em}\text{with MSE}=\frac{{\textstyle\textstyle\sum _{i=1}^{{n_{1}}}}{\textstyle\textstyle\sum _{j=1}^{{n_{2}}}}{({u_{i,j}}-{\tilde{u}_{i,j}})^{2}}}{{n_{1}}\times {n_{2}}},\]
where P denotes the peak value of the image, u is the original image, $\tilde{u}$ is the reconstructed one, and ${n_{1}}\times {n_{2}}$ is the image size. For a fairer comparison, perceptual quality is measured by computing the structural similarity (SSIM, Wang et al., 2004) index
(41)
\[ \mathrm{SSIM}=\frac{(2{\mu _{u}}{\mu _{\tilde{u}}}+{C_{1}})(2{\sigma _{u\tilde{u}}}+{C_{2}})}{({\mu _{u}^{2}}+{\mu _{\tilde{u}}^{2}}+{C_{1}})({\sigma _{u}^{2}}+{\sigma _{\tilde{u}}^{2}}+{C_{2}})},\]
with ${\mu _{u}}$ and ${\mu _{\tilde{u}}}$ being the means of u and $\tilde{u}$, respectively, and ${\sigma _{u}}$, ${\sigma _{\tilde{u}}}$ their standard deviations. Moreover, ${\sigma _{u\tilde{u}}}$ is the covariance between u and $\tilde{u}$, and ${C_{1}}$, ${C_{2}}$ are two small constants that avoid instability in (41). We also adopt the feature similarity (FSIM, Zhang et al., 2011) criterion to evaluate the feature-preserving ability of the different methods.
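The PSNR of (40) is straightforward to compute; a minimal sketch (SSIM and FSIM have reference implementations elsewhere, e.g. in scikit-image, and are omitted here):

```python
import numpy as np

def psnr(u, u_hat, peak=None):
    # PSNR (40); `peak` defaults to the maximum value of the reference
    u = np.asarray(u, dtype=float)
    u_hat = np.asarray(u_hat, dtype=float)
    peak = u.max() if peak is None else peak
    mse = np.mean((u - u_hat) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means smaller mean squared error relative to the dynamic range; halving the pixelwise error raises the PSNR by about 6 dB.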
The common parameters for the subsequent experiments are set to $L=3$ and $p=0.1$, and the number of MM iterations is fixed at 5. The three penalty parameters ${\gamma _{1}}$, ${\gamma _{2}}$ and ${\gamma _{3}}$ are updated using the rule stated in Chan et al. (2011), with initial value 0.1. In addition, we set $\alpha =3$ for image denoising and $\alpha =5$ for image deblurring. As a stopping condition, each algorithm is terminated when
(42)
\[ \frac{\| {u^{k+1}}-{u^{k}}{\| _{2}}}{\| {u^{k+1}}{\| _{2}}}<{10^{-4}}.\]
In this simulation, we compare the results of five different methods on two test images for denoising. The first original image, Peppers, of size $256\times 256$ pixels, is shown in the upper left of Fig. 1. Its noisy version, displayed in Fig. 1(b), is obtained by adding Poisson noise to the clean image with peak value $P=90$. Figures 1(c)–(g) present the visual comparison of the TV, FOTV, HOTV, OGS-TV models and our proposed scheme. Our model is implemented with the parameter $\lambda =65$. For a more convincing comparison, Fig. 2 presents zoomed-in regions of the restorations. The same experiment is performed on the image Sailboats ($512\times 512$), as shown in Fig. 3, with the adjusted parameter $\lambda =60$. Meanwhile, the quantitative PSNR, SSIM and FSIM values and the CPU time (in seconds) are listed in Table 1.
Fig. 1
Restoration results for the image Peppers ($P=90$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Fig. 2
Zoomed-in results for the image Peppers ($P=90$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Fig. 3
Restoration results for the image Sailboats ($P=90$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Table 1
Comparison of the performance via five different methods.
Model Peppers ($P=90$) Sailboats ($P=90$)
Iter Time (s) PSNR SSIM FSIM Iter Time (s) PSNR SSIM FSIM
TV 36 1.3103 29.9055 0.9568 0.9653 38 4.8038 30.8020 0.9563 0.9787
FOTV 38 1.4198 29.9683 0.9577 0.9665 37 4.8965 30.5041 0.9526 0.9769
HOTV 76 2.3989 30.3281 0.9624 0.9692 79 9.3371 30.9519 0.9570 0.9785
OGS-TV 41 1.6928 30.4509 0.9613 0.9709 22 3.4210 31.3271 0.9598 0.9811
Ours 31 1.8225 30.8580 0.9640 0.9730 33 7.8308 31.7135 0.9633 0.9836
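The PSNR figures reported in Tables 1–4 follow the standard definition based on the mean squared error. A minimal sketch, assuming 8-bit images with data range 255 (the function signature is illustrative):

```python
import numpy as np

def psnr(clean, restored, data_range=255.0):
    # Peak signal-to-noise ratio in dB for the given data range.
    mse = np.mean((np.asarray(clean, float) - np.asarray(restored, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM and FSIM are structure-based indices (Wang et al., 2004; Zhang et al., 2011) and require their respective reference implementations.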
The second experiment further verifies the denoising ability and effectiveness of our proposed approach. For this purpose, we increase the Poisson noise density by setting a smaller peak value. Two grayscale images, Bird and Boats, are depicted in Figs. 4(a) and 5(a): the Bird image, of size $256\times 256$, contains rich contours and details, while the other, of size $512\times 512$ pixels, has a complex background. Figures 4(b) and 5(b) show the contaminated images with $P=60$. The remaining parts of Figs. 4 and 5 are the corresponding restoration results of the above five techniques on the two images, respectively. It is worth mentioning that, in this case, the proposed strategy is computed with the same settings as in the Sailboats simulation. Moreover, Table 2 reports the detailed quantitative comparisons in terms of the PSNR, SSIM and FSIM values and the CPU time.
infor480_g005.jpg
Fig. 4
Restoration results for the image Bird ($P=60$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
infor480_g006.jpg
Fig. 5
Restoration results for the image Boats ($P=60$) by using five methods. (a) original image, (b) noisy image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
As is evident from both the visual results and the objective data, the TV-based approach preserves object boundaries well but produces serious staircase artifacts in smooth regions. On the contrary, these artifacts disappear when the HOTV regularizer is utilized, but it loses important structural features owing to excessive smoothing of the edges. The images restored by the FOTV model exhibit no staircasing but show a little residual noise. The OGS-TV model still produces slight piecewise-constant artifacts while denoising. As can be observed, the proposed model achieves better image quality than the other models, and keeps sharp and neat edges with less blocky effect.
The third experiment tests the ability to restore blurred and noisy images. The standard test images Saturn and Lady, which are of size $256\times 256$ pixels and consist of homogeneous regions and abundant features, are chosen for this example. Their degraded versions, shown in Figs. 6(b) and 7(b), are corrupted by Gaussian blur with a kernel size of $7\times 7$ pixels and standard deviation 2, and then contaminated by Poisson noise with $P=255$. In this case, the weighting parameter is set to $\lambda =300$ for the former and $\lambda =370$ for the latter. The recovered results and comparisons of the different models are shown in Figs. 6(c)–(g), 7(c)–(g) and Table 3.
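The $7\times 7$ Gaussian kernel with standard deviation 2 used above can be generated directly; the sketch below is a plain-numpy counterpart of MATLAB's `fspecial('gaussian', 7, 2)` (the function name is our own):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    # Normalized 2-D Gaussian blur kernel on a size x size grid.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```

The kernel is symmetric, peaks at the centre, and sums to one, so convolving with it preserves the mean intensity.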
Table 2
Comparison of the performance via five different methods.
Model Bird ($P=60$) Boats ($P=60$)
Iter Time (s) PSNR SSIM FSIM Iter Time (s) PSNR SSIM FSIM
TV 52 1.8598 28.1826 0.9622 0.9672 45 5.8043 28.4419 0.9527 0.9806
FOTV 53 1.9209 28.2058 0.9582 0.9698 50 6.5016 28.3214 0.9520 0.9813
HOTV 84 2.7963 28.2452 0.9620 0.9703 75 9.2308 28.6667 0.9556 0.9840
OGS-TV 44 1.9079 28.4524 0.9641 0.9715 34 5.5546 28.7160 0.9558 0.9807
Ours 35 2.1098 28.9880 0.9673 0.9746 35 8.3198 29.3100 0.9605 0.9848
infor480_g007.jpg
Fig. 6
Restoration results for the image Saturn ($P=255$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
In the last experiment, to evaluate the performance of the proposed model for Poissonian image restoration more comprehensively, classical motion blur is added to the simulation. Here we take the images Moon ($358\times 537$) and Lena ($512\times 512$), which contain flat zones and neat contours, shown in Figs. 8(a) and 9(a), as the objects for joint denoising and deblurring. Figures 8(b) and 9(b) show the contaminated images, blurred by motion blur with motion distance $l=10$ and angle $\theta =45^{\circ }$, and corrupted by Poisson noise with the noise level $P=200$. Here, we set the parameter $\lambda =360$ for the Moon image and $\lambda =300$ for the Lena image. Figures 8 and 9 present the visual restorations by the five methods, and the PSNR, SSIM and FSIM values of the reconstructions are listed in Table 4.
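A motion-blur kernel with distance $l=10$ and angle $\theta =45^{\circ }$ spreads unit mass along a line segment. The sketch below is a simplified stand-in for MATLAB's `fspecial('motion', 10, 45)` (which additionally applies subpixel weighting); the rasterization by rounding is our own approximation:

```python
import numpy as np

def motion_kernel(length=10, angle_deg=45.0):
    # Spread unit mass along a line of the given length and angle,
    # then normalize so the kernel sums to one.
    k = np.zeros((length, length))
    c = (length - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 4 * length):
        row = int(round(c - t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        k[row, col] += 1.0
    return k / k.sum()
```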
infor480_g008.jpg
Fig. 7
Restoration results for the image Lady ($P=255$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Table 3
Comparison of the performance via five different methods.
Model Saturn ($P=255$) Lady ($P=255$)
Iter Time (s) PSNR SSIM FSIM Iter Time (s) PSNR SSIM FSIM
TV 133 4.6795 33.0938 0.9416 0.9367 85 3.0500 28.9949 0.8478 0.8586
FOTV 73 2.6083 33.3967 0.9453 0.9551 44 1.5527 28.2643 0.8419 0.8833
HOTV 105 3.4096 33.1280 0.9468 0.9561 97 3.1181 28.6701 0.8593 0.8884
OGS-TV 41 1.6945 33.4080 0.9470 0.9543 43 1.8380 29.1683 0.8595 0.8881
Ours 37 2.2526 33.6515 0.9473 0.9563 30 1.9765 29.5221 0.8656 0.9012
infor480_g009.jpg
Fig. 8
Restoration results for the image Moon ($P=200$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
infor480_g010.jpg
Fig. 9
Restoration results for the image Lena ($P=200$) by using five methods. (a) original image, (b) degraded image, (c) TV, (d) FOTV, (e) HOTV, (f) OGS-TV, (g) our model.
Overall, it is clear that the staircase-like and fuzzy effects caused by the TV-based model inevitably degrade the image quality. The HOTV model avoids the staircase artifacts, but the resulting images contain inaccurate edges and blurred contours. Even though the FOTV and OGS-TV models reduce the staircase artifacts to some extent, they still lead to residual noise or staircase distortion. In contrast, our approach, which combines the advantages of the nonconvex second-order regularizer and the OGS-TV function, performs prominently in comparison with the other popular techniques: it not only substantially suppresses the artifacts but also yields sharper edges. Quantitatively, this results in restorations possessing the best PSNR, SSIM and FSIM values in Tables 3 and 4.
Table 4
Comparison of the performance via five different methods.
Model Moon ($P=200$) Lena ($P=200$)
Iter Time (s) PSNR SSIM FSIM Iter Time (s) PSNR SSIM FSIM
TV 80 10.3288 33.3848 0.8623 0.8877 62 8.0070 28.6305 0.8349 0.9298
FOTV 64 8.8045 33.2102 0.8915 0.9084 42 5.4066 28.2719 0.8363 0.9364
HOTV 128 11.8345 33.6125 0.8743 0.9115 92 10.5619 28.7451 0.8497 0.9450
OGS-TV 59 7.0703 33.8428 0.9079 0.8986 36 5.3908 28.8995 0.8474 0.9414
Ours 36 6.7205 34.1525 0.9121 0.9140 33 7.8365 29.1134 0.8540 0.9492

5 Conclusion

In this paper, a novel hybrid nonconvex variational model has been proposed to reconstruct images degraded by Poisson noise. The introduced scheme closely integrates the strengths of the OGS-TV and nonconvex HOTV regularizers, a combination that improves the accuracy of image restoration. To efficiently find the optimal solution of the minimization problem, we developed a modified alternating direction method of multipliers that incorporates the iteratively reweighted ${\ell _{1}}$ algorithm and the majorization-minimization method. Finally, in terms of reducing staircase artifacts and preserving edge features, numerous simulation examples, compared against other widely applied regularization models, have demonstrated that the presented strategy performs better for the restoration of Poissonian images.

References

 
Adam, T., Paramesran, R. (2019). Image denoising using combined higher order non-convex total variation with overlapping group sparsity. Multidimensional Systems and Signal Processing, 30(1), 503–527.
 
Bardsley, J.M., Goldes, J. (2011). Techniques for regularization parameter and hyper-parameter selection in PET and SPECT imaging. Inverse Problems in Science and Engineering, 19(2), 267–280.
 
Bayram, I. (2011). Mixed norms with overlapping groups as signal priors. In: IEEE International Conference on Acoustics Speech and Signal Processing. IEEE, Piscataway, pp. 4036–4039.
 
Bertsekas, D., Tsitsiklis, J. (1997). Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont.
 
Bonettini, S., Zanella, R., Zanni, L. (2009). A scaled gradient projection method for constrained image deblurring. Inverse Problems, 25(1), 015002.
 
Bredies, K., Kunisch, K., Pock, T. (2010). Total generalized variation. SIAM Journal on Imaging Sciences, 3(3), 492–526.
 
Candes, E.J., Wakin, M.B., Boyd, S.P. (2008). Enhancing sparsity by reweighted ${\ell _{1}}$ minimization. Journal of Fourier Analysis and Applications, 14(5-6), 877–905.
 
Chan, S.H., Khoshabeh, R., Gibson, K.B., Gill, P.E., Nguyen, T.Q. (2011). An augmented Lagrangian method for total variation video restoration. IEEE Transactions on Image Processing, 20(11), 3097–3111.
 
Chan, T., Marquina, A., Mulet, P. (2000). High-order total variation-based image restoration. SIAM Journal on Scientific Computing, 22(2), 503–516.
 
Chartrand, R. (2007). Nonconvex regularization for shape preservation. In: IEEE International Conference on Image Processing. IEEE, Piscataway, pp. 293–296.
 
Chowdhury, M.R., Qin, J., Lou, Y. (2020). Non-blind and blind deconvolution under Poisson noise using fractional-order total variation. Journal of Mathematical Imaging and Vision, 62(9), 1238–1255.
 
Figueiredo, M., Bioucas-Dias, J. (2010). Restoration of Poissonian images using alternating direction optimization. IEEE Transactions on Image Processing, 19(12), 3133–3145.
 
Figueiredo, M., Bioucas-Dias, J., Nowak, R. (2007). Majorization-minimization algorithms for wavelet-based image restoration. IEEE Transactions on Image Processing, 16(12), 2980–2991.
 
Gabay, D., Mercier, B. (1976). A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Computers & Mathematics with Applications, 2(1), 17–40.
 
Gilboa, G., Osher, S. (2008). Nonlocal operators with applications to image processing. Multiscale Modeling & Simulation, 7(3), 1005–1028.
 
Hunter, D.R., Lange, K. (2004). A tutorial on MM algorithms. American Statistician, 58(1), 30–37.
 
Liu, X. (2016). Augmented Lagrangian method for total generalized variation based Poissonian image restoration. Computers & Mathematics with Applications, 71(8), 1694–1705.
 
Liu, X. (2021). Nonconvex total generalized variation model for image inpainting. Informatica, 32(2), 357–370.
 
Liu, X., Huang, L. (2012). Total bounded variation-based Poissonian images recovery by split Bregman iteration. Mathematical Methods in the Applied Sciences, 35(5), 520–529.
 
Lv, X.G., Jiang, L., Liu, J. (2016). Deblurring Poisson noisy images by total variation with overlapping group sparsity. Applied Mathematics and Computation, 289, 132–148.
 
Lysaker, M., Lundervold, A., Tai, X.-C. (2003). Noise removal using fourth order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Transactions on Image Processing, 12(12), 1579–1590.
 
Nikolova, M., Ng, M.K., Tam, C.P. (2010). Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Transactions on Image Processing, 19(12), 3073–3088.
 
Ochs, P., Dosovitskiy, A., Brox, T., Pock, T. (2015). On iteratively reweighted algorithms for nonsmooth nonconvex optimization in computer vision. SIAM Journal on Imaging Sciences, 8(1), 331–372.
 
Oh, S., Woo, H., Yun, S., Kang, M. (2013). Non-convex hybrid total variation for image denoising. Journal of Visual Communication and Image Representation, 24(3), 332–344.
 
Peyre, G., Fadili, J. (2011). Group sparsity with overlapping partition functions. In: 19th European Signal Processing Conference. IEEE, Piscataway, pp. 303–307.
 
Pham, C.T., Tran, T.T.T., Gamard, G. (2020). An efficient total variation minimization method for image restoration. Informatica, 31(3), 539–560.
 
Sarder, P., Nehorai, A. (2006). Deconvolution methods for 3-D fluorescence microscopy images. IEEE Signal Processing Magazine, 23(3), 32–45.
 
Selesnick, I.W., Chen, P.Y. (2013). Total variation denoising with overlapping group sparsity. In: IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, Piscataway, pp. 5696–5700.
 
Setzer, S., Steidl, G., Teuber, T. (2010). Deblurring Poissonian images by split Bregman techniques. Journal of Visual Communication and Image Representation, 21(3), 193–199.
 
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
 
Zhang, L., Zhang, L., Mou, X., Zhang, D. (2011). FSIM: a feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8), 2378–2386.


Copyright
© 2022 Vilnius University
Open access article under the CC BY license.

Keywords
image restoration Poisson noise nonconvex regularizer overlapping group sparsity alternating direction method of multipliers

Funding
This work was supported by Hunan Provincial Natural Science Foundation of China (2020JJ4285) and Scientific Research Fund of Hunan Provincial Education Department (19B215).



INFORMATICA

  • Online ISSN: 1822-8844
  • Print ISSN: 0868-4952
  • Copyright © 2023 Vilnius University
