1 Introduction
Image acquisition is a ubiquitous technology, found for example in photography, medical imaging, and astronomy. Nevertheless, in almost all situations, the image-capturing devices are imperfect: some unwanted noise is added to the signal. Therefore, the captured images are post-processed by numerical algorithms before being delivered to the users; those algorithms have to solve the image restoration problem.
In the image restoration problem, an original image u is corrupted by some random noise η, resulting in a noisy image f. Our task is to reconstruct u, knowing both f and the distribution of η. Of course, there is in general no way to recover the exact image u; image restoration algorithms rather yield a good approximation of u, usually denoted ${u^{\ast }}$. To do so, they exploit a priori knowledge about the image u.
Various distributions have been considered for the noise, e.g. Gaussian (Rudin et al., 1992; Pham and Kopylov, 2015), Poisson (Chan and Shen, 2005; Le et al., 2007), Cauchy (Sciacchitano et al., 2015), as well as some mixed noise models, e.g. mixed Gaussian–impulse noise (Yan, 2013), mixed Gaussian–salt-and-pepper noise (Liu et al., 2017), and mixed Poisson–Gaussian noise (Calatroni et al., 2017; Pham et al., 2018; Tran et al., 2019).
A growing interest in Poisson–Gaussian probabilistic models has recently arisen (Chouzenoux et al., 2015). The mixture of Poisson and Gaussian noise occurs in several practical setups (e.g. microscopy, astronomy), where the sensors used to capture images have two sources of noise: a signal-dependent source, which comes from the way light intensity is measured, and a signal-independent source, which is simply thermal and electronic noise. Purely Gaussian noise is additive and signal-independent, so it cannot properly approximate the Poisson–Gaussian distributions observed in practice, which are strongly signal-dependent.
In general, the mixed Poisson–Gaussian noise model can be expressed as follows:
\[ f=\mathcal{P}(u)+W,\tag{1}\]
where f is the observed image, u is the unknown image, $\mathcal{P}(u)$ means that the image u is corrupted by Poisson noise, and $W\sim \mathcal{N}(0,{\sigma ^{2}})$ is Gaussian noise with zero mean and variance ${\sigma ^{2}}$.
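As a sanity check of model (1), the mixed noise can be simulated numerically. The sketch below (NumPy; the function name and intensity values are illustrative, not from the paper) exhibits the signal-dependent variance $u+{\sigma ^{2}}$ of the mixture:

```python
import numpy as np

def add_mixed_noise(u, sigma, rng=None):
    """Simulate the mixed Poisson-Gaussian model f = P(u) + W:
    the clean intensities u drive a Poisson count (signal-dependent part),
    then zero-mean Gaussian noise with variance sigma^2 is added."""
    rng = np.random.default_rng(rng)
    return rng.poisson(u) + rng.normal(0.0, sigma, size=u.shape)

u = np.full((256, 256), 50.0)          # constant clean image, intensity 50
f = add_mixed_noise(u, sigma=5.0, rng=0)
# The variance of the mixture is signal-dependent: Var(f) = u + sigma^2 = 75
print(abs(f.mean() - 50.0) < 1.0, abs(f.var() - 75.0) < 5.0)
```

Note that the Poisson part scales with the intensity u, which is why a purely additive Gaussian model cannot reproduce this behaviour.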
Recently, several approaches have been devoted to the mixed Poisson–Gaussian noise model (Foi et al., 2008; Jezierska et al., 2011; Lanza et al., 2014; Le Montagner et al., 2014). Many algorithms for denoising images corrupted by mixed Poisson–Gaussian noise have been investigated, using approximations based on variance-stabilizing transforms (Zhang et al., 2007; Makitalo and Foi, 2013) or PURE-LET based approaches (Luisier et al., 2011; Li et al., 2018). Variational models based on the Bayesian framework have also been proposed for the denoising and deconvolution of images corrupted by mixed Poisson–Gaussian noise (Calatroni et al., 2017); this framework is perhaps the most popular approach to the mixed Poisson–Gaussian noise model. De Los Reyes and Schönlieb (2013) proposed a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation image denoising. Lanza et al. (2014) used the maximum a posteriori approach to derive a variational formulation composed of the total variation (TV) regularization term and two fidelity terms. A weighted squared ${L_{2}}$ norm noise approximation was proposed for mixed Poisson–Gaussian noise in Li et al. (2015), and an efficient primal-dual algorithm was proposed in Chouzenoux et al. (2015) by investigating the properties of the Poisson–Gaussian negative log-likelihood as a convex, Lipschitz-differentiable function. Recently, Marnissi et al. (2016) proposed a variational Bayesian method for Poisson–Gaussian noise, using an exact Poisson–Gaussian likelihood. Similarly, Calatroni et al. (2017) proposed a variational approach which combines, by infimal convolution, the standard data fidelities classically associated with single-noise distributions, together with a TV regularizing energy. Generally, image restoration by variational models based on TV can be a good solution to the mixed Poisson–Gaussian noise removal problem, with the following formulation (Calatroni et al., 2017; Pham et al., 2019):
\[ \underset{u\in S(\Omega )}{\min }\hspace{2.5pt}E(u)={\int _{\Omega }}|\nabla u|\, dx+\frac{{\lambda _{1}}}{2}{\int _{\Omega }}{(u-f)^{2}}\, dx+{\lambda _{2}}{\int _{\Omega }}(u-f\log u)\, dx,\tag{2}\]
where f is the observed image, $\Omega \subset {\mathbb{R}^{2}}$ is a bounded domain, and $S(\Omega )$ is the set of positive functions from Ω to $\mathbb{R}$; finally, ${\lambda _{1}},{\lambda _{2}}$ are positive regularization parameters (see Chan and Shen, 2005, for details on this method).
However, in some cases, intermediate solutions of (2) obtained during the execution of algorithms may contain pixels with negative values. To avoid this problem, Pham et al. (2018) proposed a modified scheme of gradient descent (MSGD) that guarantees positive values for each pixel in the image domain.
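To illustrate why such a safeguard is needed, consider a plain gradient step on the two fidelity terms of (2): with a large time step it can push pixels below zero, whereas a projected step keeps the iterate in the positive cone. The sketch below is a generic positivity projection for illustration only, not the MSGD scheme of Pham et al. (2018):

```python
import numpy as np

def fidelity_grad(u, f, lam1=1.0, lam2=1.0):
    # Gradient of the data-fidelity terms of the TV model:
    # (lam1/2)*(u-f)^2 + lam2*(u - f*log u)  ->  lam1*(u-f) + lam2*(1 - f/u)
    return lam1 * (u - f) + lam2 * (1.0 - f / u)

def gd_step(u, f, tau):
    # Plain gradient step: with a large tau it may leave the positive cone.
    return u - tau * fidelity_grad(u, f)

def safeguarded_step(u, f, tau, eps=1e-6):
    # Hypothetical safeguard (not the MSGD update itself): project the
    # iterate back onto positive values, as required by S(Omega).
    return np.maximum(gd_step(u, f, tau), eps)

f = np.array([0.05, 2.0, 4.0])     # noisy image with a near-zero pixel
u = np.full_like(f, 1.0)           # positive initial guess
u_plain = gd_step(u, f, tau=1.5)   # large time step
u_safe = safeguarded_step(u, f, tau=1.5)
print(u_plain.min() < 0, (u_safe > 0).all())
```

Positivity matters here because the Poisson fidelity term contains $\log u$, which is undefined for non-positive pixel values.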
In this work, we focus on the model (2) and consider the following model:
\[ \underset{u\in S(\Omega )}{\min }\hspace{2.5pt}E(u)={\int _{\Omega }}\alpha (x)|\nabla u|\, dx+\frac{{\lambda _{1}}}{2}{\int _{\Omega }}{(u-f)^{2}}\, dx+{\lambda _{2}}{\int _{\Omega }}(u-f\log u)\, dx,\tag{3}\]
where f is the observed image, ${\lambda _{1}}$ and ${\lambda _{2}}$ are positive regularization parameters, $S(\Omega )=\{u\in \text{BV}(\Omega ):u>0\}$ is closed and convex, with $\text{BV}(\Omega )$ being the space of functions $\Omega \to \mathbb{R}$ of bounded variation; finally, $\alpha (x)$ is a continuous function on Ω.
The function $\alpha (x)$ is used to control the intensity of the diffusion; it is an edge indicator for spatially adaptive image restoration (Barcelos and Chen, 2000). Typically, the function $\alpha (x)$ is chosen as follows:
\[ \alpha (x)=\frac{1}{1+{\big(v(x)/l\big)^{2}}},\]
where l is a threshold value and $v(x)=|\nabla {G_{\sigma }}(x)\ast f|$, in which ∗ denotes the convolution with ${G_{\sigma }}(x)=\frac{1}{2\pi {\sigma ^{2}}}\exp \big(-\frac{|x{|^{2}}}{2{\sigma ^{2}}}\big)$, i.e. the Gaussian filter with standard deviation σ.
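Assuming an edge-indicator of the Barcelos–Chen form $\alpha (x)=1/(1+{(v(x)/l)^{2}})$ (the exact form used here may differ), $\alpha (x)$ can be computed with standard tools. The sketch below (NumPy/SciPy; parameter values are illustrative) shows that α stays close to 1 in flat regions and drops near edges:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(f, sigma=1.0, l=0.1):
    """Adaptive weight alpha(x) = 1 / (1 + (v(x)/l)^2),
    with v = |grad(G_sigma * f)| approximated by finite differences."""
    smoothed = gaussian_filter(f, sigma)   # G_sigma * f
    gy, gx = np.gradient(smoothed)         # finite-difference gradient
    v = np.hypot(gx, gy)                   # |grad(G_sigma * f)|
    return 1.0 / (1.0 + (v / l) ** 2)

f = np.zeros((32, 32))
f[:, 16:] = 1.0                            # vertical step edge
alpha = edge_indicator(f, sigma=1.5, l=0.05)
# alpha ~ 1 in flat regions (strong smoothing) and small near the edge
# (weak smoothing, so the edge is preserved).
print(alpha[16, 0] > 0.99, alpha[16, 16] < 0.5)
```

This is precisely how α modulates the diffusion in (3): the TV term is weighted down where v is large, i.e. across edges.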
The main contributions of this paper are the following. We give an elementary proof of the existence and uniqueness of the solution of model (3). Moreover, we check that the functional $E(\cdot )$ is convex, which enables us to use larger time-step parameters during gradient descent when solving (3). We introduce the influence function $\alpha (x)$, which acts as an edge-detection function, into the model (3) in order to improve the ability to preserve edges and to control the speed of smoothing. In addition, we propose a new method to solve (3) that perceptibly improves the quality of the denoised images. By changing the time-step parameter, users can either get faster denoising with results comparable to previous methods, or better-quality denoising with comparable running times. Our method is a technical improvement over the split-Bregman algorithm. We report experimental results for the aforementioned method, for various parameters of the noise distribution. The quality of denoising is measured with the SSIM and PSNR metrics. If we tune the time-step parameter to get results of similar quality as the original split-Bregman method, we get faster running times.
The rest of the paper is organized as follows. In Section 2, we describe the Poisson–Gaussian model and introduce the notation used in this work. In Section 3, we prove the existence and uniqueness of the solution. In Section 4, using the split-Bregman algorithm, we present the proposed optimization framework. Next, in Section 5, we show some numerical results of our proposed method and we compare them with the results obtained with other existing methods. Finally, some conclusions are drawn in Section 6.
2 Preliminaries
We recall the principle behind equation (2). Note that the contents of this section are not a rigorous proof; we simply provide a bit of context around the equation: why it was considered in the first place, and one possible reason for its practical efficiency. We also state our assumptions on both the initial image and the noise along the way.
Our goal is to recover the original image u, knowing the noisy image f. Our strategy is to find the image u which maximizes the conditional probability $P(u|f)$. Bayes’ rule gives:
\[ P(u|f)=\frac{P(f|u)P(u)}{P(f)}.\tag{4}\]
The probability density function of the observed image f corrupted by Gaussian noise, ${P_{\mathcal{N}}}$ (respectively, by Poisson noise, ${P_{\mathcal{P}}}$), is:
\[ {P_{\mathcal{N}}}(f|u)=\frac{1}{\sqrt{2\pi }\sigma }\exp \Big(-\frac{{(f-u)^{2}}}{2{\sigma ^{2}}}\Big),\hspace{2em}{P_{\mathcal{P}}}(f|u)=\frac{{u^{f}}{e^{-u}}}{f!},\]
where ${\sigma ^{2}}$ is the variance of the Gaussian noise. As we explained in the introduction, the two sources of noise are independent of each other, so the distribution of the mixed noise may be expressed as:
\[ {P_{\mathrm{mixed}}}(f|u)={P_{\mathcal{N}}}(f|u)\cdot {P_{\mathcal{P}}}(f|u).\]
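For a single pixel, these densities and their product are straightforward to evaluate. The sketch below (plain Python; the function names are ours) shows that the product likelihood, seen as a function of u, favours values of u close to the observation:

```python
import math

def p_gauss(f, u, sigma):
    # Gaussian density of observing f given the clean value u
    return math.exp(-(f - u) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def p_poisson(f, u):
    # Poisson probability of the (integer) count f given mean u
    return u ** f * math.exp(-u) / math.factorial(f)

def p_mixed(f, u, sigma):
    # Product form of the mixed likelihood used in the text
    return p_gauss(f, u, sigma) * p_poisson(f, u)

# As a function of u, the mixed likelihood peaks near the observed value:
print(p_mixed(5, 5.0, 1.0) > p_mixed(5, 2.0, 1.0))  # True
```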
We assume that the values of the pixels in the original image are independent, and that the noise is also independent on each pixel. (However, we do not assume that the noise and the original image are independent of each other.) Suppose that f has size $M\times N$, and let $I=\{1,\dots ,M\}\times \{1,\dots ,N\}$ denote the domain of f. For i in I, we write ${f_{i}}$ for the pixel of f at position i (and similarly ${u_{i}}$ for the pixel of u at position i). Then,
\[ {P_{\mathrm{mixed}}}(f|u)=\prod \limits_{i\in I}s\exp \Big(-\frac{{({f_{i}}-{u_{i}})^{2}}}{2{\sigma ^{2}}}\Big)\cdot \frac{{u_{i}^{{f_{i}}}}{e^{-{u_{i}}}}}{{f_{i}}!},\]
with $s={(\sqrt{2\pi }\sigma )^{-1}}$. Maximizing ${P_{\mathrm{mixed}}}$ is equivalent to minimizing $-\log {P_{\mathrm{mixed}}}$, so let us compute the quantity $-\log ({P_{\mathrm{mixed}}}(f|u))$:
\[ -\log ({P_{\mathrm{mixed}}}(f|u))=y\sum \limits_{i\in I}{({f_{i}}-{u_{i}})^{2}}+\sum \limits_{i\in I}\big({u_{i}}-{f_{i}}\log {u_{i}}+\log ({f_{i}}!)\big)-|I|\log s\tag{5}\]
for some constant $y>0$ (namely $y=1/(2{\sigma ^{2}})$). In the above equation, u varies but f is constant. Since our goal is to minimize the whole expression, we can ignore the constant terms $\log ({f_{i}}!)$ and $-|I|\log s$ altogether.
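The algebra above can be checked numerically: taking $-\log $ of the per-pixel product must coincide with the split form of equation (5). A small sketch (plain Python; the helper names are ours):

```python
import math

def neg_log_mixed(f, u, sigma):
    # -log of the per-pixel product likelihood, computed directly
    s = 1.0 / (math.sqrt(2 * math.pi) * sigma)
    total = 0.0
    for fi, ui in zip(f, u):
        p = s * math.exp(-(fi - ui) ** 2 / (2 * sigma ** 2)) \
            * ui ** fi * math.exp(-ui) / math.factorial(fi)
        total -= math.log(p)
    return total

def neg_log_split(f, u, sigma):
    # The same quantity written as in (5): y*sum (f-u)^2
    # + sum (u - f log u + log f!) - |I| log s, with y = 1/(2 sigma^2)
    y = 1.0 / (2 * sigma ** 2)
    s = 1.0 / (math.sqrt(2 * math.pi) * sigma)
    out = y * sum((fi - ui) ** 2 for fi, ui in zip(f, u))
    out += sum(ui - fi * math.log(ui) + math.log(math.factorial(fi))
               for fi, ui in zip(f, u))
    return out - len(f) * math.log(s)

f = [3, 7, 1]          # integer photon counts
u = [2.5, 6.0, 1.2]    # candidate clean values
print(abs(neg_log_mixed(f, u, 1.0) - neg_log_split(f, u, 1.0)) < 1e-9)  # True
```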
Now we assume that $P(u)$ follows a Gibbs prior (Le et al., 2007):
\[ P(u)\propto \exp \Big(-\frac{1}{z}\int |\nabla u|\Big),\tag{6}\]
where z is a positive constant (the proportionality constant normalizes the distribution). We need to make a couple of comments here. First, u is not a function ${\mathbb{R}^{2}}\to \mathbb{R}$, but rather a discrete array of pixels; thus the integral in that expression is going to be translated into a sum, while $\nabla u$ will be translated into a linear approximation. Second, this assumption appears to contradict the previous one, namely that the pixels of the original image are independent of one another. However, the assumption on $P(u)$ is local: each pixel depends (weakly) on the neighbouring pixels only, so we do not lose much by assuming independence. This turns out to yield good results in practice (Chan and Shen, 2005).
We now have all the ingredients to maximize $P(u|f)$. By equation (4), this amounts to minimizing the expression $-\log (P(f|u))-\log (P(u))$, so we can plug in equations (5) and (6) to get, up to an additive constant:
\[ y\sum \limits_{i\in I}{({f_{i}}-{u_{i}})^{2}}+\sum \limits_{i\in I}({u_{i}}-{f_{i}}\log {u_{i}})+\frac{1}{z}\sum \limits_{i\in I}|{(\nabla u)_{i}}|,\]
and we can view this expression as a discrete approximation of the functional $E(\cdot )$ defined as:
\[ E(u)={\int _{\Omega }}|\nabla u|\, dx+\frac{{\lambda _{1}}}{2}{\int _{\Omega }}{(u-f)^{2}}\, dx+{\lambda _{2}}{\int _{\Omega }}(u-f\log u)\, dx,\]
with ${\lambda _{1}}=2yz$ and ${\lambda _{2}}=z$. (We multiplied by z, which is positive and constant, so the minimum is the same.) Intuitively, the last two terms are data fidelity terms, which ensure that the restored image u is not “too far” from the observed image f (taking the distribution of the noise into account). By contrast, $|\nabla u|$ is a smoothness term, which guarantees that the reconstructed image is not too irregular (this is where our a priori knowledge of the original picture lies). The parameters ${\lambda _{1}}$ and ${\lambda _{2}}$ will have to be determined experimentally later on.
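A discrete version of $E(\cdot )$ is easy to evaluate. The sketch below (NumPy; forward differences for $\nabla u$, which is one common choice, not necessarily the paper's discretization) computes the three terms for a candidate image:

```python
import numpy as np

def discrete_energy(u, f, lam1, lam2):
    """Discrete approximation of
    E(u) = int |grad u| + (lam1/2) int (u-f)^2 + lam2 int (u - f log u),
    for a positive image u."""
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward x-differences
    gy = np.diff(u, axis=0, append=u[-1:, :])   # forward y-differences
    tv = np.sum(np.hypot(gx, gy))               # total variation term
    gauss = 0.5 * lam1 * np.sum((u - f) ** 2)   # Gaussian fidelity
    poisson = lam2 * np.sum(u - f * np.log(u))  # Poisson fidelity (u > 0)
    return tv + gauss + poisson

rng = np.random.default_rng(0)
f = rng.uniform(1.0, 5.0, size=(8, 8))
# Both fidelity terms vanish or are minimal at u = f, so a wildly wrong
# candidate must have a much larger energy:
print(discrete_energy(f, f, 1.0, 1.0)
      < discrete_energy(np.full_like(f, 100.0), f, 1.0, 1.0))
```

Minimizing this discrete energy over positive images is exactly the optimization problem addressed in the following sections.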
In the following sections, we introduce some theoretical results about the existence and uniqueness of the solution of (3).