In this paper, we introduce a novel

Systems of nonlinear equations appear in the mathematical modelling of applications in the fields of physics, mechanics, chemistry, biology, computer science and applied mathematics.

Newton's method is used for solving systems of nonlinear equations when the Jacobian matrix is Lipschitz continuous and nonsingular. The method is not well defined when the Jacobian matrix is singular. The Levenberg-Marquardt algorithm was proposed to address this problem by introducing a regularization variable

When the number of equations is very large, solving the resulting least-squares problem requires considerable resources and may involve measurement redundancies. These realities lead us to conclude that an exact evaluation of the cost function and the gradient is not necessary to solve the problem. Jinyan Fan proposed a Levenberg-Marquardt algorithm using the trust-region technique, where at each iteration an approximate step is calculated in addition to the step towards the minimum of the function (Fan,

Fog is a suspension of water droplets or ice crystals in the air. These particles are generally less than 50 microns in diameter and, through light scattering, reduce visibility to less than 1 km. In the literature, the atmospheric propagation and the distribution of the particles participating in effects such as light scattering are described by an atmospheric model. Intense research efforts are currently directed at improving the ability to detect objects through fog. Kaiming He developed an algorithm predicated on the concept of the dark channel prior (DCP) to mitigate the effects of fog (He

The aim of our work is to enhance visibility when fog reduces it. The method we propose uses a non-linear parametric model based on the extinction coefficient of the atmosphere and the sky light intensity. Both parameters are estimated with the Levenberg-Marquardt algorithm. An inverse transformation is then applied to the measured data (observations) to reconstruct the clear image. We describe in Section

Data modelling interpolates between observations that belong to a continuous function, while the remaining observations approximate the function within a certain tolerance (see Fig.

The scope of the least-squares problem is to estimate a mathematical model that fits a set of observations by minimizing a cost function given by the sum of the squared errors between the data set and the model's analytical function. The optimization algorithm is iterative because, as mentioned, the model is non-linear in its parameters. At each step the parameters are modified so as to approach a minimum of the cost function.
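As a minimal sketch of this cost function (the exponential model, parameter values, and variable names here are hypothetical, chosen only for illustration), the sum of squared errors between observations and a parametric model can be evaluated as follows:

```python
import numpy as np

def cost(params, model, x_obs, y_obs):
    """Sum of squared residuals between observations and a parametric model."""
    residuals = y_obs - model(x_obs, params)
    return np.sum(residuals ** 2)

# Hypothetical non-linear model y = a * exp(b * x)
model = lambda x, p: p[0] * np.exp(p[1] * x)
x = np.linspace(0.0, 1.0, 20)
y = model(x, [2.0, -1.5])  # noise-free synthetic observations

print(cost([2.0, -1.5], model, x, y))  # exact parameters: cost is zero
print(cost([1.0, -1.5], model, x, y))  # wrong parameters: cost is positive
```

An iterative optimizer repeatedly adjusts `params` to decrease this value.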

As the data are in most cases affected by noise, measurement errors arise in the fitting process, referred to as residuals. Thus, for a fixed value of the vector

The objective is to find

With a certain number of

The set of observations

Starting with an initial value of the vector

Using the established values of the parameters, the non-linear optimization algorithm determines step by step a series of values of

From the Taylor series, the cost function is approximated by a polynomial that has a value very close to that of the function in a specified neighbourhood:

In the above equation the vector
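For reference, a hedged sketch of the quadratic Taylor model of the cost function mentioned above (the symbol choices here are our own: $F$ for the cost, $\mathbf{g}$ for its gradient, $\mathbf{H}$ for its Hessian):

```latex
F(\mathbf{p} + \mathbf{h}) \approx F(\mathbf{p})
  + \mathbf{g}^{\top}\mathbf{h}
  + \tfrac{1}{2}\,\mathbf{h}^{\top}\mathbf{H}\,\mathbf{h},
\qquad
\mathbf{g} = \nabla F(\mathbf{p}), \quad
\mathbf{H} = \nabla^{2} F(\mathbf{p}).
```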

The gradient descent method finds the minima of a function by moving, one step at a time, down the slope of the function being minimized. In each step the parameters of the cost function are updated by the following relation:

The

Moving along the slope of the minimized function (low slope and high slope, respectively).

In order to reach the minimum of the cost function, large steps must be taken in the area where the slope is low and small steps where the slope is high (see Fig.
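The update rule can be sketched as follows (a minimal illustration with a fixed step size; the quadratic test function and the rate value are our own assumptions, not the paper's):

```python
import numpy as np

def gradient_descent(grad, p0, rate=0.1, iters=100):
    """Plain gradient descent: step against the gradient at a fixed rate."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        p = p - rate * grad(p)
    return p

# Quadratic bowl f(p) = (p0 - 3)^2 + (p1 + 1)^2, gradient in closed form
grad = lambda p: np.array([2.0 * (p[0] - 3.0), 2.0 * (p[1] + 1.0)])
print(gradient_descent(grad, [0.0, 0.0]))  # converges near (3, -1)
```

A fixed rate illustrates the problem noted above: too small a step crawls on flat regions, too large a step overshoots on steep ones, which motivates the adaptive strategies that follow.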

The Gauss-Newton method achieves safer convergence by appealing to the second-order derivative. Using the Taylor series expansion in the neighbourhood of the current value

The Gradient vector is zero when the function reaches a minimum (

Finding the solution

On the other hand, the identity matrix can be used to estimate the Hessian matrix

Eqs. (

The Gradient vector of the cost function has the following components:

Next, we calculate the components of the Hessian matrix:

The approximation used in Eq. (

This operation does not affect the vector

The iterative algorithm presented next will generally use the Gauss-Newton method, the gradient descent method being used only when Eq. (

Depending on the value of the variable

Starting with initial values assigned to unknown parameters, the algorithm will follow the next steps:

With

Initialize

Calculate

If

If

If one of the following conditions is met, the iterative algorithm will stop:

Gradient convergence, the gradient of the cost function decreases below a pre-established threshold:

Convergence of the parameters, the parameter updates become very small:

Cost function convergence, when it has reached a certain threshold:

The number of iterations is greater than an established limit
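The steps and stopping criteria above can be sketched as a damped Gauss-Newton loop (a hedged illustration, not the paper's implementation: we assume a residual function with an explicit Jacobian, solve the damped normal equations $(J^\top J + \lambda I)\,h = -J^\top r$, and adapt $\lambda$ on acceptance or rejection; the exponential test model and all tolerances are our own choices):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-3,
                        max_iter=100, g_tol=1e-8, h_tol=1e-10):
    """Sketch of the LM loop: damped Gauss-Newton steps with adaptive lambda."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        g = J.T @ r                               # gradient of 0.5 * ||r||^2
        if np.linalg.norm(g, np.inf) < g_tol:     # gradient convergence
            break
        A = J.T @ J + lam * np.eye(len(p))        # damped normal equations
        h = np.linalg.solve(A, -g)
        if np.linalg.norm(h) < h_tol * (np.linalg.norm(p) + h_tol):
            break                                 # parameter convergence
        if np.sum(residual(p + h) ** 2) < np.sum(r ** 2):
            p, lam = p + h, lam / 10.0  # accept: behave more like Gauss-Newton
        else:
            lam *= 10.0                 # reject: behave more like gradient descent
    return p

# Fit y = a * exp(b * x) to noise-free data; true parameters (2, -1.5)
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.5 * x)
residual = lambda p: p[0] * np.exp(p[1] * x) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * x),
                                      p[0] * x * np.exp(p[1] * x)])
print(levenberg_marquardt(residual, jacobian, [1.0, -1.0]))
```

Small lambda makes the step close to Gauss-Newton; large lambda shrinks it toward a gradient-descent step, which is exactly the switching behaviour described above.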

It is assumed that a collimated beam of light with a unitary cross-section traverses the dispersive environment of thickness

The fractional change in intensity of radiation, the first term of Eq. (
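Integrating this fractional change gives the familiar exponential attenuation law (a hedged reconstruction of the standard argument; here $\beta$ denotes the extinction coefficient and $x$ the traversed thickness):

```latex
\frac{dI}{I} = -\beta\, dx
\quad\Longrightarrow\quad
I(x) = I_{0}\, e^{-\beta x}.
```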

Radiative transfer scheme.

As represented in the radiative transfer scheme, the aerosol particles capture the sky light and radiate it back in all directions. Some of the scattered light passes into the direct transmission path and raises the pixel intensity value acquired by the camera. Taking into account the increase

Our approach uses a linear first-order Eq. (
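The image formation described above can be sketched with the widely used Koschmieder-type model, which is consistent with the extinction coefficient and sky light intensity introduced earlier (the function and variable names, and the toy values, are our own assumptions):

```python
import numpy as np

def simulate_fog(radiance, distance, beta, airlight):
    """Koschmieder-type model: attenuated scene radiance plus atmospheric veil."""
    t = np.exp(-beta * distance)           # direct transmission per pixel
    return radiance * t + airlight * (1.0 - t)

scene = np.array([[0.8, 0.2], [0.5, 0.9]])    # toy scene radiance
dist = np.array([[10.0, 50.0], [30.0, 5.0]])  # toy distance map (metres)
foggy = simulate_fog(scene, dist, beta=0.05, airlight=1.0)
print(foggy)  # distant pixels are pushed towards the airlight value
```

Distant pixels receive little direct transmission and are dominated by the scattered sky light, matching the pixel-intensity increase described in the radiative transfer scheme.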

The distance map expresses the distances between the camera and the points of the scene. This matrix was obtained from: a) the FRIDA image database (as for the first image in Fig.

The distance maps for the three aforementioned images, which are included in the test dataset.

The algorithm we propose in this section relies on applying an inverse transformation to the degradation process during fog time image acquisition in order to obtain an enhanced image. We use the mathematical model described in Section

The optimization algorithm that will estimate the pseudo-model

We have the following description of the cost function that is minimized to determine the parameter vector

The estimated parameter
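Given estimated values of the extinction coefficient and the sky light, the inverse transformation can be sketched as follows (a hedged illustration assuming the Koschmieder-type formation model; the clamping threshold and all names are our own):

```python
import numpy as np

def restore(foggy, distance, beta, airlight, t_min=1e-3):
    """Invert the fog model: remove the veil, compensate the attenuation."""
    t = np.maximum(np.exp(-beta * distance), t_min)  # clamp to avoid blow-up
    return (foggy - airlight * (1.0 - t)) / t

# Round-trip check on a toy image: fog it, then restore with the same parameters
scene = np.array([[0.8, 0.2], [0.5, 0.9]])
dist = np.array([[10.0, 50.0], [30.0, 5.0]])
t = np.exp(-0.05 * dist)
foggy = scene * t + 1.0 * (1.0 - t)
print(restore(foggy, dist, beta=0.05, airlight=1.0))  # recovers the scene
```

The subtraction removes the atmospheric veil and the division compensates the attenuation of the scene radiance; clamping the transmission bounds the amplification of noise at large distances.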

We validate the proposed algorithm using a dataset of sixteen simulated foggy images. Testing on real images would have meant having the set of reference images

Therefore, at this point, we are left with the practical solution of testing the enhancement algorithm on images with simulated fog

We applied the Levenberg-Marquardt optimization algorithm on the dataset of sixteen test images, a representative selection of which are evaluated here. We worked on each channel separately in the RGB (red-green-blue) colour space, as this is how the simulated fog was introduced. Table

Image | Parameters used in the simulation | Estimated parameters
      | Red | Green | Blue | Red | Green | Blue

We will make a visual inspection of the degree of fit of the estimated foggy image

Absolute value of the difference between the estimated foggy image

The better the fit, the smaller the difference
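This per-pixel check can be computed directly (a small sketch with toy values of our own choosing):

```python
import numpy as np

def fit_difference(estimated, measured):
    """Per-pixel absolute difference used for the visual goodness-of-fit check."""
    return np.abs(estimated.astype(float) - measured.astype(float))

a = np.array([[0.50, 0.20], [0.90, 0.40]])  # toy estimated foggy image
b = np.array([[0.48, 0.25], [0.90, 0.35]])  # toy measured foggy image
d = fit_difference(a, b)
print(d.max())  # worst-case per-pixel mismatch
```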

We assess the algorithm’s performance using both subjective visual inspection and a quantitative criterion. In order to compare our results with those of other algorithms in the relevant literature, we utilize a metric adapted to the quality of defogged images, introduced by Liu

Regarding the visual inspection, we present six representative images from the test dataset in the following order:

Visual inspection of the enhancing algorithm: a-reference images

The FRFSIM (Fog-Relevant Feature Similarity) indicator introduced by Wei Liu takes into account both

Our method assumes the availability of a 3D component (distance map). In order to be able to compare our results with those of other methods that do not have this data, we define the following relative quantitative measure based on the FRFSIM indicator:

We applied classical contrast enhancement algorithms (linear and non-linear contrast stretching and histogram equalization) to the simulated foggy images alongside their corresponding reference images. In these cases, for the entire test dataset, the measure expressed by Eq. (

Here, we present a comparative analysis of the results obtained by our algorithm versus the best results of foggy image enhancement algorithms discussed in Liu

Also, the results of four sets of images taken from the article referenced above are presented below. Figures

Visual inspection of some results presented by Liu

A first performance reported in Liu

Next, there are two other images where the AMEF algorithm (Galdran,

The performances of the enhancement algorithms, based on criterion two (Eq. (

No. | Enhancement algorithm | Images (reference-foggy-enhanced) | Measure
1 | MBFIELM | Fig. | 33.041
2 | MBFIELM | Fig. | 52.577
3 | MBFIELM | Fig. | 44.447
4 | MBFIELM | Fig. | 32.41
5 | MBFIELM | Fig. | 40.36
6 | MBFIELM | Fig. | 14.821
7 | DCP | Fig. | 43.114
8 | DehazeNet | Fig. | 44.175
9 | DCP | Fig. | 15.570
10 | DCP | Fig. | 24.752
11 | DCP | Fig. | 39.522
12 | DCP | Fig. | 34.258
13 | AMEF | Fig. | 65.764
14 | AMEF | Fig. | 13.735

We utilise the

This work focuses on a mathematical method to determine a two-dimensional analytic function that best approximates a set of measured data, called observations. Starting from the well-known “Least-squares problem”, we proposed, adapted and implemented the Levenberg-Marquardt algorithm that is used to determine the unknown parameters of the mathematical model describing the image acquisition process under foggy conditions. The non-linear form of the model, the observations and the unknown parameters lead to the iterative solution of an overdetermined equation system. The algorithm for improving the quality of these images, based on the determined parameters, involves applying an inverse transformation that removes the “atmospheric veil” from the measured data and compensates for the attenuation of the scene radiance. An effective enhancement in the region of interest is found for almost all test images, but small undesirable colour deviation problems occur in areas where the distances in the

The mentioned classical algorithms used to improve image contrast do not obtain measures

The algorithm we have proposed gives results comparable to established algorithms such as AMEF, DehazeNet, and DCP. While it is outperformed by AMEF in certain cases, there are situations where it prevails over the mentioned algorithms (see Table

We should mention that in the implementation of the experiment we have encountered an obstacle that we have not overcome at this moment. Specifically, we could not test the MBFIELM algorithm on real foggy images. In a later approach we will extend the database used for testing the enhancement algorithm by obtaining all the resources needed to use images in real foggy conditions. Furthermore, we will work on how to choose the regularization variable in order to increase convergence performance.