An image or volume of interest in positron emission tomography (PET) is reconstructed from gamma rays emitted by a radioactive tracer, which are captured and used to estimate the tracer's location. The image or volume is reconstructed by estimating pixel or voxel values on a grid determined by the scanner. This approach typically suffers from limited reconstruction resolution, high computational cost due to slow convergence, and noisy results.

This paper presents a novel method of PET image reconstruction based on the underlying assumption that the originals of interest can be modelled by Gaussian mixture models. Parameters are estimated from one-dimensional projections using an iterative algorithm resembling the expectation-maximization algorithm. This poses a computationally complex problem, which is resolved by a novel approach that utilizes

In positron emission tomography (PET), image reconstruction means generating an image of a radiotracer's concentration in order to estimate physiologic parameters for objects (volumes) of interest. Pairs of photons arise from positron annihilations, and crystals placed in the scanner detect these pairs. When two crystals are activated simultaneously, the device records a coincidence event. Each possible pair of crystals is connected by a line or volume of response (LOR, VOR), depending on whether the scan is 2D or 3D. Assuming no contaminating physical effects or noise, the total number of coincidence events detected along an LOR is proportional to the total amount of tracer contained in its tube. In the 2D case, events are observed along LORs joining two detector elements and lying within a specified imaging plane. The data are recorded either as event histograms (sinograms, or projection data) or as a list of recorded photon-pair events (list-mode data). Classic PET image reconstruction methodology is explained in more detail in e.g. Tong

Modern image reconstruction methods are mostly based on maximum-likelihood expectation-maximization (MLEM) iterations: maximum likelihood serves as the optimization criterion, and the expectation-maximization algorithm is used to find its solution. To overcome the computational complexity and slow convergence of MLEM, the ordered-subsets expectation-maximization (OSEM) algorithm was introduced (Hudson and Larkin,
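Since the text only names the MLEM iteration, a minimal sketch may help; the system matrix A, sinogram vector y, and iteration count below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain MLEM update: lam <- lam / (A^T 1) * A^T (y / (A lam)).

    A : (n_lors, n_pixels) system matrix (illustrative)
    y : (n_lors,) measured counts per LOR
    """
    lam = np.ones(A.shape[1])             # uniform non-negative start
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ lam                    # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float),
                          where=proj > 0)
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return lam
```

OSEM accelerates this by applying the same multiplicative update with only a subset of the rows of A per sub-iteration.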

In image segmentation, a number of algorithms based on model-based techniques utilizing prior knowledge have been proposed to model uncertainty, cf. Zhang

In this paper, we propose a new robust method for reconstructing an object from a simulated PET image. The challenge of estimating Gaussian parameters from lower-dimensional data is solved by utilizing a novel


Data are collected along lines lying within a specific imaging plane. Traditionally, data are organized into sets of projections, integrals along all lines (LORs) for a fixed direction
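As an illustration of how projection data arise from list-mode event points, one sinogram row can be formed by histogramming signed perpendicular offsets; the function name and binning below are assumptions for illustration:

```python
import numpy as np

def sinogram_row(points, theta, bins):
    """One sinogram row for projection direction theta: histogram of
    the signed perpendicular offsets s = -x sin(theta) + y cos(theta)
    of 2-D event points."""
    s = -points[:, 0] * np.sin(theta) + points[:, 1] * np.cos(theta)
    hist, edges = np.histogram(s, bins=bins)
    return hist, edges
```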

A Gaussian mixture model (Reynolds,

Traditionally in this setting, the GMMs are used to model activity concentration in voxels (Layer

Here, we apply the GMM directly to the spatial data, focusing only on the locations and not on the values at those locations. In other words, the location of the tracer, i.e. of the particles that originate the gamma rays, is represented by the
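Under this view, simulated list-mode data can be drawn by sampling annihilation points from the mixture over locations and attaching a random LOR direction to each; all names below are illustrative, not the paper's simulator:

```python
import numpy as np

def sample_lors(n, weights, means, covs, rng=None):
    """Sample n annihilation points from a 2-D GMM over *locations*
    (not pixel values), then attach a uniform random angle to each
    point; every (point, angle) pair defines one LOR."""
    rng = np.random.default_rng(0) if rng is None else rng
    comps = rng.choice(len(weights), size=n, p=weights)
    pts = np.array([rng.multivariate_normal(means[k], covs[k])
                    for k in comps])
    angles = rng.uniform(0.0, np.pi, size=n)
    return pts, angles
```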

As mentioned earlier, Gaussian mixture models have naturally appeared in many signal processing (Yu and Sapiro,

Note that in our simulations we assume that

For clarity, in this section we focus only on one Gaussian component. Assuming we know a set of

Each event in two dimensions is uniquely given by its slope

In general, the squared distance between

First, note that for a fixed

In the two-dimensional setting, since
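The squared point-to-LOR distance discussed above can be sketched generically, computing the orthogonal component of the vector from a point on the line to the mean (names are illustrative):

```python
import numpy as np

def sq_dist_to_lor(mu, p, theta):
    """Squared Euclidean distance from point mu to the LOR through p
    with unit direction (cos theta, sin theta)."""
    d = np.array([np.cos(theta), np.sin(theta)])
    v = np.asarray(mu, float) - np.asarray(p, float)
    perp = v - (v @ d) * d            # orthogonal component of v
    return float(perp @ perp)
```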

Figure

Illustration of the rotation. The event (LOR) is parallel to the new

Given the original event parameters, the coordinates of the intersection of the event and the new

We can use these one-dimensional projections and their squared (Euclidean) distance from the mean to estimate the variance in (
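The rotation described above can be sketched as follows: rotating coordinates by −θ makes the event (LOR) parallel to the new x-axis, and the signed y′-coordinate of the mean is the one-dimensional projection whose square enters the variance estimate (a sketch; names are illustrative):

```python
import numpy as np

def projected_offset(mu, p, theta):
    """Rotate coordinates by -theta so the LOR (through p, at angle
    theta) becomes parallel to the new x-axis; return the signed
    y'-coordinate of mu, i.e. its 1-D projection onto the normal."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])   # rotation by -theta
    return float((R @ (np.asarray(mu, float) - np.asarray(p, float)))[1])
```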

In many statistical models, maximum likelihood parameter estimation cannot be performed directly, since the likelihood equations do not have an explicit closed-form solution. Several iterative methods have been developed to address this, most notably the expectation-maximization (EM) algorithm. It had been proposed and used in many different circumstances before its formal introduction in Dempster

Compute class membership probabilities. For each event, compute the squared distance from each component mean. We distinguish between a

Estimate component parameters. Either from a hard or soft classification, where each event participates with its proportional share, parameters of each component are estimated using methodology from Section
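The two alternating steps can be combined into an EM-style loop. The sketch below is one possible reading under stated assumptions: isotropic components, soft memberships from the squared LOR-to-mean distances, means re-estimated from responsibility-weighted feet of perpendiculars, and variances from the projected 1-D distances. It is illustrative, not the paper's exact procedure:

```python
import numpy as np

def em_from_lors(points, thetas, means, sigmas, weights, n_iter=20):
    """EM-style loop on 2-D LORs given as (point p, angle theta).
    points: (N, 2); thetas: (N,); means: (K, 2); sigmas, weights: (K,)."""
    K = len(weights)
    d = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # directions
    for _ in range(n_iter):
        feet, sq = [], []
        for k in range(K):
            v = means[k] - points
            t = np.sum(v * d, axis=1, keepdims=True)
            foot = points + t * d          # closest point on each LOR
            feet.append(foot)
            sq.append(np.sum((means[k] - foot) ** 2, axis=1))
        sq = np.stack(sq, axis=1)          # (N, K) squared distances
        # E-step: Gaussian-like soft memberships from 1-D distances
        logw = np.log(weights) - 0.5 * sq / sigmas**2 - np.log(sigmas)
        logw -= logw.max(axis=1, keepdims=True)
        r = np.exp(logw)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights, means from feet, sigmas from 1-D distances
        weights = r.mean(axis=0)
        for k in range(K):
            rk = r[:, k].sum()
            means[k] = (r[:, [k]] * feet[k]).sum(axis=0) / rk
            sigmas[k] = np.sqrt((r[:, k] * sq[:, k]).sum() / rk)
    return means, sigmas, weights
```

A hard-classification variant would replace the soft memberships r with one-hot assignments to the nearest component.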

This approach allows for different variants, depending on the type of distance used (Tafro and Seršić,

In this paper, initial steps of the iterative algorithm use the Euclidean distance in (

To evaluate the methodology, we experimented in the two-dimensional setting with

First, for proof of concept we show that the method in Section

Left: Relative error in estimation of

Accuracy of an estimate

Initial value of

For each of these types of covariances we then simulated a measurement from

Mean estimation error,

13.78% | 11.88% | 9.28%
8.27% | 7.61% | 7.60%
4.28% | 3.66% | 2.93%
2.61% | 2.38% | 2.37%

Given that the variance of the traditional standard deviation estimator (from points) equals
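For reference, the classical asymptotic result for the sample standard deviation of n i.i.d. Gaussian observations (a standard fact, stated here for context) is

```latex
\operatorname{Var}(\hat{\sigma}) \;\approx\; \frac{\sigma^{2}}{2n},
```

which gives a natural point-based baseline against which projection-based estimates can be compared.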

For

For synthetic measurements from a variety of original (real) GMMs, the algorithm proved robust regardless of the initial parameter values, with estimation using

Table

Correct classification,

Cov. 1 | Cov. 2 | Comp. 1 | Comp. 2 | Total
| | 87.63% | 93.49% | 89.83%
| | 93.52% | 92.13% | 93.00%
| | 91.26% | 95.83% | 92.98%

An illustration of the results for


These results provide proof of concept that it is possible to reconstruct data from latent mixture models using lower-dimensional observations and state-of-the-art computational techniques. Estimation from lower-dimensional measurements has not been thoroughly developed for Gaussian mixture models in general, and especially not in this context. This paper shows that the main obstacle, slow computation, can be circumvented by an alternative approach to minimization, which opens the door to various new procedures. Further work includes other metrics and membership calculations, as well as utilizing traditional methods to obtain the number of mixture components and optimal initial values.

The fact that the reconstructed image is given by its parametric model (mean vectors
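Because the reconstruction is parametric, an image can be rendered at any desired resolution directly from the estimated parameters; a generic sketch (all names illustrative):

```python
import numpy as np

def render_gmm(weights, means, covs, grid_x, grid_y):
    """Evaluate the mixture density on a pixel grid -- the 'image' is
    fully determined by the GMM parameters, at any resolution."""
    X, Y = np.meshgrid(grid_x, grid_y)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    img = np.zeros(pts.shape[0])
    for w, m, C in zip(weights, means, covs):
        diff = pts - m
        Ci = np.linalg.inv(C)
        expo = -0.5 * np.einsum('ni,ij,nj->n', diff, Ci, diff)
        norm = w / (2 * np.pi * np.sqrt(np.linalg.det(C)))
        img += norm * np.exp(expo)
    return img.reshape(X.shape)
```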

The

We insert it into all other equations and get a new system with only two unknown parameters:

We insert

Parameter

The value of parameter

The return to the higher dimension is achieved by substituting the calculated parameter value

This research has been supported by the Croatian Science Foundation under the project IP-2019-04-6703.