Most partial volume correction (PVC) methods are ROI-based and assume uniform activity within each ROI. Here, we extend a PVC method developed by Rousset et al. (JNM, 1998), the geometric transfer matrix (GTM), to a voxel-based PVC approach called v-GTM that accounts for non-uniform activity within each ROI. The v-GTM method was evaluated using simulated data (perfectly co-registered MRIs). We investigated the influence of noise, the effect of compensating for detector response during iterative reconstruction, and the effect of non-uniform activity. For simulated data, noise did not seriously affect the accuracy of the v-GTM method. When detector response compensation was applied during iterative reconstruction, neither PVC method improved the recovery values. In the non-uniform experiment, v-GTM had slightly better recovery values and less bias than GTM. Conclusion: v-GTM resulted in better recovery values and might be useful for PVC in small regions of interest.
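The classic ROI-level GTM idea can be sketched as follows: blur each ROI's indicator mask with the scanner point-spread function to estimate how much of region j's activity spills into region i, then invert the resulting transfer matrix. This is a minimal illustration assuming an isotropic Gaussian PSF; `gtm_correct` and its arguments are hypothetical names, not the paper's implementation, and the voxel-based v-GTM extension is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gtm_correct(measured_means, roi_masks, psf_sigma):
    """ROI-level GTM partial volume correction (illustrative sketch).

    measured_means: observed mean value in each ROI (length n).
    roi_masks: list of n boolean arrays, one indicator mask per ROI.
    psf_sigma: Gaussian PSF width in voxels (simplifying assumption).
    """
    n = len(roi_masks)
    W = np.zeros((n, n))
    for j, mask_j in enumerate(roi_masks):
        # Response of the scanner to unit activity placed in ROI j.
        spill = gaussian_filter(mask_j.astype(float), psf_sigma)
        for i, mask_i in enumerate(roi_masks):
            # Fraction of that unit activity observed inside ROI i.
            W[i, j] = spill[mask_i].mean()
    # Invert the geometric transfer matrix to recover true ROI means.
    return np.linalg.solve(W, np.asarray(measured_means, dtype=float))
```

Because Gaussian blurring is linear, the measured ROI means in this noise-free setting are exactly W times the true means, so the solve recovers the true activity values.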
We describe a Bayesian PET reconstruction method that incorporates an image prior model with mixed continuity constraints. In this paper we concentrate on imaging the brain, which we assume can be partitioned into four tissue classes: gray matter, white matter, cerebral spinal fluid, and partial volume (PV). Each PV image voxel is assumed to be an arbitrary combination of neighboring pure tissues. The PET image is then modeled as a piecewise smooth function through a Gibbs prior. We assume that the image intensity of each homogeneous tissue region or partial volume region is governed by a thin-plate energy function. We apply first- and second-order edge detection techniques to estimate region boundaries, and then categorize these boundaries based on the tissue types adjacent to each boundary. Rather than use binary processes to represent region boundaries, as in the weak-plate model, we adopt a controlled-continuity approach to influence boundary formation. The rationale is that while first-order edge detection can capture the jumps between two different pure regions, second-order edge detection can capture the crease connecting a pure region to a partial volume region. As we transition from homogeneous to partial volume regions, we enforce zeroth-order continuity. Discontinuities in intensity are allowed only at transitions between two different homogeneous regions. We refer to this model as a modified weak-plate model with controlled continuity. We present the results of a computer-simulated phantom study in which partial volume effects are explicitly modeled. Results indicate that we obtain superior region-of-interest quantitation using this approach in comparison to a partial volume correction method that has previously been proposed for quantitation using filtered back-projection images.
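The role of the two edge detectors can be illustrated on a 1-D intensity profile. This toy sketch (the values and layout are our own, not from the study) shows a first-order difference responding to the jump between two pure regions, while the second-order difference isolates the crease points where a pure plateau meets a partial volume ramp.

```python
import numpy as np

# Synthetic 1-D profile: a CSF plateau, a sharp jump into a gray-matter
# plateau, a partial volume ramp, then a white-matter plateau.
profile = np.concatenate([
    np.zeros(8),                # CSF (pure)
    np.full(8, 2.0),            # gray matter (pure)
    np.linspace(2.0, 0.5, 6),   # partial volume ramp
    np.full(8, 0.5),            # white matter (pure)
])

first = np.diff(profile)        # first-order: peaks at the CSF-to-GM jump
second = np.diff(profile, n=2)  # second-order: peaks at the ramp's two creases
```

Here `np.abs(first)` is maximal at the pure-to-pure jump (index 7), while `np.abs(second)` equals 0.3 exactly at the two crease points (indices 15 and 20) where the ramp begins and ends. The second difference also responds to the jump itself, which is why the detected boundaries must additionally be categorized by the adjacent tissue types, as the abstract describes.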
Iterative methods for the reconstruction of PET images can produce results superior to filtered backprojection since they are able to explicitly model the Poisson statistics of photon-pair coincidence detection. However, many implementations of these methods use simple forward and backward projection schemes based either on linear interpolation or on computing the volume of intersection of detection tubes with each voxel. Other important physical system factors, such as depth-dependent geometric sensitivity and spatially variant detector-pair resolution, are often ignored. In this paper, we examine the effect of a more accurate system model on algorithm performance. A second factor that limits the performance of iterative algorithms is the chosen objective function and the manner in which it is optimized. Here we compare the performance of filtered backprojection (FBP) with the OSEM (ordered-subsets EM) algorithm, which approximately maximizes the likelihood, and a MAP (maximum a posteriori) method using a Gibbs prior with convex potential functions. Using the contrast recovery coefficient (CRC) as a performance measure, we performed various phantom experiments to investigate how the choice of algorithm and projection matrix affects reconstruction accuracy. Plots of CRC versus background variance were generated by varying the cut-off frequency in FBP, the subset size and iteration number or post-smoothing kernel in OSEM, and the smoothing parameter in the MAP reconstructions. The results of these studies show that all of the iterative methods tested produce superior CRCs to FBP at matched background variance. However, there is also considerable variation in performance within the class of statistical methods, depending on the choice of projection matrix and reconstruction algorithm.
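As a point of reference, the contrast recovery coefficient used as the performance measure here is the measured contrast divided by the true contrast; a minimal helper (the function name is ours) is:

```python
def crc(roi_mean, bg_mean, true_roi, true_bg):
    """Contrast recovery coefficient: measured contrast over true contrast."""
    measured_contrast = roi_mean / bg_mean - 1.0
    true_contrast = true_roi / true_bg - 1.0
    return measured_contrast / true_contrast
```

A perfect reconstruction gives CRC = 1; resolution loss and smoothing pull it toward 0, which is why CRC is plotted against background variance while sweeping each algorithm's smoothing parameter.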
We describe a Bayesian PET reconstruction method that incorporates anatomical information extracted from a partial volume segmentation of a co-registered magnetic resonance (MR) image. For the purposes of this paper we concentrate on imaging the brain, which we assume can be partitioned into four tissue classes: gray matter, white matter, cerebral spinal fluid, and partial volume. The PET image is then modeled as a piecewise smooth function through a Gibbs prior. Within homogeneous tissue regions the image intensity is assumed to be governed by a thin-plate energy function. Rather than use the anatomical information to guide the formation of a binary process representing region boundaries, we use the segmented anatomical image as a template to customize the Gibbs energy in such a way that we apply thin-plate smoothing within homogeneous tissue regions while enforcing zeroth-order continuity as we transition from homogeneous to partial volume regions. Discontinuities in intensity are allowed only at transitions between two different homogeneous regions. We refer to this model as segmented thin-plate regression with controlled continuity. We present the results of a detailed computer-simulated phantom study in which partial volume effects are explicitly modeled. Results indicate that we obtain superior region-of-interest quantitation using this approach in comparison to a 2D partial volume correction method that has previously been proposed for quantitation using filtered backprojection images.
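A 1-D caricature of such a controlled-continuity Gibbs energy, under our own simplifying assumptions (the function name and label scheme are illustrative, not the paper's): squared second differences (thin-plate) inside homogeneous regions, squared first differences (enforcing zeroth-order continuity) at transitions involving partial volume voxels, and no penalty across a boundary between two different pure regions.

```python
def gibbs_energy(f, labels):
    """Illustrative 1-D controlled-continuity energy.

    f: list of voxel intensities.
    labels: tissue class per voxel ('gm', 'wm', 'csf', or 'pv').
    """
    e = 0.0
    # Thin-plate term inside homogeneous (pure-tissue) regions.
    for i in range(1, len(f) - 1):
        if labels[i - 1] == labels[i] == labels[i + 1] != 'pv':
            e += (f[i - 1] - 2.0 * f[i] + f[i + 1]) ** 2
    # Membrane term (zeroth-order continuity) wherever a PV voxel is involved.
    for i in range(len(f) - 1):
        if 'pv' in (labels[i], labels[i + 1]):
            e += (f[i + 1] - f[i]) ** 2
    return e
```

Note that a step between gray-matter and white-matter plateaus costs nothing (discontinuities are allowed between different pure regions), while the same step at a pure-to-PV transition is penalized.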
Multiresolution image decomposition based on nonlinear filtering has received a lot of attention recently. In this research, we investigate the coding issue for one class of nonlinear multiresolution image decomposition based on mathematical morphology. We consider the use of opening and closing operations with a flat structuring element to achieve image decomposition. The entropy and histogram of the difference images in the image pyramid are then examined. We give a numerical example to demonstrate potential advantages of the morphological filtering approach over the conventional linear filtering approach in the context of image coding. However, we also point out difficulties encountered in our study that have to be overcome before the method can be used in practice.
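One level of such a morphological pyramid might be sketched as follows, assuming an open-close filter with a flat (square) structuring element followed by decimation; the function names and the specific open-close ordering are our assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def pyramid_level(img, size=3):
    """One decomposition level: open-close smoothing over a flat square
    structuring element, 2x decimation, and the difference image."""
    smooth = grey_closing(grey_opening(img, size=size), size=size)
    coarse = smooth[::2, ::2]   # next (coarser) pyramid level
    detail = img - smooth       # difference image to be entropy-coded
    return coarse, detail

def entropy(x):
    """First-order entropy (bits/sample) of the values in x."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

The coding argument rests on the difference images: if the morphological filter preserves edges better than a linear filter, the detail histograms concentrate near zero and their entropy, hence the coding cost, drops.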
A new method for fractal image compression, in which Jacquin's algorithm is applied to a polyphase-decomposed image, is proposed in this research to increase encoding efficiency. By using a (P × P) : (1 × 1) polyphase decomposition with P = 2^n, we divide an image into P × P subimages and then apply the Jacquin compression algorithm to these subimages independently. We show that the resulting scheme can improve the coding speed by a factor of P^2 at the sacrifice of some decompressed image quality. Moreover, since the subimages are very similar to one another, we may focus on a small subset of subimages, seek the appropriate domain block for each of their range blocks, and record the address mapping, scaling, and offset information. To encode the remaining subimages, we simply determine the scaling and offset based on the same set of address mappings previously found. A set of numerical experiments with various parameters, including the polyphase decimation factor P, the size D (or R) of the domain (or range) blocks, and the search-step size s, was performed to illustrate the tradeoff among speed, image quality, and compression rate.
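The polyphase decomposition itself is straightforward: subimage (i, j) collects every P-th pixel starting at offset (i, j), so the P × P subimages are mutually similar downsampled copies of the original. A minimal sketch (helper names are ours):

```python
import numpy as np

def polyphase_split(img, P):
    """Split img into P*P polyphase subimages; subimage (i, j) holds the
    pixels at rows i, i+P, i+2P, ... and columns j, j+P, j+2P, ..."""
    return [img[i::P, j::P] for i in range(P) for j in range(P)]

def polyphase_merge(subs, P):
    """Inverse of polyphase_split: interleave the subimages back."""
    h, w = subs[0].shape
    out = np.empty((h * P, w * P), dtype=subs[0].dtype)
    for k, s in enumerate(subs):
        i, j = divmod(k, P)
        out[i::P, j::P] = s
    return out
```

Each subimage is 1/P² the size of the original, and Jacquin encoding cost grows faster than linearly in image size, which is the source of the claimed P^2 speedup; the mutual similarity of the subimages is what allows the address mappings found for one subimage to be reused for the others.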