This paper proposes a no-reference image quality evaluation model for accurately assessing the quality of real-world images displayed on head-mounted display (HMD) devices. The proposed model employs a simulation of the human visual system, providing a reliable measure of image quality. First, an efficient convolutional neural network (CNN), specifically designed for the noise characteristics, is used to obtain a near-perfectly noise-reduced image. The difference between this image and the target image is then calculated in the linear domain. To emulate the contrast sensitivity and masking effects inherent in the human visual system, we introduce a frequency-domain filter model in a uniform color space. The resulting multidimensional filter outputs are aggregated and corrected based on the average brightness. The model's performance is validated against the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) metrics on the TID2013 dataset, showing superior correlation coefficients. Human-factors experiments further confirm the model's reliability and practicality in real-world scenarios.
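To make one stage of such a model concrete, the sketch below filters the difference between the denoised reference and the test image with a contrast sensitivity function (CSF) in the frequency domain and pools the result. The Mannos-Sakrison CSF and the cycles-per-degree mapping used here are assumptions for illustration; the paper's own filter model, uniform color space, and masking terms are not reproduced.

```python
# A minimal sketch of CSF-weighted error pooling, assuming the classic
# Mannos-Sakrison luminance CSF (not the paper's specific filter model).
import numpy as np

def csf_mannos_sakrison(f):
    """Luminance CSF as a function of spatial frequency (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-((0.114 * f) ** 1.1))

def csf_filtered_error(reference, test, cycles_per_degree=32.0):
    """Weight the error image by the CSF in the Fourier domain and pool it."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    h, w = diff.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx**2 + fy**2) * cycles_per_degree  # map to cycles/degree
    weighted = np.fft.ifft2(np.fft.fft2(diff) * csf_mannos_sakrison(f)).real
    return np.sqrt(np.mean(weighted**2))  # pooled perceptual error
```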
Dimensionality reduction is a frequent preprocessing step in hyperspectral image analysis: the high dimensionality of hyperspectral data raises the "curse of dimensionality" and leads to burdensome computations in applications such as target detection. In this paper, a dimensionality-reduction method for hyperspectral images based on random projection (RP) is investigated for target detection. Random projection is attractive in this area because it is data-independent and computationally more efficient than other widely used hyperspectral dimensionality-reduction methods, such as Principal Component Analysis (PCA) or the Maximum Noise Fraction (MNF) transform. In RP, the original high-dimensional data are simply projected onto a low-dimensional subspace using a random matrix. Theoretical and experimental results indicate that random projections preserve the structure of the original high-dimensional data well without introducing significant distortion. In the experiments, Constrained Energy Minimization (CEM) was adopted as the target detector, and an RP-based CEM method for hyperspectral target detection was implemented, showing that random projection may be a good dimensionality-reduction tool for hyperspectral images, yielding higher detection accuracy and lower computation time than the other methods.
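The sketch below illustrates the two components the abstract combines: a random projection of the spectral bands and the standard CEM detector applied in the reduced space. The Gaussian random matrix and the dimensions are illustrative assumptions; the paper does not specify its exact random-matrix construction.

```python
# A minimal sketch of RP-based CEM target detection, assuming a Gaussian
# random projection matrix (illustrative only).
import numpy as np

def random_projection(X, k, seed=0):
    """Project hyperspectral pixels X (n_pixels x n_bands) onto k dimensions."""
    rng = np.random.default_rng(seed)
    n_bands = X.shape[1]
    # Gaussian random matrix, scaled so pairwise distances are preserved in expectation.
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(n_bands, k))
    return X @ R, R

def cem_detector(X, d):
    """Constrained Energy Minimization: w = R^-1 d / (d^T R^-1 d)."""
    Rcorr = (X.T @ X) / X.shape[0]     # sample correlation matrix of the data
    Rinv_d = np.linalg.solve(Rcorr, d)
    w = Rinv_d / (d @ Rinv_d)
    return X @ w                        # detection score per pixel

# Usage: project the image pixels and the target signature with the same
# random matrix, then run CEM in the reduced space.
X = np.random.rand(10000, 200)          # hypothetical 200-band image, flattened
d = np.random.rand(200)                 # hypothetical target spectrum
X_rp, R = random_projection(X, k=30)
scores = cem_detector(X_rp, d @ R)
```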
A sparse-representation-based multi-threshold segmentation (SRMTS) algorithm for target detection in hyperspectral images is proposed. Benefiting from the sparse representation, the high-dimensional spectral data can be characterized as a series of sparse feature vectors, each of which has only a few nonzero coefficients. By setting an appropriate threshold, the noise-removed sparse spectral vectors are divided, consistently with the sample spectrum, into two subspaces in the sparse domain to separate the target from the background. A correlation and a vector 1-norm are then calculated in the respective subspaces, and the sparse characteristic of the target is used to extract the target with a multi-threshold method. Unlike the conventional hyperspectral dimensionality-reduction methods used in target detection algorithms, such as Principal Component Analysis (PCA) and Maximum Noise Fraction (MNF), this algorithm maintains the spectral characteristics while removing the noise, owing to the sparse representation. In the experiments, an orthogonal wavelet basis is used to sparsify the spectral information, and a contraction threshold is chosen to remove the hyperspectral image noise according to the noise estimate of the test images. Compared with common algorithms such as the Adaptive Cosine Estimator (ACE), Constrained Energy Minimization (CEM), and the noise-removed MNF-CEM algorithm, the proposed algorithm demonstrates higher detection rates and robustness, as shown by the ROC curves.
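The sketch below shows the sparsify-and-threshold step on a single pixel spectrum, assuming an orthogonal Daubechies wavelet and soft thresholding with the universal threshold. The paper's exact threshold rule (derived from its noise estimate) and the multi-threshold segmentation stage are not reproduced here.

```python
# A minimal sketch of wavelet sparsification and noise shrinkage of one
# pixel spectrum; wavelet choice and threshold rule are assumptions.
import numpy as np
import pywt

def denoise_spectrum(spectrum, sigma, wavelet="db4", level=3):
    """Soft-threshold the orthogonal wavelet coefficients to suppress noise."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    thr = sigma * np.sqrt(2.0 * np.log(len(spectrum)))   # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

# Usage: the denoised sparse vectors can then be compared to the sample
# target spectrum (e.g. via correlation and an l1-norm) to split the
# target and background subspaces.
noisy = np.random.rand(224) + 0.05 * np.random.randn(224)
clean = denoise_spectrum(noisy, sigma=0.05)
```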
In imaging systems based on compressed sensing, error in the measured data is incurred due to the nonlinear response of the photo-detector, which affects the quality of the reconstructed images. Conventionally, the affected measurements are rejected in order to obtain high reconstruction quality. However, when too many measurements are affected, it is impossible to reject all of them without degrading the imaging efficiency. An algorithm that regionally compensates the detector's nonlinear response is therefore proposed, in which the nonlinear measurements are compensated rather than rejected. According to the detector response curve, the nonlinear measurements are divided into several regions; the data in the same region share one average compensation factor, which is obtained from the response curve. The affected measurements are compensated region by region before being used to reconstruct the original image. Theoretical analysis and simulation results show that a high-quality reconstructed image can still be obtained even when over 80% of the measurements are nonlinear, a quality that cannot be reached with the conventional rejection approach. The PSNR and the M-rate in the simulations show that the compensation algorithm effectively handles the case in which a large fraction of the measurements is nonlinear.
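The sketch below illustrates the regional compensation idea: the measured range is split into regions, each region gets one average compensation factor computed from the response curve, and each measurement is scaled by the factor of its region. The number of regions, the curve representation, and the variable names are illustrative assumptions; the compensated measurements would then be passed to any standard compressed-sensing reconstruction solver.

```python
# A minimal sketch of regional compensation of nonlinear measurements,
# assuming the response curve is available as paired ideal/measured samples.
import numpy as np

def regional_compensation(y_measured, response_ideal, response_measured, n_regions=8):
    """Scale nonlinear measurements by the average gain of their region."""
    # Split the measured range into regions and compute an average
    # compensation factor (ideal / measured) per region from the curve.
    edges = np.linspace(response_measured.min(), response_measured.max(), n_regions + 1)
    factors = np.ones(n_regions)
    for i in range(n_regions):
        mask = (response_measured >= edges[i]) & (response_measured <= edges[i + 1])
        if mask.any():
            factors[i] = np.mean(response_ideal[mask] / response_measured[mask])
    # Apply the factor of the region each measurement falls into.
    idx = np.clip(np.digitize(y_measured, edges) - 1, 0, n_regions - 1)
    return y_measured * factors[idx]
```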
In imaging systems based on compressed sensing, error in the measured data is incurred due to the nonlinear response of the photo-detector, which affects the quality of the reconstructed images. We propose an algorithm to compensate the nonlinear response of the detector, based on applying the detector response curve to the measured data. Theoretical analysis and simulation results show that this algorithm greatly reduces the reconstruction errors caused by the detector's nonlinear response. Furthermore, the peak signal-to-noise ratio of the reconstructed image and the system reconstruction rate are significantly improved, while the fine features of the images are better preserved and reconstructed compared with reconstruction without the algorithm.