In this work, we describe a convolutional neural network (CNN) that accurately predicts scene illumination. In the network structure, feature learning and regression are integrated into a single optimization process to form a more effective model for scene illumination estimation, and an attention mechanism is added to reinforce learning. The network is trained on the ICVL dataset and tested on the Foster HSI dataset. Experiments on images with spatially varying illumination verify the stability of the proposed network for local illumination estimation and the improvement in global illumination estimation performance.
In this work, a deep learning network is described that uses a residual network as its backbone and incorporates SE-Blocks to accurately predict illumination. Multispectral images are segmented into small blocks that serve as input, and global illumination is estimated from the local estimates. The network is composed of multiple basic residual blocks and bottleneck residual blocks, integrating feature learning and regression into a single optimization process and thereby producing a more effective illumination estimation model. The network is trained on the ICVL dataset and tested on the Foster 2022 dataset. Preliminary experiments on images under different lighting conditions validate the stability of the proposed neural network method for illumination estimation and its improvement in estimation performance.
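As a rough illustration of the kind of network the two abstracts above describe, the following PyTorch sketch combines a squeeze-and-excitation (SE) block with a small residual backbone that regresses an illuminant vector from a multispectral patch and averages per-patch estimates into a global one. The layer sizes, patch size, and number of input bands are assumptions for illustration, not the authors' actual architecture.

```python
# Minimal sketch (not the authors' exact model): a residual block with an
# SE attention module, regressing a per-patch illuminant estimate.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise attention via global pooling."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight feature channels

class ResidualSEBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), SEBlock(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))            # identity shortcut

class IlluminantNet(nn.Module):
    """Regresses an illuminant vector (31 spectral samples, assumed)
    from a small multispectral patch (16 bands, assumed)."""
    def __init__(self, in_bands=16, out_dim=31, width=64, depth=4):
        super().__init__()
        self.stem = nn.Conv2d(in_bands, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualSEBlock(width) for _ in range(depth)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, out_dim))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

if __name__ == "__main__":
    net = IlluminantNet()
    patches = torch.rand(32, 16, 32, 32)              # 32 patches from one image
    local = net(patches)                              # per-patch illuminant estimates
    global_est = local.mean(dim=0)                    # simple global aggregate
    print(global_est.shape)                           # torch.Size([31])
```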
The spectral reflectance of multispectral images can provide more valuable information about object characteristics. To make full use of the spectrum, however, reflectance reconstruction requires that images be acquired with the same system calibration and under the same illumination. Khan therefore proposed the concept of multispectral constancy, which transforms multispectral image data into a standard representation through a spectral adaptation transform (SAT). Khan used a linear mapping to solve for the SAT that converts multispectral image data obtained under an unknown illuminant into data under a standard light source. To further improve spectral utilization and expand the application range of multispectral cameras, this paper proposes an algorithm that improves multispectral constancy based on a color-difference index; the algorithm uses color difference as the objective function when solving for the spectral adaptation transform. Ten light sources are used as unknown illuminants, the SFU and X-Rite data are used as training and testing datasets, and multispectral camera channels are simulated with equi-Gaussian and equi-energy filters to train and test 5-, 6-, 8-, and 10-channel data. The color difference under different light sources is used as the evaluation index, and the proposed algorithm is compared with Khan's method for computing the SAT. The experimental results show that the color-difference-based constancy algorithm performs better and extends multispectral constancy to a wider range of unknown light sources.
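For reference, a spectral adaptation transform of the kind Khan's linear-mapping approach computes can be written as one least-squares fit. The sketch below is a simplified, hypothetical version: the sensor model, channel count, and synthetic reflectance data are all placeholders, and the color-difference objective proposed in the abstract would replace the plain least-squares criterion used here.

```python
# Minimal sketch of a linear spectral adaptation transform (SAT):
# find matrix A mapping responses under an unknown illuminant to the
# responses the same surfaces would give under a reference illuminant.
import numpy as np

rng = np.random.default_rng(0)
n_wl, n_ch, n_samples = 31, 6, 200            # wavelengths, channels, patches (assumed)

S = np.abs(rng.normal(size=(n_ch, n_wl)))     # camera channel sensitivities (placeholder)
R = rng.uniform(0, 1, size=(n_samples, n_wl)) # surface reflectances (placeholder)
E_unknown = rng.uniform(0.2, 1.0, size=n_wl)  # unknown illuminant SPD (placeholder)
E_ref = np.ones(n_wl)                         # reference (equi-energy) illuminant

# Camera responses under both illuminants: rows are samples, columns channels.
C_unknown = (R * E_unknown) @ S.T
C_ref = (R * E_ref) @ S.T

# SAT as the least-squares solution of  C_unknown @ A ≈ C_ref.
A, *_ = np.linalg.lstsq(C_unknown, C_ref, rcond=None)

corrected = C_unknown @ A
print("mean abs residual:", np.mean(np.abs(corrected - C_ref)))
```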
The choice of light source affects the accuracy of spectral sensitivity estimation. In this paper, we propose to estimate the spectral sensitivity function of a digital camera using a spectrally tunable LED light source. The spectral power distribution of the LED source is determined by a combination of multiple LEDs and their weight coefficients. The weight coefficients are tuned with a Monte Carlo method and a particle swarm optimization algorithm, and the LED light source yielding the smallest estimation error is taken as the optimal light source. Experimental results show that particle swarm optimization gives the best estimation results. Compared with estimation under a single light source (e.g., a D65 source), the relative estimation error using the optimized LED light source is significantly reduced.
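The LED-weight search described above can be illustrated with a small Monte Carlo loop: draw random weight vectors, synthesize the mixed spectral power distribution, run a (here heavily simplified, Tikhonov-regularized) sensitivity estimate, and keep the weights that give the smallest error. The Gaussian LED primaries, the noise-free sensor model, and the estimator are all placeholders rather than the experimental setup of the paper.

```python
# Monte Carlo search for LED mixing weights that minimize the error of a
# (simplified) camera spectral sensitivity estimate.
import numpy as np

rng = np.random.default_rng(1)
n_wl, n_led, n_patches = 31, 8, 60
wl = np.linspace(400, 700, n_wl)

# Placeholder Gaussian LED primaries and a single ground-truth channel.
centers = np.linspace(420, 680, n_led)
leds = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 20.0) ** 2)
s_true = np.exp(-0.5 * ((wl - 550.0) / 40.0) ** 2)           # one channel, for brevity
R = rng.uniform(0, 1, size=(n_patches, n_wl))                # chart reflectances

def estimate_error(weights, lam=1e-3):
    spd = weights @ leds                                     # mixed LED spectrum
    A = R * spd                                              # per-patch effective spectra
    c = A @ s_true                                           # simulated camera responses
    # Tikhonov-regularized estimate of the sensitivity from (A, c).
    s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_wl), A.T @ c)
    return np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true)

best_w, best_err = None, np.inf
for _ in range(2000):                                        # Monte Carlo trials
    w = rng.uniform(0, 1, size=n_led)
    err = estimate_error(w)
    if err < best_err:
        best_w, best_err = w, err

print("best relative estimation error:", round(best_err, 4))
```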
The Spectral Reconstruction (SR) algorithm attempts to recover hyperspectral information from RGB camera responses. This estimation problem is usually formulated as least-squares regression and, because the data are noisy, Tikhonov regularization is introduced. The degree of regularization is controlled by a single penalty parameter, and this paper improves the conventional cross-validation procedure for optimizing it. In addition, an improved SR model is proposed. Unlike common SR models, our method divides the RGB space into a number of neighborhoods and determines the center point of each neighborhood. The RGB data and spectral data adjacent to each center point are then used as the input and output for training a Radial Basis Function Network (RBFN) regression for each RGB neighborhood. MRAE and RMSE are used to evaluate the performance of the SR algorithm. Compared with different SR models, the proposed method achieves significant performance improvements.
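The core Tikhonov-regularized regression and its single penalty parameter can be written in closed form; the sketch below also shows a plain grid search over λ on a held-out split, standing in for the improved cross-validation the paper proposes. All data here are synthetic placeholders, and the per-neighborhood RBFN stage is omitted.

```python
# Tikhonov-regularized spectral reconstruction: map RGB responses to spectra.
import numpy as np

rng = np.random.default_rng(2)
n_wl, n_train, n_val = 31, 400, 100

spectra = rng.uniform(0, 1, size=(n_train + n_val, n_wl))          # placeholder spectra
B = np.abs(rng.normal(size=(n_wl, 3)))                              # placeholder RGB sensitivities
rgb = spectra @ B + 0.01 * rng.normal(size=(n_train + n_val, 3))    # noisy camera responses

Xtr, Str = rgb[:n_train], spectra[:n_train]
Xva, Sva = rgb[n_train:], spectra[n_train:]

def fit_tikhonov(X, S, lam):
    # M minimizes ||X M - S||^2 + lam ||M||^2  (closed form).
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ S)

# Grid search over the single penalty parameter on the validation split.
best = min(
    (np.sqrt(np.mean((Xva @ fit_tikhonov(Xtr, Str, lam) - Sva) ** 2)), lam)
    for lam in np.logspace(-6, 2, 17)
)
print("best validation RMSE %.4f at lambda %.1e" % best)
```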
The metamer mismatch volume has important applications in color correction, camera design, and light source design. Methods based on spherical sampling for computing the metamer mismatch volume suffer from long computation times, a large number of duplicated boundary points, too few effective vertices, and a computed metamer set whose dimension appears lower than the theoretical dimension. In this paper, we propose a high-dimensional spherical sampling method that samples the metamer set directly and finds all boundary points by selecting direction vectors and traversing all directions. Experimental results show that our method alleviates these problems: the computation is faster, the computed results are close to those of the reference method, the repetition rate of boundary points is greatly reduced, and the actual dimensionality of the corresponding metamer set is consistent with the theoretical dimensionality.
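A common way to obtain a boundary point of the metamer mismatch volume in a chosen direction is a small linear program: maximize the projection of the second observer's response onto that direction, subject to the first observer's response being fixed and the reflectance lying in [0, 1]. The sketch below does this with scipy over a set of random unit directions; the sensor matrices and target color are placeholders, and this is the standard LP formulation rather than the specific sampling scheme of the paper.

```python
# Boundary points of a metamer mismatch volume via linear programming.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_wl = 31
Phi1 = np.abs(rng.normal(size=(3, n_wl)))     # first observation system (placeholder)
Phi2 = np.abs(rng.normal(size=(3, n_wl)))     # second observation system (placeholder)

r0 = rng.uniform(0.2, 0.8, size=n_wl)         # a reference reflectance
c1 = Phi1 @ r0                                # its color under the first system

boundary = []
for _ in range(200):                          # sampled directions on the unit sphere
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    # maximize d . (Phi2 r)  <=>  minimize -(Phi2^T d) . r
    res = linprog(c=-(Phi2.T @ d), A_eq=Phi1, b_eq=c1,
                  bounds=[(0.0, 1.0)] * n_wl, method="highs")
    if res.success:
        boundary.append(Phi2 @ res.x)         # extreme point of the mismatch volume

boundary = np.array(boundary)
print("collected", len(boundary), "boundary points")
```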
Improving the color reproduction and realism of digital cameras also promotes the development of computer vision. Camera colorimetry requires that the spectral sensitivity of the camera be a linear transformation of the color matching functions of the human visual system (the Luther condition). Previous methods have proposed placing carefully designed filters in front of the camera to produce sensitivities that closely satisfy the Luther condition. In this paper, we optimize the recent matching-illumination method (which uses a spectrally tunable illumination system to modulate the spectrum of a given light source), improve the filter design method, and add new constraints. Experiments demonstrate that the matching-illumination method with the new objective functions gives a 5% improvement over the original method, and that optimizing the filter with a gradient ascent algorithm and a genetic algorithm gives a 10% improvement in colorimetric accuracy over the original method. The method of limiting the average transmittance also yields a 10% improvement over the previous one. As a result, these methods can make digital camera imaging more accurate and realistic.
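The filter-design part of this line of work can be posed as minimizing the residual between the color matching functions and the best linear transform of the filtered camera sensitivities. The sketch below alternates a closed-form least-squares fit of that transform with a finite-difference gradient step on the filter transmittance, with a clamp standing in for a transmittance constraint; the curves are placeholders and this is a generic gradient scheme, not the paper's exact algorithm.

```python
# Gradient-based filter design toward the Luther condition:
# find transmittance f so that diag(f) @ Q is (nearly) a linear transform of the CMFs X.
import numpy as np

rng = np.random.default_rng(4)
n_wl = 31
Q = np.abs(rng.normal(size=(n_wl, 3)))          # camera sensitivities (placeholder)
X = np.abs(rng.normal(size=(n_wl, 3)))          # color matching functions (placeholder)

def residual(f):
    Qf = f[:, None] * Q                         # filtered sensitivities
    M, *_ = np.linalg.lstsq(Qf, X, rcond=None)  # best 3x3 transform for this filter
    return np.linalg.norm(Qf @ M - X) ** 2

f = np.full(n_wl, 0.5)                          # initial flat transmittance
step, eps = 1e-3, 1e-5
for it in range(500):
    grad = np.array([(residual(f + eps * e) - residual(f - eps * e)) / (2 * eps)
                     for e in np.eye(n_wl)])    # finite-difference gradient
    f = np.clip(f - step * grad, 0.05, 1.0)     # keep transmittance physical

print("final Luther residual:", round(residual(f), 4))
```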
KEYWORDS: Picosecond phenomena, Cameras, Data modeling, Optical filters, Digital filtering, Reflectivity, Multispectral imaging, Reconstruction algorithms, Neural networks, RGB color model
Taking advantage of the dispersive Fourier transform (DFT) technique, we experimentally observed the evolutionary dynamics of conventional solitons (CSs) in a simplified erbium-doped fiber laser. Periodic beating behavior during the build-up and disappearance of conventional solitons was discovered in a nonlinear polarization rotation (NPR) fiber laser. We suggest that the periodic beating during the dynamic evolution may be closely connected with the modulation depth of the intracavity saturable absorber. The results of this study can deepen researchers' understanding of the evolution of CSs and provide additional criteria for optimizing laser parameters.
The spectral sensitivity function of a digital camera is an important parameter, and recovering it is a crucial problem. In this paper, we propose a new rank-based constraint algorithm to estimate spectral sensitivity. The constraints are imposed on the estimate based on the rank orders of the camera responses obtained by imaging standard color samples under different illuminations; the color samples and illuminations are known during estimation. The algorithm uses two kinds of ranking constraints: ranking under a single illumination and ranking across multiple illuminations. With the support of these two constraints, fewer color samples are needed in the experiments. The method is evaluated in several numerical simulation experiments and compared with other spectral sensitivity estimation algorithms; various levels of noise and various combinations of multiple illuminations are used to recover the spectral sensitivities of different cameras. The experimental results suggest that the proposed algorithm estimates the camera spectral sensitivity function more accurately while reducing the computational effort. At the same time, using fewer color samples reduces the complexity of the experiment without increasing the error metrics.
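The rank-order constraints described here translate naturally into linear inequalities on the sensitivity (expressed in a low-dimensional basis): if sample i gives a larger response than sample j under a known illuminant, the estimated sensitivity must preserve that ordering. A small feasibility LP that maximizes the worst-case margin is sketched below with scipy; the Gaussian basis, the synthetic data, and the margin formulation are illustrative assumptions rather than the paper's exact solver.

```python
# Rank-based recovery of a camera sensitivity: enforce the measured response
# ordering as linear inequalities on basis coefficients, maximize the margin.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n_wl, n_basis, n_samples = 31, 8, 40
wl = np.linspace(400, 700, n_wl)

# Smooth Gaussian basis for the sensitivity (assumption).
centers = np.linspace(410, 690, n_basis)
B = np.exp(-0.5 * ((wl[:, None] - centers[None, :]) / 25.0) ** 2)    # (n_wl, n_basis)

s_true = B @ rng.uniform(0, 1, size=n_basis)                         # ground truth
E = rng.uniform(0.3, 1.0, size=n_wl)                                 # known illuminant
R = rng.uniform(0, 1, size=(n_samples, n_wl))                        # known reflectances
c = (R * E) @ s_true                                                 # camera responses

# For every ordered pair (i, j) with c_i > c_j:  a_ij . (B w) >= t.
order = np.argsort(-c)
A_rows = []
for i, j in zip(order[:-1], order[1:]):
    a = ((R[i] - R[j]) * E) @ B                                      # (n_basis,)
    A_rows.append(np.concatenate([-a, [1.0]]))                       # -a.w + t <= 0
res = linprog(c=np.concatenate([np.zeros(n_basis), [-1.0]]),         # maximize margin t
              A_ub=np.array(A_rows), b_ub=np.zeros(len(A_rows)),
              bounds=[(0, 1)] * n_basis + [(0, None)], method="highs")

s_hat = B @ res.x[:n_basis]
corr = np.corrcoef(s_hat, s_true)[0, 1]
print("margin:", round(res.x[-1], 4), "correlation with ground truth:", round(corr, 3))
```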
Accurate prediction of the spectral sensitivity of a digital camera is essential for various aspects of color science, such as color correction, color rendering, and color constancy. In this paper, a multi-objective optimization algorithm is proposed to estimate the spectral sensitivity of cameras. Multiple objective functions and a sine-subspace representation of the spectral sensitivity are employed, achieving excellent robustness and high smoothness. The performance of the algorithm is evaluated in multiple numerical simulation experiments and compared with other algorithms from the previous literature based on the criteria of color difference (ΔE), spectral recovery error (SE), and similarity between the estimated sensors and the measured ground truth (Vora value). According to the numerical simulation results, the multi-objective algorithm significantly improves spectral sensitivity estimation, which may promote its applications in color correction and illumination modeling between cameras.
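The sine-subspace representation and the combination of multiple objectives can be sketched in a few lines: express the sensitivity as a sum of sine basis functions and minimize a scalarized objective mixing response error and a smoothness penalty with a general-purpose optimizer. The basis size, the weights, and the synthetic data are assumptions, and the simple terms below only stand in for the paper's actual objectives (ΔE, SE, Vora).

```python
# Multi-objective (scalarized) estimate of a camera sensitivity in a sine subspace.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n_wl, n_basis, n_samples = 31, 6, 60
x = np.linspace(0, 1, n_wl)

# Sine subspace: s(lambda) = sum_k w_k sin(k * pi * lambda_normalized).
B = np.sin(np.pi * np.outer(x, np.arange(1, n_basis + 1)))           # (n_wl, n_basis)

s_true = B @ rng.uniform(-1, 1, size=n_basis)
A = rng.uniform(0, 1, size=(n_samples, n_wl))                        # illuminant * reflectance
c = A @ s_true + 0.01 * rng.normal(size=n_samples)                   # noisy responses

def objective(w, alpha=1.0, beta=0.1):
    s = B @ w
    fit = np.mean((A @ s - c) ** 2)                                   # response error term
    smooth = np.mean(np.diff(s, 2) ** 2)                              # smoothness term
    return alpha * fit + beta * smooth                                # weighted scalarization

res = minimize(objective, x0=np.zeros(n_basis), method="Nelder-Mead")
s_hat = B @ res.x
print("recovery RMSE:", round(float(np.sqrt(np.mean((s_hat - s_true) ** 2))), 4))
```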
Skin spectral reflectance plays an increasingly important role in many fields, including medical diagnosis, computer graphics, the cosmetics industry, and even the social sciences. In this paper, we propose an algorithm based on multispectral imaging to reconstruct skin spectral reflectance. A polynomial regression model together with equi-Gaussian and equi-energy filters is employed. The performance of the algorithm is evaluated for different numbers of filters and noise levels using the color difference (ΔE) under a D65 light source, and compared with other spectral reconstruction algorithms from the previous literature. Moreover, real human skin datasets are used to reconstruct the skin spectra, making the study more practical. According to the reconstruction results on the real skin dataset, the proposed algorithm achieves considerable improvements over the other algorithms.
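A polynomial regression of the kind used here maps the multispectral camera responses, expanded with cross terms, to reflectance via a single least-squares fit. The sketch below uses synthetic data and six placeholder channels; the actual filter sets and skin spectra of the paper are not reproduced.

```python
# Polynomial-regression spectral reconstruction from multispectral responses.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(7)
n_wl, n_ch, n_train, n_test = 31, 6, 300, 50

S = np.abs(rng.normal(size=(n_wl, n_ch)))                  # channel sensitivities (placeholder)
refl = rng.uniform(0, 1, size=(n_train + n_test, n_wl))    # placeholder "skin" reflectances
resp = refl @ S + 0.005 * rng.normal(size=(n_train + n_test, n_ch))

def poly_expand(C):
    """[1, c_i, c_i*c_j] second-order polynomial feature expansion."""
    cols = [np.ones(len(C))] + [C[:, i] for i in range(C.shape[1])]
    cols += [C[:, i] * C[:, j]
             for i, j in combinations_with_replacement(range(C.shape[1]), 2)]
    return np.stack(cols, axis=1)

Xtr, Xte = poly_expand(resp[:n_train]), poly_expand(resp[n_train:])
M, *_ = np.linalg.lstsq(Xtr, refl[:n_train], rcond=None)   # regression matrix

rmse = np.sqrt(np.mean((Xte @ M - refl[n_train:]) ** 2))
print("test spectral RMSE:", round(float(rmse), 4))
```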
The characterization of a target sample set can be obtained from a specific training sample set, and the precision of the model depends largely on the number of characterization samples and the number of colors. In this paper, a sample selection method combined with the third-order root polynomial color correction (RPCC) model is proposed. The method, named Maxmingfc, is based on the Maxminc method proposed by Cheung et al. and uses the GFC criterion to measure the differences between samples; it obtains higher characterization precision with fewer samples. The method is used to build characterization models for different cameras with different training and test sample sets, and color difference is used for evaluation. Compared with the Hardeberg, Maxminc, Maxmins, Maxsums, and Maxsumc methods, the Maxmingfc sample selection method yields a more accurate characterization model and is superior to the other existing sample selection methods.
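The Maxmin-style greedy selection can be made concrete in a few lines: start from one sample, then repeatedly add the candidate whose minimum dissimilarity to the already selected set is largest, here using 1 − GFC as the dissimilarity, since the goodness-of-fit coefficient measures spectral similarity. The reflectance data are random placeholders and the RPCC characterization stage is omitted.

```python
# Greedy max-min sample selection with (1 - GFC) as the dissimilarity measure.
import numpy as np

rng = np.random.default_rng(8)
candidates = rng.uniform(0, 1, size=(500, 31))              # candidate reflectances (placeholder)

def gfc(a, b):
    """Goodness-of-fit coefficient: cosine similarity of two spectra."""
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def maxmin_gfc(samples, k):
    selected = [0]                                          # arbitrary starting sample
    while len(selected) < k:
        # Distance of each candidate to the selected set = min over selected of (1 - GFC).
        dists = [min(1.0 - gfc(s, samples[j]) for j in selected) for s in samples]
        for j in selected:
            dists[j] = -1.0                                 # never re-select
        selected.append(int(np.argmax(dists)))
    return selected

subset = maxmin_gfc(candidates, 24)                         # pick 24 training samples
print("selected indices:", subset)
```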
Digital color image reproduction based on spectral information has become a field of much interest and practical importance in recent years. The representation of color in digital form with multi-band images is not very accurate, hence the use of spectral images is justified. Reconstructing high-dimensional spectral reflectance images from relatively low-dimensional camera signals is generally an ill-posed problem. The aim of this study is to use the principal component analysis (PCA) transform for spectral reflectance image reconstruction. The performance is evaluated by the mean, median, and standard deviation of the color difference values; the mean, median, and standard deviation of the root mean square (RMS) errors and of the goodness-of-fit coefficient (GFC) between the reconstructed and the actual spectral images are also calculated. Simulation experiments conducted on a six-channel camera system and on spectral test images show the performance of the suggested method.
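The PCA-based reconstruction can be summarized as: learn a low-dimensional basis of reflectances, then recover the basis coefficients from the camera signal with a pseudo-inverse. Below is a compact sketch with a simulated six-channel camera and random training spectra; everything numerical is a placeholder.

```python
# PCA-based reconstruction of spectral reflectance from six-channel camera signals.
import numpy as np

rng = np.random.default_rng(9)
n_wl, n_ch, n_train, k = 31, 6, 500, 6                      # k = retained components

train = rng.uniform(0, 1, size=(n_train, n_wl))             # training reflectances (placeholder)
S = np.abs(rng.normal(size=(n_wl, n_ch)))                   # camera sensitivities (placeholder)

mean = train.mean(axis=0)
U, sv, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:k].T                                                # (n_wl, k) principal basis

# Camera signal of a test spectrum, and its PCA reconstruction:
r_test = rng.uniform(0, 1, size=n_wl)
c = r_test @ S                                              # six-channel response
A = V.T @ S                                                 # (k, n_ch): coefficients -> responses
coeff = np.linalg.pinv(A.T) @ (c - mean @ S)                # recover PCA coefficients
r_hat = mean + V @ coeff

print("RMS error:", round(float(np.sqrt(np.mean((r_hat - r_test) ** 2))), 4))
```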
KEYWORDS: Digital watermarking, Image compression, Databases, 3D image processing, Discrete wavelet transforms, Signal to noise ratio, Wavelet transforms, Color reproduction, Reflectivity, Image quality
Kaarna et al. [Proc. Scand. Conf. Image Analysis, SCIA 2003, pp. 320-327] proposed a watermarking method for spectral images based on the three-dimensional wavelet transform. Kaarna et al. [J. Imaging Sci. Technol. 52, pp. 30502-1 to 30502-18, 2008] reported on the robustness of the watermarking method under different illumination conditions. A spectral image database provider stores the reflectance or radiance spectra of the images. Depending on the client's requirements, the effects of illumination can be added to the spectra, i.e., the viewing conditions change the perceived color of the spectrum. External illumination can be compensated for by convolving the spectra of the image with the spectrum of the illuminant. In this paper, a hybrid watermarking method based on the three-dimensional wavelet transform and singular value decomposition is proposed. The proposed method is compared with the 3D-DWT method of Kaarna et al. both with and without the effect of different illumination conditions. Experiments were performed on a spectral image of natural scenes; the Inlab2 image was selected. Color reproduction is done using the CIE XYZ basis functions with a D65 illuminant model. The Inlab2 image has dimensions of 256x256 pixels with 31 spectral components per pixel. The image was captured by a CCD (charge-coupled device) camera in the 400-700 nm wavelength range at 10 nm intervals and was taken indoors (in a controlled environment, i.e., a dark lab or glass house). The performance of the proposed technique is compared with the work of Kaarna et al. under different illumination conditions and attacks, including median and mean filtering and lossy compression. The experiments indicate that the proposed method outperforms the work of Kaarna et al.
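To make the hybrid scheme concrete, the following sketch (not the authors' exact implementation) takes a single-level 3-D discrete wavelet transform of a spectral cube with PyWavelets, embeds a watermark by perturbing the singular values of the approximation subband, and inverts the transform. The embedding strength, wavelet, cube size, and watermark are illustrative assumptions.

```python
# Hybrid 3D-DWT + SVD watermark embedding for a spectral image (illustrative sketch).
import numpy as np
import pywt

rng = np.random.default_rng(10)
cube = rng.uniform(0, 1, size=(32, 64, 64))           # bands x height x width (placeholder)
watermark = rng.uniform(0, 1, size=(16, 32))          # placeholder watermark
alpha = 0.05                                          # embedding strength (assumed)

# Single-level 3-D DWT; 'aaa' is the low-pass approximation subband.
coeffs = pywt.dwtn(cube, wavelet="haar")
approx = coeffs["aaa"]                                # shape (16, 32, 32)

# Embed in the singular values of each band of the approximation subband.
marked = np.empty_like(approx)
for b in range(approx.shape[0]):
    U, s, Vt = np.linalg.svd(approx[b], full_matrices=False)
    s_marked = s + alpha * watermark[b, :len(s)]      # perturb singular values
    marked[b] = U @ np.diag(s_marked) @ Vt

coeffs["aaa"] = marked
watermarked_cube = pywt.idwtn(coeffs, wavelet="haar")
print("embedding distortion (RMS):",
      round(float(np.sqrt(np.mean((watermarked_cube - cube) ** 2))), 5))
```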
Spectral imaging technology has been used mostly in remote sensing, but has recently been extended to new areas requiring high-fidelity color reproduction, such as telemedicine and e-commerce. These spectral imaging systems are important because they offer improved color reproduction quality not only for a standard observer under a particular illumination, but for any other individual with normal color vision under another illumination. A means of browsing the resulting image archives is therefore needed.
In this paper, the authors present a new spectral image browsing architecture. The browsing architecture is as follows:
(1) The spectral domain of the spectral image is reduced with the PCA transform. As a result of the PCA transform, the eigenvectors and the eigenimages are obtained.
(2) The eigenimages are quantized to the original bit depth of the spectral image (e.g., if the spectral image is originally 8-bit, the eigenimages are quantized to 8 bits), and 32-bit floating-point numbers are used for the eigenvectors.
(3) The first eigenimage is losslessly compressed with JPEG-LS; the other eigenimages are lossy compressed with the wavelet-based SPIHT algorithm.
For the experimental evaluation, the following measures were used: PSNR as the measure of spectral accuracy, and ΔE for the evaluation of color reproducibility, with standard illuminant D65 as the light source. To test the proposed method, we used the FOREST and CORAL spectral image databases, which contain 12 and 10 spectral images, respectively. The images were acquired in the range of 403-696 nm; the size of the images was 128x128 pixels, the number of bands was 40, and the resolution was 8 bits per sample. Our experiments show that the proposed compression method is suitable for browsing, i.e., for visual purposes.
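The first two steps of the browsing architecture, PCA reduction and 8-bit quantization of the eigenimages with the eigenvectors kept in 32-bit floats, can be sketched directly; the JPEG-LS and SPIHT coding stages are left out because they depend on external codecs. The 40-band image cube here is a random placeholder.

```python
# PCA reduction of a spectral image plus 8-bit quantization of the eigenimages.
import numpy as np

rng = np.random.default_rng(11)
bands, h, w, k = 40, 128, 128, 8                       # 40-band cube, keep 8 components
cube = rng.uniform(0, 1, size=(bands, h, w)).astype(np.float32)

X = cube.reshape(bands, -1)                            # bands x pixels
mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
eigvecs = U[:, :k].astype(np.float32)                  # step (2): kept as 32-bit floats
eigimgs = (eigvecs.T @ (X - mean)).reshape(k, h, w)    # step (1): eigenimages

# Step (2): quantize each eigenimage to the original 8-bit depth.
lo = eigimgs.min(axis=(1, 2), keepdims=True)
hi = eigimgs.max(axis=(1, 2), keepdims=True)
q = np.round((eigimgs - lo) / (hi - lo) * 255).astype(np.uint8)

# Browsing-side reconstruction from the quantized data.
deq = q.astype(np.float32) / 255 * (hi - lo) + lo
recon = (eigvecs @ deq.reshape(k, -1) + mean).reshape(bands, h, w)
psnr = 10 * np.log10(1.0 / np.mean((recon - cube) ** 2))
print("reconstruction PSNR (dB):", round(float(psnr), 2))
```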
Multispectral images are used for many different purposes thanks to developments in spectral imaging systems. The sizes of multispectral images are enormous, so transmitting and storing these volumes of data requires substantial time and memory resources; this is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists across almost all bands. This fact can be used to predict each band from the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted and original values. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique predicts each image band from a number of preceding bands along the image spectrum: each pixel is predicted using the pixels at the same spatial position in the previous bands. As in JPEG-LS, the proposed coder represents the mapped residuals with an adaptive Golomb-Rice code with context modeling. The residual coding is context adaptive, where the context for the current sample is identified by a context quantization function of three gradients; the context-dependent Golomb-Rice code and bias parameters are then estimated sample by sample. The proposed scheme was compared with three algorithms for lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images demonstrate that the proposed compression scheme is well suited to multispectral images.
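The two main ingredients, band-wise linear prediction at the same spatial position and Golomb-Rice coding of the mapped residuals, are easy to show in isolation. The sketch below predicts each band from the previous one with a per-band least-squares coefficient and encodes the mapped residuals with a fixed Rice parameter; the context modeling and adaptive parameter estimation of the actual coder are omitted, and the image cube is synthetic.

```python
# Spectral linear prediction plus Golomb-Rice coding of the mapped residuals.
import numpy as np

rng = np.random.default_rng(12)
cube = rng.uniform(0, 255, size=(8, 64, 64)).astype(np.int32)
cube = np.cumsum(cube // 8, axis=0)                        # fake spectrally correlated bands

def rice_encode(u, k):
    """Golomb-Rice code of a non-negative integer: unary quotient + k-bit remainder."""
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

bitstream, k = [], 4                                       # fixed Rice parameter (assumed)
prev = cube[0]
for band in cube[1:]:
    a = float((band * prev).sum()) / float((prev * prev).sum())   # LS prediction coefficient
    resid = band - np.round(a * prev).astype(np.int32)            # prediction residual
    mapped = np.where(resid >= 0, 2 * resid, -2 * resid - 1)      # map to non-negative ints
    bitstream.extend(rice_encode(int(u), k) for u in mapped.ravel())
    prev = band

bits = sum(len(c) for c in bitstream)
print("average bits per residual sample:", round(bits / len(bitstream), 2))
```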
Many methods for lossy and lossless compression of multispectral imaging data have been developed, and three-dimensional compression of multispectral images has been studied by many researchers. Although 3-D compression methods provide relatively good performance, a major problem is that they require a large amount of memory and processing time. A salient property of hyperspectral images is that strong spectral correlation exists across almost all bands; in some of these bands the signal is also greatly attenuated by the atmosphere or by the materials being imaged. In this paper, we take this property of multispectral data into account and propose a new compression algorithm based on a two-dimensional wavelet transform. In the proposed method, the spectral bands of a multispectral image are divided into a number of groups, each containing two adjacent bands. The first band of each group is SPIHT coded; its decoded version is subtracted from the second band, and SPIHT is then applied to the residual image. The data used in this paper were acquired by AVIRIS, with 224 contiguous spectral bands covering wavelengths between 400 and 2500 nm. The data set contains 512 scan lines with 614 pixels per scan line; we selected a sub-region of 512x512 pixels. As the results show, the proposed algorithm provides better performance than the plain SPIHT algorithm.
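The grouping and residual step can be expressed with a placeholder codec standing in for SPIHT: code the first band of each pair, subtract its decoded version from the second band, and code the residual. Here spiht_encode/spiht_decode are hypothetical stand-ins (implemented as simple uniform quantization) just to make the data flow runnable; they are not a SPIHT implementation.

```python
# Band-pair compression: code band 1 of each pair, then the residual of band 2.
import numpy as np

rng = np.random.default_rng(13)
cube = rng.uniform(0, 1, size=(8, 64, 64)).astype(np.float32)    # even band count (placeholder)

def spiht_encode(img, step=0.02):
    """Hypothetical stand-in for a SPIHT coder: uniform quantization only."""
    return np.round(img / step).astype(np.int16), step

def spiht_decode(code):
    q, step = code
    return q.astype(np.float32) * step

reconstructed = np.empty_like(cube)
for b in range(0, cube.shape[0], 2):                             # groups of two adjacent bands
    first_code = spiht_encode(cube[b])
    first_dec = spiht_decode(first_code)
    residual_code = spiht_encode(cube[b + 1] - first_dec)        # residual of the second band
    reconstructed[b] = first_dec
    reconstructed[b + 1] = first_dec + spiht_decode(residual_code)

rmse = float(np.sqrt(np.mean((reconstructed - cube) ** 2)))
print("overall reconstruction RMSE:", round(rmse, 5))
```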