We have developed a system that separates and measures the optical properties of skin, i.e., its surface reflection, diffuse reflection, and sub-surface scattering components. This system includes two polarization filters that separate light from the skin into a surface reflection component image and a diffuse reflection component image. Furthermore, by using a projector as a light source and irradiating the skin with a high-frequency binary illumination pattern, the sub-surface scattering component image alone can be separated and generated. Using the proposed system, we surveyed 154 Japanese women aged from their 20s to their 70s and analyzed age-related changes in the optical properties of their skin. The results revealed the following. First, the luminance value Y of the surface reflection from the cheek and its standard deviation within the analysis area increase with age. Second, the Y value of diffuse reflection from the skin decreases with age. Third, the amount of light in the sub-surface scattering components also decreases with age. The proposed system is expected to have a wide range of applications in the medical and cosmetic fields.
The digital reproduction of a historical motion picture should resemble as much as possible the analog film projection at the time of the movie release. Nowadays, practices of capturing digital images of films do not properly consider the fundamental elements and conditions of the original film projection. The typical rigid three-band (RGB) capture cannot adapt to the multitude of historical color film stocks to be digitized, and the diffuse illumination on the film generally used by standard digital scanning devices is unable to guarantee the proper visual rendition of the original analog projection of film prints. In order to overcome these problems, we designed and built a novel multispectral imaging system that illuminates the film with a condensed light beam. The new imaging system and the computational pipeline were tested on an assorted set of photographic colors. The accuracy of the multispectral captures was tested by comparison with corresponding spectrally resolved point-based radiometric measurements of the light reflected by a screen during analog projection. The presented optical design represents an excellent solution for the creation of a new multispectral motion picture scanner prototype. The LED-based illumination system coupled with a film transport mechanism can be the core concept of a promising new generation of motion picture film scanners.
Hyperspectral imaging has become a powerful technique for the non-invasive investigation of works of art. An advantage of this technique is the possibility to obtain spectral information over the entire spatial region of interest, allowing the identification and mapping of the constituent materials of the artefact under study. While hyperspectral imaging has been extensively used for artworks such as paintings and manuscripts, few works have been published on the use of this technique on stained glass. In this paper, a workflow for the imaging and analysis of stained-glass windows is proposed. The acquisition is carried out using a laboratory set-up adapted for transmittance measurement, which can support panels with a maximum size of around 50 x 50 cm. The image processing is carried out with two aims: visualization and chromophore identification. The results of this processing provide a foundation to discuss the potential of hyperspectral imaging for the scientific analysis of stained-glass windows.
Fluorescence is a photoluminescence phenomenon in which light is absorbed at shorter wavelengths and re-emitted at longer wavelengths. For classic artworks, fluorescence gives useful information about varnish and retouches. At the same time, modern artworks may employ synthetic fluorescent pigments because of their special appearance properties, such as the increased brightness and vividness produced by self-luminescence. Hence, it is relevant to investigate the fluorescent signals of cultural heritage objects when studying their appearance. This work proposes a variant of the Reflectance Transformation Imaging (RTI) technique, namely Fluorescence Transformation Imaging. The Reflectance Transformation Imaging method outputs a single-camera multi-light image collection of a static scene, which can be used to model the reflectance of the scene as a polynomial of the illumination directions. Similarly, Fluorescence Transformation Imaging aims to model the fluorescent signal based on a series of images with a fixed scene and viewpoint and varying incident light directions; what changes with respect to RTI is that the wavelength of the incident light needs to be shorter than the sensing wavelength. In the literature, there are works that exploit the isotropic property of fluorescence in low-dimensional multi-light imaging methods (such as Photometric Stereo) to model the appearance of an object with a first-order polynomial. This is because in the fluorescent mode the object behaves more like a Lambertian surface than in the reflective mode, where non-Lambertian effects such as highlights are more likely to appear. Nonetheless, this assumption holds for single-object scenes with uniform albedo and convex geometry. When there are multiple fluorescent objects in the scene, with concavities and non-uniform fluorescent components, the fluorescence can become a secondary light source for the objects and create interreflections.
This paper explores the Reflectance and Fluorescence Transformation Imaging methods and the resulting texture maps for appearance rendering of heterogeneous non-flat fluorescent objects.
Image sensing technology has a great impact on our daily life as well as on society as a whole, in areas such as health, safety and security, communication systems, and entertainment. Conventional optical color sensors consist of side-by-side arranged optical filters for the three basic colors (blue, green, and red). Hence, the efficiency of such optical color sensors is limited to only about 33%. In this study, a vertically stacked color sensor based on perovskite alloys is investigated, which has the potential to provide efficiencies approaching 100%. The proposed optical sensor will not be limited by color moiré error or color aliasing. Perovskite materials with suitable bandgaps are determined by applying the energy shifting model, and the optical constants are used for further investigations. Quantum efficiencies and spectral responsivities of the described color sensors are investigated by three-dimensional electromagnetic simulations. Investigated spectral sensitivities are further analyzed for the
This article presents the development of a spectral-difference-based statistical processing of hyperspectral images. The Kullback-Leibler pseudo-divergence function, which was specifically developed for the metrological processing of hyperspectral images, serves as the foundation of the statistics. As a demonstration, the proposed statistics are used to visualize surface variability within a set of pigment patches. They are then further exploited to detect anomalies and deterioration occurring on the patches.
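As an illustrative sketch only (the paper's exact pseudo-divergence definition is not reproduced here), a symmetrized Kullback-Leibler divergence between spectra normalized to unit sum can serve as such a spectral difference statistic, and thresholding the divergence of each pixel from the patch mean gives a simple anomaly map; the function names are our own:

```python
import numpy as np

def kl_pseudo_divergence(s1, s2, eps=1e-12):
    # Symmetrized KL divergence between two spectra treated as
    # normalized distributions (illustrative stand-in for the
    # paper's pseudo-divergence).
    p = s1 / (s1.sum() + eps)
    q = s2 / (s2.sum() + eps)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return kl_pq + kl_qp

def anomaly_map(cube, threshold):
    # Flag pixels whose spectrum diverges from the patch mean spectrum.
    h, w, bands = cube.shape
    mean_spec = cube.reshape(-1, bands).mean(axis=0)
    div = np.array([[kl_pseudo_divergence(cube[i, j], mean_spec)
                     for j in range(w)] for i in range(h)])
    return div > threshold
```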
Shifted superimposition is a resolution-enhancement method that has gained popularity in the projector industry over the last couple of years. The method consists of shifting every other projected frame spatially with subpixel precision, thereby creating a new pixel grid on the projection surface with a smaller effective pixel pitch. It remains an open question how well this technique performs in comparison with native resolution, and how high the effective resolution gain really is. To investigate these questions, we have developed a framework for simulating different superimposition methods over different image contents and evaluating the results using several image quality metrics (IQMs). We have also performed a subjective experiment in which observers rated the simulated image content, and calculated the correlation between the subjective results and the IQMs. We found that the visual information fidelity metric is the most suitable for evaluating natural superimposed images when a match with subjective judgments is desired. However, this metric does not detect the distortion in synthetic images. The multiscale structural similarity metric, which is based on the analysis of image structure, is better at detecting this distortion.
Non-contact, spatially resolved oxygenation measurement remains an open challenge in the biomedical field and in non-contact patient monitoring. Although point measurements are the clinical standard to this day, capturing regional differences in oxygenation would improve the quality and safety of care. Recent developments in spectral imaging have resulted in spectral filter array (SFA) cameras. These provide the means to acquire spatial spectral videos in real time and allow a spatial approach to spectroscopy. In this study, the performance of a 25-channel near-infrared SFA camera was studied to obtain spatial oxygenation maps of the hands during an occlusion of the left upper arm in 7 healthy volunteers. For comparison, a clinical oxygenation monitoring system, INVOS, was used as a reference. For the NIR SFA camera, oxygenation curves were derived from 2-3 wavelength bands with custom-made fast analysis software using a basic algorithm. Dynamic oxygenation changes were determined with the NIR SFA camera and the INVOS system at different regional locations of the occluded versus non-occluded hands and were found to be in good agreement. To increase the signal-to-noise ratio, the algorithm and image acquisition were optimised. The measurements were robust to different illumination conditions with NIR light sources. This study shows that real-time imaging of relative oxygenation changes over larger body areas is potentially possible.
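A basic two-band algorithm of the kind described can be sketched with the modified Beer-Lambert law: attenuation changes at two NIR bands are inverted through the extinction coefficients of oxy- and deoxyhemoglobin to obtain relative concentration changes. The coefficients below are hypothetical placeholders, and the unknown optical pathlength is ignored, so only relative changes are meaningful:

```python
import numpy as np

# Hypothetical extinction coefficients (eps_HbO2, eps_Hb) at two NIR
# bands; real values depend on the chosen wavelengths.
E = np.array([[0.30, 1.10],   # band 1
              [1.05, 0.75]])  # band 2

def delta_concentrations(I0, I):
    # Modified Beer-Lambert: attenuation changes -> relative
    # (dHbO2, dHb) changes, up to an unknown pathlength factor.
    dA = -np.log(np.asarray(I, float) / np.asarray(I0, float))
    return np.linalg.solve(E, dA)

def relative_sto2(dHbO2, dHb):
    # Fraction of the total hemoglobin change that is oxygenated.
    tot = dHbO2 + dHb
    return dHbO2 / tot if abs(tot) > 1e-12 else float("nan")
```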
Optical non-contact measurements in general, and chromophore concentration estimation in particular, have been identified as useful tools for skin assessment. Spectral estimation using a low-cost handheld device has not been studied adequately as a basis for skin assessment. Spectral measurements on the one hand, which require bulky, expensive and complex devices, and direct channel approaches on the other, which operate with simple optical devices, have both been considered and applied for skin assessment. In this study, we analyse the capabilities of spectral estimation for skin assessment in the form of chromophore concentration estimation using a prototypical low-cost optical non-contact device. A spectral estimation workflow is implemented and combined with pre-simulated Monte Carlo spectra to use spectra estimated from conventional image sensors for chromophore concentration estimation and to obtain health metrics. To evaluate the proposed approach, we performed a series of occlusion experiments and examined the capabilities of the proposed process. Additionally, the method has been applied to more general skin assessment tasks. The proposed process provides a more general representation in the form of a spectral image cube, which can be used for more advanced analysis, and the comparisons show good agreement with expectations and conventional skin assessment methods. Utilising spectral estimation in conjunction with Monte Carlo simulation could lead to low-cost, easy-to-use, handheld and multifunctional optical skin assessment, with the potential to improve skin assessment and the diagnosis of diseases.
In this paper, we present an industrial application of multispectral imaging, for density measurement of colorants in photographic
paper. We designed and developed a 9-band LED illumination based multispectral imaging system specifically for
this application in collaboration with FUJIFILM Manufacturing Europe B.V., Tilburg, Netherlands. Unlike a densitometer,
which is a spot density measurement device, the proposed system enables fast density measurement in a large area of a
photo paper. Densities of the four colorants (CMYK) at every surface point in an image are calculated from the spectral
reflectance image. Fast density measurements facilitate automatic monitoring of density changes (which are proportional to
thickness changes), which helps control the manufacturing process for quality and output consistency. Experimental results
confirm the effectiveness of the proposed system.
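As an illustration of the density computation (the system's actual channel responses and calibration are not reproduced here, and the band weights below are hypothetical), the optical density of one colorant channel at a surface point follows from the spectral reflectance as D = -log10 of the band-weighted reflectance:

```python
import numpy as np

def band_density(reflectance, weights):
    # Weighted mean reflectance over a colorant's band (hypothetical
    # response 'weights'), converted to optical density D = -log10(R).
    # Applied per pixel, this turns a spectral reflectance image into
    # a density map for that colorant.
    w = np.asarray(weights, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    r_band = np.sum(r * w) / np.sum(w)
    return -np.log10(r_band)
```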
We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine
scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of
a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were
designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to
provide a basis for applications such as the evaluation of spectral image difference measures. The database can
be downloaded freely from http://www.colourlab.no/cid.
Controlling printers so that a mixture of inks produces a specific color under a defined viewing environment requires a
spectral reflectance model that estimates reflectance spectra from nominal dot coverages. The topic of this paper is to
investigate how the accuracy of the Yule-Nielsen modified spectral Neugebauer (YNSN) model depends on ink amount. It
is shown that the performance of the YNSN model strongly depends on the maximum ink amount applied. In a cellular
implementation, this limitation mainly occurs for high coverage prints, which impacts on the optimal cell design.
Effective coverages derived from both the Murray-Davies (MD) and YNSN models show large ink spreading. As ink-jet printing is a
non-impact printing process, the ink volume deposited per unit area (pixel) is constant, leading to the hypothesis that
isolated ink dots are thinner than the full-tone ink film. Measured spectral reflectance curves show a similar
trend, which supports the hypothesis. The reduced accuracy of YNSN can thus be explained by the fact that patches
with lower effective coverage have a mean ink thickness very different from that of the full-tone patch. The effect will be
stronger for small dot coverage and large dot gain and could partially explain why the Yule-Nielsen n-factor is different
for different inks. The performance of the YNSN model could be improved with integration of ink thickness variation.
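The YNSN relation discussed above, and the effective-coverage fit used to quantify ink spreading, can be sketched for a single ink as follows (an illustrative sketch, not the paper's implementation; the simple least-squares inversion and function names are our own):

```python
import numpy as np

def ynsn_single_ink(c, r_paper, r_ink, n):
    # YNSN prediction for one ink at coverage c:
    # R^(1/n) = (1-c) * Rp^(1/n) + c * Ri^(1/n)
    rp, ri = np.asarray(r_paper, float), np.asarray(r_ink, float)
    return ((1 - c) * rp ** (1 / n) + c * ri ** (1 / n)) ** n

def effective_coverage(r_meas, r_paper, r_ink, n):
    # Least-squares effective coverage from a measured halftone
    # spectrum; the gap to the nominal coverage quantifies dot
    # gain / ink spreading.
    a = np.asarray(r_ink, float) ** (1 / n) - np.asarray(r_paper, float) ** (1 / n)
    b = np.asarray(r_meas, float) ** (1 / n) - np.asarray(r_paper, float) ** (1 / n)
    return float(np.dot(a, b) / np.dot(a, a))
```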
Multichannel printer modeling has been an active area of research in the field of spectral printing. The most commonly
used models for characterization of such systems are the spectral Neugebauer (SN) and its extensions. This work
addresses issues that can arise during calibration and testing of the SN model when modelling a 7-colorant printer. Since
most substrates are limited in their capacity to take in large amounts of ink, it is not always possible to print all colorant
combinations necessary to determine the Neugebauer primaries (NP). A common solution is to estimate the nonprintable
Neugebauer primaries from the single colorant primaries using the Kubelka-Munk (KM) optical model. In this
work we test whether a better estimate can be obtained using general radiative transfer theory, which better represents
the angular variation of the reflectance from highly absorbing media, and takes surface scattering into account. For this
purpose we use the DORT2002 model. We conclude that DORT2002 does not offer significant improvements over KM in
the estimation of the NPs, but that a significant improvement is obtained when using a simple surface scattering model. When
the estimated primaries are used as inputs to the SN model instead of measured ones, the SN model is found to perform
the same or better in terms of color difference and spectral error. If mixed measured and estimated primaries are used
as inputs to the SN model, it performs better than with either measured or estimated primaries alone.
Inspired by the concept of the colour filter array (CFA), the research community has shown much interest in
adapting the idea of CFA to the multispectral domain, producing multispectral filter arrays (MSFAs). In addition
to newly devised methods of MSFA demosaicking, there exists a wide spectrum of methods developed for CFA.
Among others, some vector based operations can be adapted naturally for multispectral purposes. In this paper,
we focus on studying two vector-based median filtering methods in the context of MSFA demosaicking. One
solves demosaicking problems by means of vector median filters, and the other applies median filtering to the
demosaicked image in spherical space as a subsequent refinement process to reduce artefacts introduced by
demosaicking. To evaluate the performance of these measures, a tool kit was constructed with the capability
of mosaicking, demosaicking and quality assessment. The experimental results demonstrated that vector
median filtering performed less well on natural images, with the exception of black-and-white images; however, the refinement
step reduced the reproduction error numerically in most cases. This demonstrates the feasibility of extending CFA
demosaicking into the MSFA domain.
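A minimal sketch of the vector median operation underlying the first method, assuming L2 distances over the spectral vectors in a window (the vector median returns one of the input vectors, so no new spectral values are invented):

```python
import numpy as np

def vector_median(vectors):
    # Vector median: the input vector minimizing the summed L2
    # distance to all other vectors in the window. Because the output
    # is always one of the inputs, spectral signatures are preserved.
    v = np.asarray(vectors, dtype=float)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2).sum(axis=1)
    return v[np.argmin(dists)]
```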
Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color difference formulae for measuring color image reproduction errors. These spatial filters attenuate imperceptible information in images, unfortunately including high-frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach that avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., the spatial domain and the intensity domain. We propose a method to determine the parameters, which are designed to adapt to the corresponding viewing conditions and to the quantity and homogeneity of the information contained in an image. Experiments and discussions are given to support the proposal. A series of perceptual experiments was conducted to evaluate the performance of our approach. The experimental sample images were reproduced with variations in six image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. Pearson's correlation between the model-predicted image difference and the observed difference was employed to evaluate the performance and to compare it with that of spatial CIELAB and an image appearance model.
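The two-Gaussian structure of the bilateral filter can be sketched as follows for a grayscale image (a minimal illustration; the adaptive parameter selection proposed in the text is not reproduced, so the sigma values here are placeholders):

```python
import numpy as np

def bilateral_filter(img, radius, sigma_s, sigma_r):
    # Each output pixel is a weighted mean of its neighborhood, where
    # the weight is a spatial Gaussian times an intensity-range
    # Gaussian. Large intensity differences get near-zero weight, so
    # strong edges survive while small-amplitude detail is smoothed.
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = img[i0:i1, j0:j1]
            sp = spatial[i0 - i + radius:i1 - i + radius,
                         j0 - j + radius:j1 - j + radius]
            rng = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = sp * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```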
In this paper, we propose a new color constancy technique that extends chromagenic color constancy.
Chromagenic based illuminant estimation methods take two shots of a scene, one without and one with a specially chosen
color filter in front of the camera lens. Here, we introduce chromagenic filters into the color filter array itself by placing
them on top of R, G or B filters and replacing one of the two green filters in the Bayer pattern with them. This allows
obtaining two images of the same scene via demosaicking: a normal RGB image, and a chromagenic image, equivalent
of RGB image with a chromagenic filter. The illuminant can then be estimated using chromagenic based illumination
estimation algorithms. The method, which we name CFA-based chromagenic color constancy (4C for short), therefore
requires neither two shots nor image registration, unlike other chromagenic-based color constancy
algorithms, making it a more practical and useful computational color constancy method for many applications.
Experiments show that the proposed color filter array based chromagenic color constancy method produces results
comparable to those of chromagenic color constancy without interpolation.
The advance and rapid development of electronic imaging technology has led the way to the production of imaging
sensors capable of acquiring good-quality digital images at high resolution. At the same time, the cost
and size of imaging devices have decreased. This has spurred increasing research interest in techniques that
use images obtained from multiple camera arrays. The use of multi-camera arrays is attractive because it allows
capturing multi-view images of dynamic scenes, enabling the creation of novel computer vision and computer
graphics applications, as well as next generation video and television systems. There are additional challenges
when using a multi-camera array, however. Due to inconsistencies in the fabrication process of imaging sensors
and filters, multi-camera arrays exhibit inter-camera color response variations. In this work we characterize
and compare the response of two digital color cameras, which have a light sensor based on the charge-coupled
device (CCD) array architecture. The results of the response characterization process can be used to model the
cameras' responses, which is an important step when constructing a multi-camera array system.
We report on subjective experiments comparing example-based regularization, total variation regularization,
and the joint use of both regularizers. We focus on the noisy deblurring problem, which generalizes image
superresolution and denoising. Controlled subjective experiments suggest that joint example-based regularization
and total variation regularization can provide subjective gains over total variation regularization alone, particularly when
the example images contain structural elements similar to those in the test image. We also investigate whether the
regularization parameters can be trained by cross-validation, and we compare the reconstructions using cross-validation
judgments made by humans or by fully automatic image quality metrics. Experiments showed that of
five image quality metrics tested, the structural similarity index (SSIM) correlates best with human judgement
of image quality, and can be profitably used to cross-validate regularization parameters. However, there is a
significant quality gap between images restored using human or automatic parameter cross-validation.
In the past few years there has been a significant volume of research in the field of multispectral image
acquisition. Most of this work has focused on multispectral image acquisition systems that
usually require multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active
lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an
extension of the color filter array (CFA) concept to produce more than three channels. We can thus introduce the concept
of the multispectral color filter array (MCFA). However, this field has not been much explored; in particular, little attention
has been given to developing systems that focus on the reconstruction of scene spectral reflectance.
In this paper, we have explored how the spatial arrangement of a multispectral color filter array affects acquisition
accuracy, by constructing MCFAs of different sizes. We have simulated acquisitions of several spectral scenes
using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA
arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and
colorimetric ΔE*ab color differences. It has been found that the precision and the quality of the reconstructed images
are significantly influenced by the spatial arrangement of the MCFA, and that the effect becomes more prominent as
the number of channels increases. We believe that MCFA-based systems can be a viable alternative for affordable
acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral
resolution. We have shown that the spatial arrangement of the array is an important design issue.
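The spectral RMS error used in the evaluation can be computed per pixel as follows (a straightforward sketch; the ΔE*ab evaluation additionally requires conversion to CIELAB under a chosen illuminant and is omitted here):

```python
import numpy as np

def spectral_rms(ref_cube, est_cube):
    # Per-pixel RMS error between a reference spectral cube and a
    # reconstructed one; cubes are shaped (height, width, bands).
    ref = np.asarray(ref_cube, dtype=float)
    est = np.asarray(est_cube, dtype=float)
    return np.sqrt(np.mean((ref - est) ** 2, axis=-1))
```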
Many methods have been developed in image processing for face recognition, especially in recent years with the rise
of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range
of the electromagnetic spectrum.
The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques
used take advantage of different spectral ranges (visible, optical infrared, and thermal infrared), either
combining them or analyzing them separately in order to extract the most appropriate information for face recognition.
We also verify the consistency of several keypoints extraction techniques in the Near Infrared (NIR) and in the Visible
Spectrum.
KEYWORDS: Printing, CMYK color model, Spectral models, Inkjet technology, Nonimpact printing, Reflectivity, Performance modeling, Opacity, Color difference, RGB color model
In the context of spectral color image reproduction by multi-channel inkjet printing a key challenge is to accurately
model the colorimetric and spectral behavior of the printer. A common approach for this modeling is to assume that the
resulting spectral reflectance of a certain ink combination can be modeled as a convex combination of the so-called
Neugebauer Primaries (NPs); this is known as the Neugebauer Model. Several extensions of this model exist, such as the
Yule-Nielsen Modified Spectral Neugebauer (YNSN) model. However, as the number of primaries increases, the
number of NPs increases exponentially; this poses a practical problem for multi-channel spectral reproduction.
In this work, the well-known Kubelka-Munk theory is used to estimate the spectral reflectances of the Neugebauer
Primaries instead of printing and measuring them, and subsequently we use these estimated NPs as the basis of our
printer modeling. We have evaluated this approach experimentally on several different paper types and on the HP
Deskjet 1220C CMYK inkjet printer and the Xerox Phaser 7760 CMYK laser printer, using both the conventional
spectral Neugebauer model and the YNSN model. We have also investigated a hybrid model with mixed NPs, half
measured and half estimated.
Using this approach we find that we achieve not only cheaper and less time-consuming model establishment, but also,
somewhat unexpectedly, improved model precision over the models using real measurements of the NPs.
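A common way to apply Kubelka-Munk theory here is to add the single colorants' K/S contributions above the paper and invert back to reflectance to estimate a mixed Neugebauer primary. The sketch below illustrates that idea only; a full implementation would also account for ink thickness and surface corrections:

```python
import numpy as np

def ks(r):
    # Kubelka-Munk K/S ratio from the reflectance of an opaque layer.
    r = np.asarray(r, dtype=float)
    return (1 - r) ** 2 / (2 * r)

def km_mix(colorant_reflectances, r_paper):
    # Estimate a mixed NP: sum each colorant's K/S contribution above
    # paper, add the paper's K/S, then invert K/S back to reflectance
    # via R = 1 + K/S - sqrt((K/S)^2 + 2*K/S).
    ks_mix = ks(r_paper) + sum(ks(r) - ks(r_paper)
                               for r in colorant_reflectances)
    return 1 + ks_mix - np.sqrt(ks_mix ** 2 + 2 * ks_mix)
```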
KEYWORDS: RGB color model, Visualization, 3D metrology, Printing, Color difference, Transform theory, Profiling, Image quality, Statistical modeling, Algorithm development
Multi-dimensional look-up tables (LUTs) are widely employed for color transformations due to their high accuracy and
general applicability. Using the LUT model generally involves the color measurement of a large number of samples. The
precision and uncertainty of these color measurements are carried directly into the LUTs and affect the
smoothness of the color transformation. This, in turn, strongly influences the quality of the reproduced color images. To
achieve high quality color image reproduction, the color transformation is required to be relatively smooth. In this study,
we have investigated the inherent characteristics of LUTs' transformation from color measurement and their effects on
the quality of reproduced images. We propose an algorithm to evaluate the smoothness of 3D LUT based color
transformations quantitatively, which is based on the analysis of 3D LUTs transformation from RGB to CIELAB and the
second derivative of the differences between adjacent points in vertical and horizontal ramps of each LUT entry. The
performance of the proposed algorithm was compared with those of two recent studies on smoothness, and the
proposed method achieved better performance.
In the context of print quality and process control, colorimetric parameters and tolerance values are clearly defined.
Calibration procedures are well defined for color measurement instruments in printing workflows. Still, using more than
one color measurement instrument to measure the same color wedge can produce clearly different results due to random
and systematic instrument errors. In situations where one instrument gives values that are just inside the
given tolerances while another instrument produces values that exceed the predefined tolerance
parameters, the question arises whether the print or proof should be approved with regard to the standard
parameters. The aim of this paper was to determine an appropriate model to characterize color measurement instruments
for printing applications in order to improve the colorimetric performance and hence the inter-instrument agreement. The
method proposed is derived from color image acquisition device characterization methods which have been applied by
performing polynomial regression with a least square technique. Six commercial color measurement instruments were
used for measuring color patches of a control color wedge on three different types of paper substrates. The
characterization functions were derived using least square polynomial regression, based on the training set of 14 BCRA
tiles colorimetric reference values and the corresponding colorimetric measurements obtained by the measurement
instruments. The derived functions were then used to correct the colorimetric values of test sets of 46 measurements of
the color control wedge patches. The corrected measurement results obtained from the applied regression model were
then used as the starting point against which the corrected measurements from the other instruments were compared to find the
most appropriate polynomial, i.e., the one yielding the smallest color difference. The obtained results demonstrate that the
proposed regression method works remarkably well for a range of different color measurement instruments used on
three types of substrates. Finally, extending the training set from 14 to 38 samples clearly
indicates that the model is robust.
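The least-squares polynomial characterization described above can be sketched as follows; a minimal version assuming a second-order polynomial with cross terms over three-channel readings (the paper compares several polynomial orders, and the term set here is one plausible choice):

```python
import numpy as np

def _expand(x):
    # Second-order polynomial terms with cross terms for 3-channel data.
    a, b, c = x[:, 0], x[:, 1], x[:, 2]
    return np.column_stack([np.ones(len(x)), a, b, c,
                            a * b, a * c, b * c,
                            a * a, b * b, c * c])

def fit_correction(meas, ref):
    # Least-squares fit of a polynomial correction mapping one
    # instrument's readings to the reference (e.g. BCRA tile) values.
    M = _expand(np.asarray(meas, dtype=float))
    coefs, *_ = np.linalg.lstsq(M, np.asarray(ref, dtype=float), rcond=None)
    return coefs

def apply_correction(meas, coefs):
    # Apply the fitted correction to new measurements.
    return _expand(np.asarray(meas, dtype=float)) @ coefs
```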
Multispectral color imaging is a promising technology that can solve many of the problems of traditional RGB color
imaging. However, it still lacks widespread and general use because of its limitations. State-of-the-art multispectral imaging
systems need multiple shots, making them not only slower but also incapable of capturing scenes in motion. Moreover, such
systems are mostly costly and complex to operate. The purpose of the work described in this paper is to propose a one-shot
six-channel multispectral color image acquisition system using a stereo camera or a pair of cameras in a stereoscopic
configuration, together with a pair of optical filters. The best pair of filters is selected from among readily available filters such
that they modify the sensitivities of the two cameras so that these are spread reasonably well throughout the
visible spectrum and give optimal reconstruction of spectral reflectance and/or color. As the cameras are in a stereoscopic
configuration, the system is capable of acquiring 3D images as well, and stereo matching algorithms provide a solution to
the image alignment problem. Thus the system can be used as a "two-in-one" multispectral-stereo system. However, this
paper mainly focuses on the multispectral part. Both simulations and experiments have shown that the proposed system
performs well spectrally and colorimetrically.
Image quality metrics have become more and more popular in the image processing community. However, so far, no one
has been able to define an image quality metric well correlated with the percept of overall image quality. One of the causes
is that image quality is multi-dimensional and complex. One approach to bridge the gap between perceived and calculated
image quality is to reduce the complexity of image quality, by breaking the overall quality into a set of quality attributes. In
our research we have presented a set of quality attributes built on existing attributes from the literature. The six proposed
quality attributes are: sharpness, color, lightness, artifacts, contrast, and physical. This set keeps the dimensionality to a
minimum. An experiment validated the quality attributes as suitable for image quality evaluation.
The process of applying image quality metrics to printed images is not straightforward, because image quality metrics
require a digital input. A framework has been developed for this process, which includes scanning the print to get a digital
copy, image registration, and the application of image quality metrics. With quality attributes for the evaluation of image
quality and a framework for applying image quality metrics, a selection of suitable image quality metrics for the different
quality attributes has been carried out. Each of the quality attributes has been investigated, and an experimental analysis
carried out to find the most suitable image quality metrics for the given quality attributes. For the sharpness attribute,
the Structural SIMilarity index (SSIM) by Wang et al. (2004) is the most suitable, while for the other attributes further
evaluation is required.
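As context for the metric singled out above, the global (single-window) form of SSIM can be sketched as follows. This is a simplification for illustration: the actual SSIM of Wang et al. (2004) averages a local version of this statistic over sliding windows, and the toy image data here is invented.

```python
# Sketch: global (single-window) SSIM between two grayscale images in [0, 1].
# Real SSIM averages a local version over sliding windows; this simplified
# whole-image statistic is shown for illustration only.

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((a - my) ** 2 for a in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical images give SSIM = 1; a blurred copy with all detail lost scores low.
original = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9]
blurred = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```

The structural term (the covariance) is what makes SSIM sensitive to the loss of detail associated with sharpness, which is consistent with its selection for that attribute.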
The evaluation of perceived image quality in color prints is a complex task due to its subjectivity and dimensionality. The perceived quality of an image is influenced by a number of different quality attributes. It is difficult and complicated to evaluate the influence of all attributes on overall image quality, and their influence on other attributes. Because of this difficulty, the most important attributes of a color image should be identified to achieve a more efficient and manageable evaluation of the image's quality. Based on a survey of the existing literature and a psychophysical experiment, we identify and categorize existing image quality attributes to propose a refined selection of meaningful ones for the evaluation of color prints.
KEYWORDS: RGB color model, Colorimetry, Visualization, Electronic imaging, Visual system, Curium, Human vision and color perception, Current controlled current source, Electroluminescence, Image compression
In this paper, we propose and discuss some approaches for measuring perceptual contrast in digital images. We
start from previous algorithms by implementing different local measures of contrast and a parameterized way to
recombine local contrast maps and color channels. We propose the idea of recombining the local contrast maps
and the channels using particular measures taken from the image itself as weighting parameters. Exhaustive
tests and results are presented and discussed, in particular we compare the performance of each algorithm in
relation to perceived contrast by observers. Current results show an improvement in correlation between contrast
measures and observers' perceived contrast when the variance of each of the three color channels, taken separately, is used as a
weighting parameter for the local contrast maps.
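The recombination idea described above can be sketched as a variance-weighted average of per-channel contrast maps. The function names and all data below are invented for illustration; the contrast maps themselves are assumed to come from some local contrast operator not shown here.

```python
# Sketch: recombine per-channel local contrast maps into one measure,
# weighting each channel by the variance of that channel in the image.
# The contrast maps are assumed given; all values are illustrative.

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def weighted_contrast(channel_pixels, contrast_maps):
    # channel_pixels / contrast_maps: dicts keyed by channel name.
    weights = {c: variance(p) for c, p in channel_pixels.items()}
    total = sum(weights.values())
    # Mean of each channel's local contrast map, combined by variance weight.
    return sum(weights[c] / total * (sum(m) / len(m))
               for c, m in contrast_maps.items())

channels = {
    "R": [0.2, 0.8, 0.3, 0.7],   # high-variance channel
    "G": [0.5, 0.5, 0.5, 0.5],   # flat channel, gets zero weight
    "B": [0.4, 0.6, 0.4, 0.6],
}
maps = {
    "R": [0.6, 0.6, 0.6, 0.6],
    "G": [0.9, 0.9, 0.9, 0.9],   # ignored: its channel carries no variance
    "B": [0.2, 0.2, 0.2, 0.2],
}
score = weighted_contrast(channels, maps)
```

A flat channel contributes nothing to the combined measure, which matches the intuition that a channel with no variation carries no perceived contrast.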
KEYWORDS: Cameras, Projection systems, Color difference, Error analysis, Digital cameras, Displays, Spectrophotometry, RGB color model, Digital imaging, Transform theory
In this paper, the performance of screen compensation based on previous work by Nayar et al. and Ashdown et al.,
together with five different camera characterization methods, is evaluated.
Traditionally, colorimetric characterization of cameras consists of two steps: a linearization and a polynomial
regression. In this research, two different methods of linearization as well as the use of polynomial regression up to
fourth order have been investigated, based both on the standard deviation and the average of color differences. The
experiment consists of applying the different methods 100 times on training sets of 11 different sizes and to measure
the color differences. Both CIELAB and CIEXYZ are used for regression space. The use of no linearization and
CIELAB is also investigated. The conclusion is that the methods that use linearization as part of the model are more
dependent on the size of the training set, while the method that converts directly to CIELAB seems to be more
dependent on the order of the polynomial used for regression. We also noted that linearization methods resulting in low
error in the CIEXYZ color space do not necessarily lead to good results in the CIELAB space. Overall, CIELAB gave
better results than CIEXYZ, being both more stable and more accurate.
Finally, the camera characterization with the best result was combined into a complete screen compensation
algorithm. Using CIELAB as a regression space, the compensation achieved results between 50 and 70 percent more
similar to the same color projected on a white screen than when using CIEXYZ (as measured by a spectrophotometer,
comparing absolute color differences in CIELAB) in our experimental setup.
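The regression step described above can be sketched as follows: a polynomial expansion of (linearized) camera RGB is fitted to CIEXYZ by least squares. For brevity this sketch uses only first-order terms and a synthetic device transform of my own invention; the paper's experiments extend the term list up to fourth order.

```python
# Sketch: polynomial regression from (linearized) camera RGB to CIEXYZ.
# First-order terms [1, r, g, b] are used for brevity; higher orders just
# extend the term list. The training data and device are synthetic.

def terms(rgb):
    r, g, b = rgb
    return [1.0, r, g, b]  # extend with r*r, r*g, ... for higher orders

def fit_least_squares(A, y):
    # Solve the normal equations (A^T A) x = (A^T y) by Gauss-Jordan elimination.
    n = len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         + [sum(A[k][i] * y[k] for k in range(len(A)))] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r_: abs(M[r_][c]))
        M[c], M[p] = M[p], M[c]
        for r_ in range(n):
            if r_ != c:
                f = M[r_][c] / M[c][c]
                M[r_] = [a - f * b for a, b in zip(M[r_], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical ground-truth device: a linear RGB-to-XYZ map with offsets.
def true_xyz(rgb):
    r, g, b = rgb
    return (0.05 + 0.6 * r + 0.3 * g + 0.1 * b,
            0.02 + 0.3 * r + 0.6 * g + 0.1 * b,
            0.01 + 0.1 * r + 0.1 * g + 0.8 * b)

training_rgb = [(0.1, 0.2, 0.3), (0.8, 0.1, 0.4), (0.3, 0.9, 0.2),
                (0.6, 0.6, 0.6), (0.2, 0.5, 0.9), (0.9, 0.8, 0.1)]
A = [terms(rgb) for rgb in training_rgb]
coeffs = [fit_least_squares(A, [true_xyz(rgb)[ch] for rgb in training_rgb])
          for ch in range(3)]

def predict(rgb):
    t = terms(rgb)
    return tuple(sum(c * v for c, v in zip(row, t)) for row in coeffs)
```

Because the synthetic device is exactly first-order, the regression recovers it; on real data, higher-order terms trade accuracy on the training set against the generalization issues the paper discusses.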
Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an
image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information
between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast.
Experiments have shown that such algorithms can result in significantly improved reproduction of some images
compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted
artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether
a spatial algorithm is likely to perform better or worse than a good, non-spatial algorithm. Our approach starts
by detecting the relative amount of areas in the image that are made up of uniformly colored pixels, as well
as the amount of areas that contain detail in out-of-gamut regions. A weighted difference is computed from
these numbers, and we show that the result has a high correlation with the observed performance of the spatial
algorithm in a previously conducted psychophysical experiment.
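The predictor described above can be sketched on a toy grayscale "image" where out-of-gamut membership is supplied as a mask. The uniformity threshold, the weights, and the data are all invented for illustration; the paper's actual features and weighting are more elaborate.

```python
# Sketch: predict whether a spatial gamut mapping algorithm is likely to
# outperform a non-spatial one. We count the fraction of "uniform" pixels
# (all 4-neighbours within a small threshold) and the fraction of detailed
# pixels lying out of gamut, then take a weighted difference.

def predictor(img, out_of_gamut, uniform_thr=0.02, w_detail=1.0, w_uniform=1.0):
    h, w = len(img), len(img[0])
    uniform = detail_oog = 0
    for y in range(h):
        for x in range(w):
            nbrs = [img[y + dy][x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            if all(abs(n - img[y][x]) <= uniform_thr for n in nbrs):
                uniform += 1               # uniform area: spatial artifacts risk
            elif out_of_gamut[y][x]:
                detail_oog += 1            # out-of-gamut detail: spatial gain
    n = h * w
    # Positive score: spatial algorithm expected to perform better.
    return w_detail * detail_oog / n - w_uniform * uniform / n

flat = [[0.5] * 4 for _ in range(4)]                  # fully uniform image
no_oog = [[False] * 4 for _ in range(4)]
board = [[(x + y) % 2 for x in range(4)] for y in range(4)]  # all detail
all_oog = [[True] * 4 for _ in range(4)]
```

A fully uniform in-gamut image scores -1 (spatial processing risks artifacts for no gain), while an image of pure out-of-gamut detail scores +1.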
KEYWORDS: RGB color model, Cameras, Digital cameras, Scanners, Instrument modeling, Digital imaging, Device simulation, Visualization, Reverse modeling, Analog electronics
The introduction of digital intermediate workflow in movie production has made visualization of the final image
on the film set increasingly important. Images that have been color corrected on the set can also serve as a basis
for color grading in the laboratory. In this paper we suggest and evaluate an approach that has been used to
simulate the appearance of different film stocks. The GretagMacbeth Digital ColorChecker was captured using
both a Canon EOS 20D digital camera and an analog film camera. The film was scanned using an Arri film
scanner. The images of the color chart were then used to perform a colorimetric characterization of these devices
using models based on polynomial regression. By using the reverse model of the digital camera and the forward
model of the analog film chain, the output of the film scanner was simulated. We also constructed a direct
transformation using regression on the RGB values of the two devices. A different color chart was then used as
a test set to evaluate the accuracy of the transformations, where the indirect model was found to provide the
required performance for our purpose without compromising the flexibility of having an independent profile for
each device.
Digital halftoning is used to reproduce a continuous-tone image with a printer. One such halftoning algorithm,
error diffusion, suffers from certain artifacts, one of which is commonly denoted as worms. We propose
a simple measure for detection of worm artifacts. The proposed measure is evaluated by a psychophysical
experiment, where 4 images were reproduced using 5 different error diffusion algorithms. The results indicate a
high correlation between the predicted worms and perceived worms.
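The worm measure itself is not specified in the abstract and is not reproduced here; as context, the class of algorithm it analyzes can be sketched with classic Floyd-Steinberg error diffusion, whose directional error propagation is what gives rise to worm artifacts.

```python
# Sketch: Floyd-Steinberg error diffusion of a grayscale image in [0, 1]
# to a binary halftone. This is the class of algorithm in which worm
# artifacts arise; the paper's worm-detection measure is not shown here.

def floyd_steinberg(img):
    h, w = len(img), len(img[0])
    buf = [row[:] for row in img]          # working copy accumulating error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # Distribute the quantization error to unprocessed neighbours
            # with the standard 7/16, 3/16, 5/16, 1/16 weights.
            for dy, dx, wgt in ((0, 1, 7 / 16), (1, -1, 3 / 16),
                                (1, 0, 5 / 16), (1, 1, 1 / 16)):
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    buf[y + dy][x + dx] += err * wgt
    return out

gray = [[0.25] * 8 for _ in range(8)]      # flat 25% tone
halftone = floyd_steinberg(gray)
```

On a flat tone like this, the serpentine-free raster order tends to align the resulting dots into the diagonal chains perceived as worms, which is exactly the artifact a detection measure would have to quantify.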
There is growing interest in video-based solutions for people monitoring and counting in business and security
applications. Compared to classic sensor-based solutions the
video-based ones allow for more versatile functionalities,
improved performance with lower costs. In this paper, we propose a real-time system for people counting
based on single low-end non-calibrated video camera.
The two main challenges addressed in this paper are: robust estimation of the scene background and the number
of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely,
e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation
algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination
changes, and changes in static objects, background subtraction is performed using an adaptive background model
(updated over time based on motion information) and automatic thresholding. Furthermore, post-processing
of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are
tracked using an adaptive Kalman filter, allowing a robust estimation of the objects future positions even under
heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates.
Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
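The adaptive background model described above can be sketched as a running-average update that is frozen wherever motion is detected, so that moving people are not absorbed into the background. The learning rate, threshold, and frame data below are illustrative assumptions, not the paper's parameters.

```python
# Sketch: adaptive background subtraction for people counting.
# Background pixels are updated with a running average, but only where no
# motion is detected; alpha and the threshold are illustrative values.

def update_background(bg, frame, alpha=0.05, thr=0.1):
    fg_mask, new_bg = [], []
    for brow, frow in zip(bg, frame):
        fg_row, bg_row = [], []
        for b, f in zip(brow, frow):
            moving = abs(f - b) > thr
            fg_row.append(moving)
            # Update only static pixels; keep the model where motion occurs.
            bg_row.append(b if moving else (1 - alpha) * b + alpha * f)
        fg_mask.append(fg_row)
        new_bg.append(bg_row)
    return new_bg, fg_mask

bg = [[0.2, 0.2, 0.2], [0.2, 0.2, 0.2]]
frame = [[0.2, 0.9, 0.2], [0.2, 0.9, 0.2]]   # a "person" in the middle column
bg2, mask = update_background(bg, frame)
```

In a full system, the resulting foreground mask would then be cleaned of shadows in HSV space and fed to the Kalman tracker, as the abstract describes.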
Eye tracking, as a quantitative method for collecting eye movement data, requires accurate knowledge of the eye
position, since eye movements can provide indirect evidence about what the subject sees. In this study two eye tracking
devices have been compared, a Head-mounted Eye Tracking Device (HED) and a Remote Eye Tracking Device (RED).
The precision of both devices has been evaluated, in terms of gaze position accuracy and stability of the calibration. For
the HED it has been investigated how to register data to real-world coordinates. This is needed since coordinates
collected by the HED eye tracker are relative to the position of the subject's head and not relative to the actual stimuli, as
is the case for the RED device. Results show that the precision degrades with time for both eye tracking
devices. The precision of the RED is better than that of the HED, and the difference between them is around 10-16 pixels (5.584
mm). The distribution of gaze positions for the HED and RED devices was expressed by a percentage representation of the
point of regard in areas defined by the viewing angle. For both eye tracking devices the gaze position accuracy has been
95-99% at a 1.5-2° viewing angle. The stability of the calibration was investigated at the end of the experiment, and the
obtained result was not statistically significant, although the distribution of the gaze positions is larger at the end of the
experiment than at the beginning.
We have applied image difference metrics to a set of images in order to investigate how well they predict
perceived image difference. We carried out a psychophysical experiment with 25 observers along with a recording
of the observers' gaze positions. The image difference metrics used were CIELAB ΔEab, S-CIELAB, the hue angle
algorithm, iCAM and SSIM. A frequency map from the eye tracker data was applied as a weighting to the image
difference metrics. The results indicate an improvement in correlation between the predicted image difference
and the perceived image difference.
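The gaze weighting described above amounts to a weighted average of the per-pixel difference map, with the eye-tracker frequency map as weights. The ΔE values and fixation counts below are invented for illustration.

```python
# Sketch: weight a per-pixel image difference map (e.g. CIELAB Delta E) by
# an eye-tracker fixation frequency map, so that differences in attended
# regions count more. All map values are illustrative.

def weighted_difference(diff_map, freq_map):
    num = sum(d * f for drow, frow in zip(diff_map, freq_map)
              for d, f in zip(drow, frow))
    den = sum(f for frow in freq_map for f in frow)
    return num / den

delta_e = [[1.0, 8.0],
           [1.0, 1.0]]
fixations = [[0.0, 10.0],    # observers looked almost only at the top-right
             [1.0, 1.0]]

plain_mean = sum(map(sum, delta_e)) / 4
gaze_score = weighted_difference(delta_e, fixations)
```

Here the large difference sits exactly where observers looked, so the gaze-weighted score exceeds the unweighted mean, which is the mechanism by which the weighting can improve correlation with perceived difference.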
KEYWORDS: RGB color model, 3D modeling, LCDs, Instrument modeling, Optimization (mathematics), Data modeling, Digital Light Processing, CRTs, Projection devices, Projection systems
We have defined an inverse model for colorimetric characterization of additive displays. It is based on an
optimized three-dimensional tetrahedral structure. In order to minimize the number of measurements, the
structure is defined using a forward characterization model. Defining a regular grid in the device-dependent
destination color space leads to heterogeneous interpolation errors in the device-independent source color space.
The parameters of the function used to define the grid are optimized using a globalized Nelder-Mead simplex
downhill algorithm. Several cost functions are tested on several devices. We have performed experiments with
a forward model which assumes variation in chromaticities (PLVC), based on one-dimensional interpolations for
each primary ramp along X, Y and Z (3×3×1-D). Results on four devices (two LCD projectors, one DLP projector, and
one LCD monitor) are shown and discussed.
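A minimal sketch of a PLVC-style forward model along the lines described above: measured XYZ for a ramp of each primary is interpolated independently per X/Y/Z component (the 3×3×1-D structure), and the contributions are summed under an additivity assumption, with the black level (included in each ramp) subtracted so it is counted only once. The ramp measurements below are invented.

```python
# Sketch: PLVC-style forward display model. For each primary, measured XYZ
# at a few digital levels is interpolated (1-D, per X/Y/Z component); the
# display output is the sum of the three primaries' contributions plus a
# single black level. Ramp measurements here are invented.

def interp(levels, values, d):
    # Piecewise-linear interpolation of a primary ramp.
    for (l0, v0), (l1, v1) in zip(zip(levels, values),
                                  zip(levels[1:], values[1:])):
        if l0 <= d <= l1:
            t = (d - l0) / (l1 - l0)
            return v0 + t * (v1 - v0)
    return values[-1]

LEVELS = [0, 128, 255]
BLACK = (0.5, 0.5, 0.5)                       # measured XYZ of black
RAMPS = {                                     # measured XYZ, black included
    "R": [(0.5, 0.5, 0.5), (8.0, 4.0, 1.0), (20.0, 10.0, 2.0)],
    "G": [(0.5, 0.5, 0.5), (6.0, 14.0, 3.0), (15.0, 35.0, 6.0)],
    "B": [(0.5, 0.5, 0.5), (3.0, 2.0, 16.0), (7.0, 4.0, 40.0)],
}

def forward(r, g, b):
    out = list(BLACK)
    for name, d in (("R", r), ("G", g), ("B", b)):
        for ch in range(3):
            ramp_ch = [v[ch] for v in RAMPS[name]]
            out[ch] += interp(LEVELS, ramp_ch, d) - BLACK[ch]
    return tuple(out)
```

Inverting such a model over an optimized tetrahedral grid, as the abstract describes, is what turns this forward mapping into a colorimetric characterization usable in the display direction.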
One of the latest developments for pre-press applications is the concept of soft proofing, which aims to provide an accurate
preview on a monitor of how the final document will appear once it is printed. At the core of this concept is the problem
of identifying, for any printed color, the most similar color the monitor can display. This problem is made difficult by
such factors as varying viewing conditions, color gamut limitations, and the less-studied factor of time spacing. Color matching
experiments are usually done by examining samples viewed simultaneously. However, in soft proofing applications, the
proof and the print are not always viewed together. This paper attempts to shed more light on the difference between
simultaneous and time-spaced color matching, in order to contribute to improving the accuracy of soft proofs. A color
matching experiment setup has been established in which observers were asked to match a color patch displayed on a
LCD monitor, by adjusting its RGB values, to another color patch printed out on paper. In the first part of the experiment
the two colors were viewed simultaneously. In the second part, the observers were asked to produce the match according
to a previously memorized color. According to the obtained results, the color appearance attributes lightness and chroma
were the most difficult components for the observers to remember, generating large differences from the simultaneous
match, whereas hue was the component that varied the least. This indicates that for soft proofing, getting the hues right
is of primary importance.
We aim to print spectral images using spectral vector error diffusion. Vector error diffusion produces good-quality
halftoned images, but diffusing the error in the image during the halftoning process is very slow due to error
accumulation. In spectral images each pixel is a reflectance, and the accumulation of error can completely modify
the shape of the reflectance. This phenomenon is amplified when data are outside the gamut of the printer. To
control the diffusion of error and to speed up spectral vector error diffusion, we preprocess the
spectral image by applying spectral gamut mapping, and we test the shape of the reflectances by keeping them in a
range of feasible values. Our spectral gamut mapping is based on the inversion of the spectral Neugebauer printer
model. After preprocessing, the spectral image to be halftoned is the closest estimate the printer can make
of it with the available colorants. We apply spectral vector error diffusion to spectral images and evaluate
the halftoning by simulation. We use a seven-channel printer which we assume has stable inks and no dot gain
(with a large set of inks we increase the variability of reflectances the printer can produce). Our preprocessing
and error control have shown promising results.
This article deals with noisy and variable-size color textures. It also considers quantization methods and how such methods change the final results. The method we use to analyze the robustness of the texture parameters consists of an auto-classification of modified textures. Texture parameters are computed for a set of original texture samples and stored in a database. Such a database is created for each quantization method. Textures from the set of original samples are then modified, possibly quantized, and classified according to classes determined from the precomputed database. A classification is considered incorrect if the original texture is not retrieved. This method is tested with three texture parameters: the auto-correlation matrix, the co-occurrence matrix, and directional local extrema, as well as three quantization methods: principal component analysis, color cube slicing, and RGB binary space slicing. The last two methods compute only three RGB bands but could be extended to more. Our results show that, with or without quantization, the autocorrelation matrix parameter is less sensitive to noise and to scaling than the two other tested texture parameters. This implies that the autocorrelation matrix should probably be preferred for texture analysis under non-controlled conditions, typically industrial applications where images may be noisy. Our results also show that PCA quantization does not change the results, whereas the two other quantization methods change them dramatically.
Modern digital imaging workflows typically involve a large number of different imaging technologies and media. In order to assure the quality of such workflows, there is a need to quantify how reproduced images have been changed by the reproduction process, and how much these changes are perceived by the human eye. The goal of this study is to investigate whether current color image difference formulae can be used to this end, specifically with regards to the image degradations induced by color gamut mapping.
We have applied image difference formulae based on CIELAB, S-CIELAB, and iCAM to a set of images, which have been processed by several state-of-the-art color gamut mapping algorithms. The images have also been evaluated by psychophysical experiments on a CRT monitor. We have not found any statistically significant correlation between the calculated color image differences and the visual evaluations.
We have examined the experimental results carefully, in order to understand the poor performance of the color difference calculations, and to identify possible strategies for improving the formulae. For example, S-CIELAB and iCAM were designed to take into account factors such as spatial properties of human vision, but there might be other important factors to be considered to quantify image quality. Potential factors include background/texture/contrast sensitivity effect, human viewing behaviour/area of interest, and memory colors.
We present a new approach to optically calibrate a multispectral imaging system based on interference filters. Such a system typically suffers from some blurring of its channel images. Because the effectiveness of spectrum reconstruction depends heavily on the quality of the acquired channel images, and because this blurring negatively affects them, a method for deblurring and denoising them is required. The blur is modeled as a uniform intensity distribution within a circular disk, which allows us to characterize, quantitatively, the degradation of each channel image. To reduce the blur globally, the best channel for focus adjustment is chosen as the one requiring minimal corrections to the other channels. Then, for a given acquisition, the restoration can be performed with the computed parameters using adapted Wiener filtering. This process of optical calibration is evaluated on real images and shows large improvements, especially when the scene is detailed.
A method is proposed for performing spectral gamut mapping, whereby spectral images can be altered to fit within an approximation of the spectral gamut of an output device. Principal component analysis (PCA) is performed on the spectral data, in order to reduce the dimensionality of the space in which the method is applied. The convex hull of the spectral device measurements in this space is computed, and the intersection between the gamut surface and a line from the center of the gamut towards the position of a given spectral
reflectance curve is found. By moving the spectra that are outside the spectral gamut towards the center until the gamut is encountered, a spectral gamut mapping algorithm is defined. The spectral gamut is visualized by approximating the intersection of the gamut and a 2-dimensional plane. The resulting outline is shown along with the center of the gamut and the position of a spectral reflectance curve. The spectral gamut mapping algorithm is applied to spectral data from the Macbeth Color Checker and test images, and initial results show that the amount of clipping increases with the number of dimensions used.
Calibration targets are widely used to characterize imaging devices and estimate optimal profiles to map the response of one device to the space of another. The question addressed in this paper is how many surfaces in a calibration target are needed to account for the whole target perfectly. To answer this question accurately, we first note that the reflectance spectra space is closed and convex. Hence the extreme points of the convex hull of the data enclose the whole target. It is thus sufficient to use the extreme points to represent the whole set. Further, we introduce a volume projection algorithm to reduce the extremes to a user-defined number of surfaces,
such that the remaining surfaces are more important, i.e. account for a larger number of surfaces, than the rest. When testing our algorithm using the Munsell book of colors of 1269 reflectances, we found that as few as 110 surfaces were sufficient to account for the rest of the data, and as few as 3 surfaces accounted for 86% of the
volume of the whole set.
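The key observation above, that only the extreme points of the convex hull are needed to represent the full set, can be illustrated in 2-D with Andrew's monotone chain algorithm; real reflectance spectra live in a much higher-dimensional space, and the sample points below are invented.

```python
# Sketch: find the extreme points of a 2-D point set via Andrew's monotone
# chain convex hull. Every other point is a convex combination of these,
# illustrating why a few "extreme" surfaces can represent a whole target.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A square of extreme points plus interior points that add no information.
samples = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3), (3, 1)]
extremes = convex_hull(samples)
```

The interior points fall inside the hull of the four corners and so are discarded, mirroring how the paper reduces 1269 Munsell reflectances to a small set of representative surfaces.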
Projection displays generally do not reproduce colors evenly at different locations of the display. Depending on the display technology, the non-uniformity may be in luminance only, typically due to optical effects in the lens, or in all color dimensions of luminance, chroma and hue. Even though this non-uniformity often remains unnoticed by the user, for certain applications such as tiling/stitching of projection displays, the non-uniformity is an important problem.
In this study we investigate the feasibility of using an inexpensive webcam to correct the projection display non-uniformity. Two main approaches are proposed and evaluated, one using colorimetric characterization of camera and display, and another closed-loop approach. Both approaches are based on displaying images that should ideally have a uniform color distribution, capturing the displayed images with the webcam, and using these captured images to create a correction function, which is then applied to images in order to correct them.
Our results show that the feasibility of the proposed methods depends heavily on the qualities of the equipment involved. For standard low-end webcams it is generally difficult to obtain reliable device-independent color measurements needed for the colorimetric characterization approach, but the direct approach still gives reasonable results.
KEYWORDS: Reflectivity, Sensors, Cameras, Systems modeling, Imaging systems, Data modeling, Multispectral imaging, Error analysis, Visualization, CMYK color model
If digital cameras and scanners are to be used for colour measurement it is necessary to correct their device responses to device-independent colour co-ordinates, such as CIE tristimulus values. In order to do this it is sufficient to recover the underlying spectral reflectance functions from a scene at each pixel. Traditionally, linear methods are used to transform device responses to reflectance values. Recently, however, several non-linear methods have been applied to this problem, including generic methods such as neural networks, more novel approaches such as sub-manifold approximation and approaches based upon quadratic programming.
In this paper we apply polynomial models to the recovery of reflectance. We perform a number of simulations with both tri-chromatic and multispectral imaging systems to determine their accuracy and generalisation performance. We find that, although higher order polynomials seem to be superior to linear methods in terms of accuracy, the generalisation performance for the two methods is approximately equivalent. This suggests that the advantage of polynomial models may only be seen when the training and test data are statistically similar. Furthermore, the experiments with multispectral systems suggest that the improvement using high order polynomials on training data is reduced when the number of sensors is increased.
Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still common for users of projection displays to find that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. Our DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in color between the projectors become smaller and the colors appear more correct. For one device, the average ΔE*ab color difference compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and are therefore harder to predict.
The spectral integrator at the University of Oslo consists of a lamp whose light is dispersed into a spectrum by means of a prism. Using a transmissive LCD panel controlled by a computer, certain fractions of the light in different parts of the spectrum are masked out. The remaining spectrum is integrated and the resulting colored light is projected onto a dispersing plate. Also attached to the computer is a spectroradiometer measuring the projected light, thus making the spectral integrator a closed-loop system. One main challenge is the generation of stimuli of arbitrary spectral power distributions. We have solved this by means of a computational calibration routine: vertical lines of pixels within the spectral window of the LCD panel are opened successively and the resulting spectral power distribution on the dispersing plate is measured. A similar procedure for the horizontal lines gives, under certain assumptions, the contribution from each opened pixel. Thereby, light of any spectral power distribution can be generated by means of a fast iterative heuristic search algorithm. The apparatus is convenient for research within the fields of color vision, color appearance modelling, multispectral color imaging, and spectral characterization of devices ranging from digital cameras to solar cell panels.
KEYWORDS: RGB color model, Cameras, LCDs, Projection systems, Data modeling, Instrument modeling, Color difference, Data conversion, Image resolution, Digital cameras
The need for color consistency throughout an imaging system has made color management an important field. A key to successful color management is to find adequate models for colorimetric characterization of devices, giving accurate mappings between the color spaces of individual devices and a device-independent color space. Due to the considerable spatial non-uniformity typically found with projection displays, a conventional model for colorimetric characterization is only valid at the position where the characterization data was measured. In this study a colorimetric camera is introduced and evaluated as a supplement to the traditional spectroradiometer. Inspired by the fact that we are now able to conveniently collect colorimetric data with high spatial resolution, we propose a new global characterization model which enables consistent color reproduction over the entire display. The performance of the global characterization was evaluated based on two criteria: the absolute characterization accuracy of a color displayed at the center, and the relative accuracy across the display. The absolute accuracy was tested by displaying 20 random color patches at the center, giving an average color difference ΔEab = 3.66 between measured and predicted colors. The relative accuracy across the display was tested by using uniform tristimulus values as input and measuring non-uniformities in the displayed images. The average color difference across the display for a set of 12 test images was ΔEab = 2.59. Applications for the proposed global characterization include high-quality image display for commerce, design, and simulation applications, and particularly the stitching of multiple projector images.
KEYWORDS: Scanners, Printing, RGB color model, Nonimpact printing, Inkjet technology, Color difference, Color reproduction, 3D scanning, Photography, Laser scanners
Due to the increasing popularity and affordability of color imaging devices, color characterization for these devices has become an important subject. In other words, a set of color profiles needs to be generated for each device to transform the device-dependent color space to a device-independent one. This paper concentrates on the color characterization of scanners.
In this article we describe the experimental setup of a multispectral image acquisition system consisting of a professional monochrome CCD camera and a tunable filter whose spectral transmittance can be controlled electronically. We have performed a spectral characterization of the acquisition system taking into account the acquisition noise. To convert the camera output signals to device-independent data, two main approaches are proposed and evaluated. One consists in applying regression methods to convert from the K camera outputs to a device-independent color space such as CIEXYZ or CIELAB. The other is based on a spectral model of the acquisition system. By inverting the model using a Principal Eigenvector approach, we estimate the spectral reflectance of each pixel of the imaged surface.
KEYWORDS: Scanners, RGB color model, Imaging systems, Digital imaging, Printing, Image quality, Image processing, Data conversion, Color management, Digital color imaging
To achieve high image quality throughout a digital imaging system, the first requirement is to ensure the quality of the device that captures real-world physical images to digital images, for example a desktop scanner. Several factors have influence on this quality: optical resolution, bit depth, spectral sensitivities, and acquisition noise, to mention a few. In this study we focus on the colorimetric capabilities of the scanner, that is, the scanner's ability to deliver quantitative device-independent digital information about the colors of the original document. We propose methods to convert from the scanner's device-dependent RGB color space to the standard device-independent color space sRGB. The methods have been evaluated using several different desktop scanners. Our results are very good: mean CIELAB ΔE*ab color errors as low as 1.4. We further discuss advantages and disadvantages of a digital color imaging system using the sRGB space for image exchange, compared to using other color architectures.
This paper addresses digital techniques used to automatically correct those color photographs whose range of transmittance densities is too large for visually acceptable image reproduction. The first step consists in calibrating the image acquisition devices, and results are shown for two different models. Then, we present a method inspired by photographic techniques that uses modified binary masks to enhance negatives of too high contrast. This method can be applied in an industrial environment such as in photographic mini-laboratories.
In order to properly calibrate an electronic camera for a variety of illuminants it is necessary to estimate the spectral sensitivity of the camera. This spectral characterization is obtained by measuring a set of samples of known spectral reflectances and by inverting the resulting system of linear equations. In the presence of noise, this system inversion is not straightforward. We describe several approaches to this problem. In particular we show that the choice of samples is of great importance for the quality of the characterization, and we present an algorithm for the choice of a reduced number of samples.