Wavefront printing of digitally designed holograms has attracted attention recently. In this printing method, a spatial light modulator (SLM) displays the hologram data, and the wavefront is reproduced by irradiating the hologram with a reference light, in the same way as in electronic holography. However, the pixel count of current SLM devices is insufficient to display an entire hologram. To generate a practical digitally designed hologram, the entire hologram data is divided into a set of sub-holograms, and the wavefront reproduced by each sub-hologram is sequentially recorded in a tiling manner using an X-Y motorized stage. Owing to the limited positioning accuracy of the X-Y motorized stage and the temporally incoherent recording, the phase continuity of the recorded/reproduced wavefront is lost between neighboring sub-holograms. In this paper, we generate holograms with sub-holograms of different sizes, with and without overlap, and examine how the sub-hologram size affects the reconstructed images. The results show that the reconstructed images degrade as the sub-hologram size decreases, and that wavefront printing with overlap causes little or no degradation in quality.
KEYWORDS: Wavefronts, Printing, 3D image reconstruction, Holograms, Holography, Diffraction, Spatial light modulators, 3D acquisition, 3D image processing, 3D printing
A hologram recording technique, generally called a "wavefront printer," has been proposed by several research groups for static three-dimensional (3D) image printing. Because the pixel count of current spatial light modulators (SLMs) is insufficient to reconstruct the entire wavefront in the recording process, the hologram data is typically divided into a set of sub-holograms, and each wavefront is recorded sequentially as a small sub-hologram cell in a tiling manner using an X-Y motorized stage. However, since previous wavefront printers did not optimize the cell size, the reconstructed images were degraded either by obtrusive split lines, when the cells were too large for human eyesight and therefore visible, or by diffraction effects arising from the discontinuity of the phase distribution, when the cells were too small. In this paper, we introduce an overlapping recording approach for sub-holograms that satisfies both conditions: an apparent cell size small enough to make the cells invisible, and a recording cell size large enough to suppress diffraction effects by preserving the phase continuity of the reconstructed wavefront. In experiments in which the observation conditions were considered and the amount of overlap and the cell size were optimized, the proposed approach produced higher-quality 3D image reconstructions, whereas the conventional approach suffered from visible split lines and cells.
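As a rough illustration of the tiled recording described above, the division of an entire hologram into overlapping sub-hologram cells can be sketched as follows; the hologram size, cell size, and overlap are hypothetical values for illustration, not the parameters used in the paper.

```python
import numpy as np

def split_into_cells(hologram, cell, overlap):
    """Yield (y, x, sub) tuples; neighboring cells share `overlap` pixels.

    The stage translation between recordings is cell - overlap, so each
    recorded cell overwrites/overlaps a strip of its neighbor, which is
    what keeps the apparent cell size small while the recorded cell stays
    large. Sizes here are assumed, not taken from the paper.
    """
    step = cell - overlap          # stage translation between recordings
    h, w = hologram.shape
    for y in range(0, h - overlap, step):
        for x in range(0, w - overlap, step):
            yield y, x, hologram[y:y + cell, x:x + cell]

hologram = np.random.rand(512, 512)   # stand-in for computed fringe data
cells = list(split_into_cells(hologram, cell=128, overlap=32))
```

With a 512x512 hologram, 128-pixel cells, and a 32-pixel overlap, this yields a 5x5 grid of cells whose recorded areas cover the hologram seamlessly.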
A holographic TV system based on multiview image and depth map coding, together with an analysis of coding-noise effects in reconstructed images, is proposed. A major problem for holographic TV systems is the huge amount of data that must be transmitted. It has been shown that this problem can be solved by capturing a three-dimensional scene with multiview cameras, deriving depth maps from the multiview images or capturing them directly, encoding and transmitting the multiview images and depth maps, and generating holograms at the receiver side. This method achieves the same subjective image quality as hologram data transmission at about 1/97000 of the data rate. Speckle noise, which masks coding noise unless the coded bit rate is extremely low, is shown to be the main determinant of reconstructed holographic image quality.
We have recently developed an electronic holography reconstruction system that seamlessly tiles nine 4Kx2K liquid crystal on silicon (LCOS) panels. Magnifying optical systems eliminate the gaps between the LCOS panels by forming enlarged LCOS images on the system's output lenses. A reduction optical system then reduces the tiled LCOS images to their original size, restoring the original viewing-zone angle. Because this system illuminates each LCOS panel through polarizing beam splitters (PBSs) from different distances, viewing-zone-angle expansion was difficult, since it requires illuminating each LCOS panel from different angles. In this paper, we investigated viewing-zone-angle expansion of this system by integrating point light sources into the magnifying optical system. Three optical fibers illuminate each LCOS panel from different angles in time-sequential order, reconstructing three contiguous viewing zones. Full-color image reconstruction was realized by switching the laser source among the R, G, and B colors. We propose a fan-shaped optical fiber arrangement to compensate for the offset of the illumination beam center from the LCOS panel center. We also propose a solution to high-order diffraction light interference that inserts electronic shutter windows into the reduction optical system.
Electronic holography technology is expected to be used for realizing an ideal 3DTV system in the future, providing
perfect 3D images. Since the amount of fringe data is huge, however, it is difficult to broadcast or transmit it directly. To
resolve this problem, we investigated a method of generating holograms from depth images. Since computer-generated
holography (CGH) generates huge fringe patterns from a small amount of data describing the coordinates and colors of 3D
objects, it solves half of the problem, mainly for computer-generated (artificial) objects. For the other half of the
problem (how to obtain 3D models for a natural scene), we propose a method of generating holograms from multi-view
images and associated depth maps. Multi-view images are taken by multiple cameras. The depth maps are estimated
from the multi-view images by introducing an adaptive matching error selection algorithm in the stereo-matching
process. The multi-view images and depth maps are compressed by a 2D image coding method that converts them into
Global View and Depth (GVD) format. The fringe patterns are generated from the decoded data and displayed on
8K4K liquid crystal on silicon (LCOS) display panels. The quality of the holographic images reconstructed from the
uncompressed and compressed data is then compared.
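The fringe-generation step described above can be sketched as a point-source CGH under the Fresnel approximation, where each 3D object point (for example, a depth-map pixel back-projected into space) contributes a spherical wave to the hologram plane. The wavelength, pixel pitch, hologram size, and object point below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Assumed optical parameters (not taken from the paper).
wavelength = 532e-9            # green laser [m]
pitch = 4.8e-6                 # SLM pixel pitch [m]
N = 64                         # hologram is N x N pixels

ys, xs = np.meshgrid(np.arange(N) * pitch, np.arange(N) * pitch, indexing="ij")

def fresnel_fringe(points):
    """points: iterable of (x, y, z, amplitude) object points [m].

    Accumulates each point's Fresnel-approximated spherical wave,
    phase k*(z + r^2/(2z)), and takes the real part as an amplitude fringe.
    """
    field = np.zeros((N, N), dtype=complex)
    k = 2 * np.pi / wavelength
    for px, py, pz, amp in points:
        r2 = (xs - px) ** 2 + (ys - py) ** 2
        field += amp * np.exp(1j * k * (pz + r2 / (2 * pz)))
    return np.real(field)

# One hypothetical object point, 10 cm behind the hologram center.
points = [(N / 2 * pitch, N / 2 * pitch, 0.1, 1.0)]
fringe = fresnel_fringe(points)
```

A single on-axis point produces the familiar concentric zone-plate rings; real pipelines add a reference beam and quantize the result for the display device.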
This paper introduces two 3D visual systems for ultra-realistic communication. The first system includes an integral photography video camera that uses a lens array and a 4K2K-resolution video camera to capture ray information at slightly separated locations. The second system includes a camera array that uses 300 cameras to capture ray information at sparser locations than integral photography. Both systems use electronic holography as an ideal 3D display. They are characterized by the use of ray-based image sensors to capture 3D objects under natural light and of electronic holography to reconstruct those objects.
Phase-shifting digital holography is a hologram capture method used for natural scenes. We propose a method for
enlarging the viewing-zone angle for the electronic holography input. During hologram generation, if we use multiple
reference beams or multiple object beams whose incident angles differ slightly from each other, the viewing-zone angle
of the phase-shifted hologram can be expanded several times compared to the original. In the experiment, a phase-shifted
hologram with a viewing-zone angle of 16 degrees was generated using three object beams whose incident angles differed
from each other by 5.6 degrees.
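The quoted figures can be sanity-checked against the standard bound on a sampled hologram's diffraction angle, theta = 2*asin(lambda/(2p)). The wavelength and pixel pitch below are assumed values, so this is only a ballpark consistency check, not the paper's calculation.

```python
import math

# Assumed values; the paper does not state these in the abstract.
wavelength = 532e-9        # [m], assumed green laser
pitch = 4.8e-6             # [m], assumed hologram sampling pitch

# Single-hologram viewing-zone angle from the grating equation bound.
theta_single = math.degrees(2 * math.asin(wavelength / (2 * pitch)))
# Three object beams tilted ~5.6 deg apart stitch three such zones together.
theta_total = theta_single + 2 * 5.6
```

With these assumed values, a single zone comes out near 6.4 degrees and the three stitched zones near 17.5 degrees, in the same ballpark as the 16 degrees reported; the exact figure depends on the actual wavelength and pitch.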
We have developed several prototype systems for future ultra-realistic communication that use electronic holography as the 3D display, since electronic holography is a technology that reconstructs ideal 3D objects in space. In this paper, we describe the basis of these systems and introduce three of them: a real-time electronic holography system with integral photography, a wide viewing-zone-angle electronic holography system, and an electronic holography system with a camera array.
Computer-generated holograms (CGHs) are expected to enable holographic 3D displays to reconstruct realistic or artistic virtual 3D objects. We propose a CGH approach that combines computer graphics (CG) technology and wave propagation theory. Our approach is based on the following assumptions. Virtual 3D objects are described in the popular computer graphics format that uses a set of triangular surfaces, and CG technology can be used to render ray information on these surfaces. The hologram plane is flat. Each triangular surface is tilted (that is, not parallel) relative to the hologram plane. An advantage of our approach is that, even though the surfaces are tilted, the sampling pitch on the tilted surfaces can still be defined.
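The wave-propagation side of such a CGH pipeline is commonly implemented with the angular spectrum method. The sketch below handles only propagation between parallel planes; propagating from a tilted surface, as in the paper, additionally requires resampling the spectrum onto a rotated frequency grid, which is omitted here. All parameters are assumed values.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field by distance z between parallel
    planes using the angular spectrum method. Evanescent components
    (spatial frequencies beyond 1/wavelength) are discarded."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)      # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagate a point source by 2 mm (all values assumed for illustration).
src = np.zeros((128, 128), dtype=complex)
src[64, 64] = 1.0
dst = angular_spectrum(src, wavelength=532e-9, pitch=4.8e-6, z=2e-3)
```

Because the transfer function is a pure phase within the propagating band, the propagation is energy-preserving for band-limited fields, which makes it a convenient building block for surface-by-surface CGH accumulation.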
A wide viewing-zone-angle full-color electronic holography reconstruction system is developed. Time division
multiplexing of RGB color light and space division multiplexing of viewing-zone-angles are adopted to keep the optical
system compact. Undesirable light, such as illumination light, phase-conjugate light, and high-order diffraction light, is eliminated by half-zone-plate hologram generation and single-sideband beam reconstruction. Color aberration and astigmatism caused by the reproduction optical system are analyzed and reduced. The developed system expands the viewing-zone angle of the full-color holographic image to three times the original while suppressing undesirable light, color aberration, and astigmatism.
KEYWORDS: Cameras, Holography, Stereo holograms, 3D displays, Stereoscopic cameras, Holograms, Signal processing, Current controlled current source, Fourier transforms, Optical design
Holographic stereograms can display 3D objects by using ray information. To display high-quality representations of real 3D objects with holographic stereograms, relatively dense ray information must be prepared as the 3D object information. One promising method of obtaining this information combines a camera array with view interpolation, a signal-processing technique. However, it is still technically difficult to synthesize ray information without visible errors by view interpolation alone. Our approach uses a densely arranged camera array to reduce this difficulty: because the cameras are densely arranged, even simple view interpolation should synthesize adequate ray information. We designed and manufactured such a densely arranged camera array and used it to generate holographic stereograms.
Holography is considered an ideal 3D display method. We generated a hologram of a scene captured under white light. The infrared depth camera we used captures depth information as well as color video of the scene with an accuracy of 20 mm at an object distance of 2 m. In this research, we developed a software converter that converts the HD-resolution depth map into a hologram. In this conversion method, each elemental diffraction pattern on the hologram plane was calculated beforehand according to the object distance and the maximum diffraction angle determined by the reconstruction SLM device (a high-resolution LCOS). The reconstructed 3D image was observed.
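The precomputation described above can be sketched as a depth-indexed lookup table of zone-plate patches that are pasted onto the hologram at each depth-map pixel. The patch size, optics, and depth quantization below are assumptions for illustration, not the converter's actual settings.

```python
import numpy as np

# Assumed parameters: 33x33-pixel elemental patches, 16 depth levels.
wavelength, pitch, patch = 532e-9, 4.8e-6, 33
half = patch // 2
ax = (np.arange(patch) - half) * pitch
X, Y = np.meshgrid(ax, ax, indexing="ij")
k = 2 * np.pi / wavelength

# One precomputed Fresnel zone-plate patch per quantized object distance.
depth_levels = np.linspace(0.05, 0.2, 16)           # [m]
lut = {i: np.cos(k * (X**2 + Y**2) / (2 * z))
       for i, z in enumerate(depth_levels)}

def render(depth_idx, amp):
    """depth_idx, amp: HxW depth-index and amplitude maps -> hologram.

    Pastes the precomputed patch for each pixel's depth level, scaled by
    its amplitude; the hologram is padded by one patch width for edges."""
    h, w = depth_idx.shape
    holo = np.zeros((h + patch, w + patch))
    for y in range(h):
        for x in range(w):
            if amp[y, x] > 0:
                holo[y:y + patch, x:x + patch] += amp[y, x] * lut[depth_idx[y, x]]
    return holo

# A tiny 8x8 depth map with a single bright pixel, for illustration.
depth_idx = np.zeros((8, 8), dtype=int)
amp = np.zeros((8, 8)); amp[4, 4] = 1.0
holo = render(depth_idx, amp)
```

Precomputing the patches trades memory for speed: per frame, the converter only performs lookups and additions rather than evaluating the diffraction integral per pixel.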
We are studying electronic holography and have developed a real-time color holography system for live scenes that includes three functional blocks: a capture block, a processing block, and a display block. In this paper, we introduce our system after describing the basic idea of quickly calculating a hologram from an IP image. The first block, the capture block, uses integral photography (IP) technology to capture color 3-D objects under natural light in real time. The second block, the processing block, consists of four general-purpose personal computers that generate holograms from IP images in real time. Three half-zone-plate holograms, for the red, green, and blue (RGB) channels, are generated for every captured IP image by using the fast Fourier transform (FFT). The last block, the display block, mainly consists of three liquid crystal displays to display the holograms and three laser sources for RGB to reconstruct the color 3-D objects. All blocks work in real time, i.e., at 30 color frames per second.
We are studying electronic holography and have developed a real-time color holographic movie system that includes three functional blocks: a capture block, a processing block, and a display block. We introduce the system and its technology in this paper. The first block, the capture block, uses integral photography (IP) technology to capture color 3-D objects in real time. This block mainly consists of a lens array with approximately 120(W)x67(H) convex lenses and a video camera with 1920(W)x1080(H) pixels to capture IP images. In addition, an optical system that reduces the crosstalk between elemental images is mounted. The second block, the processing block, consists of two general-purpose personal computers that generate holograms from IP images in real time. Three half-zone-plate holograms, for the red, green, and blue (RGB) channels, are generated for each frame by using the fast Fourier transform. The last block, the display block, mainly consists of three liquid crystal displays for displaying the holograms and three laser sources for RGB to reconstruct the color 3-D objects. This block is a single-sideband holography display, which removes the conjugate and carrier images from the primary images. All blocks work in real time, i.e., at 30 frames per second.
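The single-sideband principle mentioned above, suppressing the conjugate image by discarding half of the spatial-frequency plane, can be sketched as follows; the field size and content are illustrative assumptions, not the system's actual data.

```python
import numpy as np

def single_sideband(fringe):
    """Zero the lower half of the (centered) spatial-frequency plane.

    In single-sideband holography the conjugate image occupies the
    half-plane opposite the primary image, so removing that half of the
    spectrum suppresses it. Which half to remove depends on the carrier
    direction; the lower half is an arbitrary choice here."""
    spec = np.fft.fftshift(np.fft.fft2(fringe))
    spec[spec.shape[0] // 2:, :] = 0
    return np.fft.ifft2(np.fft.ifftshift(spec))

# A random array stands in for a computed fringe pattern.
rng = np.random.default_rng(0)
fringe = rng.standard_normal((64, 64))
filtered = single_sideband(fringe)
```

In the hardware described above, this filtering is performed optically with a mask in the Fourier plane of the display optics rather than digitally, but the effect on the spectrum is the same.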
KEYWORDS: Multimedia, Information security, Data modeling, Standards development, Intellectual property, Video, Solid modeling, Information fusion, Visualization, Digital watermarking
To ensure secure digital content delivery, MPEG has dedicated significant effort to DRM (Digital Rights Management) issues. IPMP (Intellectual Property Management and Protection) Components, the fourth part of the new MPEG standard project MPEG-21 (Multimedia Framework), is one of the most important parts of this standardization activity. It defines an interoperable framework for IPMP. Fairly soon after MPEG-4, with its IPMP hooks, became an International Standard, MPEG decided to start a new project named "MPEG-4 IPMP Extensions" on more interoperable IPMP systems and tools. The project includes standardized ways of retrieving IPMP tools from remote locations and standardized messages exchanged among IPMP tools and the terminal. It also addresses authentication of IPMP tools and has provisions for integrating Rights Expressions according to the Rights Data Dictionary and the Rights Expression Language. Now that the MPEG-2/4 IPMP standardization work is finished, efforts are ongoing to define a similar mechanism for the management and protection of intellectual property in the various parts of the MPEG-21 standard currently under development. The present paper describes the technologies now included in the MPEG-21 IPMP Components Committee Draft, which effectively manage and protect digital content with a flexible IPMP scheme that specifies how protection is applied to Digital Items and facilitates the exchange of governed content, declared by the standard construct MPEG-21 DID (Digital Item Declaration), between MPEG-21 Peers.