In this paper, we propose a parallel processing method for a two-step wave field projection method using a GPU. In the first step, the 2D projection of the wave field for a 3D object is calculated by the radial symmetric interpolation (RSI) method at the reference depth, and in the second step it is translated along the depth direction using the Fresnel transformation. In each step, the object points are divided into small groups and processed on CUDA cores in parallel. Experimental results show that the proposed method is 5901 times faster than the Rayleigh-Sommerfeld method for 1 million object points at full HD SLM resolution.
In this paper, we present a fast hologram pattern generation method that overcomes the accumulation problem of point-source-based methods. The proposed method consists of two steps. In the first step, the 2D projection of the wave field for a 3D object is calculated by the radial symmetric interpolation (RSI) method onto multiple reference depth planes. In the second step, each 2D wave field is translated toward the SLM plane by an FFT-based algorithm, and the final hologram pattern is obtained by summing them. The effectiveness of the method is demonstrated by computer simulation and optical experiment. Experimental results show that the proposed method is 3878 times faster than the analytic method and 226 times faster than the RSI method.
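The FFT-based depth translation in the second step can be sketched with the standard Fresnel transfer-function propagator. This is a generic illustration of the technique, not the paper's implementation; the wavelength, pixel pitch, and grid size below are arbitrary example values.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, pitch):
    """Propagate a 2D complex wave field by distance dz using the
    FFT-based Fresnel transfer-function method. Multiplying the
    spectrum by a unit-modulus transfer function H leaves the total
    field power unchanged."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Fresnel transfer function: exp(i k dz) * exp(-i pi lambda dz (fx^2 + fy^2))
    H = np.exp(1j * k * dz) * np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

In the two-step scheme, each reference-plane wave field would be propagated this way to the SLM plane and the results summed into the final hologram.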
KEYWORDS: Video, Visualization, Video processing, Video coding, Volume rendering, Computer programming, 3D modeling, 3D displays, 3D video compression, Data communications
A depth dilation filter is proposed for a free viewpoint video system based on mixed-resolution multi-view video plus depth (MVD). By applying a gray-scale dilation filter to depth images, foreground regions are extended into the background region so that synthesis artifacts occur outside the boundary edge, improving both the objective and subjective quality of the view synthesis result. The depth dilation filter is applied to the in-loop resampling part in encoding/decoding and to the post-processing part after decoding. Accurate view synthesis is important in virtual view generation for autostereoscopic displays; moreover, many coding tools in 3D video coding, such as view synthesis prediction (VSP) and depth-based motion vector prediction (DMVP), use view synthesis to reduce inter-view redundancy, so compression efficiency can also be improved by accurate view synthesis. Coding and synthesis experiments were performed on MPEG test sequences to evaluate the dilation filter, which was implemented on top of the MPEG reference software for AVC-based 3D video coding. Applying the depth dilation filter yields BD-rate gains of 0.5% and 6.0% in terms of PSNR of decoded views and synthesized views, respectively.
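Grayscale dilation of a depth map can be sketched as a sliding-window maximum, which grows foreground regions (assuming the common MPEG convention that larger depth values mean closer to the camera). The window radius below is an illustrative choice, not the paper's parameter.

```python
import numpy as np

def dilate_depth(depth, radius=2):
    """Grayscale dilation with a (2r+1)x(2r+1) square structuring element:
    each pixel takes the maximum depth value in its neighborhood, so
    near (large-valued) foreground regions expand into the background."""
    pad = np.pad(depth, radius, mode='edge')
    h, w = depth.shape
    out = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out = np.maximum(out, pad[radius + dy:radius + dy + h,
                                      radius + dx:radius + dx + w])
    return out
```

Shifting synthesis artifacts off the object boundary in this way trades a slightly fattened foreground silhouette for cleaner edges in the rendered virtual view.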
In this paper, we present a fast hologram pattern generation method based on radial symmetric interpolation. In the spatial domain, the concentric redundancy of each point hologram is removed by substituting interpolation and duplication for the calculation of wave propagation. In addition, a background mask representing points that are stationary in the temporal domain is used to remove temporal redundancy in hologram video. Frames are grouped over a predefined time interval, each group shares the background information, and the hologram pattern at each time instant is updated only for the foreground part. The effectiveness of the proposed algorithm is demonstrated by simulation and experiment.
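The background-mask idea can be sketched as follows: within a group of frames, a point is background if its position never changes, so its hologram contribution can be computed once and shared. The array layout (frames x points x xyz) and the tolerance are hypothetical, not the paper's data structures.

```python
import numpy as np

def background_mask(point_frames, tol=1e-6):
    """Given object-point coordinates for each frame in a group
    (shape F x N x 3), mark points whose position is identical across
    all frames as background (True). Per frame, only the remaining
    foreground points would be re-accumulated onto the hologram."""
    ref = point_frames[0]
    moved = np.any(np.linalg.norm(point_frames - ref, axis=2) > tol, axis=0)
    return ~moved
```

The shared background wave field is then computed once per group, and each frame's hologram is the background field plus the per-frame foreground contribution.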
KEYWORDS: Image resolution, Video, 3D displays, 3D video streaming, Image processing, Image quality, Video compression, Detection and tracking algorithms, Stereoscopic displays, Cameras
For a full motion parallax 3D display, it is necessary to supply multiple views obtained from a series of different locations. However, it is impractical to deliver all of the required views, because doing so would result in a huge bit stream. In previous work, the authors proposed a mixed-resolution 3D video format composed of color and depth information pairs with heterogeneous resolutions, together with a view synthesis algorithm for mixed-resolution videos. This paper reports a more refined view interpolation method and improved results.
KEYWORDS: Video, Video coding, Cameras, Neodymium, 3D video compression, Video compression, Stereoscopic cameras, Photonic integrated circuits, Surface conduction electron emitter displays, Electronics
One of the important issues for a next-generation broadcasting system is how to compress a massive amount of three-dimensional (3D) video efficiently. In this paper, we propose a geometry compensation method for 3D video coding that exploits color videos, depth videos, and camera parameters. In the proposed method, we first generate a compensated view, located at the same geometric position as the current view, using the depth and camera parameters of neighboring views. The compensated view is then used as a reference picture to reduce inter-view redundancies such as disparity and motion vectors. Furthermore, considering the direction of hole regions, we propose a hole-filling method for the P-view picture that fills the holes from neighboring background pixels. The experimental results show that the proposed algorithm increases BD-PSNR by up to 0.22 dB and 0.63 dB for P- and B-views, respectively; likewise, we achieved BD bit-rate gains of up to 6.28% and 18.32% for P- and B-views, respectively.
KEYWORDS: Video, Video coding, Cameras, Video compression, Chromium, Standards development, Electronics engineering, Stereoscopic displays, Electronic imaging, Current controlled current source
Inter-view prediction is used along with temporal prediction in multiview video coding to exploit both temporal and inter-view redundancies. Accordingly, multiview video coding has two types of motion vectors: the temporal motion vector and the disparity vector. The disparity vector is generally uncorrelated with the temporal motion vector; however, both are used to predict the motion vector regardless of their types, which decreases the efficiency of conventional predictive coding in multiview video coding. To increase the accuracy of the predicted motion vector, a new motion vector prediction method employing a virtual temporal motion vector and a virtual disparity vector is proposed for both the multiview video and multiview video plus depth formats. The experimental results show that the proposed method reduces coding bitrates by 6.5% on average and by 14.6% at maximum in terms of the Bjontegaard metric compared to the conventional method.
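The core observation, that mixing disparity vectors into a temporal median predictor (or vice versa) yields poor predictions, can be illustrated with a simplified type-aware median predictor. This is a minimal sketch of the idea, not the paper's method, which additionally derives virtual vectors of the missing type.

```python
def predict_mv(candidates, target_type):
    """Component-wise median prediction restricted to neighboring vectors
    of the same type ('temporal' or 'disparity'). `candidates` is a list
    of ((x, y), type) pairs; real MVC prediction handles unavailable
    neighbors and derives virtual vectors instead of falling back to zero."""
    same = [mv for mv, t in candidates if t == target_type]
    if not same:
        return (0, 0)  # illustrative fallback only
    xs = sorted(v[0] for v in same)
    ys = sorted(v[1] for v in same)
    m = len(same) // 2
    return (xs[m], ys[m])
```

With a large disparity vector among the neighbors, an untyped median would be pulled far from the true temporal motion; restricting candidates by type avoids that.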
KEYWORDS: Video, Video compression, 3D video compression, 3D video streaming, Data centers, Laser Doppler velocimetry, Image resolution, Video processing, Image quality, 3D displays
A new 3D video format consisting of one full-resolution mono video and half-resolution left/right videos is proposed. The proposed 3D video format can generate high-quality virtual views from a small amount of input data while preserving compatibility with legacy mono and frame-compatible stereo video systems. The center view video is the same as normal mono video data, while the left/right views form frame-compatible stereo video data. The format was tested in terms of compression efficiency, rendering capability, and backward compatibility. In particular, we compared view synthesis quality when virtual views are generated from two full-resolution views versus one original view and one half-resolution view. For the frame-compatible stereo format, experiments were performed with the interlaced method. The proposed format gives BD bit-rate gains of 15%.
This paper presents an efficient depth map coding method based on color information in a multi-view plus depth (MVD) system. In contrast to conventional depth map coding, in which the depth video is coded separately, the proposed scheme exploits color information for depth map coding. Specifically, the proposed algorithm subsamples the input depth data along the temporal direction to reduce the bit-rate, and the non-encoded depth frames are recovered at the decoder side, guided by motion information extracted from the decoded color video. The simulation results show the high coding efficiency of the proposed scheme and that the recovered depth frames differ little from the reconstructed ones. Furthermore, the scheme provides temporally consistent depth maps, which results in better subjective quality for view interpolation.
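The decoder-side recovery step can be sketched as block-wise motion compensation of the previous decoded depth frame, reusing per-block motion vectors taken from the collocated color video. The block size and the `mvs` layout are illustrative assumptions; extracting vectors from an actual bitstream is outside this sketch.

```python
import numpy as np

def recover_depth(prev_depth, mvs, block=8):
    """Reconstruct a skipped depth frame by copying, for each block,
    the motion-compensated block from the previous decoded depth frame.
    mvs[by][bx] = (dy, dx) is the motion vector assumed to come from
    the decoded color video; source positions are clamped to the frame."""
    h, w = prev_depth.shape
    out = np.empty_like(prev_depth)
    for by in range(h // block):
        for bx in range(w // block):
            dy, dx = mvs[by][bx]
            y0 = int(np.clip(by * block + dy, 0, h - block))
            x0 = int(np.clip(bx * block + dx, 0, w - block))
            out[by * block:(by + 1) * block,
                bx * block:(bx + 1) * block] = prev_depth[y0:y0 + block,
                                                          x0:x0 + block]
    return out
```

Because color and depth motion are strongly correlated, this recovers skipped depth frames without sending any depth bits for them.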
KEYWORDS: Cameras, Video, Video coding, Video compression, 3D vision, 3D video compression, Quality measurement, Scalable video coding, Image quality, Matrices
One of the critical issues for a successful 3D video service is how to compress the huge amount of multi-view video data efficiently. In this paper, we describe a geometric prediction structure for multi-view video coding. By exploiting the geometric relations between camera poses, we can form prediction pairs that maximize the spatial correlation of each view. To analyze the relationship between camera poses, we defined the mathematical view center and view distance in 3D space, and calculated the virtual center pose from the mean rotation matrix and mean translation vector. We propose an algorithm for establishing the geometric prediction structure based on view center and view distance. Using this prediction structure, inter-view prediction is performed on the camera pair with maximum spatial correlation. Our prediction structure also considers scalability in coding and transmitting the multi-view videos. Experiments were conducted using the JMVC (Joint Multiview Video Coding) software on MPEG-FTV test sequences. The overall performance of the proposed prediction structure is measured in PSNR and in subjective image quality measures such as PSPNR.
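The view center and view distance can be sketched from the camera translation vectors alone. Note that the paper also averages rotation matrices to obtain the full virtual center pose; a proper rotation average (e.g. via quaternions) is omitted here, so this is only a partial illustration.

```python
import numpy as np

def view_center(translations):
    """Virtual view center as the mean of the camera translation vectors
    (shape N x 3). The rotational part of the center pose is not handled
    in this sketch."""
    return np.mean(translations, axis=0)

def view_distances(translations):
    """Euclidean distance of each camera from the virtual center; ordering
    cameras by this distance is one way to pick prediction pairs with
    high spatial correlation."""
    c = view_center(translations)
    return np.linalg.norm(translations - c, axis=1)
```

Cameras close to each other in this metric see highly overlapping scene content, which is what makes them good inter-view prediction pairs.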
In multi-exposure image fusion, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which differ in brightness and contain over- or under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not linearly related, so we cannot perfectly equalize or normalize the brightness of each image, which leads to unstable and inaccurate alignment results. To solve this problem, we apply a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the registration perspective and analyze the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR and motion deblurring cases using a hand-held camera.
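The key property, that mutual information is insensitive to nonlinear tone differences between exposures, can be sketched with a brute-force translation search. This is a minimal illustration of MI-based registration, not the paper's fast algorithm; the bin count and search radius are arbitrary choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally shaped images, estimated
    from their joint intensity histogram. MI depends only on the joint
    distribution, so a monotonic tone change in one image leaves the
    alignment peak in place."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def best_shift(ref, mov, search=4):
    """Exhaustive search over integer translations within +/-search pixels
    (hand shake is typically only a few pixels), maximizing MI on the
    interior region to avoid wrap-around borders."""
    best, arg = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            mi = mutual_information(ref[search:-search, search:-search],
                                    shifted[search:-search, search:-search])
            if mi > best:
                best, arg = mi, (dy, dx)
    return arg
```

Even after squaring one image's intensities (a stand-in for an exposure/tone difference), the MI peak still identifies the true displacement.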
In this paper, we present a new noise estimation and reduction scheme to restore images degraded by image sensor noise. Since the noise characteristics vary with the camera response function (CRF) and the sensitivity of the image sensor, we build a noise profile using test charts for accurate noise estimation. Using this profile, we develop a simple and fast noise estimation scheme suitable for digital cameras. Our noise removal method uses the result of the noise estimation and applies several adaptive nonlinear filters to give the best image quality against high-ISO noise. Experimental results show that the proposed method performs well for images corrupted by both synthetic sensor noise and real sensor noise.
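A noise profile of this kind can be sketched as a lookup from pixel intensity to noise standard deviation, built from flat (textureless) test-chart patches at a given ISO setting. The patch-based construction and linear interpolation below are illustrative assumptions, not the paper's calibration procedure.

```python
import numpy as np

def build_noise_profile(flat_patches):
    """Each flat patch yields one (mean intensity, noise sigma) sample;
    sorting by mean gives a piecewise-linear intensity -> sigma profile."""
    return sorted((float(p.mean()), float(p.std())) for p in flat_patches)

def estimate_sigma(profile, intensity):
    """Look up the noise sigma for a given intensity by linear
    interpolation over the profile samples (clamped at the ends)."""
    means = [m for m, _ in profile]
    sigmas = [s for _, s in profile]
    return float(np.interp(intensity, means, sigmas))
```

A denoiser can then adapt its filter strength per pixel by querying the profile with the local intensity, which is what makes the scheme fast enough for in-camera use.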