1. Introduction

Minimally invasive in vivo imaging of living tissue and cells can be achieved with the laser scanning confocal endomicroscope (LSCEM).1 Since its advent, the LSCEM has been used in a wide range of applications as a substitute for painful biopsy procedures.2–5 Its confocal properties enable deep sectional microscopic imaging: light signals that penetrate through the tissue surface are captured to obtain a three-dimensional (3-D) dataset. The LSCEM acquires a volume dataset by manually capturing cross-sectional images at incremental depths. However, because the capture time for each slice is not instantaneous, in vivo acquisition of living tissue suffers from an inherent problem: sporadic movement. These movements originate from two main sources: (i) human probe handling during in situ imaging and (ii) movement of the living cells and tissue themselves. Traces of movement are undesirable because they distort the images, especially when obtaining a 3-D snapshot of the tissue. In volume imaging, this effect is further exhibited and propagated across deeper slices. Figure 1 illustrates this problem. Thus, either physical or synthetic compensation is needed to correct these images. Physical compensation involves manually adjusting the imaging probe to minimize distortions. This is challenging because human control cannot guarantee stable probe handling. Moreover, it is difficult to obtain proper indications for an appropriate physical compensation due to human subjectivity and the lack of computational analytics. Synthetic compensation, on the other hand, can be accomplished with image registration methods6,7 that realign the images. These methods generate free-form transformations, which can then be used to correct the images. At the same time, when visualized, these transformations sufficiently describe the deformations in the current dataset.
This complements physical compensation by providing immediate feedback to the human operator, enabling physical adjustment of the imaging probe and updates to other imaging settings, such as laser source intensity, alignment, and receiver sensitivity. The presence of two different correction avenues in one pipeline is significant. Although computational methods can provide fine, nonrigid deformations at localized areas of the image, a fundamental assumption is that the deformations are relatively small.8 For large displacements, physical correction can be used instead. Because the imaging probe is rigid, physical adjustment of the probe displaces the entire image. Therefore, with an understanding of the current deformation pattern obtained from computational methods, the global displacement can be compensated manually by adjusting the orientation and position of the probe. To present information about the predicted correctional displacements, we propose a real-time analytics tool based on the Demons registration algorithm,9 which complements the imaging pipeline with an additional image registration pipeline. The major technical innovations are (i) a visualization mechanism for real-time rendering of both volume data and registration transformations alongside image acquisition and (ii) a system design and demonstration of the new functionality. The algorithmic design is verified by experimental results on both synthetic deformations and actual in vivo tissue datasets. The problem explored in this work concerns three aspects: medical imaging, image registration, and volume rendering. We extend the use of the registration algorithm from an offline image realignment method into a real-time online feedback mechanism that runs alongside acquisition. Figure 2(a) shows an imaging procedure. Single image slices captured from the imaging probe are shown on a slice display. A stack of image slices forms a volume.
However, to gain a good understanding of the entire volume dataset, it must be registered and visualized in an external rendering engine. A real-time visualization system for immediate assessment is lacking, and our goal is to develop an integrated design suitable for embedding within modern imaging systems. The canonical flow that 3-D confocal (LSCEM) datasets must go through prior to visualization is shown in Fig. 2(b). This pipeline, albeit well established, presents several shortcomings. First, each process terminates before the next stage is initiated. Useful indicators that are obtainable only from subsequent stages become available only after the current process has terminated; thus, adjustments based on these indicators cannot be made without repeating the process. Second, as a result of the separated processes, a new dataset is always reconstructed between processes. This step incurs additional memory costs and computational delays, which are especially critical in real-time embedded applications. To mitigate these two problems, we propose a design for visualizing registered datasets in real time. Figure 2(c) illustrates our proposed approach. Instead of a sequential process flow, the three main stages of the imaging pipeline are executed concurrently. A streams-based data structure passes data through the pipeline as they are acquired or processed in each stage. This is straightforward between the imaging and registration processes. The output can also be reconstructed and exported at a later time. In Sec. 2, we introduce the Demons registration algorithm design, regularization, optimization, and transformation. In Sec. 3, the coupling between the registration and visualization processes for object-ordered and image-ordered rendering is detailed. A description of the developed visual analytics tool is presented in Sec. 4. Section 5 includes experimental results. Finally, the advantages and future developments are given in Sec. 6.
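The concurrent, streams-based flow of Fig. 2(c) can be sketched as a small producer–consumer pipeline in which acquisition, registration, and rendering run in parallel and hand data over through queues. This is an illustrative sketch only; the stage bodies, function names, and queue wiring below are our assumptions, not the embedded implementation.

```python
import queue
import threading

def acquire(out_q, n_slices):
    # Stand-in for the LSCEM probe: emit slice identifiers as they arrive.
    for i in range(n_slices):
        out_q.put(i)
    out_q.put(None)  # end-of-stream sentinel

def register(in_q, out_q):
    # Stand-in for Demons registration: process each slice as it streams in.
    while (item := in_q.get()) is not None:
        out_q.put(("registered", item))
    out_q.put(None)

def render(in_q, sink):
    # Stand-in for the renderer: composite slices as soon as they are ready.
    while (item := in_q.get()) is not None:
        sink.append(item)

def run_pipeline(n_slices):
    q1, q2, sink = queue.Queue(), queue.Queue(), []
    stages = [
        threading.Thread(target=acquire, args=(q1, n_slices)),
        threading.Thread(target=register, args=(q1, q2)),
        threading.Thread(target=render, args=(q2, sink)),
    ]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
    return sink
```

Each stage begins work on a slice as soon as the previous stage releases it, so no intermediate dataset has to be reconstructed between stages.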
2. Demons Algorithm Design

2.1. Related Work and Challenges

An integrated imaging–rendering system has been developed using a field-programmable gate array,1,10,11 which enables automated acquisition with the LSCEM and, subsequently, real-time volume rendering of mucosal tissue. However, rendered results from the system clearly exhibit alignment problems, which indicate the need for a complementary registration method. Intensity-based registration is a class of registration algorithms that does not require parametric transformations.12 These algorithms produce a dense array of translation vectors, one for each voxel, indicating its desired deformation. Intensity-based methods are not feature dependent and do not require a preliminary feature extraction step. The Demons algorithm9 (henceforth referred to as Demons) is a prominent intensity-based, nonrigid registration algorithm, which registers nonlinear image transformations by modeling the deformations as a rapidly diffusing process. This method is well known for its effectiveness and fast convergence6 and is suitable for a wide range of medical applications.13 Rendering of real-time deformable volumes has been developed, notably for texture-based methods.14–16 However, texture-mapping algorithms require a complete, static dataset in order to function efficiently, where a proxy geometry that speeds up the rendering process is often created in the prerendering stage. In our case, proxy generation is not feasible because the dataset cannot be readily used in real time. Object-ordered rendering methods such as splatting17 iterate through the volume voxels and project them onto the viewing screen; thus, rendering volumes of dynamic size is intuitively straightforward. Image-ordered (ray-casting18) rendering is also able to render volumes of dynamic size, achieved by dynamically updating render parameters, including the dataset thickness.
Well-known functional deformation methods such as spatial ray deflectors19 bend the rays within a 3-D space in the opposite direction of the deformation. Ray-casting rendering of free-form deformation (FFD) volumes, known as inverse ray deformation,20 has been presented using B-spline functions. This method bends the ray paths instead of the volume, bypassing the need for an intermediate deformed volume. Recently developed methods specially targeted at the graphics hardware pipeline15,21 deform images using parameterized functions. However, the computational cost of FFD methods increases exponentially with the number of overlapping deformation functions. In particular, dense biomedical datasets such as living tissue require sufficiently complex parametric deformation models to achieve high accuracy, and this impairs computational performance. In this paper, we replace the functional deformations with displacement vectors computed using the Demons algorithm. We focus on the coupling between the registration and visualization processes. Our method omits the reconstruction stage in the main real-time operational flow between these processes by injecting the registration transformation through a vantage opening within the volume rendering pipeline.

2.2. Image Registration with Demons

Our registration model is specifically designed for the registration of slice images acquired from the LSCEM system. We assume that distortions occur within each slice; i.e., sheared motion is exhibited in a 2-D plane parallel to the imaged plane. Also, due to the small physical distance between consecutive slices,2 adjacent slices can be assumed to have a high correlation, where unwanted motion is relatively small. With these notions, each slice can be registered against the previously transformed slice to an appropriate tolerance level. In general, the dataset is represented as an isotropic volume set $V$, which is obtained by sampling data points at regularly spaced intervals.
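Applying such an in-plane transformation amounts to resampling a slice at displaced coordinates. A minimal numpy sketch of this operation, assuming nearest-neighbour resampling and clamped borders (the function name and these simplifications are ours):

```python
import numpy as np

def warp(image, ux, uy):
    """Apply a dense in-plane displacement field to a slice by
    sampling the image at each pixel's displaced location
    (nearest-neighbour, clamped to the image bounds)."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(np.rint(ys + uy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + ux).astype(int), 0, w - 1)
    return image[sy, sx]
```

A higher-order interpolation scheme (e.g., bilinear) would normally replace the nearest-neighbour lookup in practice.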
A sequence of 2-D planes perpendicular to the imaging probe direction is captured across incrementing depths, forming a volume. A consolidation of $N$ consecutive slices forms a 3-D dataset: $V = \{I_1, I_2, \ldots, I_N\}$. To realign these slices, the transformation is a deformation within each individual slice, i.e., interslice registration. Here, two image slices are involved: a reference (fixed) image $F$ and a target (moving) image $M$. With a 2-D transformation denoted by $T = (U_x, U_y)$, where $U_x$ and $U_y$ are matrices representing displacements in the $x$- and $y$-directions, the registration model is

$(T \circ M)(x, y) = M[x + U_x(x, y),\; y + U_y(x, y)]$.  (1)

With the transformation operator $\circ$ denoting the expression in Eq. (1), an optimal transformation according to a certain similarity metric $S$ is thus

$T^{*} = \arg\max_{T} S(F, T \circ M)$.  (2)

Several similarity measures may be adopted for $S$, depending on the type of targeted dataset and the features desired in its outcome. General measures include the sum-of-square error22 and the correlation coefficient.8

2.3. Demons Displacement

Based on the optical flow model, the Demons algorithm9 computes a displacement vector field that transforms $M$ as closely as possible to match $F$. Our approach is straightforward: continuously deform $M$ with incremental transformations that minimize the energy difference between the transformed image and the reference image. An accelerated variant23 of Demons is selected, such that the displacement vector is not bounded solely by the fixed image gradient $\nabla F$, where $\nabla$ is the gradient operator. This variant aims to mitigate the inefficiency caused by a small fixed-image gradient. Given the accumulated displacements across iterations to be $U_x$ and $U_y$, and $M' = T \circ M$ the correspondingly transformed moving image, the accelerated Demons displacement is

$\mathbf{u} = (M' - F)\left[\dfrac{\nabla F}{\|\nabla F\|^2 + (M' - F)^2} + \dfrac{\nabla M'}{\|\nabla M'\|^2 + (M' - F)^2}\right]$,  (3)

where $\|\cdot\|$ is the $L^2$-norm.

2.4. Regularization, Optimization, and Transformation

Estimating the nonrigid deformation between matching image pairs is an ill-posed problem. For instance, all points with the same intensity value in the moving image could theoretically be mapped to a single point of identical value in the fixed image.
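A direct numpy transcription of the accelerated Demons displacement described above, as we read the He et al. variant (the small epsilon guarding division by zero in flat regions, and the discrete gradient via `np.gradient`, are our implementation choices):

```python
import numpy as np

def demons_step(fixed, moving):
    """One accelerated-Demons force update: the moving-image
    gradient term keeps the force nonzero where the fixed-image
    gradient vanishes. Returns (ux, uy) displacement components."""
    diff = moving - fixed
    gfy, gfx = np.gradient(fixed)    # fixed-image gradient
    gmy, gmx = np.gradient(moving)   # moving-image gradient
    eps = 1e-8                       # numerical guard (our addition)
    df = gfx**2 + gfy**2 + diff**2 + eps
    dm = gmx**2 + gmy**2 + diff**2 + eps
    ux = diff * (gfx / df + gmx / dm)
    uy = diff * (gfy / df + gmy / dm)
    return ux, uy
```

When the two images already match, the intensity difference is zero and the force field vanishes, which is the fixed point the iteration converges toward.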
This produces a solution with a high similarity score despite being inaccurate. To solve the problem, additional spatial constraints must be introduced. We use a regularization process to alleviate the ill-posedness and bind local transformations together by relating neighborhood displacements. An analogy of this effect is a force exerted on a point within the volume that also displaces its neighborhood to some extent. It has been deduced that regularization plays a crucial role in determining the accuracy and robustness of nonrigid registration.22 The choice of kernel depends on the type of dataset and the anticipated transformation. A Gaussian low-pass filter $G$ is chosen for the regularization operation:

$U'(x, y) = \dfrac{1}{K} \sum_{i} \sum_{j} G(i, j)\, U(x - i, y - j)$,  (4)

where $K$ is the normalization factor and $G$ is the regularization kernel function. Finally, we implement an iterative model to solve the registration problem by continuously updating the transformation $T$. The terminating criterion for each slice is subject to either of two factors: (i) the iteration count exceeds a predefined number of iterations or (ii) the slices match each other closely enough according to the designated similarity measure, fulfilling Eq. (2). Our proposed registration method has a unique characteristic in that the deformation model is readily embedded within the regularization of the Demons displacements. Thus, the deformation profile is represented as an array of point-displacement vectors without further modeling. In contrast with inverse-ray-deformation methods20 or parametric methods,12,24 functional deformation kernels or additional modeling constraints are not required. Therefore, in our ray-deformation model, sampled points are displaced directly by $T = (U_x, U_y)$.

3. Demons Deformable Volume Rendering

We address the integration of deformations for the two basic modes of direct volume rendering: object-ordered and image-ordered rendering.
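The Gaussian regularization step can be sketched as a separable convolution of each displacement component; smoothing an impulse shows the "force dragging its neighbourhood" analogy directly. The kernel truncation radius and sigma below are illustrative choices, not values from the paper:

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    # Truncated, normalized 1-D Gaussian (K is the normalization factor).
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def regularize(field, sigma=1.0):
    """Gaussian low-pass regularization of one displacement
    component, applied separably along both axes."""
    k = gaussian_kernel(sigma)
    out = field
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, out)
    return out
```

A force concentrated at one point spreads to its neighbours after regularization while the total displacement "mass" is preserved away from the borders.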
The proposed pipeline integrating Demons registration and volume rendering is shown in Fig. 3. The Demons displacement, in the form of a vector array, is passed into two stages of the rendering pipelines. Deformation occurs where the dataset voxels are sampled. Figure 4 illustrates the voxel displacement models with Demons in object-ordered and image-ordered rendering.

3.1. Object-Ordered Rendering

In object-ordered rendering, a projection of each voxel onto the viewing screen is first computed. The voxel values are then composited with the screen pixels at the projected position. For a detailed description, the reader may refer to Ref. 17. In our proposed design, we compute the corresponding screen projections while iterating through the voxel locations. To incorporate deformation, the voxel values composited with the screen pixels are obtained by resampling the volume at the voxel location displaced by $T$ [see Fig. 4(a)]. Algorithm 1 illustrates this process.

Algorithm 1: Object-ordered deformable rendering.
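Since Algorithm 1 is not reproduced here, the following is our reading of the object-ordered procedure: iterate over voxels, resample at the Demons-displaced in-plane location, and composite at the voxel's screen projection. Orthographic projection along z, nearest-neighbour resampling, and maximum-intensity compositing are simplifying assumptions for this sketch:

```python
import numpy as np

def object_order_render(volume, ux, uy):
    """Object-ordered deformable rendering sketch: for each voxel,
    resample the slice at the displaced location and MIP-composite
    onto the (orthographic) screen projection."""
    d, h, w = volume.shape
    screen = np.zeros((h, w))
    for z in range(d):
        for y in range(h):
            for x in range(w):
                # Resample at the Demons-displaced in-plane location,
                # clamped to the slice bounds.
                sy = min(max(int(round(float(y + uy[z, y, x]))), 0), h - 1)
                sx = min(max(int(round(float(x + ux[z, y, x]))), 0), w - 1)
                # Composite (maximum intensity) at the screen pixel.
                screen[y, x] = max(screen[y, x], volume[z, sy, sx])
    return screen
```

With a zero displacement field this degenerates to an ordinary MIP along the z axis.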
3.2. Image-Ordered Rendering

Image-ordered rendering, or ray casting, casts imaginary rays toward the data object and samples points along each ray. Sampled points on a common ray are composited together to obtain a rendered image. In our proposed design, we deflect the sampling points along each ray with the deformation field. We draw distinctions from functional deformation methods19 and FFD models.20 Our method does not deflect the ray path itself; rather, we cast rays that sample the dense transformation matrix $T$. The displacements are then combined with the spatial coordinates to indicate the voxel position to be sampled in the captured dataset. Given the center of projection of the scene $c$ and the ray origin $o$ on the screen, a ray is cast in the direction

$\mathbf{d} = \dfrac{o - c}{\|o - c\|}$.  (5)

The sampling point coordinates are thus

$p(t) = o + t\,\mathbf{d}$,  (6)

where $t$ is the sampling point coefficient starting from 0 at the point of origin of each pixel on the image plane. The transformation $T$ is then sampled at $p(t)$ using an interpolation function $\phi$ to obtain the displacement $\mathbf{u} = \phi[T, p(t)]$ of the sampled point, and the sampled voxel intensity is

$v = V[p(t) + \mathbf{u}]$.  (7)

Algorithm 2 illustrates this process.

Algorithm 2: Image-ordered deformable rendering.
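Our reading of the image-ordered procedure can be sketched as follows, with the two sampling stages marked in the loop body. Orthographic rays along z, unit step size, nearest-neighbour lookup, and MIP compositing are simplifying assumptions, not the paper's exact algorithm:

```python
import numpy as np

def ray_cast_render(volume, ux, uy, step=1.0):
    """Image-ordered deformable rendering sketch: march each ray
    through the volume, sample the transformation at the ray point,
    then sample the dataset at the displaced location."""
    d, h, w = volume.shape
    screen = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            t = 0.0
            while t < d:  # march p(t) along the (orthographic) ray
                z = int(t)
                # Stage 1: sample the dense transformation at p(t).
                dy, dx = float(uy[z, y, x]), float(ux[z, y, x])
                # Stage 2: sample the volume at the displaced point.
                sy = min(max(int(round(y + dy)), 0), h - 1)
                sx = min(max(int(round(x + dx)), 0), w - 1)
                screen[y, x] = max(screen[y, x], volume[z, sy, sx])
                t += step
    return screen
```

Because the ray samples the transformation field rather than bending its own path, no intermediate deformed volume has to be reconstructed.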
Two sampling stages are involved in the image-ordered deformable rendering method. First, the deformation matrix $T$ is sampled (step 7), which gives the resultant displacement vectors of the sampled points. Next, the dataset is sampled at the displaced locations (step 8).

4. Visual Analytics for Registration

To understand the images being registered, a visual analytics tool that provides real-time feedback to the operator is developed. The feedback system is shown in Fig. 5. The processes operate in parallel, each within its own iterative loop, and data are communicated through each block. We identified multiple useful real-time indicators to be presented, including the center of mass of the registering slices and the transformation magnitude profile.
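One such indicator, the intensity-weighted center of mass, is simple to compute per slice; a drift between consecutive centers hints at a translational component in the motion. A numpy illustration (the function name is ours):

```python
import numpy as np

def center_of_mass(image):
    """Intensity-weighted center of mass (y, x) of a slice."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (ys * image).sum() / total, (xs * image).sum() / total
```

Comparing the center for the previous and current slices yields the previous/current marker points visualized by the tool.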
If the voxels are transformed in a consistent direction, the center of mass will be pulled toward that direction of transformation. Likewise, if the change in the center of mass is minimal, the translation component of the transformation is small. In Fig. 7(a), the center of mass is shown as white (previous) and green (current) solid points. The change in the center of mass is small because the current and previous centers are close. However, significant displacement activity is observed from the transformation magnitude profile. This suggests a nontranslational transformation; thus, rotating the probe according to the indicated direction may reduce the required adjustment.

5. Experimental Results

We use a threshold of 1% error in our experiments. We demonstrate our approach using volume datasets captured from imaging experiments on biological tissue. Because this design is intended to perform in real time, only the core operations of the rendering processes are preserved. Additional, nonfundamental computations are omitted to save computational cost, and the dataset is rendered as is. Thus, we do not compare against full processing methods for each stage. A software version of the design simulates the pipeline on the CPU.

5.1. Real-Time Rendering of Demons Deformable Datasets

To demonstrate image registration, a swine tongue dataset captured with confocal microscopy is used. The full dataset is 19 slices thick. A synthetic smooth deformation generated using a spherical filter is applied to this dataset to obtain a deformed dataset; the original nondeformed dataset serves as the ground truth. The datasets are registered and visualized with maximum-intensity projection (MIP) in our experiments, as shown in Fig. 9 as renderings at increasing slice counts. This experiment simulates nonrigid deformations and the use of our approach to obtain visual hints about the captured dataset.
It can be observed that registration realigns structures in the tissue, and cross-sectional information can be discerned by visually inspecting the renderings. Experiments on a living tissue dataset of Drosophila muscle obtained from in vivo experiments25 were also performed. In this experiment, two datasets of the same tissue captured at different times are used, which sufficiently exhibits the natural deformations of living tissue. Each contains 29 slices. The fixed and moving images were recorded 60 min apart. Figure 10 shows registration–rendering results using the averaging compositing scheme. This experiment demonstrates the registration and visualization of natural deformations in living tissue. There are two different conditions under which this real-time registration–visualization pipeline is useful: (i) realignment of slices against each other due to prolonged capture time, which is a limitation of modern imaging modalities;1,5 and (ii) registration against a predefined atlas dataset, such as between time-lapse datasets or toward a dataset with well-established features, so that the current acquisition remains coherent.

5.2. Visualization of Deformations for Physical Correction

In this experiment, we show the use of visualizing Demons displacement profiles for analyses under actual circumstances. The Drosophila muscle datasets captured at different points in time are shown in Fig. 11. The deformities expressed in these datasets are natural deformations due to biological functions and motion; no synthetic alterations are imposed. We assume the dataset captured at the earlier time point to be the ground truth, whereas the later dataset is treated as the live in vivo dataset. Due to naturally occurring biological processes, i.e., metamorphosis in this case, the later dataset exhibits nonrigid motion at localized areas within the slices.
In applications with large time gaps between acquired images, correction may be undesirable; instead, the displacement information should be captured, because the deformities reflect genuine changes in the tissue. In these cases, visualization of the deformities provides a clear indication of such movements for analyses, where areas with higher displacement can be observed, as shown in Fig. 11. Finally, to highlight the significance of rendering deformations for correction and realignment, we present distinctions between a default, unaltered dataset and a Demons-registered one. Figure 12 shows renderings of a naturally deformed Drosophila cell nuclei dataset at different dataset thicknesses during acquisition. It is observable that, without registration, the movement of cells quickly dissolves the information at an early acquisition stage. Registration of the dataset is performed by matching each newly acquired slice with the adjacent registered slice captured at the previous depth level; only the first slice is unaltered. With registration, the effects of movement are mitigated, resulting in a more comprehensible visualization.

6. Discussion and Conclusion

In this paper, we described a design to perform real-time visualization of Demons-registered datasets alongside acquisition, intended for embedded applications. This provides close coupling within the imaging pipeline to give visual cues of the dataset being captured in real time, allowing immediate evaluation of the quality of acquisition on the fly. It also removes the reconstruction stages that separate each process, saving computation and preserving a minimal memory footprint.25–27 We demonstrated the use of this pipeline with (i) a synthetically deformed swine tongue dataset and (ii) a time-lapse in vivo Drosophila muscle dataset. This paper also addressed a critical problem in noninvasive in vivo imaging of live tissue: sporadic movement that manifests as distortions in datasets.
We proposed a real-time Demons visual analytics tool to complement the imaging pipeline with an image registration pipeline. With this tool, it is possible to obtain immediate feedback and apply responsive measures, such as physically adjusting the imaging probe. We presented the implemented algorithms and specifications and detailed the type of information visualized for the human user. Innovations include a visualization mechanism that integrates rendering, registration, and acquisition within a single pipeline and a proposed system design for the new functionalities. Verification is presented through demonstrations using experimental results on synthetically deformed datasets and actual in vivo datasets acquired from confocal imaging. This design is specifically described for embedded implementations in biomedical imaging instrumentation, toward an integrated system that includes all necessary stages of the confocal imaging pipeline. Future work includes a detailed analysis of fully customizable hardware architectures for performance evaluation, to provide a comprehensive understanding of this proposed design as an embedded solution for imaging methods. Because the system is useful as an analytics tool, additional real-time analytical features, such as cancer diagnosis and high-quality volume rendering extensions,28,29 can also be included in the pipeline.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

This work was partially supported by a research grant (M408020000) from Nanyang Technological University and another (M4080634.B40) from the Institute for Media Innovation, NTU. We also acknowledge the Ministry of Education Tier-1 grant 2017-T1-001-053, “Key Techniques for the Statistic Shape Modeling in Anatomical Structure Reconstruction, Segmentation, and Registration.”

References

P. S. Thong et al.,
“Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing,” J. Biomed. Opt. 17(5), 056009 (2012). http://dx.doi.org/10.1117/1.JBO.17.5.056009
P. Thong et al., “Review of confocal fluorescence endomicroscopy for cancer detection,” IEEE J. Sel. Top. Quantum Electron. 18, 1355–1366 (2012). http://dx.doi.org/10.1109/JSTQE.2011.2177447
A. Hoffman et al., “Confocal laser endomicroscopy: technical status and current indications,” Endoscopy 38, 1275–1283 (2006). http://dx.doi.org/10.1055/s-2006-944813
N. S. Claxton, T. J. Fellers, and M. W. Davidson, “Laser scanning confocal microscopy,” http://www.olympusconfocal.com/theory/LSCMIntro.pdf (accessed May 2017).
A. Poddar et al., “Ultrahigh resolution 3D model of murine heart from micro-CT and serial confocal laser scanning microscopy images,” in IEEE Nuclear Science Symp. Conf. Record, 2615–2617 (2005).
A. Sotiras, C. Davatzikos, and N. Paragios, “Deformable medical image registration: a survey,” IEEE Trans. Med. Imaging 32, 1153–1190 (2013). http://dx.doi.org/10.1109/TMI.2013.2265603
I.-H. Kim et al., “Nonrigid registration of 2-D and 3-D dynamic cell nuclei images for improved classification of subcellular particle motion,” IEEE Trans. Image Process. 20, 1011–1022 (2011). http://dx.doi.org/10.1109/TIP.2010.2076377
J. Modersitzki, Numerical Methods for Image Registration (Numerical Mathematics and Scientific Computation), Oxford University Press (2004).
J. P. Thirion, “Image matching as a diffusion process: an analogy with Maxwell’s demons,” Med. Image Anal. 2, 243–260 (1998). http://dx.doi.org/10.1016/S1361-8415(98)80022-4
W. M. Chiew et al., “Online volume rendering of incrementally accumulated LSCEM images for superficial oral cancer detection,” World J. Clin. Oncol. 2, 179 (2011). http://dx.doi.org/10.5306/wjco.v2.i4.179
W. M. Chiew et al., “Reconfigurable logic for synchronization of endomicroscopy scanning and incrementally accumulated volume rendering,” in Int. Conf. on Real-Time & Embedded Systems (2010).
J. Kybic and M. Unser, “Fast parametric elastic image registration,” IEEE Trans. Image Process. 12, 1427–1442 (2003). http://dx.doi.org/10.1109/TIP.2003.813139
B. M. Dawant et al., “Automatic 3-D segmentation of internal structures of the head in MR images using a combination of similarity and free-form transformations. I. Methodology and validation on normal subjects,” IEEE Trans. Med. Imaging 18, 909–916 (1999). http://dx.doi.org/10.1109/42.811271
F. Shiaofen et al., “Deformable volume rendering by 3D texture mapping and octree encoding,” in Proc. Visualization, 73–80 (1996).
C. Rezk-Salama et al., “Fast volumetric deformation on general purpose hardware,” in Proc. of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware (2001).
R. Westermann and C. Rezk-Salama, “Real-time volume deformations,” Comput. Graphics Forum 20, 443–451 (2001). http://dx.doi.org/10.1111/cgf.2001.20.issue-3
L. Westover, “Footprint evaluation for volume rendering,” ACM SIGGRAPH Comput. Graphics 24, 367–376 (1990). http://dx.doi.org/10.1145/97880
M. Levoy, “Display of surfaces from volume data,” IEEE Comput. Graphics Appl. 8, 29–37 (1988). http://dx.doi.org/10.1109/38.511
Y. Kurzion and R. Yagel, “Space deformation using ray deflectors,” in 6th Eurographics Workshop on Rendering, 21–32 (1995).
H. Chen, J. Hesser, and R. Männer, “Ray casting free-form deformed-volume objects,” Comput. Anim. Virtual Worlds 14, 61–72 (2003). http://dx.doi.org/10.1002/vis.v14:2
T. Brunet, K. E. Nowak, and M. Gleicher, “Integrating dynamic deformations into interactive volume visualization,” in Proc. of the Eighth Joint Eurographics/IEEE VGTC Conf. on Visualization (2006).
X. Pennec, P. Cachier, and N. Ayache, “Understanding the ‘demon’s algorithm’: 3D non-rigid registration by gradient descent,” in Proc. of the Second Int. Conf. on Medical Image Computing and Computer-Assisted Intervention (1999).
W. He et al., “Validation of an accelerated ‘demons’ algorithm for deformable image registration in radiation therapy,” Phys. Med. Biol. 50, 2887 (2005). http://dx.doi.org/10.1088/0031-9155/50/12/011
B. Zitova, J. Flusser, and F. Šroubek, “Image registration: a survey and recent advances,” in Proc. of the 12th IEEE Int. Conf. on Image Processing (ICIP 2005) (2005).
L. Feng and M. Wasser, “Spatial pattern analysis of nuclear migration in remodelled muscles during Drosophila metamorphosis,” BMC Bioinf. 18, 329 (2017). http://dx.doi.org/10.1186/s12859-017-1739-0
X. Xu, X. Wu, and F. Lin, Cellular Image Classification, Springer International Publishing AG (2017).
J. Ma et al., “Nonlinear statistical shape modeling for ankle bone segmentation using a novel kernelized robust PCA,” in 20th Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI’17) (2017).
J. Cai et al., “Modeling and dynamics simulation for deformable objects of orthotropic materials,” Visual Comput. 33(10), 1307–1318 (2017). http://dx.doi.org/10.1007/s00371-016-1221-4
J. Cai, F. Lin, and H. S. Seah, Graphical Simulation of Deformable Models, Springer International Publishing, Switzerland (2016).
Biography

Wei Ming Chiew was a PhD student at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He is now an R&D engineer in industry. His research interests include biomedical imaging, computer graphics and visualization, and embedded computing.

Feng Lin is an associate professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His research interests include biomedical informatics, biomedical imaging and visualization, computer graphics, and high-performance computing. He is a senior member of IEEE.

Hock Soon Seah is a professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His research interests include medical visualization, digital dynamic visualization, image sequence analysis with applications to digital film effects, and automatic in-between frame generation from hand-drawn sketches.