Demand for 3D X-ray CT inspection in the medical, nondestructive testing, and security fields is increasing year by year, and AR (Augmented Reality), VR (Virtual Reality), and MR (Mixed Reality) are being used to display the internal structure of objects. Although these applications can display surface-rendered or volume-rendered objects, they have not been developed to accurately represent a point of interest and help the observer understand it spatially. In this research, a representation method and system are proposed in which a 2D image output from DICOM data is superimposed on the cross-section of a surface-rendered model. By using a motion sensor, an object imaged with 3D X-ray CT can be freely manipulated in virtual space by hand, just as if the object were moved in real space, so the cross-section of the point to be checked can be viewed from any direction. By using the stencil buffer, a shader function in Unity, unnecessary areas other than the cross-section can be hidden, so the 2D image does not occlude the object itself. This system can run on a spatial reality display, a VR device, or AR glasses. The results show that the proposed method is effective as a 3D representation of X-ray CT voxel data containing internal information, and as a method that allows tomographic CT images to be viewed while easily indicating specific locations.
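The stencil-buffer masking mentioned above could be sketched as a minimal Unity ShaderLab pass. This is an illustrative assumption of how such a mask might be set up, not the paper's actual shader; the shader name and the stencil reference value are hypothetical:

```shaderlab
// Hypothetical mask shader: rendered by the cross-section plane
// before the 2D DICOM slice, it marks the cut region in the
// stencil buffer without drawing any color.
Shader "Hypothetical/CrossSectionMask"
{
    SubShader
    {
        Tags { "Queue" = "Geometry-1" }
        Pass
        {
            Stencil
            {
                Ref 1          // assumed reference value
                Comp Always    // always pass the stencil test
                Pass Replace   // write Ref into the stencil buffer
            }
            ColorMask 0        // the mask itself stays invisible
        }
    }
}
```

The shader drawing the superimposed 2D slice would then test the stencil with `Comp Equal` against the same reference value, so the slice appears only inside the cross-section region and the rest of the model remains visible.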