We developed an ultracompact three-dimensional (3D) endoscopic scanner that uses an electromagnetic-based stereo imaging method (EBSIM) for shape measurement. The method was used to investigate the applicability of imaging fiber optics to the measurement of featureless surface profiles in confined spaces. The EBSIM is combined with a coherent flexible fiber bundle for structured light pattern projection, allowing the scanner head to achieve a final diameter of merely 3.4 mm and a baseline length of 1.4 mm. Compared with existing methods, the scanner is designed with a minimum focal distance of 2 mm to address the intraoperative evaluation problem. System registration and complex calibration are unnecessary before measurement, and the scanner can be operated from any position and orientation. The scanner acquires 3D video at 30 frames per second. The experimental results demonstrate the feasibility of using the EBSIM for 3D surface profile measurements of featureless cylindrical surfaces. A prototype scanner is presented in detail, and the average error over a series of known depth distances is found to be 0.15 mm. The implementation shows that the 3D scanner can potentially be applied to endoscopic inspections and intraoperative evaluations in minimally invasive surgeries.
1. Introduction

Over the last decade, laser-based three-dimensional (3D) shape-measurement methods have been developed and applied in industrial production, quality assurance, biotechnology, and medical technology.1 The test objects can be as small as coins or as large as buildings. Other techniques have also been developed for 3D shape measurement. Among them, the mainstream methods are noncontact approaches that use optical measurement devices owing to their convenience and high speed. The 3D shape-measurement optical systems (such as 3D scanners) designed during the past 5 years have uniaxial configurations with beam splitters.2–4 Uniaxial configurations are characterized by laser illumination and imaging on the same axis. The combination of these two approaches allows the use of small scanning tips because the illumination and the charge-coupled device camera are placed behind the specimen, permitting the use of complex imaging optics. Consequently, these optical systems can obtain high accuracy. However, the maximum measurable depth range of these systems is 1.1 mm, and the lateral range is small. Hence, such measurement systems are commonly used as microscopy systems intended for medical and industrial applications with a fixed desktop scanner. Research aiming to develop new designs for more compact endoscopic systems has been conducted continuously in the field of 3D endoscopic scanning. To this end, certain medical-application-oriented 3D endoscopic measurement systems have been developed.5,6 These systems are based on fringe projection with a monocular endoscope. However, the measurement of multiple pattern projections in a limited space is challenging because scattering caused by surface roughness, together with surface curvature, has a significant impact on the phase contrast of the interference fringe patterns.
To ensure satisfactory phase contrast and phase-shifting results, these measurement systems are currently used for smooth-surface measurements or measurements over a relatively large working distance. Such endoscopic 3D measurement systems are still based on a fixed desktop measuring platform, which is not portable for use in real medical applications. Targeting clinical research on tissue surface measurement, a dual-modality endoscopic system has been proposed.7 Its 3D reconstruction is based on a light pattern formed of randomly distributed spots of different colors. The distinctive system setup enabled the authors to reconstruct tissue surfaces within 15- to 40-mm working distances, and the system can be used for tissue depth measurement through a small surgical access port. Motivated by the aforementioned endoscopic 3D measurement systems, in this work, we developed an endoscopic 3D scanner for orthopedists, who face difficulties with joint-surface perception. Intraoperative inspection and evaluation of the knee-joint surface are critical to osteochondral transplantation surgery. To avoid a potential pumping action and negative pressure in the depths at the recipient side, osteochondral grafts should be inserted flush with the knee-joint surface (perpendicular insertion) to maximize surface congruency.8,9 Currently, intraoperative inspections and evaluations within the knee-joint cavity are conducted (after the drilling of a small incision; see Fig. 1) based on the dexterity and expertise of the surgeon, yielding nonuniform results. Decision-making regarding perpendicular insertion would be easier if a 3D model of the knee-joint surface were available to the surgeon. However, no studies have been conducted on 3D profile measurements of knee-joint surfaces, especially as these pertain to in vivo applications.
In this paper, we developed an ultracompact 3D endoscopic scanner using an electromagnetic-based stereo imaging method (EBSIM) to measure the 3D profiles of complete joint surfaces. The main highlights of this paper are as follows:
The remainder of this paper is organized as follows: In Sec. 2, we describe the details of the system setup and detection method. The experimental results and discussion are given in Sec. 3. Section 4 presents the conclusions.

2. Materials and Methods

2.1. Basic Principle of Laser Light Section Method

The laser light section method (LLSM), shown in Fig. 2, is an extensively used alternative to stereo vision, in which one of the binocular cameras is generally replaced by a light source. One or multiple structured light patterns are projected onto a test object and captured by a camera. The intersection of the camera ray with the corresponding light plane yields the 3D coordinates of a point, provided that the imaging geometry is calibrated. Development based on this method benefits the miniaturization of the projection hardware, and the prototype can thus be designed as a compact unit for applications in confined spaces.

Fig. 2 Principle of the LLSM. A light plane from the projector illuminates the object and is captured by a camera. The 3D coordinates of a point relative to the camera system can be computed after calibration.

Sensitivity to depth is generally determined by the geometry of triangulation in the LLSM, especially by the length of the baseline.12,13 Our previous study showed that a shorter baseline indicates a larger systematic error associated with the detection of the distance.14 Additional beam scanning with one- or two-dimensional (2D) relative motion between the detector and test object is needed to measure the 3D shape of the object. However, if test objects have deep holes or are located in confined spaces, the relative motion cannot be easily measured. Consequently, developing a unique 3D measurement/imaging method is necessary for acquiring the 3D shape of an object with an ultracompact system configuration. Notably, a compact configuration leads to a short baseline, and this change in baseline increases the systematic error.
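The ray-plane intersection at the heart of the LLSM can be sketched as follows. This is a minimal illustration, not the authors' implementation; the intrinsic matrix and light-plane parameters below are placeholder assumptions.

```python
import numpy as np

def llsm_point(u, v, K, plane_n, plane_d):
    """Intersect the camera ray through pixel (u, v) with the laser
    light plane n . X = d (both expressed in the camera frame)."""
    # Back-project the pixel to a ray direction in camera coordinates.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Solve n . (t * ray) = d for the scale t along the ray.
    t = plane_d / (plane_n @ ray)
    return t * ray  # 3D point (X, Y, Z) in the camera frame

# Placeholder intrinsics and a light plane offset 1.4 mm from the axis.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.0, 1.0, 0.05])  # plane normal (assumed)
d = -1.4                        # plane offset in mm (assumed)
P = llsm_point(300.0, 200.0, K, n, d)
```

The same intersection underlies the depth formula derived later from the scanner's short baseline; only the plane parameters change with the hardware.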
Hence, designing a compact optical configuration with preferable precision is one of the highlights of this study. In addition, measuring the 3D shape of an object alone is insufficient in some surgical inspections. The focus should also be on the needs of surgeons, such as shape rotation, zooming ability, and even the relative position between the endoscope's distal tip and other intraoperative tools.

2.2. Endoscope Design

2.2.1. Imaging fiber-based optical configuration

Figure 3 shows the optical configuration of the endoscopic 3D scanner using the LLSM. The configuration mainly consists of three parts: the light-source projection, the coupling optics, and the scanner's distal tip. For the first part, we adopted a helium-neon laser projector supplied by Daheng Optics (DH-HN, Beijing, China) as the laser scanning light source. The projected laser has a wavelength of 632.8 nm, since the knee bone has the highest reflection at this wavelength. The width of the laser line must remain as small as possible to achieve high resolving power. Hence, the laser line is first focused by lens L1 and then projected onto the proximal end surface of the imaging fiber, where the coupling optics consist of two lenses. As a result, the laser line is delivered via the flexible imaging fiber and transmitted to the reference plane through the probe imaging lens mounted on the distal-end surface of the optical fiber. The optical fiber was manufactured by Asahi Kasei Corporation (SPN, Tokyo, Japan) and comprises 13,000 pixel elements with a distal-end diameter of 1.0 mm. The baseline is the distance between the distal-end surface of the optical fiber and the optical axis of the camera; in this case, it was an extremely short 1.4 mm.
2.2.2. Ultracompact camera

The applicability of the proposed 3D endoscopic scanner in a confined space cannot be ensured using a commercially available, normal-sized camera, which offers neither a compact design nor high precision. For example, in Ref. 15, the image sequence captured by a normal camera exhibited an extremely high noise level and periodic pulsation. If the system operated in an underwater environment, such a camera would not be able to acquire any images effectively. Therefore, a customized ultracompact camera with a diameter of 1.8 mm and its control circuit were designed for the proposed fiber-based endoscopic imaging, as shown in Fig. 4. To avoid illumination artifacts, four white micro-LEDs were mounted radially around the front of the camera. This camera is ideal for medical applications owing to its wide field-of-view and ultrashort focus distance. Specifications of the camera are listed in Table 1.

Fig. 4 Photograph of (a) the distal tip of the camera and the LEDs and (b) the customized control circuit.

Table 1 Specifications of the ultracompact camera used.
According to the optical configuration shown in Fig. 3, the camera coordinate system and the image plane we defined are both shown in Fig. 5. A point P = (X, Y, Z) is captured by the camera and projected through the projection center of the camera onto a corresponding point (u, v) on the image plane. Based on the laser triangulation of the pinhole camera model, the relationship between the 3D coordinates of point P in the camera coordinate system and its image coordinates can be denoted as follows:

$$u = f_x \frac{X}{Z} + u_0, \quad v = f_y \frac{Y}{Z} + v_0, \tag{1}$$

where $(u_0, v_0)$ is the principal point, and $f_x$ and $f_y$ are the focal lengths in the horizontal and vertical directions, respectively, which can be obtained following the calibration of the camera. As the imaging fiber is located on top of the camera (Fig. 3), the imaged laser line appears only on the top half of the image plane. Let $(x, y) = (u - u_0, v - v_0)$ denote the image coordinates relative to the principal point. If light spots are captured successfully, then $y < 0$ and $Z > 0$. In addition, the imaged light spots located on the top half of the image plane are negative in $y$, and those on the left half of the image plane are negative whereas those on the right half are positive in $x$, as shown in Fig. 5. Thus, the depth distance can be expressed as follows:

$$Z = -\frac{f_y b}{y}, \tag{2}$$

where $b$ is the baseline length with a value of 1.4 mm.

2.3. Electromagnetic-Based Stereo Imaging Method

Similar to many monocular structured light systems, the scanning results of the current frame are replaced immediately by those of the next frame as the endoscope moves, because the camera is not equipped with a functionality to save previous data. Furthermore, a common problem in monocular endoscopic imaging is initialization, because the depth information of a scene cannot be recovered from a single image frame.16 Hence, a unique method for 3D imaging is essential in our design to save the complete scanned data and display the shape of the objects accurately.
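The inverse relationship between the centered image coordinate and depth in Eq. (2) can be checked numerically. This is an illustrative sketch; the focal length below is an assumed value, not the paper's calibration result, while the 1.4-mm baseline is taken from the text.

```python
# Depth from the vertical image coordinate for a laser line imaged
# above the principal point, in the form Z = -f_y * b / y.
F_Y = 500.0  # focal length in pixels (assumed, not from the paper)
B = 1.4      # baseline in mm (from the paper)

def depth_from_y(y_centered):
    """y_centered: vertical image coordinate relative to the principal
    point; negative for spots on the top half of the image."""
    if y_centered >= 0:
        raise ValueError("laser spots must lie above the principal point")
    return -F_Y * B / y_centered

# A spot farther above the image center maps to a shorter depth.
near = depth_from_y(-350.0)  # 2.0 mm
far = depth_from_y(-35.0)    # 20.0 mm
```

Note how a tenfold change in the image coordinate spans a tenfold change in depth, which is why the short baseline concentrates the usable depth range onto a small strip of the detector.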
We propose a real-time stereo imaging method for 3D shape measurements using a state-of-the-art electromagnetic tracking system (LIBERTY, Polhemus, United States). The system consists of a transmitter and a microsensor. The transmitter produces an electromagnetic field that acts as an accurate reference for position and orientation measurements of the microsensor. The microsensor (with a diameter of 1.6 mm) offers high-fidelity continuous tracking with six degrees-of-freedom and a maximum update rate of 240 Hz. Furthermore, the microsensor is assembled with the camera and imaging fiber, which allows the distal tip of the endoscope to be as small as 3.4 mm in diameter. All three parts were fixed in a simple axially aligned configuration comprising the distal tip. The difference between our proposed approach and the feature-based tracking method presented in previous 3D imaging studies for minimally invasive surgery15,17 is shown in Fig. 6, which demonstrates the use of different coordinate systems during object imaging. The endoscope is located in the camera coordinate system and aimed at the virtual object (the bone) to be displayed. Figure 6(a) shows the coordinate system of the feature-based tracking method. Once the feature is detected and tracked, the object is imaged in the feature coordinate system. Let P be a 3D point (X, Y, Z) on the bone surface expressed in the feature coordinate system. The 3D point can be transformed into a 2D point (x, y) in the endoscopic camera's view using the following equation:

$$s\,[x, y, 1]^{T} = \mathbf{K}\,[\mathbf{R} \mid \mathbf{t}]\,[X, Y, Z, 1]^{T}, \tag{3}$$

where $s$ is a scale factor, $\mathbf{K}$ is the intrinsic camera matrix, $\mathbf{t}$ is a $(3 \times 1)$ translation vector, and $\mathbf{R}$ is a $(3 \times 3)$ rotation matrix.

Fig. 6 Comparison of (a) the marker-less tracking method and (b) our proposed EBSIM tracking method.

In our proposed EBSIM, as can be seen in Fig. 6(b), we add a microsensor local space as an agent, which serves as an intermediary and is incrementally built from the point cloud sensed in the environment, enabling significant robustness in data collection.
Let $\mathbf{P}_s$ describe the same point, $P$, with respect to the microsensor coordinate system as follows:

$$\mathbf{P}_s = \mathbf{R}_{cs}\,\mathbf{P} + \mathbf{t}_{cs}, \tag{4}$$

where $\mathbf{t}_{cs}$ is the $(3 \times 1)$ translation vector that locates the camera's origin with respect to the microsensor's coordinate system, and $\mathbf{R}_{cs}$ is the $(3 \times 3)$ rotation matrix that transforms a point from the camera frame to the microsensor frame. Euler angles, namely, azimuth, elevation, and roll, are represented by $\psi$, $\theta$, and $\phi$, respectively. These angles represent an azimuth-primary sequence of frame rotations (counterclockwise) that defines the current orientation of the microsensor with respect to the zero-orientation state of the transmitter. Thus, the rotation matrix of the current orientation of the sensor is expressed as follows:

$$\mathbf{R}_{st} = \mathbf{R}_z(\psi)\,\mathbf{R}_y(\theta)\,\mathbf{R}_x(\phi), \tag{5}$$

where

$$\mathbf{R}_z(\psi) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{R}_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}, \quad \mathbf{R}_x(\phi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix}.$$

Equations (4) and (5) show that the 3D coordinates of point $P$ with respect to the transmitter coordinate system are denoted as

$$\mathbf{P}_t = \mathbf{R}_{st}\,\mathbf{P}_s + \mathbf{t}_{st}, \tag{6}$$

where $\mathbf{t}_{st}$ is the position vector from the sensor to the transmitter. Using the transmitter space as a terminal, we solved two important issues in monocular imaging: (i) no precaptured images or initialization is needed, which saves time and avoids information overlay; (ii) the endoscope can scan from any position and orientation, which is convenient for repeated extraction and insertion of the endoscope. Real-time reconstruction can be performed on a frame-by-frame basis from the 3D coordinates of a specimen's surface scanned by the camera using Eq. (6) and the OpenGL technique. In addition, the reconstructed 3D shapes, including shape rotation, zooming, and offline observations based on the unified transmitter coordinate system, are available for online processing (Sec. 3.3).

2.4. Characteristic Curve of the Proposed Endoscopic Scanner

2.4.1. System constants

The characteristic curve of the proposed endoscopic scanner is formulated in this section. The curve describes the relationship between the depth distance and the location of the imaged light spot on the camera detector. Using Eq.
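The chained camera-to-sensor-to-transmitter transform of Eqs. (4) to (6) can be sketched as follows. The Z-Y-X Euler sequence is one common reading of an azimuth-primary rotation order, and the sample extrinsic values are illustrative assumptions, not the Polhemus device's calibrated parameters.

```python
import numpy as np

def rot_zyx(azimuth, elevation, roll):
    """Rotation matrix from an azimuth-primary (Z-Y-X) Euler sequence,
    angles in radians."""
    cz, sz = np.cos(azimuth), np.sin(azimuth)
    cy, sy = np.cos(elevation), np.sin(elevation)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def camera_to_transmitter(P_c, R_cs, t_cs, R_st, t_st):
    """Chain Eq. (4) (camera -> sensor) with Eq. (6) (sensor -> transmitter)."""
    P_s = R_cs @ P_c + t_cs
    return R_st @ P_s + t_st

# Illustrative values: camera and sensor axially aligned, 1 mm apart,
# sensor yawed 90 deg and displaced 100 mm from the transmitter.
R_cs = np.eye(3)
t_cs = np.array([0.0, 0.0, 1.0])
R_st = rot_zyx(np.pi / 2, 0.0, 0.0)
t_st = np.array([100.0, 0.0, 0.0])
P_t = camera_to_transmitter(np.array([0.0, 0.0, 5.0]), R_cs, t_cs, R_st, t_st)
```

Because every frame's points land in the single transmitter coordinate system, scans taken from different endoscope poses accumulate into one consistent point cloud, which is the property the EBSIM relies on.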
(2), the corresponding point, $y$, can be expressed as follows:

$$y = -\frac{f_y b}{Z}, \tag{7}$$

where $f_y$ and $b$ are the system constants of the endoscopic scanner. The values of the system constants can be determined based on the calibration results shown in Sec. 3.1. Equation (7) indicates that the image coordinate $y$ is not uniform but has an inverse relationship with respect to the depth distance $Z$. This property hinders the operation of a 3D endoscopic scanner. Furthermore, any change in the corresponding point results in a systematic error; the larger the change, the larger the error. Therefore, the corresponding point is an important parameter when designing a 3D endoscopic scanner. To reveal the relationship between the point $y$ and the depth distance $Z$, a spot was projected vertically onto a white flat board by the scanner while the scanner was located on a moving stage. The component settings and operations were the same as those for the flat-plane evaluation experiment described in Sec. 3.2. By doing so, we obtained the relationship between the corresponding point and the illumination distance. Figure 7 shows the ideal characteristic curve of the proposed endoscopic scanner. However, the actual working characteristics of the endoscopic scanner must be considered thoroughly. A series of illuminated spots and their corresponding image points were measured, as shown in Fig. 7, for comparison with the results obtained by Eq. (7). The measured positions are consistent with the ideal characteristic curve. In addition, the $Z$-coordinate measuring range of 2 to 21 mm of the endoscopic scanner is imaged onto the corresponding length of the camera's detector in the $y$-coordinate. The measurement ranges for the $X$ and $Y$ coordinates correspond to fixed fields-of-view of 120 deg and 60 deg, respectively, regardless of the measurement distance.
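The system constants in Eq. (7) can be recovered from such a stage sweep by a linear fit of the image coordinate against the reciprocal depth. The following is a sketch with synthetic data; the constant value and noise level are assumptions, not the paper's calibration results.

```python
import numpy as np

# Synthetic stage sweep: depths in mm and the (noisy) centered image
# coordinate predicted by the Eq.-(7) form y = -c / Z with c = f_y * b.
rng = np.random.default_rng(0)
true_c = 700.0                    # assumed f_y * b, in pixel * mm
Z = np.arange(2.0, 21.0, 1.0)     # stage positions, mm
y = -true_c / Z + rng.normal(0.0, 0.5, Z.size)

# Fit y = a * (1/Z) + d: the slope a estimates -f_y * b, while the
# intercept d absorbs any residual principal-point offset.
A = np.column_stack([1.0 / Z, np.ones_like(Z)])
(a, d), *_ = np.linalg.lstsq(A, y, rcond=None)
estimated_c = -a
```

Fitting against 1/Z rather than Z keeps the problem linear, so a single least-squares solve replaces iterative curve fitting when the characteristic curve is calibrated from stage measurements.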
2.4.2. Error propagation

The displacement of the illuminated spot by an amount $\Delta Z$ leads to a displacement of the corresponding image point by $\Delta y$, which depends on the illuminated distance as a consequence of the nonlinear characteristic curve given by Eq. (7). We consider the derivative of Eq. (7) with respect to $Z$. According to the law of error propagation, the standard deviation of the image spot coordinate on the image plane is defined as follows:

$$\sigma_y = \left| \frac{\partial y}{\partial Z} \right| \sigma_Z,$$

where $\sigma_Z$ is the standard deviation of the illuminated spot position $Z$, and $\partial y / \partial Z$ is the gradient of the relationship between the illuminated spot and the $y$-coordinate of the image spot. It can be rewritten as

$$\frac{\partial y}{\partial Z} = \frac{f_y b}{Z^2}.$$

Figure 8 shows the relationship between the gradient and the standard deviation of the image spot coordinate, $\sigma_y$, as functions of the illuminated spot coordinate. The gradient is considerably small in this figure; in this case, the standard deviation $\sigma_y$ becomes smaller than $\sigma_Z$ when $\sigma_Z$ is held constant. Therefore, the gradient is a vital factor when considering the measurement accuracy and range of a 3D endoscopic scanner. Moreover, scanning specimens within the maximum working distance of 12 mm can achieve high precision.

3. Results

3.1. Experimental Configuration

The 3D shape measurement was performed using the hardware setup shown in Fig. 9. The control circuit mainly consists of a decoding circuit for the OVM 6946 chip sensor and a driver, with adjustable brightness, for the light-emitting diode (LED) illumination. The microsensor is connected to the electronic unit by a cable for data exchange. The transmitter is placed at an optimal distance of 1 m from the microsensor during data collection. Figure 9 also shows the front view of the endoscope tip. The camera, fiber, and microsensor were fixed in a tube of diameter 3.4 mm. A user-defined distance between the camera and microsensor yields the translation vector given in Eq. (4).
The pinhole camera was calibrated using Zhang's method18 prior to the verification of the proposed endoscopic scanner. The employed chessboard has a pattern accuracy of 0.01 mm. A comparison of the chessboard captures before and after calibration is shown in Fig. 10. The radial lens distortion shown in Fig. 10(a) is large owing to the large field-of-view of the camera. Figure 10(b) shows the precise correction of the distortion after camera calibration. The intrinsic camera matrix was obtained from this calibration.

3.2. Flat-Plane Evaluation

3D scanners are commonly evaluated using a set of procedures involving artifacts with common geometries, such as planes, spheres, and cones.19 Liquid crystal displays were used as flat-plane specimens to evaluate the precision of the endoscopic scanner.20 First, the test plane was positioned in front of the scanner. Second, the scanner was mounted on a handwheel translational stage (GCM-83, Daheng Optics, Beijing, China) and moved in a controlled fashion along its z-axis, which was parallel to the normal direction of the reference plate. Thus, the position of the scanner was changed precisely with a positioning resolution of 0.01 mm. Reference planes were scanned from 2 to 12 mm at intervals of 2 mm. This range was chosen based on the actual distances within which surgeons usually conduct observations in real medical cases.10 The reference planes were placed perpendicular to the optical axis of the endoscope to obtain the optimal focus of the laser. Table 2 lists the average depth distributions and the measurement errors between the plane positions and the measured results. These results indicate that the average measurement error of the flat plane was 0.15 mm, which shows a slight improvement compared with the values obtained at a distance of 8.00 mm.

Table 2 Measured results of the reference planes.
The captured image of the laser beam and the extracted center lines are shown in Fig. 11. As the laser beam was very short, a locally magnified view is shown for better observation. Figure 12 shows the depth distribution results for the flat plane at a scanning distance of 8 mm.

3.3. Cylindrical Surface Evaluation

A cylindrical wood specimen with a diameter of 40.05 mm, shown in Fig. 13(a), was used to test the performance of the endoscopic scanner system. The shape was machined via CNC milling. A quarter of the cylindrical surface was radially and manually scanned with the endoscopic sensor. Figure 13(b) depicts the 3D data of the obtained surface. The original raw point cloud contained 49,778 points. However, standard evaluation criteria and standard specimens for endoscopic 3D point cloud data are lacking. We therefore employed a geometric algorithm to fit the tested cylinder to the set of 3D points.21

Fig. 13 Measured results of the cylindrical surface: (a) photograph of the measured wood and (b) point-by-point 3D representation of the scanned surface.

Figure 14 shows the representation of a cylinder. The cylinder is specified by an axis containing a point $\mathbf{C}$ and having unit-length direction $\mathbf{W}$, and by its radius $r$. Two more unit-length vectors $\mathbf{U}$ and $\mathbf{V}$ are defined such that $\{\mathbf{U}, \mathbf{V}, \mathbf{W}\}$ is a right-handed orthonormal set in which the vectors are mutually perpendicular. Thus, any 3D point $\mathbf{P}$ can be written uniquely as

$$\mathbf{P} = \mathbf{C} + \mathbf{R}\,\mathbf{Y},$$

where $\mathbf{R}$ is a rotation matrix with columns $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$, and $\mathbf{Y} = (y_0, y_1, y_2)^{T}$ is a column vector. To be on the cylinder, we need

$$y_0^2 + y_1^2 = r^2.$$

Let $\{\mathbf{P}_i\}_{i=1}^{n}$ be the cylindrical point set. An error function for a cylinder is expressed as

$$E = \sum_{i=1}^{n} \left( y_{0i}^2 + y_{1i}^2 - r^2 \right)^2.$$

Setting the partial derivative of the error function with respect to the squared radius to zero, we obtain the constraint

$$r^2 = \frac{1}{n} \sum_{i=1}^{n} \left( y_{0i}^2 + y_{1i}^2 \right),$$

where the obtained parameter yields the radius of the cylinder. The experimental results indicate that the diameter of the obtained cylindrical surface is 39.81 mm.
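For a known axis, the least-squares radius constraint above amounts to averaging the squared perpendicular distances to the axis. The following sketch demonstrates this on synthetic points; the axis and radius values are illustrative, not the specimen's fitted parameters.

```python
import numpy as np

def fit_radius(points, C, W):
    """Least-squares cylinder radius for points with a known axis
    through C with unit direction W: r^2 equals the mean squared
    distance of the points to the axis (the zero-derivative constraint)."""
    d = points - C
    # Component of each offset perpendicular to the axis direction W.
    perp = d - np.outer(d @ W, W)
    return np.sqrt(np.mean(np.sum(perp**2, axis=1)))

# Synthetic quarter-cylinder of radius 20.0 mm about the z-axis,
# mimicking the quarter-surface scan described in the text.
theta = np.linspace(0.0, np.pi / 2, 200)
z = np.linspace(0.0, 10.0, 200)
pts = np.column_stack([20.0 * np.cos(theta), 20.0 * np.sin(theta), z])
r = fit_radius(pts, C=np.zeros(3), W=np.array([0.0, 0.0, 1.0]))
```

A full fit such as that of Ref. 21 also optimizes the axis point and direction; once those are fixed, the radius follows in closed form as shown here.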
The deviation from the cylindrical surface was 0.24 mm, with a standard deviation of 0.031 mm. The error was approximately twice that of the flat plane, mainly because the flat-plane measurement was static, whereas the cylindrical test was based on dynamic scanning. Although it departs from the original intention of manual scanning in the endoscope's development, the precision of the measurement could be improved slightly by using a micro universal stage to scan cylindrical surfaces. Keeping the endoscope perpendicular to the surface is difficult because of the variable scanning distance; poor focus of the projected structured light results in weaker detection and data losses. Nonuniform movement during manual scanning is another factor that must be considered. Figure 15 shows the six degrees-of-freedom position and orientation tracking of the endoscope during the measurement of the cylindrical surface. The tracking curves for the translational positions are clearly not as smooth as those of the 3D orientation. An excessively fast scan causes data loss, whereas an extremely slow scan can result in repeated scans of the same location and duplicated data. Our previous endoscope's image data were characterized by nonperiodic fluctuations in the detected light intensity, caused by the exposure behavior or frame drops of the camera; these situations often occur in web cameras. This artifact was eliminated by locking the exposure parameters, which were set in the customized control circuit. The endoscopic scanner software was developed on the Windows platform based on the EBSIM and combined with a rendering algorithm.22 The software interface is shown in Fig. 16. The left side of the image shows the 3D raw point measurement data for the surface.
The two other panels provide users with shape rotation, tilting, and zooming of the scanned surface. Figure 17 presents the rendered surface from different viewpoints. The recovered surface quality is highly promising.

3.4. Knee-Joint Test

An ex vivo chicken knee-joint bone sample was used to demonstrate the functionality of the endoscopic system and obtain a 3D model (see Fig. 18). To simulate intraoperative inspections of the knee-joint, this measurement test was conducted in an underwater environment. The collected data consisted of 103,672 points. The point density is significantly higher (more than two times) than that of the cylindrical surface owing to the better reflection of light from the joint. The experimental results showed that the endoscopic scanner works even on biological surfaces that cannot be easily scanned owing to volume scattering and highlights. The data quality demonstrates potential because the curvature of the complete joint can be recovered. However, in a microscopic environment, measuring tissue surfaces with high reflectivity is inevitable. Reflective highlights from the water caused some spurious data, as shown by the black arrow in Fig. 19. We attribute this to two possible reasons. First, the material of the knee surface was smooth and highly reflective. Second, measuring a surface under water at a short scan distance made the camera more sensitive to the light intensity. Two main types of techniques may optimize the measurement results for such problems. One is based on changing the intensity of the illumination light source or on camera exposure techniques.23–25 The other is to employ a filter in front of the camera tip.
The former method is more complicated to operate because it requires fine-tuning the projection and exposure parameters several times to find the best effect before measurement. Moreover, the exposure settings of the camera cannot be adjusted in some cases, and a low light intensity may lead to weaker detection of the structured light. In contrast, the latter method is easier to implement if economic cost is disregarded. To quantitatively evaluate the surface congruency of the measured knee-joint, an assessment method based on the measured 3D point sets must be designed for free-form surfaces. This will be considered in our future work.

4. Conclusions

A flexible 3D endoscopic scanner based on the EBSIM with a final diameter of 3.4 mm has been proposed in this study. To the best of our knowledge, this is the first study to investigate a 3D endoscope that combines imaging fiber structured light with an electromagnetic tracking strategy in the smallest distal-tip configuration possible. Complex calibration and system registration are unnecessary for the proposed endoscope, and objects can be scanned from any position and orientation. The characteristic curve and error propagation of the endoscope were analyzed, and the results showed that the working distance is an important parameter for the precision of a 3D endoscope using the EBSIM. Two experiments, with a flat plane and a cylindrical surface, demonstrated the endoscope's precision: a remarkable accuracy was obtained for an extremely short baseline of 1.4 mm. A chicken knee-joint was used as an ex vivo example, and its 3D profile was accurately reconstructed. The proposed 3D endoscope is expected to provide qualitative analysis to surgeons in their decision-making with the aid of a 3D surface shape.

Acknowledgments

This research was supported by the Beijing Natural Science Foundation (Grant Nos.
3204039 and 4204113), the National Natural Science Foundation of China (Grant No. 52005046), and Beijing Municipal Commission of Education (Grant No. KM201911232021). ReferencesA. Donges and R. Noll, Laser Measurement Technology: Fundamentals and Applications, 1st ed.Springer-Verlag, Berlin, Heidelberg
(2015). Google Scholar
G. A. P. Escamilla, F. Kobayashi and Y. Otani,
“Three-dimensional surface measurement based on the projected defocused pattern technique using imaging fiber optics,”
Opt. Commun., 390 57
–60
(2017). https://doi.org/10.1016/j.optcom.2016.12.057 OPCOB8 0030-4018 Google Scholar
H. M. Park and K.-N. Joo,
“Endoscopic precise 3D surface profiler based on continuously scanning structured illumination microscopy,”
Curr. Opt. Photonics, 2
(2), 172
–178
(2018). https://doi.org/10.1364/COPP.2.000172 Google Scholar
M. Chen et al.,
“Three-dimensional surface profile measurement of a cylindrical surface using a multi-beam angle sensor,”
Precis. Eng., 62 62
–70
(2020). https://doi.org/10.1016/j.precisioneng.2019.11.009 PREGDL 0141-6359 Google Scholar
J. Schlobohm, A. Posch and E. Reithmeier,
“A raspberry pi based portable endoscopic 3D measurement system,”
Electronics, 5
(3), 43
(2016). https://doi.org/10.3390/electronics5030043 ELECAD 0013-5070 Google Scholar
S. Pulwer et al.,
“Dynamic pattern generation by single-mode fibers for endoscopic 3D measurement systems,”
Proc. SPIE, 11293 112930F
(2020). https://doi.org/10.1117/12.2543526 PSISDG 0277-786X Google Scholar
J. Lin et al.,
“Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks,”
Med. Image Anal., 48 162
–176
(2018). https://doi.org/10.1016/j.media.2018.06.004 Google Scholar
S. G. Pearce et al.,
“An investigation of 2 techniques for optimizing joint surface congruency using multiple cylindrical osteochondral autografts,”
J. Arthrosc. Relat. Surg., 17
(1), 50
–55
(2001). https://doi.org/10.1053/jars.2001.19966 Google Scholar
L. Hangody et al.,
“Osteochondral plugs: autogenous osteochondral mosaicplasty for the treatment of focal chondral and osteochondral articular defects,”
Oper. Tech. Orthop., 7
(4), 312
–322
(1997). https://doi.org/10.1016/S1048-6666(97)80035-3 Google Scholar
H. Robert,
“Chondral repair of the knee joint using mosaicplasty,”
Orthop. Traumatol.: Surg. Res., 97
(4), 418
–429
(2011). https://doi.org/10.1016/j.otsr.2011.04.001 Google Scholar
P. Di Benedetto et al., “Arthroscopic mosaicplasty for osteochondral lesions of the knee: computer-assisted navigation versus freehand technique,” Arthroscopy, 28(9), 1290–1296 (2012). https://doi.org/10.1016/j.arthro.2012.02.013
H. Haneishi, T. Ogura and Y. Miyake, “Profilometry of a gastrointestinal surface by an endoscope with laser beam projection,” Opt. Lett., 19(9), 601–603 (1994). https://doi.org/10.1364/OL.19.000601
L. Maier-Hein et al., “Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery,” Med. Image Anal., 17(8), 974–996 (2013). https://doi.org/10.1016/j.media.2013.04.003
Z. Long and K. Nagamune, “Underwater 3D imaging using a fiber-based endoscopic system for arthroscopic surgery,” J. Adv. Comput. Intell. Inf., 20(3), 448–454 (2016). https://doi.org/10.20965/jaciii.2016.p0448
N. Haouchine et al., “Impact of soft tissue heterogeneity on augmented reality for liver surgery,” IEEE Trans. Vis. Comput. Graphics, 21(5), 584–597 (2015). https://doi.org/10.1109/TVCG.2014.2377772
L. Chen et al., “SLAM-based dense surface reconstruction in monocular minimally invasive surgery and its application to augmented reality,” Comput. Methods Prog. Biomed., 158, 135–146 (2018). https://doi.org/10.1016/j.cmpb.2018.02.006
J. H. Kim et al., “Tracking by detection for interactive image augmentation in laparoscopy,” Lect. Notes Comput. Sci., 7359, 246–255 (2012). https://doi.org/10.1007/978-3-642-31340-0_26
Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell., 22(11), 1330–1334 (2000). https://doi.org/10.1109/34.888718
P. Rachakonda, B. Muralikrishnan and D. Sawyer, “Sources of errors in structured light 3D scanners,” Proc. SPIE, 10991, 1099106 (2019). https://doi.org/10.1117/12.2518126
M. Fujigaki, T. Sakaguchi and Y. Murata, “Development of a compact 3D shape measurement unit using the light-source-stepping method,” Opt. Lasers Eng., 85, 9–17 (2016). https://doi.org/10.1016/j.optlaseng.2016.04.016
D. Eberly, “Least squares fitting of data by linear or quadratic structures: 7. Fitting a cylinder to 3D points,” (1999). https://www.geometrictools.com/Documentation/LeastSquaresFitting.pdf
Z. Long and K. Nagamune, “A marching cubes algorithm: application for three-dimensional surface reconstruction based on endoscope and optical fiber,” Information, 18(4), 1425–1437 (2015).
Z. Cai et al., “Structured light field 3D imaging,” Opt. Express, 24(18), 20324–20334 (2016). https://doi.org/10.1364/OE.24.020324
Z. Song and S. Yau, “High dynamic range scanning technique,” Opt. Eng., 48(3), 033604 (2009). https://doi.org/10.1117/1.3099720
H. Zhao et al., “Rapid in-situ 3D measurement of shiny object based on fast and high dynamic range digital fringe projector,” Opt. Lasers Eng., 54, 170–174 (2014). https://doi.org/10.1016/j.optlaseng.2013.08.002
Biography

Zhongjie Long received his BE degree in vehicle engineering from South China University of Technology in 2010, his ME degree in mechanical engineering from Beijing Information Science and Technology University in 2013, and his PhD in advanced interdisciplinary science and technology from the University of Fukui, Japan, in 2016. Currently, he is an associate professor at Beijing Information Science and Technology University. His research interests include computer-assisted surgery systems and 3D endoscopic imaging.

Hengbing Guo received his MD degree in clinical medicine from the Health Science Center of Xi’an Jiaotong University in 2001. He is now an associate chief physician at the Orthopedics Rehabilitation Center, Beijing Rehabilitation Hospital of Capital Medical University. He is engaged in the treatment and rehabilitation of sports medicine and sports trauma and is experienced in arthroscopic minimally invasive surgery. He is a member of the Shoulder and Elbow Surgery Professional Committee of CMEA.

Kouki Nagamune received his PhD in computer engineering from Himeji Institute of Technology, Japan, in 2004. He worked as a lecturer at Kobe University Graduate School of Medicine from 2006 to 2007, then relocated to the University of Fukui in 2007, where he is currently an associate professor. Since 2017, he has been an IEEE senior member. He is a member of IEICE, JSMBE, JSCB, and JSFTII. His research interests include medical imaging.

Yunbo Zuo received his PhD in mechanical manufacture and automation from Beijing Institute of Technology in 2008. He is now an associate fellow at the Key Laboratory of Modern Measurement and Control Technology, Ministry of Education, Beijing Information Science and Technology University. His research interests include robot vision recognition, image object detection, image target tracking, and state monitoring of mechanical systems.