Ultrasound is widely used in clinical imaging because it is non-invasive, real-time, and inexpensive. Due to the
freehand nature of clinical ultrasound, analysis of an image sequence often requires registration between the images. Of
the previously developed mono-modality ultrasound registration frameworks, only a few were designed to register small
anatomical structures. Monitoring of small finger vessels, in particular, is essential for the treatment of vascular diseases
such as Raynaud’s disease. High-frequency ultrasound (HFUS) can now image anatomic detail down to 30
microns within the vessels, but to date no work has addressed ultrasound registration at such a small scale. Due to the
complex internal finger structure and increased noise of HFUS, it is difficult to register 2D images of finger vascular
tissue, especially under deformation. We studied a variety of registration similarity metrics, combined with different
pre-processing techniques, to determine which were best suited for HFUS vessel tracking. The overall best
performance was obtained with a normalized correlation metric coupled with HFUS downsampling and a one-plus-one
evolutionary optimizer, yielding a mean registration error of 0.05 mm. We also used HFUS to study how finger tissue
deforms under an ultrasound transducer, comparing internal tissue motion with transducer motion. Improving HFUS registration
and tissue modeling may lead to new research and improved treatments for peripheral vascular disorders.
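A minimal sketch of that winning configuration, written with SimpleITK (an illustrative choice of library, not necessarily the one used in the study); the file names, pyramid levels, and iteration count are placeholder assumptions:

```python
# Sketch: normalized correlation + downsampling pyramid + one-plus-one
# evolutionary optimizer, mirroring the configuration described above.
# File names and parameter values are illustrative placeholders.
import SimpleITK as sitk

fixed = sitk.ReadImage("hfus_frame_000.png", sitk.sitkFloat32)
moving = sitk.ReadImage("hfus_frame_001.png", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsCorrelation()                    # normalized correlation metric
reg.SetOptimizerAsOnePlusOneEvolutionary(200)   # 200 iterations (assumed)
reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
reg.SetInterpolator(sitk.sitkLinear)
reg.SetShrinkFactorsPerLevel([4, 2, 1])         # HFUS downsampling pyramid
reg.SetSmoothingSigmasPerLevel([2, 1, 0])

transform = reg.Execute(fixed, moving)          # estimated inter-frame motion
print(transform.GetParameters())
```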
Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical
imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is
essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in
their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the
past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator,
the Sonic Flashlight, which uses a half-silvered mirror and a miniature display mounted on an ultrasound probe to produce
a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion
approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this
concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating
the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful
operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct
relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated
computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does
today.
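A minimal sketch of the surface-finding step, assuming a calibrated, rectified camera pair and OpenCV block matching; the focal length, baseline, and file names below are placeholder assumptions, not the system's actual calibration:

```python
# Sketch: recover a depth map of the phantom surface from stereo disparity
# between the two probe-mounted cameras. Assumes rectified input images.
import cv2
import numpy as np

left = cv2.imread("cam_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("cam_right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> px

# Depth from disparity: Z = f * B / d, with focal length f (pixels) and
# baseline B (mm) taken from a prior calibration (values assumed here).
f_px, baseline_mm = 700.0, 25.0
depth_mm = np.where(disp > 0, f_px * baseline_mm / np.maximum(disp, 1e-6), 0.0)
```

The resulting depth map can then be triangulated into the 3D surface onto which the ultrasound slice is rendered at its known pose relative to the probe.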
We have developed a new image analysis framework called Shells and Spheres (SaS), based on a set of spheres
with adjustable radii, with exactly one sphere centered at each image pixel. This set of spheres is considered optimized
when each sphere reaches, but does not cross, the nearest boundary of an image object. Statistical calculations at varying
scale are performed on populations of pixels within spheres, as well as populations of adjacent spheres, in order to
determine the proper radius of each sphere. In the present work, we explore the use of a classical statistical method, the
Student's t-test, within the SaS framework, to compare adjacent spherical populations of pixels. We present results from
various techniques based on this approach, including a comparison with classical gradient and variance measures at the
boundary. A number of optimization strategies are proposed and tested based on pairs of adjacent spheres whose sizes are
controlled in a methodical manner. A properly positioned sphere pair straddles an object boundary,
yielding a direction function from the center of each sphere to the boundary point between them. Finally, we develop a
method for extracting medial points based on the divergence of that direction function as it changes across medial ridges,
reporting not only the presence of a medial point but also the angle between the directions from that medial point to the
two respective boundary points that make it medial. Although demonstrated here only in 2D, these methods are all
inherently n-dimensional.
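A minimal 2D sketch of the core comparison, a t-test between the pixel populations of two adjacent spheres (disks in 2D), using scipy; the Welch variant and the helper below are illustrative simplifications, not the full SaS implementation:

```python
# Sketch: compare the pixel populations of two adjacent disks with a t-test;
# a large |t| suggests the pair straddles an object boundary.
import numpy as np
from scipy.stats import ttest_ind

def disk_pixels(image, center, radius):
    """Intensities of all pixels within a disk of given center and radius."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return image[mask]

def sphere_pair_t(image, c1, c2, radius):
    """t statistic and p-value for two adjacent spherical pixel populations."""
    a = disk_pixels(image, c1, radius)
    b = disk_pixels(image, c2, radius)
    t, p = ttest_ind(a, b, equal_var=False)  # Welch's t-test (assumed variant)
    return t, p
```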
We have developed a new image-based guidance system for microsurgery using optical coherence tomography
(OCT), which presents a virtual image in its correct location inside the scanned tissue. Applications include surgery of
the cornea, skin, and other surfaces below which shallow targets may advantageously be displayed for the naked eye or
low-power magnification by a surgical microscope or loupes (magnifying eyewear). OCT provides real-time, high-resolution
(3 micron) images at video rates over an axial range of 2 mm or more in soft tissue, and is therefore
suitable for guidance to various shallow targets such as Schlemm's canal in the eye (for treating glaucoma) or skin
tumors. A series of prototypes of the "OCT penlight" has produced virtual images with sufficient resolution and
intensity to be useful under magnification, while the geometrical arrangement between the OCT scanner and display
optics (including a half-silvered mirror) permits sufficient surgical access. The two prototypes constructed thus far have
used, respectively, a miniature organic light emitting diode (OLED) display and a reflective liquid crystal on silicon
(LCoS) display. The OLED has the advantage of relative simplicity, satisfactory resolution (15 micron), and color
capability, whereas the LCoS can produce an image with much higher intensity and superior resolution (12 micron),
although it is monochromatic and more complicated optically. Intensity is a crucial limiting factor, since light flux is
greatly diminished with increasing magnification, thus favoring the LCoS as the more practical system.
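The radiometric reasoning behind that trade-off, stated here as a standard approximation rather than a measured property of either prototype, is that perceived image irradiance falls roughly with the square of the lateral magnification M:

```latex
% Standard approximation, lossless optics assumed:
E_{\text{image}} \approx \frac{E_{\text{display}}}{M^{2}}
% e.g., viewing at 10x magnification demands roughly 100x the display
% intensity for the same apparent brightness.
```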
The design of the first Real-Time-Tomographic-Holography (RTTH) optical system for augmented-reality applications
is presented. RTTH places a viewpoint-independent real-time (RT) virtual image (VI) of an object
into its actual location, enabling natural hand-eye coordination to guide invasive procedures, without requiring
tracking or a head-mounted device. The VI is viewed through a narrow-band Holographic Optical Element
(HOE) with built-in power that generates the largest possible near-field, in-situ VI from a small display chip
without noticeable parallax error or obscuring the direct view of the physical world. Rigidly fixed upon a medical
ultrasound probe, RTTH could show the scan in its actual location inside the patient, because the VI would
move with the probe. We designed the image source together with the system optics, allowing us to ignore both
planar geometric distortion and field curvature, compensated respectively by RT pre-processing software
and by attaching a custom-surfaced fiber-optic faceplate (FOFP) to the image source. Focus in our fast, non-axial
system was achieved by placing correcting lenses near the FOFP and by custom optical fabrication of the volume-phase
HOE, using a recording beam specially shaped by additional lenses. By simultaneously simulating and
optimizing the system's playback performance across variations in both the total playback and HOE-recording
optical systems, we derived and built a design that projects a 104 × 112 mm planar VI 1 m from the HOE using
a laser-illuminated 19 × 16 mm LCD+FOFP image source. The VI appeared fixed in space and well focused.
Viewpoint-induced location errors were <3 mm, and an unexpected first-order astigmatism produced a 3 cm (3% of
1 m) ambiguity in depth, typically unnoticed by human observers.
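A minimal sketch of the planar-distortion pre-processing mentioned above, modeling the distortion as a fixed homography H estimated at calibration time (the matrix values are placeholder assumptions) and undone per frame before the frame reaches the display chip:

```python
# Sketch: real-time pre-compensation of a planar geometric distortion by
# warping each frame with the inverse of a calibrated homography H.
import cv2
import numpy as np

H = np.array([[1.02, 0.01, -3.0],   # assumed calibration result,
              [0.00, 0.98,  2.5],   # not measured values
              [1e-5, 0.00,  1.0]])

def predistort(frame):
    """Warp by H^-1 so the optics' distortion restores correct geometry."""
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, np.linalg.inv(H), (w, h))
```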