Percutaneous Transluminal Coronary Angioplasty is currently the preferred method for treating coronary artery disease. Angiograms depict the residual lumen but lack information about plaque characteristics and exact geometry. During instrument positioning, intracoronary characterization at the current instrument location is desirable. By pulling back an intravascular ultrasound (IVUS) probe through a stenosis, cross-sections of the artery are acquired. These images can provide the desired characterization if they are properly registered to diagnostic angiograms or interventional fluoroscopy. The method we propose acquires fluoroscopy frames at the beginning, at the end, and optionally during a constant-speed pullback. The IVUS probe is localized and registered to previously acquired angiograms using a compensation algorithm for heartbeat and respiration. Then, for each heart phase, the pullback path is interpolated and the corresponding IVUS frames are positioned along it. During the intervention the instrument is localized and registered onto the pullback path. Thus, each IVUS frame can be registered to a position on an angiogram or to an instrument location, and during subsequent steps of the intervention the appropriate IVUS frames can be displayed as if an IVUS probe were present at the instrument position. The method was tested using a phantom featuring respiratory and contraction movement and an automatic pullback with constant speed. The IVUS acquisition was replaced by fibre optics, and the phantom was imaged in angiographic and fluoroscopic modes. The study showed that, for the phantom case, it is indeed possible to register the IVUS cross-sections to the interventional instrument positions to an accuracy of better than 2 mm.
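As an illustration of the final mapping step, the following Python sketch converts an instrument position on the angiogram into the index of the corresponding IVUS frame under the constant-speed pullback assumption. The function name, array layout, and NumPy implementation are assumptions; heart-phase compensation and probe localization are taken as already handled upstream.

```python
import numpy as np

def ivus_frame_for_position(path_xy, instrument_xy, n_frames):
    """Map an instrument position on the angiogram to the nearest IVUS frame.

    path_xy       : (N, 2) array of pullback-path points for the current heart phase,
                    ordered from pullback start to pullback end.
    instrument_xy : (2,) instrument tip position localized in the fluoroscopy frame
                    and registered onto the angiogram.
    n_frames      : number of IVUS frames acquired during the constant-speed pullback.
    """
    path_xy = np.asarray(path_xy, dtype=float)
    # Cumulative arc length along the interpolated pullback path.
    seg = np.linalg.norm(np.diff(path_xy, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg)))
    # Project the instrument position onto the path (nearest path sample).
    idx = np.argmin(np.linalg.norm(path_xy - instrument_xy, axis=1))
    # With a constant-speed pullback, the arc-length fraction maps linearly
    # to the IVUS frame index.
    frac = arc[idx] / arc[-1]
    return int(round(frac * (n_frames - 1)))
```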
Minimally invasive interventions are an important application domain for real-time medical imaging modalities. Image processing algorithms that enhance interventional images must run within hard real-time and latency constraints because the physicians performing the intervention depend on hand-eye coordination. To support research activities, we present a flexible software architecture that allows image enhancement algorithms to be transferred from research to clinical validation. The architecture pays particular attention to multimodality interventional scenarios in which an intervention follows the acquisition of diagnostic data in close succession. Including the additional information from such diagnostic acquisitions enables content-based image enhancement. The proposed software architecture manages threads for a graphical user interface, data acquisition, offline preparation of diagnostic data, and the context-based real-time enhancement itself. Using this architecture, it is possible to run arbitrarily complex content-based image analysis in real time with only 9% computational overhead during the latency-introducing algorithm run time. The proposed architecture is exemplified with an application for navigation support in cardiac CathLab interventions, where diagnostic exposure acquisitions and interventional fluoroscopy can alternate in close succession.
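As a rough illustration of the described thread separation (not the authors' implementation), the following Python sketch uses bounded queues so the real-time enhancement path never blocks on stale frames while diagnostic data are prepared offline. All names, as well as the threading and queue choices, are assumptions.

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=2)    # live frames from the acquisition thread
display_queue = queue.Queue(maxsize=2)  # enhanced frames consumed by the GUI thread
diagnostic_model = {}                   # filled once by the offline preparation thread
model_ready = threading.Event()

def acquisition_loop(grab_frame):
    """Pull frames from the imaging system; drop frames if enhancement lags."""
    while True:
        frame = grab_frame()
        try:
            frame_queue.put_nowait(frame)
        except queue.Full:
            pass  # keep latency bounded by discarding the stale frame

def offline_preparation(angiograms, build_model):
    """Prepare diagnostic data (e.g. vessel extraction) outside the real-time path."""
    diagnostic_model["vessels"] = build_model(angiograms)
    model_ready.set()

def enhancement_loop(enhance):
    """Content-based real-time enhancement using the prepared diagnostic model."""
    while True:
        frame = frame_queue.get()
        model = diagnostic_model.get("vessels") if model_ready.is_set() else None
        display_queue.put(enhance(frame, model))
```

Each function would run in its own thread; until the offline preparation finishes, the enhancement loop falls back to passing frames through without the diagnostic context.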
An overlay of diagnostic angiograms onto interventional fluoroscopy during minimally invasive cathlab interventions can support navigation, but it suffers from artifacts due to the mismatch between vessels and interventional devices. Here, weak image features and strict real-time constraints do not allow for standard multi-modality registration techniques. In the presented method, diagnostic angiograms are filtered to extract the imaged vessel structure. A distance transform of the extracted vessels allows fast matching with interventionally imaged devices, which are extracted with fast local filters only. Competing vessel and object filters are tested on 10 diagnostic angiograms and 25 fluoroscopic frames showing a guidewire, and their performance is compared against manual segmentations. A newly presented directional stamping filter based on anisotropic diffusion of local image patches offers the best results for vessel extraction and also improves guidewire detection. Using these filters, the device-to-vessel match succeeds in 92% of the tested frames. This rate decreases to 75% for an initial mismatch of 16 pixels.
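To illustrate why a distance transform enables fast matching, the following Python sketch scores a candidate device-to-vessel alignment with simple lookups into a precomputed distance map. The SciPy call, function name, and cost definition are assumptions rather than the published implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def device_to_vessel_cost(vessel_mask, device_points, offset=(0, 0)):
    """Mean distance from (shifted) device points to the nearest extracted vessel pixel.

    vessel_mask   : 2-D boolean array, True on extracted vessel pixels.
    device_points : (N, 2) array of (row, col) guidewire points from the fluoroscopy frame.
    offset        : candidate (row, col) translation applied to the device points.
    """
    # Distance transform of the background: each pixel holds the distance to the
    # nearest vessel pixel, so looking up device points gives the match cost directly.
    dist = distance_transform_edt(~vessel_mask)
    pts = np.round(np.asarray(device_points, dtype=float) + offset).astype(int)
    pts[:, 0] = np.clip(pts[:, 0], 0, vessel_mask.shape[0] - 1)
    pts[:, 1] = np.clip(pts[:, 1], 0, vessel_mask.shape[1] - 1)
    return dist[pts[:, 0], pts[:, 1]].mean()
```

A registration along these lines could evaluate this cost over candidate translations and keep the offset with the smallest mean distance, since the distance map only needs to be computed once per diagnostic angiogram.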