As we move through an environment, we are constantly making assessments, judgments, and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions -- our implicit "labeling" of the world. In this talk I will describe our work using physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3-D environment.
Specifically, we record electroencephalographic (EEG), saccadic, and pupillary data from subjects as they move through a small part of a 3-D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to those that are labeled. Finally, the system plots an efficient route so that subjects visit similar objects of interest.
We show that by exploiting the subjects' implicit labeling, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3-D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.
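The label-propagation step can be made concrete with a minimal sketch. The code below assumes a graph-based label-spreading scheme (in the style of Zhou et al.) over an object-similarity graph; the affinity matrix W, the seed scores, and the parameter alpha are illustrative choices, not details taken from the abstract.

    import numpy as np

    def propagate_interest(W, seed_scores, alpha=0.9):
        # W           : (n, n) symmetric, non-negative visual-similarity matrix
        #               over all objects in the environment
        # seed_scores : (n,) classifier-inferred interest for objects the subject
        #               has already encountered, 0 elsewhere
        # alpha       : controls how far labels diffuse along graph edges
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        S = D_inv_sqrt @ W @ D_inv_sqrt                # normalized affinities
        n = W.shape[0]
        # Closed-form fixed point of iterative label spreading
        return np.linalg.solve(np.eye(n) - alpha * S, seed_scores)

Objects with the highest propagated scores would then be handed to a route planner (for example, a travelling-salesman-style heuristic) to produce the efficient tour described above.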
We present an algorithm for blindly recovering constituent source spectra from magnetic resonance spectroscopic imaging (MRSI) of human brain. The algorithm is based on the non-negative matrix factorization (NMF) algorithm, extending it to include a constraint on the positivity of the amplitudes of the recovered spectra and mixing matrices. This positivity constraint enables recovery of physically meaningful spectra even in the presence of noise that causes a significant number of the observation amplitudes to be negative. The algorithm, which we call constrained non-negative matrix factorization (cNMF), does not enforce independence or sparsity, though it recovers sparse sources quite well. It can be viewed as a maximum likelihood approach for finding basis vectors in a bounded subspace. In this case the optimal basis vectors are the ones that envelope the observed data with a minimum deviation from the boundaries. We incorporate the cNMF algorithm into a hierarchical decomposition framework, showing that it can be used to recover tissue-specific spectra, e.g., spectra indicative of malignant tumor. We demonstrate the hierarchical procedure on 1H long echo time (TE) brain absorption spectra and conclude that the computational efficiency of the cNMF algorithm makes it well-suited for use in diagnostic work-up.
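A minimal numerical sketch of a positivity-constrained factorization of this kind is given below. It uses standard multiplicative NMF updates followed by clipping to keep the factors positive, which is one simple way to impose the constraint; it is not claimed to reproduce the exact cNMF update rules, and all names and parameters are illustrative.

    import numpy as np

    def cnmf(X, n_sources, n_iter=500, eps=1e-9, seed=0):
        # X : (n_observations, n_channels) spectra; entries may be negative,
        #     with negative amplitudes assumed to arise from the noise
        # Returns A (mixing, n_observations x n_sources) and S (source spectra,
        # n_sources x n_channels), both kept strictly positive.
        rng = np.random.default_rng(seed)
        A = rng.random((X.shape[0], n_sources)) + eps
        S = rng.random((n_sources, X.shape[1])) + eps
        for _ in range(n_iter):
            # Multiplicative updates in the style of Lee & Seung, followed by
            # clipping so the factors stay positive even though X has
            # negative entries.
            S *= (A.T @ X) / (A.T @ A @ S + eps)
            np.maximum(S, eps, out=S)
            A *= (X @ S.T) / (A @ S @ S.T + eps)
            np.maximum(A, eps, out=A)
        return A, S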
In this paper a constrained non-negative matrix factorization (cNMF) algorithm for recovering constituent spectra is described together with experiments demonstrating the broad utility of the approach. The algorithm is based on the NMF algorithm of Lee and Seung, extending it to include a constraint on the minimum amplitude of the recovered spectra. This constraint enables the algorithm to deal with observations having negative values by assuming they arise from the noise distribution. The cNMF algorithm does not explicitly enforce independence or sparsity, instead only requiring the source and mixing matrices to be non-negative. The algorithm is very fast compared to other "blind" methods for recovering spectra. cNMF can be viewed as a maximum likelihood approach for finding basis vectors in a bounded subspace. In this case the optimal basis vectors are the ones that envelope the observed data with a minimum deviation from the boundaries. Results for Raman spectral data, hyperspectral images, and 31P human brain data are provided to illustrate the algorithm's performance.
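The maximum-likelihood view mentioned above can be written out explicitly. Assuming additive Gaussian noise (an illustrative choice of noise model), factoring the observed spectra X into a non-negative mixing matrix A and non-negative source spectra S amounts to the constrained least-squares problem

    X = A S + N, \qquad N_{ij} \sim \mathcal{N}(0, \sigma^2),
    (\hat{A}, \hat{S}) = \operatorname*{arg\,min}_{A \ge 0,\; S \ge 0} \lVert X - A S \rVert_F^2 ,

so that negative observation amplitudes are attributed to the noise term N rather than forced into the recovered spectra.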
We formulate a model for probability distributions on image spaces. We show that any distribution of images can be factored exactly into conditional distributions of feature vectors at one resolution (pyramid level) conditioned on the image information at lower resolutions. We would like to factor this over positions in the pyramid levels to make it tractable, but such factoring may miss long-range dependencies. To fix this, we introduce hidden class labels at each pixel in the pyramid. The result is a hierarchical mixture of conditional probabilities, similar to a hidden Markov model on a tree. The model parameters can be found with maximum likelihood estimation using the EM algorithm. We have obtained encouraging preliminary results on the problem of detecting masses in mammograms.
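Using notation introduced here for illustration (not the paper's own), let F_l denote the feature vectors at pyramid level l and let I_{l+1}, ..., I_L denote the image information at the coarser levels. The exact factorization, and the position-wise form made tractable by the hidden labels c (ignoring for brevity the tree-structured coupling of labels across scales), can be written roughly as

    P(I) = \prod_{l=0}^{L} P(F_l \mid I_{l+1}, \dots, I_L),
    P(F_l \mid I_{l+1}, \dots, I_L) \approx \prod_{x} \sum_{c} P\big(f_l(x) \mid c, I_{l+1}, \dots, I_L\big)\, P\big(c \mid I_{l+1}, \dots, I_L\big),

where x ranges over positions at level l; the parameters of these conditional distributions are the quantities fit with EM.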
We have previously presented a hierarchical pyramid/neural network (HPNN) architecture which combines multi-scale image processing techniques with neural networks. This coarse-to-fine HPNN was designed to learn large-scale context information for detecting small objects. We have developed a similar architecture to detect mammographic masses (malignant tumors). Since masses are large, extended objects, the coarse-to-fine HPNN architecture is not suitable for the problem. Instead we constructed a fine-to-coarse HPNN architecture which is designed to learn small-scale detail structure associated with the extended objects. Our initial results applying the fine-to-coarse HPNN to mass detection are encouraging, with detection performance improvements of about 30%. We conclude that the ability of the HPNN architecture to integrate information across scales, from fine to coarse in the case of masses, makes it well suited for detecting objects which may have detail structure occurring at scales other than the natural scale of the object.
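A schematic sketch of the fine-to-coarse integration is given below; the pooling operation, the per-level network interface, and all names are assumptions made for illustration, not details of the HPNN implementation.

    import numpy as np

    def avg_pool2(x):
        # Downsample an (H, W, C) feature map by 2 via block averaging
        h, w, c = x.shape
        return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

    def fine_to_coarse_hpnn(pyramid, nets):
        # pyramid : list of (H_l, W_l, C_l) feature maps, finest level first,
        #           each level half the resolution of the previous one
        # nets    : one per level; nets[l](features) -> (hidden_map, output_map)
        # The hidden units of each finer level are pooled and appended to the
        # input of the next coarser level, so coarse-level decisions can draw
        # on fine-scale detail structure.
        hidden, output = None, None
        for features, net in zip(pyramid, nets):
            if hidden is not None:
                features = np.concatenate([features, avg_pool2(hidden)], axis=-1)
            hidden, output = net(features)
        return output  # detection map at the coarsest level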
In this paper we explore the use of feature selection techniques to improve the generalization performance of pattern recognizers for computer-aided diagnosis. We apply a modified version of the sequential forward floating selection (SFFS) method of Pudil et al. to the problem of selecting an optimal feature subset for mass detection in digitized mammograms. The complete feature set consists of multi-scale tangential and radial gradients in the mammogram region of interest. We train a simple multi-layer perceptron (MLP) using the SFFS algorithm and compare its performance, using a jackknife procedure, to an MLP trained on the complete feature set (35 features). Results indicate that a variable number of features is chosen in each of the jackknife sets (12 +/- 4) and the test performance, Az, using the chosen feature subset is no better than the performance using the entire feature set. These results may be attributed to the fact that the feature set is noisy and the data set used for training/testing is small. We next modify the feature selection technique by using the results of the jackknife to compute the frequency at which different features are selected. We construct a classifier by choosing the top N features, selected most frequently, which maximize performance on the training data. We find that by adding this "hand-tuning" component to the feature selection process, we can reduce the feature set from 35 to 8 features and at the same time have a statistically significant increase in generalization performance (p < 0.015).
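The frequency-based second stage lends itself to a short sketch. The function below assumes the per-jackknife SFFS-selected subsets and a scoring function for training performance (for example, Az) are available; all names are illustrative.

    from collections import Counter

    def select_by_frequency(jackknife_subsets, max_features, score_fn):
        # jackknife_subsets : list of feature-index lists chosen by SFFS in
        #                     each jackknife partition
        # score_fn          : returns training performance for a candidate
        #                     feature subset
        counts = Counter(f for subset in jackknife_subsets for f in subset)
        ranked = [f for f, _ in counts.most_common()]
        best_subset, best_score = None, float("-inf")
        for n in range(1, min(max_features, len(ranked)) + 1):
            candidate = ranked[:n]           # top-N most frequently selected
            score = score_fn(candidate)
            if score > best_score:
                best_subset, best_score = candidate, score
        return best_subset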
We formulate an error function for the supervised learning of image search/detection tasks when the positions of the objects to be found are uncertain or ill-defined. The need for this uncertain object position (UOP) error function arises in at least two ways. First, point-like objects frequently have positions that are inaccurately specified. We illustrate this with the problem of detecting microcalcifications in mammograms. The second type of position uncertainty occurs with extended objects whose boundaries are not accurately defined. In this case we usually only need the detector to respond at one pixel within each object. As an example of this, we present results for neural networks trained to detect clusters of buildings in aerial photographs. We are currently applying the UOP error function to the detection of masses in mammograms, which also have poorly-defined boundaries. In all of these examples, neural networks trained with the UOP error function perform much better than networks trained with the conventional cross-entropy error function.
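One plausible way to write such an error function (offered here only as an illustration; the abstract does not give the authors' exact form) treats the probability of missing an object as the product of per-pixel miss probabilities over its uncertainty region:

    E_{\mathrm{UOP}} = -\sum_{k} \log\Big(1 - \prod_{x \in R_k} \big(1 - y(x)\big)\Big) \;-\; \sum_{x \in B} \log\big(1 - y(x)\big),

where y(x) in (0, 1) is the detector output at pixel x, R_k is the region of uncertain position for the k-th object, and B is the set of background pixels. The first term asks the network to respond at at least one pixel inside each object region; the second is the usual cross-entropy penalty against false responses on the background.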
Microcalcifications are important cues used by radiologists for early detection of breast cancer. Individually, microcalcifications are difficult to detect, and often contextual information (e.g. clustering, location relative to ducts) can be exploited to aid in their detection. We have developed an algorithm for constructing a hierarchical pyramid/neural network (HPNN) architecture to automatically learn context information for detection. To test the HPNN we first examined if the hierarchical architecture improves detection of individual microcalcifications and if context is in fact extracted by the network hierarchy. We compared the performance of our hierarchical architecture versus a single neural network receiving input from all resolutions of a feature pyramid. Receiver operating characteristic (ROC) analysis shows that the hierarchical architecture reduces false positives by a factor of two. We examined hidden units at various levels of the processing hierarchy and found what appears to be representations of ductal location. We next investigated the utility of the HPNN if integrated as part of a complete computer-aided diagnosis (CAD) system for microcalcification detection, such as that being developed at the University of Chicago. Using ROC analysis, we tested the HPNN's ability to eliminate false positive regions of interest generated by the computer, comparing its performance to the neural network currently used in the Chicago system. The HPNN achieves an area under the ROC curve of Az = 0.94 and a false-positive fraction (FPF) of 0.21 at a true-positive fraction (TPF) of 1.0, compared with the results reported for the Chicago network (Az = 0.91, FPF = 0.43 at TPF = 1.0). These differences are statistically significant. We conclude that the HPNN algorithm is able to utilize contextual information to improve microcalcification detection and potentially reduce the false positive rates in CAD systems.
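For reference, the quantities quoted above (Az and the false-positive fraction at TPF = 1.0) can be computed from detector scores roughly as follows; this is a generic sketch, not the specific ROC-fitting procedure used in the study.

    import numpy as np

    def roc_summary(scores_pos, scores_neg):
        # scores_pos : detector outputs for true microcalcification ROIs
        # scores_neg : detector outputs for false-positive ROIs
        thresholds = np.unique(np.concatenate([scores_pos, scores_neg]))[::-1]
        tpf = np.array([(scores_pos >= t).mean() for t in thresholds])
        fpf = np.array([(scores_neg >= t).mean() for t in thresholds])
        az = np.trapz(np.r_[0.0, tpf, 1.0], np.r_[0.0, fpf, 1.0])   # area under ROC
        fpf_at_tpf1 = fpf[tpf >= 1.0].min()   # lowest FPF that still keeps TPF = 1.0
        return az, fpf_at_tpf1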
An important problem in image analysis is finding small objects in large images. The problem is challenging because (1) searching a large image is computationally expensive, and (2) small targets (on the order of a few pixels in size) have relatively few distinctive features which enable them to be distinguished from non-targets. To overcome these challenges we have developed a hierarchical neural network (HNN) architecture which combines multi-resolution pyramid processing with neural networks. The advantages of the architecture are: (1) both neural network training and testing can be done efficiently through coarse-to-fine techniques, and (2) such a system is capable of learning low-resolution contextual information to facilitate the detection of small target objects. We have applied this neural network architecture to two problems in which contextual information appears to be important for detecting small targets. The first problem is one of automatic target recognition (ATR), specifically the problem of detecting buildings in aerial photographs. The second problem focuses on a medical application, namely searching mammograms for microcalcifications, which are cues for breast cancer. Receiver operating characteristic (ROC) analysis suggests that the hierarchical architecture improves the detection accuracy for both the ATR and microcalcification detection problems, reducing false positive rates by a significant factor. In addition, we have examined the hidden units at various levels of the processing hierarchy and found what appears to be representations of road location (for the ATR example) and ductal/vasculature location (for mammography), both of which are in agreement with the contextual information used by humans to find these classes of targets. We conclude that this hierarchical neural network architecture is able to automatically extract contextual information in imagery and utilize it for target detection.
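The coarse-to-fine testing strategy can be sketched as follows; the thresholding scheme, the mask upsampling, and the assumption that each pyramid level is exactly twice the resolution of the previous one are illustrative simplifications rather than details of the HNN implementation.

    import numpy as np

    def coarse_to_fine_search(pyramid, nets, threshold=0.5):
        # pyramid : list of 2-D feature arrays, coarsest level first, each level
        #           exactly twice the resolution of the previous one
        # nets    : one detector per level; nets[l](arr) -> probability map of
        #           the same shape as arr
        mask = np.ones(pyramid[0].shape, dtype=bool)
        probs = None
        for level, net in zip(pyramid, nets):
            # Keep responses only in regions that survived the coarser level
            # (a real implementation would evaluate the detector only there).
            probs = np.where(mask, net(level), 0.0)
            candidates = probs > threshold
            # Propagate surviving candidates to the next finer grid
            mask = np.kron(candidates.astype(np.uint8),
                           np.ones((2, 2), np.uint8)).astype(bool)
        return probs  # detection map at the finest level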