One of the most challenging problems in deep learning-based brain tumor segmentation is the misclassification of tumor tissue classes caused by the inherent imbalance in class representation. Consequently, strong regularization methods are typically applied when training large-scale deep learning models for brain tumor segmentation to counteract undue bias towards over-represented tissue types. However, these regularization methods tend to be computationally expensive and may not guarantee the learning of features that represent all tumor tissue types present in the input MRI examples. Recent work on context encoding with deep CNN models has shown promise for semantic segmentation of natural scenes, with particular improvements in small-object segmentation owing to improved representative feature learning. Accordingly, we propose a novel, efficient 3D CNN-based deep learning framework with context encoding for semantic brain tumor segmentation using multimodal magnetic resonance imaging (mMRI). The context encoding module in the proposed model enforces rich, class-dependent feature learning to improve overall multi-label segmentation performance. We subsequently utilize the context-augmented features in a machine learning-based survival prediction pipeline to improve prediction performance. The proposed method is evaluated on the publicly available 2019 Brain Tumor Segmentation (BraTS) and survival prediction challenge dataset. The results show that the proposed method significantly improves both tumor tissue segmentation and overall survival prediction performance.
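To make the idea concrete, the following is a minimal sketch of what such a context encoding head could look like in PyTorch; the module and attribute names are hypothetical, and the paper's actual architecture may differ. The head summarizes encoder features into a global descriptor, predicts per-class presence as an auxiliary task (which penalizes missing rare classes), and re-weights feature channels with the same descriptor.

```python
import torch
import torch.nn as nn

class ContextEncodingHead(nn.Module):
    """Minimal sketch of a context-encoding module: it summarizes the
    encoder feature map into a global descriptor, predicts which tissue
    classes are present (an auxiliary classification task), and uses the
    same descriptor to re-weight feature channels."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # global context over D, H, W
        self.fc_attn = nn.Linear(in_channels, in_channels)
        self.fc_presence = nn.Linear(in_channels, num_classes)

    def forward(self, feats):                          # feats: (B, C, D, H, W)
        ctx = self.pool(feats).flatten(1)              # (B, C) global descriptor
        gate = torch.sigmoid(self.fc_attn(ctx))        # channel-wise attention
        recalibrated = feats * gate[:, :, None, None, None]
        presence_logits = self.fc_presence(ctx)        # per-class presence
        return recalibrated, presence_logits

# Auxiliary loss encouraging class-dependent features, even for rare classes:
# aux = nn.BCEWithLogitsLoss()(presence_logits, class_present_targets)
```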
As machine-learning algorithms continue to expand their scope and approach more ambiguous goals, they may be required to make decisions based on data that is often incomplete, imprecise, and uncertain. The capabilities of these models must, in turn, evolve to meet the increasingly complex challenges associated with deploying and integrating intelligent systems into modern society. Historical variability in the performance of traditional machine-learning models in dynamic environments makes it difficult to judge how much trust to place in decisions made by such algorithms. Consequently, the objective of this work is to develop a novel computational model that effectively quantifies the reliability of autonomous decision-making algorithms. The approach relies on a neural network-based reinforcement learning paradigm known as adaptive critic design to model an adaptive decision-making process that is regulated by a quantitative measure of the risk associated with each possible decision. Specifically, this work expands on the risk-directed exploration strategies of reinforcement learning to obtain quantitative risk factors for an automated object recognition process in the presence of imprecise data. Accordingly, this work addresses the challenge of automated risk quantification based on the confidence of the decision model and the nature of the given data. Additionally, further analysis of risk-directed policy development for improved object recognition is presented.
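As an illustration of risk-directed action selection, the sketch below penalizes each action's value estimate by a quantitative risk factor before a softmax draw; the weighting scheme and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def risk_directed_action(q_values, risk, beta=1.0, temperature=0.5, rng=None):
    """Illustrative risk-directed exploration: each action's value estimate
    is penalized by its risk factor, then an action is sampled from a
    softmax over the adjusted values."""
    rng = rng or np.random.default_rng()
    adjusted = np.asarray(q_values) - beta * np.asarray(risk)  # risk-penalized value
    logits = adjusted / temperature
    probs = np.exp(logits - logits.max())                      # stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Example: two candidate object labels; the riskier one is chosen less often.
# risk_directed_action(q_values=[0.8, 0.9], risk=[0.1, 0.6])
```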
Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their own inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation approaches, a texture-based hand-crafted method and a deep learning-based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating the inherent weakness of each approach: the extensive false positives of the texture-based method and the false tumor tissue classification of the deep learning-based method. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Notably, the substantial improvement in brain tumor segmentation achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.
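A minimal illustration of one possible fusion rule follows; the specific rule (intersection-based agreement, with the CNN supplying the tissue labels) is an assumption for exposition, not necessarily the paper's algorithm.

```python
import numpy as np

def fuse_labels(texture_seg, cnn_seg):
    """Illustrative semantic label fusion: keep only voxels where both
    methods agree that tumor is present (suppressing the texture method's
    false positives), and take the tissue class from the CNN there
    (suppressing its spurious labels outside the agreed region)."""
    texture_seg = np.asarray(texture_seg)
    cnn_seg = np.asarray(cnn_seg)
    agree_tumor = (texture_seg > 0) & (cnn_seg > 0)   # both say "tumor here"
    fused = np.zeros_like(cnn_seg)
    fused[agree_tumor] = cnn_seg[agree_tumor]          # CNN provides the class
    return fused
```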
Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can become large and computationally intensive as task complexity increases. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN)-based hierarchical neural network for object detection. CSRNs have been shown to be more effective than generic feed-forward networks at solving complex tasks such as maze traversal and image processing. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces a deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments on face, facial expression, and character recognition tasks using the novel deep recurrent model and compares its recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN-based recognition framework into a humanoid robotic platform called NAO.
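The unfolding effect of simultaneous recurrency can be sketched in a few lines: the same cell, with the same weights, is iterated over a static input, so each iteration behaves like an additional layer of depth. The weights, sizes, and step count below are hypothetical.

```python
import numpy as np

def srn_unfold(x, W_in, W_rec, n_steps=10):
    """Illustrative simultaneous recurrency: iterate one cell with fixed
    weights over a static input x; unrolling the n_steps iterations
    yields an effectively n_steps-deep network with shared weights."""
    h = np.zeros(W_rec.shape[0])                   # hidden state, shape (H,)
    for _ in range(n_steps):                       # each step = one "layer"
        h = np.tanh(W_in @ x + W_rec @ h)          # same weights every step
    return h

# Example shapes: x (P,), W_in (H, P), W_rec (H, H). In the cellular
# variant, the same cell weights are additionally shared across all
# positions of an image grid.
```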
Image registration using Artificial Neural Networks (ANNs) remains a challenging learning task. Registration can be posed as a two-step problem: parameter estimation, followed by the actual alignment/transformation using the estimated parameters. To date, ANN-based image registration techniques perform only the parameter estimation, while affine equations are used to perform the actual transformation. In this paper, we propose a novel deep ANN-based rigid image registration method that combines parameter estimation and transformation into a simultaneous learning task. Our previous work shows that a complex universal approximator known as the Cellular Simultaneous Recurrent Network (CSRN) can successfully approximate affine transformations with known transformation parameters. This study introduces a deep ANN that combines a feed-forward network with a CSRN to perform full rigid registration. Layer-wise training is used to pre-train the feed-forward network for parameter estimation and the CSRN for image transformation; the deep network is then fine-tuned to perform the final registration task. Our results show that the proposed deep ANN architecture achieves registration accuracy comparable to that of affine image transformation using a CSRN with known parameters. We also demonstrate the efficacy of our novel deep architecture through a performance comparison with a deep clustered MLP.
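A rough sketch of the two-stage design follows, with a plain feed-forward block standing in for the CSRN; all layer sizes and module names are assumptions for illustration, and the staged pre-training described above would be applied to each block before end-to-end fine-tuning.

```python
import torch
import torch.nn as nn

class DeepRegistrationNet(nn.Module):
    """Illustrative two-stage rigid registration: stage 1 estimates the
    rigid parameters from the image pair; stage 2 (a stand-in for the
    CSRN) applies the learned transformation to the moving image."""
    def __init__(self, img_pixels: int):
        super().__init__()
        self.param_net = nn.Sequential(            # stage 1: parameter estimation
            nn.Linear(2 * img_pixels, 256), nn.ReLU(),
            nn.Linear(256, 3))                     # (rotation, tx, ty)
        self.transform_net = nn.Sequential(        # stage 2: learned transformation
            nn.Linear(img_pixels + 3, 512), nn.ReLU(),
            nn.Linear(512, img_pixels))

    def forward(self, moving, fixed):              # flattened images: (B, P) each
        params = self.param_net(torch.cat([moving, fixed], dim=1))
        return self.transform_net(torch.cat([moving, params], dim=1))
```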