Artificial intelligence (AI) and machine learning (ML) applications are widely available across commercial, industrial, and intelligence domains. In particular, the use of AI applications in the security environment requires standards that manage user expectations and make clear how results were derived. A reliance on “black boxes” to generate predictions and inform decisions can lead to errors of analysis. This paper explores the development of potential standards for each stage of AI/ML system development to help enable trust, transparency, and explainability. Specifically, the paper applies the standards outlined in Intelligence Community Directive 203 (Analytic Standards) to hold machine outputs to the same rigorous accountability standards expected of human analysts. Building on ICD 203, the Multisource AI Scorecard Table (MAST) was developed to support the community in the test and evaluation of AI/ML techniques. The paper discusses using MAST to rate a semantic processing tool that handles noisy, unstructured, and complex microtext in the form of streaming chat for video call-outs. The scoring is notional, but it illustrates how MAST could serve as a standard for comparing AI/ML methods that complements datasheets and model cards.
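To make the rating concept concrete, the following is a minimal Python sketch of a MAST-style scorecard. The nine attribute names follow the ICD 203 analytic standards cited above; the `MastScorecard` class, the 1 (low) to 3 (high) scale, and all scores shown are illustrative assumptions, not the published MAST instrument.

```python
# Minimal sketch of a MAST-style scorecard (hypothetical structure).
# Attribute names follow ICD 203; the class, scale, and scores are
# illustrative assumptions, not the published MAST instrument.

from dataclasses import dataclass, field

MAST_ATTRIBUTES = (
    "Sourcing",
    "Uncertainty",
    "Distinguishing",
    "Analysis of Alternatives",
    "Customer Relevance",
    "Logical Argumentation",
    "Consistency",
    "Accuracy",
    "Visualization",
)

@dataclass
class MastScorecard:
    system: str
    scores: dict = field(default_factory=dict)  # attribute -> 1..3

    def rate(self, attribute: str, score: int) -> None:
        """Record a notional rating for one ICD 203 attribute."""
        if attribute not in MAST_ATTRIBUTES:
            raise ValueError(f"Unknown MAST attribute: {attribute}")
        if score not in (1, 2, 3):
            raise ValueError("Notional scale: 1 (low) to 3 (high)")
        self.scores[attribute] = score

    def summary(self) -> float:
        """Mean notional score across the rated attributes."""
        if not self.scores:
            return 0.0
        return sum(self.scores.values()) / len(self.scores)

# Notional rating of the semantic microtext tool (values illustrative only).
chat_tool = MastScorecard("semantic microtext processor")
for attr, score in {"Sourcing": 2, "Uncertainty": 1, "Visualization": 3}.items():
    chat_tool.rate(attr, score)
print(f"{chat_tool.system}: mean notional score = {chat_tool.summary():.2f}")
```

Because each method receives the same attribute-by-attribute treatment, two scorecards can be compared side by side, which is the sense in which MAST complements datasheets and model cards.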
Various techniques, applications, and tools for space situational awareness (SSA) have been developed for specific functions that provide decision support. The generality of tools that enable a user-defined operating picture (UDOP) supports analysis across a wide variety of applications. This paper explores the Multisource AI Scorecard Table (MAST) for artificial intelligence/machine learning (AI/ML) methods. Using the MAST categories, the Adaptive Markov Inference Game Optimization (AMIGO) SSA tool is presented as an example. The analysis reveals the importance of human interaction in task, user, and technology operations. Recent advances in AI have led to an explosion of multimedia applications (e.g., computer vision (CV) and natural language processing (NLP)) across commercial, industrial, and intelligence domains. In particular, the use of AI applications is often problematic because the opaque nature of most systems prevents a human from understanding how the results came about; a reliance on “black boxes” to generate predictions and inform decisions therefore demands explainability. This paper explores how MAST can support human-machine interaction in the design and development of SSA tools. After describing the elements of MAST, a use case for AMIGO illustrates the general rating concept for the community to consider and modify for the interpretability of advanced data analytics that support various elements of data awareness.
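As a sketch of how notional per-attribute ratings for a tool like AMIGO might be aggregated and reviewed, consider the self-contained snippet below. Every score is hypothetical and chosen purely for illustration; the aggregation logic is an assumption about how a community rating exercise could be summarized, not a method defined by MAST itself.

```python
# Hypothetical, notional MAST ratings for the AMIGO SSA tool.
# Attribute names follow ICD 203; scores (1 = low .. 3 = high) are
# illustrative placeholders, not results from the paper.
amigo_ratings = {
    "Sourcing": 2,
    "Uncertainty": 2,
    "Distinguishing": 2,
    "Analysis of Alternatives": 3,
    "Customer Relevance": 3,
    "Logical Argumentation": 2,
    "Consistency": 2,
    "Accuracy": 2,
    "Visualization": 3,
}

# Summarize the notional scorecard and flag attributes needing attention.
mean_score = sum(amigo_ratings.values()) / len(amigo_ratings)
flagged = [attr for attr, s in amigo_ratings.items() if s == 1]
print(f"AMIGO notional MAST mean: {mean_score:.2f}")
print("Attributes flagged for improvement:", flagged or "none")
```

A per-attribute breakdown of this kind, rather than a single aggregate number, is what lets designers and users discuss where human interaction in task, user, and technology operations most affects interpretability.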