Uncertainty quantifies the spread of the distribution over possible ground truths that can be inferred from observed evidence. As such, uncertainty is one of the major factors in determining confidence when making decisions (i.e., uncertainty and confidence are inversely related). Bayesian statistics and subjective logic provide tools with which Artificial Intelligence (AI) can quantify uncertainty. These approaches require base rates: prior probabilities estimated over large populations that are not contextualized to the specific situation. The AI then computes probabilities for the specific situation and context in light of historical (or training) data. As more evidence or training data becomes available for the context, the base rate is increasingly washed out of the probability calculation. In most Army applications, an AI does not act or decide on its own, except in the rare case of fully autonomous operation, but rather collaborates with at least one human user. In this paper, we propose that the ways in which AI represents uncertainty ought to be optimally aligned with human preferences to provide the best possible human-AI collaborative performance. Exploring this topic requires human-subjects experimentation to test how well users understand different representations of uncertainty that include base-rate information, which quantifies belief in predictions. Variations of these experiments could include different types of training in interpreting uncertainty representations.
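As one concrete illustration (not part of the original text), the standard binomial-opinion construction from subjective logic makes this washing-out explicit: with r positive and s negative observations, a prior weight W (commonly 2), and base rate a, the belief, disbelief, and uncertainty masses are b = r/(r+s+W), d = s/(r+s+W), and u = W/(r+s+W), and the projected probability is P = b + a*u. The Python sketch below is illustrative only; the function name and parameter values are assumptions, not taken from the paper.

```python
def binomial_opinion(r, s, base_rate, prior_weight=2.0):
    """Map positive/negative evidence counts to a subjective-logic
    binomial opinion (belief, disbelief, uncertainty) and its
    projected probability P = b + a*u.  Illustrative sketch."""
    total = r + s + prior_weight
    b = r / total             # belief supported by positive evidence
    d = s / total             # disbelief supported by negative evidence
    u = prior_weight / total  # uncertainty mass; shrinks as evidence grows
    p = b + base_rate * u     # projected probability used for decisions
    return b, d, u, p

# With little evidence the projected probability stays near the base rate;
# as evidence accumulates, the base rate is washed out of the calculation.
for r, s in [(1, 1), (10, 10), (100, 100)]:
    b, d, u, p = binomial_opinion(r, s, base_rate=0.9)
    print(f"r={r:3d} s={s:3d}  u={u:.3f}  P={p:.3f}")
```

With a base rate of 0.9 but balanced evidence, the projected probability moves from about 0.70 (one observation of each kind) toward 0.50 (one hundred of each), showing how the base rate's influence diminishes as contextual evidence accumulates.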