IBM has created an open source Python library, called Uncertainty Quantification 360 (UQ360), that provides developers and data scientists with algorithms to quantify the uncertainty of machine learning predictions, with the goal of improving the transparency of machine learning models and trust in artificial intelligence (AI).
Available from IBM Research, UQ360 aims to address problems that arise when AI systems based on deep learning make overconfident predictions. The Python toolkit gives users algorithms that streamline the process of quantifying, evaluating, improving, and communicating the uncertainty of predictive models.
Currently, the UQ360 toolkit provides 11 algorithms to estimate different types of uncertainties, collected behind a common interface. IBM also provides guidance on choosing UQ algorithms and metrics.
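To illustrate the kind of output such uncertainty-quantification algorithms produce (a prediction paired with an uncertainty estimate), here is a minimal, standard-library-only sketch using a bootstrap ensemble. The function name and approach are purely illustrative and are not UQ360's actual API.

```python
import random
import statistics

def bootstrap_predict(data, n_models=200, seed=0):
    """Illustrative only (not the UQ360 API): estimate a quantity and
    its uncertainty by refitting a trivial "model" (the sample mean)
    on many bootstrap resamples of the data.

    Returns (prediction, uncertainty) as the mean and standard
    deviation of the ensemble's predictions.
    """
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        # Resample the data with replacement.
        sample = [rng.choice(data) for _ in data]
        # Each "model" here is just the mean of its resample.
        preds.append(statistics.fmean(sample))
    return statistics.fmean(preds), statistics.pstdev(preds)

# A data set with an outlier yields a wide uncertainty estimate,
# signaling that the prediction should not be trusted blindly.
data = [2.0, 2.1, 1.9, 2.2, 10.0]
prediction, uncertainty = bootstrap_predict(data)
```

The spread of the ensemble's predictions is the uncertainty signal: a downstream system can treat a wide spread as "the model is unsure" and defer to a human, which is the kind of behavior UQ360 is designed to enable.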
IBM stressed that overconfident predictions of AI systems can have serious consequences. Examples cited included a chatbot being unsure of when a pharmacy closes, resulting in a patient not getting needed medication, and the life-or-death importance of reliable uncertainty estimates in the detection of sepsis.
Uncertainty quantification (UQ) exposes the limits and potential failure points of predictive models, enabling an AI system to express that it is unsure and making its deployment safer.
Previous IBM efforts to advance trust in AI have included the AI Fairness 360 toolkit, which mitigates bias in machine learning models; the Adversarial Robustness Toolbox, which is a Python library for machine learning security; and the AI Explainability 360 toolkit, which helps users comprehend how machine learning models predict labels.