PRMIA Institute Releases New Paper: Explainable AI as a Tool for Risk Managers


Machine learning (ML) technology is improving rapidly and finding widespread application, yet many ML systems are "black box" in design, obscuring how inputs are converted into outputs. As the use of ML in decision processes grows, it is becoming increasingly important to monitor ML algorithms, apply explainability techniques to assess what is inside their black boxes, and address issues of fairness, or the lack thereof.

The PRMIA Institute's paper, Explainable AI as a Tool for Risk Managers, focuses on subtleties associated with the black box character of ML algorithms and techniques to infer the nature of what is going on inside those black boxes.

The author, Prof. Hersh Shefrin, explains that this will be an important function for risk managers going forward. Explainable AI is valuable but, like the ML-based algorithms it interprets, can be imperfect. Risk managers will need a good understanding of how explainable AI works and of where it might lead its users astray.

The goal of this paper is to use a set of stripped-down examples to help risk managers in this regard. A general understanding will be insufficient when intuition cannot be relied upon; the details matter. For risk managers, to be forewarned is to be forearmed.

