Explainable AI as a Tool for Risk Managers

Machine learning (ML) technology is improving rapidly and finding widespread application, yet many ML models are “black box” in design, obscuring how inputs are converted into outputs. As the use of ML in decision processes grows, it is becoming increasingly important to monitor ML algorithms, apply explainability techniques to assess what is inside their black boxes, and address issues of fairness, or the lack thereof.

The PRMIA Institute’s newest paper, Explainable AI as a Tool for Risk Managers, focuses on the subtleties associated with the black box character of machine learning algorithms and on techniques for inferring what is going on inside those black boxes. The author, Hersh Shefrin, explains that interpreting these models will be an important function for risk managers going forward. Explainable AI is valuable, but, like the ML-based algorithms it examines, it can be imperfect. Risk managers will need a good understanding of how explainable AI works and of where it might lead its users astray. The goal of this paper is to use a set of stripped-down examples to help risk managers build that understanding. A general understanding will be insufficient when intuition cannot be relied upon; the details matter. For risk managers, to be forewarned is to be forearmed.

Pricing

Price: $20
Sustaining Members: Complimentary (free download)

Purchase the Whitepaper

Want a complimentary copy of the paper?
Become a Member Today

“PRMIA Certification is already on my resume, and I have had three interviews in just the last week for risk management positions. The certification is opening up a lot of new career opportunities for me that I had never even dreamed of.” Gundeep Anand, PRM, Chicago, USA