WBS Distinguished Seminar Series / Analytics Insights Forum
The use of Artificial Intelligence and Machine Learning algorithms is ubiquitous in data-driven decision making. Despite their excellent accuracy, these algorithms are often criticised for their lack of transparency. Algorithms such as Random Forest, XGBoost and Deep Learning are often seen as black boxes whose predictions are difficult to explain. Moreover, when these algorithms are applied in sensitive settings with consequential impacts on citizens’ lives, such as access to social services, lending decisions or parole applications, this opaqueness may conceal unfair outcomes for at-risk groups. There is therefore an urgent need to strike a balance among three goals: accuracy, explainability and fairness. In this presentation, taking an Operations Research lens, we will explore novel Machine Learning models that embed explainability and fairness in their training.
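As a minimal sketch of what "embedding fairness in training" can mean (this is an illustrative formulation, not the specific models discussed in the talk), one common approach is to add a group-fairness penalty to a standard loss and solve the resulting optimisation problem. The toy example below assumes a binary sensitive attribute and penalises the squared gap in average predicted scores between the two groups (a demographic-parity-style surrogate); all names, data and parameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_logreg(X, y, a, lam=1.0, lr=0.1, n_iter=2000):
    """Logistic regression with a demographic-parity-style fairness penalty.

    X : (n, d) feature matrix, y : (n,) binary labels,
    a : (n,) binary sensitive attribute, lam : weight of the fairness term.
    Minimises cross-entropy + lam * (mean score gap between groups)^2.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    g1, g0 = (a == 1), (a == 0)
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)
        # gradient of the average cross-entropy loss
        grad_w = X.T @ (p - y) / n
        grad_b = np.mean(p - y)
        # fairness penalty: squared gap in mean predicted score between groups
        gap = p[g1].mean() - p[g0].mean()
        s = p * (1 - p)                      # derivative of sigmoid w.r.t. the linear score
        dgap_w = X[g1].T @ s[g1] / g1.sum() - X[g0].T @ s[g0] / g0.sum()
        dgap_b = s[g1].mean() - s[g0].mean()
        grad_w += 2 * lam * gap * dgap_w
        grad_b += 2 * lam * gap * dgap_b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy usage on synthetic data: the label is correlated with the sensitive attribute
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
a = (rng.random(500) < 0.5).astype(int)
y = ((X[:, 0] + 0.8 * a + rng.normal(scale=0.5, size=500)) > 0).astype(int)
w, b = fair_logreg(X, y, a, lam=5.0)
p = sigmoid(X @ w + b)
print("score gap between groups:", p[a == 1].mean() - p[a == 0].mean())
```

Larger values of `lam` shrink the between-group score gap at some cost in accuracy, which is the accuracy-fairness trade-off the abstract refers to; the talk's models address this trade-off (and explainability) through richer optimisation formulations than this sketch.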