If you fail an AI-led interview, shouldn't you be able to find out why? If you are offered a higher interest rate than a friend who earns the same salary, shouldn't you be able to hear why the AI credit-rating model made that decision?
Professor Jaesik Choi (47) of KAIST is a leading researcher in 'explainable AI (XAI)', the field concerned with explaining AI's decision-making process in terms people can understand. He currently serves as director of the KAIST XAI Research Center.
XAI is a set of techniques that makes otherwise black-box AI models transparent by showing users the basis on which the AI reached its decisions. The goal is to increase the reliability and fairness of AI.
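To make the idea concrete, here is a minimal, purely illustrative sketch of one common XAI technique, feature attribution, applied to a toy credit-approval model. The data, feature names, and model are hypothetical and are not drawn from Professor Choi's work; with a linear model, each feature's contribution to the decision score can simply be read off as coefficient times feature value.

```python
# A minimal sketch of one XAI idea (feature attribution) for a hypothetical
# credit-approval model. The data, feature names, and model choice are
# assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["salary", "debt_ratio", "years_employed"]

# Hypothetical training data: 500 applicants, approved when salary is high
# and the debt ratio is low.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the decision score
# (log-odds) is coefficient * feature value, so an applicant can see which
# factors pushed the decision toward approval or rejection.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.3f}")
print("decision:", "approved" if model.predict([applicant])[0] else "rejected")
```

In practice, black-box models call for more sophisticated attribution methods, but the output serves the same purpose: a per-decision breakdown that a rejected applicant, or a regulator, can actually read.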
In particular, the Basic AI Act, scheduled to take effect in January next year, further highlights the importance of XAI. The law includes a clause requiring the development and use of 'trustworthy AI' built on fair, transparent, and interpretable algorithms, as well as a clause guaranteeing the 'user's right to know', which obliges AI service providers to give users information explaining the reasons and process behind an AI's decisions.
In an era where AI judgments have a real impact on our daily lives, the right to ask about and understand the basis for those judgments is now on its way to being legally guaranteed.