Sharing the latest XAI research trends and industrial use cases… KAIST Explainable AI Research Center completes 'KCC Explainable AI Workshop 2024'
Posted: 2024.07.02 | Views: 322

With the final passage of the EU Artificial Intelligence Act in March of this year and global regulation of AI systems becoming a reality, interest is growing in explainable artificial intelligence (XAI), a technology that can improve the transparency of AI models and support compliance with AI regulations.

Against this backdrop, researchers from domestic institutes and companies actively working on XAI gathered to share the latest research trends. The 'KCC Explainable Artificial Intelligence (XAI) Workshop 2024', hosted by the KAIST Explainable Artificial Intelligence Research Center (Director: Professor Jaesik Choi), was held at the Jeju International Convention Center on June 27.

The event featured invited lectures and paper presentations on the latest research trends in the XAI field and industrial use cases.

Invited lectures on multimodal AI models and their interpretation, an area of especially active research, were given by Professor Seo Hong-seok of Korea University ('Research Trends in Multimodal Conversational Artificial Intelligence') and Professor Park Cheon-eum of Hanbat National University ('Research Trends in Natural Language Interpretation Using Multimodal Counterfactual Reasoning').

Lectures on XAI use cases in the financial and medical fields followed, including 'Explainable AI in Finance and Banking' (Oh Soon-young, Co-chair, Future Forum of the Korea Science and Technology Institute) and 'The Role of XAI in Digital Health: Focusing on the Case of Repeech' (Kim Jin-woo, CEO, HAII Inc.). In addition, Professor Lee Jae-ho (Seoul Metropolitan University), leader of the international standardization group for explainable artificial intelligence, gave a lecture on 'Frontier AI Reliability Technology and Policy.'

A total of 38 cutting-edge research papers were presented at the workshop, covering new algorithms that improve on existing XAI algorithms, techniques for providing interpretability and reliability for generative AI models such as large language models (LLMs), and domain-specific XAI application case studies.

Among these, the Best Paper Award went to 'Efficient Large-Scale Language and Vision Model Using Object-Level Visual Prompts' by Byeong-Gwan Lee, Beom-Chan Park, Chae-Won Kim, and Yong-Man Noh (all KAIST). The study was recognized for introducing a new technique that enables a Large Language and Vision Model (LLVM) to understand images at the object level, significantly improving both model performance and the interpretability of the model's decision-making process without increasing model size.

Professor Jaesik Choi, Director of the KAIST Explainable Artificial Intelligence Research Center and CEO of INEEJI, who hosted the event, said, “I hope that this workshop will share the latest research trends in explainable artificial intelligence (XAI), a key technology for improving the transparency and reliability of AI, and contribute to the application of XAI technology across a wide range of industries.”

Meanwhile, the event was held with the support of the Institute of Information & Communications Technology Planning & Evaluation (IITP), under the Ministry of Science and ICT, through the "Human-Centered Artificial Intelligence Core Source Technology Development Project" (project title: Development of technology to provide explainability through a user-tailored plug-and-play method). For more information, including the research presented at the event, please refer to the workshop homepage.