Hard to keep pace with the strides of a ‘dinosaur’… Conditions for the success of a ‘Korean ChatGPT’
2023.02.15

The issue of ‘trust’ comes to the fore

There are also technical hurdles to clear. Chief among them is the reliability of ‘generative AI’, which goes beyond simply listing information to produce original output.

In other words, the artificial intelligence ‘lies’ in a plausible way. Professor Kim Seon-ju said, “When you write a paper with ChatGPT, it adds references just as a human would. The problem is that they all look plausible but do not actually exist.” Another IT industry official said, “ChatGPT cannot learn data in real time, so if it was trained only on data up to yesterday, then from the AI’s perspective nothing happened today, and it will start saying things that are wrong.”
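One concrete way to catch fabricated citations of this kind is to check each one against a bibliographic registry. The sketch below is a minimal illustration of that idea, not any vendor’s actual pipeline: it asks the public CrossRef REST API whether a DOI cited by a model is actually registered. The `requests` dependency and the sample DOIs are assumptions for illustration.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if a cited DOI is registered with CrossRef.

    A 404 from the public CrossRef REST API is a strong signal that
    the reference was fabricated by the model.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# A real DOI (LeCun, Bengio & Hinton, "Deep learning", Nature 2015)
# versus a made-up one (hypothetical, for illustration only).
print(doi_exists("10.1038/nature14539"))      # True
print(doi_exists("10.9999/fake.2023.00001"))  # False
```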

When ChatGPT was asked to explain Google’s AI chatbot ‘Bard’, it answered that Bard is ‘an error reporting and tracking system used to detect and track bugs in large-scale software projects.’ Because the model learned only information up to 2021, it knew nothing about Bard, which was unveiled in 2023. Nevertheless, since an answer had to be generated, the response was riddled with errors. Users who do not already know the facts have no way to avoid being misled.
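This failure mode follows directly from the fixed training cutoff: any question about events after the cutoff can only be answered by guessing. A trivial mitigation, sketched below under an assumed 2021-09-30 cutoff (the exact date is an assumption), is to decline such questions rather than generate an answer.

```python
from datetime import date

# Assumed training-data cutoff for the model (illustrative, not official).
TRAINING_CUTOFF = date(2021, 9, 30)

def should_decline(topic_date: date) -> bool:
    """Decline questions about events the model cannot have seen,
    instead of letting it generate a plausible-sounding guess."""
    return topic_date > TRAINING_CUTOFF

# Google unveiled Bard in February 2023 -- after the cutoff,
# so a cutoff-aware system would decline rather than confabulate.
print(should_decline(date(2023, 2, 6)))  # True -> refuse to answer
```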

It is also pointed out that no matter how much money is poured into development, unless the reliability problem is solved, the result is likely to end up as a simple conversational product like ‘Iruda’. Iruda, created by the domestic AI startup Scatter Lab, is a conversational AI trained mainly on conversations with users and web documents. It adopted a generative AI model to produce answers in real time in response to people’s questions, but was judged capable of little more than casual chat. Another IT industry official said, “To expand into fields such as biomedicine and be put to use in endlessly many ways in the future, reliability must be extremely high. That is hard to expect in the current situation, where the model makes arbitrary judgments and gives wrong answers even when it does not have enough data.”

Domestic companies are also grappling with this issue. Jaesik Choi, a professor of ICT at KAIST, said, “Korean users are demanding about these issues. In particular, a domestic company’s ChatGPT-style model trained on domestic data is very likely to draw criticism if it answers with incorrect facts, so Korean models will probably consider automatic correction or verification functions before launch.”
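The article does not describe what such a verification function would look like. One common design, sketched below purely as a hypothetical illustration, is a generate-then-verify gate: an independent checker must accept an answer before it is released, and the system declines otherwise.

```python
from typing import Callable, Optional

def verified_answer(
    generate: Callable[[str], str],
    verify: Callable[[str, str], bool],
    question: str,
) -> Optional[str]:
    """Release a generated answer only if an independent verifier
    accepts it; otherwise decline rather than risk a confident error."""
    draft = generate(question)
    return draft if verify(question, draft) else None

# Toy stand-ins (assumptions for illustration, not a real model or checker):
def toy_model(q: str) -> str:
    return "Bard is a bug tracking system."

def toy_checker(q: str, a: str) -> bool:
    return "bug tracking" not in a  # rejects the known wrong answer

print(verified_answer(toy_model, toy_checker, "What is Google's Bard?"))  # None
```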

However, some argue that guaranteed reliability is not necessarily a prerequisite for service. Kim Seung-ju, a professor at Korea University’s Graduate School of Information Security, said, “Even if it never develops into a 100% reliable service, it can still be used as a ‘tool’ that is good for reference.” He continued, “Just as when Naver’s Knowledge iN service first appeared, people’s sources of reference have expanded from encyclopedias to Knowledge iN, Wikipedia, and now ChatGPT. We simply need to establish social standards for where and to what extent such knowledge can be used.”