The future of AI as seen by Professor Jaesik Choi of KAIST's Kim Jae-cheol Graduate School of AI
Date 2024.03.27

Photo by Geon-song Lee, video media reporter

This past March 13th was a day when both the expectations and the concerns surrounding the development of artificial intelligence (AI) were on full display. In the United States, an AI robot appeared that converses with humans and takes actions appropriate to the situation, while the European Union (EU) passed the world's first bill regulating AI technology.

On that day, Figure AI, an American robotics startup, released a video of 'Figure 01', a robot developed in collaboration with OpenAI, the maker of ChatGPT. When a human asks the robot, “What do you see right now?”, Figure 01 replies, “I see a red apple on a plate in the center of the table, and you standing with your hands on the table.” When the human asks, “Can I have something to eat?”, the robot answers, “Sure,” picks up the apple, and hands it over. Asked to explain its actions, the robot says, “I gave you the apple because it was the only food available on the table.” Has an AI robot that can converse freely with humans really arrived?

Jaesik Choi, a professor at the KAIST Kim Jae-cheol Graduate School of AI, said, “Picking up objects is something robots are already naturally good at. Previously, an expert had to give instructions in a programming language for the robot to perform a task; what is meaningful now is that instructions can be given in natural language (the language humans use in everyday life) and carried out on the spot.” Professor Choi is the director of KAIST's Explainable Artificial Intelligence (XAI) Research Center and the CEO of INEEJI, a leading XAI company in Korea. On March 19th, I met Professor Choi at the Seongnam research center of the KAIST Kim Jae-cheol Graduate School of AI in Seongnam-si, Gyeonggi-do.

- Looking at Figure 01, I worry that artificial intelligence will soon surpass human intelligence. “Whenever a new AI comes out, people say, ‘Isn't it all over for us now?’ But as time passes, they find that this doesn't work and that doesn't work either. When expectations are high, it is important to know the reality. From a business perspective, AI must attract attention to receive investment, so it takes time to learn the extent of AI's actual capabilities. The point at which artificial intelligence surpasses human intelligence is called the ‘singularity’, and the arrival of the singularity means the arrival of artificial general intelligence (AGI). Just because an AI robot can pick up trash, can we say that a robot can automatically wash the dishes at home? Before worrying about AGI, it is important to accurately understand the capabilities of the AI models released so far. I think concerns that AGI will appear imminently are premature.”

- Nvidia CEO Jensen Huang predicted that AGI surpassing human levels will emerge within five years. “Even if something close to AGI is created within five years, when you ask, ‘Is this as smart as a human being?’, the answer may still be, ‘No, not yet.’ AGI must be good at all fields, not just one specialized field. We can evaluate how good a system is in a single field, but by what standard can we measure being good at many fields at once? The standards for AGI are not clearly defined.”

- Does this mean that even after AGI arrives, human ambition will push us to create an even better AGI? “Let's first think about what AGI is. If a system gives reasonable answers to questions from the list it has learned but cannot answer questions it has not learned, it cannot be called AGI. If it can reason and answer new questions, we can consider it AGI. But what if an AI learned the list of every question a person could possibly ask? It would feel the same as AGI. You might then think AI could simply memorize everything, but memorizing everything is very cumbersome, because it requires ever more storage. If you understand the principles, you don't have to memorize everything. The current method is to memorize a great deal of information and retrieve answers to similar questions, but AGI is moving in the direction of ‘understanding’.”

- How far has AI developed? “If human intelligence is 100, it seems to have reached 30 to 40. Of course, artificial intelligence may eventually develop beyond 100. Figure 01 uses technology that accurately recognizes a desired object and picks up only that object, which is already widely used in the field. The big advance here is that instructions can be given in human language without modifying any code. Last year, during the ‘Sam Altman dismissal incident’ at OpenAI, what was reportedly discovered internally was that the AI was good at solving problems it had never seen. There is a huge difference between solving a problem that accidentally slipped into the AI's training data and solving it through inference after all similar problems have been removed. However, it may simply be that not every similar problem was actually removed.”

- Is now the time to regulate AI, or the time to develop it? “I think it's half and half. I believe AI needs to develop further, but I also believe that verification and research on whether AI can be used safely should always be conducted in parallel. It is important to check whether AI can be safely controlled and whether we can clearly see what it has learned. For example, if we scan the brain with an MRI, we can explain which part of the brain is causing slurred speech. In the same way, we must be able to look inside an AI to find or fix the problematic parts.”
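To make the idea of "looking inside" a model concrete, the sketch below shows one common explainable-AI technique, gradient-based saliency, applied to a toy logistic-regression classifier: it scores how strongly each input feature pushes the model's output. This is a generic illustration only, not Professor Choi's or INEEJI's actual method, and the weights and input values are made-up assumptions.

```python
# Minimal sketch of gradient-based saliency for a toy logistic-regression model.
# All weights and inputs are hypothetical values chosen only for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and bias for 4 input features.
weights = np.array([2.0, -0.5, 0.1, 1.5])
bias = -1.0

# One input example whose decision we want to explain.
x = np.array([0.8, 0.3, 0.9, 0.2])

# Forward pass: the model's predicted probability.
p = sigmoid(weights @ x + bias)

# Gradient of the output with respect to each input feature:
# d p / d x_i = p * (1 - p) * w_i  (chain rule through the sigmoid).
# Larger magnitude = that feature influences this prediction more strongly.
saliency = p * (1 - p) * weights

print(f"prediction: {p:.3f}")
for i, s in enumerate(saliency):
    print(f"feature {i}: saliency {s:+.3f}")
```

Techniques like this (and richer variants for deep networks) are one way researchers try to trace a model's decision back to the parts of the input that drove it, in the spirit of the MRI analogy above.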

The 'AI Regulation Act' passed on March 13 will take full effect from 2026, provided the ministers of the 27 EU member states give final approval in April. Under the bill, the EU regulates AI applications differentially by dividing them into four levels: 'unacceptable risk', 'high risk', 'limited risk', and 'low risk'. Public services including medical care and education, elections, and autonomous driving are classified as 'high risk', while 'social scoring', which assigns people scores based on data about their characteristics and behavior, and the real-time use of AI-based remote biometric identification systems are banned as 'unacceptable risks'. In addition, the EU imposed a 'transparency obligation' on companies developing generative AI such as ChatGPT, requiring them to comply with EU copyright law and to disclose the content used to train their AI. Violations carry fines ranging from 1.5% up to 7% of global sales.

- Why did the EU create the 'AI Regulation Act'? “It is meant to protect EU citizens from the side effects that can arise when AI is used. A typical example: when you ask an AI how to make a bomb or obtain drugs, the answer should be ‘I can't tell you’ or ‘I don't know’, yet the AI goes ahead and explains how. Regulation itself may not help in creating new technologies, and a shift away from the EU may also occur. To put it simply, rather than founding an AI company in the heavily regulated EU, wouldn't you rather go to a country with fewer regulations?”

- U.S. President Joe Biden announced an 'AI Executive Order' in October last year. Should this also be seen as an attempt by the U.S. to regulate AI? “The AI executive order includes a plan to require an identifiable mark (watermark) on AI-generated data, and to require notification to the federal government when testing AI models that pose a risk to national or economic security. However, unlike the EU's AI Regulation Act, which focuses on regulation, the AI Executive Order can be seen as an effort to build a system for verifying AI. It feels more like a safety measure that can be invoked when a problem arises.”

- To what extent have AI-related regulations been discussed in Korea? “The amendment to the Personal Information Protection Act took effect on March 15th. It can be seen as the first AI-related regulation implemented in Korea. Now, if a data subject who provided personal information requests it, there is an obligation to explain why a ‘fully automated decision’ about them was made. For example, if an applicant objects to a decision made by an AI interviewer during a hiring process, the reason for the rejection must be explained. Likewise, if a very important text message never reaches you because it was classified as spam, the carrier must explain why it was blocked.”