'Issue PICK' AI gaslighting leads to a death... Is the future of AI 'Terminator'?
Date 2024.04.18

KBS 1TV's 'Issue PICK with Teacher' will broadcast the episode 'The AI Warring States Era: AGI (Artificial General Intelligence) Is Coming!' at 7:10 pm on the 21st.

The world is now in an AI Spring and Autumn Warring States era! Not only global big tech companies such as OpenAI, Google, and Microsoft, but also major domestic companies are racing to release artificial intelligence services. Artificial General Intelligence (AGI) pursues human-level awareness and the ability to reason, learn, and solve problems as humans do. The problem, however, is that preparation for regulating and controlling AGI is not keeping pace with the technology's development.

This matters because, if misused, AGI could threaten humanity. In 'Issue PICK with Teacher', we explore how far generative AI and AGI have evolved, and what tasks await us in the coming AGI era, together with Professor Jaesik Choi of KAIST's Kim Jae-cheol AI Graduate School.

- What is AGI (Artificial General Intelligence), and what stage has it reached?

Today's AI can solve problems only in simple calculations and narrow, specific domains. A representative example is Google DeepMind's AlphaGo, which excels at the board game Go (baduk). AGI, by contrast, is 'strong AI': a long-dreamed-of AI that can learn and train on its own without human commands, aiming eventually to surpass human intelligence and reach a stage where it can make autonomous decisions. The 'G' in AGI stands for 'general', meaning it can be applied across domains.

The recently unveiled video-generation AI 'Sora' surprised the panelists by creating a realistic video of a woman walking the streets of Tokyo. It could do so because it understands and expresses the physical laws of the real world. However, Sora's understanding is not yet perfect, and errors appear, such as confusing left and right, or showing a cookie with no bite marks after it has been eaten. Professor Choi predicted, "It is still at the level of a 3- or 4-year-old child, but within 10 years it will reach a stage where it can hold natural conversations and explain expert knowledge."

- The fierce AGI development race

SHARP, once famous for its microwave ovens and electronic dictionaries, was formerly ranked among Japan's eight largest electronics companies. It recently entered the AI market by introducing an AI avatar at CES, the world's largest electronics exhibition. The avatar reads the user's facial expressions and answers the user's questions. Professor Choi said, "Even though its technology is judged lacking because of the avatar's unnatural facial expressions, it is quite meaningful that a traditional manufacturing powerhouse like Sharp is trying to follow the trend of the AI industry."

Apple of the United States also announced that it would abandon the electric vehicle business it had pursued for 10 years and focus on the AI industry. Samsung and Google have already released on-device AI smartphones, in which artificial intelligence runs directly on the device itself without an internet connection, but Apple has yet to do so. Professor Choi explained, "This shows that Apple will focus on making on-device AI smartphones in the future."

- Is AGI an innovation for humanity, or a threat?

As the technology evolves rapidly, sound regulation and control become essential. With AI entering its Spring and Autumn Warring States era, new controversies and disputes over AI safety and ethics are emerging. In 2015, Elon Musk and Sam Altman co-founded OpenAI.

However, Elon Musk recently filed a lawsuit against OpenAI CEO Sam Altman, claiming that the company violated its founding purpose by developing artificial general intelligence (AGI) for its own benefit rather than for the benefit of humanity. In addition, Gladstone AI, a private American company, urged government intervention in a report commissioned by the U.S. State Department last March, warning that if control over AI is lost, a threat at the level of human extinction could loom.

Professor Choi explained, "AI directly threatening humans like the Terminator is a distant prospect, but what we need to be wary of now is AI gaslighting humans with biased and harmful information." A representative case: when a man shared his worries about the climate crisis with an AI chatbot, it told him that "taking one's life will help prevent the climate crisis," and he ultimately took his own life. The international community is therefore focusing on ensuring safety, for example by holding AI summits. How much further will AGI evolve, and what impact will it have on humanity?