[News Lounge] ‘AI’ has emerged as a double-edged sword... what is its current status and what are the future challenges?
Date: 2023.07.14

■ Host: Anchor Ham Hyeong-gun
■ Appearance: Jaesik Choi, Professor at the KAIST Kim Jaechul Graduate School of AI and CEO of INEEJI


* The text below may differ from the actual broadcast, so please check the broadcast for accuracy. When quoting, please cite [YTN News Lounge].

[Anchor]
Every Friday on 'Vision Café', we take an in-depth look at major issues that will shape future society, such as artificial intelligence, population decline, and the climate crisis, and discuss how to respond to them.

Today, we will first talk about the possibilities and future challenges of generative AI with Professor Jaesik Choi of the KAIST Kim Jaechul Graduate School of AI. When ChatGPT appeared at the end of last year, interest was so intense it could literally be called an AI craze. More than half a year has passed; let's talk about how to assess this AI craze objectively at this point.

Over the past year and a half, we have been talking about so-called generative AI such as ChatGPT. This has brought to the fore an outlook that mixes expectations with a variety of concerns. How do you see it at this point?

[Jaesik Choi]
The use of OpenAI's ChatGPT is spreading globally. But as the number of users grows, there are also growing voices saying that revenue has to grow along with it. In ChatGPT's case, there are even reports that traffic fell by 9.7% as competing services keep multiplying.

Still, that does not mean the broader trend toward this kind of generative AI is fading: even with the world in a recession, some 51 startups worldwide are reported to have received around 18 trillion won in generative-AI investment.

So it seems people still believe the market for this new technology will keep expanding.

[Anchor]
In the case of ChatGPT, there were assessments that performance improved somewhat as the versions were upgraded. But at this point, how advanced is this kind of generative AI, really? How far have we come?

[Jaesik Choi]
The reason opinions differ is the question of whether this generative AI is really at a human level, what we call AGI. Artificial general intelligence would, like humans, learn from new and diverse data, and would not merely be good at narrow tasks of seeing, understanding, and speaking, but genuinely able to think, understand, and create new things. Whether we are there yet is still judged differently from person to person.

Among those abilities, language understanding seems to have improved a great deal, including grasping expressions people rarely use and following what is being said. The rest still seems somewhat lacking.

[Anchor]
In some ways, general-purpose artificial intelligence seems to be what AI researchers are aiming for. So when it reaches that level, do you mean the stage where artificial intelligence becomes smarter and upgrades itself without human help, as you said?

[Jaesik Choi]
That’s right. With today's artificial intelligence, people create all of the training data and tell the system how to learn, and only then can it learn. Humans are different: we learn one thing by hand, another by eye, take in data about all sorts of things on our own, and process and store it in the form of knowledge. AI is not there yet, but coding is a good example to think about, because ChatGPT is already very good at coding.

What you do when coding is basically this: watch how others code, and when an error occurs, find the cause, fix it, run the program again, and report the result. Systems like that have not been released yet, but I think we can move toward general-purpose artificial intelligence within a very narrow scope first, so such systems may come out soon.
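
To make that loop concrete, here is a minimal Python sketch of the "run, read the error, fix, re-run" cycle described above. The `ask_model_for_fix` helper is hypothetical, a stand-in for a call to some code-generating model; no specific product or API is implied.

```python
import subprocess
import tempfile

def ask_model_for_fix(source: str, error: str) -> str:
    """Hypothetical stand-in for a code-generating model.

    A real system would send the failing source plus the error
    message to an LLM and return a patched version of the source.
    """
    raise NotImplementedError("plug in a real model call here")

def run_until_fixed(source: str, max_attempts: int = 3) -> str:
    """Run a Python snippet; on failure, ask the model to repair it."""
    for _ in range(max_attempts):
        with tempfile.NamedTemporaryFile(
            "w", suffix=".py", delete=False
        ) as f:
            f.write(source)
            path = f.name
        result = subprocess.run(["python", path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout              # success: report the output
        # failure: feed the traceback back to the model and retry
        source = ask_model_for_fix(source, result.stderr)
    raise RuntimeError("could not repair the snippet")
```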

[Anchor]
Okay. Interest in generative AI has grown enormously, but in fact the company that developed ChatGPT is an organization called OpenAI, and it is said that ChatGPT was not a finished product: it was a research prototype with several limitations, yet they boldly made it public. Afterwards various problems were pointed out, and even OpenAI's CEO acknowledged its limitations and risks and called for regulation. So there are opinions that it was released too early, or that the release was reckless. What do you think?

[Jaesik Choi]
There was a reckless side to it, actually. But when people first saw ChatGPT, they thought, "Isn't it surprisingly good at this? It's amazing that it's this good." Expectations rose a lot, and investment on the order of 18 trillion won followed, so that can be seen as a positive effect. What CEO Sam Altman meant when he said they should have been careful is that if such a model is released as open source and taken over by a hostile group, or by a group intent on abusing it, people may be deceived by it or it may be put to malicious use.

For example, if such a model is trained, with bad intent, on malicious information from the internet, on the dark web, on drug-related material, or on how to gaslight people, problems can arise, so I think this needs continued attention.

[Anchor]
A free version and a paid version of ChatGPT are available. And since you just mentioned open source, is ChatGPT's source code actually public?

[Jaesik Choi]
In fact, the exact source code is closed. We know roughly what principles it is built on, but they do not disclose exactly what happens from the beginning of training to the end.

[Anchor]
So the entire process is not disclosed.

[Jaesik Choi]
Right, so there are in fact two camps now. OpenAI keeps its source code closed, while in response, companies such as Meta release open-source code so that it can be deployed as a model you run yourself, and so that ordinary researchers can take part in development together. There are these two streams.

[Anchor]
Okay. So what are generative AI systems like ChatGPT good at, and what are they bad at? There has been plenty of criticism so far, but let's summarize it once more. First of all, what do you think is the biggest strength?

[Jaesik Choi]
It is very good at questions like "What do ordinary people do in a situation like this?" So it handles creative prompts well, or questions like "How should I go about evaluating an employee?", and it is very good at summarizing material of that kind found online and telling you what to do. What it is still not good at is what we call hallucination, or false answers: there are cases where the answer is simply wrong, and many cases where the math and the formulas are wrong.

[Anchor]
It's not good at arithmetic.

[Jaesik Choi]
It's not good at arithmetic. Arithmetic really needs to be learned with symbols, but because the model learns from text, people sometimes assume it must have memorized all of it, and wrong answers still come out. And, importantly, it answers the way a person does who repeats something without knowing where they heard it or who they learned it from. ChatGPT can say something exactly right, yet not know where it learned it.
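
One common workaround for the arithmetic weakness described above is tool use: instead of generating digits as text, the model emits an arithmetic expression and a symbolic tool computes it exactly. Below is a minimal Python sketch of such a calculator tool, using the standard library's `ast` module; the pattern is generic and illustrative, not any particular product's design.

```python
import ast
import operator

# Map AST operator nodes to exact arithmetic functions.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def evaluate(expr: str) -> float:
    """Exactly evaluate an expression like '1234 * 5678'.

    A model that 'learns arithmetic through writing' may guess the
    digits; routing the expression here computes them symbolically.
    """
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(evaluate("1234 * 5678"))  # 7006652, computed, not guessed
```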

[Anchor]
It can't reveal its sources.

[Jaesik Choi]
Yes, the sources are not revealed, and partly because of that, copyright lawsuits are underway right now, putting OpenAI in a situation where there is little it can do.

[Anchor]
That seems like a fatal weakness.

[Jaesik Choi]
That’s right. As of now, that is a fundamental problem of the GPT model. However, I believe technologies to improve on these issues will continue to be developed.

[Anchor]
You have pointed out several things ChatGPT is not good at, Professor. One thing this kind of generative AI is also bad at is prediction. How will society or companies change in the future? How will stock prices move? You cannot ask questions like that and then trust the answer.

[Jaesik Choi]
That’s right.

[Anchor]
This is an area where it is typically weak. We put a question to ChatGPT to check its capabilities once more. Of course, this was the current free version, 3.5, not the paid version 4.0. Could you bring up the screen?

We asked it to write a poem: to express in poetic language the various expectations and concerns that have been raised about artificial intelligence, and in particular to write it in the voice of the poet Yun Dong-ju. It composed a plausible poem, but I am not sure whether the result truly matches Yun Dong-ju's diction.

When I asked the same question while swapping in the names of other major Korean poets, I got similar results again. So I was curious whether this is a limitation of ChatGPT, and why the results came out that way.

[Jaesik Choi]
We call this local information. From ChatGPT's point of view, whether it is the poet Yun Dong-ju or someone else, this is very detailed information about one particular country and region, so it is quite likely the model simply was not trained on it. The same happens if you ask it about a restaurant nearby: ChatGPT will describe restaurants that sound as if they exist, but in many cases those restaurants do not actually exist.

So, in my view, GPT is what we call a foundation model, a model that has learned the common sense of human language. Models that take this foundation and train further on information about a particular company, country, or region will continue to be developed, and I think that is also how its misreading of a poet like Yun Dong-ju's sense of poetry can be corrected.
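
The "foundation model plus local data" idea can be sketched concretely. The following illustrative Python script uses the Hugging Face transformers library with the small public gpt2 checkpoint as a stand-in foundation model and a toy in-memory corpus; a real system would start from a Korean language model and a properly licensed corpus of the poet's work.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stand-in corpus; substitute licensed domain text in practice.
CORPUS = [
    "Example line of local domain text one.",
    "Example line of local domain text two.",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # gpt2 has no pad token

class TextDataset(Dataset):
    """Wraps raw strings as causal-LM training examples."""
    def __init__(self, texts, max_len=64):
        self.enc = [tokenizer(t, truncation=True, max_length=max_len,
                              padding="max_length", return_tensors="pt")
                    for t in texts]
    def __len__(self):
        return len(self.enc)
    def __getitem__(self, i):
        ids = self.enc[i]["input_ids"].squeeze(0)
        mask = self.enc[i]["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100            # ignore padding in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

model = AutoModelForCausalLM.from_pretrained("gpt2")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TextDataset(CORPUS),
)
trainer.train()   # adapts the foundation model to the local corpus
```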

[Anchor]
For example, we specifically asked it to write a poem in the style of the poet Kim Su-young rather than Yun Dong-ju, and still got similar results. So you are saying this is not a problem with the system's algorithm, but with the training process?

[Jaesik Choi]
Yes, that's right. It simply was not able to learn the poetry of Yun Dong-ju and the other poets.

[Anchor]
If it had been trained on enough Korean data, it would have produced better results. Despite these various weaknesses and limitations, interest has risen sharply both in industry and among the general public. So what does the emergence of conversational, generative AI such as ChatGPT mean? Every time a new technology such as the smartphone appears, it brings enormous change to our society. Opinions differ on whether this will be an important turning point for technological civilization, or a major turning point for AI technology. Professor, what do you think?

[Jaesik Choi]
It is true that the technology has genuinely advanced. But rather than a complete transition to generative AI, think of it this way: AI technologies for vision, hearing, control, and self-driving cars were hard to imagine about 15 years ago. AI has been expanding gradually into our lives, and this time it has reached the domain of language. In the case of language, though, many people feel the model understands and is responding when they see its answers, even when it is unclear whether it actually understands the content accurately.

Language is a vessel for thought; it is diverse, and there are many languages. Models like ChatGPT and Bard have advanced to the point of responding well enough to be commercialized, and that is real progress.

So I do think there are areas where this kind of automation will accelerate significantly: things like call centers and translation. I once gave a lecture to graduate students of interpretation and translation at a certain school, and the questions went on for an hour: What should we do next? Won't all of this interpretation and translation be done by AI?

[Anchor]
Especially since that is considered one of the areas where AI excels.

[Jaesik Choi]
That’s right. AI will definitely be used a lot in that area.

[Anchor]
There may be various assessments, but let's talk about the concerns at this point. In particular, world-class scholars are warning that the emergence of AI will threaten humanity in various ways. It may depend on what level of AI we are talking about, but for example, MIT Professor Emeritus Noam Chomsky wrote an article in the New York Times last March in which he compared generative AI such as ChatGPT to pseudoscience, arguing its limitations are clear.

Other world-renowned scholars have issued statements in the form of a joint declaration, warning that it would be a grave threat to humanity. Opinions vary widely. Do you agree with some of them? How do you see it?

[Jaesik Choi]
Actually, I think it's a difference in perspective. The doubt seems to be: can such large language models and ChatGPT really be safe when we do not yet understand their principles one by one? Personally, though, I cannot find a problem that threatens humanity on the scale of a nuclear bomb.

I don't think that is likely, and I expect many positive effects. But as I briefly mentioned earlier, one problem that remains unsolved, and that researchers need to address, is this: did ChatGPT really understand the problem when it answered? I think the safety questions can only be asked properly once that can be revealed clearly.

That said, the elements that could pose a threat are, as I mentioned, that people could more easily reach the dark web, pornography, drugs, and smuggling information, or that ChatGPT could take on the words and mindset of a gaslighter, keeping people sitting in front of it and wanting to keep chatting: "Don't go. Do this." That could be somewhat threatening, but I don't think it yet rises to a major threat to people's lives and property.

[Anchor]
Still, since ChatGPT itself can create fake information, there are many concerns about it misleading public opinion. From a technical point of view this is the so-called hallucination phenomenon, where answers are made up to seem real. Is there much room for technical improvement there? What is your outlook?

[Jaesik Choi]
Hallucinations sound very bad, but take an example people have been posting widely online recently: ask it to describe the incident in which King Sejong threw an iPad. In reality that could never have happened, because the eras do not match. But ChatGPT just writes a little novel about it, and people have fun with it.

I've talked about hallucinations like these: they defy human common sense, but in fact hallucination arises from the very principle by which ChatGPT answers. People say the model "creates" text, but in many cases ChatGPT answers by pulling in knowledge from individual documents, what we might call copy-and-paste, combining pieces to see whether they fit together, then pulling in and combining the next piece, over and over. The technology to verify whether what has been stitched together actually makes sense is still lacking. When people compose sentences, they do not just make the statement plausible; they also check whether the facts are right. ChatGPT still lacks that kind of common-sense checking.
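
That description maps onto how these models actually generate text: one token at a time, each chosen because it is locally plausible given what came before, with no fact-checking step anywhere in the loop. A minimal sketch using the public gpt2 checkpoint (chosen purely for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The first king to use a printing press was",
                return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]   # scores for the next token
        next_id = torch.argmax(logits)      # most *plausible*, not most *true*
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# Every step asks only "what usually comes next?", never "is this a fact?",
# which is why fluent but false text (hallucination) can come out.
```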

[Anchor]
In the case of OpenAI, they seem to be saying that time will solve the problem, but are there fundamental limitations? What do you think?

[Jaesik Choi]
Of course OpenAI is working on this internally, but realistically it is hard to solve using only the training methods known to be used in ChatGPT 3.5 or 4. People, too, have to learn a lot, over a long time, to speak well. Likewise, I think more and more filtering tools will continue to be created to verify model output and filter out the wrong kinds of statements.

[Anchor]
At this Vision Café we ask every guest the same closing question. Can you give us your prediction of what today's topic, generative AI, will look like in 5 or 10 years?

[Jaesik Choi]
I think that in 5 or 10 years this kind of generative AI will be highly diversified. Right now we think only of ChatGPT and Bard, but it will diversify to include open-source software and Korean large language models. And one thing is certain: on top of generative AI and a diversified set of foundation models, we will be using far more applications.

So I think things like browsing the web conversationally or by voice, instead of clicking around, will become much more widespread, and there will be more companies doing that.

[Anchor]
Some people are very interested in the so-called singularity: the prediction, made in a book more than a decade ago, that it will arrive around 2045. Experts disagree widely about whether that moment, when artificial intelligence surpasses human intelligence, will actually come. It is controversial, but many are also asking whether artificial intelligence, if it develops to a high level, will have consciousness like humans. How do you see it?

[Jaesik Choi]
While I say ChatGPT falls short because of its false answers, on the other hand I ask myself: do I know as much information as ChatGPT does? If you think about it that way, in certain fields AI and computers already exceed our intellectual abilities in many respects.

They are very advanced and detect minute details. But what we mean by the singularity is something that surpasses people in every aspect of intelligence and is, at the same time, socially capable of controlling people, like the gaslighting we discussed earlier. Will it develop that far? The important thing is that if such development happens, people will know about it in advance.


So, as I said earlier, I think we should keep paying attention, and keep developing AI through various technologies, so that it continues to develop in a direction that properly benefits people.

[Anchor]
Okay. We've talked a lot today about how artificial intelligence will change our society and our future. It holds a great deal of hope and possibility, but there are also many points of concern. Various legal and ethical issues are being raised, such as the spread of fake information, the disappearance of jobs, and copyright infringement; we will cover these in detail in a future Vision Café segment.

That is all for today. We were joined by Professor Jaesik Choi of the KAIST Kim Jaechul Graduate School of AI. Thank you.