Will AI Surpass Human Intelligence?

Just weeks ago, on April 11, the Cyberspace Administration of China unveiled draft regulations for products and services that use generative artificial intelligence. This technology, the same kind behind ChatGPT, is pushing AI-generated content to a more human-like level than at any other point in history.

Despite these early efforts to regulate AI, debate continues over the level of capability it has actually reached.




ChatGPT — can it be counted as a technological revolution?

In the second episode of Youth on Tech, a digital program produced by Science and Technology Daily, we discuss what ChatGPT-like AI can and cannot do. We start with the fundamental principles behind today's much-hyped AI chatbots, pointing out both their strengths and their existing problems.

Muhammad Arif Mughal, an AI specialist from the University of Science and Technology Beijing, suggests that ChatGPT is revolutionary only in the way it learns from vast amounts of text. Echoing this idea is Li Zhinan, a doctoral student in computer-aided design and computer graphics at Zhejiang University, who argues that ChatGPT's advantage lies only in its unprecedented scale and effectiveness; its fundamental techniques and algorithms were developed long ago.

Liu Xiuyun, a professor of biomedical engineering at Tianjin University, discusses the current uses and potential of AI in medicine. She also cites a survey conducted by the University of Oxford, which suggests that approximately half of all traditional, repetitive jobs may be replaced by AI within the next 20 years.


Social impacts and ethical concerns

Despite its positive effects, current AI still has limitations. On the question of trustworthiness, we examine the concept of AI hallucination. According to Cam Tu Nguyen, a Vietnamese associate professor of AI at Nanjing University, content generated by a ChatGPT-like chatbot may be readily mistaken for correct information because of its fluency. Over time, this accumulation of subtle misinformation could lead to negative social impacts on a large scale.

When OpenAI released ChatGPT last November, its website acknowledged that ChatGPT sometimes generates plausible-sounding but incorrect or nonsensical responses. Fixing this issue is challenging due to the nature of its training data and training methods.

Students who are banned from using ChatGPT will find ways to access it anyway, Cam Tu suggests. A more practical approach, she adds, would be to design a different kind of testing system that contrasts answers given by students with those given by AI, so that educational goals can still be met.

SOURCE Science and Technology Daily