Ukrainska Pravda

Artificial intelligence does not pose a threat to humanity because it cannot learn on its own


Artificial intelligence has repeatedly been portrayed as a threat to all of humanity, as in the Skynet uprising of the Terminator franchise. However, a new study argues that no such threat exists today. According to Ukrainian professor of computer science Iryna Gurevych, who led the study, we should instead be concerned with how artificial intelligence is used. This was reported by Neuroscience News.

A joint study by the University of Bath and the Technical University of Darmstadt in Germany, presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), the leading international conference on natural language processing, shows that large language models (LLMs) lack the ability to learn on their own. LLMs can follow instructions and complete tasks, but they cannot acquire new skills independently.

The researchers concluded that LLMs in their current form can be deployed without safety concerns. The only threat artificial intelligence poses is malicious use by real people.

As part of the study, the researchers ran an experiment testing the ability of AI models to perform difficult tasks they had never encountered before. The models were able to answer questions about social situations without being specifically trained or programmed to do so. However, the researchers say the AI does not actually understand these situations; instead, it relies on in-context learning, imitating the few examples presented to it in the prompt.
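In-context (few-shot) learning means the worked examples are placed inside the prompt itself, and the model imitates the pattern rather than learning a new skill through training. The sketch below illustrates the idea with a hypothetical prompt builder; the task and examples are invented for illustration and are not taken from the study.

```python
# Minimal sketch of few-shot prompt construction (hypothetical example,
# not the study's actual setup): demonstrations go into the prompt, and
# the model is asked to continue the pattern.

def build_few_shot_prompt(examples, query):
    """Assemble a prompt from (situation, answer) demonstration pairs."""
    parts = []
    for situation, answer in examples:
        parts.append(f"Situation: {situation}\nAppropriate response: {answer}\n")
    # The final entry leaves the answer blank for the model to fill in.
    parts.append(f"Situation: {query}\nAppropriate response:")
    return "\n".join(parts)

examples = [
    ("A colleague greets you in the morning.", "Greet them back politely."),
    ("A friend looks upset after a meeting.", "Ask if they are okay."),
]

prompt = build_few_shot_prompt(examples, "A neighbour waves at you.")
print(prompt)
```

The model appears to "know" how to handle the new situation, but it is only extending the pattern supplied in the prompt, which is the distinction the researchers draw.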

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning. This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid,” said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the new study.

Another conclusion of the researchers was that it would be a mistake to ask artificial intelligence to perform complex tasks requiring complex reasoning without clear instructions. Instead, users who want the best result should describe the task as explicitly as possible, including all relevant details.
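The contrast the researchers describe can be shown with two invented prompts: an under-specified request versus one that spells out the task in full. Both prompts below are hypothetical illustrations, not examples from the study.

```python
# Hypothetical illustration of vague vs. explicit task instructions.

vague_prompt = "Summarise this report."

explicit_prompt = (
    "Summarise the attached quarterly report in three bullet points.\n"
    "- Focus on revenue changes and their stated causes.\n"
    "- Keep each bullet under 20 words.\n"
    "- Do not include figures that are not in the report."
)

# The explicit version leaves the model less room to fill gaps with
# plausible-sounding but unsupported content.
print(explicit_prompt)
```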

Iryna Gurevych says that instead of worrying about artificial intelligence posing a threat, we should be worried about its malicious use.

“Our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news,” Gurevych said.
