Mark Zuckerberg says that Llama 3.1 will be able to compete with the most powerful models of OpenAI and Google

Meta has introduced a new artificial intelligence model, Llama 3.1. The company claims the new model can compete with the most powerful offerings from rivals such as OpenAI and Google, Bloomberg reports.

Meta CEO Mark Zuckerberg said that the company's chatbot, built on Llama, already has "hundreds of millions" of users, and he expects it to become the most widely used in the world by the end of the year. He also believes that other companies will use Llama to train their own AI models.

“I think the most important product for an AI assistant is going to be how smart it is,” says Mark Zuckerberg in an interview. “The Llama models that we’re building are some of the most advanced in the world.” Meta is already working on Llama 4, Zuckerberg added.

Zuckerberg said that training the Llama 3 models cost "hundreds of millions of dollars" in computing power, and he expects future models to cost even more.

"Going forward it’s going to be billions and many billions of dollars of compute power," he said.

He believes that other AI companies are spending too much money and growing too fast. In 2023, Meta trimmed its own costs by eliminating thousands of jobs, a period Zuckerberg called the "year of efficiency."

The flip side of such restrained spending, however, is the risk of falling behind competitors.

“If AI is going to be as important in the future as mobile platforms are, then I just don’t want to be in the position where we’re accessing AI through a competitor,” Zuckerberg emphasized.

Meta's CEO expressed his frustration that the company has to rely on Google and Apple to distribute its apps on smartphones.

Despite the promise to make Llama open, Zuckerberg and other top executives have kept the datasets used to train Llama 3.1 secret.

“Even though it’s open we are designing this also for ourselves,” he explained.

According to Zuckerberg, Meta uses publicly available user posts from Facebook and Instagram, as well as other "proprietary" data sets that the company has licensed without disclosing details.

He also rejected the idea that training Llama on data from Facebook and Instagram posts is a key advantage.

“A lot of the public data on those services we allow to be indexed in search engines, so I think Google and others actually have the ability to use a lot of that data, too,” he noted.

In April, Meta told investors that it plans to spend billions of dollars more this year than it originally expected, mainly because of its investment in AI.

The company is expected to receive about 350,000 NVIDIA Corp. H100 GPUs by the end of the year. H100 chips, which can cost tens of thousands of dollars apiece, have become the underlying technology used to train large language models such as Llama and the models behind ChatGPT; at those prices, 350,000 of them would represent an outlay in the billions of dollars.

Critics of Meta's open-source approach to AI point to the potential for abuse, as well as fears that tech companies in geopolitical rival countries such as China could use Meta's technology to keep pace with their American counterparts.

Zuckerberg is more concerned that closing off access to technology from other parts of the world could ultimately be harmful.

“There’s one string of thought which is like, ‘Ok well we need to lock it all down,’” he said. “I just happen to think that that’s really wrong because the US thrives on open and decentralized innovation. I mean that’s the way our economy works, that’s how we build awesome stuff. So I think that locking everything down would hamstring us and make us more likely to not be the leaders.”

He added that it is unrealistic to expect the US to ever be years ahead of China in AI development, but noted that even a small lead of a few months can eventually "add up" and give the US a clear advantage.
