
Jensen Huang talks with Zuckerberg: Will the next wave of AI be open-source large models and robots?

On the morning of July 30, at the SIGGRAPH conference in Denver, NVIDIA founder and CEO Jensen Huang sat down for two fireside chats, one with Lauren Goode, senior writer at Wired, and one with Mark Zuckerberg, founder and CEO of Meta, to imagine the future of generative AI.

In the first half of the conversation, Huang and Zuckerberg discussed generative AI, open-source technology, and more, emphasizing the importance of open-source models in driving the development of AI. In the second half, they turned to AR/VR computing platforms and the coming wave of robotics, arguing that the next wave of AI is robots and that AR glasses may eventually replace smartphones. Toward the end of the fireside chat, Huang and Zuckerberg once again exchanged jackets, sparking lively discussion online.


Generative AI will enter a new wave of applications

Before the conversation began, Huang had already shown off a series of Nvidia's latest products at the conference, announcing that Nvidia would begin shipping samples of Blackwell, the company's new chip architecture for this year, that week. According to NVIDIA, compared with the previous-generation Hopper H100 GPU, the Blackwell GPU delivers up to 4x the training performance, up to 30x the inference performance, and 25x better energy efficiency.

Zuckerberg, for his part, kicked things off by announcing a new tool called AI Studio, built on the company's latest large model, Llama 3.1. It lets users create, share, and design personalized AI chatbots, and allows Instagram creators to use AI characters "as an extension of themselves" to handle simple automated replies.

In the conversation, Zuckerberg said that social media carries more and more information, that helping users filter content has become a major challenge, and that generative AI can help improve social media recommendation systems. He believes that in the future every enterprise will have its own AI agent capable of independently completing complex, long-running tasks.

"AI will be used not only for content recommendation systems, but also for instant content generation and the integration of new content from existing content, which will revolutionize the flow of information and recommendation systems on platforms like Instagram and Facebook." Zuckerberg said.

Zuckerberg shared Meta's progress on generative AI and noted that the development of foundation models is accelerating. "Even if progress on foundation models stalled now, it would take the industry at least five years of product innovation to figure out how to most efficiently leverage everything that has been built so far. But in reality, progress in foundation models and basic research is accelerating."

While praising Meta's exploration of AI, Huang said, "I think very few people realize that the recommender system is one of the largest computing systems ever designed in the world." He added that AI chatbots are still at the one-question, one-answer stage, and that he is extremely excited about the prospect of AI that can generate decision-tree-style reasoning for users in the future.

Within the generative AI ecosystem, AI Studio is an important step in Meta's iterative improvement of AI chatbots, and the Llama 3.1 model behind it is crucial. Huang has previously said that generative AI is driving a fundamental shift in every field, that the open-source Llama 3.1 model marks a critical moment for global enterprises adopting generative AI, and that it will set off a wave of advanced generative AI applications across businesses and industries.

The open source model will win

As large models continue to evolve, open-source models are showing ever greater potential.

Just last week, Meta officially released the Llama 3.1 family of models in three sizes: 8B, 70B, and 405B. Llama 3.1 405B is billed as the world's most powerful open-source model, on par with mainstream closed-source models such as OpenAI's GPT series. Although how "open" these models really are remains controversial, they have become a readily accessible performance benchmark in the field.

Huang repeatedly praised Zuckerberg's open-source approach, saying that the emergence of Llama has energized every company and every industry and is helping more developers and companies gain access to AI model technology.

He also said that the open-source strategy benefits more people, and that, inspired by Llama, Nvidia has created the new NVIDIA NIM and AI Foundry services, which allow developers to create smaller, custom Llama 3.1 models for generative AI applications and enable enterprises to run Llama-powered AI applications on a wider range of infrastructure, including PCs.
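
Huang did not show code on stage, but as a minimal sketch of what "running a Llama-powered application" against such a service can look like, the snippet below queries a Llama 3.1 model through an OpenAI-compatible chat endpoint of the kind NIM microservices typically expose. The base URL, model identifier, and API-key environment variable are illustrative assumptions, not details from the talk.

```python
# Minimal sketch: querying a Llama 3.1 model behind an OpenAI-compatible
# endpoint (as NIM-style microservices commonly expose). The base_url,
# model name, and environment variable below are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # hypothetical env var
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",              # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what an open-source LLM is."},
    ],
    temperature=0.2,
    max_tokens=200,
)

# Print the model's reply from the first returned choice.
print(response.choices[0].message.content)
```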

In fact, Meta has been an important Nvidia customer since it began developing large models. Huang revealed in the conversation that Meta may now own some 600,000 Nvidia GPUs. In January this year, Zuckerberg announced that Meta plans to acquire 350,000 Nvidia H100 GPUs by the end of the year.

Asked why the Llama models are open source, Zuckerberg said, "Open source builds a more affordable and accessible technology platform for developers. It makes it easier for them to modify the code to create their own applications, and it saves a great deal of development cost. At the same time, open source also helps build a more diverse ecosystem."

He added, "Developers who stick to the open-source route and those who stick to the closed-source route each have their own reasons, and neither is inherently good or bad. But overall, open source is valuable for the computing platforms the industry is building. On an open operating system, we can work with different software and hardware companies to create more applications."

Zuckerberg is firmly optimistic that open source will win for Meta in the next stage of competition.

Zuckerberg, however, cursed when describing the impact of rival Apple's closed platform, saying that closed platforms are "somewhat selfish." "One of the things I want to do over the next 10 or 15 years, after building Meta for this long, is just to make sure that we can build the foundational technology for the social experience," he added. That requires building not only AI software but also an ecosystem around it; open source not only makes what Meta is building the best it can be, but is also good for that ecosystem.

The next wave of AI is robotics

As for the end applications of AI technology, in his conversations with Lauren Goode and Zuckerberg, Huang emphasized that the next wave of AI will be "physical AI", that is, AI that better understands the physical world.

Huang said the first wave of AI was accelerated computing, which not only reduces energy consumption but also serves enterprise customers, and he hopes it will give every organization the opportunity to create its own AI. "The next wave is physical AI. It requires three computers: one to create the AI, one to simulate it, and one to run it inside the robot."

In other words, Huang believes the next wave of AI is humanoid robots. As the physical AI video presentation put it, "We are entering the era of AI-powered humanoid robots."

At the conference, NVIDIA detailed how it plans to accelerate the development of humanoid robots and announced several new offerings, including NIM microservices and frameworks for robot simulation and learning, the OSMO orchestration service for running multi-stage robot workloads, AI- and simulation-enabled teleoperation workflows, and an AI-driven customer service agent.

Among them, the MimicGen NIM microservice can generate synthetic motion data from teleoperation data recorded by spatial computing devices such as Apple's Vision Pro, while the Robocasa NIM microservice generates robot tasks and simulation-ready environments in OpenUSD. The teleoperation workflow, for example, allows developers to train robots with only small amounts of human demonstration data.

In addition, NVIDIA announced that it will provide a suite of services, models, and computing platforms to the world's leading robotics manufacturers, AI model developers, and software makers to develop, train, and build the next generation of humanoid robots. The first companies to join the NVIDIA Humanoid Robot Developer Program include 1X, Boston Dynamics, ByteDance Research, Field AI, Figure, Fourier, Galbot, and LimX Dynamics.

"Computer vision is a current research focus, and NVIDIA is using video to train AI models so that they can better understand the world," Huang said. In the future, robots will also be better able to operate in the physical world. ”

Zuckerberg, for his part, believes that virtual worlds will be a powerful driver of the next wave of AI and robotics. "Smart glasses will be the mobile version of the next generation of computing platforms, and mixed reality headsets will be more like your workstation or game console." The development of these devices, he said, will let people interact with the virtual world in a more natural and intuitive way, driving advances in AI and robotics.

AR glasses may replace smartphones

Driven by advances in AI and display optics, virtual reality (VR) and mixed reality (MR) are entering a new stage of development.

Huang said smart glasses and mixed reality headsets have huge market potential across different use cases, and that AI devices that do not need to be worn all the time will be especially welcome.

When asked about the future of smart glasses, Zuckerberg admitted that he had always expected holographic AR glasses to arrive before the AI era, yet they have still not reached mass production. Even so, "smart glasses will be an important form of computing in the future, and the next generation of computing platforms will be a combination of smart glasses and virtual reality (VR) / mixed reality (MR) headsets, with smart glasses for mobile use and VR/MR headsets for compute-intensive, workstation-style tasks."

"We're still some time away from having holographic glasses," he added. I don't think it's too far off to achieve this in a sleek, thicker-framed pair of glasses, though. "In the future, a video conversation may no longer be just an image of the interlocutor, two people can interact, play cards, and have a face-to-face meeting, which sounds crazy, but it is still the direction of the effort."

Meta has previously partnered with Ray-Ban on two generations of smart glasses, and sales of the second-generation Ray-Ban Meta have exceeded expectations. Since the beginning of this year, Meta's first AR glasses have been rumored to debut in 2024.

Zuckerberg said, "We have been building what we believe is the technology needed for the ideal holographic AR glasses, and in the future, there will be a series of smart glasses products at different price points and different levels of technology, among which the $300 price point will become the most popular style, which is expected to usher in tens or hundreds of millions of consumers." At the same time, he is also optimistic about integrating AI with the real world through glasses, and mentioned Meta's partnership with eyewear manufacturer Luxotic, which he believes can help transform education, entertainment and work.

"Five years ago, I thought that AR would develop faster than AI, but now the rapid development of AI has reversed this fact and has played an important role in the development of virtual reality." Zuckerberg said that in the future, AR glasses may replace smartphones.

Looking at the development paths of Nvidia and Meta, neither fully foresaw the AI trend, yet both have grown rapidly in this new phase thanks to their positioning around AI; Nvidia in particular has soared into the top three companies in the world by market capitalization.

At the end of the fireside chat, Huang reflected that, from the mobile era to virtual reality to AI, he and Zuckerberg have been both witnesses to and participants in technological change, and that the journey has not been easy. "In the process of technological change, the two of us have been knocked around many times, but we both persevered, and that's what it takes to be a pioneer of innovation."
