
SIGGRAPH 2024, Jensen Huang and Zuckerberg's Latest Conversation: Feeds and Recommender Systems Are Worth Rebuilding with Generative AI

At SIGGRAPH 2024, NVIDIA CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg had an in-depth conversation about the future of generative AI in feeds and recommendation systems. The conversation covered not only the cutting edge of AI technology but also innovative breakthroughs in virtual worlds and robotics. This article takes you inside that exchange.

At 6 a.m. on July 30, NVIDIA CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg sat down at SIGGRAPH 2024 to discuss accelerated computing, generative AI, and the research driving the next wave of innovation, including breakthroughs in virtual worlds and robotics.

Huang highlighted NVIDIA's leadership in computer graphics, image processing, AI and robotics, particularly at the intersection of AI and simulation.

He mentioned that NVIDIA uses AI to help simulations become larger and faster, and uses the simulation environment to generate synthetic data. The combination of these technologies is driving the convergence of AI and simulation technologies.

Zuckerberg noted that AI will be used not only for content recommendation but also to generate content on the fly and to synthesize new content from existing content, which will transform feeds and recommendation systems on platforms like Instagram and Facebook.

Zuckerberg also said that smart glasses will be the mobile form of the next-generation computing platform, while mixed reality headsets will be more like workstations or game consoles. Meta has partnered with EssilorLuxottica to launch Ray-Ban smart glasses that integrate cameras, microphones, and conversational AI.

The following is a quick summary of the conversation. Enjoy~

Jensen Huang

Did you know? Ninety percent of the people here are PhDs. What's really great about SIGGRAPH is that it's a showcase of computer graphics, image processing, AI, and robotics combined. Companies have shown and revealed amazing things here over the years: Disney, Pixar, Adobe, Epic Games, and of course NVIDIA, which has done a lot of work here. This year, we presented 20 papers at the intersection of AI and simulation. We're using AI to help simulations become larger and faster, for example through differentiable physics, and we're using simulation to create environments for generating synthetic data for AI.

So those two areas are really coming together, and we're proud of the work we've done here. At Meta, you've done a lot of amazing AI work as well. I find it funny that when the media writes about Meta jumping into AI over the last few years, it's as if this were new, but FAIR (Meta's AI research lab) has been at it for years; we all use Meta's PyTorch, and your work on computer vision, language models, and real-time translation has been groundbreaking. My first question is: how do you see Meta's progress in generative AI today, and how are you applying it to enhance your operations or introduce new capabilities?

Mark Zuckerberg

There's a lot to unpack here. First of all, it's a pleasure to be here. Meta has done a lot of work and has been coming to SIGGRAPH for about eight years, so we're still newcomers compared to you. Back in 2018, we showed off some of our early hand-tracking work for our VR and mixed reality headsets.

I think we've talked a lot about the progress we've made on Codec Avatars, the photorealistic avatars that we want to be able to drive from consumer headsets, and we're getting closer to that goal, so we're very excited about it. And there's also a lot of work we've done on display systems.

So we've shown a lot of future prototypes and research here aimed at making mixed reality headsets very thin while still having fairly advanced optical stacks and display systems, all integrated. Usually these are the things we show here first, so it's exciting to be here. And I'm not just talking about the metaverse work but also all of the AI work. Like you said, we started FAIR (the AI research lab) back when we were still Facebook, before it was Meta, and we'd been working on AI for a while before we started Reality Labs. As for generative AI, it's an interesting revolution.

It will ultimately change all of our different products in interesting ways. Take the main product lines we already have, like the feeds and recommendation systems on Instagram and Facebook. We started that journey with just connecting you with your friends. Ranking has always been important, because even if you're only following friends, when someone does something very important, like your cousin having a baby, you want it at the top. If we buried it in your feed, you'd be upset.

So ranking matters, but over the last few years it has become much more about public content from different sources. Recommendation systems become really important because now it's not just a few hundred or a few thousand candidate posts from friends but millions of pieces of content, which turns into a very interesting recommendation problem. And as generative AI evolves, we're quickly moving toward a point where most of the content you see on Instagram is recommended from across the world based on your interests, regardless of whether you follow those accounts.

On top of that, a lot of it will be created with these tools in the future. Some of that is creators using the tools to make new content; eventually some of it will be content generated on the fly for you, or synthesized from different existing pieces. So this is just one example of how the core of what we do is going to evolve, and it has been evolving for the past 20 years.

Jensen Huang

Most people don't realize that one of the largest computing systems in the world is a recommender system.

Mark Zuckerberg

But it's a completely different path. It's not the kind of generative AI people usually talk about, though with all the Transformer architectures it's a similar idea: building increasingly general models that embed unstructured data into features.

I mean, one of the big drivers of quality improvement is that we used to have different models for different types of content. A recent example: we had one model to rank and recommend Reels and another to rank and recommend longer-form videos, and then it took some product work to let the system show both inline in a single feed. But the more general the recommendation models you build, the better they get.

So part of my dream is that one day you can almost imagine all of Facebook or Instagram being a single AI model that unifies all these different content types and systems, with different objectives over different time horizons: part of it is showing you interesting content you want to see today, and part of it is helping you build out your network over the long term, such as people you may know or accounts you might want to follow.
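To make the idea concrete, here is a minimal sketch of the pattern Zuckerberg describes: one shared encoder that embeds heterogeneous content into features, with separate heads for objectives on different time horizons that are blended into a single ranking score. The architecture, layer sizes, and blend weights below are illustrative assumptions, not Meta's actual system.

```python
import torch
import torch.nn as nn

class UnifiedRanker(nn.Module):
    """Toy unified recommender: one encoder for all content types,
    plus separate heads for objectives on different time horizons."""
    def __init__(self, dim: int = 128):
        super().__init__()
        # A shared Transformer encoder embeds any candidate item (a reel,
        # a long video, a post) that has been featurized into token vectors.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.engage_head = nn.Linear(dim, 1)   # short term: "interesting today"
        self.connect_head = nn.Linear(dim, 1)  # long term: "build your network"

    def forward(self, user_vec, item_tokens):
        # item_tokens: (batch, seq_len, dim) unstructured features per candidate
        item_vec = self.encoder(item_tokens).mean(dim=1)
        joint = user_vec * item_vec            # simple user-item interaction
        return self.engage_head(joint), self.connect_head(joint)

model = UnifiedRanker()
user = torch.randn(8, 128)        # 8 users' embeddings
items = torch.randn(8, 16, 128)   # one candidate (16 feature tokens) per user
engage, connect = model(user, items)
score = 0.8 * engage + 0.2 * connect   # blend objectives into one ranking score
print(score.shape)                      # torch.Size([8, 1])
```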

Jensen Huang

These multimodal models tend to perform better in recognizing patterns, weak signals, and so on. So what's interesting is that AI is so deep in your company, you've been building the GPU infrastructure to run these large recommender systems.

But once you got into this area, you really got into it; you're deeply engaged. Nowadays, when I use WhatsApp, I feel like I'm collaborating with it. I love imagining things as I type and watching it generate images; I go back and change my words, and it generates other images.

Mark Zuckerberg

That shipped just last week. Very exciting. Now picture me spending a lot of time with my daughters, imagining them as mermaids; it's been a lot of fun this past week. I mean, that's the other half of it. A lot of the generative AI work, on one hand, is going to be a major upgrade to all the workflows and products we've been building for a long time.

But on the other hand, it lets completely new things be created. The idea behind Meta AI is to have an AI assistant that can help you with all kinds of tasks and be very creative. As you say, these models are very general, so you're not limited to that; it will be able to answer any question.

Over time, as we move from the Llama 3 models to Llama 4 and its successors, it will no longer feel like a chatbot where you give it a prompt and it responds, then you give it another prompt and it responds. Instead, you give it an intent, and it can work over multiple time frames, perhaps acknowledging up front that it understood your intent. Some things will kick off compute jobs that take weeks or months to complete, and then it comes back and tells you what happened. That's going to be very powerful.

Jensen Huang

Today's AI, as you know, is somewhat one-shot: you say something, and it gives you something back. But obviously, when we think about a task or a problem, we consider multiple options, or maybe we build a decision tree and walk along it, simulating in our minds the different outcomes each decision might produce. So we plan. In the future, AI will do the same. I got really excited when you talked about your vision for creator AI; it's a really great idea. Tell us about creator AI and AI Studio.

Mark Zuckerberg

Actually, that's exactly what you're describing, and today we're rolling it out more widely. Our vision is that there won't be just one AI model. Some other companies in the industry are building a single central agent. We will have a Meta AI assistant that you can use, but our vision is to empower everyone who uses our products to create agents for themselves.

That goes for the many creators on the platform and for the millions of small businesses as well. Ultimately, we want to be able to quickly pull in all of your content and stand up a business agent that can interact with customers, handle sales, and do customer support. What we're just starting to roll out now is called AI Studio. It's basically a suite of tools that will eventually let every creator build a version of themselves as an agent or assistant that their community can interact with.

There's a fundamental problem here: there isn't enough time. If you're a creator, you want to interact more with your community, but time is limited; likewise, your community wants to interact with you, but your time is limited. So the next step is giving people the ability to create these artifacts: an agent you train to represent you and behave the way you want it to. It's a very creative endeavor, almost like a piece of art or content that you're publishing.

Of course, it's clear that it isn't the creator themselves you're interacting with, but it becomes another interesting channel: just as creators post content on these social systems, they can have an agent do the same kind of thing. Similarly, people will create their own agents for all sorts of purposes. Some are custom utilities, something they want to get done, so they fine-tune and train an agent for it. Some are entertainment: things that are just silly in different ways or have a funny attitude, things we probably wouldn't build into Meta AI as an assistant, but that people are quite interested in interacting with.

Then there's an interesting use case where people use these agents for support. One thing that surprised me a little is that one of the main use cases for Meta AI is people using it to role-play difficult social situations, whether professional, like "How do I ask my manager for a promotion or a raise?", or an argument with a friend, or a difficult situation with a girlfriend. They simulate the conversation, see how it might go, and get feedback.

A lot of people don't want to interact with only one single agent, whether that's Meta AI or ChatGPT or whatever someone else uses. They want to create something of their own. That's the general direction of AI Studio. It's all part of our larger vision: we don't think there should be just one big AI for people to interact with. We think the world will be better and more interesting if there's a whole variety of them.


Jensen Huang

It could be really cool. If you're an artist with your own style, you could take your style, all of your work, and fine-tune a model on it.

Mark Zuckerberg

And then it becomes an AI model you can prompt.

Jensen Huang

You could ask it to create something in my art style. You could even give it a painting as inspiration and have it generate something for you; you'd come to my AI for that. In the future, every restaurant and every website will probably have one of these AIs.

Mark Zuckerberg

I think in the future, every business will have an AI agent that interacts with customers, just as they have an email address, a website, and social media accounts today. Historically, that's been hard to do. Think about any company: there's a customer support department that sits separately from the sales department. As CEO you don't really want them separate; it's just that they require different skill sets.

Jensen Huang

I'm your customer support, at least when it comes to our work together. Apparently I am. Whenever Mark needs something, he just comes to me; I don't know if that makes me a chatbot, but I'm his chatbot.

Mark Zuckerberg

I guess that's what you have to do when you're the CEO. But when you build layers of abstraction in an organization, those units often end up separate because they're optimized for different goals. Ideally, they'd be one whole. As a customer, you don't care that buying something and getting a problem fixed take different paths; you just want one place where your questions get answered and you can deal with the business. The same applies to creators. For consumers, these interactions with customers...

Jensen Huang

Complaints, in particular, will make your company better. Totally agree. All interactions with AI capture institutional knowledge, which can be fed into analytics to further improve the AI, and so on.

Mark Zuckerberg

The commercial version could be more integrated, but we're still in the early stages. With AI Studio, people can create their UGC Agents and different things and get started on this flywheel. I'm very excited about it.

Jensen Huang

So I can use AI Studio to fine-tune my images, my image collections?

Mark Zuckerberg

We'll do it.

Jensen Huang

So could I give it everything I write and use it as my RAG? Basically, yes. Good. Then every time I come back to it, it loads the memory from last time and we can pick up the conversation where we left off.
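What Huang sketches here is a standard retrieval-augmented generation (RAG) loop with a persisted conversation memory: index everything you've written, retrieve the most relevant passages for each question, and prepend them, plus the saved history, to the prompt. A minimal sketch follows; the embedding model name, the memory file, and the `generate` callable are illustrative assumptions rather than any specific product's API.

```python
# Minimal RAG-with-memory sketch over your own writings.
import json, os
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

documents = [
    "Passage 1 of everything I've written...",
    "Passage 2 of everything I've written...",
]
doc_vecs = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 3):
    """Return the k passages most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [documents[i] for i in np.argsort(-scores)[:k]]

MEMORY_FILE = "memory.json"  # persisted so the next session picks up where we left off
history = json.load(open(MEMORY_FILE)) if os.path.exists(MEMORY_FILE) else []

def ask(question: str, generate) -> str:
    """`generate` is whatever LLM completion call you have (a placeholder here)."""
    context = "\n".join(retrieve(question))
    recent = "\n".join(f"Q: {h['q']}\nA: {h['a']}" for h in history[-5:])
    prompt = f"Context:\n{context}\n\nPrevious conversation:\n{recent}\n\nQ: {question}\nA:"
    answer = generate(prompt)
    history.append({"q": question, "a": answer})
    with open(MEMORY_FILE, "w") as f:
        json.dump(history, f)
    return answer
```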

Mark Zuckerberg

Just like any product, it will get better over time, and so will the training tools. It's not only about what you want it to say; creators and businesses usually also have topics they want it to avoid, and the tools are getting better at that. Ideally it's not just text either; eventually you almost want to be able to video chat with it, which intersects with some of the Codec Avatars work we're doing. We'll make that happen. These things aren't far off, and the flywheel is spinning fast. It's exciting; there's a lot of new stuff to build.

Even if progress on foundation models stopped today, we'd have five years of product innovation ahead of us just figuring out how to use what already exists most effectively. But in fact, progress on foundation models and basic research is accelerating. It's a pretty crazy time.

Jensen Huang

Your vision is that everyone can have an AI and every business can have an AI. In our company, I want every engineer and every software developer to have an AI, or even multiple AIs. I love your vision that everyone and every company can build their own AI. You open-sourced Llama 2, which, by the way, was the biggest event in AI last year.

Mark Zuckerberg

There's also the H100. It's a chicken-and-egg question.

Jensen Huang

It's a chicken-and-egg problem. Which came first? Actually, Llama 2 wasn't even trained on H100s; it was trained on A100s. Still, it was the biggest event because when it came out, it activated every company, every business, and every industry.

All of a sudden, every healthcare company is building AI, every company is building AI, every big company, every small company, startup is building AI. It enables every researcher to re-engage with AI because they have a starting point.

Now that Llama 3.1 is out, the excitement is enormous. Together with our partners, we're deploying Llama 3.1 and bringing it to businesses around the world. The enthusiasm has been beyond imagination, and it will enable all kinds of applications.

But tell me about your open source philosophy. Where did it come from? You open-sourced PyTorch, which is now the framework for doing AI. You've open-sourced Llama, and now Llama 3.1, and you've built an entire ecosystem around it. Where did all this come from?

Mark Zuckerberg

There's a lot of history to this. We do a lot of open source work. Part of the reason is that, frankly, we didn't start building distributed computing infrastructure and data centers until after some other tech companies. So when we build these things, they're no longer a competitive advantage. We thought, in that case, why don't we open source it so we can benefit from the ecosystem. So we have a lot of projects like this.

Probably the biggest one is the Open Compute Project, where we published our server designs, network designs, and eventually data center designs. By making them industry standards, the supply chain organized around them, which lowered costs for everyone. By publishing these designs, we've basically saved billions of dollars.

Jensen Huang

Open Compute is what made it possible for the NVIDIA HGX, which we designed for one data center, to suddenly work in every data center.

Mark Zuckerberg

It works in every data center, which is great. So we had a great experience with that. We've also released major infrastructure tools like React and PyTorch. I'd say that even before Llama came along, we were already positive about this kind of openness.

On AI models specifically, I have a few views. First of all, building things over the last 20 years has been a lot of fun, but one of the hardest parts was having to ship our apps through competitors' mobile platforms. On the one hand, mobile platforms were a huge boost for the industry.

On the other hand, it is challenging to release a product through a competitor's platform. I grew up in a time when the first version of Facebook was on the web, and it was open. And then with the shift to mobile, the benefit is that everyone now has a pocket computer.

The bad part is that what we could build became much more constrained. When you look at these generational shifts in computing, there's a bias toward looking only at mobile and concluding that closed ecosystems win, because Apple basically won the market and sets the terms. I know there are technically more Android phones out there, but Apple basically takes the whole market's profits, Android largely follows Apple's lead, and Apple clearly won this generation.

But if you look back at the previous generation, Apple did the closed thing while Microsoft was comparatively open: Windows ran on many different OEMs' hardware, it was a more open ecosystem, and Windows was the leading ecosystem. In the PC era, the open ecosystem won. I hope that in the next generation of computing, the open ecosystem is the leader again. There will always be both closed and open systems; both have reasons to exist and each has its advantages. I'm not a zealot about this, we do closed-source things too, and not everything we release is open.

But in general, for the industry as a whole, there's a lot of value when the software in particular is open. That has really shaped my philosophy. With Llama on the AI side, and with the work we're doing in AR and VR, we're essentially building an open operating system, like Android or Windows, that lets us work with lots of different hardware companies to make a wide variety of devices.

We basically want to bring the ecosystem back to that open state, and I'm optimistic that the open system wins the next generation. For us, I just want to make sure we can build the foundational technology that we'll build our social experiences on top of, because there have been too many things I've tried to build and then been told by the platform provider that I couldn't. So for the next generation, my goal is that we build all the way down from the ground up.

Jensen Huang

It's a great world when some people are dedicated to building the best possible AI and offering it to the world as a service, and yet, if you want to build your own AI, you still can. Take this jacket: there are plenty of things I don't want to make myself; I'd rather have someone make this jacket for me.

You know what I mean? The fact that the leather could be open source doesn't mean much to me. But being able to have a great service, an incredible service, and also an open one, openly available, that's a really good combination.

The 3.1 release you did was really great: the 405B, 70B, and 8B models, where the larger model can be used to generate synthetic data and teach the smaller models.

The larger model is more general and less brittle, but you can still distill it into smaller models that fit any operating domain or cost envelope. And the way you build the models is transparent: you have a world-class safety team and a world-class ethics team, and you can build them properly in a way everyone can see. I really like that.
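The "larger model teaches smaller models" pattern Huang refers to is usually done by generating synthetic training data with the big model and fine-tuning the small one on it. Here is a minimal sketch of the data-generation half, assuming an OpenAI-compatible endpoint serving a large Llama 3.1 model; the base URL, deployment name, and seed prompts are illustrative placeholders.

```python
# Sketch: use a large "teacher" model to produce synthetic fine-tuning data
# for a smaller "student" model. Endpoint URL and model name are assumptions;
# any OpenAI-compatible server hosting Llama 3.1 405B would look similar.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
TEACHER = "llama-3.1-405b-instruct"  # hypothetical deployment name

seed_prompts = [
    "Explain how a ring buffer works.",
    "Summarize the difference between TCP and UDP.",
]

with open("synthetic_sft.jsonl", "w") as f:
    for prompt in seed_prompts:
        resp = client.chat.completions.create(
            model=TEACHER,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        )
        answer = resp.choices[0].message.content
        # Each line becomes one supervised fine-tuning example; the smaller
        # (e.g. 8B) student is then trained on this file with any SFT tool.
        f.write(json.dumps({"prompt": prompt, "response": answer}) + "\n")
```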

Mark Zuckerberg

I got sidetracked from that earlier, but let me add one thing. We do this because we want this thing to exist and we don't want to be cut off from someone else's closed model. And it's not just a piece of software you build once; you need an ecosystem around it.

If we didn't open-source it, it would hardly work as well. We don't do it because we're altruists, although it does help the ecosystem; we do it because we think a strong ecosystem will make the thing we're building the best it can be.

Jensen Huang

Look at how many people contribute to the PyTorch ecosystem: hundreds of engineers. NVIDIA alone has hundreds of engineers focused on making PyTorch better, more scalable, more efficient, and so on.

Mark Zuckerberg

And when something becomes an industry standard, others build around it. So all the silicon and systems end up optimized to run it well, which benefits everyone and also works well with the systems we've built. That's just one example of how it becomes very effective. So an open-source strategy can be a good business strategy; people don't fully appreciate that yet.

Jensen Huang

I think this really matters; Llama genuinely matters. We built a concept around it called the AI Factory, the AI Foundry, so we can help everybody build. A lot of people want to build AI, and it matters that they own it, because once they put it into their data flywheel, their company's knowledge gets encoded and embedded in the AI. They can't let that AI flywheel, the data flywheel, the experience flywheel, live somewhere else. Open source lets them keep it. But they don't know how to turn all of it into an AI.

So we created this thing called AI Foundry, where we provide the tools, the expertise, and the Llama technology, and we can help them turn all of it into an AI service. When we're done, they own it. The output is what we call a NIM, an NVIDIA inference microservice. You can download it and run it anywhere you like, including on premises.

We have a whole ecosystem of partners, from OEMs that can run the NIMs to global system integrators that we've trained and worked with to build Llama-based NIMs and pipelines. Now we're helping businesses around the world do this. It's really exciting, and it all stems from Llama being open source.

Mark Zuckerberg

In particular, helping people distill their own models from the big model is going to be a really valuable new thing. As we discussed on the product side, I don't think there's going to be one major AI agent that everybody talks to, and I also don't think there will be one model that everyone uses.

Jensen Huang

We have chip-design AIs and we have software-coding AIs. Our coding AI understands USD, because we write USD for Omniverse. We have a coding AI that understands Verilog, our Verilog. We have an AI that understands our bug database and knows how to help us triage bugs and route them to the right engineers.

Each of these AIs is fine-tuned from Llama. We fine-tune them and put guardrails around them. If we have a chip-design AI, we don't want it answering political or religious questions, so we add guardrails. Basically, every company is going to have AIs for every function, and they need help building them.
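The guardrails Huang mentions are typically a lightweight check wrapped around the fine-tuned domain model, on both the request and the response. Below is a minimal sketch of that control flow; the keyword list, refusal text, and `chip_design_model` callable are illustrative placeholders (production systems would use a dedicated safety model or policy engine rather than keyword matching).

```python
# Minimal guardrail sketch: screen requests before and after they reach a
# fine-tuned chip-design model. Keyword lists and the model callable are
# illustrative placeholders, not a production policy.
BLOCKED_TOPICS = ("election", "politics", "religion", "political party")
REFUSAL = "I'm a chip-design assistant and can't help with that topic."

def guarded_answer(user_prompt: str, chip_design_model) -> str:
    """chip_design_model is whatever fine-tuned Llama endpoint you host."""
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL  # refuse off-domain (political/religious) requests
    answer = chip_design_model(user_prompt)
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL  # the same check can run on the model's output
    return answer

# Example with a stand-in model function:
print(guarded_answer("How do I reduce clock skew in this netlist?",
                     lambda p: "Consider balancing the clock tree..."))
```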

Mark Zuckerberg

A big question going forward is to what extent people will use the larger, more sophisticated models versus training their own models for their specific uses. At the very least, there will be very broad adoption of many different models.

Jensen Huang

We use the largest model. The reason is that our engineers' time is precious. Right now we're optimizing the performance of the 405B model. The 405B model doesn't fit on any single GPU, no matter how big, which is why NVLink performance is so important: every GPU is connected through non-blocking switches, and in HGX, for example, there are two such switches. We've made it efficient to run the 405B model across all of those GPUs. We do this because engineering time is so valuable to us that we want to use the best model possible; frankly, the marginal cost doesn't matter. We just want to make sure we're giving them the best-quality results.
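A back-of-envelope calculation shows why the 405B model has to be sharded across many NVLink-connected GPUs rather than living on one. The figures below assume 16-bit weights and 80 GB of memory per GPU, typical of an H100; activations and KV cache add even more on top.

```python
# Why a 405B-parameter model spans many GPUs (assumes FP16/BF16 weights
# at 2 bytes per parameter and 80 GB of HBM per GPU, as on an H100).
params = 405e9
bytes_per_param = 2
gpu_memory_gb = 80

weights_gb = params * bytes_per_param / 1e9   # ~810 GB just for the weights
min_gpus = -(-weights_gb // gpu_memory_gb)    # ceiling division

print(f"weights: {weights_gb:.0f} GB")                         # 810 GB
print(f"minimum GPUs for the weights alone: {int(min_gpus)}")  # 11
# In practice the model is sharded (tensor parallelism) across a node of
# NVLink-connected GPUs so each layer's pieces can exchange activations fast.
```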

Mark Zuckerberg

I'll add that the inference cost of the 405B is about half that of GPT-4o, which is already pretty good. And when people want to run something on device or just use a smaller model, they distill it down. That's a whole different set of services.

Jensen Huang

And the AI runs all the time. Let's say we hire an AI to design a chip at, say, $10 an hour. If you're using it constantly and sharing it across many engineers, so each engineer has an AI working alongside them, it isn't expensive. We pay engineers a lot of money, so a few dollars an hour to amplify someone that valuable is well worth it.

If you haven't hired an AI yet, do it now. That's our message. Let's talk about the next wave. I really like the work you do in computer vision. One model we use a lot internally is Segment Anything. We're now training AI models on video so we can model the real world better.

In our case it's used mainly for robotics and industrial digitalization, connecting these AI models to Omniverse so we can better model and represent the physical world; robots work better when they learn in these Omniverse worlds. Your Ray-Ban Meta glasses application, and the vision of bringing AI into the virtual world, is really interesting. Tell us about it.

Mark Zuckerberg

Well, we're actually presenting the next version of the model, Segment Anything 2, here at SIGGRAPH. It's faster now, and it works on video as well. Because it's open, a lot of the more serious applications across many industries can use it too; scientists use it to study things like coral reefs and natural habitats and how they evolve. Being able to do this on video, zero-shot, and interact with it by telling it what you want to track, is pretty cool research.

Jensen Huang

Here's an example of why we use this kind of thing. Say you have a warehouse full of cameras, and the warehouse AI is watching everything. Suppose a stack of boxes falls, or someone spills water on the floor, or some other incident is about to happen. The AI recognizes it, generates a text description, sends it to someone, and help is on the way. That's one way to use it. Instead of recording everything all the time, it only records the important parts, because it understands what it is watching: when an incident occurs, it can capture the video around that moment and play it back. So a video-understanding model, a video-language model, is really useful for all these applications. What else are you working on?
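The monitoring pattern Huang describes boils down to a rolling frame buffer, an incident detector, and an alert plus a saved clip only when something happens. Below is a minimal sketch of that loop; `detect_incident` and `send_alert` are hypothetical stand-ins for a video-language model and a notification hook, and the buffer length is an arbitrary choice.

```python
# Event-triggered monitoring sketch: keep a short rolling buffer of frames,
# run an incident detector on it, and only save footage and alert someone
# when something actually happens.
from collections import deque
import cv2  # OpenCV for camera capture and video writing

BUFFER_SECONDS, FPS = 10, 15
buffer = deque(maxlen=BUFFER_SECONDS * FPS)  # roughly the last 10 seconds

def detect_incident(frames):
    """Return a text description ('boxes fell', 'liquid spill') or None.
    Placeholder for a video-understanding / video-language model."""
    return None

def send_alert(message: str) -> None:
    print("ALERT:", message)  # placeholder for an email/Slack/pager hook

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    event = detect_incident(list(buffer))
    if event:
        send_alert(event)
        # Save only the clip around the incident instead of 24/7 footage.
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("incident.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), FPS, (w, h))
        for f in buffer:
            writer.write(f)
        writer.release()
        break
cap.release()
```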

Mark Zuckerberg

Well, there are the smart glasses. Let's split the next computing platform into mixed reality headsets and smart glasses. Smart glasses are easier to understand because almost everyone who wears glasses will eventually upgrade to smart ones, and more than a billion people in the world wear glasses, so it's going to be a very big market. VR and MR headsets, some people find great for gaming or other uses, and some people aren't interested in yet. My view is that both will exist. Smart glasses will be the mobile-phone equivalent of the next computing platform, and mixed reality headsets will be more like your workstation or game console: what you sit down with when you want more compute for a more immersive session. Glasses are small and have a lot of constraints, just as you can't do the same level of computation on a phone.

Jensen Huang

It's just in time for all the breakthroughs in generative AI.

Mark Zuckerberg

For smart glasses, we're attacking the problem from two directions. On one side, we're building the technology needed for the ideal holographic AR glasses: all the custom silicon work, the custom display-stack work, everything it takes to make them work. And they're glasses, not a headset; unlike a VR or MR headset, they look like glasses. But there's still a big gap between them and the glasses you wear today.

Even very good Ray-Bans can't yet hold all the technology needed for holographic AR. We're getting closer, and we'll be closer still over the next few years; it will still be expensive, but it will start to be a real product. The other angle is to start from glasses that look great, working with the best eyewear maker in the world, EssilorLuxottica, which owns all the big brands like Ray-Ban, Oakley, and Oliver Peoples. It's basically EssilorLuxottica's world.

So we teamed up with them, and the Ray-Ban Meta glasses are now in their second generation. The goal is to constrain ourselves to great-looking glasses and pack in as much technology as we can, knowing we can't reach the full ideal yet, but ending up with glasses that look good. They now have cameras for photos and video, can livestream to Instagram, and can take WhatsApp video calls and show the other person what you're seeing. They have microphones and open-ear speakers, which many people find more comfortable than earbuds.

You can listen to music as a kind of private experience, and you can take calls. And it turns out that sensor package is exactly what you need to talk to an AI. That was something of an accident. If you had asked me five years ago whether we'd get holographic AR or AI first, I would have said holographic AR, because display technology and virtual and mixed reality keep advancing.

But the breakthrough in LLMs changed that. We have high-quality AI that's improving very quickly, and it arrived before holographic AR. We're lucky we were already working on these products. Eventually, there will be a range of eyewear products at different price points and levels of technology. I'd guess display-free AI glasses at around $300 will be a big product that millions of people end up owning. So you'll have a super-interactive AI you can talk to.

Jensen Huang

You've demonstrated visual language understanding on them. You have real-time translation: you could speak to me in one language and I'd hear it in another.

Mark Zuckerberg

The display will obviously be great too, but this will add a bit of weight to the glasses and will also make them more expensive. So there will be a lot of people who want that kind of holographic display, but there will also be a lot of people who want to end up with really thin glasses.

Jensen Huang

In industrial applications and some work applications, we need that kind of holographic display.

Mark Zuckerberg

The same is true in consumer goods.

Jensen Huang

Do you think so?

Mark Zuckerberg

I thought about this a lot during the pandemic, when everyone was working remotely. It's great that we have the tools we have, but we're not far from being able to hold virtual meetings where I'm not physically there, it's my hologram, and it feels like we're both there in person, able to work and collaborate together. That becomes especially important with AI.

Jensen Huang

I can accept a device that doesn't need to be worn all the time.

Mark Zuckerberg

But we'll get there. With glasses there are thinner frames and thicker frames, and all kinds of styles. So I don't think we're far from having a form of holographic glasses; fitting that into a stylish, slightly thick-rimmed pair isn't far off.

I'm trying to become a bit of a style leader so I can influence how these glasses look before they hit the market. I can see where it's going, but it's still early. If a big part of the future of this business is making fashionable eyewear that people actually want to wear, then I should start paying a lot more attention to that.

So I may need to retire the version of me that wears the same thing every day. Glasses are like that: unlike a watch or a phone, people really don't want to all look the same. So it will favor an open ecosystem, because there's going to be huge demand for different styles and designs. Not everyone is going to want to wear the same pair of glasses; that just won't work.

Jensen Huang

You're right, Mark, these are incredible times; the whole computing stack is being redefined. Think about how we view software: from the first generation of software to the second, and now we're basically entering the third. The way we compute, from general-purpose computing to generative neural-network processing, and the capabilities and applications it enables, would have been unimaginable in the past.

Whatever form it takes, whether general-purpose or visual intelligence, I can't remember another technology that has reached consumers, enterprises, and the scientific world this quickly, cutting across fields like climate technology, biotechnology, and the physical sciences. Generative AI is driving a fundamental shift in every field it touches, and beyond that, it will have a profound impact on society and on the products we make.

I've been asked whether there's going to be a Jensen AI. That's exactly the creator AI you were describing: we build the AI ourselves, load in everything I've written, and fine-tune it on the way I answer questions. Hopefully, with use over time, it becomes a genuinely excellent assistant and companion. It's non-judgmental, so you can interact with it at any time. These are really incredible things. And we write a lot.

Imagine just giving it three or four topics and having it write in my tone as a starting point. There's so much we can do right now. It's really been great working with you. I know building a company isn't easy; you've taken yours from desktop to mobile, to VR, to AI, across all these devices. The landscape has shifted many times, and I know how hard that can be. We've both taken plenty of hits over the years, but that's what it takes to be pioneers and innovators. So it's really great to watch what you're doing.

Mark Zuckerberg

I'm not sure it counts as a pivot if you keep doing the same thing you did before, just more of it. But there are many more chapters ahead for all of this. And it's been the same watching you; it's been fun to see the journey you've taken. You went through a phase when everybody thought everything was going to move to these devices and computing would just get super cheap, and you kept going the other way, betting that you'd actually want these big systems that do parallel processing.

Jensen Huang

We went the other way: instead of making smaller and smaller devices, we built whole computers. We started out making graphics chips, GPUs, and now when you deploy a "GPU" you still call it a Hopper H100, but when Mark deploys H100s, his data centers are approaching 600,000 of them. You're great customers.

Mark Zuckerberg

One day you called and said, "In a couple of weeks we're doing this thing at SIGGRAPH." I said I didn't have anything on my calendar that day, and it sounded fun.

Jensen Huang

Exactly. You were free that afternoon, and you showed up. It's incredible, these systems you've built: they're enormous, hard to coordinate, hard to run. You say you got into GPUs later than most, but you're operating at a larger scale than almost anyone. It's incredible. Congratulations on everything you're doing.

This article was written by [Jiang Tian Tim], WeChat official account [There is a new Newin], and published on Everyone Is a Product Manager as original / authorized content. Reproduction without permission is prohibited.

The header image is a screenshot from SIGGRAPH 2024.
