The Rise of OpenAI Revealed: Born Out of Thin Air, Is AI Heaven or Hell

Tencent Technology

2024-06-28 06:30 | Official account of Tencent News Technology Channel

Highlights:

  1. Altman's vision of artificial intelligence stands in stark contrast to his sister Annie's near-homeless life, underscoring the need to weigh society's well-being while pursuing innovation.

  2. OpenAI's internal disagreement over the direction of AI development has sparked in-depth debate about AI ethics and safety.

  3. As artificial intelligence advances rapidly, safety has become a central concern, attracting huge investments, including from many tech tycoons.

  4. In public, Altman talks up the positive potential of AI, but behind the scenes he has stockpiled supplies in preparation for an AI apocalypse.

Tencent Technology News reported on June 28 that, according to foreign media reports, since OpenAI released the AI chatbot ChatGPT at the end of 2022, its CEO Sam Altman has quickly risen to become a star of the technology industry. OpenAI now ranks among the world's most valuable startups, and Altman spends much of his time as a spokesperson for the future of artificial intelligence, frequently traveling the globe to meet with dignitaries. So how did he amass such enormous influence?

Foreign media recently launched the audio series "Foundering: The OpenAI Story," which explores in detail Altman's rise and how he stood out in the fierce race to build leading AI technology. The show also touches on the threat artificial intelligence may pose to humanity's survival, as well as a boardroom coup that nearly upended everything, briefly leaving Altman in a desperate position before he ultimately prevailed.

In these programs, reporters interview a number of leading figures in artificial intelligence, trying to cut through the hype and probe whether AI is a tool to improve human life or a potential destroyer. More importantly, this is also the story of Altman, who sits at the heart of it all. Foreign media spoke with Altman's friends, family, and associates to try to demystify him and trace how he stepped into the center of power.

The program has a total of five episodes, which Tencent Technology will publish in succession.

This is the third episode, which delves into OpenAI's internal controversies and Altman's rise as an AI hero. The material is divided into two parts: the first (episode 3) focuses on OpenAI's internal conflicts and Sam's personal branding; the second (episode 4) will focus on the life of Altman's sister Annie Altman, as well as the debate around poverty and the AI economy.

Previous episodes:

The Rise of OpenAI Revealed: A Master of Power, Altman Amassed Huge Influence Through His Connections

The Rise of OpenAI Revealed, Part Two: Insiders on Musk's Exit, and How Altman Took His Place

The following is the text of the third episode:

01 AI Utopia: The Clash of Ideals and Reality

Our story begins in the lush jungle of Hawaii's North Shore. Today we embark on a drive through dense forest, past the twin waterfalls. In this vibrant land, Annie Altman, Sam Altman's sister, rides along in the passenger seat.

Annie's life has been one of constant moves. For the past two years she has searched for a stable place to live, only to be repeatedly frustrated: from a newly built dwelling, to the floor of a friend's house, to a stranger's living room, her life is full of uncertainty. In a single year she moved 22 times, sometimes staying only a day or two. Her story is an important counterpoint to Altman's, revealing the hidden side of the technology boom.

Meanwhile, thousands of miles away in San Francisco, her older brother had a brilliant 2023. The huge success of ChatGPT catapulted OpenAI to fame, and Altman was named CEO of the Year by Time magazine. He spent months traveling the world, discussing the future of artificial intelligence with its biggest names and confidently predicting the coming of a utopian world without poverty. Altman's remarks brim with longing for the future. He believes the development of artificial intelligence will bring humanity unprecedented abundance: ample food, accessible health care, and home ownership. His voice has become a clarion call for the AI era, summoning people toward a better world.

However, the reality of Annie's life contrasts sharply with Altman's vision. She struggles on the brink of homelessness while her brother talks onstage about the possibilities of technology. The contrast makes people wonder: can technological progress really benefit everyone? Can the AI utopia Altman champions ever be realized?

This is a story about the interweaving of dream and reality, technology and humanity. The siblings' diverging life trajectories show us the light and dark sides of the AI era. With the birth and popularity of ChatGPT, we cannot help but ask: how will artificial intelligence define our future? In this story we may find some answers, and Annie's life is the most profound footnote to that future. This trip through Hawaii crossed not only jungle but also the tangled terrain between reality and ideals, technology and life. The Altman siblings' story is this era's deepest mirror and its most sincere inquiry into the future.

02 The debate between the ideal and the reality of the AI revolution

As OpenAI grew, its employees gradually came to feel the profound implications of the AI they were building. Artificial intelligence began to exhibit behaviors that even its creators did not fully understand, and this unpredictability caused divisions in the team. Some, including Altman, believed all-powerful machine intelligence would have a positive impact on the world; others feared this force could cut the other way, with irreparable consequences. The two visions are extreme and polarizing, as distinct as heaven and hell.

Amid these internal disagreements, ChatGPT was officially released. It was a defining moment not only for OpenAI but for the entire field of artificial intelligence. During this period, two predictions were hotly debated: that artificial intelligence would drive humanity extinct, and that it would completely redefine work. The first sparked fierce controversy, while the second was relatively easy to accept: jobs would change, and the economy would have to adapt.

In 2021, about a year and a half before ChatGPT's release, OpenAI's technology was rapidly gaining new capabilities. The company was building increasingly powerful models, such as GPT-2 and GPT-3, which, trained on vast amounts of text, could already produce increasingly human-like output. This rapid progress, while a boon to Altman and OpenAI, also made some employees nervous, even those at the top.

Dario Amodei, then vice president of research at OpenAI, described his anxiety on Logan Bartlett's podcast. He recalled how he felt on first seeing GPT-2 in 2018 or 2019: "I was scared. This is crazy. Nothing like this has ever existed in the world." Looking at the model he had helped build, he felt a sense of dread: "Why am I afraid? This thing is going to be very powerful. It could destroy us, and if we don't understand such a model, if it wants to create chaos and destroy humanity or whatever, I don't think there's basically anything we could do to stop it."

This unease culminated in 2021, when Amodei left along with six OpenAI colleagues to found a rival large-language-model company, Anthropic, which focuses on building safe artificial intelligence. They believed such models should be developed inside an organization that genuinely applies safety principles from start to finish. Their departure was a major blow to OpenAI, casting a shadow of distrust and hinting at doubts about whether Altman could make the right decisions.

At the end of 2022, OpenAI released ChatGPT, which did not immediately cause a sensation. Gradually, people began to try it, talk about it, and embrace it. ChatGPT was not a wholly new technology; it was built on the existing GPT-3.5 model. It stood out because it presented AI in an innovative way: as a free, user-friendly chat tool. The interaction felt smooth and natural, like talking with a real person who could compose answers on the fly. Although the AI-generated responses sometimes required further verification, people showed great interest in and fascination with this novel way of interacting.

ChatGPT reached 100 million users in just two months, setting a record at the time for the fastest user growth. By comparison, TikTok took nine months and Instagram two and a half years to reach the same scale. ChatGPT appears intelligent even when it errs, and it prompted many people to start thinking about how AI would change their lives. The technology immediately found practical applications: people used it to write code faster, translate documents more fluently, and draft emails. Students used it for homework help, though this also raised cheating concerns. All this excitement and attention was a huge boost for OpenAI.

OpenAI's brand awareness and success skyrocketed, and people immediately saw it as the hot AI company, a leader in the field. Altman's public profile rose with it: artificial intelligence became the topic of 2023, and Altman, its protagonist, became a household name. The tech industry and investors rushed to pour money into AI companies, and billboards touted new AI startups. The ChatGPT craze not only sparked broad interest in cutting-edge AI; it also amplified the worries of a small group of longtime AI watchers who believe AI will grow ever more powerful, may eventually slip out of control and threaten us, and that we must find a way to save humanity from it.

03 AI security is becoming mainstream

Before leaving OpenAI to found Anthropic, Dario Amodei voiced his fear that AI could destroy the world. That may sound ridiculous, but for many it is an extremely serious issue. In Silicon Valley, the subculture holding that AI could soon destroy us is growing and gaining influence. These ideas are now a major driving force in the field, and many AI scientists feel they are working on what may be the most important thing for humanity, the survival of the entire planet. This belief steers the flow of billions of dollars, determining which problems are addressed and which are ignored.

ChatGPT has brought these concerns into the public consciousness. A UK survey shows that the number of people who believe AI may be the main cause of human extinction has more than doubled in the year since ChatGPT was released. To be clear, this view is highly controversial, and even in Silicon Valley, many people believe that AI annihilation is a religious ideology, but there are also many who believe in it.

Chiau Yuen, a tech worker, has come to believe that artificial intelligence may one day threaten the very existence of humanity. He arrived at this view after reading the writings of Eliezer Yudkowsky, an influential AI doomsday prophet, which convinced him that a future dominated by superintelligent artificial intelligence might soon arrive.

"I realized that this was happening, and we were approaching that moment, which could come in my lifetime, even before I retired," Chiau Yuen said. Acting on this belief, he made a deliberate decision: he stopped saving for retirement. To those who think this is foolish, Chiau has an answer: "I really think that by the time I retire, money will have lost its value. By then the world may have changed beyond recognition; we may no longer exist, or we may live in an unimaginable post-singularity era. At that point, money and traditional values alike become irrelevant." (The singularity is the hypothesized moment when machines become smarter than humans.)

For Chiau, it is not just about skipping retirement accounts. Once he truly began to believe this, the rest of his life no longer seemed to matter. After five years of study he dropped out of his mathematics PhD program, and he began to drift away from old friends who, he believed, did not grasp the gravity of the situation. "It's like a black hole deep inside me, quietly changing my sense of what matters," Chiau said. "The more I looked into it, the more everything else felt insignificant. Perhaps this is the only major issue that deserves attention."

It may sound frightening, extreme, even absurd. But talk to experts in artificial intelligence and people in Silicon Valley, and you will find the view is not uncommon. Sometimes it arrives as a whisper; sometimes it is discussed openly and loudly. They may use jargon such as AI safety, AI alignment, or AI risk, but in the end they are circling the same core questions.

In the world of artificial intelligence, many influential figures are convinced that in 20 years the world will look very different from today. Some expect it to be better than ever; others worry about the catastrophe it could bring and argue that we should devote ourselves to averting a catastrophic AI ending. That belief drives their commitment to making the future of AI safe and manageable.

Over the past 5 to 10 years, many well-known and wealthy tech figures have poured money into this emerging field. Facebook co-founder Dustin Moskovitz and Sam Bankman-Fried, a controversial figure in cryptocurrency, have pledged hundreds of millions of dollars to AI safety projects. Leading figures in the tech world have signed public statements warning of the risks AI may pose to human civilization. Chiau and his colleagues believe it is vital to make the threat known to the wider public, and like missionaries they go to great lengths to spread the word at all kinds of events. People who were skeptical at first, after some in-depth discussion, ended up either thoroughly persuaded or deeply frightened.

Chiau added: "It's a scary thought. Imagine what a terrible prospect it would be if all of this ceased to exist within ten years. Psychologically, being asked to seriously consider that possibility triggers a kind of self-preservation. At first it seems too ridiculous to credit: how could that happen? But if somebody spends four hours saying to you, with extreme seriousness, 'What if it's true?' then the fear sets in. At that point you can't help asking yourself: what should I do?"

Chiau works in organizations dedicated to AI safety and actively participates in rationality workshops, trying to spark more young people's interest in and commitment to AI safety. In this camp's view, what is at stake is the survival of our entire galaxy, and the most important task right now is to ensure that AI's future development is safe and controllable. However, some extremely intelligent people, including academics and researchers, find this AI-apocalypse frenzy both frustrating and misleading, among them Emily Bender, a specialist in computational linguistics, and her colleagues. They argue the frenzy is a distraction from the tangible harms happening in the real world: labor exploitation, data theft, discrimination, and bullying. The more we indulge in exciting, action-movie fantasy risks, the less time and resources remain to address the actual harm happening now.

04 The double-edged sword of individual heroism in the AI era

Emily Bender and other experts warn that talk of an AI apocalypse is a huge distraction, like a constantly blaring siren screaming that the end of the world is coming and grabbing all attention. The panic makes it easy to overlook the practical problems posed by current AI technologies, such as racial bias in the criminal justice system and the unlicensed use of artists' copyrighted works to train models. That is to say nothing of how these technologies are used in surveillance, how they are over-applied to police communities, and how inaccurate facial recognition is at identifying dark-skinned women. A Google search for "black girl" has often surfaced indecent content. Our ability to find and trust reliable information is at stake, and these issues span domain after domain, from public health to democracy.

Emily reminds us that these are immediate harms: AI is hurting people in these ways right now. Yet we waste time discussing fantastical scenarios simply because some rich people have started paying attention to them. The apocalypse frenzy also doubles as AI hype: if these systems are really powerful enough to destroy the world, they must be very powerful indeed. That hype matters, because it makes super-powerful AI systems attractive to investors and employees alike. Emily argues that apocalyptic beliefs about AI are not only harmful but misleading. Still, she acknowledges that many people who care about AI safety are sincerely motivated; for some it is a true conviction. Those deep inside these beliefs see themselves as the heroes of the story, trying to stop superintelligent AI from taking over the world. The belief is sincere, even if somewhat misplaced.

Emily touches on a real issue here: the urge to be a hero has become one of the driving forces in the AI industry. Chiau again mentions this very masculine desire to be a hero. Most people never get a chance to be heroes on any meaningful level. The idea is not just "what if I could be a hero," but "what if I could be a hero through my intelligence." For some, this is hard to admit, because it sounds a little immature.

Chiau is blunt about the motivations of many in this field: they want to feel important, to feel their work carries cosmic significance. People want to feel their lives have made a difference, and that is part of the appeal: "if this is the most important era in human history, the choices we make now will shape the future." Those who dive deep into these questions on the social media platform X use very exaggerated language: "We're going to conquer the stars!" "We're going to build a galactic civilization!" Some may take these claims for jokes, but that is what they believe. They sincerely hold that because we are building artificial intelligence, this is the most important era in human history.

Altman's speeches convey the same message again and again: the artificial intelligence we are building will be epoch-making. In an interview with foreign media, Altman emphasized the opportunity now facing humanity, calling it the kind that comes along once in centuries. He believes humanity has a chance to rewrite the socio-economic contract so that everyone is included in the system and comes out a winner, all while avoiding self-destruction along the way. He talked about what it means to create artificial intelligence that surpasses human capability, how it will affect our humanity, what the world will look like, and humanity's place in that world. He asked how such powerful technology can be shared equitably, and how to ensure that it is not just a few people in San Francisco who make the decisions and reap all the benefits. In Altman's vision, the AI he is building will reshape the human world, humanity itself, and the social contract. There is a heroism in his words that echoes exactly what Emily and Chiau describe.

05 Altman Privately Prepares for the Apocalypse

In the American tech world, it is not uncommon for someone like Altman to think he is smart enough to solve the big problems: tech billionaires planning to build a new city near San Francisco, or a crypto industry convinced it has invented a better financial system. Altman may likewise see himself as a hero, and he understands the motivating power of a compelling story. In 2019 he even hired a novelist, Patrick House, a neuroscientist and author, to write a science fiction short story for OpenAI. Although House is not sure whether OpenAI still uses his work, the commission shows that they recognize the value of stories.

Altman is clearly influenced by certain genres of fiction, and many startups are story-driven, with narratives often borrowed from science fiction. A powerful story inspires employees to work harder. How does a secular city like San Francisco inspire people to devote themselves to causes that might avert the end of the world? Perhaps by supplying a founding document and an apocalyptic myth to be averted. That kind of narrative has been a well-known motivator throughout history. All of which makes us wonder: does Altman really believe artificial intelligence could destroy humanity?

In fact, Altman's views on artificial intelligence have shifted noticeably over the years. In 2015 he made clear his concern that advanced artificial intelligence could be the greatest threat to humanity's continued existence. In early conversations he would remark, in a half-joking tone, that artificial intelligence had the potential to end the world. That attitude surfaced in a New Yorker profile that portrayed Altman as a doomsday prepper.

He told The New Yorker that he had stockpiled guns, gold, ammunition, antibiotics, and gas masks from the Israel Defense Forces. He also mentioned owning a piece of land in Big Sur, California, to ensure he could take refuge if the world ended. To anyone familiar with Big Sur, this may sound rather ironic: the land there suffers severe erosion, bringing rock slides, road closures, and difficulty getting food and supplies. It seems an unstable spot, far from ideal for a doomsday bunker.

When questioned by foreign media, Altman tried to play it down, saying it was nothing more than entertainment, the indulgence of a young man's survival fantasy. In the years since, Altman has conspicuously avoided mentioning the Big Sur property or his stockpile of guns. His sister Annie says that even though she has never seen the place, it fits what she knows of Altman, a man who is extremely safety-conscious and prepared to hoard resources against the worst case. One can speculate that Altman does possess these reserves and simply chooses not to discuss them publicly.

These actions and remarks reveal not only Altman's keen awareness of AI's potential risks but also a deep survival instinct and personal preparation for an uncertain future. Although his views have shifted over time, his concern about the far-reaching impact AI technology can have has remained constant. In public, Altman may focus on AI's positive potential and the good it can do for society, but his private vigilance and preparedness for the risks has never relaxed.

Altman's insight and vision helped build the world's leading artificial intelligence company, and as the company's reputation grew, so did his personal popularity. He has gradually moved away from extreme rhetoric about the potential for AI to destroy humanity in favor of a more cautious approach to how society will be profoundly transformed by AI, optimistically predicting that this change will generally lead to a positive future.

Since rising to fame with the groundbreaking ChatGPT, Altman has begun to present himself and OpenAI as a more neutral, rational force. Although his views seem to shift with time and circumstance, some people have privately expressed dissatisfaction with him, believing he sometimes tells an audience what it wants to hear. If sounding more moderate will win him more support and recognition, he adjusts his rhetoric accordingly.

Altman understands the power of a compelling apocalyptic story. We tend to be more easily seduced by dramatic fears and to ignore slow, continuous risks. At a 2015 Airbnb conference, Altman explored this in depth using nuclear energy as an example, one that applies equally to assessing the risks of artificial intelligence. There is a confidence in his tone, even a hint of arrogance, as though he alone had noticed something rational that everyone else had overlooked. He stressed that people are acutely sensitive to dramatic extremes and routinely underestimate incremental risks.

Altman pointed out that nuclear energy is in fact thousands or even tens of thousands of times safer than coal. Yet most people would still rather live near a coal plant than a nuclear plant; they would rather die of lung cancer thirty years on than face the dramatic death of a nuclear catastrophe. This miscalculation is the human norm: we underestimate slow, persistent problems, such as misinformation and racial bias, and overestimate dramatic events that would have enormous impact but are unlikely to occur.

Perhaps it is precisely this appetite for dramatic disaster that keeps the AI industry telling us catastrophic stories. Altman's two-sided persona at once showcases the boundless promise of artificial intelligence and betrays his deep understanding of, and hidden worry about, the technology's risks. (Compilation/Mowgli)
