
The American AI world is shaken! Sutskever, the central figure of the "OpenAI boardroom drama," officially announces his new venture

Cailian Press (Finance Associated Press)

2024-06-20 07:10, published on the official account of Cailian Press under Shanghai United Media Group

Cailian Press, June 20 (edited by Shi Zhengcheng) — On Wednesday local time, Ilya Sutskever — OpenAI co-founder and former chief scientist, a leading authority in deep learning, the man who cast a decisive board vote last year to oust Altman and then, regretting it, helped invite him back, and who resigned from OpenAI last month — officially announced his new AI startup.

(Source: X)

Put simply, Sutskever has founded a new company called Safe Superintelligence Inc. (SSI), with the goal of building a safe superintelligence "in one step."

Safety first, straight to the ultimate goal

Sutskever intends to build a safe and powerful AI system within a pure research outfit, launching no commercial products or services in the near term. "This company is special in that its first product will be the safe superintelligence, and it will not do anything else until that day. It will be completely insulated from outside pressures — no need to deal with a large, complicated product or get stuck in a competitive rat race," he told the media.

Safety first, no commercialization, no outside pressure — Sutskever did not mention OpenAI once in the entire passage, but the subtext is self-evident. Although OpenAI's boardroom drama ended in a swift and decisive victory for Altman, the underlying struggle between the accelerationist and safety camps is far from over.

Despite their philosophical differences, the two sides maintain a cordial relationship in private. On May 15 this year, when announcing his departure from OpenAI after ten years there, Sutskever also posted a group photo with management and expressed his belief that under the leadership of Altman and others, OpenAI will build AGI (Artificial General Intelligence) that is both safe and beneficial.

(Source: X)

Altman responded that he was "deeply saddened" by the parting of ways, and said that without Sutskever, OpenAI would not be where it is today.

Since the end of last year's boardroom fight, Sutskever has kept silent about the whole affair, and remains so to this day. Asked about his relationship with Altman, he replied simply "good"; asked about his experience over the past few months, he said only "very strange."

"As safe as nuclear safety"

In a sense, Sutskever cannot yet precisely define the line that separates a safe AI system from an unsafe one, saying only that he has some different ideas.

Sutskever hinted that his new company will pursue safety through "engineering breakthroughs embedded in the AI system" rather than "guardrails" bolted on after the fact. "By safety, we mean safety in the sense of nuclear safety, not in the sense of 'trust and safety,'" he stressed.

He said he has spent many years thinking about the safety side of AI and already has several approaches in mind. "At the most basic level, a safe superintelligence should have the property that it will not harm humanity at scale," he said. "After that, we can say we want it to be a force for good, a force built on key values."

Besides the well-known Sutskever, SSI has two other founders: Daniel Gross, a former head of machine learning at Apple and a prominent tech venture capitalist, and Daniel Levy, an engineer who trained large models at OpenAI alongside Sutskever.

"My vision is exactly the same as Sutskever's: a small, elite team, everyone focused on the single goal of safe superintelligence," Levy said.

While it is unclear what gives SSI the confidence to aim for superintelligence "in one step" (which investors have committed, and how much), Gross made clear that the company does face many problems — but finding money will not be one of them.

A return to OpenAI's original vision

From this series of statements, it is not hard to see that the so-called "safe superintelligence" is essentially OpenAI's founding concept. But as the cost of training large models skyrocketed, OpenAI had to partner with Microsoft in exchange for the funding and computing power to sustain its business.

The same question hangs over SSI's future: are the company's investors really willing to pour in large sums of money and watch it produce nothing until the ultimate goal of "superintelligence" is achieved?

Incidentally, "superintelligence" itself is a theoretical concept, referring to an AI system beyond human level — more advanced than what the world's major tech companies are currently pursuing. There is no industry consensus on whether such intelligence is achievable, or on how to build such a system.

Interestingly, the first sentence of SSI's inaugural announcement reads: "Superintelligence is within reach."

Attached: SSI announcement

Safe Superintelligence Inc.

Superintelligence is within reach.

Building safe superintelligence is the most important technical problem of our time.

We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It's called Safe Superintelligence Inc.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure safety always stays ahead.

This way, we can scale in peace.

Our singular focus means no distraction from management overhead or product cycles, and our business model means safety and technological progress are insulated from short-term commercial pressures.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.

We are building a lean, world-class team of top engineers and researchers focused solely on SSI.

If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age.

Ilya Sutskever, Daniel Gross, Daniel Levy

June 19, 2024

(Cailian Press, Shi Zhengcheng)