
The Sam Altman-backed AI processor company hired a former Apple executive to lead hardware development

This article was compiled by Semiconductor Industry (ID: ICVIEWS).

Jean-Didier Allegrucci joins Rain AI to work on hardware development.


Rain AI, a developer of AI processors backed by OpenAI's Sam Altman and investment banks, has hired former Apple silicon executive Jean-Didier Allegrucci to lead its hardware engineering. The high-profile hire signals that Rain AI has serious plans for its processors.

Jean-Didier Allegrucci, whose LinkedIn profile has yet to reflect the move, spent more than 17 years in Apple's system-on-chip (SoC) division after joining in June 2007, overseeing the development of more than 30 processors for the iPhone, Mac, iPad, and Apple Watch. According to a blog post by Rain AI, Allegrucci was instrumental in building Apple's world-class SoC development team, overseeing areas such as SoC methodology, architecture, design, integration, and verification, so his experience will be invaluable to Rain AI. Prior to joining Apple, Allegrucci worked at Vivante and ATI Technologies, both developers of graphics processing units.

"We are very excited to have a hardware leader like J-D Allegrucci oversee our chip efforts," said William Passo, CEO of Rain AI. Our novel in-memory computing (CIM) technology will help unlock the true potential of today's generative AI models and bring us closer to the fastest, cheapest, and most advanced AI models running anywhere.

At Rain AI, Jean-Didier Allegrucci will work with chief architect Amin Firoozshahian, who joined after a five-year tenure at Meta Platforms and brings the deep industry experience and innovative thinking needed to drive the company's ambitious goals. Building their first system-on-chip at Rain AI will nonetheless take considerable time, as the process often spans many years.

Rain AI's focus is in-memory computing, a technology that processes data where it is stored, loosely mimicking the human brain, and that is expected to deliver significantly better energy efficiency than traditional AI processors. There is currently no mass-produced in-memory computing product on the market; AI workloads run mainly on GPUs such as NVIDIA's H100 and B100/B200 and AMD's Instinct MI300X, which need to be paired with large amounts of memory to perform well.
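As a rough, illustrative sketch of the compute-in-memory idea (not a description of Rain AI's actual design, whose details are not public), the Python snippet below models a resistive crossbar: weights live in the array as conductances, inputs arrive as voltages, and the column currents are the matrix-vector product, so the multiply-accumulate happens where the data is stored rather than in a separate processor. The conductance range and all values are assumptions chosen only for illustration.

```python
import numpy as np

# Toy model of an analog compute-in-memory (CIM) crossbar.
# Weights are stored in the array as conductances G (siemens); inputs are
# applied as a voltage vector v (volts). By Ohm's and Kirchhoff's laws the
# current collected on each column is G.T @ v, i.e. the matrix-vector
# product is computed where the weights are stored, with no weight
# movement to a separate processor. All numbers are illustrative only.

rng = np.random.default_rng(0)

def weights_to_conductance(w, g_min=1e-6, g_max=1e-4):
    """Map real-valued weights into an assumed device conductance range."""
    w_norm = (w - w.min()) / (w.max() - w.min() + 1e-12)
    return g_min + w_norm * (g_max - g_min)

weights = rng.standard_normal((4, 3))   # a tiny 4x3 weight matrix
G = weights_to_conductance(weights)     # "programmed" onto the array
v = rng.uniform(0.0, 0.2, size=4)       # input activations as row voltages

column_currents = G.T @ v               # analog MVM, performed in memory
print("column currents (A):", column_currents)
```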

Earlier this month, Rain AI licensed Andes' AX45MPV RISC-V vector processor for ACE/COPILOT instruction customization, and partnered with Andes' Custom Computing Business Unit (CCBU) to accelerate the development of its in-memory generative AI solutions. The collaboration aims to enhance Rain AI's product roadmap and deliver scalable AI solutions by early 2025.

Considering that developing a complex processor from scratch usually takes years, and that Allegrucci is tasked with helping Rain AI deliver its first SoC in early 2025, processors developed under his leadership look to be at least a few years away.

In-memory computing strategies for startups

With the boom in artificial intelligence (AI), electric power, a core component of AI infrastructure, is attracting more and more attention. AI is undeniably an energy-hungry field, and its data centers and supercomputing centers can fairly be called power-devouring monsters. Some experts estimate that by 2027 the AI industry's annual electricity consumption could reach 85 to 134 TWh, roughly the annual electricity demand of a country like the Netherlands. An earlier report by the International Energy Agency (IEA) likewise noted that data center electricity consumption will rise significantly in the near term, driven by demand from artificial intelligence and cryptocurrencies.
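To put those figures in perspective, a quick back-of-envelope conversion (my own arithmetic, using only the 85 to 134 TWh range quoted above) translates annual energy use into an equivalent continuous power draw:

```python
# Convert the projected annual AI electricity use quoted above
# (85 to 134 TWh per year) into an equivalent continuous power draw.
HOURS_PER_YEAR = 365 * 24  # 8760 hours

for twh_per_year in (85, 134):
    watt_hours = twh_per_year * 1e12               # TWh -> Wh
    average_gw = watt_hours / HOURS_PER_YEAR / 1e9  # average power in GW
    print(f"{twh_per_year} TWh/year is about {average_gw:.1f} GW of continuous demand")
```

That works out to roughly 10 to 15 GW of round-the-clock demand, consistent with the country-scale comparison above.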

Many industry heavyweights have also warned of the energy crisis AI may face. In January, OpenAI CEO Sam Altman acknowledged that the AI industry is facing an energy crisis and warned that AI will need energy breakthroughs, because it will consume far more electricity than people expect. Tesla CEO Elon Musk said at a conference at the end of February that the chip shortage may be over, but that AI and electric vehicles are expanding so quickly the world will face tight supplies of power and transformers next year. Nvidia CEO Jensen Huang put it bluntly: "The endgame of AI is photovoltaics and energy storage! If we only consider computation, we would need to burn the energy of 14 Earths; super AI will become a bottomless pit of electricity demand."

A key reason is that AI requires enormous amounts of data, and shuttling that data back and forth between memory chips and processors consumes a great deal of power. So, for at least a decade, researchers have been trying to save electricity by building chips that can process data where it is stored, an approach generally referred to as "in-memory computing".
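The cost of that data movement can be made concrete with a toy energy model. The per-operation energies below are rough, order-of-magnitude assumptions (loosely in line with commonly cited figures for older process nodes, not measurements from the article or any specific chip): fetching a word from off-chip DRAM costs hundreds of picojoules, while the arithmetic itself costs only a few, which is why computing next to the data can save so much energy.

```python
# Toy energy model for one n x n matrix-vector multiply.
# The per-event energies are order-of-magnitude assumptions, not
# measurements of any real chip.

E_DRAM_READ_PJ = 640.0   # assumed cost of reading one 32-bit word from off-chip DRAM
E_MAC_PJ = 4.0           # assumed cost of one 32-bit multiply-accumulate on-chip

def conventional_energy_pj(n):
    # Every weight is fetched from DRAM before it is used in a MAC.
    return n * n * (E_DRAM_READ_PJ + E_MAC_PJ)

def in_memory_energy_pj(n, cim_mac_pj=1.0):
    # Weights stay in the array; only an (assumed) per-MAC cost is paid.
    return n * n * cim_mac_pj

n = 1024
conventional = conventional_energy_pj(n)
in_memory = in_memory_energy_pj(n)
print(f"conventional: {conventional / 1e6:.0f} uJ, "
      f"in-memory: {in_memory / 1e6:.0f} uJ, "
      f"ratio about {conventional / in_memory:.0f}x")
```

Under these assumptions the data movement, not the arithmetic, dominates the energy bill, which is the gap in-memory computing aims to close.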

In-memory computing still faces technical challenges and is only just emerging from the research phase. But with AI's high energy consumption raising serious questions about its economic viability and environmental impact, technologies that improve AI's energy efficiency could pay off dramatically, which makes in-memory computing an increasingly exciting topic. Major chipmakers such as TSMC, Intel, and Samsung Electronics are all working on it, and individuals such as OpenAI's CEO, companies such as Microsoft, and quite a few government-affiliated entities have invested in startups developing the technology.

Whether the technology will become an important part of AI's future is uncertain. In-memory computing chips are generally sensitive to environmental factors such as temperature changes, which can lead to calculation errors, and startups are exploring ways to mitigate this. Replacing existing chips with a new technology also tends to be expensive, and customers are often hesitant unless they are confident of significant improvements, so startups must convince them that the benefits are worth the risk.
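As a generic illustration of that temperature sensitivity (a toy model, not a description of any vendor's device), the sketch below perturbs the stored conductances of a small analog array with a temperature-dependent drift and reports how far the computed result moves; the drift coefficient is an arbitrary assumption.

```python
import numpy as np

# Generic illustration of why analog in-memory arrays are sensitive to
# temperature: a small drift in the stored conductances perturbs the
# analog matrix-vector product. The drift coefficient (~0.2% per kelvin)
# and all other values are arbitrary assumptions for illustration.

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(64, 64))   # nominal conductances (S)
v = rng.uniform(0.0, 0.2, size=64)           # input voltages (V)

ideal = G.T @ v

for delta_t in (0, 10, 30):                  # temperature rise in kelvin
    drift = 1.0 + 0.002 * delta_t * rng.standard_normal(G.shape)
    perturbed = (G * drift).T @ v
    rel_err = np.linalg.norm(perturbed - ideal) / np.linalg.norm(ideal)
    print(f"temperature rise {delta_t:2d} K -> relative output error about {rel_err:.2%}")
```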

At the moment, in-memory computing startups have yet to tackle the hardest part of AI computing, training new models, a workload largely handled by AI chips designed by companies like Nvidia. These startups do not appear to be planning to compete directly with Nvidia; their goal is to build a business on inference, that is, taking prompts and producing output using existing models. Inference is not as complex as training, but its volume is enormous, so there may be a promising market for chips designed specifically to improve inference efficiency.

In-memory computing companies are still figuring out the best uses for their products. Axelera, an in-memory computing startup based in the Netherlands, is targeting computer vision applications in automotive and data centers. Mythic, based in Austin, Texas, argues that in the short term in-memory computing is ideal for applications such as AI security cameras, but it ultimately hopes its chips can be used to train AI models.
