Ilya Sutskever, co-founder and former chief scientist of OpenAI, and former OpenAI engineer Daniel Levy have joined forces with Daniel Gross, an investor and former partner at startup accelerator Y Combinator, to create Safe Superintelligence, Inc. (SSI). The new company's goal and product are clear from its name.

SSI is an American company with offices in Palo Alto and Tel Aviv. It will advance artificial intelligence (AI) by developing safety and capabilities side by side, the three founders said in an online announcement on June 19. They added:

“Our single-minded focus means we are not distracted by administrative costs or product cycles, and our business model means safety, security and progress are all shielded from short-term commercial pressures.”

Sutskever and Gross were already worried about AI safety

Sutskever left OpenAI on May 14. He was involved in the firing of CEO Sam Altman and played an ambiguous role at the company after stepping down from its board following Altman’s return. Daniel Levy was among the researchers who left OpenAI in the days after Sutskever’s departure.

Related: OpenAI makes ChatGPT ‘less granular,’ blurring the distinction between writer and AI

Sutskever and Jan Leike were leaders of OpenAI’s Superalignment team, created in July 2023 to study how to “guide and control AI systems smarter than us.” These systems are called artificial general intelligence (AGI). OpenAI allocated 20% of its computing power to the Superalignment team at the time of its creation.

Leike also left OpenAI in May and is now a team lead at Anthropic, an AI startup backed by Amazon. OpenAI defended its safety measures in a lengthy X post from company president Greg Brockman, but disbanded the Superalignment team after its researchers left in May.

Other tech bigwigs are also worried

Former OpenAI researchers are among the many scientists concerned about the future direction of artificial intelligence. Ethereum co-founder Vitalik Buterin described artificial general intelligence as “risky” amid the staff turnover at OpenAI. He added, however, that “such models also pose far less risk of disaster than corporate and military paranoia.”

Source: Ilya Sutskever

Tesla CEO Elon Musk, an early OpenAI backer, and Apple co-founder Steve Wozniak were among more than 2,600 technology leaders and researchers who called for a six-month pause on the training of AI systems while humanity considered the “profound risks” they represent.

The SSI announcement says the company is hiring engineers and researchers.

Magazine: How to Get Better Crypto Predictions from ChatGPT, Humane AI Pin Slammed: AI Eye
