

Leopold Aschenbrenner, a former security researcher at OpenAI, the company behind ChatGPT, focuses on artificial general intelligence (AGI) in his latest series of articles on artificial intelligence (AI).

Dubbed “Situational Awareness,” the series offers an overview of the state of AI systems and the potential they hold over the next decade. The full series has been collected into a 165-page PDF file, updated on June 4.

In the articles, the researcher pays particular attention to AGI, a form of AI that matches or exceeds human abilities across a wide range of cognitive tasks. AGI is one of several categories of artificial intelligence, alongside artificial narrow intelligence (ANI) and artificial superintelligence (ASI).

Types of artificial intelligence. Source: InnovateForge

“AGI by 2027 is remarkably plausible,” Aschenbrenner said, predicting that such systems would surpass the abilities of college graduates by 2025 or 2026. He wrote:

“By the end of the decade, AGI machines will be smarter than you and me; we will have superintelligence, in the truest sense of the word. Along the way, national security forces will be unleashed the likes of which we have not seen in half a century (…).”

According to Aschenbrenner, AI systems will likely possess intellectual abilities similar to those of a professional computer scientist. He also explicitly predicted that AI labs will be able to train general-purpose language models in minutes, saying:

“To put this in perspective, let’s say GPT-4 training took three months. By 2027, a leading AI lab will be able to train a GPT-4-level model in one minute.”
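To make the scale of that claim concrete, the back-of-the-envelope arithmetic below works out the speedup it implies, using only the figures quoted above (a three-month training run compressed to one minute); the 30-day month is an assumption for illustration, not a number from the report.

```python
# Back-of-the-envelope check of the quoted claim: a ~3-month GPT-4 training
# run compressed into one minute by 2027.
# Assumption for illustration: 1 month = 30 days.

MINUTES_PER_DAY = 24 * 60
gpt4_training_minutes = 3 * 30 * MINUTES_PER_DAY  # ~3 months, per the quote
target_minutes = 1                                # the 2027 figure in the quote

speedup = gpt4_training_minutes / target_minutes
print(f"Implied effective speedup: ~{speedup:,.0f}x")  # ~129,600x
```

In other words, the quoted prediction amounts to a gain of roughly 1.3 × 10^5, or about five orders of magnitude, in effective training throughput.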

Anticipating AGI’s arrival, Aschenbrenner called on the community to confront the reality of AGI. According to the researcher, the “smartest people” in the AI industry have converged on a perspective he calls “AGI realism,” which rests on three fundamental principles related to national security and the development of AI in the United States.

Related: Former employees of OpenAI and Anthropic demand a “right to be warned” about the dangers of AI

Aschenbrenner’s AGI series comes after he was fired from OpenAI over an alleged “leak” of information. Aschenbrenner was also reportedly an ally of OpenAI chief scientist Ilya Sutskever, who was allegedly involved in the failed attempt to oust OpenAI CEO Sam Altman in 2023. Aschenbrenner’s latest series is dedicated to Sutskever.

Aschenbrenner also recently founded an investment firm focused on artificial general intelligence, backed by key investors including Stripe CEO Patrick Collison, according to his blog.

