OpenAI has disbanded the team created to prevent rogue AIs

Something earthshaking happened at OpenAI last week that you probably didn't read about.


And no, it's not the release of GPT-4o, slyly unveiled on Monday, the day before Google I/O.

Amid the flurry of news around GPT-4o and Google's own litany of AI announcements was a development that is arguably of far greater import.

Superalignment team no more

Last week, OpenAI disbanded a team it set up specifically to allay concerns about supersmart AI systems going rogue on humans.

  • OpenAI chief scientist Ilya Sutskever left on Tuesday.
  • Jan Leike announced he was leaving later that day.
  • Superalignment team dissolved days later.

Ilya and Jan co-led the Superalignment team, and its dissolution comes less than a year after the team was set up with much fanfare in July 2023.

Was it personal?

All looked good on the surface. On X, CEO Sam Altman praised Ilya for his brilliance, calling him a dear friend.

If you recall, Ilya was part of the attempted move to remove Sam from OpenAI, purportedly over concerns about AI safety.

The move to fire Sam didn't stick, and the OpenAI board was eventually replaced instead. Ilya backpedaled within days, and it seemed like things had returned to normal. (Read about it here)

Not enough focus on safety

On Friday, Jan broke ranks, sharing his reasons publicly on X despite the perpetual non-disparagement clause he had undoubtedly signed with OpenAI.

He wrote:

  • He had long disagreed with OpenAI leadership's core priorities.
  • It had become increasingly hard over the past few months to do crucial research.
  • Safety culture and processes had taken a backseat.
  • His team had struggled to get computing resources.

FYI: OpenAI said last year that it would dedicate 20% of its compute over the next four years to solving the superalignment problem.

What happens next?

So what happens now? My prediction: Nothing.

Already, OpenAI has moved to reassure jittery observers with various statements that, upon reading, amount to nice-sounding but meaningless words.

In the meantime, expect OpenAI to keep working to develop AGI, or artificial general intelligence. But without a superalignment team.