Is OpenAI melting down because it secretly created something really scary?

“If OpenAI is doing something potentially dangerous to humanity, the world needs to know”

Everyone’s still scrambling to find a plausible explanation for why OpenAI CEO Sam Altman was suddenly fired last Friday, a decision that has led to absolute carnage at the company and beyond.

Beyond some vague language accusing him of not being “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company’s nonprofit board has stayed frustratingly quiet as to why it sacked Altman.

And at the time of writing, the company’s future is still up in the air, with the vast majority of employees ready to quit unless Altman is reinstated.


While we await more clarity on that front, it’s worth looking back at the possible reasons for Altman’s ousting. One particularly provocative possibility has drawn plenty of feverish speculation: that Altman’s role in the company’s efforts to realize a beneficial artificial general intelligence (AGI), its stated number one goal since it was founded in 2015, may have led to his dismissal.

Continue here: Futurism