Since OpenAI's founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They've touted the company's unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.
But this week, the news broke that OpenAI will no longer be controlled by the nonprofit board. OpenAI is turning into a full-fledged for-profit benefit corporation. Oh, and CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.
In an announcement that hardly seems coincidental, chief technology officer Mira Murati said shortly before that news broke that she was leaving the company. Employees were so blindsided that many of them reportedly reacted to her abrupt departure with a “WTF” emoji in Slack.
WTF indeed.
CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.
What! You mean he stands to profit billions after lying about his intentions?! A techbro would never!!
Comedy goldmine:
They could get up to 100 times what they put in, but beyond that, the money would go to the nonprofit, which would use it to benefit the public. For example, it could fund a universal basic income program to help people adjust to automation-induced joblessness.
“If OpenAI were to retroactively remove profit caps from investments, this would in effect transfer billions in value from a non-profit to for-profit investors,” said Jacob Hilton, a former OpenAI employee who joined before it transitioned from a nonprofit to a capped-profit structure.
I’m sure the investors weren’t selling him on the idea that if they got a bigger return he would as well, surely.
I think that over the next few years Sam Altman is going to learn the same lessons that events have been trying to teach Elon Musk since circa 2021.
- You didn’t build that. The people who work for you did.
- Being a big hero is contingent on you and your behavior, and can change.
- Those people who are giving you all this money aren’t your comrades. When your usefulness is at its end, they won’t give you a second thought.
Elon Musk is doing fine though
Yeah, if anything, Musk is likely an example of what he’s aspiring to be.
Can’t sell you out if you never bought in
About time they rebrand as ClosedAI.
shocked pikachu face
Tech obsessives, tech obsessing
Capitalists capitalising.
It’s WeWork and Adam Neumann all over again.
You couldn’t pay me to invest in this shit, and it feels a little insane that seemingly intelligent VCs are doing so.
Don’t give them your data folks!
You don’t know what your inputs will be used for in the future, and nobody was thinking that Facebook posts from 2000 would become a large piece of the training data for these LLMs lol
Definitely.
Also, don’t invest in companies that hand total control to one person. That’s a recipe for having that one idiot blow all of your money, like Adam Neumann did. (Fun fact: Toward the end of WeWork’s heyday, Neumann was burning $3k in cash a minute.)
i want them trained on me so that our future robot overlords will respect me… maybe create some simulacrum of my consciousness to live on forever
You think they would expend resources recreating nobodies like us? Sam gets his digital-construct immortality and we get squat.
i’ve been obsessively commenting on reddit for years… i’ll live on forever
What, the founder of the cryptoscam Worldcoin is going to cash out of a project sold primarily on hype? Say it ain’t so. /s
Gotta get out while the gettin is good. Otherwise, if you lose the copyright lawsuits… RIP
Generative AI has reached its peak after all.
just sold you out
They been sellin us out since the start. And they never even paid for us!
I don’t know whether Altman or the board is better from a leadership standpoint, but I don’t think it makes sense to rely on boards to avoid existential dangers for humanity. A board runs one company. If that board takes an action that is good in terms of existential risk for humanity but disadvantageous to the company, it will tend to be outcompeted and replaced by those that do not. Anyone doing that has to be in a position to span multiple companies. I doubt that market regulators in a single market could do it, even – that’s getting into international treaty territory.
The only way in which a board is going to be able to effectively do that is if one company, theirs, effectively has a monopoly on all AI development that could pose a risk.
I am SHOCKED.