Regardless of who is right here, I think the enormous egos at play make OpenAI the last company I’d want to develop AGI. First Sam vs. the board, now Elon vs. everyone … it’s pretty clear this company is not in any way being run for the greater good. It’s all ego and money with some good science trapped underneath.
Serious question: does anyone trust Sam Altman at all anymore? From the outside, his public reputation seems to be in tatters, apart from his being tied to OpenAI. I'm curious what his reputation is internally and in the greater community.
At least, I trust him as much as any other foreign[0] CEO.
For all the stuff people complain about him doing, almost none of it matters to me, except for the stuff which isn't proven (such as his sister's allegation) where I would change my mind if evidence was presented. What I don't trust is that Californian ethics don't map well enough onto my ethics, which also applies to basically all of Big Tech…
…but I'm not sure any ethics works too well when examined. A while ago I came up with the idea of "dot product morality"[1] — when I was a kid, "good vs evil" was enough; then I realised there were shades of grey; then I realised someone could be totally moral on one measure (say honesty) and totally immoral on another (say you're a vegan for ethical reasons and they're a meat lover). I figured we might naturally simplify this inside our own minds by saying another person is "morally aligned" (implicitly: with ourselves) when their ethics vectors are pointing the same way as ours.
But more recently I realised that in a high dimensional space, there's a huge number of ways for vectors to be almost the same and yet 90° apart[2].
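That geometric point can be sketched numerically (a hypothetical illustration, not from the thread): if you draw random unit vectors in higher and higher dimensions, their dot products concentrate near zero, i.e. almost every pair is nearly orthogonal even though each pair agrees on many individual coordinates.

```python
import math
import random

def random_unit_vector(dim, rng):
    """Draw a uniformly random direction via normalised Gaussian components."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

rng = random.Random(0)
for dim in (3, 30, 3000):
    # Average |dot product| over 200 random pairs of unit vectors.
    samples = [
        dot(random_unit_vector(dim, rng), random_unit_vector(dim, rng))
        for _ in range(200)
    ]
    mean_abs = sum(abs(s) for s in samples) / len(samples)
    print(f"dim={dim}: mean |cos angle| ~ {mean_abs:.3f}")
```

The mean |cosine| shrinks roughly like 1/√dim, so in a 3000-dimensional "ethics space" two people can match on most axes and still end up close to 90° apart overall.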
He is a salesman selling mildly working snake oil; he was into NFTs a few years ago and jumped to the new snake oil. Everything that Elon is claiming now is real. They manipulated opinion and rode open source to train on public open data sources, and once they had something they could market, they closed everything. It's not up for debate; that is a fact.
Closer alignment with my world view. Due to having grown up in the same milieu for the home country, due to choosing the country in part for its idea of what "good" looks like for the one I moved to.
Also, it tickles me to tell Americans that they are foreign. :P
Yes. I liked him when he was head of YC, I liked him when he was head of reddit for a few days, I like him now. I've never had any issue with him. When they made a capped-profit portion of OpenAI, they explained their reasoning, and I think it's clear we wouldn't have GPT-4 today (or in the foreseeable future) if they stayed purely non-profit.
Hell, capped-profit is more than you can say for any other tech company.
The funniest part of the OpenAI post is where someone comes in breathlessly and says "hey have you read this ACX post on why we shouldn't open source AGI" to the guy who's literally been warning everybody about AGI for decades, and Elon is like: "Yup." Someone was murdered that day. There is nothing more dismissive than a "Yup."
You're generally correct, but what really stings is that Claude 3 Opus released right at the same time. It's superior to GPT-4 in pretty much every way I've tested. The center of gravity has shifted across a few streets to Anthropic seemingly overnight.
The really depressing thing is that the board anticipated exactly this type of outcome when they were going "capped profit" and deliberately restructured the company with the specific goal of preventing this from happening... yet here we are.
It's difficult to walk away without concluding that "profit secondary" companies are fundamentally incompatible with VC funding. Would that be a pessimistic take? Are there better models left to try? Or is it perhaps the case that OpenAI simply grew too quickly for any number of safeguards to be properly effective?
I think the fact that a number of top people were willing to actually leave OpenAI and found Anthropic explicitly because OpenAI had abandoned their safety focus essentially proves that this wasn’t a thing that had to happen. If different leaders had been in place things could have gone differently.
Ah, but isn't that the whole thing about corporations? They're supposed to outlast their founders.
If the corporation fails to do what it was created to do, then I view that as a structural failure. A building may collapse due to a broken pillar, but that doesn't mean we should conclude it is the pillar's fault that the building collapsed -- surely buildings should be able to withstand and recover from a single broken pillar, no?
My sweet summer child. Do you really believe this Anthropic story, and that Anthropic will go any other way? Under late-stage capitalism, there is no other way. Everyone has ideals until they see a big bag of money in front of them. It doesn't matter if the company is a non-profit, for-profit or whatever else.
At the top level, there is the non-profit board of directors (i.e.: the ones Sam Altman had that big fight with). They are not beholden to any shareholders and are directly bound by the company charter: https://openai.com/charter
The top-level nonprofit company owns holding companies in partnership with their employees. The purpose of these holding companies is to own a majority of shares in & effectively steer the bottom layer of our diagram.
At the bottom layer, we have the "capped profit" OpenAI Global, LLC (this layer is where Sam Altman lives). This company is beholden to shareholders, but because the majority shareholder is ultimately controlled by a non-profit board, it is effectively beholden to the interests of an entity which is not profit-motivated.
In order to raise capital, the holding company can create new shares, sell existing shares, and conduct private fundraising. As you can see on the diagram, Microsoft owns some of the shares in the bottom company (which they bought in exchange for a gigantic pile of Azure compute credits).
Except Altman has the political capital to have the entire board fired if they go against him, which makes the entire structure irrelevant. The power is at the bottom, where the technology is being developed and where employees can threaten to walk out into plush jobs with the major shareholders. The power is not where the figureheads sit at the top.
That's a good question, and I agree it was a clever design. I don't know that there is a way to modify the org structure to prevent what happened. As much as I dislike them, a clear non-compete clause added after the structure was in place might have helped, but I'm not sure that's even an option in CA, and having employees re-sign a non-compete would be fraught in itself (better to do it from the start). Still, this does seem like the most relevant application of a non-compete. I'm not a lawyer, and I'm sure they had top-notch lawyers review the structure. Even if OpenAI played the non-compete card, it wouldn't make retention easier if employees were willing to walk (they wouldn't exactly have trouble finding jobs anywhere). Do you know of anything that might have prevented it?
Well, if you squint a little bit, this all looks kind of like a military coup. Through the cultivation of personal loyalty in his ranks, General Altman became able to ignore orders, cross the Rubicon, and subjugate the senate. It's an old but common story.
I point out this similarity because I suspect that the corporate solution to such "coups" will mirror the government solution: checks and balances. You build a system which functions by virtue of power conflicts rather than trying to prevent them. I won't pretend to know how such a thing could be implemented in practice, however.
And what was this structure supposed to achieve? At the top we have a board of directors accountable to no one, except, as we recently discovered, to the possibility of a general rebellion by employees.
That's not clever or innovative. That's just plain old oligarchy. Intrigue and infighting is a known feature of oligarchies ever since antiquity.
Whether or not it is “clever”, the idea of a non-profit or charity owning a for-profit company isn’t original. Mozilla has been doing it for years. The Guardian (in the UK) adopted that structure in 1936.
That's not entirely true. As a 501(c)(3) organization, they are bound to honor their founding bylaws on pain of having their tax-exempt status revoked, with retroactive consequences. I won't comment on whether this is the fault of the bylaws or the IRS... but in the end I think we can agree that this was evidently not an effective enforcement mechanism.
As for the whole "cleverness" topic... it wasn't designed as an oligarchy; that's merely what it has devolved into. The saying "too clever by half" exists for good reason.
I assure you that we all got the allusion that you're making, but given the quote that you're replying to I think that perhaps you personally should not be allowed to own a dog.
I'm aware. The quote just so happened to appear in a YouTube program called Code Report yesterday, so I thought you might've been a viewer. I didn't mean to imply anything beyond that; sorry for the confusion.
Surely, anyone taking on and enduring the pain of running a company does so for egoistic reasons.
Your implicit assumption is that altruism exists. In the limit, every living being is egoistic. Anything you do is ultimately for egoistic reasons — even if you do it “for others” at first sight, it ultimately benefits you in some way, even if only to make you feel better.
A common misconception is that “egoism is bad”. Egoism doesn’t have to be bad. If the goals align, it’s a net benefit for both sides. For example, a child might seek care, while parents seek happiness. Both are egoistic, but both benefit from each other.
Elon Musk is renowned for being an attention seeker and doing these stunts as a proxy for relevance. It's touring Texas borders wearing a hat backwards, it's messing with Ukraine's access to Starlink while making statements on geopolitics, it's pretending that he discovered railways and the technology for digging holes in the ground as a silver bullet for transportation problems, it's making bullshit statements about cave rescue submarines and then calling the actual cave rescuers who pointed out the absurdity of it pedophiles... Etc etc etc.
I think it makes no sense at all to evaluate the future of an organization based on what stunts Elon Musk is pulling. There must be better metrics.
Did you also take him at his word when he said 5+ years ago that Teslas could drive themselves more safely than a human "today"? Or that the Cybertruck has nuclear-explosion-proof glass (which was immediately shattered by a metal ball thrown lightly at it)?
Musk has a long history of shamelessly lying when it suits his interest, so you should really really not take him at his word.
Pointing out Elon Musk's claims regarding free speech and the shit show he's been forcing upon Twitter — not to mention his temper tantrum directed at marketing teams for ditching post-Musk Twitter over fears their ads could show up alongside unsavoury content like racist posts and extremism in general — should be enough to figure out the worth of Elon Musk's word and the consequences of Elon Musk's judgement calls.
I seem to remember that being only partially true? Or the license was weird and deceptive? Also, as other replies have stated, why isn't "Grok" open source? Musk loves to throw around terms like open source to generate goodwill, but when it comes time to back those claims up, it never happens. I wouldn't take Musk at his word for literally anything.