
Regardless of who is right here, I think the enormous egos at play make OpenAI the last company I’d want to develop AGI. First Sam vs. the board, now Elon vs. everyone … it’s pretty clear this company is not in any way being run for the greater good. It’s all ego and money with some good science trapped underneath.


Serious question: does anyone trust Sam Altman at all anymore? My perspective from the outside is that his public reputation is in tatters, except that he's tied to OpenAI. I'm curious what his reputation is internally and in the greater community.


Yes, me.

At least, I trust him as much as any other foreign[0] CEO.

For all the stuff people complain about him doing, almost none of it matters to me, except for the stuff which isn't proven (such as his sister's allegation) where I would change my mind if evidence was presented. What I don't trust is that Californian ethics don't map well enough onto my ethics, which also applies to basically all of Big Tech…

…but I'm not sure any ethics works too well when examined. A while ago I came up with the idea of "dot product morality"[1] — when I was a kid, "good vs evil" was enough; then I realised there were shades of grey; then I realised someone could be totally moral on one measure (say honesty) and totally immoral on another (say you're a vegan for ethical reasons and they're a meat lover). I figured we might naturally simplify this inside our own minds by saying another person is "morally aligned" (implicitly: with ourselves) when their ethics vectors point the same way as ours.

But more recently I realised that in a high dimensional space, there's a huge number of ways for vectors to be almost the same and yet 90° apart[2] (see the quick numerical sketch after the footnotes).

[0] I'm not American, so to me he's foreign

[1] I really need to move my blog to github, the only search results on Google are the previous times I posted this to HN: https://kitsunesoftware.wordpress.com/2019/05/25/dot-product...

[2] Via trying to calculate the dot product of two Markov chains: https://github.com/BenWheatley/MarkovChain-Dot-Product-compa...
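To make the geometry concrete, here's a minimal sketch of my own (not code from the linked repo; it only assumes numpy is available): sample pairs of random unit vectors and watch how close to orthogonal they become as the dimension grows.

    import numpy as np

    rng = np.random.default_rng(42)

    def mean_abs_cosine(dim, trials=2000):
        # Sample `trials` pairs of random unit vectors and average
        # the |cosine| of the angle between each pair.
        a = rng.normal(size=(trials, dim))
        b = rng.normal(size=(trials, dim))
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        return float(np.mean(np.abs(np.sum(a * b, axis=1))))

    for dim in (2, 10, 100, 10_000):
        print(f"dim={dim:>6}: mean |cos| = {mean_abs_cosine(dim):.4f}")

In 2D the average |cos| is about 0.64, but it falls off roughly as 1/sqrt(dim): by 10,000 dimensions almost every pair of "ethics vectors" is within a hair of 90° apart.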


He is a salesman selling mildly working snake oil; he was into NFTs a few years ago and jumped to the new snake oil. Everything that Elon is claiming now is real. They manipulated opinion and rode open source to train on public open data sources, and once they had something they could market, they closed everything. It's not up for debate; that is a fact.


I'm old enough to remember GPT-2, which they didn't release because they "wanted to set a precedent" for safety and responsibility.

GPT-2 wasn't marketable. They were mocked for it, even.

So it is, in fact, up for debate. So is much of the rest that I care about.


You might want to google GPT-2 and GitHub.


They were closed source from the start. And he invested in one NFT startup (among many other startups).


Curious why "foreign" was worth mentioning. Do you trust CEOs from your own country more or less?


Closer alignment with my world view: for the home country, because I grew up in the same milieu; for the one I moved to, because I chose it in part for its idea of what "good" looks like.

Also, it tickles me to tell Americans that they are foreign. :P


Yes. I liked him when he was head of YC, I liked him when he was head of reddit for a few days, I like him now. I've never had any issue with him. When they made a capped-profit portion of OpenAI, they explained their reasoning, and I think it's clear we wouldn't have GPT-4 today (or in the foreseeable future) if they stayed purely non-profit.

Hell, capped-profit is more than you can say for any other tech company.


The funniest part of the OpenAI post is where someone comes in breathlessly and says "hey, have you read this ACX post on why we shouldn't open source AGI" to the guy who's literally been warning everybody about AGI for decades, and Elon is like: "Yup." Someone was murdered that day. There is nothing more dismissive than a yup.


You're generally correct, but what really stings is that Claude 3 Opus was released right at the same time. It's superior to GPT-4 in pretty much every way I've tested. The center of gravity has shifted a few streets over to Anthropic seemingly overnight.


I have had homework questions (functional analysis and commutative ring theory) that GPT-4 is good enough for, but Claude 3 has been strikingly better.


> It's superior to GPT-4 in pretty much every way I've tested.

They are just catching up to GPT-4 which was released a year ago (March 14, 2023). Meanwhile GPT-5 has been through a year of development.


Meanwhile, Claude is not globally available...


The really depressing thing is that the board anticipated exactly this type of outcome when they were going "capped profit" and deliberately restructured the company with the specific goal of preventing this from happening... yet here we are.

It's difficult to walk away without concluding that "profit secondary" companies are fundamentally incompatible with VC funding. Would that be a pessimistic take? Are there better models left to try? Or is it perhaps the case that OpenAI simply grew too quickly for any number of safeguards to be properly effective?


I think the fact that a number of top people were willing to actually leave OpenAI and found Anthropic explicitly because OpenAI had abandoned their safety focus essentially proves that this wasn’t a thing that had to happen. If different leaders had been in place things could have gone differently.


Ah, but isn't that the whole thing about corporations? They're supposed to outlast their founders.

If the corporation fails to do what it was created to do, then I view that as a structural failure. A building may collapse due to a broken pillar, but that doesn't mean we should conclude it is the pillar's fault that the building collapsed -- surely buildings should be able to withstand and recover from a single broken pillar, no?


My sweet summer child. Do you really believe this Anthropic story, and that Anthropic will go any other way? Under late-stage capitalism, there is no other way. Everyone has ideals until they see a big bag of money in front of them. It doesn't matter if the company is a non-profit, for-profit, or whatever else.


How did they structure it to prevent this? Is it in the statutes of the company or something?


It's actually a very clever structure! Please open the following image to follow along as I elaborate: https://pbs.twimg.com/media/F_PGOPOacAApU8e.jpg

At the top level, there is the non-profit board of directors (i.e.: the ones Sam Altman had that big fight with). They are not beholden to any shareholders and are directly bound by the company charter: https://openai.com/charter

The top-level nonprofit company owns holding companies in partnership with their employees. The purpose of these holding companies is to own a majority of shares in & effectively steer the bottom layer of our diagram.

At the bottom layer, we have the "capped profit" OpenAI Global, LLC (this layer is where Sam Altman lives). This company is beholden to shareholders, but because the majority shareholder is ultimately controlled by a non-profit board, it is effectively beholden to the interests of an entity which is not profit-motivated.

In order to raise capital, the holding company can create new shares, sell existing shares, and conduct private fundraising. As you can see on the diagram, Microsoft owns some of the shares in the bottom company (which they bought in exchange for a gigantic pile of Azure compute credits).


Except Altman has the political capital to have the entire board fired if they go against him, which makes the entire structure irrelevant. The power is where the technology is being developed -- at the bottom where they can threaten to walk out with plush jobs from the major shareholders at the bottom. The power is not where the figureheads sit at the top.


Right. That's the thing which I found to be so depressing. Just because I think that it was clever does not mean that I think it was successful.

Out of curiosity, with the benefit of hindsight: what would you have tried doing differently to prevent such a coup?


That's a good question, and I agree it was a clever design. I don't know that there is a way to modify the org structure to prevent what happened. As much as I dislike them, a clear non-compete clause added after the structure was in place might have helped, but I'm not sure that's even an option in CA, and having employees re-sign a non-compete would be fraught in itself (better to have it from the start). This does seem like the most relevant application of a non-compete, though. I'm not a lawyer, and I'm sure they had top-notch lawyers review the structure. Even if OpenAI played the non-compete card, it wouldn't make retention easier if employees were willing to walk (they wouldn't exactly have trouble finding jobs anywhere). Do you know of anything that might have prevented it?


Well, if you squint a little bit, this all looks kind of like a military coup. Through the cultivation of personal loyalty in his ranks, General Altman became able to ignore orders, cross the Rubicon, and subjugate the senate. It's an old but common story.

I point out this similarity because I suspect that the corporate solution to such "coups" will mirror the government solution: checks and balances. You build a system which functions by virtue of power conflicts rather than trying to prevent them. I won't pretend to know how such a thing could be implemented in practice, however.


Ultimately, the people performing the actual work will always have a collective veto power.


Like the rest of the company, it's very clever but not in any way positive for anyone but them.


And what was this structure supposed to achieve? At the top we have a board of directors not accountable to anyone except, as we recently discovered, the possibility of a general rebellion from employees.

That's not clever or innovative. That's just plain old oligarchy. Intrigue and infighting is a known feature of oligarchies ever since antiquity.


> That's not clever or innovative

Whether or not it is "clever", the idea of a non-profit or charity owning a for-profit company isn't original. Mozilla has been doing it for years, and The Guardian (in the UK) adopted that structure in 1936.


That's not entirely true. As a 501(c)(3) organization, they are bound to honor their founding bylaws on pain of having their tax-exempt status revoked w/ retroactive consequences. I won't comment on whether this is the fault of the bylaws or the IRS... but in the end I think we can agree that this was evidently not an effective enforcement mechanism.

As for the whole "cleverness" topic... it wasn't designed as an oligarchy; that's merely what it has devolved into. The saying "too clever by half" exists with good reason.


If AGI is as transformative as its proponents make it out to be, would it both attract and create those enormous egos though?


Which is why one might create a mechanism, say a non-profit, that has an established, codified mission to combat such obviously foreseeable efforts.


OpenAI clearly rejected Elon Musk's advances and kept him out. Isn't it working already in its current form?


Touch grass


“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.” — Claude Shannon


A robot stamping on a human face—forever.


I assure you that we all got the allusion that you're making, but given the quote that you're replying to I think that perhaps you personally should not be allowed to own a dog.


Pray tell me, sir, whose dog are you?


A fellow Code Report viewer, I assume?


Who? Never heard of it. Prof. Shannon said this a while ago, in the '50s I believe.


I'm aware. The quote just so happened to appear in a YouTube program called Code Report yesterday, so I thought you might've been a viewer. I didn't mean to imply anything beyond that; sorry for the confusion.


Who runs a company “for the greater good”?

Surely, anyone taking on and enduring the pain of running a company does so for egoistic reasons.

Your implicit assumption is that altruism exists. In the limit, every living being is egoistic. Anything you do is ultimately for egoistic reasons: even if you do it "for others" at first sight, it ultimately benefits you in some way, if only by making you feel better.

A common misconception is that “egoism is bad”. Egoism doesn’t have to be bad. If the goals align it’s a net benefit for both sides. For example, a child might seek care, while parents seek happiness. Both are egoistic, but both benefit from each other.


It's not supposed to be a company, it is supposed to be a non-profit organization with an altruistic goal.

If it's not possible maybe they shouldn't have set it up this way, but they did and here they are.


I was with them, sort of, until they had this bit of Comms-major corporate BS:

> We’re making our technology broadly usable in ways that empower people and improve their daily lives, including via open-source contributions.


Bingo. Pure, unadulterated ego


> First Sam vs. the board, now Elon vs. everyone

Elon Musk is renowned for being an attention seeker who pulls these stunts as a proxy for relevance. It's touring the Texas border wearing a hat backwards; it's messing with Ukraine's access to Starlink while making statements on geopolitics; it's pretending he discovered the railway, and the technology for digging holes in the ground, as a silver bullet for transportation problems; it's making bullshit statements about a cave rescue submarine and then calling an actual cave rescuer who pointed out its absurdity a pedophile... etc., etc.

I think it makes no sense at all to evaluate the future of an organization based on what stunts Elon Musk is pulling. There must be better metrics.


Ad hominem makes for a damn good argument, especially when the person in question doesn't try to appease everyone.


When you describe it, it sounds a lot like an average Hacker News commenter.

Enjoys hearing themselves speak, so they can't help but share a speculative idea or opinion on something they have little familiarity with.

High tolerance for being part of the out-group or sharing unpopular takes.

Placing logic (or what is logical by their own judgment) above all else including social niceties.


Yes and no. Elon has ego, but I also take him at his word when he says he wants to open source AI. He did the same thing with Tesla's patents.


Did you also take him at his word when he said, 5+ years ago, that Teslas could drive themselves more safely than a human "today"? Or that the Cybertruck has nuclear-explosion-proof glass (which was immediately shattered by a metal ball thrown lightly at it)?

Musk has a long history of shamelessly lying when it suits his interest, so you should really really not take him at his word.


Pointing out Elon Musk's claims regarding free speech and the shit show he's been forcing upon Twitter (not to mention his temper tantrum at marketing teams that ditched post-Musk Twitter for fear their ads could appear alongside unsavoury content like racist posts and extremism in general) should be enough to figure out the worth of Elon Musk's word and the consequences of his judgement calls.


I seem to remember that being only partially true? Or the license was weird and deceptive? Also, as other replies have stated, why isn't "Grok" open source? Musk loves to throw around terms like open source to generate goodwill, but when it comes time to back those claims up, it never happens. I wouldn't take Musk at his word for literally anything.


Why isn't Grok open source?


Was it ever promised to be open source?


Didn't the stipulation for the patents require other automakers to share their patents as a condition of using Tesla patents?

"I'd like some of your turkey please and in exchange I will offer this half eaten chicken bone."


You mean like the GPL open source license?


Did you read the article? According to it, Elon Musk agreed with AGI being closed source, but he wanted controlling interest.



