This saddens me as well, because that's the type of thing that happens every day where I live, but...
> I don’t understand why it is allowed to continue.
The answer is even sadder: there aren't enough people taking action, and among those who do take action, there aren't enough people in power to change anything significantly. At least that's how I see it. And... I can't even blame those who don't take action, because many people feel completely powerless. They feel like, "what can you do to stop this war, or anything else, if you're just a regular human?"
There's also a huge cost to taking action on this, especially in the US. You can easily get thrown out of school, have your career destroyed, or be deported.
This. There are entire groups dedicated to rooting out any sign of deviation from the pre-authorized storyline and verbiage. It is particularly striking given that the US considers itself a 'free speech' bastion.
This is mostly a US thing. Netanyahu and Putin are two war criminals according to the International Criminal Court. Although Trump threatened the ICC, this doesn't change that basic fact.
Already in 2002, the US passed the "American Service-Members' Protection Act", which allows the US to deploy the military to prevent U.S. or allied officials and military personnel from being prosecuted or detained by the ICC.
It passed via a bipartisan vote, well before the US launched the illegal invasion of Iraq, in which it committed various war crimes.
This goes beyond direct action by individuals; it’s completely obvious what’s happening, and it happens because the US political system has been captured.
If everyone here is so concerned with all of this, why is nobody suggesting doing anything? Do you all prefer rolling around on the floor crying and screaming rather than actually doing anything?
You don’t make policy proposals, you don’t try to form organised groups to foment change, you don’t put forward collective demands. Instead you bitch and moan and spew performative rhetoric.
Actions not words. Do something or shut the fuck up.
Maybe so, but people talk like they’re very concerned and I see no evidence these concerns are genuine. Is nobody doing anything constructive or even trying to? Maybe I missed it and they are? People can peacefully campaign and advocate for things they think are important; I don’t see it happening.
Your opinion appears to be that people shouldn’t share opinions unless they put a certain level of activity behind them. So, like, idk dude. I don’t see you campaigning to restrict free speech, so shut up?
I’m telling people maybe they should organise politically if they’re so concerned about all of this.
I don’t know if I really believe all the AI doom myself or if it’s just the hype train. Sometimes I think I do, sometimes I think it’s a bit bullshitty. I wonder if others actually think the same and that’s why nobody takes action, because people don’t really believe the various apocalyptic scenarios enough to take action on them.
I haven’t paid any attention to the mission, and there’s something about the framing of this article that I don’t like, as if it’s talking about a soap opera or reality TV or something. It just rubs me up the wrong way.
I agree. Even though I thought this mission was interesting, to me the article massively overstates everything: NASA and the crew are SO amazingly competent, the world in recent years is SO totally devoid of competency, everyone has been thirsting for the sense of AWE that we are ALL feeling (or should be feeling now, let me list the reasons!), etc.
To me, this was irritating. True competency and things that inspire real awe encapsulate “res ipsa loquitur” — they speak for themselves. Having some internet influencer try to hype me into getting awed, and implying that “we all” are feeling a certain way as she channels our collective zeitgeist is tiresome.
And personally, although the mission was nice, it wasn’t groundbreaking technically or particularly awe-inspiring.
Ironically, I left feeling a tiny bit disappointed: if everyone is truly thinking this mission is the height of awesomeness or competency, we have a low-ish bar.
I bet that when the old-timers with their starched white shirts, pocket protectors, and horn-rimmed glasses who did the ’60s missions got together to watch 2026 Artemis, they privately had a good laugh about how little the state of the art has progressed.
For what it’s worth Dan, you’re probably the best moderator I’ve ever encountered, and without you HN likely wouldn’t be worth visiting. As it is it’s one of the best places for online discourse. That’s directly because of you and your efforts.
It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.
Just take a second to consider this: if HN, probably one of the less reactionary places on the internet, and one of the most capitalist-friendly, is this angry at this point, before the mass job losses even start, what in the name of God do you think the general public is going to be like when they’ve been going on for years?
If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
Maybe HN is particularly upset because they feel targeted, given that overpaid tech executives have been giddily making the claim that programming jobs will disappear any minute now. What makes it even worse is that it's very obvious that said tech executives haven't programmed in over 10 years, if ever, and don't know anything about the technology they are selling. They are putting jobs at risk purely for the sake of personal enrichment.
This is probably combined with a general sense of AI fatigue. The population as a whole is getting tired of "AI slop" and companies trying to shoehorn "AI" into everything. Personally I'm also tired of every startup needing to be an AI startup. As if there was nothing else worth building or investing in. It's sucking the air out of the room.
Nobody has one. If labor stops having value the economy will stop working and society will break down far in advance of building the infrastructure necessary for the promised AI abundance.
I like the idea of being “post-scarcity” as much as the next guy, but I don’t understand how we get there. It’s a project in itself; it doesn’t just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.
We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.
We’ll lose these jobs and there will be no super abundance at that point, and not even government support.
There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.
It is not impossible to think that many people will just be served a UBI and won't expect much more in life. After all, if we have AI + family + housing + food (assuming government robots would take care of providing us free food in some form), I bet millions of people would be content with it.
PS: I include AI as an important one for the future because it will be a direct way to get educated and, for example, replace college without having to pay (or very cheaply).
You’ve addressed a different question, which is how satisfied with life people will be post-scarcity. That’s a fine conversation to have, but it’s not the one I was having. My point is: how do we get there?
Seeing as austerity governments have campaigned on reducing social benefits and achieved considerable success over the past few decades, I don't see how your solution, which consists of granting people even more social benefits, will ever happen. Unless law and order is about to break down, there is no reason for the rich to leave all of that money "on the table".
It made me kind of angry when I saw Dario repeatedly claiming that AI would be taking all the programming jobs any minute now. His company supposedly is working for a better future, but he's giddily talking about something that could cause millions of people to lose their homes if it were true.
Our governments have a habit of being reactive rather than proactive. People have floated the idea of UBI, but if UBI happens, it will probably be because it's the only way to avert a crisis, and the amount people get might only be enough to rent a bedroom and eat processed food.
I think in the medium term, the reaction is overblown. Even though LLMs can make software engineers more productive, you still have a competitive advantage in having more software engineers. Medium to long term though, the goal is obviously to replace human jobs.
I'm not a communist, but Karl Marx understood that the labor force gets its bargaining power because they are necessary to produce value. What do people imagine happens when the human labor force becomes essentially completely replaceable? They imagine the government will be forced to take care of the population to prevent an uprising, but they forget that the police and the army can be replaced by machines too.
You can look up what tends to happen when human labor isn't needed anymore by reading about the resource curse, which is likewise a story about not needing human labor. Only the least corrupt countries seem able to resist it. None of those countries have a very large population, so chances are you don't live in one of them.
We use a lot of euphemisms and have a number of myths around political violence. The fact of the matter, so far as I can see, seems to be that political violence is extremely effective, but also extremely destabilising when used at scale.
Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.
Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.
So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.
I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.
This won’t happen, because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race, leaving the rest of us to claim victory.
No, I'm not describing a race to the bottom. I'm saying that it's in Google's best interest to ensure Anthropic and OAI do not continue to operate as going concerns and generate enough cash flow to finance reinvestment, by providing a very competitive offering.
The price of tokens is one competitive instrument for achieving that, but not the only one: they offer a whole lot more to enterprises that OAI and Anthropic don't.
By doing so, they send Anthropic and OAI's valuations crashing into the ground, along with their future prospects of raising external funding.
It’s not about the mechanism: responsibility is a social construct, it works the way people say that it works. If we all agree that a human can agree to bear the responsibility for AI outputs, and face any consequences resulting from those outputs, then that’s the whole shebang.
Sure we could change the law. It would be a stupid change to allow individuals, organizations, and companies to completely shield themselves from the consequences of risky behaviors (more than we already do) simply by assigning all liability to a fall guy.
Imagine you're a factory owner and you need a chemical delivered from across the country, but the chemical is dangerous: if the tanker truck drives faster than 50 miles per hour, it has a 0.001% chance per mile of exploding.
You hire an independent contractor and tell him that he can drive 60 miles per hour if he wants to, but if it explodes he accepts responsibility.
He does, and it explodes, killing 10 people. If the families of those 10 people have evidence that you created the conditions that caused the explosion in order to benefit your company, you're probably going to lose in civil court.
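For a rough sense of the numbers in that hypothetical, here is a minimal sketch in Python; the ~2,800-mile trip length is my assumption, not something stated above:

    # Illustrative only: the trip distance below is an assumed figure.
    p_per_mile = 0.00001                       # 0.001% per mile, as a probability
    miles = 2800                               # assumed cross-country distance
    p_explode = 1 - (1 - p_per_mile) ** miles  # chance of at least one explosion
    print(f"{p_explode:.1%}")                  # prints ~2.8%

Even a seemingly tiny per-mile risk compounds to a few-percent chance of disaster over the full trip, which is what makes the outcome foreseeable.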
Linus benefits from the increased velocity of people using AI. He doesn't get to put all the liability on the people contributing.
Why would I put much effort into responding to a post like yours, which makes no sense and just shows that you don't understand what you're talking about?
Right now it's very easy not to infringe on copyrighted code if you write the code yourself. In the vast majority of cases, if you infringed, it's because you did something wrong that you could have prevented (and in the case where you didn't do anything wrong, independent creation is an affirmative defense against copyright infringement).
That is not the case when using AI-generated code. There is no way to use it without some chance of introducing infringing code.
Because of that, if you tell a user they can use AI-generated code and they introduce infringing code, that was a foreseeable outcome of your action. If you are the owner of a company, or the head of an organization, that benefits from contributors using AI code, your company or organization could be liable.
So it's a bit as if the Linux Organization told its contributors: you can bring in infringing code, but you must agree you are liable for any infringement?
But if a lawsuit were later brought, who would be sued? The individual author or the organization? In other words, can an organization reduce its liability if it tells its employees, "You can break the law as long as you agree you are solely responsible for such illegal actions"?
It would seem to me that the employer would be liable if they "encourage" this way of working?
A human has to willingly violate the law for that to happen, though. There is no way for a human to use AI-generated code without some chance of it containing copyrighted code. That's just expected.
If you don't think this is a problem, take a look at the terms of the enterprise agreements from OpenAI and Anthropic. Companies recognize this is an issue, and so they were forced to add an indemnification clause explicitly saying they'll pay for any damages resulting from infringement lawsuits.
They don’t produce enough similar code to infringe frequently. And if they did, independent creation is an affirmative defense to copyright infringement that likely doesn’t apply to LLMs, since they have the demonstrated capability to produce code directly from their training set.
You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.
On independent creation: you are conflating the tool with the user. The defense turns on whether the developer had access to the copyrighted work, not whether their tools did. A developer using an LLM did not access the training set directly; they used a synthesis tool. By your logic, any developer who has read GPL code on GitHub should lose the independent-creation defense because they have "demonstrated capability to produce code directly from" their memory.
LLM memorization/regurgitation is a documented failure mode, not normal operation (nor the typical case). Training-set contamination happens, but it is rare and considered a bug. Humans also occasionally reproduce code from memory; we do not deny them the independent-creation defense wholesale because of that capability!
In any case, the legal question is not settled, but the argument that LLM-assisted code categorically cannot qualify for independent creation defense creates a double standard that human-written code does not face.
> You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.
Practically speaking, humans do not produce code that would be found in court to be infringing without intent.
It is theoretically possible, but it is not something that a reasonable person would foresee as a potential consequence.
That’s the difference.
> LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case).
Exactly. It is a documented failure mode that you as a user have no capacity to mitigate or to even be aware is happening.
Double standards are perfectly fine. LLMs are not conscious beings that deserve protection under the law.
>not settled.
What appears likely to be settled is that human authorship is required, so there’s no way an LLM could qualify for independent creation.
And that's not an infringement. Actual copying is the infringement, not having the same code. The most likely way to have the same code is by copying, but it's not the only way.
Responsibility is an objective fact, not just some arbitrary social convention. What we can agree or disagree about is where it rests, but that's a matter of inference, and an inference can be more or less correct. We might assign certain people certain responsibilities before the fact, but that's to charge them with the care of some good, not to blame them for things before they were charged with their care.
And I’m not just talking about Apple Maps.