Hacker News — jrochkind1's comments

Honestly I think their status page just got more honest -- and they are graphing this in such a way that any partial outage to any service looks really bad on the chart.

There were definitely partial outages to services inside that row of horizontal green dots that the status page just wasn't advertising.


No way that holds up in court when they are marketing it for things other than entertainment.

AI makes this all go exponential.

We've had for a few years now almost universal video surveillance of all public spaces, but pre-AI it just wasn't realistic to monitor or search it all. Well, it is now. Video is just one example of the surveillance data firehose that will become legible to the state -- or anyone else that can centralize their access to it all.

I think this will end up being actually the most impactful element of AI on society, and it's not going to be great.


> Republican Rep. Chris Richardson, an Elbert County Republican, argued that the bill is too broad and could regulate standard analytic usage in the workplace, such as a human resources software that recommends a pay band for employees based on performance.

Does he not realize this just sells the bill further? Oh no, it might prohibit software from automatically determining my wages; how could we even have a society if we don't let computers figure out the least they can pay me without me quitting.


Oh man the Colorado GOP is a complete mess these days.

One of the frontrunners for the governorship is just spouting straight antisemitic garbage: https://www.9news.com/article/news/politics/gop-gubernatoria...

Edit: He withdrew this morning and is running for the GOP chair now.

Boebert is quiet these days but I'm sure she'll ramp up after her primary closes.

The various school boards are perennial sources of idiocy. My (former) board would go into public meetings and just openly and freely admit to crimes.

The county commissioners in DougCo recently decided to fine the victims of shoplifting for not reporting it. No, you didn't read that wrong.

So, in summary, the GOP and many, but not all, of their state level membership aren't really sending their best these days.


How do you get from "pay band [...] based on performance" to "least they can pay me without quitting"?

Why would corporate software be incentivized to recommend a pay band any higher than the least the employee would take? The incentives are not aligned.

Choosing a pay band based on performance and setting the pay bands as low as they can without losing all their employees are orthogonal.

Suppose you are an employer and you have 5 junior engineers. You wish to promote one to senior engineer, which includes a move to a higher pay band. How do you decide which one gets the promotion?

Most companies are going to decide which one to promote at least partly based on performance data. Do they consistently finish things on time? What is the defect rate in their work? Do they work well with others? Do they need a lot of help compared to their peers, or are they the one their peers turn to when the peers need help? Does their work show skill above what would normally be found in junior engineer work?

From what has been quoted about the objections that one representative had, he thinks the bill has been written too broadly and could be construed as prohibiting the use of job performance data like that in deciding promotions.


My team lead decides to promote me, not robots or algorithms.

I call bs. Please stop posting LLM comments.

How was my comment an LLM comment?

It's quite obvious. Reported to mods.

OK, 6 months old throwaway account.

Most companies want not the least an employee will take right now, but the least that will keep that employee around rather than jumping ship.

And when an industry at large is using RealPage for wages, those two numbers may become increasingly similar.

I think most people getting a paycheck get there on their own, and this guy is accidentally helping to sell the bill.

Current and historical capitalist trends, that's how.

“I absolutely agree that consumers and wage earners should not be exploited by the use of their data,” he said. “But it’s still overly broad and it’s still overly vague in very important parts. And I believe it’s overly simplistic in its definition of wage setting.”

That's standard conservative speak for "don't interfere with business practices". "Overly broad" is just a way to shut down discussion.

That’s juicy coming from the current Republican Party.

If you didn't have a sibling to do it for you free/cheap, I wonder how many months of a human receptionist (or service) the fee to build (and maintain) such a thing would cover.

"For a reviewer, it’s demoralizing to communicate with a facade of a human."

This is so important. Most humans like communicating with other humans. For many (note, I didn't say all) open source collaborators, this is part of the reward of collaborating on open source.

Making them communicate with a bot pretending to be a human instead removes the reward and makes it feel terrible, like the worst job nobody would want. If you spent any time at all actually trying to help the contributor understand and develop their skills, you just feel like an idiot. It lowers the patience of everyone in the entire endeavor, ruining it for everyone.


Already back in ye olde times, there was "let me google that for you," which I saw so often posted on Reddit. Sometimes you just wanna exchange with a human, and absorb some of their wisdom, which is the whole point of asking a question. Not so different from wanting to shop at a butcher you can establish a relationship with, rather than a faceless supermarket meat counter.


This is a similar hot issue in academia right now. The ability to generate content in papers via LLM far outpaces the ability to thoughtfully review them. There are now two tracks, at least at ICML from what I saw: one for AI-submitted papers and one for non-AI-submitted papers. And it works the same respectively for reviewers. However, even for AI-submitted papers, you cannot have only AI review them. Of course they need human analysis, but it's still tricky what you are going to get. And they are reviewing whether anonymity can still stand, or if tying your credibility to the review process is now necessary.

As for open source PRs, I wonder if for trust's sake you would need to self-identify the use of AI in your response (all AI, some AI, no AI). And there would need to be some sort of AI detection algorithm to flag your response as % AI. I wonder if this would force people to at least translate the LLM responses into their own words. It would for sure stop the issue of someone's WhatsApp 24/7 claw bot cranking out PR slop. Maybe this can lessen the reviewer's burden. That being said, more thought is needed to distinguish helpful LLM use that enhances the objective vs unhelpful slop that places burden on the reviewer.

For instance I copy pasted the above to gemini and it produced an excellent condensing of my thoughts, "It is now 10x easier to generate a "plausible" paper or Pull Request (PR) than it is to verify its correctness."


It’s probably already too late to put these horses back in the barn, but having an “allow AI commits / PRs” would have probably been a good idea for GitHub to make available to projects. Even better might have been something like a robots.txt for repos with rules that could be auto-evaluated and PRs auto-rejected if they weren’t followed.

Then again, we see how well robots.txt was honored in practice over the years. As with everything in late-stage capitalism, the humans who showed up with good intentions to legitimately help typically did the right things, and those who came to extract every last gram of value out of something for their own gain ignored the rules with few consequences.


omg I hadn't discovered the reverse direction yet!


Can you connect the dots for me? Why would reduced reporting requirements allow more startups to go public earlier?


Some people argue that the requirements placed on public companies (like mandatory quarterly reporting) add operational overhead that might cause a company to postpone an IPO until they're larger or more established.

In practice, companies like Stripe, OpenAI, etc. have stayed private because they've been able to access the cash they need at valuations they're happy with, and because no one wants to open their books unless they have to. They aren't staying private because being a public company is hard.


Combining this with a SPAC, a startup would be able to have a six month runway as a public company before having to disclose finances. I imagine that would be attractive to some firms.


Weird, why wouldn't this fantastic startup want to report on their performance in a standardized and accountable manner for six months after collecting public money to pay out insiders and “sponsors”?

Surely they wouldn't mind bragging about their fantastic GAAP P&L in their filing docs. Maybe it's the pesky quiet period they're trying to avoid, so they can be even more transparent about finances and equity holders.


John Gruber used it for a day and found it was actually totally adequate, or better, for his actual daily work.

> But just using the Neo, without any consideration that it’s memory limited, I haven’t noticed a single hitch. I’m not quitting apps I otherwise wouldn’t quit, or closing Safari tabs I wouldn’t otherwise close. I’m just working — with an even dozen apps open as I type this sentence — and everything feels snappy.

https://daringfireball.net/2026/03/the_macbook_neo

I think people are assuming it's going to be a worse experience than it actually is. I don't know how it does it with 8GB of RAM either, but apparently it does; I suppose my guess would be that the SSD and bus are fast enough that swapping on app change is no longer so disruptive? (I don't know if improvements in virtual memory swap logic could also be a thing that matters or not; this is not my area.)


> If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides.

Our foreparents fought for the right to implement work-alikes of corporate software packages, even if the so-called owners did not like it. We're ready to throw it all away, and let intellectual property owners get so much more control.

The implications will not end up being anti-large-corporation or pro-sharing. If you can prevent someone from re-implementing a spec or building a client that speaks your API or building a work-alike, it will be the large corporations that exercise this power as usual.


We should be removing IP law entirely, not strengthening it to cover entire classes of problems even when implemented entirely differently. Same for anyone trying to claim "colorful monster creatures" as innately Pokemon IP. Just because someone climbed a mountain first doesn't mean they own it forever. Nobody should be honouring any of these claims.

Nor should we be treating AI models themselves as respected IP. They're built on everyone else's data. Throw away this whole class of law, it's irrelevant in this new world.


> own it forever

Well we could try fixing the forever part. Copyright is out of control. I’d like to see a world with much less power given to IP. Sometimes I even say I want it eradicated entirely. But realistically we should start by cutting things back. Maybe give software an especially short copyright period.


Reset it back to 20 years and make that a hard limit for both patents and copyright. No renewals. Zero exceptions. Let the market sort the rest out.

There are always going to be downsides and edge cases when granting any party a monopoly over anything. At least if it's limited to two decades, any unintended consequences, philosophical objections, etc. are hopefully kept within reason.


That would be insane for aerospace software, where you might spend most of that time getting the code certified (required to break the $0 revenue threshold), let alone paying back your costs and then making an actual profit.

Meanwhile, there are cases where copyright of more than 2 years is overkill.

I don't know what, but it seems like some sort of mechanism for variable-length IP duration is needed.


Is copyright meaningful for aerospace software? I'm largely unfamiliar with that domain but I have trouble imagining that (for example) Boeing cares much about people redistributing or hacking on the control software for a 777. How would that impact their bottom line?

I could understand for medical devices maybe but even then it seems like the software is a tiny part of the overall cost of a given design. A competitor could already do a clean room reimplementation in that case.

But I guess it wouldn't be all that bad if there were a carefully crafted extension for government certified software that was explicitly tied to the length of the certification process.


The only problem with this certified software exception is I foresee they'll write the law as "expiration timer starts when software has finished certification" then some lobby group will get the regulatory departments to adopt a new process of partial certification where said software is usable in devices but the 'finished certification' never gets reached so the copyright gets dragged out forever.


Nope, it falls more under trade secrets than copyright.

If you do something that requires stealing the code (publishing it, selling it, etc) the company can legally fuck you up.

Now, once it's in the wind, it becomes almost impossible to pursue from a practical point of view, as any implementer can claim trade secrets to avoid showing you the code.


I think the point is more that many kinds of software (presumably including aerospace software) don't really need any kind of protection from redistribution, because such software is effectively only useful for a specific design. Much of the effort in creating it is not the algorithms, which a competitor could steal without copyright or alternative protection, but certifying that the software fits the rest of the system, which any competitor making use of the software would have to do again.

Also remember that the original point of copyright and patent protections is to encourage people to create the protected works in the first place but Boeing isn't just going to stop making aerospace software without copyright because their hardware will be useless without it. So if anything, any software that is needed for hardware made by the same company to function doesn't really have any right to be copyrightable at all.


If certification is the actual cost, you don't need copyright, at all. SQLite is in the public domain. Your moat is the certification itself, not the code.


Certification isn't a moat; either the software is certified as safe/bug-free or it isn't. If it's safe, that just makes it more valuable to pirates.


That's absurd.

I can't use SQLite for aviation even though it was certified.

I can't even claim FIPS compliance for my software without going through an expensive process, even though I only use FIPS approved primitives.

Building on certified/compliant libraries helps, but their vendors can certainly contractually make me pay for it.

All OSS libraries have a warranty disclaimer; using them according to even those licenses automatically excludes "fitness for a particular purpose."

Why would public domain software be any different?

The moat is the certification process, not the code itself. "I copied this from somewhere after it was already certified" might fast track something, but it's not gonna fly with "certification was good, done."


> some sort of mechanism for variable-length IP duration is needed

I've always liked the idea of a Harberger tax-style patent enforcement fee:

The patent owner declares the value of their patent on an annual basis and pays 1-5% of that declared value per year for the privilege of relying on the government to enforce their exclusive ownership of the patent. At any point, another party can buy the patent at its declared value, which discourages patent-holders from declaring artificially low values. The annual fee discourages artificially high valuations for indefinite periods of time -- as the patent yields less return over time it makes less sense to keep paying a high annual fee, encouraging owners to lower the declared valuation or end the patent protection altogether when it's no longer profitable.

To discourage hoarding patents indefinitely one could either set a hard upper limit (e.g. 60 years) or increase the fee over time, for example every few years the fee increases by 1% until at some point the patent is effectively publicly owned.
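The mechanics above can be sketched in a few lines. This is purely illustrative: the class and rate constants are assumptions chosen to match the numbers in the comment (a 3% base fee within the suggested 1-5% range, escalating 1 point every 5 years), not any real proposal.

```python
# Illustrative sketch of the Harberger-style patent fee described above.
# All names and rates here are hypothetical.

FEE_RATE = 0.03          # base annual fee: 3% of declared value (within 1-5%)
FEE_ESCALATION = 0.01    # fee rate rises 1 point per escalation period
ESCALATION_PERIOD = 5    # years between escalations

def annual_fee(declared_value: float, years_held: int) -> float:
    """Fee owed this year to keep the patent enforceable."""
    rate = FEE_RATE + FEE_ESCALATION * (years_held // ESCALATION_PERIOD)
    return declared_value * rate

class Patent:
    def __init__(self, owner: str, declared_value: float):
        self.owner = owner
        self.declared_value = declared_value  # self-declared by the owner
        self.years_held = 0

    def pay_yearly_fee(self) -> float:
        fee = annual_fee(self.declared_value, self.years_held)
        self.years_held += 1
        return fee

    def buy_out(self, buyer: str) -> float:
        # Anyone may force a sale at the declared value, which is what
        # discourages the owner from declaring an artificially low value.
        price = self.declared_value
        self.owner = buyer
        return price
```

The escalating rate means that, as the patent ages, keeping a high declared value gets progressively more expensive, nudging owners toward lowering the valuation (making a buyout cheap) or abandoning protection.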


Wait for the great new times when an AI will certify aerospace, automotive and medical SW. Waiting for that. It will be 1000x better and faster than the existing processes


Or maybe it shouldn't take 10+ years to certify aerospace software.


Have you seen the quality of regular software though? And the failure rate of regular physical items? The only reason I trust aircraft is because of the process.

Consider if you will that if some guy were to fly a drone the size of a car that he knocked together in his garage over a residential area, people would not accept that. Yet private pilots in Cessnas fly over neighborhoods constantly.


Good news! LLM output cannot be copyrighted. Everything that an LLM produces is automatically, irrevocably, in the public domain.


Not quite in my opinion. The output of an LLM from a simple prompt falls into the public domain, but if you also give a copyrighted work as input, the mechanistic transformation performed will not alter the original license (same as encoding a video does not change its license).


Are training data counted as input?

It would be interesting to see a court ruling that the output of LLMs trained on copyleft code are licensed under the GPL ... and all other viral licenses simultaneously


> Are training data counted as input?

It is quantum legality: using copyrighted input is legal or illegal depending on the observer.


Schrodinger's Chat


Unless your LLM works by quoting large parts of copyrighted works, reinterpretations of them aren't infringing, because they're not copies.


What if the output regurgitates some other legal entity’s boilerplate licence agreement? Is the output automatically licensed to that entity?


No, the copyright is the colour of the bits, and red bits with a comment saying "these bits are blue" are not blue bits, but you may be prosecuted for fraud.


It's wild to me that there haven't been more court cases to answer questions like those being asked in this thread.

No one knows.


It's new, fast-moving technology, and the courts are slow and expensive.

It would take two stubborn businesses with a lot of money deciding that it is better to battle it out than focus on their business. Something like IBM v SCO or Oracle v Google.


But we also know from other research that LLMs don't actually do mechanistic translations. Even when they are asked to and say that they did, they're basically rewriting the code from their training data


If the LLM output is already someone else's copyrighted work, the LLM doesn't change that?


If that occurs and it’s a substantial enough body of output that it is itself copyrightable and not covered by fair use. Confluence of those conditions is intentionally rare.


The LLM cannot produce copyrighted work.

If the LLM reproduces a human's copyrighted work, then that copyright still stands. This is, in effect, the same as photocopying someone else's writing. The LLM was trained on the copyrighted work, is incapable of producing new copyrightable work, so if it duplicates the original work then the original author's copyright still stands.

I am not a lawyer


Same as it ever was: Either trade secrets or license files that are treated as suggestions.


What if you used the LLM to generate works that were already copyrighted?


There was a recent case that everyone has been describing as "LLM output can't be copyrighted" but what it actually said was you can't register the AI as the author.


This is not true, and I'd love to see some actual citation here.

The courts have repeatedly said that copyright only applies to human creativity. The Supreme Court explicitly said this when they refused to hear the appeal:

https://en.wikisource.org/wiki/Thaler_v._Perlmutter,_Refusal...

> "We affirm our decision to refuse registration for the Work because it lacks the human authorship necessary to be eligible for copyright protection."

So they're saying that the LLM cannot be the author, because LLMs cannot claim copyright.

The related case about patents is more supportive of the narrative that AIs cannot be authors (see https://www.cafc.uscourts.gov/opinions-orders/21-2347.OPINIO...), specifically: "Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings."

The patent situation is that the Act says that inventor must be an individual, which the courts are interpreting to mean a human, so the LLM cannot be named as the inventor. So, in this case, yes, this is just saying that an LLM cannot be named as the inventor of a patent. That's not the same thing as the courts are saying with copyrights.


> So they're saying that the LLM cannot be the author, because LLMs cannot claim copyright.

They're saying that the LLM can't be the author.

Now suppose you supply the LLM with a prompt that contains human creativity, it performs a deterministic mathematical transformation on the prompt to produce a derivative text, and you want to copyright that, claiming yourself as the author. What happens then?

If you think the answer is that you can't, how do you distinguish that from what happens when someone writes source code and has a compiler turn it into a binary computer program? Or do you think that e.g. Windows binaries can't be copyrighted because they were compiled by a machine?


> Now suppose you supply the LLM with a prompt

My understanding was that they did in fact do just that, but the court somehow misunderstood what they were doing, and assumed that the LLM was working completely autonomously without any human input at all, which isn't really possible IMO. Someone told it what to do.

They also argued that you couldn't copyright an output that you can't explain how it came to be, i.e. if they had been able to articulate how an LLM works, the outcome might have been quite different, which I found surprising.

If art in general (human-made or otherwise) is always derived from existing influences... should we really be forced to explain how or why we created a piece of art in order to defend it?

The usual bar for copyright infringement of a derivative work is, from what I have seen, "how much did you copy from the original, and how obvious is it", which is of course a subjective determination that would be made by each individual judge or jury of a case.


> What happens then?

The part that the human created, the prompt, can be copyrighted.

The part that the LLM created, cannot be.

Copyright in code works exactly the same way: the source code is copyrighted. The binary code is only copyrighted to the extent that it is derived from the source code. This is well-established.


Maybe I am just misunderstanding something, but I feel like you might be contradicting yourself here... why can LLM output not be copyrighted, but compiler output can be?


No, that's the point - the compiler output is only copyrighted to the extent that it is derived from the source code. The compiler itself cannot create anything copyrightable, but because there is a deterministic link between the source code and the binary code, and the source code was the product of a human, the binary code is covered by the source code copyright.

It's like a photocopier. If you photocopy a page from a book, that page is still covered by the copyright of the book author, even if the page is 2x larger or otherwise transformed by the machine.


Powerful interests want it to be true.


IMO the bigger question is how would you even tell if a work was generated by an LLM? There's a ton of code being written out there; the folks who generated it are going to claim they authored it for copyright purposes, and those who want to use it are going to claim it was LLM-generated. So what happens?


The alleged author, when bringing a copyright infringement suit, will submit testimony claiming they wrote it. Parties to the suit will have a chance to present arguments and evidence. Then, the claim will be adjudicated by a judge and/or jury.


That code isn't going to be open source. And if you use someone else's closed source code you are violating laws that have nothing to do with copyright.


I'm not sure I understand. I'm not talking about stolen/leaked code here. I'm saying: imagine you claim you're the author of some piece of code. You may or may not have written it with an LLM, but even if so, assume you have the full rights to all the inputs. You post it publicly on GitHub. You don't attach a license, or perhaps you attach a restrictive license that doesn't permit much beyond viewing. Someone comes across your code, finds it brilliant, and wants to use it. If that code was non-copyrightable (such as generated via an LLM), then they're fine doing it without your permission, no? But if that code was copyrightable, then they're not permitted to do so, correct?

So now consider two questions:

1. You actually didn't use an LLM, but they believe & claim you did. Who has the burden of proof to show that you actually own the copyright, and how do they do so?

2. They write new code that you feel is based on yours. They claim they washed it through an LLM, but you don't believe so. Who has the burden of proof here and how do they do so?


Good questions.

My take on the answers (I am not a lawyer):

1. You copy their code. They bring a copyright claim (let's assume this isn't a DMCA thing and they're actually bringing a claim to court). Your defence is "the LLM wrote it so no copyright attaches". Since they're asserting their copyright claim, they would have to provide evidence for that claim (same as in any other copyright case), including providing evidence that a human wrote it (which is new, and required to defeat your defence).

2. They copy your code. You bring a copyright case. Their defence is "I used an LLM to wash the code without copying". Since they're not disputing your copyright claim to the original code, you don't have to defend or prove your copyright. But you do have to prove that their code infringes on your copyright, which would mean proving that the LLM copied your code when creating the new code. This has been done before by demonstrating similarity.


Can you expand on that, please? Which other laws are infringed if you use someone else's closed source code?


You used an illegal leak to train your llm


What makes the leak illegal other than copyright?

The occasional piece of software might be a trade secret, but a person downloading a preexisting leak isn't affected by those laws.


> What makes the leak illegal other than copyright? The occasional piece of software might be a trade secret, but a person downloading a preexisting leak isn't affected by those laws.

I think 18 U.S.C. § 1832 (a) (3) might answer your question? https://www.law.cornell.edu/uscode/text/18/1832


To qualify as a trade secret, you have to actually register it as a trade secret.

Closed-source code is not automatically a trade secret.


That's completely false as far as I'm aware. Where did you see this? A simple web search shows numerous sources to the contrary. Are you confusing them with patents by any chance? https://en.wikipedia.org/wiki/Trade_secret


Huh, TIL something new. I was sure they had to be registered. Thanks for the correction :)


Is Pierre Menard really the author of his Quixote?


I think it can be copyrighted, or it is a very complex legal issue. Coding support is used in commercial apps where copyrights are fully reserved. It cannot be feasibly determined whether any output is purely LLM or not.


I would be okay with just keeping it but limiting it severely. If you release music and you can't sell enough albums in 20 years, that's not society's problem. A lot of artists release albums every 1 - 3 years anyway, so they're always selling some records, or were before streaming became the way to "own" music. Most make their money off of concerts anyway.

For movies and shows, charge an increasing fee to renew the copyright. Eventually studios will give up certain movies. The older the movie, the more you pay.


We could also just have some of the rights go away after X amount of years. Maybe after so much time it's still not legal to copy the original work, but it is legal to make a cover song, or a derivative work using the same character. At another point maybe it's no longer illegal to copy for free, but it is still illegal to sell without permission.

I personally think we should have shorter limits for non-creator owners of copyright, and for creators it should be like 20 years or death whichever comes last. I also think compulsory licensing should exist for everything.


The problem here is that large companies can do whatever they want and regular people cannot. Don't worry, they won't be allowing you the same rights as these companies.


But some people designed their entire lives around the assumption of IP protections.

If we remove IP laws, we should remove all private property laws!


Yeah, I really don't think we want APIs to be protected by IP. But in this case it isn't just the API, there were also tests involved. I think you could make a pretty strong argument that if you used a test suite to get an agent to implement some code, the code is a derivative product of the test code.


I really don't think a book is a derivative work of the AI model you used to proofread it.


[dead]


What exactly is the difference between "a machine-readable contract for what the output has to be" and "source code"?

What is the difference between an "agent" and a "compiler"?

For that matter, what is the difference between "I got an agent to provide a high level description" and a decompiler?

What is the difference between ["decompiling" a binary, editing the resulting source, recompiling, and redistributing] and [analyzing the behavior of a binary, feeding that description into an LLM, generating source code that replicates that behavior, editing that, recompiling and redistributing]?

Takeaway: we are now in a world where software tools can climb up and down the abstraction stack willy nilly and independently of human effort. Legal tools that attempt to track the "provenance" of "source code" were already shaky but are now crumbling entirely.


One of the interesting things here is that copyright is for expression and patent is for function. You can't copyright function.


Sounds very similar to that whole API lawsuit with Oracle.


And the whole Adobe PDF thing, and the whole Microsoft Word thing. And the whole IBM PC thing. Imagine if we were forced to keep using IBM from when they lost their way until now, simply because anti-AI luddites were able to scare-monger.


I would wager that the vast majority of people commenting here about the pitfalls of AI, especially as it relates to governance and laws, are heavy users of AI, recognize the import and value it brings, and find ways to utilize it more themselves, so I'm not sure an ad-hominem dismissal of very valid objections is going to be effective.

(side bar: the phrase "anti-<whatever> luddites" is way, way overused, especially here. Let's get more creative, people!)


And yet the term luddite seems to fit the anti-AI crowd perfectly. They are largely concerned about employment (and, more generally, economic stability) and to that end seek measures intended to protect workers.

There's also some environmentalist concerns which the term luddite again fits perfectly. You just have to generalize, transferring laterally from economic wellbeing to environmental wellbeing.

So I don't think GP qualified as an ad hominem dismissal but rather an accurate description of the situation. Take what's being discussed (restrictions on specifications and interoperability), project it backwards in history, and imagine what an alternate present day would look like. I think it would be pretty bad.


>They are largely concerned about employment (and more generally economic stability) and to that end seek measures intended to protect workers.

Pffft no. Most of us think that AI is being used as a political trick - like firing unionized workers "to replace them with AI" and then hiring new un-unionized workers to replace them, 2 weeks later. Replace the AI with an empty cardboard box labeled "AI" in black marker, and nothing changes.

See also: using AI to launder pirated material, for big businesses.


>a political trick - like firing unionized workers

1. Since when have companies needed trillions of dollars of AI to do that? In the US they've been able to get away with getting rid of unions for decades now.

2. Since when has HN given a shit about unions? Posting about unions, at least till recently, has been a great way of getting your comment downvoted to [dead] in one easy step. For longer than LLMs have existed, the HN answer to unions was "They are just there to keep me as an SWE from making as much money as I can". Only now do we see a little bit of pushback, now that their heads may be next on the chopping block.


> and more generally economic stability

Who doesn't enjoy interesting times


HN comments on LLM agents are bi-modal now, as are views in the general population. The modes are adopters and non-adopters.

There isn’t much of a middle ground anymore.


It'd be interesting (earnestly!) to see someone make a solid case for AI reimplementation being bad but that the original (afaik) "clean room" project, Compaq's reimplementation of IBM's PC BIOS (something most people seem to see as a righteous move toward openness and freedom), was good.


What if there was a special exemption for using a specification if you open source (or open hardware) the result, for some definition roughly (or exactly) equivalent to the OSI definition of open source, or the FSF's definition of free software?

Although I think the chance of that happening is effectively zero.


Copyright has always benefited those with power, down to the very first instance: Albrecht Dürer bullying little children who wanted to make inferior copies of his prints so that their families could enjoy the art. Dürer insisted the art was only for nobles. Ab initio.


It is not about throwing away the right to implement things. As long as it is done according to the license of the works modified or copied, one can do that. What this is against is people washing away a license that is meant to keep things open, transparent, and free. It enables businesses to go back to completely proprietary systems, which will impact your rights.

I am for keeping the licenses in place, as long as there is any copyright at all on software. If we get rid of that, then we can get rid of copyleft licenses and all others too. But of course businesses and greedy people want to have their cake and eat it too. They want copyleft to disappear, but _their_ software, oh no, no one may copy that! Double standards at their best.


You're asking for exactly the same cake. You want the GPL to pass through this process, but not the proprietary licenses that the original GNU tools were washing away.

(the paradox of copyleft is that it does tend to push free software advocates in a direction of copyright maximalism)


Our foreparents of FOSS (e.g. RMS) fought to destroy copyright. Copyleft was a subversive mechanism to "neutralize" copyright using the laws of copyright against itself.

Hypothetically, I think this trail of suggestions treating specs as intellectual property would simply destroy copyright for software, which is what we (the people who believe in FOSS) want. There is already case law protecting specs (e.g. Java).


RMS fought for user freedom, not to destroy copyright. Copyleft wasn't to neutralize copyright, but to ensure user freedom for downstream users.


The specification of chardet, which started this all off, is essentially forced by the Unicode standard though.


SCOTUS ruled on this already when Google copied Sun’s Java wholesale.


Oracle's Java. Oracle bought Sun, including Java, then started throwing lawsuits over something they didn't even make. IP is absurd.


>Our foreparents fought for the right to implement works-a-like to corporate software packages, even if the so-called owners did not like it

Our "foreparents" weren't competing with corporations with unlimited access to generative AI trained on their work. The times, they are a-changin'.

You're rehashing the argument made in one of the articles which this piece criticizes and directly addresses, while ignoring the entirety of what was written before the conclusion that you quoted.

If anyone finds themselves agreeing with the comment I'm responding to, please, do yourself a favor and read the linked article.

I would do no justice to it by reiterating its points here.


I believe the GP post is saying that if we react to the new AI-enabled environment by arbitrarily strengthening IP controls for IP owners, the greatest beneficiaries will almost certainly be lawyer-laden corporations, not communities, artists, or open source projects. That seems like a reasonable argument.

It seems like the answer is to adjust IP owner rights very carefully, if that's possible. It sounds very hard, though.


The article makes the same point; the quote was taken out of context.

The point the author was making was that the intent of GPL is to shift the balance of power from wealthy corporations to the commons, and that the spirit is to make contributing to the commons an activity where you feel safe in knowing that your contributions won't be exploited.

The corporations today have the resources to purchase AI compute to produce AI-laundered work, work that wouldn't be possible without the commons the AI got its training data from, while giving nothing back to the commons.

This state of things disincentivizes contributing to the FOSS ecosystem, as your work will be taken advantage of while the commons gets nothing.

Share-alike clause of the GPL was the price that was set for benefitting from the commons.

Using LLMs trained on GPL code to "reimplement" it creates a legal (but not a moral!) workaround to circumvent the GPL and avoid paying the price for participation.

This means that the current iteration of GPL isn't doing its intended job.

GPL had to grow and evolve. The Internet services using GPL code to provide access to software without, technically, distributing it was a similar legal (but not moral) workaround which was addressed with an update in GPL.

The author argues that we have reached another such point. They don't argue what exactly needs to be updated, or how.

They bring up a suggestion to make copyrightable the input to the LLM which is sufficient to create a piece of software, because in the current legal landscape, creating the prompt is deemed equivalent to creating the output.

You can't have your cake and eat it too.

A vibe-coded API implementation created by an LLM trained on open source, GPL licensed code can only be considered one of two things:

— Derivative work, and therefore, subject to the requirement to be shared under the GPL license (something the legal system disagrees with)

— An original work of the person who entered the prompt into the LLM, which is a transformative fair use of the training set (the current position of the legal system).

In the latter case, the input to the LLM (which must include a reference to the API) is effectively deemed to be equivalent to the output.

The vibe-coded app, the reasoning goes, isn't a photocopy of the training data, but a rendition of the prompt (even though the transformativeness came entirely from the machine and not the "author").

Personally, I don't see a difference between making a photocopy by scanning and printing, and by "reimplementing" API by vibe coding. A photocopy looks different under a microscope too, and is clearly distinguishable from the original. It can be made better by turning the contrast up, and by shuffling the colors around. It can be printed on glossy paper.

But the courts see it differently.

Consequently, the legal system currently decided that writing the prompt is where all the originality and creative value is.

Consequently, de facto, the API is the only part of an open source program that has a chance of being protected by copyright.

The author argues that perhaps it should be — to start a conversation.

As for who the beneficiaries of a change like that would be — that, too, is not clear-cut.

The entities that benefit the most from LLM use are the corporations which can afford the compute.

It isn't that cheap.

What has changed since the first days of GPL is precisely this: the cost of implementing an API has gone down asymmetrically.

The importance of having an open-source compiler was that it put corporations and contributors to the commons on equal footing when it came to implementation.

It would take an engineer the same amount of time to implement an API whether they do it for their employer or themselves. And whether they write a piece of code for work or for an open-source project, the expenses are the same.

Without an open compiler, that's not possible. The engineer having access to the compiler at work would have an infinite advantage over an engineer who doesn't have it at home.

The LLM-driven AI today takes the same spot. It's become the tool that software engineers can and do use to produce work.

And the LLMs are neither open nor cheap. Both creating them as well as using them at scale is a privilege that only wealthy corporations can afford.

So we're back to the days before the GNU C compiler toolchain was written: the tools aren't free, and the corporations have effectively unlimited access to them compared to enthusiasts.

Consequently, locking down the implementation of public APIs will asymmetrically hurt the corporations more than it does the commons.

This asymmetry is at the core of GPL: being forced to share something for free doesn't at all hurt the developer who's doing it willingly in the first place.

Finally, looking back at the old days ignores the reality. Back in the day, the proprietary software established the APIs, and the commons grew by reimplementing them to produce viable substitutes.

The commons did not even have its own APIs worth talking about in the early 1990s. But the commons grew way, way past that point since then.

And the value of the open source software is currently not in the fact that you can hot-swap UNIX components with open source equivalents, but in the entire interoperable ecosystem existing.

The APIs of open source programs are where the design of this enormous ecosystem is encoded.

We can talk about possible negative outcomes from pricing it.

Meanwhile, the already happening outcome is that a large corporation like Microsoft can throw a billion dollars of compute on "creating" MSLinux and refabricating the entire FOSS ecosystem under a proprietary license, enacting the Embrace, Extend, Extinguish strategy they never quite abandoned.

It simply didn't make sense for a large corporation to do that earlier, because it's very hard to compete with free labor of open source contributors on cost. It would not be a justifiable expenditure.

What GPL had accomplished in the past was ensuring that Embracing the commons led to Extending it without Extinguishing, by a Midas touch clause. Once you embrace open source, you are it.

The author of the article asks us to think about how GPL needs to be modified so that today, embracing and extending open-source solutions wouldn't lead to commons being extinguished.

Which is exactly what happened in the case of the formerly-GPL library in question.


I think the article in fact reaches the exact opposite conclusion it should. I'm not really sure how useful it is to talk about sharing and commons and morals when the point raised was about what is possible. The prescription includes copyleft APIs. These are not possible under Oracle v Google. And you could point it out if I'm wrong but the article doesn't discuss what would happen if Congress acted to reverse Oracle v Google (IMO a cosmically bad idea).


Adding even more intellectual property nonsense isn't going to work. The real solution is to force AI companies to open up their models to all. We need free as in freedom LLMs that we can run locally on our own computers.


I agree. But IMHO that ship has sailed. This should have been stopped when OpenAI went for-profit.

If you want to build a new world without this, we can't do it while we are supporting the very companies that are creating the problem. The more power you give them, the stronger they get and the weaker we become.

I think focus needs to shift completely off of for-profit companies. Although, not sure how that is going to happen..lol


Force them to open (and host) all their training data. They stole it from the public to sell it back to us anyway.


>Adding even more intellectual property nonsense isn't going to work.

[citation needed]

Where does your confidence come from?

GPL itself was precisely the "intellectual property nonsense" adding which made FOSS (free as in freedom) software possible.

The copyright law was awfully broken in the 1980s too. Adding "nonsense" then was the only solution that proved viable.

Historically, nothing but adding "more IP nonsense" has ever worked.

>The real solution is to force AI companies to open up their models to all.

Sure. Pray tell how you would do that without some "intellectual property nonsense".

We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.

>We need free as in freedom LLMs that we can run locally on our own computers

Oh, on that note.

LLMs take a fuckton of compute to train and to even run.

Even if all models were open, we're not at the point where it would create an equal playing field.

My home computer and my dev machine at work have the same specs. But I don't have a compute farm to run a ChatGPT on.


> Where does your confidence come from?

From the fact that copyright infringement is trivial and done at massive scales by pretty much everyone on a daily basis without people even realizing it. You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend. Everybody does stuff like this every single day. Nobody cares. It is natural.

> GPL itself was precisely the "intellectual property nonsense"

Yes. In response to copyright protection being extended towards software. It's a legal hack, nothing more. The ideal situation would have been to have no copyright to begin with. The corporation can copy your code but you can copy theirs too. Fair.

> Pray tell how you would do that without some "intellectual property nonsense".

Intellectual property is irrelevant to AI companies.

Intellectual property is built on top of a fundamental delusion: the idea that you can publish information and simultaneously control what people do with it. It's quite simply delusional to believe you can control what people do with information once it's out there and circulating. The tyranny required to implement this amounts to totalitarian dictatorships.

If you want to control information, then your only hope is to not publish it. Like cryptographic keys, the ideal situation is the one where only a single copy of the information exists in the entire universe.

AI companies are not publishing any information. They are keeping their models secret, under lock and key. They need exactly zero intellectual property protection. In fact such protections have negative value to them since it restricts the training of their models.

> We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.

Sure you do. The whole point of government is to do just that. Literally pass some kind of law that forces the corporations to publish the model weights. And if the government refuses to do it, people can always rise up.

> Even if all models were open, we're not at the point where it would create an equal playing field.

Hopefully we will be, in the future.


> You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend.

Respectfully, you have no idea what you are talking about here.


Why not? There have been lawsuits over just these behaviors in the past. Hell, even the multiple representations of the picture in computer memory have had to have allowances.

Copyright is a gigantic fucking mess that the US has forced over a large chunk of the world.


> there have been lawsuits over just these behaviors in the past

How did they turn out?


It depends if you count the ones that were settled behind NDAs with large companies with unknown amounts being paid out that are ticking time bombs waiting to go off in the future.


Let's just count the ones we know about? You sound evasive ;)

Remember, the original poster said that any time my browser downloads a picture on any website (which is a technical requirement to show it), I am infringing on those rights. If that were the US courts' opinion, it would be absolutely stupid.

Of course if you reshare some work that actually is somebody's property you can be totally infringing. Which makes total sense. Except when big tech does the same to us (LLM and diffusion training) it's suddenly ok and that's insane


Copyright is the right to make copies. The creator of a work has a government granted monopoly on that right.

When I download a picture from a website and save it to my machine, I am making a copy of it. If the photographer has not given me explicit permission to do so, then I have infringed on their rights by making an unauthorized copy of their work.

The mere existence of licenses like the creative commons refutes your argument. They would not be necessary if you could just download whatever without infringing copyrights.


Downloading a picture needs to happen to show it. Without it it cannot be shown. I'm sure courts figured out that viewing a picture via browser is not infringing.


You forgot the "save it to my machine" part which lets me view the picture whenever I want without visiting its creator's website repeatedly. This means I don't need to be exposed to ads, which in turn lowers the creator's income. It also means other people can get the picture from me rather from the creator. Even less ads and payments.

Obviously the creators want more money and control. Thus copies are only allowed to be made if it benefits them. Viewing a copy of the picture via the website might be permitted, but saving it or sharing it might not.

The truth is nobody really cares what creators want. People will save and share and edit and meme it all up because they can. It is natural. It is their delusional belief that they can control what others do with information that is out of touch with reality.


I'm talking downloading that needs to happen to show the picture

you said

> You infringe copyright every time you download a picture off of a website

no, browser downloads a copy of the picture in order to show it and it is fine by courts

that's why I'm saying you don't know what you're talking about

> This means I don't need to be exposed to ads, which in turn lowers the creator's income

duh.


You might be thinking of fair use, but that's an affirmative defence. Every time someone has copied someone else's artwork and modified it into a meme, that's copyright infringement, and it remains so even if it is eventually ruled as fair use. If you make a fair use claim, you don't deny infringement; you claim that you were allowed to infringe.


Try replacing "picture" with "music file".


So... People are going to rise up? What makes you think most of them have enough slack in their finances to pack up and haul off to D.C.? Only the elites do, and they pay full-time lobbyists to do exactly that, to make sure laws like you mention never pass. Not saying it can't work. Just saying the game is rigged against the very people you want to rise up, and in favor of the ones who'd rather you stayed in bed.


If people don't rise up they will become soylent green. Over the long term, AI threatens to replace all human labor. It cannot remain locked away in corporate servers. This is an existential issue. The ultimate logic of capitalism is that unproductive people need not be kept alive since they add nothing but cost. So either we free AI, collapse the very idea of having an economy and transcend capitalism into a post-scarcity society, or we will be enslaved and genocided by those who control the AIs.


Hence why we see more and more pushes to control communication on the internet. It's going to be hard to free AI when a panopticon is turned against us to prevent exactly that.


I mean. Yeah. GPL's genius was that it used Copyright, which proprietary enterprise wouldn't dare dismantle, to secure for the public a permanent public good.

Pretty sure no one (well, no one but me) saw overt theft of IP coming, accomplished by ignoring IP law through redefinition. Admittedly, I couldn't have articulated for you that capital would transfer the skill and commoditize it in the form of pay-to-play data centers, but give me a break, I was a teenager/twenty-something at the time.


I agree with the comment and find the linked article motivated reasoning at best. It's easy to find something "morally good" when it aligns with what you wanted. But plenty of people at Oracle, at IBM, at Microsoft, at Nintendo, at Sony, and plenty of other companies whose moats have been commoditized by open source knockoffs don't find such happenings to be "morally good". And even if in general you think that "more freedom" justifies these sorts of unauthorized clones, then Oracle v. Google was at best a lateral move, as Java was hardly a closed ecosystem. One also wonders how far the idea of "more freedom" = "good" goes. How does (or did, if Qualcomm's recent acquisition changes the position) the author feel about the various Chinese knockoff clones of the Arduino boards and systems? Undeniably they were a financial good for hobbyists and the maker world alike, and they were well within the "legal" limits, and certainly they "opened" the ecosystem more. But were they "good"? Was the fact that they competed with and undersold Arduino's work without contributing anything back, making it harder financially for Arduino to continue their work, a "moral good"?

If "more freedom" is your goal, then this rewrite is inherently in that direction. It didn't "close" the old library down. The LGPL version remains under its license, for anyone to use and redistribute exactly as it always has. There is just now also an alternative that one can exercise different rights with. And that doesn't even get into the fact that "increased freedom" was never a condition of being allowed to clone a system from its interfaces in the first place. It might have been a fig leaf, but some major events in the legal landscape of all this came from closed reimplementations. Sony v. Connectix is arguably the defining case for dealing with cloning from public interfaces and behavior as it applies to emulators of all kinds, and Connectix Virtual Gamestation was very much NOT an open source or free product.

But to go a step further, the larger idea of AI-assisted rewrites being "good", even if the human developers may have seen the original code, seems to broadly increase freedoms overall. Imagine how much faster WINE development can go now that everyone who has seen any Microsoft source code can just direct Claude to implement an API. Retro gaming and the emulation scene are sure to see a boost from people pointing AIs at any tests in source leaks and letting them go to town. No, our "foreparents" weren't competing with corporations with unlimited access to AI trained on their work; they were competing with corporations with unlimited access to the real hardware and schematics and specifications. The playing field has always been un-level, which was why fighting for the right to re-implement what you can see with your own eyes and measure with your own instruments was so important. And with the right AI tools, scrappy and small teams of developers can compete on that playing field in a way that previous developers could only dream of.

So no, I agree with the comment that you're responding to. The incredible mad dash to suddenly find strong IP rights very very important now that it's the open source community's turn to see their work commoditized and used in ways they don't approve of is off-putting and in my opinion a dangerous road to tread that will hand back years of hard fought battles in an attempt to stop the tides. In the end it will leave all of us in a weaker position while solidifying the hold large corporations have on IP in ways we will regret in the years to come.

