UBI only means you won't starve or die of exposure. It doesn't mean that people who are already rich today won't become so obscenely rich tomorrow that they are above the law, or can change the law (and decide who gets medical treatment, or even take your UBI away).
Uhm... yes? The cost of downloading pirated music is essentially zero. The only reason people use services like Spotify is that it's extremely cheap while being a bit more convenient. But jack up the price and the masses will move to sail the seas again.
The cost of stealing has always been essentially zero. Same argument can be made for streaming, and yet Netflix is neither cheap nor struggling for subscribers.
Funny, their army of lawyers seems incapable of stopping me from easily downloading pirated software, or from coding an open alternative to their closed-source software with AI if I wanted to.
You cannot keep a purely legally-enforced moat in the face of advancing technology.
> If the cli can access the secrets, the agent can just reverse it and get the secret itself.
What do you mean by this? How "reverse it"? The CLI tool can access the secure storage, but that does not mean there is any CLI interface in the tool for the LLM to call and get the secret printed into the console.
In principle it could use e.g. `gdb` and step through the process until it gets the secret. Or it could know ahead of time where the app stores the credentials.
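To make that concrete, here's a minimal Linux-only sketch (all names and the secret are hypothetical) showing that any same-user process can read another process's environment via `/proc`, no `gdb` needed, so a CLI holding a secret in its environment offers no protection against a local agent:

```python
import os
import subprocess
import time

# Spawn a stand-in for the CLI tool, holding a hypothetical secret in
# its environment. "sleep" just keeps the process alive for the demo.
child = subprocess.Popen(
    ["sleep", "30"],
    env={**os.environ, "API_KEY": "hunter2"},  # hypothetical secret
)
time.sleep(0.3)  # give the child time to exec
try:
    # /proc/<pid>/environ is NUL-separated KEY=VALUE pairs, readable
    # by any process running as the same user.
    raw = open(f"/proc/{child.pid}/environ", "rb").read()
    env = dict(e.split("=", 1) for e in raw.decode().split("\0") if "=" in e)
    leaked = env["API_KEY"]
    print(leaked)  # prints the "protected" secret
finally:
    child.kill()
```

With `ptrace` (the mechanism `gdb` uses) a same-user agent can go further and read arbitrary process memory, so the fix has to be a real privilege boundary (different user, setuid, or a separate service), not just the absence of a CLI flag that prints the secret.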
We could use setuid binaries (the mechanism behind sudo) to prevent that, but currently I don't think we do. Almost anyone would agree that using a separate process, to which the agent environment provides a connection, is a better solution.
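A minimal sketch of that separate-process idea, with the socket path, protocol, and token derivation all invented for illustration: a broker process owns the secret and answers only narrow requests over a Unix socket, so the agent's environment never holds the raw key (note the broker would have to run as a different user, or ptrace still defeats it):

```python
import hashlib
import os
import socket
import threading
import time

SOCK = "/tmp/secret-broker.sock"  # hypothetical socket path
SECRET = "hunter2"                # would be loaded outside the agent sandbox

def broker() -> None:
    """Serve one request; hand out a derived token, never the raw secret."""
    if os.path.exists(SOCK):
        os.unlink(SOCK)
    srv = socket.socket(socket.AF_UNIX)
    srv.bind(SOCK)
    srv.listen(1)
    conn, _ = srv.accept()
    if conn.recv(1024) == b"auth-header":
        token = hashlib.sha256(SECRET.encode()).hexdigest()[:16]
        conn.sendall(f"Bearer {token}".encode())
    else:
        conn.sendall(b"denied")
    conn.close()

threading.Thread(target=broker, daemon=True).start()
time.sleep(0.1)  # crude wait for the listener in this single-file demo

# The agent side: it can request an auth header but never sees SECRET.
client = socket.socket(socket.AF_UNIX)
client.connect(SOCK)
client.sendall(b"auth-header")
reply = client.recv(1024).decode()
print(reply)  # a derived "Bearer ..." token, not the secret itself
```

The design point is that the broker's API is task-shaped ("give me an auth header", "sign this request"), so even a fully compromised agent can only do what the broker permits.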
A shared-nothing system is definitely a good starting point, but then it becomes impossible to use tools (no shared filesystem, no networking), so everything needs to happen over connections the agent environment provides.
MCP looks like it would then fit that purpose, even if there was an MCP for providing access to a shell. Actually I think a shell MCP would be nice, because currently all agent environments have their own ways of permission management to the shell. At least with MCP one could bring the same shell permissions to every agent environment.
Though in practice I just use the shell, and hardly use MCP at all; shell commands are much easier to combine, e.g. the agent can write and run a Python program that invokes any shell command. In the "MCP shell" scenario that whole flow would be handled by that one MCP; it wouldn't allow combining MCPs with each other.
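As a sketch of what portable "shell permissions" could look like (hypothetical names and policy, not any real MCP API): a gate that parses the command line and only executes programs on an allowlist, which the same policy file could then enforce across agent environments:

```python
import shlex
import subprocess

ALLOWED = {"ls", "cat", "git", "echo"}  # hypothetical policy

def run_gated(cmdline: str) -> str:
    """Run a shell command only if its program is allowlisted.

    Arguments are passed as argv (no shell=True), so pipes and
    redirections become inert text rather than shell syntax.
    """
    argv = shlex.split(cmdline)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"blocked: {cmdline!r}")
    return subprocess.run(argv, capture_output=True, text=True).stdout

print(run_gated("echo hello"))   # allowed
# run_gated("rm -rf /tmp/x")     # would raise PermissionError
```

A real gate would need more (argument-level rules, quoting edge cases, resource limits), but the point is that the policy lives in one place instead of in each agent's own permission system.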
That is fine, but you give up any pretence of security - your agent can inspect your tool's process, environment variables etc - so can presumably leak API keys and other secrets.
Other comments have claimed that tools are/can be made "just as secure" - they can, but as the saying goes: "Security is not a convenience".
Demand for top models is definitely not saturated, at least when it comes to programming. If I could afford to use 5x more Claude Opus 4.6 tokens, I would!
Demand is relative. How many Claude tokens would you buy if they had a 10x price hike?
The market has achieved its current saturation level with loss-leader prices that remind me of the Chinese bike-share bubble[0]. Once those prices go up to break-even levels (let alone profitable ones), the number of people who can afford to pay will drop dramatically (and that's not even accounting for the bubble pop further constricting people's finances).
There is no evidence that labs are losing money on inference subscriptions. The labs have massive fixed costs, but as long as inference revenue exceeds the cost of the datacenters serving it, all they need to do to become profitable is scale up. Right now software engineers are basically the only ones actually paying for inference; the labs just need to build assistants for everything else that are good enough that every white-collar worker in the country (world?) pays a $1000/yr subscription. Certainly there's a lot of risk: will models become commoditized, with everyone switching to open models? Can they actually get non-software-engineers to pay for inference en masse? But it's not like there's no path.
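To put hypothetical numbers on that scaling argument (every figure below is invented for illustration): with large fixed costs and a positive margin per subscriber, profitability becomes purely a volume question:

```python
# All numbers are hypothetical, for illustration only.
fixed_costs = 5_000_000_000  # annual training + R&D spend, $
sub_price = 1_000            # the $1000/yr subscription from above
serve_cost = 400             # assumed inference cost per subscriber/yr

margin = sub_price - serve_cost   # $600 per subscriber per year
breakeven = fixed_costs / margin  # subscribers needed to cover fixed costs
print(f"{breakeven:,.0f} subscribers to break even")  # 8,333,333
```

Roughly eight million subscribers against a white-collar workforce in the tens of millions per large economy, which is why the bet hinges entirely on the per-subscriber margin actually being positive.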
If they've already built themselves a loyal customer base (which is usually the point of fighting a price war), and the customers are happy with the technology they have, then when funding is tight and turning a profit matters more, why wouldn't they pivot to optimizing inference: stop further training, freeze the model versions, burn the weights into silicon, build better caching strategies, and improve the harnesses and tools that lower their cost and increase their margin?
If all they do is hike prices then they'll lose customers to competitors who don't or who find a way to serve a similar model cheaper.
The demand isn't going to go away purely through higher prices. Once people know something is possible they will demand it whether supply is constrained or not. That's a huge bounty for anyone who can figure out how to service that demand.
Easier said than done. What you're describing can take years to implement. Can OpenAI et al. keep burning cash at the same rate for two years while they wait for the salvation of custom silicon if the investments dry up?
Don't you see the massive problem with requiring visual input? Are blind people not intelligent because they cannot solve ARC-AGI-3 without a "harness"?
A theoretical text-only superintelligent LLM could prove the Riemann hypothesis but fail ARC-AGI-3, and so wouldn't even count as AGI according to this benchmark...
Well, it would be AGI if you could connect a camera to it to solve it, similar to how blind people would be able to solve it if you restored their eyesight. But if the lack of vision is a fundamental limitation of their architecture, then it seems more fair not to call them AGI.
I think I can confidently say they are not visually intelligent at all.
If you were phrasing things to quantify intelligence, you would have a visual intelligence pillar. And they would not pass that pillar. It doesn't make them dysfunctional or stupid, but visual intelligence is a key part of human intelligence.
Visual intelligence is a near-meaningless term, as it's almost entirely dependent on spatial intelligence. The visually impaired do have high spatial intelligence; I wouldn't be surprised if their spatial intelligence is actually higher on average than that of people without visual impairment.
I think they don't actually lack it, or lack only a small fraction of it (their brains are ≈99% like a sighted human's), such that if they were an AI model, they could fairly trivially be upgraded with vision capability.
Assistance from other humans? You do realise we're talking about an intelligence test, right? At that point, what are you even testing for? I'm sure you've taken exams where you couldn't bring your own notes, use Google, or get help from someone, even though real life doesn't have those constraints.
Well said. That's exactly what has been rubbing me the wrong way with all those "LLMs can never *really* think, ya know" people. Once we pass some level of AI capability (which we perhaps already did?), it essentially turns into an unfalsifiable statement of faith.