Is it though? I would say 'proof of concept' instead.
The fact that it's running on a phone now just sets the goalpost and gets everyone excited about it: add more RAM and GPU to the next iPhone and it's no longer a toy. Coincidentally, phone companies also have thousands of engineers sitting around wondering what to do in their next release to convince consumers to buy ...
'Toy' and 'proof of concept' are synonymous. What this really opens up is running non-toy models like Qwen3.5 35B-A3B, which are still considered very large in the mobile device context. Yes, it's too slow for interactivity, but if you accept that it's supposed to deliver "Pro"-level inference, it works quite well.
> add more RAM and GPU to the next iPhone and it's not a toy anymore
We're not going to get more RAM and GPU in consumer devices.
All of the supply is going into data center build-outs. As the hyperscaler gamble on the future continues, we get left with weaker (or more expensive) devices - not stronger ones.
The market makers make more money if we're left to thin clients. They're also the ones who control supply and the shapes of devices.
We're talking nearly five orders of magnitude of difference between 0.6 t/sec and 35k t/sec.
While there are problems that can be solved at 0.6 t/sec, particularly offline, at-the-edge, and in-the-field applications, these are currently vastly outnumbered by other applications.
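For a quick sanity check of the gap, using the two figures quoted above (the throughput numbers are the thread's, not measurements):

```python
import math

# Throughput figures quoted in the thread, not benchmarks.
local_tps = 0.6          # tokens/sec on the phone
datacenter_tps = 35_000  # tokens/sec quoted for datacenter inference

ratio = datacenter_tps / local_tps
print(f"ratio: {ratio:,.0f}x")                          # ~58,333x
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # ~4.8
```

So the gap is closer to five orders of magnitude than six, which doesn't change the argument much: it's still a huge gulf.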
absolutely, however this doesn’t mean we should abandon local. i can’t remember who, but someone in the ai nuts and bolts arena said “smaller local models is where the exciting stuff is happening right now. it’s the area real fast progression is happening.” and it seems to be true. new big models aren’t making near the leaps smaller models are.
it’s so important we keep moving forward on running locally, for the same reason it was important for us to use open standards when building the internet. if we hadn’t, we’d all be connected through aol with 10 hours/month of allowed internet usage, termed in through a sun workstation renting cpu cycles from some mainframe company at like “you’ve got 10,000 cpu cycles left on your monthly plan, please deposit $500 for 5,000 more.”
while all of this is before my time, i’ve heard and read so many horror stories about how people could only connect through dumb terminals to “you wouldn’t believe it, computers then were the size of buildings” 1000 miles away and had to sign up for workload timeslots. make no mistake, this is the future these companies want: they want us to rent everything and own nothing.
Local is enough for most users as long as they're willing to accept a non-realtime response - which is a real limitation (especially for personal agentic use) but not a very significant one. The hardware is not that expensive; a single user's needs aren't going to saturate a state-of-the-art AI datacenter rack or anything like that. Not even for heavy agentic workloads.
You rent your broadband internet. It's not a foreign concept that we can't own all the infra.
I don't know why we can't just get over the local compute thing and instead build open infra and models in the cloud. That's literally the only way we'll be able to keep pace with hyperscalers.
Local is not going to benefit 99% of use cases. It's a silly toy.
If we build open infra for cloud-based provisioning and inference, we could build a future we still have some ownership in. We'd be able to fine tune large models for lots of purposes. We wouldn't be locked in to major vendors.
i personally think we need to work towards both open weights in the cloud and local.
use the experience we gain from both to bolster the other.
a future where we are unable to locally run is kind of troubling. as is a future with no open cloud. we need both to stop some of the horrors the hyperscalers will happily inflict.
This is a toy.
We need to build open infrastructure in the cloud capable of hosting a robust ecosystem of open weights.
And then we need to build very large scale open weights.
That's the only way we don't get owned by the hyperscalers.
Edge compute isn't going to happen at a meaningful enough scale to save us.