I'm still using Win32 (via WTL) to write a developer-focused application, but I know that someday I will need to switch to Qt, Dear ImGui, or develop my own UI abstraction layer.
The biggest disappointment has been that all new OS features require a new runtime and are no longer accessible from pure Win32. Heck, the pipeline for developing UWP/WinUI3 apps requires extra steps to register and install the application just to run it inside a debugger.
You laugh, but thanks to those critical minerals ads during cartoons, my kids are now begging me for praseodymium and scandium. Prices for rare earths are through the roof, but my 10-year-old just won't accept that she can't refine advanced alloys in this economy.
If she wants to refine advanced alloys then she should look into the environmental regulations first; there's a reason nearly all such processing is done in China or Southeast Asia.
That's a nice start, but it's really just a band-aid over the real problem, kind of like how politicians don't actually want to solve the underlying issue but instead just want to be seen to be doing something.
The real insanity is gambling in Australia generally. As the article mentioned, Australians have the highest gambling losses per capita anywhere in the world. This goes beyond sports betting, which is just the latest thing, to poker machines (slot machines for you USA people) and other online gambling (cough Kick & related companies cough).
I am old enough to remember the introduction of poker machines in pubs and clubs here in Australia. It was always framed as a personal choice and a government revenue source, but all it did was funnel money from families to the big corporate "entertainment venue" operators (pub and club owners). I'd love it if Australia removed gambling from normal parts of society and limited it to strictly regulated casinos (at most), but the gambling industry is so firmly entrenched in politics and society that I don't see any change happening soon.
I think we must be of a similar generation, I also remember being a kid seeing those stupid ads with celebrities “bending their finger”. It’s really gross to think how normalised gambling is.
There is another side to this though, and that’s our pub culture and how “big gambling” plays a role in keeping pubs financially viable with the help of pokies. In the wise words of Tim Freedman of The Whitlams: “Blow up the pokies”
What if all the people currently using these "AI" services are the entire market for those services? I'm pretty sure everyone that wants to use LLMs is already doing so and already paying for the service.
That would mean the only way to increase growth would be to charge more per token and to get existing users to consume more tokens. Both of which are what mature companies do when trying to squeeze the cash cow for all its worth.
It also explains why they're trying to stuff AI into everything: to keep the numbers up, and to get everyone to try it and pay them money.
When I show people personal projects I’ve vibe-coded with Claude Code, they often seem impressed and envious. They come up with ideas for things that they would like to do, too. But they have full-time jobs outside of IT, and when I mention they might need to use the terminal to do what I’m doing now their eyes glaze over.
A couple of such people, after they learned about Claude Cowork, signed up for Anthropic subscriptions and are now using it in their jobs. But overall my impression is that there is still huge potential demand from regular people who use computers for agentic systems with less barrier to entry, and that many will be willing to pay for such systems when more mature and user-friendly ones arrive.
Most of humanity hasn't figured out they need to adapt yet. It's a bit like email and the internet in the mid nineties. People had heard about it but hadn't really embraced it yet. Five years later most people with white collar jobs had email addresses. Fifteen years later, billions of internet capable smartphones were in circulation.
The AI revolution is following a similar adoption curve. Right now many of the tools are only really usable if you are a developer, or at least not too shy about letting AI agents use developer tools on your behalf. It's not going to stay that way for very long. It's going to be a messy transition that will likely take much longer than some people seem to think. But eventually most people doing knowledge work will be leaning heavily on all sorts of AI agents to do their thing. And quite a few will have to learn new skills as most of the stuff they still do manually today simply stops being something you do manually.
Like the mid nineties, these are amazing times for people with a slight head start over everybody else. Which is why there is such an investment frenzy around AI right now. Lots of possibilities where lots of money might be made. And lots of things that won't work out. And lots of people really not seeing the forest for the trees as well. And generally behaving like headless chickens. But the internet in the end proved to be not a fad and it didn't all go back to normal after the hype died and the .com bubble burst.
IMHO, the bubble around AI is not so much the technology as things like data center and energy pricing. Long term, the cost of building data centers will be a fraction of what it is today (currently dominated by GPUs costing tens of thousands of dollars). Likewise, cheap and plentiful energy to power them will eventually cost a lot less. Short-term scarcity is eating up billions right now, but you'd be mistaken to confuse that for long-term structural cost. Costs will come down, and that will drive adoption. And that's before you consider edge compute on commodity phones and laptops. There will be billions of devices running small AI agents. Add robotics to the mix and it's a whole new world.
In short, companies like OpenAI and Anthropic are valued so high because all of that is happening right now. Yes, it's a bit of a bubble. But stuff will definitely happen.
On the other hand, the productivity gains from AI automation are so large that you are forced to use it to compete in the workplace. Even if you strongly dislike the terminal, you will dislike homelessness more.
Think about all the "people" AI services can displace in due time. There's a fuckload of pencil pushers / knowledge workers with 100k student loans whose lifetime contribution can probably be matched by a few hundred dollars' worth of tokens. And TBH, normalizing the AI crutch for kids is going to make a large % of future cohorts even more replaceable. Skills among youth are atrophying hard; AI is basically crippling future workforce quality, making their displacement even easier. There's even less reason to hire entry level in 4 years, not just because models get better but because human capital is going to be so much worse.
The market hasn’t been built out yet. There’s that post from a couple days ago where someone frontloaded the entire UX of an operating system onto an LLM, so you just tell the hardware what you want to do and it does it. https://news.ycombinator.com/item?id=47557165
The growth is there but it’s going to be a marathon, not a sprint. I don’t know why everyone’s in such a goddamn hurry all the time
Give it a year or two, and Apple or some other hardware vendor will have unified memory, or AMD will have a good offering, to run all that stuff locally. It won't be as good as Claude, but it'll do for 90% of things. It will be expensive at first, just like the first mainframes; then give it another 5 years or so and it'll be affordable.
People who do light office work tend to have light office machines, which are very unlikely to have powerful NPUs or even much RAM. With such minimal hardware, is it even feasible to do any sort of LLM-based work locally, or will those machines all be dumb terminals connecting to the big companies' hosted LLMs?
This is what I wonder too: what practical applications can you actually run locally on something like a minimum-spec NPU?
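A rough back-of-envelope calculation helps frame the question: the memory a model's weights need is roughly parameter count times bytes per weight. The sketch below uses illustrative model sizes (a 7B-parameter model is a common local-inference size) and deliberately ignores KV cache and runtime overhead, so real usage would be higher.

```python
# Back-of-envelope: does a local model's weight memory fit in a light
# office machine's RAM? weights_bytes ~= params * bits_per_weight / 8.
# KV cache and runtime overhead are ignored, so real usage is higher.
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model quantized to 4 bits needs roughly 3.5 GB just for weights,
# which is borderline on an 8 GB office laptop once the OS is loaded.
print(round(weight_memory_gb(7, 4), 1))   # 3.5
# The same model at 16-bit precision needs ~14 GB: hopeless on that hardware.
print(round(weight_memory_gb(7, 16), 1))  # 14.0
```

By this estimate, only small, heavily quantized models are plausible on typical office hardware, which supports the dumb-terminal concern for anything bigger.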
Because it has a high likelihood of being written entirely by an LLM without any human thought or attention put into it.
Being written by an LLM is a signal that the submission is low effort and therefore probably low quality, which shifts the onus onto the people reviewing and reading the submission instead of the original generator. Hence I would classify it as spam.
Many open source communities also have rules against LLM-generated contributions, for various moral, ethical, or legal reasons.
Using an LLM to fix a spelling mistake is incredibly lazy.
Presumably they used a free version of the LLM, so it is completely understandable that it inserted a snippet of text advertising its use into the output. Using a free email provider also appends a line of text to every email advertising the service by default - "Sent from my iPhone" etc.
> Using an LLM to fix a spelling mistake is incredibly lazy.
If you do it manually, sure.
If you have an agent watching for code changes and automatically opening PRs for small fixes that don't need a human in the loop except to approve the change, it's the opposite of lazy. It eliminates all those tedious 1-point stories and lets the team focus on higher-value work that actually needs a person to think about it.
Given time all small changes will be done this way, and eventually there won't be a person reviewing them.
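The core of such an agent can be a simple loop over files that flags known typos and produces the corrected text for a bot to commit. This is a hypothetical sketch of that idea, not any real tool: the typo dictionary and sample text are made up, and a real agent would wrap this in a file watcher and a PR-opening step.

```python
# Hypothetical sketch of an automated small-fix agent: find known typos
# in text and produce the corrected version that a bot could turn into
# a PR. The typo list and sample text are purely illustrative.
import re

KNOWN_TYPOS = {
    "recieve": "receive",
    "seperate": "separate",
    "occured": "occurred",
}

def suggest_fixes(text: str) -> list[tuple[str, str]]:
    """Return (typo, correction) pairs found in text, whole words only."""
    return [(typo, fix) for typo, fix in KNOWN_TYPOS.items()
            if re.search(rf"\b{typo}\b", text)]

def apply_fixes(text: str) -> str:
    """Apply every known correction; the result would become the PR diff."""
    for typo, fix in KNOWN_TYPOS.items():
        text = re.sub(rf"\b{typo}\b", fix, text)
    return text

doc = "We recieve events and store them in seperate queues."
print(suggest_fixes(doc))  # [('recieve', 'receive'), ('seperate', 'separate')]
print(apply_fixes(doc))    # We receive events and store them in separate queues.
```

The human-in-the-loop part is then just reviewing the generated diff, which is the approval step described above.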
That scenario doesn't require any explicit "summoning", and if there's a human in the loop approving the change, certainly they can fix the typo themself.
Sounds like a great use of energy and tokens, not overkill at all
As much as AI uses a lot of energy, having something that fixes issues in the background is very likely to be a net saving if you consider the number of users who fail to complete a task due to the bug and have to either wait in a broken state or retry later.
It's probably using less energy than a person fixing the issue too. That's a guess though.
I may be in the minority but I like that C++ has multiple package managers, as you can use whichever one best fits your use case, or none at all if your code is simple enough.
It's the same with compilers, there's not one single implementation which is the compiler, and the ecosystem of compilers makes things more interesting.
Multiple package managers is fine, what's needed is a common repository standard (or even any repository functionality at all). Look at how it works in Java land, where if you don't want to use Maven you can use Gradle or Bazel or what have you, or if you hate yourself you can use Ant+Ivy, but all of them share the same concept of what a dependency is and can use the same repositories.
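The shared "concept of a dependency" in JVM land is concrete: a `group:artifact:version` coordinate maps to a well-known path in any Maven-layout repository, which is why Maven, Gradle, and Bazel can all resolve from the same server. A minimal sketch of that mapping (the repository URL is Maven Central's real one; the helper function is illustrative):

```python
# Sketch of the standard Maven repository layout: a single coordinate
# string resolves to the same artifact path regardless of build tool.
MAVEN_CENTRAL = "https://repo.maven.apache.org/maven2"

def artifact_url(coordinate: str, repo: str = MAVEN_CENTRAL) -> str:
    """Map 'group:artifact:version' to the standard Maven repository path."""
    group, artifact, version = coordinate.split(":")
    return (f"{repo}/{group.replace('.', '/')}/"
            f"{artifact}/{version}/{artifact}-{version}.jar")

print(artifact_url("com.google.guava:guava:33.0.0-jre"))
# https://repo.maven.apache.org/maven2/com/google/guava/guava/33.0.0-jre/guava-33.0.0-jre.jar
```

C++ has no equivalent shared coordinate scheme or repository layout, which is the gap being described here.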
Also, having one standard packaging format and registry doesn't preclude having alternatives for special use cases.
There should be a happy path for the majority of C++ use cases so that I can make a package, publish it and consume other people's packages. Anyone who wants to leave that happy path can do so freely at their own risk.
The important thing is to get one system blessed as The C++ Package Format by the standard to avoid xkcd 927 issues.
In the Linux world, and even on Haiku, there is a standard package dependency format, so dependencies aren’t really a problem. Even macOS has Homebrew. Windows is the odd one out.
On the contrary, most Linux distributions use platform-specific global-only packaging formats for C++ libraries, and if anything I think that's holding back the development of a real, C++-native packaging/dependency manager.
That would actually be pretty cool. Though I think there might have been papers written on this a few years ago. Does anyone know of these or have any updates about them?
CPS[1] is where all the effort is currently going for a C++ packaging standard: CMake shipped support in 4.3 and Meson is working on it. The pkgconf maintainer said they have vague plans to support it at some point.
There's no current effort to standardize what a package registry is or how build frontends and backends communicate (a la PEP 517/518), though it's a constant topic of discussion.
I asked recently on social media whether there has been any legal decision on whether GPL source code used to train an LLM taints all of that LLM's output with the GPL licence. So far nothing has come up, but I think people want to know the answer.
It has been said that Microsoft indemnifies people using its LLM tools against copyright and patent claims, but I don't know whether that applies to LLM output which might/should be GPL licenced.