There's almost no point in arguing about this anymore. Neither you nor the other person is going to be convinced. We just have to wait and see if a new crop of 100x-productivity AI-believer companies comes along and unseats all the incumbents.
I'm deeply convinced there are two reasons we don't see real takes like this: 1) these people are quietly appreciating the 2-50% uplift you get from sanely using LLMs, instead of constantly posting sycophantic or doomer shit for clout and/or VC financing; 2) the real version of LLM coding is boring and unsexy. It either involves generating slop in one shot for a POC, then restarting from scratch for the real thing (or doing extensive remediation costing far more than the initial vibe effort), or it involves doing largely the same thing we've been doing since the assembler was created, except now I don't need to remember off-hand how to rig up boilerplate for a table-test harness in ${current_language}. And if I wrote a snippet with string ops and if statements and wish it used regexes and named capture groups, it's now easy to mostly-accurately convert it to the other form instead of just sighing and moving on.
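To make the string-ops-to-regex conversion concrete, here's a minimal sketch (the `key=value` format and function names are hypothetical, just chosen to illustrate the kind of mechanical rewrite an LLM handles well):

```python
import re

# String-ops version: splits, partition, and if-checks.
def parse_manual(s):
    result = {}
    for part in s.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            result[key] = value
    return result

# Regex version with named capture groups: same behavior, clearer intent.
PAIR = re.compile(r"(?P<key>[^=;]+)=(?P<value>[^;]*)")

def parse_regex(s):
    return {m["key"]: m["value"] for m in PAIR.finditer(s)}

line = "user=alice;role=admin"
assert parse_manual(line) == parse_regex(line) == {"user": "alice", "role": "admin"}
```

Converting between these two forms by hand is tedious enough that most people just sigh and move on; it's exactly the kind of boring, verifiable transformation where an LLM earns its keep.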
But that's boring nerd shit and LLMs didn't change who thinks boring nerd shit is boring or cool.
> because the real version of LLM coding is boring and unsexy
Some people do find it unfun, saying it deprives them of the happy "flow" of banging out code. Reaching "flow" when prompting LLMs arguably requires a somewhat deeper understanding of them as a proper technical tool, as opposed to a complete black box, or worse, a crystal ball.
There’s also just the negative association factor.
I use LLMs in my every day work. I’m also a strong critic of LLMs and absolutely loathe the hype cycle around them.
I have done some really cool things with Copilot and Claude, and I keep sharing them only within my working circle, because I simply don't want to interact that much with people who aren't grounded on the subject.
I would be interested to hear your take on Copilot vs Claude. I have used Copilot (trial) in VS Code and found it mostly meets my needs. It could generate some plans and code, which I could review on the go. This felt very natural to me, as I never felt 'left behind' in whatever code the AI was generating. However, most of the posts I see here are about Claude (I haven't tried it), with very few mentions of Copilot. What is your impression of them and the use cases each is strong in?
(Context: I'm a different person, but have thoughts on this)
I started using Copilot at work because that's what the company policy was. It's a pretty strict environment, but Copilot is perfectly serviceable and gets a lot of fresh, vetted updates. IDE integration with VS Code was a huge plus for me.
Claude Code is definitely a messier, buggier frontend for the LLM. It's clunkier to navigate and has much more primitive context-management tools. IDE integration with VS Code is clunky, too.
However, if you want to take advantage of the Anthropic subscription services, I've found Claude Code is the way to go... Simply because Anthropic works hard to lock you into their ecosystem if you want the sweet discounts. I'm greedy, so I bit the bullet for all of the LLM coding stuff I do in my personal life.
Copilot isn’t really a competing product to Claude - in fact I use Claude through copilot.
I have found in general that for the type of work I do (senior to staff level engineering, 90-10 research to programming) that Claude Opus is the only model really worth my time - but I just really like the Copilot CLI tooling.
I do use LLMs to learn about new subjects but we already only bill 10% for "coding" and that's inflating it to cover other parts.
I can't imagine that slopping it up would be a great decision. Having alien code that no one ever understood between a bug report and a solution. Anthropic isn't going to give us money for our lost contracts, is it?
I would say I'm using it for about half of the "10%" and a quarter of the "90%".
> I can't imagine that slopping it up would be a great decision. Having alien code that no one ever understood between a bug report and a solution. Anthropic isn't going to give us money for our lost contracts, is it?
Absolutely, that's a real concern. The only time I will let it loose on something is a throwaway project to test something, or a small tool that I know I can write deterministic tests for.
On codebases of any significant size, I'm using it more like a custom domain Stackoverflow search engine.
Software engineering is only about 20% writing code (the famous 40-20-40 split). Most people use it only for the first 40%, and very successfully (I'm in that camp). If you use it to write your code you can theoretically maybe get a 20% time improvement initially, but you lose a lot of time later redoing it or unraveling it. Not worth bothering.
20% is one of those cool lies SWEs have been able to push through (like "our jobs are oh so very special, we can't really estimate them; we'll create entire sub-industries within our industry to make sure everyone knows we can't estimate").
SWEs spend 20% of the time writing code for exactly the same reason brick-layers spend 20% of their time laying bricks
- A lot of research. Libraries documentation, best practice, sample solutions, code history,... That could be easily 60% of the time. Even when you're familiar with the project, you're always checking other parts of the codebase and your notes.
- Communication. Most projects involve a team and there's a dependency graph between your work. There may be also a project manager dictating things and support that wants your input on some cases.
- Thinking. Code is just the written version of a solution. The latter needs to exist first. So you spend a lot of time wrangling with the problem and trying to balance tradeoffs. It also involves a lot of the other points.
Coding is a breeze compared to the others. And if you have setup a good environment, it's even enjoyable.
It can be any number of things: from spending an hour or two just writing requirements, to giving an example of existing curated code from another project you wrote and would like to emulate, to rewriting existing apps in a different language/architecture (sort of like translating), to serving as a QA agent or reviewer for the LLM agent, or vice versa.
I kinda like how you can just use it for anything you like. I have a bazillion personal projects I can now get help with, polish up, simplify, or build UIs for, and it's nice. Anything from reverse engineering, to data extraction, to playing with FPGAs, is just so much less tedious, and I can focus on the fun parts.
I think we'll get there. We need to get at least some AI bust going first though. It's impossible to talk sense into people who think AI is about to completely replace engineers, or even those who think that, while it might not replace engineers, it's going to be doing 100% of all coding within a year. Or even that it can do 100% of coding right now.
There's a couple unfortunate truths going on all at the same time:
- People with money are trying to build the "perfect" business: SaaS without software eng headcount. 100% margin. 0 capex. And finally, near-0 opex and R&D cost. Or at least, they're trying to sell the idea of this to anyone who will buy. And unfortunately this is exactly what most investors want to hear, so they believe every word and throw money at it. This of course then extends to many other businesses and not just SaaS, but those have worse margins to start with, so they're less prone to the wildfire.
- People who used to code 15 years ago but don't now, see claude generating very plausible looking code. Given their job is now "C suite" or "director", they don't perceive any direct personal risk, so the smell test is passed and they're all on board, happily wreaking destruction along the way.
- People who are nominally software engineers but are bad at it are truly elevated 100x by claude. Unfortunately, if their starting point was close to 0, this isn't saying a lot. And if it was negative, it's now 100x as negative.
- People who are adjacent to software engineering, like PMs, especially if they dabble in coding on the side, suddenly also see they "can code" now.
Now of course, not all capital owners, CTOs, PMs, etc. exhibit this. Probably not even most. But I can already name like 4 examples per category above from people I know. And they're all impossible to explain any kind of nuance to right now. There are too many people and articles and blog posts telling them they're absolutely right.
We need some bust cycle. Then maybe we can have a productive discussion of how we can leverage LLMs (we'll stop calling it "AI"...) to still do the team sport known as software engineering.
Because there's real productivity gains to be had here. Unfortunately, they don't replace everyone with AGI or allow people who don't know coding or software engineering to build actual working software, and they don't involve just letting claude code stochastically generate a startup for you.
> Or even that [AI] can do 100% of coding right now.
I don't actually think the article refutes this. But the AI needs to be in the hands of someone who can review the code (or astrophysics paper), notice and understand issues, and tell the AI what changes to make. Rinse, repeat. It's still probably faster than writing all the code yourself (but that doesn't mean you can fire all your engineers).
The question is, how do you become the person who can effectively review AI code without actually writing code without an AI? I'd argue you basically can't.
> The question is, how do you become the person who can effectively review AI code without actually writing code without an AI? I'd argue you basically can't.
I agree, and I'd go a step further:
You can be the absolute best coder in the world, the fastest and most accurate code reviewer ever to live, and AI still produces bad code so much faster than you can review it that it will eventually become a liability no matter what.
There is no amount of "LLM in a loop", "use a swarm of agents", or any other current trickery that fixes this, because eventually some human needs to read and understand the code. All of it.
Any attempt to avoid reading and understanding the code means you have absolutely left the realm of quality software, no exceptions
It isn't as binary as that. We already don't read all the code. When was the last time you checked the disassembly/IR of the programs you wrote? Very few will say recently. The others, despite not reading the entirety of the code, can still produce something with good quality.
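The disassembly point applies even within a high-level language: Python's `dis` module will show you the bytecode your functions compile to, which almost nobody audits, yet we still ship quality software by testing behavior instead. A minimal illustration:

```python
import dis

def add(a, b):
    return a + b

# Most of us trust this bytecode sight unseen; quality comes from
# testing the behavior, not from reading every instruction.
instructions = [i.opname for i in dis.get_instructions(add)]
print(instructions)

# The exact opnames vary by Python version (e.g. BINARY_ADD before 3.11,
# BINARY_OP after), which is itself a reason nobody reads this layer.
assert "RETURN_VALUE" in instructions
```

The analogy isn't perfect (we trust compilers because they're deterministic and battle-tested in a way LLMs aren't), but it does show that "read all the code" has never literally been true at every layer.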
Besides, it's not like QA ends at code analysis. In fact, that's pretty shallow for the field.
My boss decreed the other day that we're all to start maximising our use of agents, and then set an accordingly ambitious deadline for the current project. I explained that, being relatively early in my career, I've been hesitant to use any kind of LLM so I can gain experience myself (to say nothing of other concerns), and yeah, in his words, I've "missed the opportunity".
Unfortunately, in the majority of organizations, the idiots are at the wheel. It's not people with actual experience of how engineers work who dictate what those engineers should do.
Interesting, we only have a generic 'use AI' item in our goals. Though its generic framing probably indicates the company's belief in this tech more than anything else.
A lot of draft laws haven't been touched in a long time and aren't updated for modern gender politics. Though I do wonder if they'll actually get updated ever - no politician wants to touch it and it's not like anyone is screaming for the right to be forced to go die in war.
It's always weird to me how surprised women are that every single man they know has had to specifically, actually physically ink paper to sign up for the draft. It definitely feels weird/spooky when you do it, given the implications and that despite being compulsory it's not automatically done for you.
To clarify: every young person regardless of gender is legally obliged to go through fitness testing for conscription and if deemed suitable must go through it if selected. I imagine it’s roughly similar in Denmark?
Up until the fall of the USSR ~all men did go through conscription/basic military training. After the fall only the ones that wanted to and were selected did. Now it’s ramping up massively.
> and that despite being compulsory it's not automatically done for you.
I thought it was weird that the United States had a requirement for people to physically sign a paper to do it. It looks like they only automated it this year.
> Beginning on December 18, 2026, the Selective Service System will be required to identify, locate, and register all male (as assigned at birth) U.S. residents 18 to 26 years old on the basis of other existing federal databases. Men will no longer be required to register themselves or be subject to penalties for failing to do so. This was noted to be the most significant change to Selective Service since the self-registration system began in 1980.
Both are already tied to residence registration (which is mandatory in Germany, because it defines where you pay taxes). There is no need to register for the draft; it is automatic. Once you turn 18, you get the letter to get tested to see if you qualify.
Programming is a necessary but not sufficient condition for software products to exist. So while the programming has to be good, so too do many other things, like product vision, product management, project management, and of course there still needs to be feedback between all of the above so that engineering isn't implementing a misunderstood version of the product and that product isn't asking for 5 years and a PhD research team. And on and on and on. Typing the code is like 2-10% of actually ending up with a software project and it's more toward the 2% for a software business.
So while AI has made coding maybe 110% faster, it has also made literally every other person in the process lose their gd minds, and they want to break or skip everything else in the process just to shit out code faster.
Going faster only works WHEN you know EXACTLY (or close to it) what you want.
Going faster when experimenting? Nah you actually need a mix of slow and fast, and mostly slow stuff up-front.
There's a fundamental misunderstanding of how people actually do stuff, imo; it's akin to force-fitting a square peg into a round hole. I'm sure many are hoping it's just a 'your organisation is designed wrong' problem. I doubt it, though.
I don't think it's all that bad. There's definitely vibe coding that is "copy paste / throw away" programming on ultra steroids. But after vibe coding two products and then finding it essentially impossible to get them to a quality bar I considered ready to launch, I've been working on a more measured approach that leverages AI in a way that simply speeds up traditional programming. I use it to save tons of time on "why is pylance mad about X", "X works from the docs example but my slightly modified X gives error Y", "how do I make a toggle switch in CSS and HTML", "how am I supposed to do Python context managers in 2026 (I didn't know about the generator wrapper thing)" — all that bullshit that constantly slows you down but needs to be right. AI is great at helping you kickstart and then keeping you unblocked.
I've been using Gemini chat for this, and specifically only giving it my code via copy paste. This sounds Luddite but actually it's been pretty interesting. I can show it my couple "core" library files and then ask it to do the next thing. I can inspect the output and retool it to my satisfaction, then slot it in to my program, or use it as an example to then hand code it.
This very intentional "me being the bridge" between AI and the code has helped so much in getting speed out of AI but then not letting it go insane and write a ton of slop.
And not to toot my own horn too much, but I think AI accelerates people more the wider their expertise is even if it's not incredibly deep. Eg I know enough CSS to spot slop and correct mistakes and verify the output. But I HATE writing CSS. So the AI and I pair really well there and my UIs look way better than they ever have.
Pure 'vibe coding' is essentially technical 'tittytainment'. Using AI for the horizontal spread while you enforce vertical architectural depth is true deep work.
This is great and - something about programming has always felt adjacent to esotericism and the occult to me. Serial Experiments Lain is kind of in this vein too.
Luckily electromagnetism is the great equalizer. I'm imagining guerrilla warfare involving giant (in terms of GWh stored) Jerry-rigged capacitors driving electromagnets that are lobbed into places that would be extremely unappreciative surprise recipients of magnetic fields with flux densities measured in full Teslas