
I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas."

It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs for idea generation if they're trying to come up with interesting or novel ideas.




I have found one of the better use cases of LLMs to be as a rubber duck.

Explaining a design, a problem, etc., and trying to find solutions is extremely useful.

I can bring the novelty; what I often want from the LLM is a better understanding of the edge cases I may run into, and possible solutions.


I always find it misguided when folks bring up rubber ducking as a thing LLMs are good at. IMO, what defines rubber ducking as a concept is that it is just the developer explaining what they're doing to themselves. Not to another person, and not to a thing pretending to be a person. If you have a "two-way" or "conversational" debugging/designing experience, it isn't rubber ducking; it's just normal design/debugging.

The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity, which an LLM by definition does not.


Sometimes I don't want creativity, though. I'm just not familiar enough with the solution space, and I use the LLM as a sort of gradient-descent simulator toward the right solution to my problem (the LLM itself was trained with gradient descent; meta, I know). I am not looking for wholly new solutions, just the one that fits the problem best, the same way one could Google that information, except the LLM saves even that searching time.

> I'm just not familiar enough with the solution space

Neither is the LLM


(Trying to find where you might still see this)

I've read the thread and in my mind you're missing that LLMs increase the surface area of visibility of a thing. It's a probe. It adds known unknowns to your train of thought. It doesn't need to be "creative" about it. It doesn't need to be complete or even "right". You can validate the unknown unknown since it is now known. It doesn't need to have a measured opinion (even though it acts as it does), it's really just topography expansion. We're getting in the weeds of creativity and idea synthesis, but if something is net-new to you right now in your topography map, what's so bad about attributing relative synthesis to the AI?


Because if that's it, we've made a ludicrously expensive I Ching.

If there is something LLMs are good at, it's knowing some obscure fact that only 10 other people on this planet know.

They're also very good at almost knowing an obscure fact that only 10 people know, but getting a detail catastrophically wrong about it.

No, this is the kind of thing LLMs are very good at. Knowing the specifics and details and minutiae about technologies, programming languages, etc.

Oh Lord, no. Not at all. That's what they're terrible at. They are ok-ish at superficial overviews and catastrophically bad at specific minutiae.

Honest, non-confrontational, non-passive aggressive question: Have you used any of the latest models in the last 6 months to do coding? Or frankly, in the last year?

I have. And the people who say "use a frontier model" are full of it. The frontier models aren't any better than the free ones.

What are you defining as free versus frontier, and for what purpose? For coding there is a big difference between Opus and GPT 5.3/4 versus Sonnet and other models such as open weight ones.

They note in another comment that they don't even use search engines, so I don't think they're the right person to ask regarding frontier models.

I'd ask them what tools they do use, but I doubt they'll see my comment; I'll see if I can mail it to them.

(Why wouldn't I see your comment?)

I just don't use the web much anymore because the experience has degraded so much over the past several years and it has become decreasingly useful at work as well. I do sometimes need to search for a document and find Kagi pretty good for that, but the old way of using a search engine to kind of explore and discover stuff just isn't viable anymore, unfortunately.

I administer software for a living, so I read a lot of documentation for that software, but it comes with the software, so I don't ever really need to search for it; I also read and participate in some forums and use the relevant IRC channels.


Oftentimes it is, though: good enough for my purposes.

If you're not familiar with the problem space, by definition you don't know whether or not that's the case. In the problem spaces I do know well, I know the LLM isn't good, so why would I assume it's better in spaces I don't know?

I said "familiar enough", not "familiar". For example, say I'm building an app I know needs caching: the LLM is very good at telling me what types of caching to use, what libraries to use for each type, and so on. I can do more research if I really want to know which library is specifically the best of them all, but oftentimes its top suggestion is, like I said, good enough for my purpose of, e.g., caching.

I still don't get what you're saying. If you possess enough information to accurately judge the LLM's suggestions, you possess enough information to decide on your own. There's not really a way around that.

Of course I'm deciding on my own; I'm not letting the LLM decide for me (although some people do). But the point is that whatever the suggestion is, it's merely an implementation detail that either solves my problem or doesn't; not sure what part of that is confusing. Replace "LLM" with "glorified Google" and maybe it's less confusing.

No, Google (at least back when it worked) ranked results based on the feedback of other users, so it was a useful signal.

Theoretically the LLM would weight more popular suggestions more heavily too. Regardless, you're reading too much into this; either use the LLM or don't, and I'm not sure someone else can convince you. As I said, for my purposes of getting shit done it works perfectly fine, and it works more like a research tool than anything else, especially since it can understand my specific use case, unlike general research tools like Google or Stack Overflow.

IDK man this sounds a lot like my junior devs saying "it works fine for me" as they hand in PRs that break prod

If you don't review the code it generates, then that's still on you. There isn't an excuse for handing in breaking PRs like your juniors do. It's a tool at the end of the day, and it's the responsibility of the user to utilize it correctly.

Do you use search engines or do you just memorize all the world’s information?

I don't use search engines for much of anything nowadays (does anybody still?). At work I read documentation if I need to learn something.

This is a very strange and contradictory situation. I'm not sure there's any point in engaging with you since there is nothing but a stream of weak dismissals farming for engagement.

You dismiss LLMs because of factual inaccuracy, which is fair, but now you're doubling down on an anti-search-engine stance, which is weird, because the modern substitute is letting LLMs either use search engines on your behalf or learn the entire internet with some error, and you've dismissed both.

Yes, I'm the "backwards" guy who still uses search engines. We still exist.


I've noticed that HN can attract some of the most extreme people I've ever seen, and I suppose there is precedent in the tech world: I'm reminded of the story of Stallman not using a browser but instead having webpages sent to his email, where he then reads the content. It's literally nonsensical for 99.9999% of the population, and I've read similarly absurd things on HN as well.

This person not using LLMs is fine; I understand the argument, like you said. But the doubling down on not using search engines either makes me not take anything they say seriously. Not to be too crass, but it reminds me of this situation on the nature of arguing on the internet [0].

[0] https://www.reddit.com/r/copypasta/comments/pxb2kn/i_got_int...


Absolutely, the whole point of the rubber duck is that it's inanimate. The act of talking to the rubber duck makes you first of all describe your problem in words, and secondly hear (or read) it back and reprocess it in a slightly different way. It's a completely free way to use more parts of your brain when you need to.

LLMs are a non-free way for you to make use of less of your brain. It seems to me that these are not the same thing.


Maybe it’s just a semantic distinction, which, sure. I guess I’d just call it research? It’s basically the “I’m reading blogs, repos, issue trackers, api docs etc. to get a feel for the problem space” step of meaningful engineering.

But I definitely reach for a clear and concise way to describe that my brain and fingers are a firewall between the LLM and my code/workspace. I'm using it to help frame my thinking, but I'm the one making the decisions. And I'm intentionally keeping context in my brain, not in the LLM, by not exposing my workspace to it.


Sometimes people just need something else to tell them their ideas are valid. Validation is a core principle of therapeutic care. Procrastination is tightly linked to fear of a negative outcome. LLMs can help with both of these. They can validate ideas in the now, which can help overcome some of that anxiety.

Unfortunately they can also validate some really bad ideas.


I feel I've had the most success treating it like another developer: one that has specific strengths (reference/checklists/scanning) and weaknesses (big picture/creativity). But I'm definitely bouncing actual questions off it that I would ask a person.

My understanding was that rubber ducking was using a different portion of your brain by speaking the words.

The same discovery often happens when you explain a problem to a coworker and midway through the explanation you say "nvm, I know what I did wrong"


Do you not know any people who can help? Suddenly realised how lonely this sounds.

Coordinating with people is hard, and it only gets harder as you live. And actually, finding someone who is earnestly receptive to hearing you pitch your half-baked startup ideas (just an example) and is in some capacity qualified to be at all helpful is, uhhh, not easy.

Really? Sometimes I think I'm not very social, then I read something like this. Don't you have any friends? Colleagues? Maybe that's the problem you need to solve rather than sitting in a room burning energy for endless token streams with LLMs that anyone has access to?

Ah, I couldn't help but practice my creative writing in the other reply. This reply is more constructive:

Both LLM-based rubber-ducking and human discussions seem like a win-win. I see no reason to jump to labeling someone's social connections unhealthy just for pairing with LLMs.


lol. nobody is proposing this "well if not friends, then...". Appreciate your concern. I am fine.

This is for Internet posterity: thought-partnering with AI does not, in fact, make you a sorry, socially inept loser who needs globular-toast to come in and help you dial that helpline.

Also: one's friends do not, in reality, want to thought-partner about work issues, esoteric hobbies, and that million-dollar idea.

Also: these friends, every and any one of them it seems, will not in fact speak the word of God into your ear as manifest insight for said work issue, million dollar idea, and so forth.


I'm torn.

I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.

Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?

I guess I must feel it's slightly useful overall as I still do it.


Mainstream ideas are often good. That's why they're mainstream. Being different for the sake of being different isn't a virtue.

That being said, I don't think LLMs are idea generators either. They're common-sense spitters, which many people desperately need.


"by design, the recommendations will be average"

This couldn't be more wrong. The simplest refutation is just to point out that there are temperature and top-k settings, which, by design, generate tokens (and by extension, ideas) that are less probable given the inputs.
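For a concrete (if simplified) picture, here's a rough sketch of what temperature and top-k do at sampling time. This isn't any particular model's implementation, just the standard mechanism:

    import numpy as np

    def sample_token(logits, temperature=1.0, top_k=None, rng=None):
        """Sample one token index from raw logits using temperature and top-k."""
        rng = rng or np.random.default_rng()
        logits = np.asarray(logits, dtype=float)
        if top_k is not None:
            # Keep only the k highest-scoring tokens; mask out the rest entirely.
            cutoff = np.sort(logits)[-top_k]
            logits = np.where(logits >= cutoff, logits, -np.inf)
        # Temperature > 1 flattens the distribution, giving less probable tokens
        # more chance; temperature near 0 approaches greedy (most likely token).
        scaled = logits / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # With a higher temperature, the "unlikely" third token gets sampled
    # noticeably more often than it would under greedy decoding.
    logits = [4.0, 3.5, 1.0]
    draws = [sample_token(logits, temperature=1.5) for _ in range(1000)]
    print(np.bincount(draws, minlength=3) / 1000)

So "by design, the recommendations will be average" only holds at very low temperature; the sampling knobs exist precisely to pull in lower-probability continuations.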


I think it's just a confusing use of the term "generating." It's thinking of the LLM as a thesaurus: you actually generate the real idea and formulate the problem; it's good at enumerating potential solutions that might inspire you.

That can be very valuable.

All LLM output is always dry as fuck quite frankly. At all levels from ideas and concepts through to the actual copy. And that’s dotted with pure excrement.

I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.

If I offend anyone I will not be apologising for it.


> I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.

This is an oversimplification.

If you have taste and talent, then the LLM output you get is going to reflect that.

So on the one hand, yes: tasteless and talentless people won't know good output from bad output. On the other hand, people with taste and talent can actually get good output.


No it’s not. That’s total rubbish.

You can't coerce quality creative writing out of it, however you attempt to gaslight it into doing so.


Well, you're free to disagree, but my experience has been counter to your position. I write both code and research/technical documentation. The quality of what the LLM produces is limited by the quality of the ideas I give it initially (mind you, this is just a starting point) and the quality of my review of its output.

Agreed! No LLM is producing Pynchon, Calvino, Borges, Castaneda, Le Guin, Vonnegut.

I think that’s an unfair comparison. It can’t even produce Mills and Boon trash.

But can they produce Tom Clancy or James Patterson?

Malcolm Gladwell

> If I offend anyone I will not be apologising for it.

What you said is simply counterfactual, so no reason to be offended.


> curled out

This is the kind of understated yet thoroughly disgusting imagery an LLM couldn't come up with on its own, great example.


Thank you :)

Yes, I didn't get this portion at all. I feel as though letting an LLM brainstorm ideas for you would be worse in externally framing your thoughts than letting it write/proofread for you. If you pick one idea out of the 10 presented by the LLM, you are still confining yourself to the intersection of what the LLM thinks is important and what you think is important, because then you can never "generate" a thought that the LLM hasn't presented.

Having to fix the LLM’s recommended solution is a good exercise though.

It’s like being a smart-ass for the right reasons, without any social consequences.


So maybe the framing is: LLMs are good at mapping the landscape, but not at discovering new continents.

LLMs can sometimes come up with novel or non-obvious insights... or just regurgitate Google-like results.

Asking the LLM better questions will return better results than the average, bland, mainstream ones.

How does one ask better? Does better vary per model?

Yes, it's context-based.

It's like asking a coworker: providing too little information or too much context can give different responses.

Try asking the model not to provide its most common or average answer (rough sketch below).

Been using it this way for 2, almost 3 years.
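
For example, a rough sketch of what I mean, using the OpenAI Python SDK (the model name and the exact wording are just illustrative, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",     # illustrative; any chat model works
        temperature=1.2,    # nudge sampling away from the single most likely phrasing
        messages=[
            {"role": "system",
             "content": "Skip the most common or default answer. Offer less "
                        "obvious approaches and note their trade-offs."},
            {"role": "user",
             "content": "I'm building a read-heavy web app that needs caching. "
                        "What approaches and libraries would you suggest beyond "
                        "the usual first suggestion?"},
        ],
    )
    print(resp.choices[0].message.content)

The same prompt without the system message tends to come back with the single most common answer; with it, you get a wider spread of options to pick from.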


Why would they return "better" results?

Because AI is not a search engine. It does not return the best search result every time.

What it considers best is what occurs most often, which can mean the most average answer. Unless the service is tuned for search (Perplexity, or Google itself, for example), it will not provide as complete an answer.

How well we ask can make all the difference. It's like asking a coworker: providing too little information or too much context can give different responses.

Try asking the model not to provide its most common or average answer.

Been using it this way for 2, almost 3 years.



