Hacker News

I think this has the potential to nudge people in different directions, especially people who are desperately looking for external input. An AI that has knowledge about a lot of topics and nuances can construct a weight vector over the appropriate pros and cons to push unsuspecting people in different directions.


Yes, it does have that potential, and whoever trains that LLM can nudge _it_ into different nudges by playing with what data it's trained on.

It’s going to be manipulation of the masses on a whole new level


Open source will keep good AI out there... but I'm not looking forward to political arguments about which AI is actually lying propaganda and which is telling the truth.


Waiting for users saying that they asked MEGACORP_AI and it responded that the most trustworthy AI is MEGACORP_AI. Without a hint of self-awareness.


Isn’t this sort of the point for lots of folks? To never need to think on their own?


Well, when you consider what it actually is (statistics and weights), it makes total sense that it can inform a decision. The decision is yours, though; a machine cannot be held responsible.


You mean like a dice roll could inform a decision?


LLMs are stochastic (as opposed to deterministic) systems, which can make them better at tasks that are by nature difficult to express formally but still require a degree of certainty ("how can I make this CV better?").

I believe it's slightly more nuanced than a dice roll.
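To illustrate the difference: a toy sketch of temperature sampling, which is what makes decoding stochastic rather than deterministic. (The tokens and scores here are made up; this is not any particular model's decoder, just the general technique.)

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Lower temperature -> sharper, more deterministic distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word scores for a CV-improvement suggestion
tokens = ["quantify", "lead", "synergize", "spearhead"]
logits = [2.0, 1.5, 0.2, 1.0]

greedy = tokens[logits.index(max(logits))]          # deterministic: always the top score
probs = softmax(logits, temperature=0.8)
sampled = random.choices(tokens, weights=probs)[0]  # stochastic: a *weighted* dice roll
```

Unlike a fair die, the weights are learned from data, so the "roll" is biased toward whatever the training distribution favors, which is exactly the nudging concern upthread.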


We call that a "mixed strategy" in game theory.
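A mixed strategy just means randomizing over pure strategies with chosen probabilities. The textbook example is matching pennies, where the equilibrium mix (50/50) makes your expected payoff immune to whatever the opponent does. A quick sketch of that calculation:

```python
# Matching pennies: the matcher wins (+1) if both coins show the same
# face, and loses (-1) otherwise.
def expected_payoff(p_heads, q_heads):
    """Matcher's expected payoff when the matcher plays heads with
    probability p_heads and the opponent with probability q_heads."""
    return (p_heads * q_heads * 1
            + p_heads * (1 - q_heads) * -1
            + (1 - p_heads) * q_heads * -1
            + (1 - p_heads) * (1 - q_heads) * 1)

# At the equilibrium mix p = 0.5, the expected payoff is 0 regardless
# of the opponent's strategy q -- the opponent is made indifferent.
for q in (0.0, 0.3, 0.5, 1.0):
    assert abs(expected_payoff(0.5, q)) < 1e-12
```

That indifference property is what distinguishes a deliberate mixed strategy from an arbitrary dice roll.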



