
Generative AI will make the world in general a worse place to be. These systems are not very good at writing truth, but they are excellent at writing convincing bullshit. It's already difficult to distinguish generated text/images/video from human responses / real footage, and it's only going to get more difficult to do so and cheaper to generate.

In other words, it's very likely generative AI will be very good at creating fake simulacra of reality, and very unlikely it will actually be good AGI. The worst possible outcome.



Half of zoomers get their news from TikTok or Twitch streamers, neither of whom have any incentive for truthfulness over holistic narratives of right and wrong.

The older generations are no better. While ProPublica or WSJ put effort into their investigative journalism, they can’t compete with the volume of trite commentary coming out of other MSM sources.

Generative AI poses no unique threat; society’s capacity to “think once and cut twice” will remain intact.


> Generative AI poses no unique threat;

While the threat isn't unique, the magnitude of the threat is. This is why you can't argue in court that the threat of a car crash is nothing unique, regardless of whether you were speeding or driving within the limit.


Sure, if you presume organic propaganda is analogous to the level of danger of driving within the limit.

But a car going into a stroller at 150mph versus 200mph is negligible.

The democratization of generative AI would increase the number of bad agents, but with it would come a better awareness of their tactics; perhaps we push fewer strollers into the intersections known for drag racing.


> But a car going into a stroller at 150mph versus 200mph is negligible.

I guess when you distort every argument to an absurdity you can claim you're right.

> but with it would come a better awareness of their tactics

I don't follow. Are you saying new and more sophisticated ways to scam people are actually good because we have a unique chance to know how they work?


It’s not absurd. The bottleneck for additional predation is not the available toolkit, else we’d see a more obvious correlation between a society’s resource endowment and its callousness.

Handwringing over the threat of AI without substantiating an argument beside “enabled volume” is just self-righteousness.

AI isn’t poised to shift the balance of MFA versus phishers in a way that can’t be meaningfully corrected in the short and long term, so using “scamming” as a means to oppose disseminating tech feels reductive at best.


> It’s not absurd.

It is, because I wasn't directly comparing AI to traffic; I was only reaching for an example to illustrate how irrelevant it is whether the threat is completely unique or not.

> Handwringing over the threat of AI without substantiating an argument beside “enabled volume” is just self-righteousness.

Dismissing it as "meh, not new" is plain silliness.

> AI isn’t poised to shift the balance of MFA versus phishers in a way that can’t be meaningfully corrected

What on Earth makes you think that? The beautiful way we're handling scams right now? If you think it's irrelevant that phishing via phone call can now or soon be fully automated and the attack may even be conducted using a copy of someone's voice - well, we won't get anywhere here.


It’s already automated; you don’t need AI/ML to perform mass-phishing attempts. Do you think there’s someone manually dialing you every time you get a spam call?

The way we mitigate scams today definitely encourages me; the existence of victims does not imply the failure or inadequacy of safeguards keeping up with technology.

While AI stokes the imagination, it’s not so inspiring that I can make the argument in my head for you about why humanity’s better off with access to these tools being kept in the hands of corporations that repeatedly get sued for placing profits over public welfare.


> It’s already automated; you don’t need AI/ML to perform mass-phishing attempts. Do you think there’s someone manually dialing you every time you get a spam call?

Ok, now you're just being stubborn. No, no one is manually dialing your number, but as soon as the scammer knows you've answered, you get to talk to a human who tries to convince you to install a "safety" app for your bank or something. THAT part isn't automated, but it may as well be, which means phishing calls and scams can potentially be done with a multiplication factor of hundreds, maybe thousands - limited only by scammer infrastructure.


You underestimate the amount of people who don't at all care whether or not their stroller goes splat as long as they're on asphalt they like the feel of.


We will have to go back to using trust in the source as the main litmus test for credibility. Text from sources that are known to have humans write (or verify) everything they publish in a reasonably neutral way will be trusted, the rest will be assumed to be bullshit by default.

It could be the return of real journalism. There is a lot to rebuild in this respect, as most journalism has gone to the dogs in the last few decades. In my country all major newspapers are political pamphlets that regularly publish fake news (without the need for any AI). But one can hope: maybe the lowering of the barrier to entry for generating fake content will make people more critical of what they read, hence incentivizing the creation of actually trustworthy sources.


If an avalanche of generative content tips the scales towards (blind) trust of human writers, those "journalists" pushing out propaganda and outright fake news will have an increased incentive to do so, not a lowered one.


Replace "AI" in your comment with "human journalists" and it still holds largely true though.

It's not like AI invented clickbait, though it might have mastered the art of it.

The convincing bullshit problem does not stem from AI; I'd argue it stems from the interaction between ad revenue and SEO, and the weird and unexpected incentives created when mixing those two.

To put it differently, the problem isn't that AI will be great at writing 100 pages of bullshit you'll need to scroll through to get to the actual recipe; the problem is that there was an incentive to write those pages in the first place. Personally I don't care if a human or a robot wrote the bs; in fact I'm glad one fewer human has to waste their time doing just that. It would be great if cutting the bs were a more profitable model, though.


> I'd argue it stems from the interaction between ad revenue and SEO, and the weird and unexpected incentives created when mixing those two.

Personally, I highly dislike this handwaving about SEO. SEO is not some sinister, agenda-following secret cult trying to disseminate bullshit. SEO is just... following the rules set forth by search engines, which for quite a long time has effectively meant Google alone.

Those "weird and unexpected incentives" are put forth by Google. If Google for whatever reason started ranking "vegetable growing articles > preparation technique articles > recipes > shops selling vegetables" we would see metaphorical explosion of home gardening in mere few years, only due to the relatively long lifecycles inherent in gardening.


It's a classic case of "once a metric becomes a target, it ceases to be a good metric"

To clarify, Google defines the metrics by which pages are ranked in their search results, and since everyone wants to be at the top of Google's search results, those metrics immediately become targets for everyone else.

It's quite clear to me that the metrics Google has introduced over the years were meant to improve the quality of its search results. It's also clear to me that they have, in actual fact, had the exact opposite effect: recipes are now prepended with a poorly written novella about that one time the author had an emotionally fulfilling dinner with loved ones one autumn, in order to increase time spent on the page, since Google at one point quite reasonably assumed that pages where visitors stay longer are of higher quality - otherwise why did visitors stay so long?
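The dynamic is easy to sketch. Here is a toy model of that proxy-metric failure: the scoring function, page data, and numbers below are all hypothetical illustrations, not Google's actual ranking algorithm.

```python
# Toy illustration of Goodhart's law in search ranking (hypothetical model).
# Suppose the engine uses time-on-page as a proxy for quality.

def rank(pages):
    # Sort pages by the proxy metric, highest first.
    return sorted(pages, key=lambda p: p["time_on_page_sec"], reverse=True)

concise_recipe = {"name": "just the recipe", "quality": 9, "time_on_page_sec": 40}
padded_recipe = {"name": "novella + recipe", "quality": 3, "time_on_page_sec": 300}

# The padded page wins: visitors spend longer scrolling past the novella,
# so the proxy rewards exactly the padding it was meant to filter out.
ranking = rank([concise_recipe, padded_recipe])
print(ranking[0]["name"])  # prints "novella + recipe"
```

Once authors optimize for the proxy instead of the quality it was meant to measure, the correlation between the two breaks down - which is the "metric becomes a target" point above.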


In a broader sense this situation is created by "enshittification", a term coined by Cory Doctorow. Google itself, as a platform, has an incentive to rank higher the sites that produce more ad spend. Google has no incentive to rank "good" (by whatever definition) sites highly if they do not spend money on ads themselves or do not contain ad space.


I think they do have at least some incentive to rank good results highly. Why use a search engine if it's no good at finding relevant stuff?

And if no one is using the search engine, who's gonna see all those ads?

Of course, they do have other incentives too, some of which are directly conflicting with high quality search results, as you point out.

I suppose one could argue that their near-monopoly in the search business has allowed them to be somewhat negligent on the quality of search, but now that there are a few competitors at least somewhat worthy of the name, one can hope high quality results will be a higher priority.

Anyway, I'm holding out hope someone will one day manage to train an LLM to distinguish between quality content and SEO bullshit, and then put that to use in a search engine.

I'm not well versed enough in the current status of LLMs to make a prediction on how hard that will be, but my impression is that we're a fair way off from any LLM being able to do that well enough to be valuable.

I'd really love to be proven wrong on this one, if you're reading this and have some relevant experience, consider yourself challenged! (feel free to rephrase this last bit in your head to whatever motivates you the most)


The explosion would be in BS articles about gardening, plus ads for whatever the user's profile says they are susceptible to.

SEO is gaming Google's heuristics. Google doesn't generate a perfect ranking according to the values of Google's humans.

SEO gaming is much older than Google. Back when "search" was just an alphabetical listing of everyone in a printed book, we had companies calling themselves "A A Aachen" to get to the front of the book.


> SEO is gaming Google's heuristics.

I fail to see an immediate disagreement here: I don't see how Google's ranking process/method/algorithm being heuristic changes the observation that, to a website, in the end it is a set of ranking rules that can in some ways be gamed. SEO is a two-part process: discovering those ranking rules and abusing them.

Your example with gaming alphabetical listings only reinforces the idea that SEO abuses rules set forth by the ranking engine.

However, it does not meaningfully matter whether the incentives inherent in the ranking system shape results the way they do intentionally. What matters is the eventual behavior of the ranking system. Mostly because, by definition, you cannot filter out bad actors entirely; all you can do is 1) place some arbitrarily enforced barriers, which are generally prohibitively costly, or 2) place incentives minimizing the gain of bad actors.



