Half of your examples aren't even things an LLM can do and the other half can be written by hand too. I can name a bunch of bad sounding things as well but that doesn't mean any of them have any relevance to the conversation.
EDIT: Can't reply but you clearly have no idea what you're talking about. AI is used to create these things, yes. But the question was LLMs which I reiterated. They are not equal. Please read up on this stuff before forming judgements or confidently stating incorrect opinions that other people, who also have no idea what they're talking about, will parrot.
If we can change the rules of a discussion midway through, everyone loses. The parent replied to the question "What damage can be done with an LLM without guardrails?" (regardless of the grandparent, that's how conversations work: you address the thing the person you're replying to talked about), and the response was to rattle off a bunch of things LLMs can't do. Yes, they connected an LLM to an image-generation AI. No, that doesn't mean "LLMs can generate images" beyond triggering something else to happen. It's not pedantic or unreasonable to divide the two. The question was blatantly about LLMs.
If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream), do that in a different thread. Don't just force every conversation to be about whatever your mind wants to rant about.
That said, arguing with you people is pointless. You don't even seem to think.
> If we can change the rules of a discussion midway through, everyone loses.
Then we lost repeatedly at almost every step back to the root, because the thread switched between those two plenty of times.
The change to LLMs was itself one such shift.
> No, that doesn't mean "LLMs can generate images" aside from triggering some thing to happen
The aside is important.
> It's not pedantic or unreasonable to divide the two.
It is unreasonable on the question of "guardrails, good or bad?"
It is unreasonable on the question of "can it cause harm?"
It's not unreasonable if you are building one.
> If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream)
And caused problems for years.
> That said, arguing with you people is pointless. You don't even seem to think.
Communication isn't a single-player game; I can't make you understand something you're actively unwilling to accept, like the idea that tools enable people to do more, for good and ill, and AI is such a tool.
Perhaps you should spend less time insulting people on the internet you don't understand. Go for a walk or something. Eat a Snickers, take a nap. Come back when you're less cranky.
AI is already used to create fake porn, whether of celebrities or children: fact. It is used to create propaganda pieces and fake videos and images: fact. Those can be used for everything from defamation to online harassment. And AI is using other people's copyrighted content to do so: also a fact. So, what's your point again?
Your other comment is nested too deeply to reply to. I edited my earlier reply with my response, but I'll reiterate. Educate yourself. You clearly have no idea what you're talking about. The discussion is about LLMs, not AI in general. The question stated "LLMs", which are not equal to all of AI. Please stop spreading misinformation.
You can say "fact" all you want but that doesn't make you correct lol
No. I'm declaring that you either can't read or don't understand that there's a difference between "gen AI" and LLMs. LLMs generate text. They don't generate images. Are you just a troll or not actually reading my messages? The question you're replying to asked about LLMs. I don't understand what's so difficult about this.
One has to love pedants. Your whole point was that LLMs don't create images (you don't say...), hence all the other points are wrong? Now go back to the first comment and assume LLMs and gen AI are used interchangeably (I am too lazy to re-read my initial post). Or don't, I don't care, because I do not argue semantics; there is hardly a lazier, more disingenuous way to discuss. Ben Shapiro does that all the time and thinks he's smart.
- propaganda and fake news
- deep fakes
- slander
- porn (revenge and child)
- spam
- scams
- intellectual property theft
The list goes on.
And for quite a few of those use cases I'd want some guard rails even for a fully on-premise model.