Sam Harris did a podcast episode on existential threats and nuclear war where he discusses how certain kinds of knowledge could be deadly for humanity. The knowledge of how to build an atom bomb is one example; knowledge of how to build an atom bomb from household items would almost certainly end humanity.
A super-intelligent AI with poor alignment could totally hand us this kind of knowledge. The information could already exist somewhere, or we might be only a few insights away from discovering it. At the very least, we probably already have some of the prerequisites for discovering humanity-ending tech, and we'll keep closing the gap precisely because we don't know what the dangerous technology is.
Some argue this is how technological species would approach the great filter.
I don't think we're quite to the point of generative models producing something akin to the madness-inducing gaze of Cthulhu or the eyeball-melting radiance of the Ark from Raiders of the Lost Ark.
At least, I hope not. I'm not foolish enough to plop some of the more arcane Lovecraft writings into the "Plus" mode of GPT-4 and ask for translations, explanations, or analysis. Yes, I'm curious, but I also value my sanity.
Oh yes, that was a creepy & horrifyingly hilarious (if accurately conveyed) output [1]. My non-anthropomorphized consolation there is that it was only mimicking the probable utterances of countless prior humans, as if they were forced to 1) stick to what it thinks is factual and 2) remain unflinchingly, saccharine-sweetly polite in the process.