
https://openai-openai-detector.hf.space/

Using this tool on the comment yields a 77.6% fake score. To give (pretty limited) contrast, a response from ChatGPT gives 99.9% fake, and another comment from this thread gives 0.1% fake.

I assume that means the comment text was GPT-assisted, with some moderate editing from a human.



I checked this tool on some of my comments. Mostly it reported around "100% real". However, this comment https://news.ycombinator.com/item?id=35259575 was generated by ChatGPT and was reported as 97.6% real. And this comment https://news.ycombinator.com/item?id=35286822 was reported as 88.79% fake, but I wrote it myself. So I wouldn't trust this tool too much; even in my very limited testing it wasn't very reliable.


I asked ChatGPT to "write a hacker news like comment on the rise of website builders effect on web design and development industry and how to found joy in other roles and keeping programming in side hobby" and the result scored 97% real on this site. Not convincing.


Those tools are completely unreliable at the moment. Don't trust the results.


Oh, I didn't know this. Thanks.


This is fascinating. I can’t imagine why someone would go to the trouble to do all that for a comment.


I think it is likely the complete opposite of trouble. It helped them write their comment faster and in a way that was easy to understand. This is one of the best use cases for ChatGPT: rather than spend minutes trying to get the right wording, tell ChatGPT what you want to say in a few short notes and get a well-formed, coherent text.

Now if this is a good thing, that’s up for debate - although overall I personally would say yes.

Written with zero ChatGPT assistance.


> It helped them write their comment faster and in such a way that it was easy to understand.

Right, but why not save even more time by not commenting at all?


I think the point here is that for a group of people (I would argue most), the primary goal of a comment is to broadcast their ideas; the language used to convey them is a secondary concern, and that part can be delegated to AI.


The irony of using an LLM to write a text complaining about the pervasiveness of automated tooling for web design is hard to miss.

But perhaps it’s just that the author has internalized copywriting formulas to a degree that their text appears machine generated?


The GPT-detector is fascinating (and it seemed to work pretty well on some inputs I tested).

On the other hand, the following paragraph scores as 84% fake, and I'm quite sure Churchill didn't have ChatGPT to help:

> We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender, and even if, which I do not for a moment believe, this island or a large part of it were subjugated and starving, then our Empire beyond the seas, armed and guarded by the British fleet, would carry on the struggle, until, in God's good time, the new world, with all its power and might, steps forth to the rescue and the liberation of the old.

IMO, we must not be too quick to conclude that any given text was created/assisted by an LLM based on a scoring algorithm alone. (A low GPT likelihood is probably reliable [for now, until that starts being gamed].)
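The base-rate caveat can be made concrete. Using hypothetical numbers (these are illustrative assumptions, not measured properties of the detector): even if the tool correctly flagged 95% of LLM text and correctly cleared 95% of human text, most comments being human-written means a "fake" flag is far from conclusive.

```python
# Illustrative Bayes' rule sketch with assumed numbers: suppose 10% of
# comments are LLM-generated, the detector flags 95% of those
# (sensitivity), and it correctly clears 95% of human-written ones
# (specificity). What does a "fake" flag actually tell us?
prior_fake = 0.10
sensitivity = 0.95   # P(flagged | fake)
specificity = 0.95   # P(cleared | real)

# Total probability of a comment being flagged at all.
p_flagged = sensitivity * prior_fake + (1 - specificity) * (1 - prior_fake)

# Posterior: P(fake | flagged).
p_fake_given_flagged = sensitivity * prior_fake / p_flagged

print(f"P(fake | flagged) = {p_fake_given_flagged:.2f}")  # ~0.68
```

So under these assumptions roughly a third of flagged comments would still be hand-written, which is consistent with the false positives people report in this thread.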

If you offered me even odds, I'd wager that the subject comment was 100% hand-written.


We are the hollow men
We are the stuffed men
Leaning together
Headpiece filled with straw. Alas!
Our dried voices, when
We whisper together
Are quiet and meaningless
As wind in dry grass
Or rats' feet over broken glass
In our dry cellar


Or Reddit. Let's not forget what LLMs are trained on. It's not just Wikipedia and some official text corpora, it's Reddit dumps and other regular Internet conversations. If you learned how to write English on-line, there's a chance you've internalized a style similar to how LLMs often respond.

(In fact, I often feel like I sound too much like ChatGPT myself.)


I ran a 5,000-character essay that was mainly generated by GPT-4 and lightly edited by me through that tool, and it reported 99.98% human, lol.


So this is the future we want, eh? Everybody paranoid, running everybody else's responses through an AI detector. One that isn't even trusting itself.

And the bitch of it is, people ARE going to be using AI generated responses in arguments.

The decade is turning out to be truly fucking terrible.



