
> Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice which is fundamentally unreconcilable even between two sentient humans when the ethics are really just a hacked on mod to the core model.

That’s a real issue, but I doubt the solution is technical. Society will have to educate itself on this topic, and urgently: people need to understand that LLMs are just word-prediction machines.

I use LLMs every day; they can be useful even when they say stupid things. But mastering this tool requires understanding that it may invent things at any moment.

Just yesterday I tried the Cal.ai assistant, whose role is to manage your schedule (though it doesn’t have access to your calendars, so it’s pretty limited). You communicate with it by email. I asked it to organize a trip by train and book a hotel. It responded, "Sure, what is your preferred departure time for the train, and which comfort class do you want?" I answered, and it replied that, fine, it would organize the trip and get back to me later. It even added that it would book me a hotel.

Well, it can’t actually do any of that; it’s just a bot made to reorganize your cal.com meetings. So it did nothing, of course. Nothing terrible, since I know how it works.

But had I been uneducated enough on the topic (like 99.99% of this planet’s population), I’d just have thought, "Cool, my trip is being organized, I can relax now."

But hey, it succeeded at the main LLM task: being credible.


