Hacker News: jacquesm's comments

He would have designed something a lot nicer. These guys are worse than the Nazis in all but one aspect, but give them some time.

They'll just replace him with the next stooge.

In a way that might have been preferable. It would raise the bar a bit. No doubt Trump is aware of what happened to Nixon, thought 'ok, so you can get away with it', and then realized that in this situation he could completely ignore any potential fallout. The thing that boggles the mind is that this could be fixed in 24 hours.

There is a massive difference between the outright transformation of something you created yourself and a collage of snippets plus some sauce based on stuff you did not write yourself. If all you did to build your AI was to train it exclusively on your own work product created during your lifetime, I would have absolutely no problem with it; in fact, in that case I would love to see copyright extended to the author.

But in the present case the authorship is just removed by shredding the library and then piecing the sentences back together. The fact that under some circumstances AIs will happily reproduce code that was in the training data is proof positive that they are, to some degree, lossy compressors. The more generic something is ("for (i=0;i<MAXVAL;i++) {") the weaker the claim to copyright protection. But higher-level constructs, past a couple of lines, that are unique in the training set and reproduced in the output modulo some name changes and/or language changes should count as automatic transformation (and hence as infringing or creating a derivative work).
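As an aside (my sketch, not anything any court uses): "reproduced modulo some name changes" is mechanically detectable. If you normalize identifiers away, two snippets that differ only in naming collapse to the same skeleton. The `normalize` helper and both snippets below are hypothetical illustrations:

```python
import re

def normalize(src: str) -> str:
    """Crude identifier normalization: replace every identifier-like token
    with a positional placeholder so snippets that differ only in names
    compare equal. Keywords are kept so the structure still matters."""
    keywords = {"for", "while", "if", "else", "return", "int", "def"}
    seen = {}

    def repl(match):
        name = match.group(0)
        if name in keywords:
            return name
        if name not in seen:
            seen[name] = f"id{len(seen)}"
        return seen[name]

    return re.sub(r"[A-Za-z_]\w*", repl, src)

original = "for (i = 0; i < MAXVAL; i++) { total += buf[i]; }"
renamed  = "for (j = 0; j < LIMIT; j++) { acc += data[j]; }"

# Both collapse to the same skeleton despite the renames.
print(normalize(original) == normalize(renamed))
```

A trivial rename survives this; a genuine restructuring would not, which is roughly where the generic/unique line the comment draws would have to be tested.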


Oh sure, AI is a fantastic protection against copyright law. You do realize that if you're not going to be able to prove that you wrote something, you're wide open to claims of copyright infringement, especially if your argument is going to be 'it wasn't me that did the RE, it was the AI, the same AI that wrote the code'.

It's going to be very interesting to see 'cleanroom'-style development in the AI age, but I suspect it's not going to be such a walk in the park as some seem to think it will be. There are just too many vested interests. But: it would be nice to see someone do a release of, say, the Oracle source code as rewritten by AI through this process, just to see how fast the IP hammer comes down on this kind of trick.


Any time after a user switches it off on purpose is too aggressive.

People keep pushing Signal because it is supposedly secure. But it runs on platforms that are so complex, with so much ecosystem garbage, that there is no way to know with even a low degree of confidence whether you've done everything required to ensure you are communicating only with the person you think you are. There could be listeners at just about every layer, and that is still without looking at the metadata angle, which is just as important (who communicated with whom, when, and possibly from where).

I've raised concerns about the Signal project whitewashing risks such as keyboard apps or the OS itself, and the usual response is that it's my fault for using an untrustworthy OS and outside Signal's scope.

At some point there needs to be a frank admission that E2E-encrypted messaging apps are just the top layer of an opaque stack that could easily be operating against you.

They've made encryption so slick and routine that they've opened a whole new vector of attack through excessive user trust and laziness.

Encrypting a message used to be slow, laborious and cumbersome, which meant that there was a reticence to send messages that didn't need to be sent, and therefore an incentive to minimise disclosure. Nowadays everything is sent, under an umbrella of misplaced trust.


There is nothing secure about sending encrypted content to notifications. If it were secure, it would only notify that there is a message, with no details included.

> If it were secure, it would only notify that there is a message, with no details included.

You're right. This is configurable via settings, but it is not the default state.

That said: if I can get friends and family to use Signal instead of iMessage, that gives me the opportunity to disable those notifications and experience more security benefits.

But I agree with your point: most people think that Signal is bulletproof out of the box, and it's clearly not.


You only control one side of any conversation.

Once again there is a trade off between security and user convenience.

If security is the main differentiator, then the app should start in the most secure mode possible and let users turn on features while alerting them to the risks. Or at least ask users at first launch whether they want "high-security mode" or "convenience mode".
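That secure-by-default pattern is simple to state in code. A minimal sketch (all names hypothetical, not Signal's actual settings model): ship with everything locked down, and make each convenience an explicit opt-in that hands the UI a warning to display:

```python
from dataclasses import dataclass

@dataclass
class NotificationSettings:
    # Hypothetical settings object: every convenience feature starts off.
    show_sender: bool = False
    show_preview: bool = False

    def enable_preview(self) -> str:
        # Opting in succeeds, but the caller receives a warning
        # that the UI is expected to surface to the user.
        self.show_preview = True
        return "Warning: message contents will be visible on the lock screen."

settings = NotificationSettings()    # secure out of the box
warning = settings.enable_preview()  # user opts in and is told the risk
```

The design choice is that insecurity requires a deliberate act plus an acknowledgment, rather than security requiring one.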

As the app becomes more popular as a general messaging replacement, there will be a push toward greater convenience and broad-based appeal, undermining the original security promise, as observed here.


Exactly. But sooner or later the cost of support outweighs the need for security; that's what is driving this. Popularity is the main reason Signal is now less secure than it was in the past.

You missed the management factor. Even if managers don't explicitly ask you to build insecure stuff, they will up the pressure to the point where you either comply or leave the company for someone who will do just that. So the end result is the same. Rarely will an individual push back with any force, and then they will eventually be let go as a 'troublemaker'.

The fact that it's a box with a plug and a state that can be fully known. A conscious entity has a state that cannot be fully known. Far smarter people than me have made this argument, and in a much more eloquent way.

Turing aimed too low.


And the chatbots don't even pass the Turing test.

I've never had a normal conversation with one. It's always prompt => lengthy, cocksure and somewhat autistic response. They are very easily distinguishable.


They are distinguishable because they know too much. Their knowledge base has surpassed humans'. We have also instructed them to interact with us in a certain manner. They certainly are able to understand and use human language, which I think was Turing's point.

Purely rhetorical, but: would you be able to distinguish a chatbot from an autistic human?


This article would be a lot more digestible if it had actual scary data rather than just stories. Not a day goes by without some prompt injection oopsie, security gotcha, deepfake or sandbox-escape demonstration, and tbh I'm not just at the point where I don't doubt this is dangerous tech: I'm sure of it.

This is roughly 1995 again and we're going to find out all over again why mixing instructions and data was a spectacularly bad idea. Only now the input stream is human language, which is far more expressive than HTML or SQL ever were. So now everybody is a hacker. At least in that sense it has leveled the playing field, I guess.
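The 1995 parallel can be made concrete with the canonical SQL example: the bug was concatenating untrusted data into the instruction stream, and the fix was a hard syntactic separation (parameterized queries) that prompts currently have no equivalent of. A self-contained sketch using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "nobody' OR '1'='1"  # attacker-controlled "data"

# Data mixed into the instruction stream: the injected OR clause runs as code.
unsafe_sql = f"SELECT name FROM users WHERE name = '{user_input}'"
leaked = conn.execute(unsafe_sql).fetchall()  # every row comes back

# Data kept out of the instruction stream: the same input matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # 2 0
```

With an LLM there is no `?` placeholder: instructions and untrusted text share one channel, which is the commenter's point.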

