
> Computers have been invented to surprise us... If we knew what computers do, we would not use them, and we would not have built any.

This is just trying to sound clever for the sake of sounding clever. The first bit is factually wrong in any non-trivial way, and the counter argument for the second bit is the very next slide.



As a formal methods researcher, he expresses in a clever way what formal methods researchers know: it is extremely hard to establish interesting non-trivial properties of programs. His cleverness is in pointing out that this is intentional: if we could easily tell all the interesting properties of the results our programs give us, we wouldn't need them in the first place (of course this isn't always true, but don't be so literal). It's another way of saying that software exists for its essential complexity, and that that complexity -- as formal methods/software analysis research tells us -- is high.


> if we could easily tell all interesting properties of the results our programs give us, we wouldn't need them in the first place

At least to a degree, that may be our own hubris rather than an essential quality of programming. Rarely do I encounter a program whose problems lie in its essential complexity. What I see instead is people convincing themselves that all of their pain is necessary.

We have a lot of architectural astronauts who seek complexity for its own sake. We have feature factories adding new complexity all the time. Code being written to justify code still being written - programming bureaucracies. Like moths to a flame we reach for complexity. And we reach, and we reach, and we reach.

I have spent a lot of my career working to scale developers vertically. In software, when communication is the bottleneck, we either fix it and keep scaling horizontally, or we can't and work to scale our hardware vertically. Achievements in developer communication have been rare, and yet we keep trying to scale horizontally like we don't already know how this story ends. Dusty, old, 25th anniversary editions of Brooks lie unheeded on the shelf.

Boringly predictable code is how I do that. Tools that automate very repetitive but error-prone processes are part of that mix. In those cases I definitely know the answer; I just really want to make sure I get it. This is, after all, how software got started in the first place: the logical conclusion of a story started by Monsieur Jacquard.

Some people get really uncomfortable in the face of such changes, but they are typically folks I have already identified as part of the complexity problem. Some can be converted, others cannot. We are poisoning the well and standing around complaining about it.


> Rarely do I encounter a program whose problems are in their essential complexity.

I'm not saying we don't also introduce non-essential complexity, but I think that working on distributed/interactive/concurrent systems and specifying them in TLA+, which allows us to express just the essential complexity, will disabuse you of that notion. Because TLA+ allows you either to write a proof of an assertion or to automatically look for counterexamples with a model checker, it makes it easy to compare the relative difficulty of those two activities. Not only is proving much harder, in most cases you'll find that "intuitive" reasoning is just wrong.
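This is not TLA+, but the model-checking side of that comparison can be sketched with a toy brute-force state search. The Python sketch below (all names hypothetical, written here for illustration) explores every interleaving of two processes that share a broken check-then-set "lock" and finds a counterexample to mutual exclusion, which is essentially what a model checker like TLC automates at a much larger scale:

```python
from collections import deque

def step(state, i):
    """Yield successor states when process i takes one atomic step.

    A state is ((pc0, pc1), flag). Each process checks the flag and
    sets it in two separate steps, which is the bug.
    """
    pcs, flag = list(state[0]), state[1]
    if pcs[i] == 0:        # read the flag; proceed only if it looks free
        if flag == 0:
            pcs[i] = 1
            yield (tuple(pcs), flag)
    elif pcs[i] == 1:      # set the flag and enter the critical section
        pcs[i] = 2
        yield (tuple(pcs), 1)
    elif pcs[i] == 2:      # leave the critical section, clearing the flag
        pcs[i] = 0
        yield (tuple(pcs), 0)

def find_violation():
    """Breadth-first search of all reachable states for a mutual-exclusion
    violation: both processes in the critical section at once."""
    init = ((0, 0), 0)
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if state[0] == (2, 2):   # counterexample: both at pc == 2
            return state
        for i in (0, 1):
            for nxt in step(state, i):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return None                  # property holds over all reachable states
```

The search finds the interleaving where both processes read `flag == 0` before either writes it, exactly the kind of "intuitively impossible" counterexample a checker surfaces mechanically.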


But assuming "if we knew what computers do, we would not use them" is true, then the rest of his talk is pointless.

We use computers precisely because we know what they do. We're worried about black boxes and long for formal proofs of our programs because they are useful only when we know what they do.

If I hand you a black box with a button and two LEDs that blink in response to input, it would be initially useless to you precisely because you do not know what it does. Only if you learn what it does, by exploration or me telling you, can it become useful to you.


I think that an overly literal and precise reading of his pithy phrase misses the point.

Computers do exactly as we tell them to do, and we might even know what we'd like them to do, but we usually don't know what the result of what they do will be. Since the birth of computer science we've known that computers are mysterious in the sense that even though their operation is deterministic in a very natural sense, its outcome is not generally knowable: deterministic, but indeterminable. Dowek merely points out that this is not just a problem with computers but the very point of them. If their operation's outcome were easily determinable, we wouldn't need it.


> I think that an overly literal and precise reading of his pithy phrase misses the point.

I suppose you're right. I get what you're saying, and based on that it seems what Dowek should have said was "if we knew what computers would do, we would not use them".

For me that's a pretty crucial distinction though.

In any case, I agree that computation being deterministic yet indeterminable is indeed fairly surprising[1] and quite interesting, and indeed it is this potential for complexity that makes computers useful.

[1]: in the common sense of the word, if not both


"Surprise" in this context means you didn't know the exact data the computer would output, not that you don't know how the computer or the program works.

Like if you ask a calculator to calculate sin(13), you know exactly how it calculates it, but you don't know the exact number in advance, so the result is "surprising".
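A minimal Python sketch of that point, assuming a plain Taylor expansion rather than whatever a real calculator actually uses: the procedure is completely known in advance, yet you still have to run it to learn the digits.

```python
import math

def taylor_sin(x, terms=40):
    """sin(x) via its Taylor series: every step is fully specified up front."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

# Fully deterministic, yet the value (roughly 0.420167) is only
# "known" once the computation has been carried out.
print(taylor_sin(13))
```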


Right, from the Shannon reference mentioned I got that. I'll accept that as an "inside" joke.

I still think "surprise" is one of the worst words one could use to describe this though, but that's not a fight I'm gonna win :)


This seems like an unnecessary ad hominem.

Just because something is factually incorrect doesn’t mean it isn’t true.

Surprise is a measure of new Information (ref: Shannon).

It is simply a tautology.

To be unsurprised is to provide the answer before the computer produces it.
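In Shannon's terms that tautology is literal: the surprisal (self-information) of an outcome with probability p is -log2(p), so an outcome you could already state with certainty carries zero bits. A quick sketch:

```python
import math

def surprisal(p):
    """Shannon self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

surprisal(1.0)       # 0 bits: an answer you already knew, no surprise
surprisal(0.5)       # 1 bit: a fair coin flip
surprisal(1 / 1024)  # 10 bits: a 1-in-1024 outcome
```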


> This seems like an unnecessary ad hominem.

I didn't mean it as such. I just found it wrong, and thus not very interesting, for the reasons I stated.

> Surprise is a measure of new Information (ref: Shannon).

Ah. I haven't read Shannon himself, so I wasn't aware he used his own special definition of surprise. All the work I've seen has talked about information or entropy.

> To be unsurprised is to provide the answer before the computer produces it.

I'm still not buying that definition of surprise...


>Ah. Not read Shannon himself, so wasn't aware he used his own special definition of surprise.

You must have missed it then...

https://en.wikipedia.org/wiki/Entropy_(information_theory)

https://en.wikipedia.org/wiki/Information_content

https://plus.maths.org/content/information-surprise

>I'm still not buying that definition of surprise...

You don't have to "buy it". I am just telling you that it's the definition used by every person who knows what a bit is.

https://en.wikipedia.org/wiki/Bit


> so wasn't aware he used his own special definition of surprise. All the work I've seen has talked about information or entropy.

It isn't 'special', and directly relates to the definition of entropy.


Well, I guess I've either read sources that avoided that word when defining entropy, or I mentally erased it because of the poor fit.


"Poor fit" is an ironic choice of words.

https://en.wikipedia.org/wiki/Goodness_of_fit

Fundamentally, statistics is information theory.

https://en.wikipedia.org/wiki/Principle_of_maximum_entropy#H...


... If the square root of 91287346540 is unsurprising, you should just be able to say precisely what it is without working it out.

I think the definition of surprise is the issue here. It may be unsurprising to receive a birthday gift, but the exact gift is still a surprise.


The answer to the square root of 91287346540 is trivially surprising. It is not surprising in any interesting way.


Red herring.

Trivial and non-trivial surprises are still surprising.


...what is the least trivially surprising natural number? That is, would such a number still be trivially surprising?


> what is the least trivially surprising natural number?

Whatever it is, if it exists, it's not what computers were invented to find.


Predicating this on surprise is the problem.

We still would have used computers for data entry, storage, and transmission even though we know humans could do that, because they're simply superior to paper and typewriters in most ways. There's nothing surprising about that.



