
From time to time I stop and think about how blindingly FAST computers are.

It's easy enough to click "refresh" on a web server running on your desktop and wait half a second for a refresh...but then when you ponder that not only might you be executing a few THOUSAND or TENS OF THOUSANDS of lines of your own code, but you're executing a few hundred SQL queries, each of which is doing mind-bendingly complex stuff over in the SQL server ... running on the same CPU ... it just boggles the mind.

I remember when I was amazed at how many computations my overclocked Apple IIe (running at 2.5 MHz !!!) could do in an eyeblink.



I had a similar realization when it dawned on me that my computer performed about twenty instructions before the light from my monitor arrived at my cornea...

(edit: light travels about a foot in a nanosecond, or 6 inches per clock cycle in a standard 2 GHz PC. multiple simple additions can be done in a single clock cycle; multiply by number of cores)
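The parent's numbers check out with simple arithmetic; here's a quick sketch (the 2 GHz clock is the figure from the comment, everything else is just unit conversion):

```python
# Back-of-envelope check: how far light travels per nanosecond
# and per clock cycle of a 2 GHz CPU.
C = 299_792_458   # speed of light in vacuum, m/s
FOOT = 0.3048     # meters per foot
INCH = 0.0254     # meters per inch

ns_per_foot = FOOT / C * 1e9
print(f"light covers 1 foot in {ns_per_foot:.2f} ns")        # ~1.02 ns

clock_hz = 2e9                     # 2 GHz, as in the comment
cycle_s = 1 / clock_hz             # 0.5 ns per cycle
inches_per_cycle = C * cycle_s / INCH
print(f"per 2 GHz cycle: {inches_per_cycle:.1f} inches")     # ~5.9 inches
```

So "a foot in a nanosecond" and "6 inches per cycle" are both right to within a few percent.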


Wow, that's an amazing way of looking at the computing power and speed of modern processors. Maybe Intel should start marketing processor functionality this way!


The speed of light is starting to impact how fast cores can be. They need to take it into account when routing signals on the core. And it's one reason asynchronous cores work a lot better, since at high clock frequencies it's impossible to keep the entire core synchronized.


Electrical signals in copper don't travel anywhere near the speed of light in vacuum — typically a fraction of it, and on-chip wires are slower still. In fact, motherboards using fiber optic buses instead of copper buses may be one solution to the problem you're talking about.


As I understand it, it has been an issue and a major reason for actual cores being as small as they are. The time a signal takes to propagate from one section of the core to another is significant.


Chip sizes are a function of yield and performance. Core sizes are designed to optimize performance for a given chip size. (Yes, there's a circular dependency, but yield is a huge factor in chip size.)

That said, it's been at least a process generation since a single clock signal could be used to sync events on opposite sides of a core, let alone a (high performance) chip. This is due to both increasing clock frequency and the fact that signal propagation speed has gone down as feature size decreased.
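A rough sketch of why one clock edge can't span a die: compare the clock period to the wire transit time. The numbers below are illustrative assumptions, not from the thread — a 20 mm die edge, a 4 GHz clock, and an optimistic effective wire speed of half the vacuum speed of light (real on-chip wires are RC-limited and considerably slower):

```python
# Illustrative comparison: clock period vs. cross-die signal transit time.
# All figures below are assumed for the sketch, not measured values.
C = 299_792_458            # m/s, speed of light in vacuum
die_m = 0.020              # assumed die edge: 20 mm
wire_speed = 0.5 * C       # assumed effective on-chip signal speed
clock_hz = 4e9             # assumed 4 GHz clock

period_ps = 1e12 / clock_hz
transit_ps = die_m / wire_speed * 1e12
print(f"clock period:     {period_ps:.0f} ps")
print(f"cross-die travel: {transit_ps:.0f} ps")
```

Even with this generous wire speed, crossing the die eats a large fraction of the clock period, which is why designs lean on clock domains and local synchronization rather than one global edge.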


And when you consider how much longer it takes for you to perceive the light, it gets even more ridiculous!



