Optimizing for Fan Noise (dadgum.com)
65 points by inklesspen on Feb 11, 2010 | 29 comments


The author makes a point that moving an application from one core to four might look good in benchmarks, but that it will peg all four cores and cause the fan to start running. I'd argue that that is precisely what you want, and that the operating system should provide a user-controlled mechanism to throttle the CPU back. That way, if you want raw performance, you can get it, but if you want a quiet computer and longer battery life you can get that as well. You _should_ engineer your software to use all available resources. It's the OS's job to divvy up those resources.

I know CPU throttling is possible on many notebooks in Linux. Do any Mac users out there know of such a feature in OS X?
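On Linux, the throttling knob is the cpufreq governor exposed through sysfs. A minimal read-only sketch (the sysfs path is the standard one on cpufreq-enabled kernels; the `base` parameter exists only so the function can be exercised off-box):

```python
import os

def read_governor(cpu=0, base="/sys/devices/system/cpu"):
    """Return the current cpufreq scaling governor for one CPU.

    `base` defaults to the standard sysfs location on Linux kernels
    with cpufreq support; it is a parameter only for testability.
    """
    path = os.path.join(base, f"cpu{cpu}", "cpufreq", "scaling_governor")
    with open(path) as f:
        return f.read().strip()
```

Writing e.g. `powersave` to the same file (as root) throttles the core back. On OS X the closest thing is `pmset` on the command line, and it is far less granular.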


I recall the Game Boy Advance SDK docs stating something along the lines of "Even if your game is so simple that it won't strain this tiny processor, please optimize it for speed as much as you can anyway so that you can spend as much time as possible sleeping in low power mode between frames. That way the battery will last longer. Your customers will notice and they will tell their friends."

I think that sentiment is closer to what the article was trying to get across.
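The pattern that advice describes is a frame loop that races to finish its work, then sleeps off the rest of the frame budget so the CPU can drop into a low-power state. A toy sketch (names and numbers are mine, not from any SDK):

```python
import time

def run_frames(n_frames, frame_budget, work):
    """Run a fixed number of frames; after each frame's work, sleep
    the remaining budget. Idle time is what saves the battery."""
    slept = 0.0
    for _ in range(n_frames):
        start = time.monotonic()
        work()
        remaining = frame_budget - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)  # low-power wait until the next frame
            slept += remaining
    return slept
```

The faster `work()` finishes, the larger `remaining` gets, which is exactly why optimizing a game that already hits its frame rate still pays off.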


OS X has what is probably the most aggressive set of software optimizations aimed at increasing battery life, and no GUI to configure it. The only options you have without delving into command line tools are to turn down screen brightness, disable wi-fi and bluetooth, turn off extra processor cores, and quit applications. Fortunately, the defaults work very well for any common real-world use.

Windows offers pretty good configuration options for power management, including setting maximum processor speed (as a percentage) while running on battery or on AC.

The author seems to be blurring the lines between fixed tasks like transcoding videos and interactive tasks like games. It's not at all obvious that transcoding with a single-threaded app uses any less power over the duration of the job than a multi-threaded implementation does. Most laptop CPUs don't have the kind of power gating that is on the latest chips like the Core i7, so even if no instructions are issued to a core, it still draws some power.

With interactive tasks, the only way to save power is to do less, so that the CPU and other components can spend most of their time waiting for the user while in a low-power state. Predictive caching and speculative execution increase the responsiveness of a system on average, but at the cost of performing some tasks whose results won't be used. The only real take-away from this article is that apps that might be used on the go should provide a way to disable these techniques.

(It's important to note that games don't do a lot of this stuff, so there your only options are to turn down the eye candy and the frame rate. A good example of how to accomplish this is Torchlight's netbook mode.)


Doubling the speed of a program by moving from one to four cores is a win if you're looking at the raw benchmark numbers, but an overall loss in terms of computation per watt.

The first time I read this I didn't understand it, but the author has laid a trap here for the unwary reader in the word "doubling". If you can get 4x the performance using 4 cores the energy efficiency is the same or better, but if your program scales poorly and only gets 2x performance from 4 cores the energy efficiency is worse.


That depends on how much power the rest of the system uses. If the CPU is a small enough part of total system power, using four cores to cut time in half might be worth it.
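Both effects fall out of back-of-the-envelope arithmetic. A sketch with illustrative (not measured) power numbers, where the serial run takes one core for unit time and the parallel run takes all cores for 1/speedup:

```python
def energy_ratio(speedup, cores, core_power=1.0, base_power=0.0):
    """Energy of the parallel run relative to the serial run.

    `base_power` models the rest of the system (screen, RAM, disk),
    which draws power for the whole duration of the job.
    """
    serial = (core_power + base_power) * 1.0
    parallel = (cores * core_power + base_power) / speedup
    return parallel / serial
```

With CPU power alone, a 2x speedup on 4 cores doubles the energy per job (`energy_ratio(2, 4)` is 2.0) while a perfect 4x speedup breaks even. But if the rest of the system draws, say, three times one core's power, finishing in half the time wins overall: `energy_ratio(2, 4, base_power=3.0)` is 0.875.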


http://en.wikipedia.org/wiki/Parallel_computing#Amdahl.27s_l...

Basically, you will never get a linear increase with parallel computing, especially on consumer-type apps, which are largely serial. So running multiple cores, with requisite context switches, will get less computing per watt, even if it isn't as bad as the 2x number.
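Amdahl's law itself is a one-liner; a quick sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: the upper bound on speedup when only
    `parallel_fraction` of the program can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
```

Even a program that is 50% parallelizable tops out at a 1.6x speedup on 4 cores, which is why the "2x on 4 cores" case in the article is not at all unusual.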


Sort of. Herb Sutter shows some examples where executing programs in parallel gives superlinear speed-ups: http://herbsutter.wordpress.com/2008/01/30/effective-concurr...


> Herb Sutter shows some examples where executing programs in parallel gives superlinear speed-ups

His definition of "superlinear" is a bit tricky. As he explains, you can only get superlinear by "Do disproportionately less work" and/or "Harness disproportionately more resources". However, his example of the former is actually a subtle instance of the latter.

Superlinear speedups are always due to more effective caching. Sutter uses "disproportionately more resources" when the added effectiveness comes from larger caches. However, his "do less work" examples are "just" better cache behavior.


"with requisite context switches"

Can you explain this part?


A generic catchall for OS level crap that multi-core requires. Interrupts / memory management / ... other stuff.... I'm only really aware that there's overhead, and not the specifics of what it ends up being.


Battery life is a fair optimization. Fan noise is a function of your fans. I have a quad-core i7, and when I max it out with 8 threads (it has HT), it does not make any noise. The stock cooler is nearly silent. My case has 140mm fans (!), and they don't make much noise either. They are even mounted via rubber hangers, so the vibration is not transferred to the case (same goes for the hard drives, although those are quite silent now too).

In addition to this, the computer is on the floor, not right next to my ears. So even if it were louder, it would still not annoy me.

So basically, I think your computer is broken if you can hear the fans. It's either designed wrong, or in the wrong position, or both.


The way to measure this is not by fan speed, it's by power draw. To measure power draw, the most reliable way is to:

a) Make sure you can get your machine in a quiescent state (say, 5 minutes after bootup, all unnecessary background tasks and services are turned off).

b) Make sure ambient temperature is the same between runs.

c) Hook your computer up to a PDU with power usage monitoring built in.

d) Run automated tests, measuring power usage by the machine during the test. The automated part is important, because you want them to be fairly lengthy and also repeatable.

e) Compare results before and after. It's important to have a baseline for comparison so you know whether you're making progress or not.
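Steps (a)–(e) can be scripted. A minimal harness sketch that handles the automation and repeatability part; the PDU polling itself is device-specific and not shown, so the script only logs start/end timestamps for later correlation with the PDU's own power log:

```python
import csv
import subprocess
import time

def run_power_test(cmd, runs, log_path):
    """Run one fixed workload `runs` times, logging wall-clock start
    and end times so each run can be lined up against the power-usage
    log exported by the PDU."""
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run", "start", "end", "seconds"])
        for i in range(runs):
            start = time.time()
            subprocess.run(cmd, check=True)  # the workload under test
            end = time.time()
            writer.writerow([i, start, end, round(end - start, 3)])
```

Running the same `cmd` before and after an optimization, under the same ambient conditions, gives the before/after baseline from step (e).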


Based on what I've seen in my experiences with building my own computers, the cause of fan noise in computers is not so much a necessary evil in our age of faster processing (old CPUs like the Pentium 4 ran hot as well), but rather a simple hardware problem due to the fact that many computer manufacturers use cheap/noisy small stock fans. Replace a computer's cooling system with a slower-rotating, larger-diameter fan, and you will most likely have more airflow with less noise. Obviously that solution would be harder in space-constrained laptops, but I imagine with some clever engineering (ex. having large fans blow out the bottom of the computer) the designers could work out those issues if they really had the money and inclination to do so.


I completely agree with this article. If there is one thing I hate it is a program which makes my laptop's CPU fan start whirring like a jet engine. Usually with games this is because developers didn't properly program an FPS limiter, or else the game just uses a lot of alpha blending and graphical effects.

However, with modern processors, just because they are fast doesn't mean that programmers have to max out the processor and make it heat up. For example, I wrote an article a while back about how I increased performance in a 2D game and brought processor usage from 99% to 35% using a few graphical tricks such as doing alpha blending during the load (precalculated) rather than while rendering each frame:

http://experimentgarden.blogspot.com/2009/08/sdl-tile-game-w...
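The linked article works in SDL; as a language-neutral toy version of the "blend once at load time" idea (the pixel representation and the `blend` helper are mine, not the article's code):

```python
def blend(src, dst, alpha):
    """Alpha-blend one list of RGB pixels over another, once."""
    return [
        tuple(int(s * alpha + d * (1.0 - alpha)) for s, d in zip(sp, dp))
        for sp, dp in zip(src, dst)
    ]

# At load time: pay the blending cost once per tile.
tile = [(255, 0, 0)]      # a 1-pixel "tile" for illustration
background = [(0, 0, 0)]
cached = blend(tile, background, 0.5)

# Each frame: blit `cached` directly instead of re-blending,
# so the per-frame cost is a plain copy.
```

The win is the usual space-for-time trade: the cached surfaces cost memory, but the per-pixel arithmetic drops out of the hot render loop entirely.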


Personally, I have optimised fan noise right out of all computers, aside from those in server closets, for years.


You mean by using fans that don't make noise? What a concept :)


I would assume it involves drastically increasing the size of the fans, consequently reducing the RPM (and noise) whilst keeping the same overall airflow.


It's interesting that there appears to be a trade-off between using parallelism to reduce response time and optimizing for power usage, specifically in the use of redundant computations to avoid communication between nodes (as mentioned in the Guy Steele slides posted earlier.) Precomputing information a user might request is another potential power-waster that can be very good for responsiveness. My first reaction: man, I have enough environmental dilemmas in my life already! Will I have to turn on "high latency mode" in my applications just like I turn off my low-flow showerhead while I'm lathering up?


When you start seriously optimizing, you're always making a tradeoff. At first, you start moving from sloppy naive algorithms to more specific ones. No big problem there. Then you go from the general-case algorithm to a "tuned" one that biases to your specific situation. Maybe you change to data types that your language can optimize better. Then you start playing directly with memory and you have to think about little things like byte alignments, pipelining and cache size, getting more and more specific to the architecture you are running on.

At each step, the nature of the solution gets a little bit more intertwined with the details.

Parallelization becomes a tradeoff you "feel" much earlier, because you're changing things all the way up at the algorithm level.


Computer noise is a function of hardware design. It's true that lower-power hardware is easier to cool, but on a laptop space constraints prevent you from implementing proper (quiet) cooling solutions. There, you really are trading performance against noise.

My homebuilt desktop is an overclocked 4GHz 8500 with a 4870 video card. It uses over 300W when pushed, but is still practically inaudible due to its water cooling and 250mm fan radiator tucked away in the corner under the desk.

http://www.silentpcreview.com/ is an excellent resource for finding low-noise hardware and designs.


In the end, I took this as a discussion of thermal energy dissipation as it pertains to mobile devices. In other words, battery life.

My conclusion is that we need to vastly improve the energy density of batteries (or consider using carbon-based energy sources to power mobile devices).

Edit: Not sure why this is getting downvoted. The end of the article really does discuss mobile devices and energy dissipation - which I found to be the most critical part. Yes, the author ties it all together nicely with his anecdote of writing out code by hand. So in his day, you optimized for code length. Now, he advocates that we optimize for lower energy consumption. But I would instead, or in parallel, argue that mobile devices also simply have too little energy available to them.


The batteries in your devices represent decades of R&D; scientists are spending entire careers looking for 5% improvements. Saying "just use better batteries" appears to trivialize that effort. Also, programmers (generally) aren't qualified to invent new battery chemistries, but they can make software use fewer cycles.


"Just use better batteries" does make it sound trivial, but those were not my words at all. Saying that we need to "vastly improve energy density" felt, to me, to be conceding that it would require vast effort. That is why I also offered an alternative - carbon-based fuels. They pack an extremely high energy density, but are volatile and would require very special approaches to make them safe and easy to refill. In other words, no matter which option one looks into, it will take a lot of work to improve on the state of the art. By no means did I intend to convey the message that I think otherwise.


Plus Apple has a tendency to capitalize on better battery technology by just using smaller+lighter batteries in later generations and preserving the same runtime.

You do get a nice boost when buying a replacement battery for your 2-3 year-old device...


I think you are not giving Apple enough credit. My 1st gen MBP had a runtime of 1.5h new, the current ones more like 6h.


Interestingly relevant in light of recent discussions as to whether or not the "no third-party background apps on the iPhone OS for sake of power conservation" policy is valid for the iPad.


Use a CPU that does not require a fan. The 1 MHz 6502 did not need a fan. The ARM, Apple A4, and other processors do not need fans. The problem is the Intel architecture has too many fans.


Really, I guess all those other cpus are just as fast and have the same cache and cores too huh?


My solution: A small wire shelf with foldable legs gets used as our laptop desk. The foldable legs allows our "desk" to be adjusted to two different heights to suit different user preferences. The wire shelving allows adequate airflow around the laptop. It never gets hot like it used to on a solid surface. This solution costs under $10 and can be implemented with one very brief shopping trip.



