It's a shame, too, because one of the factors holding back AI systems as complex as DeepMind's is that their power consumption makes them cost-ineffective for quite a few tasks.
"In recent months, the Alphabet Inc. unit put a DeepMind AI system in control of parts of its data centers to reduce power consumption by manipulating computer servers and related equipment like cooling systems. It uses a similar technique to DeepMind software that taught itself to play Atari video games, Hassabis said in an interview at a recent AI conference in New York."
This part really incensed me. That's like describing a SpaceX rocket as "based on similar technology to Wernher von Braun's V-2 rockets in World War II". I exaggerate for effect, but you get the point.
I disagree. If you read the Atari paper you will get plenty of details, and you can infer how it would apply to electricity consumption. They were using reinforcement learning: the algorithm learned to get a better score by looking at the screen and sending actions accordingly. Here you could imagine the same algorithm with energy consumption as a score, a set of datacenter metrics as the screen (state) and change of metrics as actions.
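Concretely, here's a toy sketch of that framing. Everything in it is my own invention — made-up dynamics, a single "temperature" metric, and tabular Q-learning standing in for the deep network of the Atari paper — but it shows the score/state/action mapping:

```python
import random

# Toy sketch, entirely invented: frame the datacenter as an MDP.
# State  = a coarse temperature bucket (stand-in for "datacenter metrics").
# Action = cooling off (0) or on (1).
# Reward = negative energy use, so maximizing reward minimizes consumption.

random.seed(0)

N_STATES = 10          # discretized temperature buckets, 9 = hottest
ACTIONS = [0, 1]       # 0 = cooling off, 1 = cooling on
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def step(state, cool):
    """Made-up dynamics: cooling lowers temperature but costs energy;
    running hot (bucket >= 8) makes the servers burn extra power."""
    nxt = max(0, state - 1) if cool else min(N_STATES - 1, state + 1)
    energy = 1.0 + 0.5 * cool + (2.0 if nxt >= 8 else 0.0)
    return nxt, -energy

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

state = 5
for _ in range(20000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    # standard Q-learning update toward reward + discounted best next value
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

# the greedy policy in the hottest bucket
policy_hot = max(ACTIONS, key=lambda a: Q[(9, a)])
print("action in hottest bucket:", policy_hot)
```

Scaled up, the Q-table becomes a neural network and the buckets become raw sensor readings — that's essentially the jump the Atari paper made from small state spaces to pixels.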
Errr... No. Just no. Deep reinforcement learning is not some pixie dust that magically works for any problem with a reward function that you can throw at it. It's astounding how commenters on HN think this is all "easy".
> Here you could imagine the same algorithm with energy consumption as a score, a set of datacenter metrics as the screen (state) and change of metrics as actions.
What you have described is easily instantiated with any numerical optimization technique of the last 40 years. The devil for any of these problems is in the details.
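To make that concrete: here is the same "metrics in, score out" framing handled by plain random local search, a technique far older than deep RL. The energy model below is made up for illustration and has nothing to do with Google's actual system:

```python
import random

# Illustration with a hypothetical energy model: two knobs (fan speed,
# chiller setpoint) and a score to minimize. Random local search with a
# shrinking step size -- decades-old numerical optimization -- suffices.

def energy(fan, setpoint):
    # made-up smooth cost surface, minimal at fan=0.6, setpoint=22
    return (fan - 0.6) ** 2 + 0.05 * (setpoint - 22.0) ** 2 + 1.0

def local_search(f, x0, step=0.5, iters=500):
    x, best = list(x0), f(*x0)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        val = f(*cand)
        if val < best:          # keep only improving moves
            x, best = cand, val
        step *= 0.995           # slowly shrink the search radius
    return x, best

random.seed(1)
x_opt, e_opt = local_search(energy, [0.0, 30.0])
print(x_opt, e_opt)
```

The hard part in either approach is everything this toy elides: safety constraints, delayed effects, sensor noise, and the cost of exploring bad settings on live hardware — i.e., the details.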