Data Center Outage Puts Spotlight On Power Systems

from the the-other-grid dept

This week’s outage of a major San Francisco data center has prompted a lot of discussion about the tech industry’s massive energy requirements, and whether the existing energy infrastructure will continue to prove adequate. Although we blamed excessive hubris for the crash, some are pointing fingers at the utility PG&E for letting the underlying power outage happen. Of course, this doesn’t explain why 365 Main’s extensive energy backup system failed to kick in as it was supposed to. Either way, it’s likely that continued investment in energy systems is in order. IBM has been investing in technology that will reduce the energy demands of data centers, but the trend is helping low-tech firms as well: yesterday, engine maker Cummins reported strong earnings, due in part to the sale of generators to data centers. All of this is further evidence that tech firms are increasingly forced to get down and dirty with tangible, physical goods in order to stay competitive.

Filed Under:
Companies: cummins, pg&e


Comments on “Data Center Outage Puts Spotlight On Power Systems”

18 Comments
inc says:

Re: Testing

This is true. The data center I work in regularly tests the power conditioning, batteries, and generators to ensure they are going to work when we need them. It’s important that the batteries have capacity for all the critical systems, including air conditioning, and that the generators come online automatically. Another important factor is having a contract for guaranteed fuel for the generator. When Hurricane Wilma knocked out power for two weeks in our DC, that generator was the employee of the month.
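The commenter’s point about batteries and generators working together can be sketched as a back-of-the-envelope check: the battery bank only needs to bridge the gap until the generators take the load, but the "critical load" must include cooling, not just servers. All figures below are illustrative assumptions, not real 365 Main numbers.

```python
# Hypothetical sketch: does the UPS battery bank bridge the gap until the
# generators come online? All load and capacity figures are made up.

def bridge_time_minutes(battery_kwh: float, critical_load_kw: float) -> float:
    """Minutes of runtime the battery bank provides at the critical load."""
    return battery_kwh / critical_load_kw * 60

# Critical load must include air conditioning: a dense room survives only
# minutes without cooling before thermal shutdown, not hours.
servers_kw = 400.0
cooling_kw = 200.0
critical_kw = servers_kw + cooling_kw

battery_kwh = 150.0        # usable capacity of the battery bank (assumed)
generator_start_s = 60.0   # time for generators to start and accept load

runtime_min = bridge_time_minutes(battery_kwh, critical_kw)
print(f"Battery bridge time: {runtime_min:.1f} min")
assert runtime_min * 60 > generator_start_s, "batteries won't bridge the gap"
```

The same arithmetic explains why regular testing matters: as battery banks age, usable capacity drops, and the bridge time quietly shrinks below the generator start time.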

Enrico Suarve says:

Not just the data centers

I don’t remember who posted it, but a commenter in the previous thread was spot on with their analysis: generators can spring into action all they want, but if you have an area-wide power outage, all they are going to do is reduce the risk of data corruption from cold booting.

Basically, if the entire grid goes down (as it did in this case), your connection to the data center is still only as good as the power to the first public network component not connected to your generators (i.e., still dead in the water).

You’re just a very expensive, diesel-powered data island.

A truly resilient solution would have to involve two completely separate grids (and similarly separated backup lines), or better, two very separate locations.
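The "two separate locations" argument is really just probability: if each site fails independently with probability p, both fail together with probability p². A minimal sketch, with an assumed per-site availability (real failures, like a shared grid or carrier, are rarely fully independent, which is the commenter’s whole point about separation):

```python
# Back-of-the-envelope sketch: why two truly separate locations beat one.
# Assumes independent failures, which only holds if grids, carriers, and
# geography are actually separate. The 0.1% figure is illustrative.

def combined_unavailability(p_site: float, n_sites: int) -> float:
    """Probability that all n independent sites are down simultaneously."""
    return p_site ** n_sites

p = 0.001  # each site unavailable 0.1% of the time (~8.8 hours/year)
for n in (1, 2):
    down = combined_unavailability(p, n)
    print(f"{n} site(s): down {down:.2e} of the time")
```

With shared infrastructure, the failures correlate and the p² benefit evaporates, which is why the grids and backup lines have to be separate too.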

Wes (user link) says:

Generation

Backup generation with UPS can give you 100% uptime. Our company has a contract with a large tech firm for such a system. They do complex benchmarks that require constant power for over six months at a time. The systems exist; in this case it seems there was either a catastrophic failure at the transfer switches or the load study was done incorrectly.

Businesses have a tendency to add components to an electrical system without considering the output of the electrical generators and the extra load.
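The load-study point above can be sketched as a simple headroom check: total the loads, compare against generator output derated to a safe continuous utilization, and re-run the check every time equipment is added. All figures and the 80% derating are illustrative assumptions.

```python
# Hypothetical sketch of a load study: every component added to the
# electrical system must be checked against generator output.
# All loads and the derating factor are made-up illustrative numbers.

def generator_headroom(loads_kw: dict, generator_kw: float,
                       max_utilization: float = 0.8) -> float:
    """Remaining capacity (kW) before the generator exceeds its safe
    continuous utilization (gensets are typically derated below nameplate)."""
    total = sum(loads_kw.values())
    return generator_kw * max_utilization - total

loads = {"racks": 500.0, "cooling": 250.0, "lighting_misc": 30.0}
print(f"Headroom: {generator_headroom(loads, generator_kw=1000.0):.0f} kW")

# Adding a new row of racks without re-running the study silently
# pushes the generator past its safe capacity:
loads["new_racks"] = 60.0
print(f"Headroom after addition: {generator_headroom(loads, 1000.0):.0f} kW")
```

A negative headroom means the generators would be overloaded the moment they take the building, which is exactly the failure mode the commenter describes.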

Anonymous Coward Too says:

Data Center Outage and Redundant Power

My DC has redundant feeds from the street, redundant pipes, racks and racks of backup batteries. Didn’t do a bit of good when a huge spike came in off the street and *vaporized* the emergency switching gear. I’m not kidding.

No batteries, no redundant grid, no amount of testing will guarantee a no-impact failover. I’ve seen multiple outages at multiple sites over 20 years. The answer is IT DEPENDS. With huge power feeds it gets complicated fast.

You can take cheap shots if you want, doesn’t mean you know anything.

Paul Rice, former Security Architect Engineer of t (user link) says:

Re: Data Center Outage and Redundant Power

Appropriately sized REGULATOR(s) placed before the panels, and even the UPSs, are meant to absorb and take out power spikes from city power. Sometimes we forget to add them to our systems. Their only purpose is to prevent power spikes from continuing on to your infrastructure…

sean says:

http://seancasaidhe.wordpress.com

This is entirely their own stupid fault. Anyone who depends entirely on telecoms for their survival had better be operating out of two or more physically diverse locations. UPS and gennies are just going to keep your servers running (although I understand that even this failed, showing a remarkably cavalier attitude on the part of Technorati, Craigslist et al. toward ensuring that your BCP works). To keep your telecoms up you need to be on different telecoms networks AND different power networks, ideally altogether in a different city, state, or country.
I wonder if I can get a hugely overpaid consulting job advising them on the basics of continuity planning?
