Welcome to 4D4M D07 N37


(Re)Designing a Data Center

This year I have both the misfortune and the pleasure of being the project lead on an effort to redesign an old data center. Some years ago (2006ish, I believe) we built a new building and housed, in the basement, what was at the time a very fancy data center. A far cry from the old 'network closet' we had in the old building. We went from two 2-post racks and two 4-post racks housing some patch panels, switches, and a couple of servers to eight full-size racks in a proper data center. It was a moon-landing-sized upgrade for us. However, it wasn't enough. Within a year we were already bursting at the seams. The room was designed to house the company's growth for five years, but it wasn't built until five years after it was designed. Given that we grow at what seems like an exponential rate, it was almost comical to see how fast we ran out of space. And not just space: you have to toss power and environmentals into the mix too. If you can't cool your equipment and get power to it, then all the space in the world doesn't help you.

So some years later we built a data center that was almost comically large. We learned the lessons from the first build and went with a standalone building some miles across town. We went from eight or nine racks packed into a small room to 24 racks and room to spare. So much room, actually, that we can triple that amount if we want to. Now, I say comical not because it was a giant waste or a gross overcompensation for the first data center. The comedy comes in because in IT it is always hard to see things coming, and the thing we didn't really see coming here was high-density compute. Or blade servers, if you're nasty. We designed this data center around our then-current environment, where we had 40-plus Dell R7xx servers acting as VM hosts. In order to virtualize the couple hundred servers we had grown to, we needed a lot of those boxes, which took up a ton of room. Then, about the time we finished the new data center, we decided to jump ship from the old pizza boxes to the new and utterly amazing Cisco UCS platform. This changed everything. In short order we had been looking at a dozen racks of Dell servers to house our environment, and at needing that extra space within a couple of years. Now, two UCS chassis house all 300+ of our servers and we still have room to spare. (To clarify: some of our SQL servers take up entire blades, so if that doesn't sound like a lot of servers, know that some of ours have insane compute requirements.) That entire environment has been condensed into about half a rack of equipment.

That was a fun time for me. I wasn't involved in planning the new data center, but I ended up taking the reins to complete its implementation. My boss at the time, who was also the person in charge of the data center build, left the company three months before it went live. This left me with a very fun 'opportunity', one I felt was handled very well for a still somewhat green network engineer. But there was plenty of learning to be had too, and this was a learning experience every IT professional should have in their career. Getting that knowledge of how power, air, network, storage, and compute all have to fit together and work is priceless.

Then, a couple of years later, we got hit with one of those things you don't really see coming. The powers that be wanted to implement an 'Active / Active' data center scenario. This term can mean different things to different people, but in our world it meant we were moving to a VPLEX to virtualize our storage, with real-time replication of that data across two data centers. Then we would allow our VM environment to migrate between the two data centers at will. Not only does this give us the ability to expand much more rapidly, it also gives us nearly instant failover between the two data centers, plus the ability to move everything to one data center while we do maintenance or upgrades on the other. To handle this we would need to build UCS out on the other side and implement a data center network that can support it properly. Now we're talking about Cisco Nexus on each side and using OTV to stretch layer 2 across.
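For the curious, stretching layer 2 with OTV on Nexus gear mostly boils down to joining an overlay interface to the transport network between the sites and listing which VLANs to extend. A minimal sketch in multicast mode — the site identifier, interface, VLAN ranges, and multicast groups below are all placeholder values for illustration, not our production config:

```
! NX-OS on one side; the other data center mirrors this with its own site ID
feature otv
otv site-identifier 0x1            ! must be unique per data center
otv site-vlan 10                   ! local VLAN used for OTV site adjacency

interface Overlay1
  otv join-interface Ethernet1/1   ! uplink facing the DC interconnect
  otv control-group 239.1.1.1      ! multicast group for the OTV control plane
  otv data-group 232.1.1.0/28      ! SSM range for multicast data traffic
  otv extend-vlan 100-110          ! the layer 2 VLANs stretched between sites
  no shutdown
```

Part of why this avoids the classic stretched-layer-2 headaches is that OTV does not forward spanning tree BPDUs across the overlay, so each site keeps its own spanning tree domain.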

Well, you may assume that now that we have two data centers we can just spin up the campus one and we're all set. You, like the company, would be wrong. When this hit my desk I had to apply the lessons I learned from helping set up the new data center. That meant breaking the news that we didn't have the rack space to house this, the air handling to cool it, or the power available to run it. Now I have the pleasure of not only helping implement the redesign of the old data center but actually doing the redesign. It's been a really good experience for me: coordinating the teams, the construction, and the business needs. This one is a bit tricky, too, because part of the production environment is still housed here. That means I have to set up a temporary location to house this equipment while the construction goes on, and then coordinate moving it back. Then we'll be moving one of the walls to make room for additional racks. We had put in a UPS that could handle triple the power requirements, and we'll be adding a secondary backup UPS next year. Finally, we're implementing some containment to maximize our current air handling and then adding in-row cooling to supplement it. I say supplement, but in reality the new in-row coolers can cool better than the old giant air handling unit we currently have.

After all this construction is done and we move back into the room, we get to start my favorite part: designing, staging, and implementing our new data center network. OTV is a very exciting technology. It's a bit complicated, but it offers a very clean way to extend layer 2 across the data centers without all the inherent problems that come with traditional layer 2 links. We get broadcasts contained, eliminate the spanning tree issues, and at the end of the day have some very smart intelligence in the network about how to get packets where they need to go. I'm looking forward to getting the final network in place and seeing what it can do. Also, as a NetScaler guru, I'm very excited to start implementing more load balancers in the network to intelligently send external traffic to the proper data center. We'll be using GSLB to direct users to the data center that actually houses the server they're trying to reach. This will help us avoid non-optimal pathing and hairpinning across the data center interconnects.
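The GSLB piece pairs a site and a service per data center and then answers DNS for the application's name with the address of whichever site is healthy (or currently hosts the workload). A rough sketch in NetScaler CLI terms — all names and IP addresses here are made-up placeholders, and a real deployment also needs ADNS services and metric exchange between the appliances:

```
# Define a GSLB site per data center (IPs are placeholders)
add gslb site dc1 10.0.1.10
add gslb site dc2 10.0.2.10

# One GSLB service per data center, pointing at that site's local VIP
add gslb service app-dc1 203.0.113.20 HTTP 80 -siteName dc1
add gslb service app-dc2 198.51.100.20 HTTP 80 -siteName dc2

# The GSLB vserver owns DNS resolution for the app's domain
add gslb vserver app-gslb HTTP
bind gslb vserver app-gslb -serviceName app-dc1
bind gslb vserver app-gslb -serviceName app-dc2
bind gslb vserver app-gslb -domainName app.example.com
```

With this in place, a client resolving app.example.com gets the VIP of the data center GSLB chooses, so external traffic lands at the right site instead of hairpinning across the interconnect.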

At the end of the day it is a huge project and I'm really happy I get to lead it. The opportunity to learn, and also to show that I can be a key player for this company, is priceless. I'm one lucky, albeit stressed out, network engineer. This is what makes us or breaks us as IT professionals.

Copyright © 2005 - 2024 4d4m.net