Repurposing Older Servers in High Performance Settings

You might think that running a high-end data center requires a tight upgrade cycle, with hardware refreshes every couple of years. However, by properly clustering your servers, you can lengthen your upgrade cycles and get more use out of your equipment. As discussed in an article by Wired, Google has been leading the way in data center innovation, in large part by custom building many of its systems at a fraction of the cost of retail servers, with only the components needed to get work done.

Server room

Removing graphics cards and enclosures helps reduce equipment costs while making the hardware much more energy efficient. Although it isn't practical for every data center to build custom servers, another effective Google tactic for increasing performance is to treat computing power as a single pool rather than managing it at the individual server level. Rather than using uniform sets of high-end servers, Google uses a mix of equipment at varying capacities and relies on modern systems to delegate tasks efficiently.

In today's environment, where high performance is key to powering most online systems, such as streaming video or hosted applications, the power of any single server matters less than the capacity of the pool, since data centers need multiple devices to handle all the requests anyway.

Clustering in a Nutshell

Server clusters are two or more servers that appear on a network as a single entity. In many cases they are used in mission-critical settings, such as web services, databases, file systems, and email servers, to reduce the odds of a single server bringing the entire network down. While the technology used to be associated with high-end settings, clustering is now used by companies of all sizes. Cloud computing is one of the most powerful applications of clustering today because it shows that even the most demanding clients can have their needs met, regardless of whether the demand is sporadic or predictable.
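The "single entity" idea can be sketched in a few lines. The toy below, with made-up server names and a simplified health check, shows how a cluster answers requests from whichever node is healthy, so clients never need to know which physical machine is serving them:

```python
# Toy sketch of a two-node cluster presenting itself as one service.
# Node names and the boolean "health check" are illustrative only;
# real clusters use heartbeats, quorum, and shared or replicated state.
class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def heartbeat(self):
        # Stand-in for a real health probe (ping, TCP check, etc.).
        return self.healthy

def active_node(primary, standby):
    """Return whichever node should serve traffic right now."""
    return primary if primary.heartbeat() else standby

primary, standby = Node("srv-a"), Node("srv-b")
print(active_node(primary, standby).name)  # srv-a while healthy
primary.healthy = False                    # simulate a crash
print(active_node(primary, standby).name)  # srv-b takes over
```

From the network's point of view, the service address never changes; only the node behind it does, which is what keeps a single failure from taking the service down.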

Depending on your needs, clusters can be configured for fault tolerance or for performance by spreading the load across multiple machines. For the purposes of this guide, the focus is on the latter, because that technique is how you can maximize the performance of existing hardware without constantly splurging on new equipment.

As discussed in a previous article on the Site24x7 blog, clustering today is a significant improvement over round robin server configurations: modern solutions pool resources intelligently, whereas the round robin approach delegates tasks blindly, in strict rotation. Because round robin ignores capacity, it cannot deliver predictable performance unless all systems have uniform equipment.
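The difference is easy to see in a small simulation. In the sketch below, the server names and capacity weights are invented for illustration: round robin gives each machine an equal share of 24 tasks, while a capacity-aware scheduler (here, "least relative load") lets the bigger box absorb most of the work:

```python
from itertools import cycle

# Hypothetical mixed fleet: two old 4-unit servers, one new 16-unit server.
servers = {"old-1": 4, "old-2": 4, "new-1": 16}

# Naive round robin ignores capacity: every server gets equal work.
rr = cycle(servers)
round_robin_plan = [next(rr) for _ in range(24)]

# A capacity-aware scheduler sends each task to the server with the
# lowest load relative to its capacity, so work lands in proportion
# to what each machine can actually handle.
load = {name: 0 for name in servers}
weighted_plan = []
for _ in range(24):
    target = min(servers, key=lambda s: load[s] / servers[s])
    load[target] += 1
    weighted_plan.append(target)

print(round_robin_plan.count("new-1"))  # 8  -> a third of the tasks
print(weighted_plan.count("new-1"))     # 16 -> two thirds of the tasks
```

Under round robin, the old servers become the bottleneck as soon as their third of the load exceeds their capacity; the weighted scheduler keeps every machine, old or new, equally busy relative to what it can do.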

The Logic of Set Upgrade Cycles

Keeping clustering in mind, consider that business technology advances no longer occur on a set schedule. In the past, set upgrade cycles gave company management a fixed timetable for allocating IT budgets. Additionally, IT vendors often align their support contracts and end-of-life (EOL) cycles with the standard expected life of hardware.

For data center professionals, big data has thrown this cycle off by placing burdens on networks that were unimaginable in the past. This trend is only going to continue; however, clustering can make the situation easier to deal with, since it allows you to keep older hardware in service alongside modern equipment without worrying about uniformity.

Using Virtualization to Repurpose Hardware

Even as companies flock to purchase the latest and greatest hardware, virtualization has made used hardware an attractive option for many of them, primarily because virtual machines allow for increased reliability at minimal cost. Single machine images can be maintained from a central location, ensuring that if one machine fails, others can pick up the slack without missing a beat.

The biggest benefit of this redundancy isn't the simplified management, but rather the ability for data center administrators to hot-swap aging equipment for newer devices once a server reaches the end of its useful life.

In the past, support contracts were a significant driver of upgrades; virtualization, however, separates the software layer from the hardware, making vendor support much less important. Features such as snapshots, live migration, and workload recovery make it much easier to move workloads from device to device, so you can keep unsupported hardware in service and upgrade when it makes sense.
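That hot-swap workflow is essentially a "drain and retire" pattern: before an aging host leaves service, its workloads are migrated onto the remaining hosts. The sketch below models only the placement logic; the host names, RAM sizes, and VM list are invented, and the reassignment line stands in for whatever live-migration call your hypervisor provides:

```python
# Hypothetical fleet (GB of RAM) and current VM placement: (host, GB used).
hosts = {"old-rack-1": 32, "new-rack-1": 64, "new-rack-2": 64}
placement = {
    "web-vm":  ("old-rack-1", 8),
    "db-vm":   ("old-rack-1", 16),
    "mail-vm": ("new-rack-1", 8),
}

def free_ram(host):
    used = sum(ram for h, ram in placement.values() if h == host)
    return hosts[host] - used

def drain(host):
    """Move every VM off `host`, always picking the emptiest target."""
    for vm, (h, ram) in list(placement.items()):
        if h != host:
            continue
        target = max((c for c in hosts if c != host), key=free_ram)
        placement[vm] = (target, ram)  # stand-in for a live migration

drain("old-rack-1")
print(placement)  # no VM is left on old-rack-1
```

Once the drain completes, the old host can be powered off and decommissioned with no downtime for the workloads themselves, which is exactly why expired support contracts matter so much less than they used to.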

Extended Lifecycles Don’t Always Make Sense

Before you start delaying infrastructure upgrades in your data center, you need to weigh the full operational and energy costs of your existing infrastructure against the savings from newer equipment. To help you make sense of energy costs, Site24x7 has an article on how online energy efficiency calculators can make this decision easier.
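The core of that calculation fits in a few lines. All of the figures below, wattages, electricity price, and replacement cost, are illustrative assumptions, not measurements; plug in your own numbers:

```python
# Back-of-the-envelope comparison: keep an old server vs. replace it.
# All inputs are illustrative assumptions, not figures from this article.
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(watts, price_per_kwh):
    """Yearly electricity cost for a device drawing `watts` continuously."""
    return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

old_cost = annual_energy_cost(watts=450, price_per_kwh=0.12)
new_cost = annual_energy_cost(watts=250, price_per_kwh=0.12)
yearly_savings = old_cost - new_cost

# Years for energy savings alone to pay back a hypothetical $4,000 server.
payback_years = 4000 / yearly_savings

print(f"old: ${old_cost:.0f}/yr, new: ${new_cost:.0f}/yr")
print(f"payback on a $4,000 server: {payback_years:.1f} years")
```

With these example numbers, energy savings alone take well over a decade to pay for the new hardware, which is why the decision usually hinges on the other factors this section mentions: failure rates, support, and performance per watt under real workloads.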

A whitepaper from IDC discusses how extending server replacement cycles from three to five years can significantly increase hardware failure rates, especially if the software running on the servers is not optimized for them. For this reason, remember that keeping old hardware in service can help in some cases, but you shouldn't rely on older equipment as your primary infrastructure.
