Clustering

A server cluster is a collection of independent servers that together provide a single, highly available platform for hosting applications.

Server clustering offers three primary benefits: availability, manageability, and scalability.

Availability

Server clusters provide a highly available platform for deploying applications. A server cluster keeps applications running through both planned downtime for maintenance and unplanned downtime caused by failures, protecting against failures of hardware, the Windows operating system, device drivers, and application software. A cluster also allows the operating system and application software to be upgraded node by node without taking the application offline.
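
The sketch below, in Python and purely for illustration, shows the basic failover idea: a workload stays on its current node while that node is healthy, and moves to a surviving node when it is not. The node names and the health-check function are hypothetical and do not correspond to any real cluster product or API.

from typing import Callable, List

def choose_owner(nodes: List[str],
                 is_healthy: Callable[[str], bool],
                 current: str) -> str:
    # Keep the workload on its current node while that node is healthy;
    # otherwise fail it over to the first healthy alternative.
    if is_healthy(current):
        return current
    for node in nodes:
        if node != current and is_healthy(node):
            return node
    raise RuntimeError("No healthy node available to host the workload")

# Example: node-1 has failed, so the workload moves to node-2.
nodes = ["node-1", "node-2"]
print(choose_owner(nodes, lambda n: n != "node-1", current="node-1"))  # node-2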

Manageability

Server clusters allow administrators to quickly inspect the status of all cluster resources and move workloads between servers within the cluster. This is useful for manual load balancing and for performing “rolling updates” of the servers without taking important data and applications offline.
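
As a rough illustration of the rolling-update idea, the following Python sketch drains one node at a time, updates it, and returns it to service so the applications stay online throughout. The node names and the move/update helpers are hypothetical placeholders, not real cluster commands.

def rolling_update(nodes, move_workloads, apply_update):
    # Drain, update, and return each node to service one at a time so the
    # applications it hosted keep running on the remaining nodes.
    for node in nodes:
        others = [n for n in nodes if n != node]
        move_workloads(source=node, targets=others)  # drain this node
        apply_update(node)                           # patch / upgrade it
        # the node then rejoins the cluster and can host workloads again

rolling_update(
    ["node-1", "node-2"],
    move_workloads=lambda source, targets: print(f"draining {source} -> {targets}"),
    apply_update=lambda node: print(f"updating {node}"),
)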

Scalability

Applications that can be partitioned can be spread across the servers of a cluster, allowing additional CPU and memory to be applied to the workload; as the workload grows, more servers can be added to the cluster. A partitioned application is one whose data (or function) can be split into independent units. For example, a customer database could be split into two units, one covering customers with names beginning A through L and the other covering customers with names beginning M through Z.
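
The following Python sketch illustrates this kind of partitioning, assuming a hypothetical two-node cluster in which each node owns one alphabetical range of customer surnames (the A-L / M-Z split described above). The node names are made up for illustration.

PARTITIONS = {
    "node-1": ("A", "L"),   # customers whose surname starts with A..L
    "node-2": ("M", "Z"),   # customers whose surname starts with M..Z
}

def owning_node(surname: str) -> str:
    # Return the cluster node responsible for this customer record.
    first = surname.strip().upper()[:1]
    for node, (low, high) in PARTITIONS.items():
        if low <= first <= high:
            return node
    raise ValueError(f"no partition covers surname {surname!r}")

print(owning_node("Anderson"))   # node-1
print(owning_node("Mitchell"))   # node-2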

What hardware do you need to build a server cluster?

In general, the requirements for building a server cluster include the following:

Servers - Two or more PCI-based machines running one of the operating system releases that support server clusters. Server clusters can run on any hardware architecture supported by the base operating system; however, you generally cannot mix 32-bit and 64-bit architectures in the same cluster.

Storage - Each server must be attached to one or more shared, external storage buses, separate from the bus containing the system disk, the boot disk, or the pagefile disk. Applications and data are stored on one or more disks attached to this shared bus, which must have enough capacity for all of the applications running in the cluster. This shared storage configuration allows applications to fail over between servers in the cluster.

ITMS Ltd recommends hardware Redundant Array of Inexpensive Disks (RAID) for all cluster disks to eliminate disk drives as a potential single point of failure. This means using either a RAID storage unit, a host-based RAID adapter that implements RAID across “dumb” disks, or a similar arrangement.

SCSI and Fibre Channel arbitrated loop are supported for two-node cluster configurations only. ITMS Ltd recommends Fibre Channel switched fabrics for clusters of more than two nodes.

Network - Each server needs at least two network adapters: typically one connects to the public network and the other to a private network between the cluster nodes. A static IP address is needed for each group of applications that moves as a unit between nodes. A single cluster can project the identities of multiple servers by using multiple IP addresses and computer names; each such identity is known as a virtual server.
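
The Python sketch below illustrates the virtual server idea: a computer name and static IP address that move together with their application group, so clients always reach the same name and address regardless of which physical node currently hosts it. The server name, IP address, and node labels are made up for illustration.

from dataclasses import dataclass

@dataclass
class VirtualServer:
    name: str          # computer name that clients connect to
    ip_address: str    # static IP address that moves with the group
    host_node: str     # physical node currently hosting the group

    def fail_over(self, new_node: str) -> None:
        # Move the whole group (name, IP address, application) to another
        # node; clients keep using the same name and address.
        self.host_node = new_node

vs = VirtualServer(name="SQLVS1", ip_address="192.168.10.50", host_node="node-1")
vs.fail_over("node-2")
print(vs)  # clients still reach SQLVS1 at 192.168.10.50, now hosted on node-2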

ITMS Ltd has a proven track record of delivering highly available, scalable cluster solutions on Microsoft, Linux and Solaris platforms, not only in single-site environments but also across metropolitan area clusters.

Please contact us for more information: info@itmsltd.net
