Parallel computing has seen many changes since the days of highly expensive, proprietary supercomputers. Performance has also improved in mainframe computing across many environments, but these computing environments may not be the most cost-effective and flexible solution to a problem. Over the past decade, cluster technologies have been developed that allow multiple low-cost computers to work in a coordinated fashion to process applications. The economics, performance, and flexibility of compute clusters make cluster computing an attractive alternative to centralized computing models and the cost, inflexibility, and scalability issues inherent to those models.
Generally, “cluster” means a close group or bunch of similar things occurring together.
A computer cluster is a group of linked computers working together so closely that in many respects they form a single system.
The components of a cluster are commonly connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
A more complete definition:
“A cluster is a type of parallel or distributed processing system, which consists of a collection of interconnected stand-alone computers co-operatively working together as a single, integrated computing resource.”
The major objective of a cluster is to utilize a group of processing nodes working cooperatively so as to complete the assigned job in the minimum amount of time. The main strategy for achieving this objective is to transfer excess load from busy nodes to idle nodes. Clusters may also be deployed to address load balancing, parallel processing, system management, and scalability.
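The load-transfer strategy described above can be sketched in a few lines. The following is a minimal illustration, not a real cluster scheduler: it greedily assigns each task (represented here only by a hypothetical cost value) to the currently least-loaded node, so that busy nodes are relieved and idle nodes are kept working. The node names and task costs are invented for the example.

```python
import heapq

def balance(task_costs, node_names):
    """Greedily assign each task cost to the least-loaded node."""
    # Priority queue of (current_load, node_name); least-loaded node pops first.
    heap = [(0, name) for name in node_names]
    heapq.heapify(heap)
    assignment = {name: [] for name in node_names}
    # Placing the largest tasks first (the classic LPT heuristic) tends to
    # produce a more even final distribution of load.
    for cost in sorted(task_costs, reverse=True):
        load, name = heapq.heappop(heap)
        assignment[name].append(cost)
        heapq.heappush(heap, (load + cost, name))
    return assignment

if __name__ == "__main__":
    result = balance([5, 3, 8, 2, 7, 4], ["node1", "node2", "node3"])
    for name, assigned in sorted(result.items()):
        print(name, assigned, "total load:", sum(assigned))
```

For the sample input, the six tasks (total cost 29) end up spread across the three nodes with per-node loads of 10, 10, and 9, rather than piling onto a single busy node. Real cluster schedulers must additionally handle dynamic arrival of jobs, heterogeneous nodes, and the cost of migrating work, which this sketch ignores.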
Why Clusters?
The question may arise why clusters are designed and built when perfectly good commercial supercomputers are available on the market. The answer is that the latter are expensive, while clusters are surprisingly powerful. The supercomputer has come to play a larger role in business applications, commercial products have their place, and there are perfectly good reasons to buy a commercially produced supercomputer. However, many who need to harness supercomputing power do not buy supercomputers because they cannot afford them; such machines are also difficult or impossible to upgrade. Clusters, on the other hand, are a cheap and easy way to take individual components and combine them into a single supercomputer. In some areas of research, clusters are actually faster than commercial supercomputers. Clusters also have the distinct advantage of being simple to build using components available from hundreds of sources. The most obvious benefit of clusters, and the most compelling reason for the growth in their use, is that they have significantly reduced the cost of processing power. This reduction in the cost of entry to high-performance computing (HPC) has been driven by the commoditization of both hardware and software, particularly over the last 10 years. The cost of all computer components has dropped dramatically in that time. The components critical to the development of low-cost clusters are:
1. Processors - commodity processors are now capable of computational power previously reserved for supercomputers.
2. Memory - the memory used by these processors has dropped in cost along with the processors.
3. Networking components - the most recent group of products to experience commoditization and dramatic cost decreases is networking hardware. High-speed networks can now be assembled with these products for a fraction of the cost necessary only a few years ago.
4. Motherboards, buses, and other sub-systems – all of these have become commodity products, allowing the assembly of affordable computers from off-the-shelf components.