Building Your Own Large Data Clouds (Raywulf Clusters)

September 27, 2009

We recently added four new racks to the Open Cloud Testbed. The racks are designed to support cloud computing, both clouds that provide on-demand VMs and clouds that support data intensive computing. Since there is not much information available describing how to put together these types of clouds, I thought I would share how we configured our racks.


These are two of the four racks that were added to the Open Cloud Testbed as part of the Phase 2 build out. Photograph by Michal Sabala.

These racks can be used as a basis for private clouds, hybrid clouds, or condo clouds.

There is a lot of information about building Beowulf clusters, which are designed for compute intensive computing. Here is one of the first tutorials and some more recent information.

In contrast, our racks are designed to support data intensive computing. We sometimes call these Raywulf clusters. Briefly, the goal is to make sure that there are enough spindles moving data in parallel and enough cores to process the data as it is moved. (Our data intensive middleware is called Sector, but “Graywulf” is already taken, and there are not many words left that rhyme with “Beo-”. Other suggestions are welcome; please use the comments below.)
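
As a rough check of that balance, you can compare a node's aggregate disk bandwidth to what its cores can consume. The sketch below (in Python) is illustrative only; the per-disk and per-core rates are assumptions, not measurements from our racks.

    # Back-of-the-envelope balance check for one compute/storage node.
    # The throughput figures below are assumptions for illustration,
    # not measurements from the Open Cloud Testbed.
    DISKS_PER_NODE = 4      # spindles per node
    DISK_MB_PER_S = 90      # assumed sequential throughput of one SATA disk
    CORES_PER_NODE = 4      # quad-core CPU
    CORE_MB_PER_S = 100     # assumed rate at which one core can process data

    disk_bw = DISKS_PER_NODE * DISK_MB_PER_S    # ~360 MB/s off the spindles
    cpu_bw = CORES_PER_NODE * CORE_MB_PER_S     # ~400 MB/s through the cores

    print("aggregate disk bandwidth:  %d MB/s" % disk_bw)
    print("aggregate processing rate: %d MB/s" % cpu_bw)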

Each rack costs about $85,000 (with standard discounts), consists of 32 nodes with 124 cores and 496 GB of RAM, has 124 TB of disk across 124 spindles, and draws about 10.3 kW of power (excluding the power required for cooling).

With 3x replication, there is about 40 TB of usable storage per rack, which means that the cost of providing balanced long term storage and compute power is about $2,000 per TB. So, for example, a single rack could be used as the basis for a private cloud that manages and analyzes approximately 40 TB of data. At the end of this note is some performance information about a single rack system.
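
For concreteness, here is the arithmetic behind those figures, using only the numbers quoted above:

    # Cost per usable TB for a single rack, using the figures above.
    rack_cost_usd = 85000.0
    raw_storage_tb = 124.0
    replication = 3

    usable_tb = raw_storage_tb / replication    # about 41 TB
    cost_per_tb = rack_cost_usd / usable_tb     # roughly $2,000 per TB

    print("usable storage: %.0f TB" % usable_tb)
    print("cost per usable TB: $%.0f" % cost_per_tb)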

Each rack is a standard 42U computer rack and consists of a head node and 31 compute/storage nodes. We installed Debian GNU/Linux 5.0 as the operating system. Here is the configuration of the rack and of the compute/storage nodes.

In contrast, there are specialized storage configurations, such as the one designed by Backblaze, that provide 67 TB for $8,000: roughly half the storage for a tenth of the cost. The difference is that Raywulf clusters are designed for data intensive computing using middleware such as Hadoop and Sector/Sphere, not just for storage.

Rack Configuration

  • 31 compute/storage nodes (see below)
  • 1 head node (see below)
  • 2 Force10 S50N switches, with two 10 Gbps uplinks so that the inter-rack bandwidth is 20 Gbps (see the bandwidth calculation after this list)
  • 1 10GE module
  • 2 optics and stacking modules
  • 1 3Com Baseline 2250 switch to provide additional cat5 ports for the IPMI management interfaces
  • cabling
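
The uplink choice implies a modest oversubscription between the nodes in a rack and the rest of the testbed. A quick calculation, assuming all 31 compute/storage node NICs are driven at full rate:

    # Intra-rack versus inter-rack bandwidth for one rack.
    compute_nodes = 31          # compute/storage nodes, each with a 1 Gbps NIC
    node_link_gbps = 1.0
    uplink_gbps = 2 * 10.0      # two 10 Gbps uplinks per rack

    intra_rack_gbps = compute_nodes * node_link_gbps
    oversubscription = intra_rack_gbps / uplink_gbps

    print("aggregate node bandwidth: %.0f Gbps" % intra_rack_gbps)
    print("inter-rack uplink:        %.0f Gbps" % uplink_gbps)
    print("oversubscription ratio:   %.2f:1" % oversubscription)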

Compute/Storage Node Configuration

  • Intel Xeon 5410 Quad Core CPU with 16GB of RAM
  • SATA RAID controller
  • four (4) 1 TB SATA hard drives in a RAID-0 configuration (see the throughput check after this list)
  • 1 Gbps NIC
  • IPMI management
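
To check that the four spindles in a node actually deliver their aggregate bandwidth, a simple test is to time a large sequential read off the RAID-0 array. The sketch below is minimal; the file path is a placeholder, and the test file should be much larger than the 16 GB of RAM so the page cache does not skew the result.

    # Time a sequential read of a large file stored on the RAID-0 array.
    # /raid/testfile is a placeholder path; use a file much larger than RAM.
    import time

    TEST_FILE = "/raid/testfile"
    CHUNK = 64 * 1024 * 1024            # read in 64 MB chunks

    total_bytes = 0
    start = time.time()
    f = open(TEST_FILE, "rb")
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        total_bytes += len(buf)
    f.close()
    elapsed = time.time() - start

    print("read %.0f MB in %.1f s (%.0f MB/s)" %
          (total_bytes / 1e6, elapsed, total_bytes / 1e6 / elapsed))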

Benchmarks. We benchmarked the new racks using the Terasort benchmark, with version 0.20.1 of Hadoop and version 1.24a of Sector/Sphere. Replication was turned off in both Hadoop and Sector. All the racks were located within one data center. It is clear from these tests that the new versions of Hadoop and Sector/Sphere are both faster than the previous versions.

Configuration          Sector/Sphere    Hadoop
1 rack (32 nodes)      28m 25s          85m 49s
2 racks (64 nodes)     15m 20s          37m 0s
3 racks (96 nodes)     10m 19s          24m 14s
4 racks (128 nodes)    7m 56s           17m 45s
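
One way to read the table is in terms of speedup relative to a single rack. Assuming the same dataset was sorted in each configuration, the Sector/Sphere times correspond to roughly 90% parallel efficiency:

    # Speedup and parallel efficiency from the Sector/Sphere column above.
    times_s = {
        1: 28 * 60 + 25,    # 1 rack  (32 nodes):  28m 25s
        2: 15 * 60 + 20,    # 2 racks (64 nodes):  15m 20s
        3: 10 * 60 + 19,    # 3 racks (96 nodes):  10m 19s
        4: 7 * 60 + 56,     # 4 racks (128 nodes):  7m 56s
    }

    base = times_s[1]
    for racks in sorted(times_s):
        speedup = float(base) / times_s[racks]
        efficiency = speedup / racks
        print("%d rack(s): %.2fx speedup, %.0f%% efficiency" %
              (racks, speedup, 100 * efficiency))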

The Raywulf clusters were designed by Michal Sabala and Yunhong Gu of the National Center for Data Mining at the University of Illinois at Chicago.

We are working on putting together more information on how to build a Raywulf cluster.

Sector/Sphere and our Raywulf clusters were selected as one of the Disruptive Technologies to be highlighted at SC09.



Revisiting the Case for Cloud Computing

September 6, 2009

The backlash to the hype over cloud computing is in full swing. I have given a number of talks on cloud computing over the past few months and have been struck by a few things.

First, at an industry event that I attended, although there were quite a few talks on cloud computing (it was one of the tracks), it seemed that only a small number of speakers had actually participated in a cloud computing project, and I was one of only a handful who had completed several cloud computing projects. Many of the other speakers were simply summarizing second- and third-hand reports about cloud computing. In my opinion, something was lost in the translation.

Rack of servers

Second, I think some of the backlash has gone too far. At one breakfast meeting I attended, there was essentially no acknowledgement of the potential that clouds offer today, simply emphasis on why “real companies” that have to worry about security could never use (public) clouds. Private and condo clouds were not mentioned as alternatives for companies whose security or compliance requirements preclude the use of today’s public clouds. Also unmentioned was the trade-off, which is always present, between the risk of breaches from performing certain operations in public clouds and the productivity gains that such clouds can provide.

Because of this backlash, I think it is a good time to revisit the case for cloud computing. There are three basic reasons for deploying certain operations to clouds:

Cost savings. By employing virtualization and making use of the economies of scale available to cloud service providers, deploying certain operations to clouds can lead to improved efficiencies. This advantage seems to be well understood and is, for example, one of the factors driving the Federal CIO’s push for cloud computing. See, for example, the recent RFQ from the GSA for a cloud computing storefront.

Productivity. The elastic, virtualized services that clouds provide lead directly to productivity improvements. As a simple example, I was building an analytic model over the weekend to meet a deadline, and the computation took over 4 hours. Since I was using a virtualized resource in a cloud, I was able to use the portal that controlled the various machine images to double the memory of my resource. Five minutes later I had a new virtualized image, and the computation took less than 5 minutes. (By the way, this is typical of analytic computations: when the data is so large that a computation can no longer be done in memory and must access the disk, the time required increases dramatically.) If, instead, I had gone through a standard procurement process to get a new machine with twice the memory, it would have been quite some time before the model was completed.

As another example, I work with a Fortune 500 client whose analytic models take weeks to build instead of days, because the modeling environment has neither enough disk space to hold all the temporary files and datasets the team needs when building models, nor computers powerful enough to compute the models fast enough to give the modeler timely feedback. This is unfortunately fairly typical of modeling environments in Fortune 500 companies (I’ll discuss this situation in a later post). A simple cloud would dramatically improve the situation.

New capabilities. Clouds also provide new capabilities. For example, large data clouds enable the processing and analysis of large datasets in ways that are simply not possible with architectures that manage the data using databases. As a simple example, the type of analytic computation abstracted by the MalStone Benchmark is relatively straightforward using a Hadoop or Sector based cloud, even with 100 TB of data, but not practical using a traditional database when the data is that size.

What’s new. Many of the ideas behind cloud computing are quite old. On the other hand, the combination of 1) the scale, 2) the utility based pricing, and 3) the simplicity provided by cloud computing makes cloud computing a disruptive technology. If you are interested in understanding cloud computing from this point of view, you might find a recent talk I gave for an IEEE Conference on New Technologies, called My Other Computer is a Data Center, interesting. There is also a written version of a portion of the talk, On the Varieties of Clouds for Data Intensive Computing, which recently appeared in the IEEE Data Engineering Bulletin.

The image is by John Seb and is available from Flickr under the Creative Commons license.