Building Your Own Large Data Clouds (Raywulf Clusters)

September 27, 2009

We recently added four new racks to the Open Cloud Testbed. The racks are designed to support cloud computing, both clouds that provide on-demand VMs and clouds that support data intensive computing. Since there is not a lot of information available describing how to put together these types of clouds, I thought I would share how we configured our racks.

These are two of the four racks that were added to the Open Cloud Testbed as part of the Phase 2 build out. Photograph by Michal Sabala.

These racks can be used as a basis for private clouds, hybrid clouds, or condo clouds.

There is a lot of information about building Beowulf clusters, which are designed for compute intensive computing. Here is one of the first tutorials and some more recent information.

In contrast, our racks are designed to support data intensive computing. We sometimes call these Raywulf clusters. Briefly, the goal is to make sure that there are enough spindles moving data in parallel with enough cores to process the data being moved. (Our data intensive middleware is called Sector; Graywulf is already taken, and there are not many words left that rhyme with Beo-. Other suggestions are welcome. Please use the comments below.)

Each rack costs about $85,000 (with standard discounts), consists of 32 nodes and 124 cores with 496 GB of RAM and 124 TB of disk across 124 spindles, and consumes about 10.3 kW of power (excluding the power required for cooling).

With 3x replication, there is about 40 TB of usable storage available, which means that the cost to provide balanced long term storage and compute power is about $2,000 per TB. So, for example, a single rack could be used as a basis for a private cloud that can manage and analyze approximately 40 TB of data. At the end of this note is some performance information about a single rack system.
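As a quick sanity check on that number, here is the arithmetic as a small Python sketch (the prices and replication factor are simply the figures quoted above):

    # Back-of-the-envelope cost per usable TB for one Raywulf rack, using the
    # figures quoted above: rack price, raw disk, and 3x replication.
    RACK_COST_USD = 85000        # approximate price with standard discounts
    RAW_DISK_TB = 124            # 31 compute/storage nodes x 4 x 1 TB drives
    REPLICATION_FACTOR = 3       # each block is stored three times

    usable_tb = RAW_DISK_TB / float(REPLICATION_FACTOR)   # ~41 TB
    cost_per_usable_tb = RACK_COST_USD / usable_tb        # ~$2,050 per TB

    print("usable storage: %.0f TB" % usable_tb)
    print("cost per usable TB: $%.0f" % cost_per_usable_tb)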

Each rack is a standard 42U computer rack and consists of a head node and 31 compute/storage nodes. We installed GNU/Debian Linux 5.0 as the operating system. Here is the configuration of the rack and of the compute/storage nodes.

In contrast, there are specialized storage configurations, such as the one designed by Backblaze, that provide 67 TB for about $8,000: roughly half the storage for one tenth of the cost. The difference is that Raywulf clusters are designed for data intensive computing using middleware such as Hadoop and Sector/Sphere, not just for storage.

Rack Configuration

  • 31 compute/storage nodes (see below)
  • 1 head node (see below)
  • 2 Force10 S50N switches, with 2 10 Gbps uplinks so that the inter-rack bandwidth is 20 Gbps
  • 1 10GE module
  • 2 optics and stacking modules
  • 1 3Com Baseline 2250 switch to provide additional cat5 ports for the IPMI management interfaces.
  • cabling

Compute/Storage Node Configuration

  • Intel Xeon 5410 Quad Core CPU with 16GB of RAM
  • SATA RAID controller
  • four (4) SATA 1TB hard drives in RAID-0 configuration
  • 1 Gbps NIC
  • IPMI management

Benchmarks. We benchmarked these new racks using the Terasort Benchmark with version 0.20.1 of Hadoop and version 1.24a of Sector/Sphere. Replication was turned off in both Hadoop and Sector. All the racks were located within one data center. It is clear from these tests that the new versions of Hadoop and Sector/Sphere are both faster than the previous versions.

Configuration          Sector/Sphere    Hadoop
1 rack (32 nodes)      28m 25s          85m 49s
2 racks (64 nodes)     15m 20s          37m 0s
3 racks (96 nodes)     10m 19s          24m 14s
4 racks (128 nodes)    7m 56s           17m 45s
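If you want to run a similar TeraSort-style test on your own cluster, the sketch below shows one way to drive the standard Hadoop examples jar from Python. The jar name and HDFS paths are assumptions that will vary with your installation; teragen takes the number of 100-byte rows to generate (10 billion rows is 1 TB).

    # Minimal sketch: drive a TeraSort-style run with the stock Hadoop examples jar.
    # The jar name and HDFS directories are assumptions; adjust them for your cluster.
    import subprocess

    EXAMPLES_JAR = "hadoop-0.20.1-examples.jar"   # assumed name of the bundled examples jar
    ROWS = 10000000000                            # 10 billion 100-byte rows = 1 TB

    def hadoop_jar(*args):
        # Run "hadoop jar <examples jar> ..." and stop if the job fails.
        subprocess.check_call(["hadoop", "jar", EXAMPLES_JAR] + list(args))

    hadoop_jar("teragen", str(ROWS), "/benchmarks/terasort-input")
    hadoop_jar("terasort", "/benchmarks/terasort-input", "/benchmarks/terasort-output")
    hadoop_jar("teravalidate", "/benchmarks/terasort-output", "/benchmarks/terasort-report")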

The Raywulf clusters were designed by Michal Sabala and Yunhong Gu of the National Center for Data Mining at the University of Illinois at Chicago.

We are working on putting together more information on how to build a Raywulf cluster.

Sector/Sphere and our Raywulf Clusters were selected as one of the Disruptive Technologies that will be highlighted at SC 09.

The photograph above of two racks from the Open Cloud Testbed was taken by Michal Sabala.


Cloud Computing Testbeds

July 29, 2009

Cloud computing is still an immature field: there are lots of interesting research problems, no standards, few benchmarks, and very limited interoperability between different applications and services.

The network infrastructure for Phase 1 of the Open Cloud Testbed.

Currently, there are relatively few testbeds available to the research community for research in cloud computing and few resources available to developers for testing interoperability. I expect this will change over time, but below are the testbeds that I am aware of and a little bit about each of them. If you know of any others, please let me know so that I can keep the list current (at least for a while until cloud computing testbeds become more common).

Before discussing the testbeds per se, I want to highlight one of the lessons that I have learned while working with one of the testbeds — the Open Cloud Testbed (OCT).

Disclaimer: I am one of the technical leads for the OCT and one of the Directors of the Open Cloud Consortium.

Currently the OCT consists of 120 identical nodes and 480 cores. All were purchased and assembled at the same time by the same team. One thing that caught me by surprise is that there are enough small differences between the nodes that the results of some experimental studies can vary by 5%, 10%, 20%, or more, depending upon which nodes are used within the testbed. This is because even one or two nodes with slightly inferior performance can impact the overall end-to-end performance of an application that uses some of today’s common cloud middleware.

Amazon Cloud. Although not usually thought of as a testbed, Amazon’s EC2, S3, SQS, EBS and related services are economical enough that they can serve as the basis for an on-demand testbed for many experimental studies. In addition, Amazon provides grants so that their cloud services can be used for teaching and research.

Open Cloud Testbed (OCT). The Open Cloud Testbed is a testbed managed by the Open Cloud Consortium. The testbed currently consists of 4 racks of servers, located in 4 data centers at Johns Hopkins University (Baltimore), StarLight (Chicago), the University of Illinois (Chicago), and the University of California (San Diego). Each rack has 32 nodes and 128 cores. Two Cisco 3750E switches connect the 32 nodes, which then connect to the outside by a 10 Gb/s uplink. In contrast to other cloud testbeds, the OCT utilizes wide area high performance networks, not the familiar commodity Internet. There are 10 Gb/s networks that connect the various data centers. This network is provided by Cisco’s CWave national testbed infrastructure and through a partnership with the National Lambda Rail. Over the next few months the OCT will double in size to 8 racks and over 1000 cores. In the OCT, a variety of cloud systems and services are installed and available for research, including Hadoop, Sector/Sphere, CloudStore (KosmosFS), Eucalyptus, and Thrift. The OCT is a testbed designed to support systems-level, middleware and application level research in cloud computing, as well as the development of standards and interoperability frameworks. A technical report describing the OCT is available from arXiv: arxiv.org:0907.4810.

Open Cirrus(tm) Testbed. The Open Cirrus Testbed is a joint initiative sponsored by HP, Intel and Yahoo! in collaboration with the NSF, the University of Illinois at Urbana-Champaign (UIUC), Karlsruhe Institute of Technology, and the Infocomm Development Authority (IDA) of Singapore. Each of the six sites consists of at least 1000 cores and associated storage. The Open Cirrus Testbed is a federated system designed to support systems-level research in cloud computing. A technical report describing the testbed can be found here.

Eucalyptus Public Cloud. The Eucalyptus Public Cloud is a testbed for Eucalyptus applications. Eucalyptus shares the same APIs as Amazon’s web services. Currently, users are limited to no more than 4 virtual machines and experimental studies that require 6 hours or less.

Google-IBM-NSF CluE Resource. Another cloud computing testbed is the Google-IBM-NSF Cluster Exploratory (CluE) resource. The CluE resource appears to be a testbed for cloud computing applications, in the sense that Hadoop applications can be run on the testbed, but it does not appear to support systems research and experiments involving cloud middleware and cloud services per se, as is possible with the OCT and the Open Cirrus Testbed. (At least this was the case the last time I checked. It may be different now. If it is possible to do systems level research on the testbed, I would appreciate it if someone would let me know.) NSF has awarded nearly $5 million in grants to 14 universities through its Cluster Exploratory (CluE) program to support research on this testbed.


Large Data Clouds FAQ

July 16, 2009

This is a post that contains some questions and answers about large data clouds that I expect to update and expand from time to time.

What is large data? From the point of view of the infrastructure required to do analytics, data comes in three sizes:

  • Small data. Small data fits into the memory of a single machine. A good example of a small dataset is the dataset for the Netflix Prize. The Netflix Prize dataset consists of over 100 million movie ratings by 480 thousand randomly-chosen, anonymous Netflix customers who rated over 17 thousand movie titles. This dataset (although challenging enough to keep anyone from winning the grand prize for over 2 years) is just 2 GB of data and fits into the memory of a laptop. I discuss some lessons in analytic strategy that you can learn from this contest in this post.

    Building the ATLAS Detector at CERN's Large Hadron Collider

  • Medium data. Medium data fits into a single disk or disk array and can be managed by a database. It is becoming common today for companies to create 1 to 10 TB or larger data warehouses.
  • Large data. Large data is so large that it is challenging to manage it in a database and instead specialized systems are used. We’ll discuss some examples of these specialized systems below. Scientific experiments, such as the Large Hadron Collider (LHC), produce large datasets. Log files produced by Google, Yahoo and Microsoft and similar companies are also examples of large datasets.

There have always been large datasets, but until recently, most large datasets were produced by the scientific and defense communities. Two things have changed. First, large datasets are now being produced by a third community: companies that provide internet services, such as search, on-line advertising and social media. Second, the ability to analyze these datasets is critical for the advertising systems that produce the bulk of the revenue for these companies. This provides a metric (dollars of online revenue produced) by which to measure the effectiveness of analytic infrastructure and analytic models. Using this metric, companies such as Google settled upon analytic infrastructure that is quite different from the grid-based infrastructure generally used by the scientific community.

What is a large data cloud? There is no standard definition of a large data cloud, but a good working definition is that a large data cloud provides i) storage services and ii) compute services layered over the storage services, where both scale to a data center and have the reliability associated with a data center. You can find some background information on clouds on this page containing an overview about clouds.

What are some of the options for working with large data? There are several options, including:

  • The most mature large data cloud application is the open source Hadoop system, which consists of the Hadoop Distributed File System (HDFS) and Hadoop’s implementation of MapReduce. An important advantage of Hadoop is that it has a very robust community supporting it and there are a large number of Hadoop projects, including Pig, which provides simple database-like operations over data managed by HDFS.
  • Another option is Sector, which consists of the Sector Distributed File System (SDFS) and a compute service called Sphere that allows users to execute arbitrary User Defined Functions (UDFs) over the data managed by SDFS. Sector supports MapReduce as a special case: a user-defined Map UDF, followed by the Shuffle and Sort UDFs provided by Sphere, followed by a user-defined Reduce UDF. Sector is an open source application written in C++. Unlike Hadoop, Sector includes security. There is a public Sector cloud for those interested in trying out Sector without downloading and installing it.
  • Greenplum uses a shared-nothing MPP (massively parallel processing) architecture based upon commodity hardware. The Greenplum architecture also integrates MapReduce-like functionality into its platform.
  • Aster has an MPP-based data warehousing appliance that supports MapReduce. They have an entry level system that manages up to 1 TB of data and an enterprise level system that is designed to support up to 1 PB of data.
How do I get started? The easiest way to get started is to download one of the applications and work through some basic examples. The example that most people work through is word count; a minimal streaming version is sketched below. Another common example is the terasort example (sorting 10 billion 100-byte records, where the first 10 bytes of each record are the key that is sorted and the remaining 90 bytes are the payload). A simple analytic to try is MalStone, which I have described in another post.
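As a concrete first example, here is the usual word count written as a Hadoop Streaming mapper and reducer in Python (a minimal sketch; any equivalent mapper/reducer pair would work):

    #!/usr/bin/env python
    # wordcount_streaming.py -- the classic word count as a Hadoop Streaming job.
    # The same file is used as both mapper ("map") and reducer ("reduce").
    import sys

    def mapper():
        # Emit "word<TAB>1" for every word read from stdin.
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

    def reducer():
        # Streaming delivers mapper output sorted by key, so counts for the
        # same word arrive on consecutive lines and can be summed in one pass.
        current, total = None, 0
        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t")
            if word != current:
                if current is not None:
                    print("%s\t%d" % (current, total))
                current, total = word, 0
            total += int(count)
        if current is not None:
            print("%s\t%d" % (current, total))

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()

A submission would look roughly like hadoop jar hadoop-streaming.jar -input /logs -output /counts -mapper "wordcount_streaming.py map" -reducer "wordcount_streaming.py reduce" -file wordcount_streaming.py; the exact name and location of the streaming jar depend on your Hadoop release.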

What are some of the issues that arise with large data cloud applications? The first issue is mapping your problem to the MapReduce or generalized MapReduce (like Sphere's UDFs) framework. Although this type of data parallel framework may seem quite specialized initially, it is surprising how many problems can be mapped to it with a bit of effort.

The second issue is that tuning Hadoop clusters can be challenging and time consuming. This is not surprising, considering the power provided by Hadoop to tackle very large problems.

The third issue is that with medium (100-node) and large (1,000-node) clusters, even a few underperforming nodes can impact the overall performance. There can also be problems with switches that impact performance in subtle ways. Dealing with these types of hardware issues can also be time consuming. It is sometimes helpful to run a known benchmark such as terasort or MalStone to distinguish hardware issues from programming issues.
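A simple way to spot a slow node is to run the same small job on every machine and flag the outliers. Here is a minimal sketch; the timings are illustrative and would in practice come from a per-node benchmark run:

    # Flag nodes whose benchmark time is well above the cluster median.
    # The timings below are illustrative; in practice they would come from running
    # the same small job (for example, a local sort) on every node in the rack.
    timings = {
        "node001": 61.2, "node002": 59.8, "node003": 88.4, "node004": 60.5,
    }

    def median(values):
        ordered = sorted(values)
        mid = len(ordered) // 2
        return ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2.0

    baseline = median(timings.values())
    THRESHOLD = 1.20   # flag anything more than 20% slower than the median

    for node, t in sorted(timings.items()):
        if t > THRESHOLD * baseline:
            print("%s: %.1fs vs median %.1fs -- check this node's hardware" % (node, t, baseline))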

What is the significance of large data clouds? Just a short time ago, it required specialized proprietary software to analyze 100 TB or more of data. Today, a competent team should be able to do this relatively straightforwardly with a 100-node large data cloud powered by Hadoop, Sector or similar software.

Getting involved. I just set up a Google Group for large data clouds:
groups.google.com/group/large-data-clouds. Please use this group to discuss issues related to large data clouds, including lessons learned, questions, announcements, etc. (no advertising please). In particular, if you have software you would like added to the list below, please comment below or send a note to the large data cloud Google Group.


Test Drive the Sector Public Cloud

June 23, 2009

Sector is an open source cloud written in C++ for storing, sharing and processing large data sets. Sector is broadly similar to the Google File System and the Hadoop Distributed File System, except that it is designed to utilize wide area high performance networks.

Sphere is middleware that is designed to process data managed by Sector.  Sphere implements a framework for distributed computing that allows any User Defined Function (UDF) to be applied to a Sector dataset.

One way to think about this is as a generalized MapReduce. With MapReduce, users work with key-value pairs and define a Map function and a Reduce function, and the MapReduce application creates a workflow consisting of a Map, Shuffle, Sort and Reduce. With Sector, users can create a workflow consisting of any sequence of User Defined Functions (UDFs) and apply these to any datasets managed by Sector. In particular, Sphere has predefined Shuffle and Sort UDFs that can be applied to datasets consisting of key-value pairs, so that MapReduce applications can be implemented once a user defines a Map and a Reduce UDF.
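To make the programming model concrete, here is a conceptual sketch in Python of a Sphere-style workflow: a chain of UDFs applied to partitioned data, with MapReduce appearing as the particular chain map, shuffle, sort, reduce. This is only an illustration of the idea, not the Sector C++ API.

    # Conceptual sketch of a Sphere-style UDF pipeline (not the Sector C++ API).
    # A "dataset" is a list of partitions; each UDF maps one partition to a new one.
    from itertools import groupby
    from operator import itemgetter

    def apply_udf(udf, dataset):
        return [udf(part) for part in dataset]

    def shuffle(dataset, buckets=2):
        # Redistribute records so that all records with the same key
        # end up in the same partition.
        out = [[] for _ in range(buckets)]
        for part in dataset:
            for key, value in part:
                out[hash(key) % buckets].append((key, value))
        return out

    def sort_udf(part):
        return sorted(part, key=itemgetter(0))

    # MapReduce is just one particular chain of UDFs: map -> shuffle -> sort -> reduce.
    def word_map(part):
        return [(word, 1) for line in part for word in line.split()]

    def word_reduce(part):
        return [(key, sum(v for _, v in group))
                for key, group in groupby(part, key=itemgetter(0))]

    data = [["large data clouds"], ["data intensive clouds"]]
    counts = apply_udf(word_reduce, apply_udf(sort_udf, shuffle(apply_udf(word_map, data))))
    print(counts)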

Sector also implements security and we are currently using it to bring up a HIPAA-compliant private cloud.

Since Sector/Sphere is written in C++, it is straightforward to support C++ based data access tools and programming APIs.

If you have access to a high speed research network (for example, if your network can reach StarLight, the National Lambda Rail, ESNet, or Internet2), then you can try out the Sector Public Cloud.

You can reach the Sector Public Cloud from the Sector home page sector.sourceforge.net.

There is a technical report on the design of Sector on arXiv: arXiv:0809.1181v2.

There is some information on the performance of Sector/Sphere in my post on the MalStone Benchmark, a benchmark for clouds that support data intensive computing.


The MalStone Benchmark, TeraSort and Clouds For Data Intensive Computing

May 25, 2009

The TPC Benchmarks have played an important role in comparing databases and transaction processing systems. Currently, there are no similar benchmarks for comparing two clouds.

The CloudStone Benchmark is a first step towards a benchmark for clouds designed to support Web 2.0 type applications. In this note, we describe the MalStone Benchmark, which is a first step towards a benchmark for clouds, such as Hadoop and Sector, designed to support data intensive computing.

MalStone is a stylized analytic computation of a type that is common in data intensive computing. The open source code to generate data for MalStone and a technical report describing MalStone and providing some sample implementations can be found at: code.google.com/p/malgen (look in the feature downloads section along the right hand side).

Detecting Drive-By Exploits from Log Files

We introduce MalStone with a simple example. Consider visitors to web sites. As described in the paper The Ghost in the Browser by Provos et al., presented at HotBots ’07, approximately 10% of web pages have exploits installed that can infect certain computers when users visit those web pages. Sometimes these are called “drive-by exploits.”

The MalStone benchmark assumes that there are log files that record the date and time that users visited web pages. Assume that the log files of visits have the following fields:

   | Timestamp | Web Site ID | User ID

There is a further assumption that if a computer becomes infected, this becomes known, perhaps at a later time. That is, for each computer, which we assume is identified by the ID of the corresponding user, it is known whether that computer was compromised at some later time:

   | User ID | Compromise Flag

Here the Compromise field is a flag, with 1 denoting a compromise. A very simple statistic that provides some insight into whether a web page is a possible source of compromises is to compute for each web site the ratio of visits in which the computer subsequently becomes compromised to those in which the computer remains uncompromised.

We call MalStone stylized since we do not argue that this is a useful or effective algorithm for finding compromised sites. Rather, we point out that if the log data is so large that it requires large numbers of disks to manage it, then computing something as simple as this ratio can be computationally challenging. For example, if the data spans 100 disks, then the computation cannot be done easily with any of the databases that are common today. On the other hand, if the data fits into a database, then this statistic can be computed easily using a few lines of SQL.

The MalStone benchmarks use records of the following form:

   | Event ID | Timestamp | Site ID | Compromise Flag | Entity ID

Here “site” abstracts a web site and “entity” abstracts the possibly infected computer. We assume that each record is 100 bytes long.

In the MalStone A Benchmarks, for each site, the number of records for which an entity visited the site and subsequently becomes compromised is divided by the total number of records for which an entity visited the site. The MalStone B Benchmark is similar, but this ratio is computed for each week (a window is used from the beginning of the period to the end of the week of interest). MalStone A-10 uses 10 billion records so that in total there is 1 TB of data. Similarly, MalStone A-100 requires 100 billion records and MalStone A-1000 requires 1 trillion records. MalStone B-10, B-100 and B-1000 are defined in the same way.
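Here is a minimal in-memory sketch of the MalStone A statistic in Python, using the record layout above; a real run would express the same aggregation as a MapReduce job or a pair of Sphere UDFs rather than a single-machine loop.

    # Minimal in-memory sketch of MalStone A: for each site, the fraction of visits
    # whose entity was later marked as compromised. Records follow the layout above:
    # (event id, timestamp, site id, compromise flag, entity id).
    from collections import defaultdict

    def malstone_a(records):
        visits = defaultdict(int)        # site id -> total visits
        compromised = defaultdict(int)   # site id -> visits by later-compromised entities
        for _event, _ts, site, flag, _entity in records:
            visits[site] += 1
            if flag == 1:
                compromised[site] += 1
        return dict((site, compromised[site] / float(visits[site])) for site in visits)

    # Tiny illustrative input; real runs use 10 billion records spread over a cluster.
    sample = [
        (1, "2009-05-01 10:00", "site-a", 1, "entity-1"),
        (2, "2009-05-01 10:05", "site-a", 0, "entity-2"),
        (3, "2009-05-01 10:07", "site-b", 0, "entity-3"),
    ]
    print(malstone_a(sample))   # {'site-a': 0.5, 'site-b': 0.0}

MalStone B computes the same ratio per week, which in this sketch would amount to bucketing the records by the week of their timestamp before dividing.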

TeraSort Benchmark

One of the motivations for choosing 10 billion 100-byte records is that the TeraSort Benchmark (sometimes called the Terabyte Sort Benchmark) also uses 10 billion 100-byte records.

In 2008, Hadoop became the first open source program to hold the record for the TeraSort Benchmark. It was able to sort 1 TB of data using 910 nodes in 209 seconds, breaking the previous record of 297 seconds. Hadoop set a new record in 2009 by sorting 100 TB of data at 0.578 TB/minute using 3800 nodes. For some background about the TeraSort Benchmark, see the blog post by James Hamilton, Hadoop Wins Terasort.

Note that the TeraSort Benchmark is now deprecated and has been replaced by the Minute Sort Benchmark. Currently, 1 TB of data can be sorted in about a minute given the right software and sufficient hardware.

Generating Data for MalStone Using MalGen

We have developed a generator of synthetic data for MalStone called MalGen. MalGen is open source and available from code.google.com/p/malgen. Using MalGen, data can be generated with power law distributions, which is useful when modeling web sites (a few sites have a lot of visitors, but most sites have relatively few visitors).
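To illustrate what a power law distribution of visits looks like, the sketch below draws site IDs from a Zipf-like distribution so that a few sites receive most of the visits. It is only an illustration and is not the MalGen code; the parameters (exponent, compromise rate) are made up.

    # Illustrative generator of MalStone-style records with a power law (Zipf-like)
    # distribution over sites, so that a few sites receive most of the visits.
    # This is not the MalGen code; the parameters are made up for illustration.
    import random
    import bisect

    def cumulative_zipf(n_sites, exponent=1.2):
        weights = [1.0 / (rank ** exponent) for rank in range(1, n_sites + 1)]
        total, cdf = 0.0, []
        for w in weights:
            total += w
            cdf.append(total)
        return [c / total for c in cdf]

    def generate(n_records, n_sites=1000, n_entities=100000, compromise_rate=0.01, seed=0):
        rng = random.Random(seed)
        cdf = cumulative_zipf(n_sites)
        for event_id in range(n_records):
            site = bisect.bisect_left(cdf, rng.random())      # skewed choice of site
            entity = rng.randrange(n_entities)
            flag = 1 if rng.random() < compromise_rate else 0
            yield (event_id, "2009-05-25 00:00", "site-%d" % site, flag, "entity-%d" % entity)

    for record in generate(5):
        print(record)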

Using MalStone to Study Design Tradeoffs

Recently, we did several experimental studies comparing different implementations of MalStone on 10 billion 100-byte records. The experiments were done on 20 nodes of the Open Cloud Testbed. Each node was a Dell 1435 computer with 12 GB of memory, a 1 TB disk, two dual-core 2.0 GHz AMD Opteron 2212 processors, and a 1 Gb/s network interface card.

We compared three different implementations: 1) Hadoop HDFS with Hadoop’s implementation of MapReduce; 2) Hadoop HDFS using Streams and coding MalStone in Python; and 3) the Sector Distributed File System (SDFS) and coding the algorithm using Sphere User Defined Functions (UDFs).

Implementation           MalStone A    MalStone B
Hadoop MapReduce          454m 13s      840m 50s
Hadoop Streams/Python     87m 29s       142m 32s
Sector/Sphere UDFs        33m 40s       43m 44s

Please note that these timings are still preliminary and may be revised in the future as we better optimize the implementations.

If you have 1000 nodes and want to run a data intensive or analytic computation, then Hadoop is a very good choice. What these preliminary benchmarks indicate though is that you may want to compare the performance of Hadoop MapReduce and Hadoop Streams. In addition, you may also want to consider using Sector.

The image above is from Strolling everyday and is available via a Creative Commons license.

Disclaimer: I am involved in the development of Sector.


Open Source Analytics Reaches Main Street (and Some Other Trends in Analytics)

May 11, 2009

This is the first of three posts about systems, applications, services and architectures for building and deploying analytics. Sometimes this is called analytic infrastructure. This post is primarily directed at the analytic infrastructure needs of companies. Later posts will look at analytic infrastructure for the research community.

In this first post of the series, we discuss five important trends impacting analytic infrastructure.

Trend 1. Open source analytics has reached Main Street. R, which was first released in 1996, is now the most widely deployed open source system for statistical computing. A recent article in the New York Times estimated that over 250,000 individuals use R regularly. Dice News has created a video called “What’s Up with R” to inform job hunters using their services about R. In the language of Geoffrey A. Moore’s book Crossing the Chasm, R has reached “Main Street.”

Some companies still either ban the use of open source software or require an elaborate approval process before open source software can be used. Today, if a company does not allow the use of R, it puts the company at a competitive disadvantage.

Trend 2. The maturing of open, standards based architectures for analytics. Many of the common applications used today to build statistical models are stand-alone applications designed to be used by a single statistician. It is usually a challenge to deploy the model produced by the application into operational systems. Some applications can express statistical models as C++ or SQL, which makes deployment easier, but it can still be a challenge to transform the data into the format expected by the model.

The Predictive Model Markup Language (PMML) is an XML language for expressing statistical and data mining models that was developed to provide an application-independent and platform-independent mechanism for importing and exporting models. PMML has become the dominant standard for statistical and data mining models. Many applications now support PMML.

By using these applications, it is possible to build an open, modular standards based environment for analytics. With this type of open analytic environment, it is quicker and less labor-intensive to deploy new analytic models and to refresh currently deployed models.

Disclaimer: I’m one of the many people who have been involved in the development of the PMML standard.

Trend 3. The emergence of systems that simplify the analysis of large datasets. Analyzing large datasets is still very challenging, but with the introduction of Hadoop, there is now an open source system supporting MapReduce that scales to thousands of processors.

The significance of Hadoop and MapReduce is not only the scalability, but also the simplicity. Most programmers, even with no prior experience, can have their first Hadoop job running on a large cluster within a day. Most programmers find that it is much easier and much quicker to use MapReduce and some of its generalizations than it is to develop and implement an MPI job, even though MPI is currently the most common programming model for clusters.

Trend 4. Cloud-based data services. Over the next several years, cloud-based services will begin to impact analytics significantly. A later post in this series will show, for example, how simple it is to use R in a cloud. Although there are security, compliance and policy issues to work out before it becomes common to use clouds for analytics, I expect that these and related issues will all be worked out over the next several years.

Cloud-based services provide several advantages for analytics. Perhaps the most important is elastic capacity — if 25 processors are needed for one job for a single hour, then these can be used for just the single hour and no more. This ability of clouds to handle surge capacity is important for many groups that do analytics. With the appropriate surge capacity provided by clouds, modelers can be more productive, and this can be accomplished in many cases without requiring any capital expense. (Third party clouds provide computing capacity that is an operating and not a capital expense.)

Trend 5. The commoditization of data. Moore’s law applies not only to CPUs, but also to the chips that are used in all of the digital devices that produce data. The result has been that the cost to produce data has been falling for some time. Similarly, the cost to store data has also been falling for some time.

Indeed, more and more datasets are being offered for free. For example, end of day stock quotes from Yahoo, gene sequence data from NCBI, and public data sets hosted by Amazon, including the U.S. Census Bureau, are all available now for free.

The significance to analytics is that the cost to enrich data with third party data, which often produces better models, is falling. Over time, more and more of this data will be available in clouds, so that the effort to integrate this data into modeling will also decrease.


Sector – When You Really Need to Process 10 Billion Records

April 19, 2009

As is well known by now, Google demonstrated the power of a layered stack of cloud services that are designed for commodity computers that fill a data center. The stack consists of a storage service (the Google File System (GFS)), a compute service based upon MapReduce, and a table service (BigTable).

Although the Google stack of services is not directly available, the open source Hadoop system, which has a broadly similar architecture, is available.

The Google stack, consisting of GFS/MapReduce/Bigtable, and the Hadoop system, consisting of the Hadoop Distributed File System (HDFS) and Hadoop’s implementation of MapReduce, are examples of clouds designed for data intensive computing — these types of clouds provide computing capacity on demand, with capacity scaling all the way up to the size of a data center.

There are still many open questions about how best to design clouds for data intensive computing. During the past several years, I have been involved with a cloud designed for data intensive computing called Sector. The lead developer of Sector is Yunhong Gu of the University of Illinois at Chicago. Sector was developed independently of Hadoop and the Google cloud services and makes several different design choices (see the table below).

To quantify the impact of some of these choices, I have been involved with the development of a benchmark for data intensive computing called MalStone. I will talk more about MalStone in a future post, but briefly, MalStone is a stylized analytic computation that can be implemented simply using MapReduce, as well as with variants and generalizations of MapReduce. The open source MalStone code comes with a generator of synthetic records, and one benchmark (called MalStone B) generates 10 billion 100-byte records (similar to terasort).

MalStone B Benchmarks

System                         Time
Hadoop MapReduce               799 min
Hadoop Streaming with Python   143 min
Sector                          44 min

Tests were done using 20 nodes on the Open Cloud Testbed. Each node contained 500 million 100-byte records.

Comparing Sector and Hadoop

                    Hadoop                    Sector
Storage cloud       block-based file system   file-based
Programming model   MapReduce                 user defined functions and MapReduce
Protocol            TCP                       UDP
Security            NA                        HIPAA capable
Replication         at time of writing        periodically
Language            Java                      C++

I’ll be giving a talk on Sector at CloudSlam ’09 on Monday, April 20, 2009 at 4pm ET. CloudSlam is a virtual conference, so that it is easy to listen to any of the talks that interest you.