Open Source Cloud Computing Software at SC 09

November 11, 2009

SC 09 is in Portland this coming week from November 14 to 20. The Laboratory for Advanced Computing will have a booth and be showcasing a number of open source cloud computing technologies including:

Sector. Sector/Sphere is a high performance storage and compute cloud that scales to wide area networks. With Sector’s simplified parallel programming framework, you can easily apply a user-defined function (UDF) to datasets that fill data centers. The current release is version 1.24, which includes support for streams and multiple master servers. Sector was the basis for an application that won the SC 08 Bandwidth Challenge. For more information, see sector.sourceforge.net.
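To make the UDF idea concrete, here is a minimal sketch of the general shape of such a function. It is not Sector’s actual API (see the documentation at sector.sourceforge.net for the real interface); it is only an illustration of a user-defined function that the middleware could apply in parallel to each chunk of a distributed dataset.

```cpp
// Illustrative only, not Sector's real API: a UDF takes a chunk of records
// and emits output records; the middleware runs it in parallel on the nodes
// that hold the data and gathers the results.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Record {
    std::string key;
    std::string value;
};

// Example UDF: keep only the records whose value contains "ERROR".
std::vector<Record> filter_errors(const std::vector<Record>& chunk) {
    std::vector<Record> out;
    for (std::size_t i = 0; i < chunk.size(); ++i) {
        if (chunk[i].value.find("ERROR") != std::string::npos)
            out.push_back(chunk[i]);
    }
    return out;
}

int main() {
    std::vector<Record> chunk;
    Record r1 = {"1", "OK"};
    Record r2 = {"2", "ERROR: bad block"};
    chunk.push_back(r1);
    chunk.push_back(r2);

    std::vector<Record> hits = filter_errors(chunk);
    for (std::size_t i = 0; i < hits.size(); ++i)
        std::cout << hits[i].key << " " << hits[i].value << "\n";
    return 0;
}
```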

As measured by the MalStone Benchmark, Sector was over twice as fast as Hadoop. Sector was one of six technologies selected by SC 09 as a disruptive technology.

How efficient is your cloud?

This snapshot is from the LAC Cloud Monitor monitoring a Sector computation on the Open Cloud Testbed.

Cistrack. The Chicago Utilities for Biological Science (CUBioS) is a set of integrated utilities for managing, processing, analyzing, and sharing biological data. CUBioS integrates databases with cloud computing to provide an infrastructure that scales to high throughput sequencing platforms. CUBioS uses the Sector/Sphere cloud to process images produced by high throughput sequencing platforms. Cistrack is a CUBioS instance for cis-regulatory data. For more information, see www.cistrack.org.

Canopy. With clouds, it is now possible to use a portal to create, monitor, and migrate virtual machines (VMs). With the open source Canopy application, it is also possible to create, monitor, and migrate virtual networks (VNs) containing multiple VMs connected by virtualized network infrastructure. Canopy provides a standardized library of functions to programmatically control switch VLAN assignments to create VNs at line speed. Canopy is an open source project with an alpha release planned for 2010.
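Canopy’s actual interface is not shown here; the following is only a hypothetical sketch of what “a library of functions to programmatically control switch VLAN assignments” might look like, with every name invented for illustration.

```cpp
// Hypothetical sketch only; none of these names correspond to Canopy's
// actual API. The idea: assign a set of switch ports to a VLAN so that the
// VMs attached to those ports form an isolated virtual network.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Port {
    std::string switch_name;
    int port_number;
};

// A real implementation would talk to the switch (for example over SNMP or
// the switch's CLI); this stub only reports what it would do.
bool assign_ports_to_vlan(const std::vector<Port>& ports, int vlan_id) {
    for (std::size_t i = 0; i < ports.size(); ++i) {
        std::cout << "would move " << ports[i].switch_name << " port "
                  << ports[i].port_number << " to VLAN " << vlan_id << "\n";
    }
    return true;
}

int main() {
    std::vector<Port> vn;
    Port p1 = {"rack1-sw1", 3};
    Port p2 = {"rack1-sw1", 4};
    vn.push_back(p1);
    vn.push_back(p2);
    return assign_ports_to_vlan(vn, 210) ? 0 : 1;
}
```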

UDT. UDT is a widely deployed application-level network transport protocol (with millions of deployed instances) designed for large data transfers over wide area high performance networks. For more information, see udt.sourceforge.net.
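For a sense of how UDT is used, here is a hedged sketch of a client modeled on the sample programs that ship with UDT; the address and port are placeholders, and the examples in the distribution should be consulted for the exact calls and error handling. UDT keeps the familiar socket style while running its own reliable, congestion-controlled protocol over UDP.

```cpp
// Sketch of a UDT client in the style of the samples shipped with UDT;
// consult the examples in the UDT distribution for exact usage and error
// handling. The address and port below are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <cstring>
#include <udt.h>   // from udt.sourceforge.net

int main() {
    UDT::startup();                                    // initialize the library
    UDTSOCKET client = UDT::socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in serv;
    std::memset(&serv, 0, sizeof(serv));
    serv.sin_family = AF_INET;
    serv.sin_port = htons(9000);                       // placeholder port
    inet_pton(AF_INET, "192.0.2.10", &serv.sin_addr);  // placeholder address

    if (UDT::ERROR != UDT::connect(client, (sockaddr*)&serv, sizeof(serv))) {
        const char* msg = "hello over UDT";
        UDT::send(client, msg, std::strlen(msg), 0);   // reliable send over UDP
    }

    UDT::close(client);
    UDT::cleanup();
    return 0;
}
```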

UDX. UDX is a version of UDT designed for wide area high performance research and corporate networks within a single security domain (UDX does not contain the code UDT uses for traversing firewalls). In recent tests, UDX achieved over 9.2 Gbps on a 10 Gbps wide area testbed. For more information, see udt.sourceforge.net.

LAC Cloud Monitor (LACCM). The LAC Cloud Monitor is a low overhead monitor for clouds that gathers system performance data for thousands of servers along multiple dimensions. It integrates with the Argus Monitoring System and Nagios for logging and alerting. LACCM is used to monitor the OCC Open Cloud Testbed. LACCM is open source.

LAC Cloud Scheduler (LACCS). The LAC Cloud Scheduler is a system for scheduling clouds for exclusive use by researchers. It is simple to use, scalable, and easy to deploy. Using LACCS, multiple groups can easily share a local or wide area cloud. LACCS is used for scheduling the Open Cloud Testbed. LACCS is open source.

This is a segment that aired on WTTW’s Chicago Matters about cloud computing that describes Sector/Sphere and the Open Cloud Testbed. You need to select the episode on the right hand side of the page dated November 10, 2009 and titled “Chicago Matters Beyond Burnham (9:40)”.


What is the “Unit” of Cloud Computing? Virtual Machines, Virtual Networks, and Virtual Data Centers

October 21, 2009

This is a post that summarizes some conversations that Stuart Bailey (from Infoblox) and I have been having.

There is a lot of market clutter today about cloud computing, and it can be challenging at times to identify the core technical issues. Sometimes it is helpful with an emerging technology to ask the question: “What is the ‘unit’ of deployment for the technology?” There are two important related questions: “How are the units named?” and “How do the units communicate?”

Sometimes the perspective matters.

Before we think about the answers for cloud computing, let’s warm up with some other examples.

  • For the web, the “unit” is the web page; web pages are identified by URLs (or URIs), and the units “communicate” using HTTP and related protocols. Of course, web pages aggregate into web sites.
  • In networking, the “unit” is the IP address (at Layer 3) or the MAC address (at Layer 2). DNS links domain names to IP addresses (allowing the two to be connected), while ARP (or NDP in IPv6) links IP addresses to MAC addresses.
  • In grid computing, the “unit” is a computer in a cluster (“a grid resource”), and computers communicate using the Message Passing Interface (MPI).

Depending upon your perspective and your role in the cloud computing ecosystem, you could argue that any of the following are the units:

Infrastructure Perspective

  • A virtual machine (VM).
  • A virtual network (VN), consisting of multiple VMs and all required information to network the VMs.
  • A virtual data center (VDC), consisting of one or more VNs.

Data/Content/Resource Perspective

  • An identifier specifying the name of a resource for a cloud storage service. Examples include an object managed by Amazon’s S3 service or a file managed by the Hadoop Distributed File System (HDFS).
  • An identifier specifying the name of a data resource for a cloud data service. Examples include a domain (database table) managed by Amazon’s SimpleDB service or a table (or row) managed by a BigTable-like service. (A few example names are sketched below.)
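As a concrete illustration of the naming question, these are the kinds of identifiers the two bullets above refer to; the exact URI syntax varies by service and client library, and the host and port below are invented.

```cpp
// Illustrative names only; the precise syntax depends on the service and the
// client library, and the host and port here are made up.
#include <iostream>

int main() {
    // An object in an S3-style storage service: bucket plus key.
    const char* s3_object = "s3://my-bucket/logs/2009-10-21.gz";

    // A file managed by HDFS, named relative to a (made up) namenode.
    const char* hdfs_file = "hdfs://namenode.example.org:9000/data/part-00000";

    // A domain and item in a SimpleDB-like data service, or a table and row
    // in a BigTable-like service.
    const char* table_row = "clickstream/row:user1234";

    std::cout << s3_object << "\n" << hdfs_file << "\n" << table_row << "\n";
    return 0;
}
```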

Once we take this point of view, a number of issues become much easier to discuss.

Intercloud Protocols. With clouds today, we are in the same situation that networking was in before Internet protocols enabled internetworking by supporting communication between networks. Until TCP and related Internet protocols were developed, there were no agreed-upon standards identifying the appropriate entities and layers, nor standards for passing names of entities between layers. We can ask: what are the appropriate mechanisms for naming VMs, VNs, and VDCs, as well as cloud storage and table services? How do we pass the names of objects between layers? And how do objects in the infrastructure stack communicate with objects in the data stack?

Virtual networks also count. Most of the cloud virtualization discussion today focuses on VMs and their migration, but it is just as essential to support VNs and their migration. If we look at how IP addresses arose, it is tempting to think about using names for VMs that include information about VNs. Depending upon the units we feel are important, we will need layers in the cloud for naming and linking VMs, VNs, and VDCs, not just VMs.

Removing the distinction between clouds and large data clouds. There are two fundamentally different approaches to cloud services for storage or data. In the first, there is an implicit assumption that the storage or data service must fit in a single VM (as with S3) or other device (such as a NAS). In the second, the whole point is to develop cloud storage and data services that span multiple VMs and devices (Google’s GFS/MapReduce/BigTable, Hadoop’s HDFS/MapReduce, the Sector Distributed File System with Sphere UDFs, etc.).

Services that link virtual infrastructure and data. In many discussions, no effort is made to connect the virtual infrastructure entities (VMs, VNs) with the data perspective. One simple approach is to provide a dynamic infrastructure service so that data/content/resource services can easily determine which VMs and VNs support their service (this is usually done with static configuration files today); a hypothetical sketch of such a service follows this paragraph. With this approach, large data cloud services are simply data/content/resource services that are engineered to scale to multiple VMs (and perhaps VNs).
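Purely as a thought experiment, a dynamic infrastructure service of the kind described above might expose a lookup interface along these lines; everything here is hypothetical and invented for illustration, and nothing corresponds to an existing API.

```cpp
// Hypothetical sketch only. A data/content/resource service asks the
// infrastructure layer which VMs (and which virtual networks) currently
// back it, instead of reading a static configuration file.
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct VmInfo {
    std::string vm_id;
    std::string vn_id;     // the virtual network the VM belongs to
    std::string address;
};

class InfrastructureDirectory {
public:
    void register_vm(const std::string& service, const VmInfo& vm) {
        vms_[service].push_back(vm);
    }
    std::vector<VmInfo> lookup(const std::string& service) const {
        std::map<std::string, std::vector<VmInfo> >::const_iterator it =
            vms_.find(service);
        return it == vms_.end() ? std::vector<VmInfo>() : it->second;
    }
private:
    std::map<std::string, std::vector<VmInfo> > vms_;
};

int main() {
    InfrastructureDirectory dir;
    VmInfo vm = {"vm-17", "vn-3", "10.0.3.17"};
    dir.register_vm("large-data-store", vm);

    std::vector<VmInfo> backing = dir.lookup("large-data-store");
    for (std::size_t i = 0; i < backing.size(); ++i)
        std::cout << backing[i].vm_id << " on " << backing[i].vn_id << "\n";
    return 0;
}
```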

Scaling services to data centers. One attribute that I think is core to certain types of clouds is the ability of a service to scale beyond a single machine or VM to an entire data center or VDC. Defining these types of scalable services is relatively easy to do from the perspective described here.

Acknowledgements: The photograph is from the Flickr photostream of bourget_82 and was posted with an Attribution-No Derivative Works 2.0 Generic Creative Commons License.


Building Your Own Large Data Clouds (Raywulf Clusters)

September 27, 2009

We recently added four new racks to the Open Cloud Testbed. The racks are designed to support cloud computing, both clouds that provide on-demand VMs and clouds that support data intensive computing. Since there is not a lot of information available describing how to put together these types of clouds, I thought I would share how we configured our racks.

These are two of the four racks that were added to the Open Cloud Testbed as part of the Phase 2 build out. Photograph by Michal Sabala.

These racks can be used as a basis for private clouds, hybrid clouds, or condo clouds.

There is a lot of information about building Beowulf clusters, which are designed for compute intensive computing. Here is one of the first tutorials and some more recent information.

In contrast, our racks are designed to support data intensive computing. We sometimes call these Raywulf clusters. Briefly, the goal is to make sure that there are enough spindles moving data in parallel and enough cores to process the data being moved. (Our data intensive middleware is called Sector, Graywulf is already taken, and there are not many words left that rhyme with Beo-. Other suggestions are welcome; please use the comments below.)

The racks cost about $85,000 (with standard discounts), consist of 32 nodes and 124 cores with 496 GB of RAM, 124 TB of disk, and 124 spindles, and consume about 10.3 kW of power (excluding the power required for cooling).

With 3x replication, there is about 40 TB of usable storage available, which means that the cost to provide balanced long term storage and compute power is about $2,000 per TB. So, for example, a single rack could be used as the basis for a private cloud that can manage and analyze approximately 40 TB of data. At the end of this note is some performance information about a single-rack system.
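For reference, the $2,000 per TB figure follows directly from the numbers above (raw capacity, 3x replication, and the rack price), with some rounding:

\[
\frac{124\ \text{TB}}{3\ \text{(replicas)}} \approx 41\ \text{TB usable},
\qquad
\frac{\$85{,}000}{41\ \text{TB}} \approx \$2{,}000\ \text{per usable TB}.
\]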

Each rack is a standard 42U computer rack and consists of a head node and 31 compute/storage nodes. We installed Debian GNU/Linux 5.0 as the operating system. Here is the configuration of the rack and of the compute/storage nodes.

In contrast, there are specialized configurations, such as the one designed by Backblaze, that provide 67 TB for $8,000. That is about half the storage for a tenth of the cost. The difference is that Raywulf clusters are designed for data intensive computing using middleware such as Hadoop and Sector/Sphere, not just storage.

Rack Configuration

  • 31 compute/storage nodes (see below)
  • 1 head node (see below)
  • 2 Force10 S50N switches, with two 10 Gbps uplinks so that the inter-rack bandwidth is 20 Gbps
  • 1 10GE module
  • 2 optics and stacking modules
  • 1 3Com Baseline 2250 switch to provide additional Cat5 ports for the IPMI management interfaces
  • cabling

Compute/Storage Node Configuration

  • Intel Xeon 5410 quad core CPU with 16 GB of RAM
  • SATA RAID controller
  • four 1 TB SATA hard drives in a RAID-0 configuration
  • 1 Gbps NIC
  • IPMI management

Benchmarks. We benchmarked these new racks using the Terasort benchmark with version 0.20.1 of Hadoop and version 1.24a of Sector/Sphere. Replication was turned off in both Hadoop and Sector. All the racks were located within one data center. It is clear from these tests that the new versions of Hadoop and Sector/Sphere are both faster than the previous versions.

Configuration        | Sector/Sphere | Hadoop
1 rack (32 nodes)    | 28m 25s       | 85m 49s
2 racks (64 nodes)   | 15m 20s       | 37m 0s
3 racks (96 nodes)   | 10m 19s       | 24m 14s
4 racks (128 nodes)  | 7m 56s        | 17m 45s

The Raywulf clusters were designed by Michal Sabala and Yunhong Gu of the National Center for Data Mining at the University of Illinois at Chicago.

We are working on putting together more information about how to build a Raywulf cluster.

Sector/Sphere and our Raywulf Clusters were selected as one of the Disruptive Technologies that will be highlighted at SC 09.

The photograph above of two racks from the Open Cloud Testbed was taken by Michal Sabala.


Revisiting the Case for Cloud Computing

September 6, 2009

The backlash to the hype over cloud computing is in full swing. I have given a number of talks on cloud computing over the past few months and have been struck by a few things.

First, at an industry event that I attended, although there were quite a few talks on cloud computing (it was one of the tracks), it seems that only a small number of speakers had actually participated in a cloud computing project, and I was one of only a handful that had actually completed several cloud computing projects. Many of the other speakers were simply summarizing second- and third-hand reports about cloud computing. In my opinion, something was lost in the translation.

Rack of servers

Second, I think some of the backlash has gone too far. At one breakfast meeting I attended, there was essentially no acknowledgement of the potential that clouds offer today, simply an emphasis on why “real companies” that have to worry about security could never use (public) clouds. Private and condo clouds were not mentioned as alternatives for companies whose security or compliance requirements preclude the use of today’s public clouds. Nor was there any mention of the ever-present trade-off between the potential for breaches when performing certain operations in public clouds and the productivity gains that such clouds can provide.

Because of this backlash, I think it is a good time to revisit the case for cloud computing. There are three basic reasons for deploying certain operations to clouds:

Cost savings. By employing virtualization and making use of the economies of scale available to cloud service providers, deploying certain operations to clouds can lead to improved efficiencies. This advantage seems to be well understood and is, for example, one of the factors driving the Federal CIO’s push for cloud computing. See, for example, the recent RFQ from the GSA for a cloud computing storefront.

Productivity. The elastic, virtualized services that clouds provide lead directly to productivity improvements. As a simple example, I was building an analytic model over the weekend to meet a deadline and the computation took over 4 hours. Since I was using a virtualized resource in a cloud, I was able to use the portal that controlled the various machine images to double the memory in my resource. Five minutes later, I had a new virtualized image, and the computation took less than 5 minutes. (By the way, this is typical of analytic computations: when the data is so large that a computation can no longer be done in memory and requires accessing the disk, the time required increases dramatically.) If, instead, I had gone through a standard procurement process to get a new machine with twice the memory, it would have been quite some time before the model was completed.

As another example, I work with a Fortune 500 client whose analytic models take weeks to build instead of days because the modeling environment has neither enough disk space for the entire team to hold all the temporary files and datasets required when building analytic models, nor computers powerful enough to compute the models fast enough to provide timely feedback to the modeler. This is unfortunately fairly typical of modeling environments in Fortune 500 companies (I’ll discuss this situation in a later post). A simple cloud would dramatically improve the situation.

New capabilities. Clouds also provide new capabilities. For example, large data clouds enable the processing and analysis of large datasets that is simply not possible with architectures that manage the data using databases. As a simple example, the type of analytic computation abstracted by the MalStone Benchmark is relatively straightforward using a Hadoop- or Sector-based cloud, even when there are 100 TB of data, but in practice is not feasible using a traditional database when the data is that size.

What’s new. Many of the ideas behind cloud computing are quite old. On the other hand, the combination of 1) the scale, 2) the utility-based pricing, and 3) the simplicity provided by cloud computing makes cloud computing a disruptive technology. If you are interested in understanding cloud computing from this point of view, you might find a recent talk I gave for an IEEE Conference on New Technologies, called My Other Computer is a Data Center, interesting. A written version of a portion of the talk recently appeared in the IEEE Bulletin on Data Engineering as On the Varieties of Clouds for Data Intensive Computing.

The image is by John Seb and is available from Flickr under a Creative Commons license.


Cloud Computing Testbeds

July 29, 2009

Cloud computing is still an immature field: there are lots of interesting research problems, no standards, few benchmarks, and very limited interoperability between different applications and services.

The network infrastructure for Phase 1 of the Open Cloud Testbed.

Currently, there are relatively few testbeds available to the research community for research in cloud computing and few resources available to developers for testing interoperability. I expect this will change over time, but below are the testbeds that I am aware of and a little bit about each of them. If you know of any others, please let me know so that I can keep the list current (at least for a while until cloud computing testbeds become more common).

Before discussing the testbeds per se, I want to highlight one of the lessons that I have learned while working with one of the testbeds — the Open Cloud Testbed (OCT).

Disclaimer: I am one of the technical leads for the OCT and one of the Directors of the Open Cloud Consortium.

Currently the OCT consists of 120 identical nodes and 480 cores. All were purchased and assembled at the same time by the same team. One thing that caught me by surprise is that there are enough small differences between the nodes that the results of some experimental studies can vary by 5%, 10%, 20%, or more, depending upon which nodes are used within the testbed. This is because even one or two nodes with slightly inferior performance can impact the overall end-to-end performance of an application that uses some of today’s common cloud middleware.

Amazon Cloud. Although not usually thought of as a testbed, Amazon’s EC2, S3, SQS, EBS, and related services are economical enough that they can serve as the basis for an on-demand testbed for many experimental studies. In addition, Amazon provides grants so that its cloud services can be used for teaching and research.

Open Cloud Testbed (OCT). The Open Cloud Testbed is a testbed managed by the Open Cloud Consortium. The testbed currently consists of 4 racks of servers, located in 4 data centers at Johns Hopkins University (Baltimore), StarLight (Chicago), the University of Illinois (Chicago), and the University of California (San Diego). Each rack has 32 nodes and 128 cores. Two Cisco 3750E switches connect the 32 nodes, which then connect to the outside by a 10 Gb/s uplink. In contrast to other cloud testbeds, the OCT utilizes wide area high performance networks, not the familiar commodity Internet: 10 Gb/s networks connect the various data centers, provided by Cisco’s CWave national testbed infrastructure and through a partnership with the National LambdaRail. Over the next few months the OCT will double in size to 8 racks and over 1,000 cores. A variety of cloud systems and services are installed and available for research on the OCT, including Hadoop, Sector/Sphere, CloudStore (KosmosFS), Eucalyptus, and Thrift. The OCT is designed to support systems-level, middleware, and application-level research in cloud computing, as well as the development of standards and interoperability frameworks. A technical report describing the OCT is available from arXiv (arXiv:0907.4810).

Open Cirrus(tm) Testbed. The Open Cirrus Testbed is a joint initiative sponsored by HP, Intel and Yahoo! in collaboration with the NSF, the University of Illinois at Urbana-Champaign (UIUC), Karlsruhe Institute of Technology, and the Infocomm Development Authority (IDA) of Singapore. Each of the six sites consists of at least 1000 cores and associated storage. The Open Cirrus Testbed is a federated system designed to support systems-level research in cloud computing. A technical report describing the testbed can be found here.

Eucalyptus Public Cloud. The Eucalyptus Public Cloud is a testbed for Eucalyptus applications. Eucalyptus shares the same APIs as Amazon’s web services. Currently, users are limited to no more than 4 virtual machines and experimental studies that require 6 hours or less.

Google-IBM-NSF CluE Resource. Another cloud computing testbed is the IBM-Google-NSF Cluster Exploratory (CluE) resource. The CluE resource appears to be a testbed for cloud computing applications in the sense that Hadoop applications can be run on it, but it does not support systems research and experiments involving cloud middleware and cloud services per se, as is possible with the OCT and the Open Cirrus Testbed. (At least this was the case the last time I checked. It may be different now. If it is possible to do systems-level research on the testbed, I would appreciate it if someone would let me know.) NSF has awarded nearly $5 million in grants to 14 universities through its Cluster Exploratory (CluE) program to support research on this testbed.


Large Data Clouds FAQ

July 16, 2009

This is a post that contains some questions and answers about large data clouds that I expect to update and expand from time to time.

What is large data? From the point of view of the infrastructure required to do analytics, data comes in three sizes:

  • Small data. Small data fits into the memory of a single machine. A good example of a small dataset is the dataset for the Netflix Prize. The Netflix Prize dataset consists of over 100 million movie ratings by 480 thousand randomly chosen, anonymous Netflix customers who rated over 17 thousand movie titles. This dataset (although challenging enough to keep anyone from winning the grand prize for over 2 years) is just 2 GB of data and fits into the memory of a laptop. I discuss some lessons in analytic strategy that you can learn from this contest in this post.

    Building the ATLAS Detector at CERN's Large Hadron Collider

  • Medium data. Medium data fits onto a single disk or disk array and can be managed by a database. It is becoming common today for companies to create data warehouses of 1 to 10 TB or larger.
  • Large data. Large data is so large that it is challenging to manage it in a database; instead, specialized systems are used. We’ll discuss some examples of these specialized systems below. Scientific experiments, such as the Large Hadron Collider (LHC), produce large datasets. Log files produced by Google, Yahoo, Microsoft, and similar companies are also examples of large datasets.

There have always been large datasets, but until recently, most large datasets were produced by the scientific and defense communities. Two things have changed. First, large datasets are now being produced by a third community: companies that provide internet services, such as search, online advertising, and social media. Second, the ability to analyze these datasets is critical for the advertising systems that produce the bulk of the revenue for these companies. This provides a metric (dollars of online revenue produced) by which to measure the effectiveness of analytic infrastructure and analytic models. Using this metric, companies such as Google settled upon analytic infrastructure that was quite different from the grid-based infrastructure that is generally used by the scientific community.

What is a large data cloud? There is no standard definition of a large data cloud, but a good working definition is that a large data cloud provides i) storage services and ii) compute services that are layered over the storage services, where both scale to a data center and have the reliability associated with a data center. You can find some background information on this page containing an overview of clouds.

What are some of the options for working with large data? There are several options, including:

  • The most mature large data cloud application is the open source Hadoop system, which consists of the Hadoop Distributed File System (HDFS) and Hadoop’s implementation of MapReduce. An important advantage of Hadoop is that it has a very robust community supporting it and there are a large number of Hadoop projects, including Pig, which provides simple database-like operations over data managed by HDFS.
  • Another option is Sector, which consists of the Sector Distributed File System (SDFS) and a compute service called Sphere that allows users to execute arbitrary User Defined Functions (UDFs) over the data managed by SDFS. Sector supports MapReduce as a special case: a user-defined Map UDF, followed by Shuffle and Sort UDFs provided by Sphere, followed by a user-defined Reduce UDF. Sector is an open source application written in C++. Unlike Hadoop, Sector includes security. There is a public Sector cloud for those interested in trying out Sector without downloading and installing it.
  • Greenplum uses a shared-nothing MPP (massively parallel processing) architecture based upon commodity hardware. The Greenplum architecture also integrates MapReduce-like functionality into its platform.
  • Aster has an MPP-based data warehousing appliance that supports MapReduce. They have an entry-level system that manages up to 1 TB of data and an enterprise-level system that is designed to support up to 1 PB of data.
How do I get started? The easiest way to get started is to download one of the applications and to work through some basic examples. The example that most people work through is word count. Another common example is the terasort example (sorting 10 billion 100-byte records, where the first 10 bytes of each record are the key that is sorted and the remaining 90 bytes are the payload). A simple analytic to try is MalStone, which I have described in another post.
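For readers who want to see what the word count computation is before distributing it, here is a minimal single-machine version in C++; the Hadoop and Sector/Sphere tutorials split the same computation across a cluster and merge the per-node counts.

```cpp
// A minimal single-machine word count: the computation that the cloud
// tutorials parallelize. Reads whitespace-delimited words from stdin and
// prints each distinct word with its count.
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, long> counts;
    std::string word;
    while (std::cin >> word)
        ++counts[word];
    for (std::map<std::string, long>::const_iterator it = counts.begin();
         it != counts.end(); ++it)
        std::cout << it->first << "\t" << it->second << "\n";
    return 0;
}
```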

What are some of the issues that arise with large data cloud applications? The first issue is mapping your problem to the MapReduce or generalized MapReduce (like Sphere’s UDFs) framework. Although this type of data parallel framework may seem quite specialized at first, it is surprising how many problems can be mapped to it with a bit of effort.

The second issue is that tuning Hadoop clusters can be challenging and time consuming. This is not surprising, considering the power Hadoop provides to tackle very large problems.

The third issue is that with medium (100-node) and large (1,000-node) clusters, even a few underperforming nodes can impact the overall performance. There can also be problems with switches that impact performance in subtle ways. Dealing with these types of hardware issues can also be time consuming. It is sometimes helpful to run a known benchmark such as terasort or MalStone to distinguish hardware issues from programming issues.

What is the significance of large data clouds? Just a short time ago, it required specialized proprietary software to analyze 100 TB or more of data. Today, a competent team should be able to do this relatively straightforwardly with a 100-node large data cloud powered by Hadoop, Sector, or similar software.

Getting involved. I just set up a Google Group for large data clouds:
groups.google.com/group/large-data-clouds. Please use this group to discuss issues related to large data clouds, including lessons learned, questions, announcements, etc. (no advertising please). In particular, if you have software you would like added to the list, please comment below or send a note to the large data clouds Google Group.


Test Drive the Sector Public Cloud

June 23, 2009

Sector is an open source cloud written in C++ for storing, sharing and processing large data sets. Sector is broadly similar to the Google File System and the Hadoop Distributed File System, except that it is designed to utilize wide area high performance networks.

Sphere is middleware that is designed to process data managed by Sector. Sphere implements a framework for distributed computing that allows any User Defined Function (UDF) to be applied to a Sector dataset.

One way to think about this is as a generalized MapReduce. With MapReduce, users work with key-value pairs and define a Map function and a Reduce function, and the MapReduce application creates a workflow consisting of a Map, Shuffle, Sort, and Reduce. With Sector, users can create a workflow consisting of any sequence of User Defined Functions (UDFs) and apply these to any datasets managed by Sector. In particular, Sphere has predefined Shuffle and Sort UDFs that can be applied to datasets consisting of key-value pairs, so that MapReduce applications can be implemented once a user defines a Map and a Reduce UDF.
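The following single-machine sketch is only meant to make this workflow concrete; it is not Sphere’s API. A “map” UDF emits key-value pairs, a shuffle-and-sort step groups the pairs by key, and a “reduce” UDF folds each group; Sphere runs the analogous UDFs across the nodes that hold a Sector dataset.

```cpp
// Single-machine illustration of the Map -> Shuffle -> Sort -> Reduce
// workflow described above (not Sphere's actual API).
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

typedef std::pair<std::string, int> KV;

// "Map" UDF: emit (record, 1) for each input record.
std::vector<KV> map_udf(const std::string& record) {
    std::vector<KV> out;
    out.push_back(KV(record, 1));
    return out;
}

// Shuffle + sort: group values by key (std::map keeps the keys sorted).
std::map<std::string, std::vector<int> > shuffle_sort(const std::vector<KV>& kvs) {
    std::map<std::string, std::vector<int> > groups;
    for (std::size_t i = 0; i < kvs.size(); ++i)
        groups[kvs[i].first].push_back(kvs[i].second);
    return groups;
}

// "Reduce" UDF: fold the values for one key.
int reduce_udf(const std::vector<int>& values) {
    int sum = 0;
    for (std::size_t i = 0; i < values.size(); ++i)
        sum += values[i];
    return sum;
}

int main() {
    const char* records[] = {"gaagc", "ttacg", "gaagc", "gaagc"};
    std::vector<KV> mapped;
    for (std::size_t i = 0; i < 4; ++i) {
        std::vector<KV> kvs = map_udf(records[i]);
        mapped.insert(mapped.end(), kvs.begin(), kvs.end());
    }
    std::map<std::string, std::vector<int> > groups = shuffle_sort(mapped);
    for (std::map<std::string, std::vector<int> >::const_iterator it = groups.begin();
         it != groups.end(); ++it)
        std::cout << it->first << "\t" << reduce_udf(it->second) << "\n";
    return 0;
}
```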

Sector also implements security and we are currently using it to bring up a HIPAA-compliant private cloud.

Since Sector/Sphere is written in C++, it is straightforward to support C++ based data access tools and programming APIs.

If you have access to a high speed research network (for example, if your network can reach StarLight, the National LambdaRail, ESnet, or Internet2), then you can try out the Sector Public Cloud.

You can reach the Sector Public Cloud from the Sector home page sector.sourceforge.net.

There is a technical report on the design of Sector on arXiv: arXiv:0809.1181v2.

There is some information on the performance of Sector/Sphere in my post on the MalStone Benchmark, a benchmark for clouds that support data intensive computing.