Open Source Analytics Reaches Main Street (and Some Other Trends in Analytics)

May 11, 2009

This is the first of three posts about systems, applications, services and architectures for building and deploying analytics. Sometimes this is called analytic infrastructure. This post is primarily directed at the analytic infrastructure needs of companies. Later posts will look at analytic infrastructure for the research community.

In this first post of the series, we discuss five important trends impacting analytic infrastructure.

Trend 1. Open source analytics has reached Main Street. R, which was first released in 1996, is now the most widely deployed open source system for statistical computing. A recent article in the New York Times estimated that over 250,000 individuals use R regularly. Dice News has created a video called “What’s Up with R” to introduce R to the job hunters using its service. In the language of Geoffrey A. Moore’s book Crossing the Chasm, R has reached “Main Street.”
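To make this concrete, here is a minimal sketch of the kind of work R is used for: fitting and scoring a simple statistical model. The data are simulated and the variable names are purely illustrative.

```r
# Minimal illustrative R session: simulate a small customer dataset,
# fit a logistic regression for churn, and score the records.
set.seed(1)
n <- 1000
customers <- data.frame(
  tenure = rpois(n, 24),                 # months as a customer
  spend  = round(rlnorm(n, 4, 0.5), 2)   # average monthly spend
)
customers$churned <- rbinom(n, 1,
  plogis(1 - 0.08 * customers$tenure - 0.002 * customers$spend))

# Fit the model and inspect the coefficients.
fit <- glm(churned ~ tenure + spend, data = customers, family = binomial)
summary(fit)

# Score the customers with the fitted model.
customers$score <- predict(fit, type = "response")
head(customers)
```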

Some companies still either ban the use of open source software or require an elaborate approval process before it can be used. Today, a company that does not allow the use of R puts itself at a competitive disadvantage.

Trend 2. The maturing of open, standards-based architectures for analytics. Many of the common applications used today to build statistical models are stand-alone applications designed to be used by a single statistician. It is usually a challenge to deploy the model produced by the application into operational systems. Some applications can express statistical models as C++ or SQL, which makes deployment easier, but it can still be a challenge to transform the data into the format expected by the model.

The Predictive Model Markup Language (PMML) is an XML language for expressing statistical and data mining models that was developed to provide an application-independent and platform-independent mechanism for importing and exporting models. PMML has become the dominant standard for statistical and data mining models. Many applications now support PMML.

By using these applications, it is possible to build an open, modular, standards-based environment for analytics. With this type of open analytic environment, it is quicker and less labor-intensive to deploy new analytic models and to refresh currently deployed models.
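As a concrete example, here is a minimal sketch of exporting a model as PMML from R, assuming the open source pmml and XML packages are installed; the model itself is only an illustration.

```r
# Minimal sketch: build a model in R and export it as PMML.
# Assumes the open source "pmml" and "XML" packages are installed.
library(pmml)
library(XML)

# Fit a simple linear model on a built-in dataset.
fit <- lm(Ozone ~ Wind + Temp, data = airquality)

# Convert the fitted model to a PMML (XML) document and save it.
# Any application that acts as a PMML consumer can then import the
# file and use the model for scoring.
saveXML(pmml(fit), file = "ozone_model.xml")
```

The same pattern applies to the other common model types the package supports; the PMML file, rather than the R workspace, is what moves into the operational system.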

Disclaimer: I’m one of the many people who have been involved in the development of the PMML standard.

Trend 3. The emergence of systems that simplify the analysis of large datasets. Analyzing large datasets is still very challenging, but with the introduction of Hadoop, there is now an open source system supporting MapReduce that scales to thousands of processors.

The significance of Hadoop and MapReduce is not only the scalability, but also the simplicity. Most programmers, even with no prior experience, can have their first Hadoop job running on a large cluster within a day. They also find it much easier and quicker to use MapReduce and some of its generalizations than to develop and implement an MPI job on a cluster, which is currently the most common programming model for clusters.
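As an illustration, here is a minimal sketch of a typical first MapReduce job, the usual word count, written as Hadoop Streaming scripts in R; the file names and the submission command in the comments are illustrative and will vary by installation.

```r
#!/usr/bin/env Rscript
# mapper.R -- word-count mapper for Hadoop Streaming (illustrative).
# Reads lines from standard input and emits "word <tab> 1" per word.
con <- file("stdin", open = "r")
while (length(line <- readLines(con, n = 1, warn = FALSE)) > 0) {
  words <- unlist(strsplit(line, "[[:space:]]+"))
  for (w in words[nchar(words) > 0]) cat(w, "\t1\n", sep = "")
}
close(con)

#!/usr/bin/env Rscript
# reducer.R -- sums the counts for each word. Hadoop sorts the mapper
# output by key, so identical words arrive on consecutive lines.
con <- file("stdin", open = "r")
current <- NULL
total <- 0
while (length(line <- readLines(con, n = 1, warn = FALSE)) > 0) {
  parts <- strsplit(line, "\t")[[1]]
  key   <- parts[1]
  count <- as.integer(parts[2])
  if (!is.null(current) && key != current) {
    cat(current, "\t", total, "\n", sep = "")
    total <- 0
  }
  current <- key
  total   <- total + count
}
if (!is.null(current)) cat(current, "\t", total, "\n", sep = "")
close(con)

# A job built from these two scripts is typically submitted with the
# streaming jar that ships with Hadoop, roughly:
#   hadoop jar hadoop-streaming.jar -input in -output out \
#     -mapper mapper.R -reducer reducer.R -file mapper.R -file reducer.R
```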

Trend 4. Cloud-based data services. Over the next several years, cloud-based services will begin to have a significant impact on analytics. A later post in this series will show, for example, how simple it is to use R in a cloud. Although there are security, compliance and policy issues to resolve before it becomes common to use clouds for analytics, I expect that these and related issues will be worked out over the next several years.

Cloud-based services provide several advantages for analytics. Perhaps the most important is elastic capacity — if 25 processors are needed for one job for a single hour, then these can be used for just the single hour and no more. This ability of clouds to handle surge capacity is important for many groups that do analytics. With the appropriate surge capacity provided by clouds, modelers can be more productive, and this can be accomplished in many cases without requiring any capital expense. (Third party clouds provide computing capacity that is an operating and not a capital expense.)

Trend 5. The commoditization of data. Moore’s law applies not only to CPUs, but also to the chips used in all of the digital devices that produce data. The result is that the cost to produce data has been falling for some time. Similarly, the cost to store data has also been falling for some time.

Indeed, more and more datasets are being offered for free. For example, end-of-day stock quotes from Yahoo, gene sequence data from NCBI, and the public data sets hosted by Amazon, including data from the U.S. Census Bureau, are all now available for free.

The significance to analytics is that the cost to enrich data with third party data, which often produces better models, is falling. Over time, more and more of this data will be available in clouds, so that the effort to integrate this data into modeling will also decrease.
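As a simple illustration of this kind of enrichment, here is a minimal R sketch; the file names and column names are hypothetical.

```r
# Minimal sketch of enriching internal data with free third-party data.
# File names and columns are hypothetical.
customers <- read.csv("customers.csv")      # internal customer records
census    <- read.csv("census_by_zip.csv")  # free third-party summaries

# Join the third-party attributes onto the customer records by ZIP code.
enriched <- merge(customers, census, by = "zip")

# The enriched dataset often supports a better model than the
# internal data alone.
fit <- glm(churned ~ tenure + median_income + median_age,
           data = enriched, family = binomial)
summary(fit)
```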


Learning About Cloud Analytics

April 6, 2009

Clouds are changing the way that analytic models get built and the way they get deployed.

Neither analytics nor clouds have standard definitions yet.

A definition I like: analytics is the analysis of data to support decisions. For example, analytics is used in marketing to develop statistical models for acquiring customers and predicting their future profitability. Analytics is used in risk management to identify fraud, to discover compromises in operations, and to reduce risk. Analytics is used in operations to improve business and operational processes.

Cloud computing also doesn’t yet have a standard definition. A good working definition: clouds are racks of commodity computers that provide on-demand resources and services over a network, usually the Internet, with the scale and reliability of a data center.

There are two different, but related, types of clouds: the first provides computing instances on demand, while the second provides computing capacity on demand. Both use the same underlying hardware, but the first is designed to scale out by providing additional computing instances, while the second is designed to support data- or compute-intensive applications by scaling capacity. Amazon’s EC2 and S3 services are examples of the first type of cloud. The Hadoop system is an example of the second.

Currently, as a platform for analytics, clouds offer several advantages:

  1. Building analytic models on very large datasets. “Hadoop style clouds” provide a very effective platform for developing analytic models on very large datasets.
  2. Scoring data using analytic models. Given an analytic model and some data (either a file of data or a stream of data), “Amazon style clouds” provide a simple and effective platform for scoring data. The Predictive Model Markup Language (PMML) has proved to be a very effective mechanism for moving a statistical or analytic model built with one analytic system into a cloud for scoring. The application that builds the model is sometimes called the PMML Producer, and the application that scores new data using the model is called the PMML Consumer. Using this terminology, “Amazon style clouds” make it easy to score data using PMML models built elsewhere; a minimal scoring sketch follows this list.
  3. Simplifying modeling environments. Finally, computing instances in a cloud can be built that incorporate all the analytic software required for building models, including preconfigured connections to all the data required for modeling. At least for small to medium size datasets, preconfiguring computing instances in this way can simplify the development of analytic models.
  4. Easy access to data. Clouds can also make it much easier to access data for modeling. Amazon has recently made available a variety of public datasets. For example, using Amazon’s EBS service, the U.S. Census data can be accessed immediately.
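To make the second item concrete, here is a minimal R sketch of batch scoring on a cloud instance. In a full PMML workflow the consumer would be a dedicated scoring engine reading the PMML document; the sketch below stands in for that step, and the file and column names are hypothetical.

```r
# Minimal sketch of batch scoring on a cloud instance (hypothetical names).
# A model built elsewhere was saved with saveRDS(fit, "churn_model.rds").
fit <- readRDS("churn_model.rds")

# Score a file of new records and write the scores back out.
new_records <- read.csv("new_customers.csv")
new_records$score <- predict(fit, newdata = new_records, type = "response")
write.csv(new_records, "scored_customers.csv", row.names = FALSE)
```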

I’ll be one of the lecturers in two upcoming courses that introduce clouds as well as cloud analytics.

The first course will be taught in Chicago on June 22, 2009, and the second in San Mateo on July 14, 2009. You can register for the Chicago course using this registration link and for the San Mateo course using this registration link.

This one-day course will give a quick introduction to cloud computing and analytics. It describes several different types of clouds and what is new about cloud computing, and discusses some of the advantages and disadvantages that clouds offer when building and deploying analytic models. It includes three case studies, a survey of vendors, and information about setting up your first cloud.

The course syllabus can be found here: www.opendatagroup.com/courses.htm.