Chameleon: A Large-Scale, Advanced Experimental Research Environment for Next Generation Clouds

August 20, 2014

Chicago, Illinois, August 20, 2014. At the 2014 SIGCOMM conference today, the National Science Foundation (NSF) and a consortium of researchers announced a new testbed for advanced cloud research. Cloud services have become integral to all major 21st-century economic activities, yet cloud services and technologies could be significantly more powerful than they are today. A persistent barrier to further advancement has been the lack of a large-scale experimental platform for cloud research.
With funding from the NSF, the Chameleon project will create a breakthrough platform for cloud and Internet innovation. Chameleon will allow computer scientists nationwide to explore transformative concepts in deeply programmable cloud services, design, and core technologies. A special feature of Chameleon is an exceptionally close integration of clouds and networks, which substantially enhances the capabilities of both.
The project is led by the Computation Institute at the University of Chicago, with partners at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the Network-Based Computing Laboratory at The Ohio State University, and the UTSA Cloud and BigData Laboratory at the University of Texas at San Antonio, together forming a highly qualified and experienced team. The team also includes members from Rackspace, as well as from the NSF-supported FutureGrid project and the Global Environment for Network Innovations (GENI) community, the latter two forerunners of the NSFCloud program that funds this project.
The Chameleon testbed will be deployed at the University of Chicago and the Texas Advanced Computing Center (TACC) and will consist of 650 multi-core cloud nodes and 5 PB of total disk space, leveraging 100 Gbps connections between the sites, including a federated cloud between TACC and Rackspace's DFW datacenter for further extension. While a large part of the testbed will consist of homogeneous hardware to support large-scale experiments, a portion of it will support heterogeneous units, allowing experimentation with high-memory, large-disk, low-power, GPU, and co-processor units. The project will also leverage existing FutureGrid hardware at the University of Chicago and TACC in its first year to provide a transition period for the existing FutureGrid community of experimental users.
“Like its namesake, the Chameleon testbed will be able to adapt itself to a wide range of experimental needs, from support for bare metal reconfiguration to popular cloud resources,” said Kate Keahey, a scientist at the University of Chicago and Argonne National Laboratory and principal investigator for Chameleon.
To support a broad range of experiments with requirements spanning from a high degree of control to ease of use, the project will provide a graduated configuration system allowing full user configurability of the stack, from provisioning of bare metal and network interconnects to delivery of fully functioning cloud environments. In addition, to facilitate experiments, Chameleon will support a set of services designed to meet researcher needs, including support for experiment management, reproducibility, and repositories of trace and workload data from production clouds.
To facilitate the latter, the project organizers will form partnerships with commercial and academic clouds such as Rackspace, CERN, and the Open Science Data Cloud (OSDC). They will also partner with other testbeds, notably GENI and INRIA's Grid'5000. For further information, see www.chameleoncloud.org.


About the Computation Institute at the University of Chicago
The Computation Institute (CI) was established in 2000 as a joint initiative between the University of Chicago and Argonne National Laboratory to advance science through innovative computational approaches. Scholarship in the sciences, arts, and medicine depends increasingly on collection and analysis of large quantities of data and detailed numerical simulations of complex phenomena. Progress is gated by researchers’ ability to construct complex software systems, to harness large-scale computing, and to federate distributed resources. The CI is both an intellectual nexus and resource center for those building and applying such computational platforms for science. (www.ci.uchicago.edu)
About the Texas Advanced Computing Center (TACC) at the University of Texas at Austin
TACC operates many of the most powerful and capable high performance computing systems in the world, used by thousands of scientists and engineers each year to perform research across all domains of science as well as the humanities, digital media, and the arts. Stampede, with a peak performance of nearly 10 petaflops, has been operational and available to the national open science community since January 7, 2013. It is one of the world's most comprehensive systems for the open science community, deployed as part of the National Science Foundation's (NSF) XSEDE (formerly NSF TeraGrid) program. (www.tacc.utexas.edu)
About the International Center for Advanced Internet Research at Northwestern University
The International Center for Advanced Internet Research (iCAIR) at Northwestern University was established to accelerate leading-edge innovation in advanced digital global communications and networking technologies, specifically in partnership with international network research communities. iCAIR undertakes international R&D projects in four key areas: advanced applications driven by next-generation communication services and technologies, including applications required by data-intensive science; advanced network middleware; advanced infrastructure; and public policy initiatives. iCAIR also develops and manages several large-scale networking facilities, laboratories, and experimental network research facilities, as well as major national and international testbeds, in partnership with other organizations. (www.icair.org)
About the Network-Based Computing Laboratory at The Ohio State University
The Network-Based Computing Laboratory proposes new designs for high-performance network-based computing systems by taking advantage of modern networking technologies and computing systems; develops better middleware, APIs, and programming environments so that modern network-based applications can be developed and implemented in a scalable, high-performance manner; performs this research by integrating resources across systems, networking, and applications; and focuses on experimental computer science research. (nowlab.cse.ohio-state.edu)
About the UTSA Cloud and BigData Laboratory at University of Texas at San Antonio
The UTSA Cloud and BigData Laboratory was established to support cloud computing and big data research and development. Developed in large part through industry collaboration, the laboratory helps the international business community improve its computing platforms through open-source hardware and cloud and big data technologies, and also trains a pipeline of students for the workforce. The laboratory is devoted to research on new technologies and innovations in areas of computing such as Open Compute, OpenStack, and Software-Defined Networking (SDN). (www.utsa.edu)
