Testbed Facilities

This page provides a list of research testbed facilities that can be used by research communities and others interested in early access to advanced technology capabilities and services. These research testbeds are designed to be persistent for many years.

StarLight International/National/Regional/Local R&E Testbed Networks

  • AMIS Testbed: Advanced Measurements Instrument and Services for Programmable Network Measurement of Data-intensive Flows
  • AutoGOLE/NSI/MEICAN: Dynamic Global L2 Provisioning
  • Chameleon Cloud Testbed
  • Ciena OPn Research On Demand Network Testbed: north and east, 100 Gbps between StarWave and the Ciena Research Lab in Ottawa via CANARIE, and between StarLight and Ciena in Hanover, MD, via AL2S, with an extension to MAN LAN
  • Cisco ICN Testbed (Content-Centric Networking)
  • Compute Canada Testbed
  • DTN-as-a-Service Testbed
  • Elastic Data Transfer Enabled by Programmable Data Planes Testbed
  • ESnet 100 Gbps National Testbed
  • ExoGENI: ESnet–StarLight–ANA-n*100G–SURFnet/NetherLight 100/40 Gbps Testbed
  • FABRIC: National U.S. Computer Science Testbed (Forthcoming, 2020)
  • GEMnet: NTT Global Enhanced Multifunctional Network, Japan
  • GENI Multi-Services Exchange SDX Network
  • GENI Network
  • HPDMnet: High-Performance Digital Media Network
  • Illinois Express Quantum Communications and Networking Testbed (Forthcoming, 2020, in development)
  • International BigData Express Testbed
  • International Data Mover Challenge 100 Gbps Testbed
  • International Global Environment for Network Innovations (iGENI) SDN/OpenFlow Testbed *
  • International P4 Testbed
  • International PetaTrans Testbed: Petascale Science 100 Gbps Testbed
  • IRNC SDX-OSG / GRP Federation
  • JGN-X: Japan Gigabit Network – eXtreme testbed, Japan
  • KREONET SD-WAN Testbed (KREONET-S)
  • LHCONE P2P Prototype Service International Dynamic Multi-Domain L2 Service for HEP
  • LSST Prototype Service Testbed
  • Multi-Mechanisms Adaptation for The Future Internet (MAKI), US-EU
  • MDTMFTP High-Performance Transport Tool Testbed
  • NASA Goddard Space Center High-End Computing Network (HECN) Testbed *
  • Naval Research Lab Network
  • OMNInet Optical Metro Network Initiative Testbed (Metro Optical-Fiber Fabric and Co-Lo Facilities for L0/L1/L2 Experimentation)
  • Open Science Data Cloud Testbed (OCT): operated by the Open Cloud Consortium (OCC), OCT is a national-scale 100 Gbps testbed for data-intensive science cloud computing – addressing large data streams, unlike other cloud architectures that are oriented toward millions of small data streams
  • Open Storage Network
  • Pacific Research Platform (PRP)
  • Prototype 400 Gbps WAN Service Testbed
  • SDX Interoperability Prototype Service
  • SEAIP DTN-as-a-Service Testbed
  • SENSE Testbed (Intelligent Network Services for Science Workflows)
  • VTS Testbed Isolated Overlay Topologies

Global Research Platform (GRP)

The Global Research Platform (GRP) is an international scientific collaboration established to create innovative advanced services that integrate worldwide resources at speeds of gigabits and terabits per second, especially for data-intensive science research.

GRP focuses on design, implementation, and operation strategies for next-generation distributed services and infrastructure to facilitate high-performance data gathering, analytics, transport, computing, and storage among multiple science sites at 100 Gbps or higher (400 Gbps-1.2 Tbps).

The GRP functions as a prototype services platform and a testbed for experimental research. GRP community partners in North America (e.g., interconnected with the US National Research Platform, NRP), Asia (e.g., interconnected with the Asia Pacific Research Platform, APRP), Europe, and South America are researching and developing new services architecture and technology to support optimal data-intensive scientific workflows.

The GRP is a worldwide Science DMZ, a distributed environment for data-intensive research. The GRP leverages optical circuits and open exchange facilities provided by its collaborators, including the 100 Gbps Global Research Platform Network (GRPnet), which provides services between Pacific Wave at the Pacific Northwest GigaPoP in Seattle, with extensions to Sunnyvale and Los Angeles, California, and the StarLight facility.

StarLight International Software Defined Exchange

With initial funding from the National Science Foundation and with its global research partners, iCAIR designed, developed, implemented, and is now operating an International Software Defined Exchange (iSDX) at the StarLight International/National Communications Exchange Facility, which integrates multiple services, many specifically designed for large scale global data-intensive science. The StarLight SDX is based on a flexible, scalable, programmable platform.

The SDX initiative provides production services, a series of experimental research projects, an experimental research testbed, and a means of integrating multiple experimental research testbeds.

Services incorporate those based on 100 and 400 Gbps Data Transfer Nodes (DTNs) for Wide Area Networks (WANs), including trans-oceanic WANs. Currently, a key focus is scaling to 400 Gbps, 800 Gbps, and 1.2 Tbps WAN and LAN E2E technologies that provide high-performance transport services for petascale science, controlled using Software Defined Networking (SDN) and data plane programming techniques, e.g., the P4 network programming language. Another research area is providing interoperability between services among open exchange points.

StarWave

With multiple national and international partners, including the StarLight Consortium and the Metropolitan Research and Education Network (MREN), iCAIR established an initiative, StarWave, to explore the potential for supporting data-intensive science with high-performance optical switching.

Funded by The National Science Foundation’s Advanced Research Infrastructure (ARI) Program, StarWave is a multi-100 Gbps facility supporting services within the StarLight International/National Communications Exchange Facility. StarWave provides support for large-scale E2E WAN flows, including 100 Gbps and multi-100 Gbps flows, and for programmability of those flows.

International PetaTrans and NVMe-over-Fabrics-as-a-Microservice Testbeds

With its research partners, iCAIR established the International PetaTrans/NVMe-Over-Fabrics As A Microservice testbed as a platform for creating services based on an integrated SDN/SDX/DTN design using 100 Gbps DTNs for WANs, including transoceanic WANs, to provide high-performance transport services for petascale science, controlled using SDN techniques. These SDN-enabled DTN services are specifically designed to optimize capabilities that support large-scale, high-capacity, high-performance, reliable, high-quality, sustained individual data streams for science research, including thousands of miles over multi-domain networks.

One key service supports high-performance transfers of huge files (e.g., petabytes) and extremely large collections of small files (e.g., many millions). The integration of these services with DTN-based services using SDN has been designed to ensure E2E high performance for those streams and to support highly reliable services for long-duration data flows. Achieving this requires addressing and optimizing multiple components in an E2E path: processing pipelines, high-performance protocols, kernel tuning, OS bypass, path architecture, buffers, memory used for transport, capabilities for slicing resources across the exchange to segment different science communities while using a common infrastructure, and many other individual components.

As part of this initiative, iCAIR established the PetaTrans with NVMe-over-Fabrics as a Microservice Testbed to support research projects that improve large-scale WAN microservices for streaming and transferring large data among high-performance Data Transfer Nodes (DTNs). Building on earlier initiatives, this initiative is designing, implementing, and experimenting with NVMe-over-Fabrics on 400 Gbps DTNs over large-scale, long-distance networks, with direct NVMe-to-NVMe service over RoCE and TCP fabrics using SmartNICs. The NVMe-over-Fabrics microservice connects remote NVMe devices without userspace applications, thereby reducing overhead in high-performance transfer, and offloads the NVMe-over-Fabrics initiator software stack to SmartNICs. A primary advantage of the NVMe-over-Fabrics microservice is that it can be deployed on multiple DTNs as a container with lower overhead.
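
The attach step that NVMe-over-Fabrics relies on can be illustrated with a short sketch. The following Python wrapper around the standard nvme-cli tool shows, in simplified form, how a DTN might attach a remote NVMe namespace over TCP; the NQN, address, and port are placeholders, root privileges and the kernel nvme-tcp module are assumed, and this does not represent the project's SmartNIC offload implementation.

    import subprocess

    # Placeholder target parameters; in practice these come from NVMe discovery.
    TARGET_NQN = "nqn.2014-08.org.example:nvme:dtn-target"   # hypothetical NQN
    TARGET_ADDR = "192.0.2.10"                               # documentation address
    TARGET_PORT = "4420"                                     # conventional NVMe/TCP port

    def connect_remote_nvme():
        """Attach a remote NVMe namespace over TCP using nvme-cli (requires root)."""
        subprocess.run(
            ["nvme", "connect",
             "--transport=tcp",
             f"--traddr={TARGET_ADDR}",
             f"--trsvcid={TARGET_PORT}",
             f"--nqn={TARGET_NQN}"],
            check=True,
        )
        # The remote namespace now appears as a local block device (e.g., /dev/nvme1n1).
        subprocess.run(["nvme", "list"], check=True)

    def disconnect_remote_nvme():
        """Detach the remote namespace."""
        subprocess.run(["nvme", "disconnect", f"--nqn={TARGET_NQN}"], check=True)

    if __name__ == "__main__":
        connect_remote_nvme()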

400 Gbps WAN Services Testbed

Data production among science research collaborations continues to accelerate, a long-term trend partly propelled by large-scale science instrumentation, including high-luminosity research instruments. Consequently, the networking community is preparing for service paths beyond 100 Gbps, including 400 Gbps, 800 Gbps, and 1 Tbps WAN and LAN services. In this progression, 400 Gbps E2E WAN services are a key building block. Consequently, the requirements and implications of 400 Gbps WAN services are being explored at scale by iCAIR and its research partners, including 400 Gbps E2E on customized testbeds over tens of thousands of miles.

1.2 Tbps WAN and LAN Services Testbed

With its research partners and several science communities, iCAIR initiated a reference model architecture for 1.2 Tbps services, specifically for data-intensive science. This initiative has established a testbed to investigate capabilities for WAN services beyond 400 Gbps, including those approaching 800 Gbps, 1 Tbps, and multi-Tbps WAN and LAN services.

The requirements and implications of WAN and LAN services beyond 400 Gbps are being explored along with services and technologies that support 1.2 Tbps WAN services and multi-Tbps services E2E. These new services and techniques are being demonstrated using the SCinet testbed created for the annual IEEE/ACM International Conference on High-Performance Computing, Networking, Storage, and Analytics.

Data Transfer Node (DTN)-as-a-Service (DTNaaS) Testbed

For several years, iCAIR has developed high-performance DTN-as-a-Service initiatives to prototype network analytic services for up to 400 Gbps WAN end-to-end infrastructure. DTN-as-a-Service focuses on transporting large data across WANs and LANs within cloud environments, including using orchestrators such as Kubernetes to improve the data transport performance over high-performance networks.

These experiments are being conducted on a 400 Gbps testbed. The experiments demonstrate the implementation of cloud-native services for data transport within and among Kubernetes clouds through the DTN-as-a-Service framework, which sets up, optimizes, and monitors the underlying system and network. DTN-as-a-Service streamlines big data movement workflows by providing a Jupyter controller, a popular data science tool, to identify, examine, and tune the underlying DTNs for high-performance data movement in Kubernetes, and by enabling data transport over long-distance WANs using different networking fabrics.
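
As an illustration of the kind of host preparation the DTN-as-a-Service framework automates, the short Python sketch below applies TCP tuning settings commonly used on DTNs. It is a minimal example only: the sysctl keys are standard Linux parameters, but the values are illustrative rather than DTNaaS defaults, and applying them requires root.

    import subprocess

    # Illustrative values only; appropriate settings depend on RTT, NIC, and memory.
    TUNING = {
        "net.core.rmem_max": "2147483647",
        "net.core.wmem_max": "2147483647",
        "net.ipv4.tcp_rmem": "4096 87380 2147483647",
        "net.ipv4.tcp_wmem": "4096 65536 2147483647",
        "net.ipv4.tcp_congestion_control": "bbr",
        "net.core.default_qdisc": "fq",
        "net.ipv4.tcp_mtu_probing": "1",
    }

    def show_current():
        """Print the current values of the tuning keys (no privileges needed)."""
        for key in TUNING:
            subprocess.run(["sysctl", key], check=True)

    def apply_tuning():
        """Apply DTN-style TCP tuning (requires root)."""
        for key, value in TUNING.items():
            subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)

    if __name__ == "__main__":
        show_current()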

DTNaaS was implemented as an XNET resource for the annual IEEE/ACM International Conference on High-Performance Computing, Networking, Storage, and Analytics for several years.

National Research Platform

The National Research Platform (NRP) initiative was established to provide academic researchers with a simple data-sharing architecture supporting end-to-end 10-to-100 Gbps performance to enable virtual co-location of large amounts of data with computing. Achieving E2E (high-bandwidth, disk-to-disk) service performance is complex and challenging because the networks interconnect multiple sites and traverse multiple network management domains: campus, regional, national, and international.

The NRP initiative addresses these issues through innovative techniques for scaling end-to-end data sharing. The NRP also provides segmented resources for large-scale testbed experimentation in isolated environments.

AutoGOLE/NSI/MEICAN Testbed

iCAIR is a founding member of the AutoGOLE worldwide collaboration of Open eXchange Points and research and education networks developing automated end-to-end network services, e.g., supporting connection requests through the Network Service Interface Connection Service (NSI-CS), including dynamic multi-domain L2 provisioning.

iCAIR also participates in a related initiative, the Software-defined network for End-to-end Networked Science at Exascale (SENSE) system, which provides the mechanisms to integrate resources beyond the network, such as compute, storage, and Data Transfer Nodes (DTNs) into this automated provisioning environment.

NSI has been augmented by the MEICAN tools created by RNP, the Brazilian national R&E network. These initiatives use the global AutoGOLE experimental testbed, which has sites in North and South America, Asia Pacific, and Europe.
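
For readers unfamiliar with NSI-CS, the sketch below outlines the reserve / reserveCommit / provision sequence that a requester agent walks through to obtain a dynamic multi-domain L2 circuit. It is a conceptual Python sketch only: NSI-CS is a SOAP-based protocol, and the NsiClient class, aggregator URL, and STP names here are hypothetical placeholders rather than a real client library.

    from dataclasses import dataclass

    @dataclass
    class ConnectionRequest:
        source_stp: str        # Service Termination Point at the source exchange
        dest_stp: str          # STP at the destination exchange
        capacity_mbps: int
        vlan: int

    class NsiClient:
        """Hypothetical stand-in for an NSI-CS requester agent."""

        def __init__(self, aggregator_url: str):
            self.aggregator_url = aggregator_url

        def reserve(self, req: ConnectionRequest) -> str:
            # A real agent sends an NSI 'reserve' message and receives a connectionId.
            print(f"reserve {req} via {self.aggregator_url}")
            return "connection-id-placeholder"

        def commit_and_provision(self, connection_id: str) -> None:
            # Followed by 'reserveCommit' and 'provision' in the NSI-CS state machine.
            print(f"reserveCommit + provision {connection_id}")

    if __name__ == "__main__":
        client = NsiClient("https://aggregator.example.org/nsi")
        cid = client.reserve(ConnectionRequest(
            source_stp="urn:ogf:network:example-a:stp1",
            dest_stp="urn:ogf:network:example-b:stp2",
            capacity_mbps=10000, vlan=3300))
        client.commit_and_provision(cid)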

Intelligent Network Services for Science Workflows Testbed

With multiple international partners, iCAIR is participating in developing the Intelligent Network Services for Science Workflows (SENSE) Testbed and in research experiments being conducted on that testbed.

SENSE is a multi-resource, multi-domain orchestration system providing integrated network and end-system services. (SENSE is closely integrated with AutoGOLE.) Key research topics include technologies, methods, and a dynamic Layer 2 and Layer 3 network services system to meet the challenges and address the requirements of the largest data-intensive science programs and workflows.

SENSE services are designed to support multiple petabyte transactions globally, with real-time monitoring/troubleshooting, using a persistent testbed spanning the US, Europe, Asia Pacific, and Latin American regions. A significant area of investigation is the potential for integrating SENSE with the Rucio/File Transfer Service (FTS)/XRootD data management and movement system. This is the key infrastructure used by LHC experiments and more than 30 other programs in the Open Science Grid.

Recent features include an ability for science workflows to define priority levels for data movement operations through a Data Movement Manager (DMM) that translates Rucio-generated priorities into SENSE requests and provisioning operations.
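
A minimal sketch of the priority-translation idea is shown below: a function maps a Rucio-style transfer priority onto a bandwidth/QoS profile of the kind a Data Movement Manager might request from SENSE. The priority bands, profile fields, and values are assumptions for illustration, not the actual DMM or SENSE request format.

    # Hypothetical mapping from a Rucio transfer priority (assumed 1-5) to a
    # SENSE-style service request; all fields and thresholds are illustrative.

    def rucio_priority_to_sense(priority: int) -> dict:
        """Return an illustrative SENSE-style request for a given Rucio priority."""
        if priority >= 4:
            profile = {"qos": "guaranteed", "bandwidth_gbps": 100}
        elif priority >= 2:
            profile = {"qos": "soft-guaranteed", "bandwidth_gbps": 40}
        else:
            profile = {"qos": "best-effort", "bandwidth_gbps": 10}
        return {"service": "l3vpn-flow", **profile}

    if __name__ == "__main__":
        for p in (1, 3, 5):
            print(p, "->", rucio_priority_to_sense(p))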

Ciena Research On Demand Network Testbed

The Ciena Research On Demand Network Testbed supports research investigations of large-scale WAN services using wavelengths on long-distance optical networks at 100 Gbps, 400 Gbps, 800 Gbps, and 1.2 Tbps.

The testbed provides 100 Gbps between the StarLight StarWave facility and the Ciena labs in Ottawa and 200 Gbps between the StarLight StarWave facility and a Ciena lab in Hanover, Maryland, with an extension to MAN LAN in New York City.

Energy Science Network 400 Gbps Testbed

The Energy Science Network (ESnet) provides services for major large-scale science research, lab facilities, and supercomputing centers. To provide an experimental research testbed for science networking innovations, ESnet has designed and implemented a 400 Gbps testbed that connects its lab in Berkeley, California, to the StarLight facility.

Joint Big Data Testbed

The Joint Big Data Testbed (JBDT) was designed and implemented by a collaboration of government agencies and iCAIR to explore extremely large data transfers over thousands of miles of WANs. The core of the testbed is based on 400 Gbps paths among core nodes in McLean, Virginia, the StarLight facility in Chicago, and the ESnet testbed site between StarLight and Berkeley, California.

Each year, the JBDT extends its paths and capabilities with support from SCinet to showcase experiments and demonstrations at the IEEE/ACM International Conference on High Performance Computing, Networking, Storage, and Analytics.

NSF Chameleon Cloud Testbed

iCAIR is a founding member of the NSF Chameleon Cloud Testbed, a large-scale, deeply reconfigurable experimental platform built to support Computer Science systems research. Community projects range from systems research (new operating systems, virtualization methods, performance variability studies, and power management) to projects in software-defined networking, artificial intelligence, and resource management.

To support experiments of this type, Chameleon provides a bare metal reconfiguration system, giving users complete control of the software stack, including root privileges, kernel customization, and console access. While most testbed resources are configured this way, a small portion is configured as a virtualized KVM cloud to balance the finer-grained resource sharing sufficient for some projects against the coarser-grained but stronger isolation properties of bare metal. iCAIR is collaborating with the Chameleon and FABRIC communities to integrate the capabilities of both testbeds.

FABRIC Testbed

iCAIR, StarLight, and the Metropolitan Research and Education Network (MREN) provide support for the NSF-funded FABRIC testbed (FABRIC is Adaptive ProgrammaBle Research Infrastructure for Computer Science and Science Applications), an international infrastructure that enables cutting-edge experimentation and research at scale in the areas of networking, cybersecurity, distributed computing, storage, virtual reality, 5G, IoT, machine learning, and science applications.

The FABRIC testbed supports experimentation on new Internet architectures, protocols, and distributed applications using a blend of resources from FABRIC, its facility partners, and their connected campuses and opt-in sites.

FABRIC is an everywhere-programmable network combining core and edge components and interconnecting to many external facilities. FABRIC is a multi-user facility supporting concurrent experiments of differing scales facilitated through federated authentication/authorization systems with allocation controls. The FABRIC infrastructure is a distributed set of equipment at commercial collocation spaces, national labs, and campuses.

Each of the 29 FABRIC sites has large amounts of compute and storage, interconnected by high-speed, dedicated optical links. FABRIC also connects to specialized testbeds (e.g., PAWR, NSF Clouds), the Internet, and high-performance computing facilities to create a rich environment for a wide variety of experimental activities. At StarLight, FABRIC supports 1.2 Tbps of capacity from the east coast and 1.2 Tbps of capacity to the west coast. One project iCAIR and its research partners are addressing is integrating FABRIC and Chameleon. FABRIC has been designed to be extensible, continually connecting to new facilities, including clouds, networks, other testbeds, computing facilities, and scientific instruments.
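
For a sense of how experimenters use FABRIC programmatically, the sketch below requests a small slice with one VM at the StarLight-connected site. It assumes the fablib Python API from the fabrictestbed_extensions package and a configured FABRIC account; the slice name, site label ("STAR"), and resource sizes are illustrative and should be checked against the current fablib documentation.

    # Minimal sketch assuming FABRIC's fablib API; verify method names against
    # the fablib release you are using.
    from fabrictestbed_extensions.fablib.fablib import FablibManager

    def create_test_slice():
        fablib = FablibManager()                  # reads the user's FABRIC credentials
        slice_ = fablib.new_slice(name="starlight-demo")
        # Request one VM at the FABRIC site connected to StarLight ("STAR").
        slice_.add_node(name="node1", site="STAR", cores=4, ram=16, disk=100)
        slice_.submit()                           # instantiate the slice and wait
        return slice_

    if __name__ == "__main__":
        create_test_slice()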


FABRIC Across Borders

iCAIR is collaborating with the NSF FABRIC Across Borders (FAB) initiative, an extension of the FABRIC testbed that connects the core North American infrastructure to four nodes in Asia, Europe, and South America.

By creating the networks needed to move vast amounts of data across oceans and time zones seamlessly and securely, the project enables international collaboration to accelerate scientific discovery.

Photonic Data Services

iCAIR, the National Center for Data Mining at UIC, and the Laboratory for Advanced Computing at UIC are developing new methods for integrating high-performance data management techniques with advanced methods for dynamic lightwave provisioning. These techniques are termed "Photonic Data Services." These services combine data transport protocols developed at the UIC research centers with wavelength provisioning protocols developed at iCAIR, such as the Optical Dynamic Intelligent Networking protocol (ODIN).

Researchers at iCAIR and NCDM have been conducting a series of tests to ensure optimal performance of a variety of network components and protocols, such as TCP and UDP, including testing methods using services for parallel TCP striping (GridFTP). Researchers at the NCDM have been using multiple testbeds, including the national TeraFlow Network, the state-wide I-WIRE network, and the metro area OMNInet, to test protocols that they developed to allow for the design of network-based applications with reliable end-to-end performance and speeds that scale to multiple Gbps. These protocols include UDT, an open-source library used to build network applications with advanced functionality. NCDM's UDT is an innovative protocol that uses UDP as the transit protocol while providing reliability by using TCP as a control protocol.
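
The parallel-stream idea behind tools such as GridFTP's TCP striping can be shown with a small, self-contained loopback sketch: a sender splits a buffer into stripes and distributes them round-robin across several TCP connections, while a receiver drains each stream in its own thread. The host, port, stream count, and stripe size are arbitrary; this illustrates the concept only and is not GridFTP or UDT code.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5201     # loopback demo endpoint (arbitrary)
    STREAMS = 4                        # number of parallel TCP streams
    STRIPE = 64 * 1024                 # bytes sent to one stream before rotating

    def accept_and_drain(srv, results):
        """Accept STREAMS connections and count the bytes received on each."""
        def drain(conn, idx):
            total = 0
            while True:
                chunk = conn.recv(65536)
                if not chunk:
                    break
                total += len(chunk)
            results[idx] = total
            conn.close()
        threads = []
        for i in range(STREAMS):
            conn, _ = srv.accept()
            t = threading.Thread(target=drain, args=(conn, i))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()

    def send_striped(payload: bytes):
        """Stripe the payload round-robin across parallel TCP connections."""
        socks = [socket.create_connection((HOST, PORT)) for _ in range(STREAMS)]
        for offset in range(0, len(payload), STRIPE):
            socks[(offset // STRIPE) % STREAMS].sendall(payload[offset:offset + STRIPE])
        for s in socks:
            s.close()

    if __name__ == "__main__":
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(STREAMS)
        results = [0] * STREAMS
        rx = threading.Thread(target=accept_and_drain, args=(srv, results))
        rx.start()
        send_striped(b"x" * (8 * 1024 * 1024))    # 8 MiB test payload
        rx.join()
        srv.close()
        print("bytes received per stream:", results)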

NCDM and iCAIR have used Photonic Data Services demonstrations to set a new high-performance record for trans-Atlantic data transit.

Photonic TeraStream

iCAIR has also demonstrated the prototype Photonic TeraStream to illustrate its potential for supporting global applications with next-generation wavelength-based networking and allowing those applications to utilize the optical network control plane directly. Such new applications could be based on techniques for provisioning "Global Services-on-Demand," a method that allows applications to select the services they use.

The Photonic TeraStream was designed and developed to allow for experimentation with new techniques for provisioning high-performance composite applications. The Photonic TeraStream application was developed as a prototype "composite application" that could potentially integrate several component applications, including high-performance data transfer (based on GridFTP), digital media streaming, high-performance remote data access methods (based on iSCSI), and dynamic resource provisioning.

BRIDGES

iCAIR is collaborating with the NSF BRIDGES initiative (Binding Research Infrastructures for the Development of Global Experimental Science), an NSF-funded high-performance network testbed connecting research programs in the United States and Europe.

BRIDGES provides a flexible 100 Gbps trans-Atlantic backbone ring connecting Washington DC, Paris, Amsterdam, and New York City. These four locations provide easy access to research programs in the United States and the European Union, enabling them to establish end-to-end network infrastructure between and among collaborating research teams on two continents.

BRIDGES is developing and deploying advanced network programmability software to deliver rapid reconfiguration, predictability, and repeatable service provisioning, seamless multi-domain scalability, and advanced APIs that enable cyber-infrastructure control and orchestration through automated agents.

Named Data Networking for Data Intensive Science Testbed

iCAIR and StarLight provide support for the Named Data Networking for Data Intensive Science (N-DISE) Testbed, which Northwestern University is leading.

N-DISE is developing data-centric networks and systems designs using Named Data Networking (NDN) architecture. Using named data instead of endpoints represents a paradigm shift in transferring data based on a data-centric approach to system and network design. It allows a pull-based data distribution architecture with stateful forwarding and provides native support for caching and multicasting. It supports direct authentication and securing of data and controlled data access.

It also provides system support through the data lifecycle: a) data production, with naming, authenticating, and securing data directly; b) scalable data retrieval by delivering data using names; c) in-network caching; d) automated joint caching and forwarding; e) multicast delivery; f) network-wide data-intensive computation and learning; and g) a common framework to support different application domains.
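
The pull-based, cache-friendly retrieval model described above can be illustrated with a toy example: a consumer expresses an Interest in a name, and any node along the path that holds the named Data answers from its content store. The sketch below is purely conceptual Python, with hypothetical names, and is not the NDN forwarder or any NDN library API.

    # Toy illustration of NDN-style named-data retrieval with in-network caching.

    class CachingNode:
        """A node that answers Interests by name, caching Data fetched upstream."""

        def __init__(self, name, upstream=None, store=None):
            self.name = name
            self.upstream = upstream      # next-hop CachingNode (None at the producer)
            self.store = store or {}      # producer content, keyed by data name
            self.cache = {}               # content store (in-network cache)

        def express_interest(self, data_name):
            if data_name in self.cache:   # cache hit: no upstream traffic
                print(f"{self.name}: cache hit for {data_name}")
                return self.cache[data_name]
            if data_name in self.store:   # this node is the producer
                return self.store[data_name]
            data = self.upstream.express_interest(data_name)   # pull from upstream
            self.cache[data_name] = data  # cache the Data on the way back
            return data

    if __name__ == "__main__":
        producer = CachingNode("producer", store={"/hep/run42/block0": b"payload"})
        edge = CachingNode("edge-router", upstream=producer)
        edge.express_interest("/hep/run42/block0")   # fetched from the producer, then cached
        edge.express_interest("/hep/run42/block0")   # served from the edge cache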

International P4 Testbeds

iCAIR is engaged in multiple research projects using the P4 network programming language (“Protocol Independent, Target Independent, Field Reconfigurable”), enabling many new capabilities for programmable networks, including capabilities supporting data-intensive science services.

A particularly important P4 capability is In-band Network Telemetry (INT), which enables high-fidelity network flow visibility. To develop the capabilities of P4, an international consortium of network research institutions, including iCAIR, collaborated in designing, implementing, and operating an International P4 Testbed.

This testbed provides a highly distributed network research and development environment to support advanced empirical experiments at a global scale, including on 100 Gbps paths. The implementation includes access to the P4Runtime implementation.
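
To make the INT idea concrete, the sketch below parses a sequence of per-hop telemetry records (switch ID, ingress/egress timestamps, queue depth) and derives per-hop latency, which is the kind of high-fidelity visibility INT provides. The 16-byte record layout is an assumed example for illustration, not the INT specification's exact wire format.

    import struct

    # Assumed per-hop record: switch_id, ingress_ts, egress_ts, queue_depth (all uint32).
    HOP_RECORD = struct.Struct("!IIII")

    def parse_hop_records(metadata: bytes):
        """Decode a stack of per-hop telemetry records into per-hop statistics."""
        hops = []
        for offset in range(0, len(metadata), HOP_RECORD.size):
            switch_id, ts_in, ts_out, qdepth = HOP_RECORD.unpack_from(metadata, offset)
            hops.append({
                "switch_id": switch_id,
                "hop_latency": ts_out - ts_in,   # in whatever units the switch reports
                "queue_depth": qdepth,
            })
        return hops

    if __name__ == "__main__":
        sample = HOP_RECORD.pack(1, 1000, 1012, 7) + HOP_RECORD.pack(2, 1100, 1145, 33)
        for hop in parse_hop_records(sample):
            print(hop)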

In addition, iCAIR is participating in an international P4 testbed (Global P4Lab) designed and implemented by a global consortium led by GEANT, the international network for the European R&E national networks.

Network Optimized for the Transport of Experimental Data Testbed

iCAIR is participating in an international initiative led by CERN to develop a capability entitled Network Optimized for Transfer of Experimental Data (NOTED) for potential use by the Large Hadron Collider (LHC) networking community.

The goal of the NOTED project is to optimize transfers of LHC data among sites (including by using AI/ML/DP techniques) by addressing problems such as saturation, contention, congestion, and other impairments. To support this initiative's research components, iCAIR has collaborated with multiple other organizations to design and implement an international NOTED testbed.

Scitags Packet Marking Testbed

With multiple research partners led by CERN, iCAIR is participating in a research project exploring techniques and technologies for managing large-scale scientific workflows over networks.

One technique is using Scientific network tags (scitags), an initiative promoting the identification of the science domains and their high-level activities at the network level. This task is becoming increasingly complex, especially as multiple science projects share the same foundation resources simultaneously yet are governed by multiple divergent variables: requirements, constraints, configurations, technologies, etc.

A key method to address this issue is employing techniques that provide high-fidelity visibility into exactly how science flows utilize network resources end-to-end. To develop these services, architecture, techniques, and technologies with its research partners, iCAIR has designed and implemented an international Packet Marking Testbed.
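
As a rough illustration of the flow-marking idea, the sketch below emits a small UDP record that identifies the science domain and activity of a transfer so that a collector can associate network measurements with the owning experiment. The collector address and record fields are placeholders chosen for this sketch; they are not the published scitags packet format.

    import json
    import socket

    COLLECTOR = ("127.0.0.1", 10514)   # placeholder; point at a real flow collector

    def emit_flow_marker(src: str, dst: str, experiment: str, activity: str) -> None:
        """Send an illustrative flow-marking record for one science flow."""
        record = {
            "flow": {"src": src, "dst": dst},
            "experiment": experiment,   # e.g., the owning science domain
            "activity": activity,       # e.g., "production" or "data-challenge"
        }
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(json.dumps(record).encode(), COLLECTOR)
        sock.close()

    if __name__ == "__main__":
        emit_flow_marker("192.0.2.10", "198.51.100.20", "atlas", "production")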

NASA Goddard High End Computer Networking

iCAIR has formed a partnership with NASA, which must process and exchange increasingly vast amounts of scientific data, to address the challenges of transporting large data volumes over WANs. NASA networks must scale to ever-higher speeds: 400 gigabit per second (Gbps) WAN/LAN networks are the current challenge being addressed, with plans to address 800 Gbps in the future. Meeting this goal requires more than simply having 400 Gbps network paths, because typical data transfer rates would not even fill a 10 Gbps path.

The NASA Goddard High End Computer Networking (HECN) team is developing systems and techniques to achieve near 400G line-rate disk-to-disk data transfers between a pair of high-performance NVMe Servers across national WAN network paths by utilizing NVMe-oF/TCP technologies to transfer the data between the servers' PCIe Gen4 NVMe drives.
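
A back-of-the-envelope calculation shows why this requires aggregating many NVMe devices and parallel streams; the per-drive read rate below is an illustrative figure for a PCIe Gen4 NVMe drive, not a measured value from the HECN systems.

    LINE_RATE_GBPS = 400
    DRIVE_READ_GBYTES_PER_S = 7                        # illustrative Gen4 NVMe sequential read
    drive_gbps = DRIVE_READ_GBYTES_PER_S * 8           # gigabytes/s -> gigabits/s
    drives_needed = -(-LINE_RATE_GBPS // drive_gbps)   # ceiling division
    print(f"~{drive_gbps} Gbps per drive; at least {drives_needed} drives per server "
          f"to source {LINE_RATE_GBPS} Gbps")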

These techniques are being explored and tested on a national scale testbed, including the WAN testbeds created by SCinet for the IEEE/ACM International Conference on High-Performance Computing, Networking, Storage, and Analytics.

Resilient, Distributed Processing And Rapid Data Transport Testbed

The Resilient, Distributed Processing And Rapid Data Transport (RDPRD) Testbed was designed and implemented to investigate large-scale interconnected and interlocking problems that demand a high-performance, dynamic, distributed, data-centric infrastructure, including close integration of high-performance WAN transport, HPC compute facilities, high-performance storage, and sophisticated data management.

GENI Multi-Services Exchange SDX Network

The NSF GENI Multi-Services Network Exchange is an iCAIR Global Environment for Network Innovations (GENI) project that developed and implemented a prototype SDX network for computer science research, including services for federating testbeds.

This GENI SDX network supports the ongoing development of, maintenance of, and support for key software and hardware components of an L2 SDN/OpenFlow exchange SDX among GENI L2 network resources and other research networks, including international research networks. The project has been developing tools and APIs for experimenters to request and receive resources from the exchange that are fully integrated with GENI standard interfaces, such as GENI stitching.

The project has also developed mechanisms to integrate GENI tools with experimenter tools from other participating networks. This SDX has supported multiple national and international demonstrations of (and experiments with) functionality with other research networks, data-intensive science campuses, and multiple experimenters over participating L2 networks. This project provides continuing support for two existing GENI racks at the StarLight Facility (i.e., the InstaGENI and ExoGENI network exchange points).

KREONET SD-WAN Testbed

The KREONET SD-WAN Testbed is an open-source (vendor-neutral), highly scalable, centrally controlled, virtualized wide area network (WAN) testbed being used to develop services for global data-intensive sciences and, more generally, services using software-defined networking. The testbed has been implemented from South Korea to StarLight on the KREONET network.

Elastic Data Transfer Testbed (SciStream)

iCAIR collaborated with Argonne National Laboratory to investigate new techniques for an elastic data transfer infrastructure (DTI), using an architecture that expands and shrinks data transfer node resources based on demand. Data plane programmability has emerged as a response to the lack of flexibility in networking ASICs and the long product cycles required to introduce new protocols on networking equipment. This approach bridges the gap between the potential of the SDN model and actual OpenFlow implementations. Following the ASIC tradition, OpenFlow implementations have focused on matching protocol header fields defined in forwarding tables, which cannot be modified once the switch is manufactured. In contrast, programmable data planes allow network programmers to define precisely how packets are processed in a reconfigurable switch chip (or in a virtual software switch). Such levels of programmability provide opportunities for offloading specific data processing to the network and obtaining a more accurate view of network state.

One key element of the elastic DTI architecture is the statistics collector, which feeds usage and performance information to a decision engine. Another key element is the load balancer, which distributes incoming transfers among existing virtualized resources. State-of-the-art solutions rely on traditional network monitoring systems such as SNMP and sFlow to collect network state information. However, traditional network monitoring methods either use a polling mechanism to query network devices or use sampling when devices are allowed to push data, to lower the communication overhead and save database storage space.

In-band Network Telemetry (INT) is a framework that allows the data plane to add telemetry metadata to each packet of a flow; the metadata is then removed and sent to a collector/analyzer before the packet is forwarded to its final destination. This initiative undertook experiments to show the impact of advanced network telemetry, using programmable switches and the P4 (Programming Protocol-independent Packet Processors) language, on the granularity of network monitoring measurements, and compared the detection gap between a programmable data plane approach and traditional methods such as sFlow.
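
A toy version of that control loop is sketched below: a decision engine takes telemetry-derived signals (link utilization and queue depth) and returns a new worker count, expanding or shrinking the pool of data transfer resources. The thresholds, signal names, and scaling step are hypothetical; the real system's policies and telemetry sources differ.

    def decide_scale(current_workers: int, link_utilization: float, queue_depth: int,
                     max_workers: int = 16, min_workers: int = 1) -> int:
        """Return a new worker count for the elastic transfer pool (illustrative policy)."""
        if link_utilization > 0.85 or queue_depth > 1000:
            return min(current_workers + 1, max_workers)   # demand rising: expand
        if link_utilization < 0.30 and queue_depth < 50:
            return max(current_workers - 1, min_workers)   # demand falling: shrink
        return current_workers

    if __name__ == "__main__":
        workers = 4
        for util, qdepth in [(0.90, 1500), (0.92, 1200), (0.20, 10), (0.50, 200)]:
            workers = decide_scale(workers, util, qdepth)
            print(f"utilization={util:.2f} queue={qdepth} -> workers={workers}")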

Digital Research Alliance of Canada Research Testbeds

The non-profit Digital Research Alliance of Canada provides services to Canadian researchers to advance the nation’s international leadership in the knowledge economy. Key focal areas are advanced research computing (ARC), research data management (RDM), and research software (RS), which collectively provide a resource platform for multiple research communities.

To stage demonstrations of advanced services for computational science, the Digital Research Alliance, CANARIE (the national R&E network of Canada), iCAIR, the StarLight consortium, the Metropolitan Research and Education Network (MREN), and SCinet collaborate to create an international testbed for demonstrations at the annual IEEE/ACM International Conference on High-Performance Computing, Networking, Storage, and Analytics.

Virtual Transfer Services (VTS) Testbed

iCAIR and StarLight provide support for the University of Houston’s Virtual Transfer Services (VTS) Testbed, an SDN offering on the NSF’s Global Environment for Network Innovations (GENI) that provides a VTS Aggregate Manager for GENI. VTS enables experimenters to create isolated overlay topologies based on programmable datapath elements and labeled circuit services, providing inter-domain connectivity and L2 topologies, including label isolation; common ethertypes; MAC, IPv4, and IPv6 addressing; implementation and measurement of performance isolation; and exclusive control and management of the topologies.

Illinois Express Quantum Network Testbed

The Illinois Express Quantum Network (IEQNET) Testbed and its supporting programs were designed and implemented to realize metropolitan-scale quantum networking over deployed optical fiber using currently available technology.

The IEQNET consists of multiple sites that are geographically dispersed in the Chicago metropolitan area, including Northwestern’s campus in Evanston, the StarLight Communications Exchange Facility on Northwestern’s Chicago campus, a carrier exchange in central Chicago, Argonne National Laboratory, and Fermi National Accelerator Laboratory. Each site has quantum nodes (Q-Nodes) that generate or measure quantum signals, primarily entangled photons.

 Measurement results are communicated using standard classical signals, communication channels, and conventional networking techniques such as Software-defined networking (SDN). The entangled photons in IEQNET nodes are generated at multiple wavelengths and are selectively distributed using transparent optical switches.

At the OFC conference in March 2023, the Illinois Express Quantum Network consortium, led by NuCrypt and Northwestern’s Center for Photonic Communication and Computing, demonstrated the distribution and measurement of quantum entangled signals over fiber with co-propagating classical data. Distributed measurements were collected and controlled from a single location using an embedded optical data link. An optical switch was programmed to send different quantum entangled wavelengths to spatially separated points. Demonstrating coordinated control of quantum photonic instruments at multiple sites highlighted the capability for robust operation of commercially available quantum optical equipment over fiber-optic infrastructure.

OMNInet Testbed

OMNInet is a large-scale collaborative experimental network established to evolve metro digital communication services to ensure they are high-capacity, high-performance, highly scalable, reliable, and manageable at all levels. OMNInet was designed to assess and validate next-generation optical technologies, architecture, and applications in metropolitan networks. Communications architecture based on complex core facilities is optimal when 80% of information flows are local; however, much Internet traffic consists of remote access, and these patterns require new types of architecture and engineering.

On this optical metro testbed, a research partnership has conducted trials of photonic-based GE services based on innovative optical transport switching incorporating photonic-based components, architecture, and techniques that support multiple interconnected lightwave (lambda) paths within fiber strands. OMNInet is based on Dense Wavelength Division Multiplexing (DWDM), which allows multiple data streams to travel over the same fiber pair by utilizing different colors of light (light frequencies). Each frequency simultaneously communicates data, substantially increasing fiber capacity.
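
The capacity gain from DWDM is simple multiplication, as the small example below shows; the channel count and per-channel rate are example figures, not OMNInet's actual configuration.

    channels = 40                  # e.g., a 40-channel DWDM grid
    per_channel_gbps = 100         # e.g., 100 Gbps per wavelength
    print(f"{channels} channels x {per_channel_gbps} Gbps = "
          f"{channels * per_channel_gbps} Gbps per fiber pair")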

OMNInet is based on a wide range of architecture emerging from multiple standards organizations, including the ITU, IETF, and IEEE. New techniques for traffic engineering are being explored on this testbed, especially those that can take advantage of architectural models that are more distributed than hierarchical. OMNInet allows research on core optical components, e.g., multi-protocol, integrated DWDM, experiments with new technologies and techniques (including IP control planes using GMPLS, which employs a signaling overlay architecture), testing and analysis, and new protocols. OMNInet employs Internet protocols and mesh architectures to provide reliability through redundancy, automatic restoration, optimization through traffic management, pre-fault diagnostics for trouble avoidance, granular service definition, etc. Key components are adjustable lasers and minute mirrors that control light wavelengths to route traffic. The original optical switches were not commercial products but unique designs based on MEMS-supported O-O-O devices.

OMNInet research projects have included:

  • Trials of highly reliable, scalable 10 GE in metropolitan and wide area networks. Ethernet is the global standard for local area networks (LANs) that connect today's computing devices; 10 GE runs at speeds 10-100 times faster than current standards and can extend the network throughout metropolitan areas (MANs) and between cities (WANs).
  • Trials of new technologies to support applications that require extremely high levels of bandwidth.
  • Development and trials of optical switching, ensuring maximized capabilities in the wide-scale deployment of all-photonic networks.
  • Trials of new techniques that allow for application signaling to optical network resources.
  • Experiments with new types of advanced networking middleware that make networks more intelligent.
  • Trials of Multi-Service Optical Networking (MON) as a dedicated point-to-point network service enabling interconnections among sites as well as mirroring data and transmitting large quantities of information at high performance.

OMNInet has implemented core optical switches at StarLight, interconnected by dedicated optical fiber with multiple other devices to allow flexible L1 and L2 interconnectivity. Core nodes are connected to computational clusters at various sites and other testbeds. OMNInet uses StarLight resources to extend experiments nationally and internationally.

OMNInet is being used to support quantum networking research in partnership with Argonne National Laboratory and Fermi National Accelerator Laboratory.

Photonic TeraStream Testbed

With its research partners, iCAIR is investigating the potential for enhancing large-scale, high-capacity WAN transport by implementing services (Photonic TeraStream Services) that demonstrate the potential for supporting global applications with wavelength-based networking. These services include allowing applications to directly utilize the optical network control plane and transiting selected data flows from L3 channels to L1 channels.

Such “composite” services integrate high-performance data transport protocols, custom-configured DTNs, high-capacity L2 switches, and optical transport switches with wavelength provisioning protocols (e.g., the Optical Dynamic Intelligent Networking protocol, ODIN).

SEAIP International DTN Testbed

As part of a collaboration with SEAIP (Southeast Asia International joint-research and training Program), iCAIR assisted in designing and implementing an international Southeast Asia DTN testbed directed at creating a Data Mover Service supported by multiple sites in countries throughout Southeast Asia. Another goal was to participate in the Supercomputing Asia Conference Data Mover Challenge.

Open Storage Network

iCAIR supports the Open Storage Network (OSN), which serves science and scholarly research requiring data storage and transfer at scale by simplifying and accelerating access to data actively used by ongoing research projects. The OSN emphasizes large data sets (hundreds of terabytes) that are often difficult to share and long-tail data sets that are often difficult to find and access.

Deployment of the OSN is a response to the increasing importance of storage as the third component of national cyberinfrastructure, complementing investments in computing and networks. While other uses may emerge over time, the OSN is intended initially to serve two principal needs: (1) facilitate smooth flow of large data sets between data and computing resources such as instruments, synthetic data projects, campus data centers, national supercomputing centers, and cloud providers; and (2) make it easy to expose long tail data sets to the entire scientific community.

The OSN is a functionally and administratively coherent federation of storage systems called Pods that reside at independent sites. The OSN design leverages well-defined standards and APIs that accommodate local variation while ensuring uniform global behavior. This approach is intended to enable scaling to hundreds of pods with aggregate raw capacity of hundreds of petabytes.