Network Research Exhibition
The SC Conference Series is a test bed for cutting-edge developments in high-performance networking, computing, storage, and analysis. Network Research Exhibition (NRE) demonstrations leverage the advanced capabilities of SCinet, SC’s dedicated high-capacity network.
Additionally, each year a selection of NRE participants is invited to share the results of their demos and experiments from the preceding year's conference as part of the Innovating the Network for Data-Intensive Science (INDIS) workshop.
Network researchers and professionals from government, education, research, and industry are invited to submit proposals for demonstrations and experiments at the SC Conference that display innovation in emerging network hardware, protocols, and advanced network-intensive scientific applications.
Accepted NRE Demos
SC19-NRE-001
Global Research Platform: A Distributed Environment for Science Research and New Knowledge Discovery
Location: Booth 993 (StarLight)
An international collaboration has been established to design, develop, implement, and operate a highly distributed environment, the Global Research Platform (GRP), for large scale international science collaborations. These demonstrations showcase the capabilities of the GRP to support large scale, data intensive, worldwide science research. A motivation for this initiative is the recognition that large scale worldwide collaborative science, especially data intensive science research, cannot be well supported by traditional commodity networks. Instead, specialized networks that address the demanding requirements of science applications and data workflows must be implemented, particularly services for high capacity individual data streams transported thousands of miles over multi-domain networks.
SC19-NRE-002
International P4 Networking Testbed
Location: Booth 993 (StarLight)
To realize the advantages of the P4 network programming language (“Protocol Independent, Target Independent, Field Reconfigurable”), including for data intensive science, network research institutions around the world organized a collaboration to design, implement, and operate an International P4 Testbed. This testbed is a highly distributed network research and development environment supporting advanced empirical experiments at global scale, including experiments over 100 Gbps paths. Funded by the National Science Foundation's (NSF) International Research Network Connections (IRNC) program, the International Center for Advanced Internet Research (iCAIR) designed, implemented and now operates an International Software Defined Exchange (SDX) at the StarLight International/National Communications Exchange Facility (StarLight). This SDX supports multiple national and international network research testbeds, including an international P4 testbed that allows member institutions to share multiple distributed P4 resources over international research and education networks.
SC19-NRE-003
IRNC Software Defined Exchange (SDX) Services Integrated with 100Gbps Data Transfer Nodes (DTNs) for Petascale Science
Location: Booth 993 (StarLight)
iCAIR is designing, developing, implementing and experimenting with an International Software Defined Exchange (SDX) at the StarLight International/National Communications Exchange Facility, which integrates services based on 100 Gbps Data Transfer Nodes (DTNs) for Wide Area Networks (WANs), including trans-oceanic WANs, to provide high performance transport services for petascale science, controlled using Software Defined Networking (SDN) techniques. These SDN enabled DTN services are being designed specifically to optimize capabilities for supporting large scale, high capacity, high performance, reliable, high quality, sustained individual data streams for science research. This initiative is funded by the National Science Foundation’s (NSF) International Research Network Connections (IRNC) program.
SC19-NRE-004
400Gbps WAN Services: Architecture, Technology and Control Systems
Location: Booth 993 (StarLight)
With its research partners, including the SCinet WAN group, the International Center for Advanced Internet Research (iCAIR) at Northwestern University is designing, implementing and demonstrating an E2E 400 Gbps WAN service from the StarLight International/National Communications Exchange Facility in Chicago to the SC19 venue in Denver. Data production among science research collaborations continues to increase. Recently, LHC planners estimated that the annual rate of increase for the foreseeable future will be 50% per year. Consequently, the networking community must begin preparing for 400 Gbps WAN services. 100 Gbps WAN services, now ubiquitous, were first implemented more than ten years ago, and before they were widely deployed it was necessary to develop techniques to effectively utilize that level of capacity. Similarly, the requirements and implications of 400 Gbps WAN services must be explored at scale. These demonstrations showcase large scale E2E 400 Gbps WAN services, intersecting with SCinet 400 Gbps LAN services at the SC19 venue.
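As a back-of-the-envelope illustration of that growth rate (a minimal sketch; the 100 Gbps starting point and ten-year horizon are assumptions, and only the 50% annual figure comes from the estimate above):

```python
# Illustrative compounding of data-rate demand at 50% per year.
# The 50% growth figure is the LHC planning estimate cited above; the
# 100 Gbps starting point and 10-year horizon are assumptions.
start_gbps = 100.0
growth = 1.5  # 50% per year

rate = start_gbps
for year in range(1, 11):
    rate *= growth
    print(f"year {year:2d}: ~{rate:7.0f} Gbps sustained demand")
    if rate > 400 and rate / growth <= 400:
        print("  -> demand passes a single 400 Gbps link here")
```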
SC19-NRE-005
DTNs Closely Integrated with WAN High Performance 100Gbps Optical Channels
Location: Booth 993 (StarLight)
Data Transfer Nodes (DTNs) have been demonstrated as key network appliances for supporting large scale LAN and WAN data intensive science workflows. With its research partners, iCAIR is investigating techniques for optimizing DTNs for services related to such workflows. For example, DTNs are currently used primarily with L3 services, and in a few cases with L2 services. The iCAIR research project is exploring ways to directly integrate DTNs with 100 Gbps and 400 Gbps WAN channels based on optical networking. This project is using several testbeds, including an international 100 Gbps testbed designed, implemented and operated by Ciena. Recent developments showcased through demonstrations at SC19 highlight these innovations.
SC19-NRE-006
Toward SCinet DTN as-a-Service
Location: NOC 1081 (SCinet)
SC19 SCinet DTN-as-a-Service is a third-year X-NET project. The project provides Data Transfer Node software and hardware platforms as prototype services to support the SC19 SCinet community before and during the SC conference. The project supports testing, demonstration, experimentation, evaluation and other SC SCinet related activities, especially those for data intensive science. For SC19, new prototype services include Kubernetes, NVMeoF and 400G LAN/WAN experiments. Please see the SC19 INDIS Workshop paper “SCinet DTN-as-a-Service Framework” for details.
SC19-NRE-007
Traffic Prediction for Flow and Bandwidth
Location: Booth 925 (Department of Energy)
Predicting traffic on network links can help engineers estimate the percentage of bandwidth that will be utilized. Efficiently managing this bandwidth allows engineers to achieve reliable file transfers and to run networks hotter, sending more data over existing resources. Toward this end, ESnet researchers have developed an advanced deep learning, LSTM-powered network traffic prediction system.
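A minimal sketch of this general approach, not ESnet's actual system: an LSTM trained on a sliding window of past link-utilization samples to forecast the next interval. The synthetic data, window length, and model size are all assumptions for illustration.

```python
# Minimal LSTM link-utilization forecaster (illustrative only; synthetic data).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic "percent utilization" time series: diurnal pattern plus noise.
t = np.arange(2000)
util = 50 + 30 * np.sin(2 * np.pi * t / 288) + np.random.normal(0, 5, t.size)

# Build supervised samples: predict the next value from the previous WINDOW values.
WINDOW = 48
X = np.array([util[i:i + WINDOW] for i in range(len(util) - WINDOW)])
y = util[WINDOW:]
X = X[..., np.newaxis]               # shape (samples, timesteps, features)

model = Sequential([
    LSTM(32, input_shape=(WINDOW, 1)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Forecast the next interval from the most recent window.
next_util = model.predict(util[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)
print(f"predicted utilization for next interval: {next_util[0, 0]:.1f}%")
```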
SC19-NRE-008
Tracking Network Events with Write Optimized Data Structures
Location: NOC 1081 (SCinet)
Our Diventi project uses a write optimized B-Tree to index layer 3 network activity, either from Bro conn logs or NetFlow data. This system can ingest and index events at sustained rates of hundreds of thousands per second while answering queries in milliseconds. Using this ability to answer questions in sub-second timeframes, we hope to provide multiple levels of metrics around the traffic seen.
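Diventi's write optimized B-Tree is its own engine; the sketch below only illustrates the write-optimized idea it relies on, using a simple LSM-style stand-in: inserts land in a cheap in-memory buffer that is periodically flushed as sorted runs keyed by IP address, and point queries merge the buffer with the runs. All names and sizes here are assumptions.

```python
# Tiny write-optimized index sketch (not Diventi's actual B-Tree): inserts go to an
# in-memory buffer and are flushed as sorted runs; point queries merge all runs.
import bisect
from collections import defaultdict

class EventIndex:
    def __init__(self, buffer_limit=100_000):
        self.buffer = defaultdict(list)   # ip -> [event, ...]; cheap, unsorted inserts
        self.runs = []                     # sorted (ip, event) runs, standing in for disk
        self.buffer_limit = buffer_limit
        self._buffered = 0

    def insert(self, ip, event):
        self.buffer[ip].append(event)
        self._buffered += 1
        if self._buffered >= self.buffer_limit:
            self._flush()

    def _flush(self):
        run = sorted(((ip, ev) for ip, evs in self.buffer.items() for ev in evs),
                     key=lambda kv: kv[0])
        self.runs.append(run)
        self.buffer.clear()
        self._buffered = 0

    def query(self, ip):
        """Return all events recorded for one IP, across the buffer and flushed runs."""
        hits = list(self.buffer.get(ip, []))
        for run in self.runs:
            lo = bisect.bisect_left(run, (ip,))
            while lo < len(run) and run[lo][0] == ip:
                hits.append(run[lo][1])
                lo += 1
        return hits

idx = EventIndex(buffer_limit=4)
for src, dst, port in [("10.0.0.1", "10.0.0.9", 443), ("10.0.0.2", "10.0.0.9", 22),
                       ("10.0.0.1", "10.0.0.7", 80),  ("10.0.0.3", "10.0.0.9", 53),
                       ("10.0.0.1", "10.0.0.9", 8443)]:
    idx.insert(src, {"dst": dst, "port": port})
print(idx.query("10.0.0.1"))   # three events for 10.0.0.1
```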
SC19-NRE-009
Optimized Traffic Engineering Through Bottleneck Structure Identification for High-speed Data Networks
Location: NOC 1081 (SCinet)
Reservoir Labs has developed GradientGraph (G2), a new network optimization platform to help tune the performance of flows in high speed networks. This framework is based on a new mathematical theory [RG10, RL19] and algorithms that can efficiently (in polynomial time) identify the bottleneck structure of a network, revealing key topological and structural properties of the network towards optimizing its Quality of Service (QoS). Using G2, network operators can (1) identify in real time the bottleneck links in a network, (2) make optimized traffic engineering decisions (e.g., flow re-routing or traffic shaping), (3) create a baseline to identify flow performance issues and (4) perform capacity planning to optimize network upgrade decisions.
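GradientGraph's own algorithms are not spelled out in this abstract; as a minimal illustration of what identifying the bottleneck link of each flow means, the sketch below runs the standard max-min fair water-filling computation on a toy topology and reports which link constrains each flow. The link capacities and flow paths are assumptions.

```python
# Standard max-min fair "water-filling" (illustrative; not Reservoir Labs' G2 algorithm).
# links: capacity in Gbps; flows: the links each flow traverses (toy topology).
links = {"L1": 100.0, "L2": 40.0, "L3": 100.0}
flows = {"f1": ["L1", "L2"], "f2": ["L2", "L3"], "f3": ["L1", "L3"]}

remaining = dict(links)
active = {f: set(path) for f, path in flows.items()}
rate, bottleneck_of = {}, {}

while active:
    # Fair share each link could still give to its unconverged flows.
    share = {l: remaining[l] / sum(l in p for p in active.values())
             for l in links if any(l in p for p in active.values())}
    # The tightest link saturates first; its flows converge at that share.
    tight = min(share, key=share.get)
    for f in [f for f, p in active.items() if tight in p]:
        rate[f], bottleneck_of[f] = share[tight], tight
        for l in active[f]:
            remaining[l] -= share[tight]
        del active[f]

for f in flows:
    print(f"{f}: {rate[f]:.1f} Gbps, bottlenecked at {bottleneck_of[f]}")
```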
SC19-NRE-010
Demonstrations of 400Gbps Disk-to-Disk WAN File Transfers using RDMA and NVMe Drives
Location: Booth 993 (StarLight)
NASA requires the processing and exchange of ever increasing amounts of scientific data, so NASA networks must scale up to ever higher speeds, with multi-100 Gigabit per second (Gbps) networks being the current challenge. However, it is not sufficient to simply have 100 Gbps network pipes, since normal data transfer rates would not even fill a 10 Gbps pipe. The NASA Goddard High End Computer Networking (HECN) team will demonstrate systems and techniques to achieve near 400G line-rate disk-to-disk data transfers between a high performance NVMe server at SC19 and a pair of high performance NVMe servers across two national wide area 4x100G network paths, utilizing RDMA technologies to transfer the data between the servers' NVMe drives.
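As a conceptual sketch only (the real demonstration moves data with RDMA between NVMe drives, which is not shown here), the snippet below illustrates how a large file's byte ranges can be striped round-robin across four 100G paths so that the aggregate can approach 400 Gbps; the file size, stripe size, and path names are assumptions.

```python
# Conceptual stripe plan: map byte ranges of a large file round-robin onto four
# 100G paths. The HECN demo moves these stripes with RDMA between NVMe drives;
# that machinery is not shown here. Sizes and path names are assumptions.
FILE_SIZE = 4 * 1024**4          # 4 TiB, assumed
STRIPE = 1 * 1024**3             # 1 GiB per stripe, assumed
PATHS = ["path-A-100G", "path-B-100G", "path-C-100G", "path-D-100G"]

def stripe_plan(file_size, stripe, paths):
    offset, i = 0, 0
    while offset < file_size:
        length = min(stripe, file_size - offset)
        yield paths[i % len(paths)], offset, length
        offset += length
        i += 1

per_path = {p: 0 for p in PATHS}
for path, offset, length in stripe_plan(FILE_SIZE, STRIPE, PATHS):
    per_path[path] += length

for path, total in per_path.items():
    print(f"{path}: {total / 1024**4:.2f} TiB  (~{total / FILE_SIZE:.0%} of the file)")
```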
SC19-NRE-011
Big Data Express: A Scalable and High-performance Data Transfer Platform
Location: Booth 993 (StarLight)
Big data has emerged as a driving force for scientific discoveries. To meet data transfer challenges in the big data era, DOE's Advanced Scientific Computing Research (ASCR) office has funded the BigData Express project. BigData Express is targeted at providing schedulable, predictable, and high-performance data transfer services for DOE's large-scale science computing facilities and their collaborators.
SC19-NRE-012
MDTMFTP: A High-Performance Data Transfer Tool
Location: Booth 993 (StarLight)
To address challenges in high performance data movement for large-scale science, the Fermilab network research group has developed mdtmFTP, a high-performance data transfer tool that optimizes data transfer on multicore platforms. mdtmFTP has a number of advanced features. First, it adopts a pipelined I/O design: data transfer tasks are carried out in a pipelined manner across multiple cores, with dedicated threads spawned to perform network and disk I/O operations in parallel. Second, mdtmFTP uses multicore-aware data transfer middleware (MDTM) to schedule an optimal core for each thread, based on system configuration, in order to optimize throughput across the underlying multicore platform. Third, mdtmFTP implements a large virtual file mechanism to efficiently handle lots-of-small-files (LOSF) situations. Finally, mdtmFTP utilizes optimization mechanisms such as zero copy, asynchronous I/O, batch processing, and pre-allocated buffer pools to maximize performance.
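mdtmFTP itself is a compiled, multicore-aware tool; the sketch below only illustrates the pipelined I/O idea described above, with a dedicated disk-reader thread feeding a dedicated network-sender thread through a bounded queue. The file path, chunk size, and destination are assumptions, and MDTM's core scheduling, zero copy, and LOSF handling are omitted.

```python
# Pipelined disk -> network transfer sketch: one thread reads chunks from disk while
# another sends them, so disk and network I/O overlap (the core idea behind
# mdtmFTP's pipelined design; core pinning, zero copy, and LOSF handling omitted).
import queue
import socket
import threading

CHUNK = 4 * 1024 * 1024          # 4 MiB chunks, assumed
SRC_FILE = "/data/big.dat"       # hypothetical source file
DEST = ("203.0.113.10", 5001)    # hypothetical receiver

def disk_reader(path, q):
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            q.put(chunk)          # blocks when the pipeline is full
    q.put(None)                   # end-of-stream marker

def net_sender(dest, q):
    with socket.create_connection(dest) as s:
        while (chunk := q.get()) is not None:
            s.sendall(chunk)

pipeline = queue.Queue(maxsize=16)   # bounded buffer between the two stages
threads = [threading.Thread(target=disk_reader, args=(SRC_FILE, pipeline)),
           threading.Thread(target=net_sender, args=(DEST, pipeline))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```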
SC19-NRE-013
SENSE: Intelligent Network Services for Science Workflows
Location: Booth 543 (California Institute of Technology /CACR)
The Software-defined network for End-to-end Networked Science at Exascale (SENSE) is a model-based orchestration system which operates between the SDN layer controlling the individual networks/end-sites, and science workflow agents/middleware. SENSE also includes Network Resource Manager and End-Site Resource Manager components which enable advanced features in the areas of multi-resource integration, real time responsiveness, and user interactions. The demonstration will highlight a new SENSE Layer 3 service which provides the mechanisms for directing desired traffic onto specific Layer 3 Virtual Private Networks for policy and/or quality of service reasons. The SENSE demonstration will also present the status of ongoing work to integrate SENSE services with domain science workflows.
SC19-NRE-014
MMCFTP's Data Transfer Experiment Using Five 100Gbps Lines Between Japan and USA
Location: Booth 1169 (NICT)
We will attempt MMCFTP data transfers over five 100 Gbps lines between Tokyo and Denver with a pair of servers. The five 100 Gbps lines will be prepared in cooperation with NRENs around the world and SCinet. We expect a transfer speed of 380 Gbps, and this experiment will encourage the international movement of big data in advanced science and technology.
SC19-NRE-015
Automated Tensor Analysis for Deep Network Visibility
Location: NOC 1081 (SCinet)
Reservoir Labs will demonstrate a usable and scalable network security workflow based on ENSIGN, a high-performance data analytics tool built on tensor decompositions that can analyze huge volumes of network data and provide actionable insights into the network. The enhanced workflow provided by ENSIGN assists in identifying actors who craft their actions to subvert signature-based detection methods, and it automates much of the labor intensive forensic process of connecting isolated incidents into a coherent attack profile. This approach complements traditional workflows that focus on highlighting individual suspicious activities.
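As a minimal illustration of the tensor-analysis idea (not ENSIGN itself), network events can be arranged into a (source, destination, time) count tensor and factored with a CP decomposition; the synthetic tensor, the chosen rank, and the use of the open-source TensorLy library are assumptions.

```python
# Toy (source, destination, time) traffic tensor factored with CP decomposition
# (illustrative only; ENSIGN is a separate high-performance tool).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
n_src, n_dst, n_time = 20, 20, 48

# Background noise plus one injected pattern: a few sources repeatedly hitting
# one destination during a short time window (a crude stand-in for coordinated activity).
traffic = rng.poisson(0.2, size=(n_src, n_dst, n_time)).astype(float)
traffic[2:5, 7, 30:36] += 15.0

cp = parafac(tl.tensor(traffic), rank=2, n_iter_max=200)
src_f, dst_f, time_f = cp.factors

# Report the dominant sources, destination, and time bins in each component.
for r in range(2):
    print(f"component {r}: "
          f"top sources {np.argsort(np.abs(src_f[:, r]))[-3:]}, "
          f"top destination {int(np.argmax(np.abs(dst_f[:, r])))}, "
          f"top time bins {np.argsort(np.abs(time_f[:, r]))[-3:]}")
```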
SC19-NRE-016
Implementing Traffic Engineering over Global SDX Testbed
Location: Booth 455 (National Center for High-Performance Computing)
The demo illustrates how traffic engineering can be enabled in multi-site SDN networks. A video is split into multiple streams that follow different paths to balance the traffic load and avoid congestion.
SC19-NRE-017
5G CBRS Spectrum Demonstration
Location: Booth 1817 (University of Utah)
Science applications that use real-time control and Internet of Things (IoT) sensors typically require specific network and security characteristics to function correctly. Because of their need for mobility and broad coverage, these applications are often wireless. The existing open Industrial, Scientific and Medical (ISM) spectrum allocations are in heavy demand, especially within dense urban areas. This demonstration is a proof of concept exploring emerging technology in the Citizens Broadband Radio Service (CBRS) spectrum for use by these types of scientific applications.
SC19-NRE-018
Embracing Programmable Data Planes for an Elastic Data Transfer Infrastructure
Location: Booth 993 (StarLight)
Data plane programmability has emerged as a response to the lack of flexibility in networking ASICs and the long product cycles that vendors take to introduce new protocols on their networking gear. Programmable data planes seek to allow network operators and programmers to define exactly how packets are processed in a reconfigurable switch chip. In this demo we will present the impact of in-band network telemetry using programmable data planes and the P4 (Programming Protocol-independent Packet Processors) language on the granularity of network monitoring measurements. We will compare the detection gap between a programmable data plane approach and traditional methods such as sFlow.
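As a minimal illustration of the measurement-granularity gap being compared (the P4 data plane itself is not shown), the sketch below simulates a short microburst and contrasts what per-packet telemetry records with what a simplified 1-in-N packet sampler of the sFlow kind would estimate; the packet counts, sampling ratio, and threshold are assumptions.

```python
# Simplified comparison of per-packet visibility (in-band telemetry style) versus
# 1-in-N packet sampling (sFlow style) on a synthetic microburst. Packet counts,
# the 1-in-1000 ratio, and the deterministic sampling model are all assumptions.
SAMPLE_EVERY = 1000

# 1 ms bins of packet counts: steady background plus a 3 ms burst.
bins = [80] * 100
for b in (40, 41, 42):
    bins[b] += 1500

# Per-packet telemetry sees every bin's true count.
per_packet_view = bins

# Sampling only "sees" one packet in every 1000; scale samples back up to estimate.
sampled_view, carry = [], 0
for count in bins:
    carry += count
    samples = carry // SAMPLE_EVERY
    carry -= samples * SAMPLE_EVERY
    sampled_view.append(samples * SAMPLE_EVERY)

THRESHOLD = 1000   # flag a "burst" if estimated packets per ms exceed this
print("burst bins, per-packet telemetry:",
      [i for i, c in enumerate(per_packet_view) if c > THRESHOLD])
print("burst bins, 1-in-1000 sampling:  ",
      [i for i, c in enumerate(sampled_view) if c > THRESHOLD])
```

With these assumed numbers, per-packet telemetry flags all three burst bins while the sampled estimate clearly exceeds the threshold in only one of them, which is the kind of detection gap the demo measures.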
SC19-NRE-019
Global Petascale to Exascale Workflows for Data Intensive Science Accelerated by Next Generation Programmable SDN Architectures and Machine Learning Applications
Location: Booth 543 (California Institute of Technology/CACR)
We will demonstrate several of the latest major advances in software defined and Terabit/sec networks, intelligent global operations and monitoring systems, workflow optimization methodologies with real-time analytics, and state of the art long distance data transfer methods, tools and server designs, to meet the challenges faced by leading edge data intensive experimental programs in high energy physics, astrophysics, climate and other fields of data intensive science. The key challenges being addressed include: (1) global data distribution, processing, access and analysis, (2) the coordinated use of massive but still limited computing, storage and network resources, and (3) coordinated operation and collaboration within global scientific enterprises, each encompassing hundreds to thousands of scientists.
The major programs being highlighted include the Large Hadron Collider (LHC), the Laser Interferometer Gravitational-Wave Observatory (LIGO), the Large Synoptic Survey Telescope (LSST), the Event Horizon Telescope (EHT) that recently released the first black hole image, and others. Several of the SC19 demonstrations will include a fundamentally new concept of “consistent network operations,” in which stable, load balanced, high throughput workflows cross optimally chosen network paths, up to preset high water marks to accommodate other traffic, and are provided by autonomous site-resident services dynamically interacting with network-resident services in response to demands from the science programs' principal data distribution and management systems.
SC19-NRE-020
LHC via SENSE and TIFR
Location: Booth 543 (California Institute of Technology/CACR)
The AutoGOLE is a worldwide community of networks that supports automated network provisioning based on the Network Service Interface (NSI) standard. As a next step, a system of multi-domain and multi-resource provisioning is envisioned which includes co-scheduling/provisioning of wide area network, site network, host, and storage resources. This demonstration will show initial progress on this concept via combining the capabilities of the AutoGOLE and SENSE (Software-defined network for End-to-end Networked Science at Exascale) systems.
SC19-NRE-022
Multi-Domain, Joint Path and Resource Representation and Orchestration
Location: Booth 543 (California Institute of Technology/CACR)
The Yale, IBM, ESnet and Caltech team will demonstrate a novel, unified multi-domain resource discovery and programming system for data-intensive collaborative sciences. Specifically, this system provides a fine-grained, accurate, highly-efficient multi-domain, multi-resource discovery framework and a high-level resource programming and composition framework.
SC19-NRE-023
LSST and AmLightExpress/Protect
Location: Booth 543 (California Institute of Technology/CACR)
AmLight Express and Protect (AmLight-ExP) (NSF Award #1451018) operates three 200G waves, referred to as AmLight Express, and a 100G ring and multiple 10G links, referred to as AmLight Protect, supporting big data science between the U.S. and South America. AmLight-ExP total upstream capacity is presently 630G, operating on multiple submarine cable systems. Total aggregated capacity of all segments is 1.23 Tbps.
The AtlanticWave-SDX (AW-SDX) (NSF Award #1451024) is a distributed, multi-domain, wide-area SDX platform that controls many network switches across the AmLight-ExP network. Southern Crossroads (SoX) in Atlanta, AMPATH in Miami, South America eXchange (SAX) in Fortaleza, SouthernLight in Sao Paulo, and AndesLight in Santiago are exchange points participating in the AtlanticWave-SDX project.
SC19-NRE-024
400GE Ring
Location: Booth 543 (California Institute of Technology/CACR)
400GE First Data Networks: Caltech, StarLight/NRL, USC, SCinet/XNET, Ciena, Mellanox, Arista, Dell, 2CRSI, Echostreams, DDN and Pavilion Data, as well as other supporting optical, switch and server vendor partners, will demonstrate the first fully functional 3 X 400GE local ring network as well as a 400GE wide area network ring, linking the StarLight and Caltech booths and StarLight in Chicago. This network will integrate storage using NVMe over Fabric, the latest high throughput methods, in-depth monitoring and real-time flow steering. These demonstrations will make use of the latest DWDM and Waveserver Ai equipment, along with 400GE and 200GE switches and network interfaces from Arista, Dell, Mellanox and Juniper, as part of this core set of demonstrations. Some of the data flows from multiple NRE demonstration sets from among NRE-13, NRE-19, NRE-20, NRE-22, NRE-23, and NRE-35 will be directed across the 3 X 400GE and 400GE WAN core networks.
The partners will use approximately 15 100G and other wide area links coming into SC19, together with the available on-floor and DCI links to the Caltech and partner booths. An inner 1.2 Tbps (3 X 400GE) core network will be composed on the show floor, linking the Caltech, SCinet, StarLight and potentially other partner booths, in addition to several other booths each connected with 100G links, with Waveserver Ai and other data center interconnects and DWDM providing connectivity to SCinet. The network layout highlights the Caltech and StarLight booths, SCinet, and the many wide area network links to partners' lab and university home sites. The SC19 optical DWDM installations in the Caltech booth and SCinet will build on this progress and incorporate the latest advances.
SC19-NRE-030
Responsive Cache Tiering with Ceph and OSiRIS
Location: Booth 471 (Michigan State University)
In our demonstration we will be creating Ceph cache tiers on storage located at Supercomputing in response to real-time client network demands. Our cache manager module for Ceph will create and drain cache tier overlays automatically based on the geographic proximity of client traffic to any given CRUSH tree location. We hope to show responsive cache storage for clients at higher-latency edge locations that can also rapidly be reconfigured in the background if client needs change.
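The cache manager module's decision logic is the subject of the demo; the sketch below only shows the standard Ceph cache-tiering operations such a module would drive when attaching and later draining an overlay. The pool names and the proximity check are hypothetical, and exact modes and safety flags can vary by Ceph release.

```python
# Sketch of attaching and draining a Ceph cache tier using the documented
# "ceph osd tier" commands; pool names and the proximity decision are hypothetical.
import subprocess

BASE_POOL = "osiris-data"        # hypothetical base (backing) pool
CACHE_POOL = "cache-denver"      # hypothetical cache pool near SC19 clients

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

def attach_cache_tier():
    ceph("osd", "tier", "add", BASE_POOL, CACHE_POOL)
    ceph("osd", "tier", "cache-mode", CACHE_POOL, "writeback")
    ceph("osd", "tier", "set-overlay", BASE_POOL, CACHE_POOL)

def drain_cache_tier():
    # Stop absorbing new writes, flush/evict cached objects, then detach the tier
    # (mode names and safety flags vary by Ceph release).
    ceph("osd", "tier", "cache-mode", CACHE_POOL, "proxy", "--yes-i-really-mean-it")
    subprocess.run(["rados", "-p", CACHE_POOL, "cache-flush-evict-all"], check=True)
    ceph("osd", "tier", "remove-overlay", BASE_POOL)
    ceph("osd", "tier", "remove", BASE_POOL, CACHE_POOL)

def client_traffic_is_mostly_remote():
    # Placeholder for the demo's real decision input (geographic proximity of
    # client traffic to a CRUSH tree location); always True in this sketch.
    return True

if client_traffic_is_mostly_remote():
    attach_cache_tier()
```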
SC19-NRE-031
Dynamic Traffic Management for Ceph and OSiRIS
Location: Booth 471 (Michigan State University)
We explore a programmatic QoS implementation on the Open Storage Research Infrastructure (OSiRIS), a Ceph-based multi-institutional storage platform spanning three core deployment sites at Michigan research universities. The OSiRIS Network Management Abstraction Layer (NMAL) incorporates a declarative language, Flange, that uses PerfSONAR measurements to dynamically adjust traffic shaping policy based on observed network state. At SC19 we extend OSiRIS over the wide-area to Denver, CO and evaluate the effectiveness of our approach.
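Flange itself is declarative; the sketch below is only a rough procedural illustration of the kind of adjustment NMAL can drive, polling a perfSONAR-derived loss figure and tightening or relaxing a Linux tc token-bucket shaper. The interface name, rates, thresholds, and the measurement export path are assumptions.

```python
# Rough procedural sketch of measurement-driven shaping (Flange itself is a
# declarative language; this only shows the kind of action it can drive).
# Interface name, rates, thresholds, and the measurement source are assumptions.
import json
import subprocess

IFACE = "ens6f0"                      # hypothetical OSiRIS-facing interface

def latest_loss_fraction(path="/var/tmp/psonar_loss.json"):
    """Read the most recent packet-loss fraction exported from perfSONAR tests
    (the export path and JSON format are assumptions for this sketch)."""
    with open(path) as f:
        return float(json.load(f)["loss"])

def shape(rate):
    """Apply a token-bucket filter at the given rate, e.g. '5gbit'."""
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                    "tbf", "rate", rate, "burst", "32mbit", "latency", "400ms"],
                   check=True)

loss = latest_loss_fraction()
if loss > 0.01:        # sustained loss observed: back storage traffic off
    shape("5gbit")
elif loss < 0.001:     # clean path: let storage traffic run near line rate
    shape("9gbit")
```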
SC19-NRE-032
ProNet – OpenROADM
Location: Booth 381 (University of Texas at Dallas)
Software Defined Optical Networking! ProNet, OpenROADM, 100 Gbps DTNs, and Virtual Reality!
SC19-NRE-033
DTN Performance with Lustre Parallel Filesystem
Location: Booth 1833 (Compute Canada)
Compute Canada hosts multi-petabyte storage systems using either the Lustre parallel filesystem or the IBM Spectrum Scale filesystem. Ceph storage is also installed at several sites. We will measure the performance of dedicated DTNs connected to Lustre and Ceph filesystems, using the Globus/GridFTP and mdtmFTP data transfer tools.
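A minimal sketch of one such throughput measurement is shown below; the hostnames, file paths, test-file size, and stream count are assumptions, and mdtmFTP would be invoked analogously with its own client.

```python
# Time a single GridFTP transfer from a Lustre- or Ceph-backed DTN and report the
# effective throughput. Hostnames, paths, and the stream count are assumptions.
import subprocess
import time

SRC = "file:///lustre/scratch/testfile-100g"                 # hypothetical source
DST = "gsiftp://dtn.example.computecanada.ca/ceph/testfile"  # hypothetical destination
SIZE_BYTES = 100 * 10**9                                     # assumed 100 GB test file

start = time.time()
subprocess.run(["globus-url-copy", "-vb", "-fast", "-p", "8", SRC, DST], check=True)
elapsed = time.time() - start
print(f"throughput: {SIZE_BYTES * 8 / elapsed / 1e9:.1f} Gbps over {elapsed:.0f} s")
```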
SC19-NRE-035
SANDIE: SDN-Assisted NDN for Data Intensive Experiments
Location: Booth 543 (California Institute of Technology/CACR)
The SANDIE (SDN-Assisted NDN for Data Intensive Experiments) project is based on the novel yet well-founded Named Data Networking (NDN) architecture, supported by advanced Software Defined Network (SDN) services to meet the challenges facing data-intensive science programs such as the Large Hadron Collider (LHC) high energy physics program. The SANDIE project has developed a new and highly effective approach to data distribution, processing, gathering and analysis of results to accelerate the workflow for the CMS experiment at the LHC. This demonstration will exhibit improved performance of the SANDIE system by leveraging three key components: the high-speed NDN-DPDK forwarder, the VIP jointly optimized caching and forwarding algorithm, and an NDN-based filesystem plugin for XRootD. The demonstration results strongly support the overall SANDIE project goal of providing more rapid and reliable data delivery, with varying patterns and granularity over complex networks, progressing in scale from the Terabyte to eventually the Petabyte range in support of the LHC physics and other data-intensive programs.
SC19-NRE-037
End-to-End Container-Based Network Functions and Instrumentation for Programmatic Network Orchestration
Location: Booth 1817 (University of Utah)
SC19-NRE-039
Resilient Distributed Processing
Location: Booth 993 (StarLight)
NRL will demonstrate dynamic distributed processing of large volumes of data across geographically dispersed HPC and network resources, with the ability to detect and locate low level failures in network paths, as well as the ability to rapidly respond to changing resources to meet application demands.