This page contains support information about Science DMZ. For a general overview of Science DMZ please go to our services section.

Why Science DMZ?

A laboratory or university campus network typically supports multiple organisational missions. First, it must provide infrastructure for network traffic associated with the organisation’s normal business operations including email, procurement systems, and web browsing, among others. The network must also be built with security features that protect the financial and personnel data of the organisation. At the same time, these networks are also used as the foundation for the scientific research process as scientists depend on this infrastructure to share, store, and analyse research data from many different external sources.

In most cases, however, networks optimised for business operations are neither designed for nor capable of supporting the data movement requirements of data intensive science. When scientists attempt to run data intensive applications over these so-called “general purpose” networks, the result is often poor performance - in many cases poor enough that the science mission is significantly impacted.

Since many aspects of general-purpose networks are difficult or impossible to change in the ways necessary to improve their performance for science applications, the network must be adapted to allow it to support science applications without affecting the operation of the general-purpose network.

The Science DMZ model accomplishes this by explicitly creating a portion of the network that is specifically engineered for science applications and does not include support for general-purpose use. By separating the high-performance science network (the Science DMZ) from the general-purpose network, each can be optimised without interfering with the other.

While the core mission of a Science DMZ is the support of high-performance science applications, this cannot occur in isolation. Scientific collaboration, like any other network-enabled endeavour, is inherently end-to-end. The Science DMZ can easily incorporate wide area science support services, including virtual circuits and software-defined networking (SDN), and new technologies such as 100 gigabit Ethernet. While a general-purpose network might struggle to make effective use of technologies and services such as these, a Science DMZ allows the local science resources to be connected to the network services required for the conduct of the science, without interference with the general-purpose networking infrastructure.

Development history of Science DMZ

The Science DMZ architecture has its roots in several aspects of networking – design, operations, and security. The term “Science DMZ” comes from the “DMZ networks” that are a common element in network security architectures. The traditional DMZ is a special-purpose part of the network, at or near the network perimeter, designed to host the site services facing the outside world (e.g. external web, incoming email, and authoritative DNS servers). The security policies, network device configuration, and so forth are tailored for the DMZ, and are not conflated with the security policies and configurations of the internal local area network (LAN) infrastructure.

The Science DMZ adapts this notion to the task of supporting high-performance science applications, including bulk data movement and data-intensive experimental paradigms. The Science DMZ is a dedicated portion of a site or campus network, located as close to the network perimeter as possible, which is designed and configured to provide optimal support for high-performance science applications. Included in the Science DMZ are the capabilities to characterise and troubleshoot the network so that performance problems can be resolved quickly - this is typically achieved by deploying perfSONAR hosts in the Science DMZ that can test to the perfSONAR hosts in the wide area and in other Science DMZs at collaborating laboratories and universities.


Solving TCP performance issues

TCP has been characterised as the "fragile workhorse" of the network protocol world. While most science applications that need reliable data delivery use TCP-based tools for data movement, TCP’s interpretation of packet loss can cause performance issues. TCP interprets packet loss as network congestion, and so when loss is encountered TCP dramatically reduces its sending rate. The rate slowly ramps up again, but if further loss is encountered the rate is further reduced. This becomes more dramatic as the distance between communicating hosts is increased. In practice even a tiny amount of loss (much less than 1%) is enough to reduce TCP performance by over a factor of 100.
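The loss sensitivity can be quantified with the well-known Mathis et al. throughput bound, which says a TCP flow's rate cannot exceed roughly MSS / (RTT × √p) for loss probability p. A short sketch (the parameter values are illustrative):

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate TCP throughput ceiling (Mathis model):
    rate <= MSS / (RTT * sqrt(p)), in bits per second."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

mss = 1460  # typical Ethernet MSS in bytes
for rtt_ms in (1, 10, 100):
    for loss in (1e-6, 1e-4, 1e-2):
        gbps = mathis_throughput_bps(mss, rtt_ms / 1000, loss) / 1e9
        print(f"RTT {rtt_ms:>3} ms, loss {loss:.0e}: <= {gbps:8.3f} Gbps")
```

Because throughput scales with 1/√p, raising the loss rate from one packet in a million to one in a hundred cuts the achievable rate by a factor of 100, and the penalty grows linearly with RTT.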

It is typically much easier to architect the network to accommodate TCP rather than to fix TCP to be more loss-tolerant. This means that the network infrastructure that supports high-performance TCP-based science applications must provide loss-free IP service to TCP in the general case. The Science DMZ model allows a laboratory, campus, or scientific facility to build a special-purpose infrastructure that can provide the necessary services to allow high-performance applications to be successful.

Science DMZ architecture

The capabilities required to effectively deploy and support high-performance science applications include high bandwidth, advanced features, and capable gear that does not compromise on performance. Operational requirements drive the need for simplicity, accountability, accuracy, and the easy integration of test and measurement services. Security requirements come from the need to ensure correctness, prevent misuse, and avoid the embarrassment or other negative publicity that can compromise the reputation of the site or the science.

The Science DMZ architecture meets these needs by instantiating a simple, scalable network enclave that explicitly accommodates high-performance science applications while explicitly excluding general-purpose computing and the additional complexities that go with it.

Ideally, the Science DMZ is connected directly to the border router in order to minimise the number of devices that must be configured to support high-performance data transfer and other scientific applications. Achieving high performance is very difficult to do with system and network device configuration defaults, and the location of the Science DMZ at the site perimeter simplifies the system and network tuning processes. Also, if there is a performance problem, it is much easier to troubleshoot a handful of devices rather than a large general-purpose LAN infrastructure.

Network components

When choosing the network components for a Science DMZ, you should consider the following issues:

  • Make sure your routers and switches have enough buffer space to handle "fan-in" issues, and are configured to use this buffer space.
  • Check whether the hardware is supported by the OSCARS virtual circuit system, so that layer-2 circuits can easily be extended all the way to the DTN hosts.
  • Look for devices that have flexible ACL (Access Control List) support to eliminate the need for stateful firewalls that will slow down the DTN hosts.
  • Consider deploying devices that will support OpenFlow, as software-defined networking via OpenFlow is a promising new technology for Science DMZs in the future.
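As a rough illustration of the buffer-sizing point above, a classic rule of thumb sizes the egress buffer at the bandwidth-delay product (BDP) of the long-distance paths being served. This sketch uses illustrative numbers, not a vendor recommendation:

```python
def bdp_bytes(link_bps, rtt_s):
    """Bandwidth-delay product: the rule-of-thumb buffer size needed to
    keep a single long-distance TCP flow loss-free during fan-in bursts."""
    return int(link_bps * rtt_s / 8)

# Example: a 10 Gbps egress port serving transfers with a 100 ms RTT.
print(f"BDP buffer for 10 Gbps at 100 ms RTT: {bdp_bytes(10e9, 0.100) / 1e6:.0f} MB")
```

A buffer of this scale (here 125 MB) is far larger than the shallow per-port buffers on many commodity switches, which is why buffer depth is worth checking before purchase.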

Performance monitoring

The Science DMZ architecture includes a test and measurement host based on perfSONAR. This host helps with fault diagnosis on the Science DMZ, and with end-to-end testing to collaborating sites that also have perfSONAR installed. The perfSONAR host can run continuous checks for latency changes and packet loss using OWAMP, as well as periodic throughput tests to remote locations using BWCTL. If a problem arises that requires a network engineer to troubleshoot the routing and switching infrastructure, the tools necessary to work the problem are already deployed - they need not be installed before troubleshooting can begin.

Security - firewalls vs. router ACLs

We suggest that firewalls not be used to protect Science DMZs due to the negative impact they have on performance. Instead, router ACLs and other security best practices should be used. This may seem a controversial statement, so we explain our stance in the remainder of this page.

The defence of information systems is an essential function of a modern enterprise. This is true whether the information systems are used for human resources and other business applications, scientific discovery, or any other function. One of the great workhorses of network security is the stateful firewall appliance, and firewalls work well for standard business applications - this is the primary purpose for which they are designed. However, many scientific applications require very high network performance - not just in link speed, but in throughput delivered to the application.

Firewalls are typically designed and built in ways that make them ill-suited to high-performance science environments. As described below, firewalls have no special analysis features for scientific applications - all they can do is filter network traffic for science applications by IP address and port number. This is very similar to the ways in which a router Access Control List (ACL) is used to provide security protections - namely, the filtering of traffic by IP address and port number.

One great advantage of the Science DMZ model is that it allows network and security architects to optimise the tools and technologies employed in the defence of science-critical systems. In the Science DMZ model, ACLs are used to defend high-performance scientific applications, and institutional or departmental firewalls are used to defend business and end-user systems - just as they are today. Since ACLs are usually implemented in the router's forwarding hardware, they typically do not compromise the performance of high-performance applications.
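The stateless nature of an ACL check can be sketched in a few lines: every decision comes from the packet header and a static rule list, with nothing stored between packets, which is what lets it run in forwarding hardware at line rate. The addresses and the rule below are hypothetical (port 2811 is the GridFTP control port):

```python
import ipaddress

# Hypothetical ACL: (source prefix, destination prefix, dest port, action).
ACL = [
    (ipaddress.ip_network("192.0.2.0/24"),
     ipaddress.ip_network("198.51.100.10/32"), 2811, "permit"),
]

def acl_check(src_ip, dst_ip, dst_port):
    """Stateless per-packet match: first matching rule wins."""
    for src_net, dst_net, port, action in ACL:
        if (ipaddress.ip_address(src_ip) in src_net
                and ipaddress.ip_address(dst_ip) in dst_net
                and dst_port == port):
            return action
    return "deny"  # implicit deny at the end of the list

print(acl_check("192.0.2.7", "198.51.100.10", 2811))    # permit
print(acl_check("203.0.113.5", "198.51.100.10", 2811))  # deny
```

Note what is absent compared with a stateful firewall: no connection table, no timers, no per-flow memory to exhaust.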

As data-intensive science becomes the norm in many fields of science, high-performance data mobility is rapidly becoming a core scientific infrastructure requirement. By deploying a Science DMZ, a research institution can both achieve high performance and defend its systems without having to make the choice between network security and the science mission of the institution.

Security for a data-intensive science environment located on the Science DMZ can be tailored for the data transfer systems on the Science DMZ. These hosts typically run a well-defined and limited set of special-purpose applications rather than the usual array of user applications. Since the Science DMZ resources are assumed to interact with external systems and are isolated from, or have carefully managed access to, internal systems, the security policy for the Science DMZ is tailored for these functions rather than to protect the interior of the general site LAN.

Firewall functions

The primary function of a firewall ruleset is to permit or deny network traffic using packet header information in a process where each packet is typically matched against the firewall ruleset. The primary criteria used to decide whether a packet conforms to security policy or not are source IP address, source port (if the packet is a TCP or UDP packet), destination IP address, and destination port. This section describes, in general terms, the high-level operations performed by most firewalls.

The firewall maintains a lookup table that tracks the protocol state of the individual permitted connections (identified by the 4-tuple of source/destination address and port) traversing the firewall in real time. When a packet arrives at the firewall, the firewall looks up the address/port 4-tuple of the incoming packet in its connection state table. If the packet matches a state table entry, the state table entry is updated and the packet is permitted. If there is no state table entry, the packet is matched against the firewall ruleset. If the packet is permitted by the firewall ruleset, a new state table entry is created and the packet is permitted. If the packet is not permitted by the ruleset, the packet is dropped. The state table is central to the operation of the firewall - if the state table fills, new entries cannot be created (and therefore no new connections can be established across the firewall). Also, the state table allows for significant performance leverage, because an address/port 4-tuple lookup is a fast operation. However, the resources of the state table are finite, and so the state table must be managed.
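The lookup logic just described can be sketched as a toy model (not any vendor's implementation; the port-443 ruleset is a stand-in):

```python
STATE_TIMEOUT = 900   # e.g. a 15-minute idle timeout
state_table = {}      # 4-tuple -> last-seen timestamp

def ruleset_permits(pkt):
    # Stand-in for the real ruleset: permit new connections to port 443.
    return pkt[3] == 443

def handle_packet(pkt, syn, now):
    """pkt = (src_ip, src_port, dst_ip, dst_port); returns 'permit' or 'drop'."""
    last_seen = state_table.get(pkt)
    if last_seen is not None and now - last_seen < STATE_TIMEOUT:
        state_table[pkt] = now   # fast path: refresh the entry and permit
        return "permit"
    state_table.pop(pkt, None)   # any idle entry has aged out
    # Only connection-opening (SYN) packets are matched against the
    # ruleset; a mid-connection packet whose state has aged out is
    # simply dropped and the connection cannot be re-established.
    if syn and ruleset_permits(pkt):
        state_table[pkt] = now   # new connection: create a state entry
        return "permit"
    return "drop"

flow = ("192.0.2.7", 51000, "198.51.100.10", 443)
print(handle_packet(flow, syn=True, now=0))      # permit: entry created
print(handle_packet(flow, syn=False, now=60))    # permit: entry refreshed
print(handle_packet(flow, syn=False, now=3000))  # drop: entry timed out
```

The third packet is dropped not because policy forbids it, but because its state entry expired while the connection was idle - the failure mode discussed in the next paragraph.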

When a connection terminates normally (e.g. if the connection is a TCP connection and the firewall sees a FIN/ACK sequence in both directions, or if the firewall sees a reset for a TCP connection), the firewall removes the associated connection state from the state table. Some protocols such as UDP do not have explicit connection state, and so it is harder to tell when to clean up the connection state. Also, TCP connections often do not terminate normally (e.g. when a user simply closes a laptop, open connections just stop sending traffic without cleaning up). In order to prevent the state table from filling up, state table entries are managed by a timer. If a state table entry has not been updated after a timeout interval (typically 5 to 15 minutes, and potentially different for different protocols), the firewall assumes that the connection is dead and removes the connection from the state table. However, once the state table entry is gone, packets from that connection will be denied by the firewall - the firewall will not re-establish a state table entry for packets from the middle of an established connection.

Modern firewalls can manage very large state tables - millions of connections are typically supported. This traffic processing model is well-matched to the traffic profile of the modern business enterprise, which typically consists of a large number of short duration connections of relatively low data volume. The firewall can be built from many parallel packet processing engines that can be combined to create a firewall capable of processing millions of connections at 10 gigabits per second. This design pattern - the parallel processing of a large number of simultaneous connections by using a set of processing engines - is common to many firewalls.

In addition to address/port matching and connection state management, many more advanced firewalls are able to use deep packet inspection to track application-layer behaviour. They can detect email traffic, and scan emails for viruses on the fly, they can analyse web traffic to look for hostile behaviour, and so on. These application-layer functions exist because of the large number of enterprise customers that run the applications - there is a broad market for this functionality, and it is worth the investment in R&D to build more advanced analysis for common protocols into a firewall. In contrast, firewall vendors typically do not include application-layer analysis for scientific applications - the market is too small to make it worth building the analysis tools into the firewall appliance.

Interaction of firewalls with data-intensive science

A common task in data-intensive science is the movement of a large amount of data (several terabytes or more) from one location to another. The reason is typically to get the data to a storage or analysis resource of some kind. The transfer of large data sets typically involves a small number of TCP connections that use a significant fraction of the available path bandwidth. Also, such transfers can take a long time (sometimes hours), and the data transfer applications need to communicate at the beginning and end of the transfer, but not always in the middle while the bulk data movement is occurring. This traffic profile is a poor match for common firewall designs in several different respects.
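To put the scale in perspective, a sketch of the arithmetic (the data set size and sustained rates are hypothetical):

```python
def transfer_hours(data_tb, rate_gbps):
    """Wall-clock hours to move data_tb terabytes at a sustained rate_gbps."""
    return data_tb * 8e12 / (rate_gbps * 1e9) / 3600

for rate_gbps in (0.1, 1, 5):
    print(f"10 TB at {rate_gbps:>4} Gbps sustained: "
          f"{transfer_hours(10, rate_gbps):6.1f} hours")
```

At 1 Gbps sustained, a 10 TB data set takes roughly 22 hours, so a few long-lived TCP connections remain open (and partly idle) for the whole period - exactly the profile that sits badly with firewall designs.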

  • Firewalls are designed to manage a large number of connections - data transfer applications typically use only a few.
  • Firewalls are often composed of many processing engines with a peak performance that is significantly less than the overall device bandwidth (e.g. on a 10Gbps firewall, a set of 8 packet processors capable of 1.2Gbps each is typical).

When the internal data path for a network device is slower than the interface speed of the device (as is the case for the 10Gbps firewall described above), high-performance applications can induce packet loss at data rates significantly less than the nominal bandwidth of the network. Because of the bursty nature of TCP it is often easy to cause loss inside a firewall that is built in this way. Consider the example of a data transfer host with 10GE interfaces - the host will send 10Gbps packet bursts which the firewall above can process at 1.2Gbps. The firewall must buffer the 10Gbps burst while the packets are processed at the lower rate, and some packets will be dropped unless the firewall's buffer can hold the burst until the firewall can process the packets. Unless the firewall has reliable loss counters (and unless the security group that owns the firewall can be persuaded to publish those counters), all that the scientists can see is that "the network" performs poorly because of packet loss caused by the firewall.
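A back-of-envelope sketch of the buffering requirement in the example above (10 Gbps line rate draining into a 1.2 Gbps processing engine; the burst size is hypothetical):

```python
def buffer_needed_bytes(burst_bytes, line_rate_bps, process_rate_bps):
    """Bytes the device must buffer while a line-rate burst drains
    through a slower internal processing engine."""
    burst_time = burst_bytes * 8 / line_rate_bps       # time to receive the burst
    drained = process_rate_bps * burst_time / 8        # bytes processed meanwhile
    return max(0, int(burst_bytes - drained))

# A 10 MB burst arriving at 10 Gbps, processed by one 1.2 Gbps engine:
print(buffer_needed_bytes(10_000_000, 10e9, 1.2e9), "bytes of buffer required")
```

In this example almost 9 MB of the 10 MB burst must be held in the buffer; anything beyond the buffer's actual depth is silently dropped, and TCP interprets that loss as congestion.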

The management of the firewall's state table, particularly the removal of idle connections after a short time interval of inactivity, is a critical point of tension between long-running scientific applications and stateful firewalls. If the firewall is configured to prevent the exhaustion of its state table, it breaks long-running applications with network connections that are idle while the application is doing other work, often on terabyte-scale data sets. If the firewall is configured with a connection state timeout that matches the standard keepalive timer in host TCP/IP stacks (2 hours), the firewall is vulnerable to failure due to state table exhaustion. This is not a theoretical concern - we have seen data corruption and transfer failures because of this problem (one case involved a large data transfer where the control connections were aged out of the firewall state table before the data connections completed, resulting in transfer failures).
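One application-side mitigation, where the firewall cannot be removed from the path, is to enable TCP keepalives with an idle interval shorter than the firewall's state timeout, so that idle control connections keep refreshing their state table entries. A sketch using the standard socket API (TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific options):

```python
import socket

def enable_keepalive(sock, idle_s=120, interval_s=30, probes=4):
    """Send keepalive probes after idle_s seconds of silence - well under
    a typical 5-15 minute firewall state timeout."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-specific tuning knobs
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print("keepalive enabled:", bool(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)))
s.close()
```

This works around the symptom for one application; it does not change the underlying tension between long-idle scientific flows and state table management.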

Note that hosts behind the site firewall that try to access their own local Science DMZ can often achieve reasonable performance. The reason is that the very low latency between the local Science DMZ and the local users results in some of the issues caused by the site perimeter firewall being much less of a problem in practical terms. TCP recovers from loss quickly at low latencies, and short-distance TCP dynamics are different enough from the TCP dynamics in long-distance transfers that packet loss that would exist if the wide area data transfers traversed the firewall may not even exist when local users access Science DMZ resources. The key is to provide the long-distance TCP connections with un-congested, loss-free service.
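The RTT dependence of loss recovery can be sketched with classic TCP Reno behaviour: on a loss the window halves, then grows by one segment per RTT, so recovery time scales with the window size times the RTT. The numbers below are illustrative:

```python
def reno_recovery_seconds(rate_bps, rtt_s, mss_bytes=1460):
    """Approximate time for TCP Reno to climb back to full rate after a
    single loss: the window halves, then grows one MSS per RTT."""
    full_window_segments = rate_bps * rtt_s / (mss_bytes * 8)
    return (full_window_segments / 2) * rtt_s

for rtt_ms in (1, 100):
    t = reno_recovery_seconds(10e9, rtt_ms / 1000)
    print(f"10 Gbps path, {rtt_ms:>3} ms RTT: ~{t:,.1f} s to recover")
```

At a 1 ms campus RTT recovery takes well under a second, while at a 100 ms wide-area RTT the same single loss costs over an hour of reduced throughput - which is why local users may never notice loss that cripples long-distance transfers.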


Report a fault

If you are having issues with Science DMZ that are not listed in the maintenance calendar, please call our 24/7 service desk on 0508 466 466 or complete the report a fault form.