Research / Forschung

Motivation: Networking is more than the sum of its pieces

One may ask: what is networking, and can it ever be understood? For some, networking corresponds to protocols such as IP, UDP, and TCP, or to routing protocols such as OSPF and BGP. For others, networking corresponds to network elements such as hubs, switches, and routers, or even to the way information is transmitted: optically, electrically, or wirelessly. Another group views networking as building, operating, and maintaining an infrastructure such as a LAN or an ISP backbone. Others are interested in observing and characterizing the traffic on such networks. For others still, especially the billions of users of networking technology, networking is equivalent to the applications that the technology enables: the Web, SMS, telephony, to name just a few. For yet another group, networking is a social phenomenon that has changed, and is changing, how humans communicate, how businesses work, and how the military operates. Expectations about interactions between humans, between computers and humans, and between computers have been redefined. This huge spectrum of views is one of the main reasons why networking has been such a success. But it also suggests that in order to understand networking, including its possibilities, its capabilities, its limitations, its scale, and its dangers, one should view networking as more than the sum of its separate and unique parts.

In the past, networking research has tended to focus on the individual parts of a network, and it has produced good designs and a good general understanding of each individual part. Compared to other sciences, networking is fortunate in that networks and their applications have been, and are, designed and implemented by humans. Therefore the protocols and the operation of the devices are known (or can be reverse engineered). Nevertheless there appears to be, at least in this author's view, a big gap between our understanding of the individual parts and our understanding of networking as a whole. A good understanding of networking as a whole would, for example, enable performance debugging. With more and more users and applications relying on a well-performing network, locating the cause of a performance problem can be crucial for the military as well as for the private sector. Unfortunately, a performance problem can have many causes: the protocols themselves, a misbehaving link, a bad application design, a problem in the access network, a misconfiguration, a denial-of-service attack, interactions between two protocols, scale in the sense of a success disaster, and so on. A complicating factor is that the current instrumentation of the network infrastructure, where it exists at all, is component-based rather than service-oriented. Furthermore, the relationship between component performance and service performance is ill understood.

Several tools at our disposal can be expected to help us understand networking as a whole. Since the network is designed and operated by humans, it is at least theoretically possible to instrument it at all levels. The challenges here are to integrate measurement into the design process, to collect data at a variety of locations and levels of the protocol hierarchy, to analyze the data and look for invariants, and, most importantly, to correlate the various datasets with each other. This last step is the most difficult but also the most rewarding, as it can help clarify the interactions between the various network components. Among the main problems are the sheer size of the datasets and the uncertainty in the data itself. Furthermore, the tension between access to operational networks, the protection of data privacy, and acquiring the knowledge necessary to avoid misinterpreting the datasets has to be resolved.
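
As a concrete illustration of the correlation step, consider the minimal sketch below (in Python). The record formats, the one-second binning, and the two example datasets, SNMP-style link-utilization samples and trace-derived loss rates, are hypothetical stand-ins for whatever an operational network actually exposes; the point is only the mechanics of aligning two independently collected time series before comparing them.

    import numpy as np

    def bin_series(timestamps, values, bin_size=1.0):
        """Aggregate (timestamp, value) samples into fixed-size time bins."""
        bins = np.floor(np.asarray(timestamps) / bin_size).astype(int)
        out = {}
        for b, v in zip(bins, values):
            out.setdefault(int(b), []).append(v)
        return {b: float(np.mean(vs)) for b, vs in out.items()}

    def correlate(series_a, series_b):
        """Pearson correlation over the time bins both series cover."""
        common = sorted(set(series_a) & set(series_b))
        if len(common) < 2:
            return float("nan")
        return float(np.corrcoef([series_a[b] for b in common],
                                 [series_b[b] for b in common])[0, 1])

    # Hypothetical inputs: link utilization from SNMP counters and
    # per-second loss rates from a packet trace, binned to 1 second.
    util = bin_series([0.1, 0.7, 1.2, 2.3], [0.55, 0.60, 0.92, 0.95])
    loss = bin_series([0.2, 1.1, 1.9, 2.8], [0.00, 0.01, 0.04, 0.05])
    print("correlation:", correlate(util, loss))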

But analyzing data by itself is not sufficient. Possible explanations of phenomena observed in measurements have to be checked and verified. Here one can use another tool enabled by the good fortune that humans design networks: network simulators. While network simulators allow all kinds of what-if studies, care is needed in the design of the experiments in order to capture the interactions of the various network components as well as the scale and variability of the network. So far, simulators have mainly been used to study individual network components rather than the network as a whole. With the increasing capabilities of network simulators, I expect this to change and to open new opportunities for networking research. Simulation is not the only such tool, however: abstraction, together with theoretical models that allow us to explain certain phenomena, is another, provided it is used appropriately and checked against reality.
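
To make the notion of a what-if study concrete, the toy experiment below sweeps the capacity of a single bottleneck link and reports the resulting mean queueing delay, using the Lindley recursion for a FIFO queue. The Poisson traffic model and all parameters are hypothetical stand-ins; a real experiment would need traffic models validated against measurements, which is precisely the point made above.

    import random

    def mean_wait(capacity_mbps, n_packets=100_000, load_mbps=8.0,
                  mean_pkt_bits=12_000, seed=0):
        """Mean queueing delay (seconds) for Poisson arrivals on one link."""
        rng = random.Random(seed)
        arrival_rate = load_mbps * 1e6 / mean_pkt_bits      # packets per second
        wait, total = 0.0, 0.0
        for _ in range(n_packets):
            # Exponential packet sizes served at the given link capacity.
            service = rng.expovariate(1.0) * mean_pkt_bits / (capacity_mbps * 1e6)
            gap = rng.expovariate(arrival_rate)             # inter-arrival time
            total += wait
            wait = max(0.0, wait + service - gap)           # Lindley recursion
        return total / n_packets

    # What if we upgrade the bottleneck link?
    for cap in (10.0, 20.0, 40.0):
        print(f"capacity {cap:5.1f} Mbps -> mean wait {mean_wait(cap) * 1e3:.3f} ms")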

Another route to improving our understanding of networking as a whole is a better testing strategy for network components. Ideally, each component should be tested for correctness and evaluated for effectiveness in a test environment before it is deployed in a network. Unfortunately, the ability to test many features is limited by the simplicity of current test setups. Typical test-beds consist of a small number of devices and test-traffic generators, all of which suffer from a severe shortcoming: the traffic they generate is not necessarily consistent with the traffic in the network, e.g., in the Internet. Identifying the necessary components of a test-bed, and ways to implement them, remain open problems.
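
The sketch below illustrates one facet of that mismatch: a naive generator emitting constant-size flows versus one drawing flow sizes from a heavy-tailed (Pareto) distribution, as has repeatedly been observed in Internet traffic measurements. The flow sizes and the shape parameter are hypothetical, chosen only to show how differently the two workloads stress a device under test.

    import random

    def constant_flows(n, size=10_000):
        """What a simplistic test-traffic generator often emits."""
        return [float(size)] * n

    def pareto_flows(n, alpha=1.2, x_min=1_000, seed=0):
        """Heavy-tailed flow sizes, closer to measured Internet traffic."""
        rng = random.Random(seed)
        return [x_min * rng.paretovariate(alpha) for _ in range(n)]

    for name, flows in (("constant", constant_flows(100_000)),
                        ("pareto  ", pareto_flows(100_000))):
        flows = sorted(flows)
        top1 = sum(flows[-len(flows) // 100:]) / sum(flows)
        print(f"{name}: mean flow = {sum(flows) / len(flows):9.0f} bytes, "
              f"top 1% of flows carry {top1:.0%} of all bytes")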