11–12 Sep 2012
Washington DC
US/Eastern timezone
The US Department of Energy (DOE), in conjunction with academic researchers, is organizing an exploratory workshop on the computational modeling of big networks. The purpose of this workshop is to investigate the major challenges and potential impact of a dedicated research program in this area. It will take place on September 11-12, 2012, in Washington DC.
To participate, submit an extended abstract that either summarizes your prior work in large-scale computational modeling and analysis of networks or describes novel ideas, approaches, or applications in this area.


Venue: American Geophysical Union, Room A
2000 Florida Ave NW, Washington DC 20009
Science of Networking
Most networking research is driven by the creation of artifacts such as protocols, applications, middleware, routers and other network elements. The Internet itself is one amazingly successful outcome of this approach. Despite this success, we have come to realize that we only partly understand how the Internet works. This is brought into sharp focus when an underlying component of the Internet malfunctions or is under attack.
If we were to look at the Internet in the same way that a biologist looks at a living organism or a geophysicist looks at the climate system, what would be the Internet research agenda? How would we try to understand, monitor, and control the Internet, considering the full extent of its complexity? Would our approach change? Should it?

Vertical understanding
Networking research typically focuses on individual components or layers of the overall Internet architecture. For example, a research project on congestion control typically gives little or no consideration to how applications actually use the network or how users react to congestion events. This is a direct outcome of a reductionist approach to scientific investigation. In reality, however, users, applications, and transport protocols are all interdependent, and it is precisely their interactions that create much of the complexity in determining end-to-end performance.
By "vertical understanding" we refer to a research agenda that aims to understand networks in a holistic manner, starting from the users and applications, and socio-economic structures at the top all the way down to effects that occur at the physical layer. Through a careful analysis of this vertical path, we may be able to discover complex interactions and important effects in network behavior and performance that we can only suspect at this point.

Horizontal understanding
A typical end-to-end Internet path today is highly heterogeneous in both the infrastructure and the policy boundaries it traverses (consider the path between a mobile user on a smartphone and media-rich content served by several different CDNs and data centers). Yet most models of Internet performance still rely on simplistic models of individual queues, small-scale simulation topologies, and interdomain routing topologies that collapse an entire Autonomous System into a single node.
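As a concrete illustration of how simple these canonical models are: the classic M/M/1 queue (a single server, Poisson packet arrivals at rate \lambda, exponential service at rate \mu) gives the mean per-packet delay in closed form,

    T = \frac{1}{\mu - \lambda}, \qquad \lambda < \mu.

A formula this compact cannot begin to describe an end-to-end path that crosses dozens of heterogeneous devices and several administrative domains.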
By "horizontal understanding" we refer to a research agenda that aims to capture the diverse nature of end-to-end Internet paths, considering both the technological heterogeneity along a path as well as the policy and economic boundaries that are crossed by those paths.

Large-scale computational modeling and analysis
The two objectives of vertical and horizontal understanding will most likely demand new research methods and tools. We suspect that existing analytical tools and experimental approaches (such as testbeds) will not be sufficient to capture the vertical and horizontal complexity of the Internet.
Instead, we believe that large-scale computational modeling, a powerful research tool that networking researchers have left largely unexplored, may be the right approach to pursue these objectives. Our motivation comes from other disciplines, such as climate science and physics, that have long used supercomputers and large-scale computational modeling with great success. What if we could construct large-scale computational models that capture what happens in a computer network all the way from the transmission of bits to the behavior of the latest complex, multifaceted Internet applications? What would be the major challenges and objectives of such a research agenda? This, then, is the focus of this workshop.
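To make the idea concrete, the sketch below shows, at its very smallest scale, the kind of packet-level discrete-event model this agenda would push to many orders of magnitude larger: a single FIFO bottleneck link fed by Poisson arrivals. It is purely illustrative; the function name and parameters are ours, not a model proposed by the workshop.

    import random

    # Toy packet-level model of a single FIFO bottleneck link.
    # All names and parameters are illustrative, not from the workshop.

    def simulate_bottleneck(arrival_rate, service_rate, num_packets, seed=1):
        """Return the mean per-packet delay through one FIFO link."""
        rng = random.Random(seed)
        clock = 0.0            # arrival time of the current packet
        link_free_at = 0.0     # when the link finishes its previous packet
        total_delay = 0.0
        for _ in range(num_packets):
            clock += rng.expovariate(arrival_rate)   # next Poisson arrival
            start = max(clock, link_free_at)         # wait if the link is busy
            link_free_at = start + rng.expovariate(service_rate)
            total_delay += link_free_at - clock      # queueing + transmission
        return total_delay / num_packets

    if __name__ == "__main__":
        # 80% utilization: M/M/1 theory predicts 1/(mu - lambda) = 5.0
        print(simulate_bottleneck(arrival_rate=0.8, service_rate=1.0,
                                  num_packets=200_000))

At 80% utilization this toy reproduces the M/M/1 prediction above; the challenge posed by the workshop is to couple millions of such queues with routing, transport dynamics, application behavior, and user reaction at the scale of the real Internet.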

Abstracts:
Submitted abstracts must be in plain text and no longer than 1500 words. They will not be peer-reviewed; instead, they will be used, based on their relevance or novelty, to select up to roughly 30 invited participants. Invited abstracts will be distributed informally at the workshop to facilitate discussion, but they will not be formally published.

Important dates:
Extended Abstract Submission: May 15, 2012
Invitation of Participants: June 15, 2012
Workshop: September 11-12, 2012, in Washington DC.

Organizers:
kc claffy, CAIDA
David Clark, MIT
Constantine Dovrolis, Georgia Tech  (Chair)
Richard Fujimoto, Georgia Tech
John Heidemann, USC-ISI
Srinivasan Keshav, University of Waterloo
Don Towsley, University of Massachusetts, Amherst
Zhi-Li Zhang, University of Minnesota

For any additional information, please email combine2012@cc.gatech.edu.