Tier 3 Science Data/Network requirements workshop - NJ (East Coast region)

Timezone: US/Central
Location: A10 Jadwin Hall, Princeton University, Princeton, NJ
Richard Carlson (Internet2)
Description

This is the third in a series of one-day workshops to develop a common understanding of how university-based physicists will obtain and use LHC data. The workshop will bring together university-based physicists and the Campus/Regional/Connector/National network providers so that both groups share a common understanding of what is required to meet the science needs. All LHC-related scientists and support staff in the surrounding area for whom this workshop is convenient are invited. A video connection will be provided via VRVS under the Internet2 community in the virtual room MAUI.

  • Internet2 - Advanced tools page
  • Internet2 - Knoppix LiveCD NPToolkit ISO
  • LBNL - Host Tuning Guide
  • PSC - Host Tuning Guide
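
The LBNL and PSC host tuning guides center on sizing TCP socket buffers to the bandwidth-delay product of the path. As an illustration only (the 1 Gb/s rate, 70 ms RTT, and Linux /proc/sys paths below are assumed example values, not figures from the workshop), a short Python sketch of that check:

```python
#!/usr/bin/env python3
"""Rough check of Linux TCP buffer limits against a bandwidth-delay product.

Illustrative sketch only: the 1 Gb/s rate and 70 ms RTT are assumed example
values for a long wide-area path, not figures taken from the workshop.
"""

# Assumed example path: 1 Gb/s at 70 ms round-trip time.
RATE_BITS_PER_SEC = 1_000_000_000
RTT_SEC = 0.070

# Bandwidth-delay product in bytes: the socket buffer needed to keep the pipe full.
bdp_bytes = int(RATE_BITS_PER_SEC / 8 * RTT_SEC)

def read_sysctl(path):
    """Return the whitespace-split contents of a /proc/sys entry, or None if unreadable."""
    try:
        with open(path) as f:
            return f.read().split()
    except OSError:
        return None

# Standard Linux sysctls covered by the host tuning guides.
checks = {
    "/proc/sys/net/core/rmem_max": "maximum receive buffer",
    "/proc/sys/net/core/wmem_max": "maximum send buffer",
    "/proc/sys/net/ipv4/tcp_rmem": "TCP receive buffer (min/default/max)",
    "/proc/sys/net/ipv4/tcp_wmem": "TCP send buffer (min/default/max)",
}

print(f"Target bandwidth-delay product: {bdp_bytes:,} bytes")
for path, label in checks.items():
    values = read_sysctl(path)
    if values is None:
        print(f"{label}: {path} not readable (non-Linux host?)")
        continue
    limit = int(values[-1])          # last field is the maximum
    verdict = "ok" if limit >= bdp_bytes else "too small for this path"
    print(f"{label}: {limit:,} bytes -> {verdict}")
```

If the reported limits fall short of the bandwidth-delay product, the tuning guides describe how to raise them.
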
Participants
  • Anne Shelton
  • Aret Carlsen
  • Austin Napier
  • Charles von Lichtenberg
  • Chris Tully
  • Curtis Hillegas
  • Davide Gerbaudo
  • Dmitry Malyshev
  • Gregory Palmer
  • Jeff Edwards
  • Jimmy Kyriannis
  • John Bigrow
  • Ken Tindall
  • Leo Donnelly
  • Matt Crawford
  • Mohammad Alam
  • Paul Henderson
  • Peter Gutierrez
  • Peter Heverin
  • Peter Olenick
  • Rich Carlson
  • Salvatore Torquato
  • Samya Zain
  • Timothy Lance
  • Vinod Gupta
  • William Wichser
  • Wilson Dillaway
Timetable
    • 1
      Welcome
      Greetings from local hosts
      Speakers: Prof. Chris Tully (Princeton University), Mr Vinod Gupta (Princeton University)
    • 2
      Background on LHC Computing and Networking
      Background information describing the LHC ATLAS and CMS computing models
      Speaker: Mr Rich Carlson (Internet2)
      • a) LHC computing models
        The ATLAS and CMS groups have different computing models. Representatives from each group will present their model.
        Speaker: Mr Rich Carlson (Internet2)
        Slides
      • b) ATLAS computing model
        A brief overview of the US-ATLAS computing model
        Speaker: Dr Shawn McKee (Univ. of Michigan)
        Slides
      • c) CMS computing model
        A brief overview of the US-CMS computing model
        Speaker: Matt Crawford (Fermilab)
        Slides
    • 09:45
      break
    • 3
      Overview of compute resources and usage plans
      Attendees will hear from physicists representing the LHC ALICE, ATLAS, and CMS experiments. Presenters will discuss their institutional, regional, and local computational resources, data storage, and processing facilities. They will also discuss their usage plans for LHC-related Monte Carlo and detector data.
      • a) Resource usage on CMS
        Accessing data and doing analysis on CMS
        Speaker: Ian Fisk (Fermilab)
        Slides
      • b) LHC Data Analysis
        Current usage pattern for Princeton researchers and the growing need for network speed and computing resources
        Speaker: Prof. Chris Tully (Princeton University)
        Slides
      • c) Discussion of Analysis Needs
        Physicists in attendance comment on what is done at their labs and universities
    • 4
      Overview of network infrastructures
      Network operators will describe the network infrastructure currently available to transmit and receive LHC-related data
      • a) State/Regional Infrastructure Issues
        A brief overview of network infrastructure issues facing state/regional operators
        Speaker: Greg Palmer (MAGPI, Director)
        Slides
      • b) Campus Infrastructure Issues
        A brief overview of network infrastructure issues facing campus operators
        Speaker: Mr Peter Olenick (Princeton University)
        Slides
      • c) National Infrastructure issues
        A brief overview of network infrastructure issues facing national operators
        Speaker: Mr Joe Metzger (ESnet)
        Slides
    • 12:00
      Lunch
    • 5
      Technology updates
      Attendees will learn about new/emerging technologies that can be used by LHC experimenters
      • a) TeraPaths - Network Path configuration tool
        The TeraPaths project is developing tools and procedures that will allow scientists to request end-to-end virtual network paths between LHC sites
        Speaker: Dr Dimitrios Katramatos (BNL)
        Slides
      • b) UltraLight - a new science network
        The UltraLight project is exploring new network technologies with a focus on high-end science
        Speaker: Richard Cavanaugh (University of Florida)
        Slides
      • c) Advanced Diagnostic and tuning tools
        New tools and documentation can help scientists and support staff quickly find and fix network performance problems (a minimal throughput-check sketch appears after the agenda).
        Speaker: Mr Rich Carlson (Internet2)
        Slides
    • 6
      Regional and local grid projects
      Discussion of grid projects that can provide resources to the LHC experimenters.
      Speaker: Ruth Pordes (Fermilab)
      more information
      Slides
    • 15:00
      break
    • 7
      Science/Technology match
      Guided discussion on using network services
    • 8
      Roadmap
      Guided discussion on next steps, services, and missing pieces.
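
For the diagnostic and tuning tools discussed in item 5c, the underlying measurement is usually a memory-to-memory TCP transfer between the end host and a test server. A minimal sketch of that idea in Python (the host name, port, and duration below are placeholders; this is an illustration of the technique, not one of the tools presented):

```python
#!/usr/bin/env python3
"""Minimal memory-to-memory TCP throughput check.

Sketch only: stream bytes from memory to memory and report the achieved rate.
Run `recv` mode on the test server first, then point the sender at it.
"""
import socket
import time

PORT = 5201            # placeholder port
CHUNK = 1 << 20        # 1 MiB send/receive buffer
DURATION = 10          # seconds to transmit

def receiver(bind_addr=""):
    """Accept one connection and report the achieved receive rate."""
    with socket.create_server((bind_addr, PORT)) as srv:
        conn, peer = srv.accept()
        total = 0
        start = time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"{peer[0]}: {total * 8 / elapsed / 1e6:.1f} Mb/s over {elapsed:.1f} s")

def sender(server_host):
    """Stream zero-filled buffers for DURATION seconds, then close."""
    payload = bytes(CHUNK)
    with socket.create_connection((server_host, PORT)) as sock:
        end = time.time() + DURATION
        while time.time() < end:
            sock.sendall(payload)

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1 and sys.argv[1] == "recv":
        receiver()                 # run this side first on the test server
    else:
        sender(sys.argv[1])        # e.g. python3 tcp_check.py testserver.example.edu
```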