CERN/Fermilab/DUNE Interface

US/Central
Description
This meeting will be held using ReadyTalk Conferencing: https://www.readytalk.com. The Meeting ID is 9763914. A list of international phone numbers can be found at: https://www.readytalk.com/rt/an.php?tfnum=8667401260.
CERN/Fermilab/DUNE Interface (06 Apr 2016)

- https://indico.fnal.gov/conferenceDisplay.py?confId=11878

- attendance: Amir, Maxim, Dario, Tom, Steve, Stu, OLI (and others I forgot)

- changing the meeting schedule
    - shifting the bi-weekly meeting by one week; next meeting is next week, April 13th
    - moving the meeting to 8 AM CST

- Question: who is representing Fermilab and not DUNE
    - Steve and Stu are present, who represent both Fermilab and DUNE
    - OLI is here for Fermilab as well

- Action items from last week
    - questions about first resource request to CERN and FNAL (see below)
    - first list of computing contact person(s) at CERN
        - DUNE S&C Organization discussions resulted in proposal to DUNE management
        - Waiting for spokespeople decision

- data handling
    - Maxim made some edits and improved explanations
    - Brett had comments on the diagrams
    - Maxim hopes to have changes in place that warrant a release in the next few days
    - One example: FTS/root is the current choice and needs to be evaluated

- Testing
    - first functional test between FNAL and BNL
        - time scale: order of weeks
    - then deploy at CERN and test
        - functional equivalent of the production system
        - time scale: by the end of the year
    - then we plan scale tests

- 3/1/1 is more of a technical test, no beam, just cosmics
    - no particularly sophisticated data handling plan

- resource requests
    - CERN: Bernd mailed around what he has reserved for the neutrino platform
        - 200 TB disk space in EOS  (disk only)
        - 400 TB in Castor (tape space)
        - 500 cores (== 500 concurrent running jobs) in our batch system
        - a few DB on demand systems
        - clarifications from Bernd in a follow-up mail:
            - the resource numbers I gave are a bit of guesswork, sized so that I can easily provide them without starting a serious money/budget discussion (this discussion is ‘postponed’, but not cancelled …)
            - every user gets a share of AFS space anyway (up to 100 GB), plus each ‘experiment’ can ask for AFS project space (1-2 TB), which can be used for software installation or calibration data etc.
            - AFS and CVMFS are by default available on all batch nodes, thus you can also ask for some space in the common CVMFS area for software distribution
            - for ntuple space one could use AFS, but one has to cope with some performance limitations; I would suggest to rather use EOS also for ntuple storage (or a mixture)
            - one can create a project in our virtualized environment which could be used for analysis; the interactive lxplus facility is also used for this; the VMs would need to be managed by yourself (i.e. which Linux version, tools, compiler, etc.)
            - there is currently some activity to build a prototype DAQ + online farm in building 182 (Dario); IT will provide some ‘older’ equipment for this purpose; can we agree on using this as a general testbench, or do we need one per experiment?
            - the analysis at CERN runs in general on lxplus and the batch system, plus end-user ROOT analysis on the ‘desktop’; the mentioned 500 cores would be the pledge for you, split across the batch system and the VM system
    - Fermilab
        - Discussed SC PMT recommendations
        - Tom clarified that DUNE is not being scaled back, only older experiments
            - It was also acknowledged that DUNE will ramp up significantly in 2018
        - Still, Tom would like to put some contingency into the plan in case the software cannot stay within 2 GB of memory and/or OSG utilization is not possible because of libraries, etc.
        - Amir said that DUNE will need its own operations team beyond OPOS, because the workload management and data management systems are important
    - General
        - Many questions on the processing and analysis models, and on what is possible/default operation at CERN
            - Amir: is doing analysis only at Fermilab sufficient?
            - To define protoDUNE requirements (for example, what latency is acceptable for online processing to inform data taking), protoDUNE needs to know the possibilities and standard operations procedures at both CERN and Fermilab (where do we write output to, where do we read input from, where are the user group directories, …)
        - Maxim reminded that we need to flesh out the computing model in terms of processing and analysis
        - Need to be open to all possibilities, for example analysis/processing at CERN or other sites in addition to Fermilab
            - Maxim: there is a discussion on how to increase resources at Brookhaven (FTE and hardware) to help; BNL is a good candidate to share production with Fermilab
            - Amir and Maxim want to ask collaborators, especially European ones, whether they are willing to contribute computing resources

- next meeting: April 13th, 8 AM CST