Weekly CCE-IOS tele-conference

US/Central
Peter van Gemmeren (ANL), Rob Ross (ANL)
BlueJeans Link: https://bluejeans.com/102100194

Attended: Paolo Calafiura, Salman Habib, Rob Ross, Peter van Gemmeren, Chris Jones, Liz Sexton-Kennedy, Matthieu Dorier, Philippe Canal, Torre Wenaus, Suren Byna, Doug Benjamin, Rob Latham, Saba Sehrish

 

Management News:

Salman: Lali is happy with the QR. Need to ensure that staffing is worked out and that requests match reality.

Chris Jones: CMS Workflows

 

Slide 2) A workflow starts from a related set of inputs, applies a transformation to the inputs, and creates an output

CMS uses the term "workflow" only when discussing multi-node processing, often multi-site

 

Slide 3) Two categories of workflows: production and analysis.

 

Slide 4) Production workflow groups

- group based on the starting point of the workflow:

  - data -- processing raw data from the detector

  - Monte Carlo -- creating and processing data from simulation

Slide 5) Data Workflows

- one step: reconstruction "RECO"

  - calibrates and aligns RAW data and creates DIGIs

  - applies pattern recognition algorithms to create physics objects
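
A minimal sketch of the shape of this step, with all function names hypothetical (not CMS code):

```python
# Hypothetical shape of the single RECO step: calibrate/align RAW data
# into DIGIs, then run pattern recognition to build physics objects.

def reco_step(raw_data, calibrate, recognize):
    """calibrate: RAW -> DIGI; recognize: list of DIGIs -> physics objects."""
    digis = [calibrate(raw) for raw in raw_data]   # calibrates and aligns RAW, creates DIGIs
    return recognize(digis)                        # pattern recognition -> physics objects
```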

Slide 6) Monte Carlo Workflows:

- multiple steps

  GEN - generate the physics particles using theoretical models

  SIM - simulate detector response to the generated particles -- generates RAW data

  RECO - same as the data workflow

- the steps are sometimes combined into a single application and sometimes run one after another
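
A minimal sketch of the step chain, with hypothetical function names (not CMS code); a combined GEN+SIM application would simply fuse the first two calls into one process:

```python
def gen(config):
    """GEN: generate physics particles using a theoretical model."""
    return {"particles": f"generated with {config['generator']}"}

def sim(gen_output):
    """SIM: simulate the detector response; produces RAW-equivalent data."""
    return {"raw": f"detector response to {gen_output['particles']}"}

def reco(sim_output):
    """RECO: the same reconstruction step as in the data workflow."""
    return {"physics_objects": f"reconstructed from {sim_output['raw']}"}

# Run the steps one after another in series.
events = reco(sim(gen({"generator": "pythia8"})))
```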

Slide 7) Characteristics of Data/Monte Carlo:

Events

- actual or simulated collisions in the detector

  each is statistically independent

LuminosityBlock (also "LumiBlock")

- set of consecutive events

  - Events are grouped into 23-second periods in the data context

  - Similarly grouped in Monte Carlo, but the grouping can represent an arbitrary amount of time

- An Event belongs to exactly one LuminosityBlock, which is useful for accounting

- Also, in CMS, this is the unit of work handed to a workflow "job"

Run

- Consecutive LuminosityBlocks

  - For Data, represents a continuous run of the detector, usually several hours, with no configuration change

  - Not used in Monte Carlo

DataSet

- grouping of events by some criteria

  - for Data, criteria selected when recorded (the triggering)

    - so DataSets can share runs, LuminosityBlocks, and Events

    - basically a sort of secondary indexing

  - for Monte Carlo, no sharing, based on configuration of the generator and simulated detector

- 10K - 10B events in a DataSet

Files

- Files hold Events for a given LuminosityBlock

- CMS never splits a LuminosityBlock across Files

- File belongs to exactly one DataSet
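
The hierarchy above, sketched as hypothetical Python dataclasses (not actual CMS classes):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    event_id: int                      # one actual or simulated collision

@dataclass
class LuminosityBlock:
    lumi_id: int                       # ~23 s of data; the unit of work for a job
    events: List[Event] = field(default_factory=list)

@dataclass
class Run:
    run_id: int                        # continuous detector run; not used for Monte Carlo
    lumis: List[LuminosityBlock] = field(default_factory=list)

@dataclass
class DataFile:
    dataset: str                       # a file belongs to exactly one DataSet
    lumis: List[LuminosityBlock] = field(default_factory=list)  # never split across files
```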

Slide 9) cmsRun

- all production workflows run cmsRun

- Python-based configuration file

- dynamically loads components

- sources, producers, sinks

  - source - read input

  - producers - transform

  - sinks - write data to output

- 10s to 1000s of components

- multi-threaded, single node/process

  - spin up multiple of these for different nodes or within a node
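
Roughly, a cmsRun Python configuration wires a source, producers, and a sink together; the module labels and file names below are made up for illustration:

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("DEMO")

# source - read input
process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring("file:input.root"))

# producer - transform (real workflows load 10s to 1000s of components)
process.demoProducer = cms.EDProducer("DemoProducer")

# sink - write data to output
process.out = cms.OutputModule("PoolOutputModule",
    fileName = cms.untracked.string("file:output.root"))

process.p = cms.Path(process.demoProducer)
process.e = cms.EndPath(process.out)

# multi-threaded within a single node/process
process.options = cms.untracked.PSet(numberOfThreads = cms.untracked.uint32(4))
```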

Slide 10) Jobs run by the workflow use a Wrapper

- Wrapper handles interactions with cmsRun

- deals with output from the job

- might move data to another site
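
An illustrative sketch of the Wrapper's role (not CMS's actual wrapper; the staging step is a stand-in for the real data movement):

```python
import os
import shutil
import subprocess

def run_wrapped_job(config_file, output_file, remote_dir=None):
    """Run cmsRun, deal with its output, and optionally move it elsewhere."""
    result = subprocess.run(["cmsRun", config_file])
    if result.returncode != 0:
        return False                      # let the workflow system retry
    if remote_dir is not None:            # might move data to another site
        shutil.move(output_file, os.path.join(remote_dir, output_file))
    return True
```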

Slide 11) Workflow setup

- decide on input:

  - Data: a DataSet, forms the RAW data

  - Monte Carlo: a particular generator configuration and # of events

- decide on CMS software release, based on the data (e.g., 2018 data should use 2018 software)

- Steps:

  - GEN or GEN+SIM

  - this determines (or helps determine) the configuration template to use

- Chain of steps?

  - a job can run multiple cmsRun jobs in series

- Where to run

- How many LuminosityBlocks per job

  - based on a back-of-the-envelope estimate of events/sec and so forth (see the sketch after this list)

- Set name of the DataSet to generate -- every workflow creates a new DataSet
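
A back-of-the-envelope sketch for the LuminosityBlocks-per-job choice, with made-up numbers:

```python
events_per_lumi = 300       # assumed events in one LuminosityBlock
events_per_sec = 0.5        # assumed throughput of one cmsRun job
target_job_hours = 8        # desired wall time per job

seconds_per_lumi = events_per_lumi / events_per_sec
lumis_per_job = max(1, int(target_job_hours * 3600 / seconds_per_lumi))
print(lumis_per_job)        # -> 48 with these assumed numbers
```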

Slide 13) Running

- a "pilot" job runs at a site.

  - talks to the workflow system, gets a task, executes the Wrapper for the task
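
The pilot's loop, sketched with a hypothetical workflow-system interface (get_task/report are illustrative stand-ins for the real protocol):

```python
def pilot_loop(workflow_system, run_wrapper):
    """run_wrapper is, e.g., the run_wrapped_job sketch above."""
    while True:
        task = workflow_system.get_task()          # ask the workflow system for work
        if task is None:
            break                                  # nothing left for this pilot
        ok = run_wrapper(task.config_file, task.output_file)
        workflow_system.report(task, success=ok)   # report success or failure
```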

Slide 14) Finishing

- some tasks fail and are retried some number of times

- once enough tasks have finished, file merge tasks added to the system

  - CMS likes output files to be ~10GB

  - merging consecutive LuminosityBlocks into these files
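
A sketch of this merge planning (hypothetical, greedy grouping): group output files for consecutive LuminosityBlocks into merge tasks that target roughly 10 GB each:

```python
TARGET_BYTES = 10 * 1024**3   # CMS likes ~10 GB output files

def plan_merges(files):
    """files: list of (lumi_id, size_bytes) pairs, sorted by lumi_id."""
    merges, current, current_size = [], [], 0
    for lumi_id, size in files:
        if current and current_size + size > TARGET_BYTES:
            merges.append(current)                 # close out this merge task
            current, current_size = [], 0
        current.append(lumi_id)
        current_size += size
    if current:
        merges.append(current)
    return merges
```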

 

Discussion:

Discussion of Cori workflows: possibly use the burst buffer for intermediate datasets (prior to merge)?

Also thinking about things in terms of data lakes, with CERN and FNAL holding specific datasets for Digitization.

A library of pre-made "uninteresting"/background events is used to create the "pile-up" that has been discussed in other contexts.

Doug: ATLAS typically doesn't do GEN and SIM in the same application. They're talking about FastChain, in development, which would do all steps in a row.

- one pass, different steps in a row

Q: post-processing to equalize file size?

A: during the merging step, yes, although there's some question about whether everyone does the merging.

Merging is done on a single node (multi-threaded), which helps mitigate the file count somewhat.

The restriction that a single process manages a whole LuminosityBlock limits their parallelism.

Not sure they'll get to event granularity (as in the ATLAS EventService for simulation) due to accounting issues.

Discussion of ATLAS Simulation and Event Service in two weeks (Doug & Torre).

Torre: if we're breaking data into little files, perhaps we should just keep all the little files around. They thought about putting these into an object store, but it didn't scale to this workload.
