LBNE Simulations/Reconstruction

US/Central
WH 4NW "Req. Room"


Brian Rebel (Fermilab), Eric Church (Yale), Matthew Szydagis (UC Davis), Michael Kirby (FNAL), Stan Seibert (University of Pennsylvania), Thomas Junk (Fermilab)
Description
ReadyTalk: 1-866-740-1260
Meeting ID: 3872183
http://www.readytalk.com

Minutes of the July 10, 2013 LBNE Sim/Reco group meeting.

Present: Tom Junk, Jonathan Insler, Eric Church, Tyler Alion, Kevin Wood, Brett Viren. Apologies to those omitted.

We have wanted our own software repository for quite some time. Tyler has been working on disambiguation algorithms and would like to share code with collaborators (such as Jae), and also not have to worry about accidentally colliding with ArgoNeuT and MicroBooNE. At the LArSoft stakeholders' meeting on July 9 we were largely in favor of moving forward with git in place of svn for the LArSoft repository. We can go ahead and use git with our new repository to reduce future migration effort and to get a head start on the new system. Tyler suggests calling the new repository lbne-fd, and Brett has created it as a subproject of lbne-software.

The granularity of what goes into a separate repository versus a package is an interesting question. Lynn Garren has proposed a granular breakup of the LArSoft repository: https://cdcvs.fnal.gov/redmine/projects/larsoftsvn/wiki/Current_LarSoft_Structure. We are considering a similarly fine-grained structure for the LBNE packages, but would like some flexibility as we learn as we go along. Since the minimum checkout in git is a clone of an entire repository, it pays to keep repositories small, while packages that are not being actively worked on can be obtained from ups. We need setup and build scripts, as well as a way for Redmine to show check-in history and documentation. It may be easy to write a script that updates the Doxygen content from all the repositories within a Redmine project so that searching for keywords is easier.

Tyler ran into an issue with fuzzy clustering's centroid finder. The distance formula wraps easily, but averaging positions to compute a cluster centroid does not map well onto a wrapped channel space. Tyler proposes mapping the channel number in an induction plane onto a unit circle and then taking a charge-weighted average of the x and y positions in that space to get the centroid (see the sketch at the end of these minutes). Tom is concerned about events that wrap all the way around the circle, such as through-going cosmics.

Tyler is talking with Jae about writing an analysis module that characterizes the performance of the disambiguation. On the list of things to put in it is the fraction of correctly disambiguated induction hits, where the denominator is all induction hits. Another is to check how the tracking is affected, comparing the MC track ID with reconstructed track IDs to see whether disambiguation contributes to splitting or merging tracks. We plan to have a meeting on Friday for Jae's disambiguation presentation.

Tyler would like samples of GENIE + CRY events for 10 kt on the surface. Eric says that MicroBooNE did a multi-step simulation: GENIE and CRY were run in subsequent jobs, each adding its output to the art-formatted ROOT file, and a third job ran the detector simulation, where LArG4 knew to read in all the particles made by GENIE and CRY. Tyler says it is easy to get correct-disambiguation rates above 90-95% with single particles and wants to try a harder problem. Eric sent around a link to a fcl file from MicroBooNE that does this.

Jonathan is working on a fast hit finder that runs on raw digits instead of deconvoluted recob::Wire data, as we will need that for a 35T online trigger. It runs, but finds fewer hits than the GausHitFinder.
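As a rough illustration of the kind of threshold-over-pedestal hit finding on raw digits described here, below is a minimal standalone sketch. It operates on a bare ADC vector rather than the actual raw::RawDigit and art interfaces, and the struct, function name, and threshold convention are illustrative assumptions, not Jonathan's module.

```cpp
// Standalone sketch of a threshold-over-pedestal hit finder on a raw ADC
// waveform.  Names and thresholds are illustrative, not the real module.
#include <cstddef>
#include <vector>

struct SimpleHit {
  std::size_t startTick;  // first tick above threshold
  std::size_t peakTick;   // tick of the maximum ADC value
  float       peakADC;    // pedestal-subtracted peak amplitude
  float       sumADC;     // pedestal-subtracted integral of the pulse
};

std::vector<SimpleHit> FindHits(const std::vector<short>& adc,
                                float pedestal,
                                float threshold)  // ADC counts above pedestal
{
  std::vector<SimpleHit> hits;
  bool inHit = false;
  SimpleHit current{};

  for (std::size_t t = 0; t < adc.size(); ++t) {
    const float val = adc[t] - pedestal;
    if (val > threshold) {
      if (!inHit) {  // rising edge: open a new hit
        inHit = true;
        current = SimpleHit{t, t, val, 0.f};
      }
      current.sumADC += val;
      if (val > current.peakADC) { current.peakADC = val; current.peakTick = t; }
    }
    else if (inHit) {  // falling edge: close the hit
      hits.push_back(current);
      inHit = false;
    }
  }
  if (inHit) hits.push_back(current);  // waveform ended while still above threshold
  return hits;
}
```

A finder this simple will merge overlapping pulses into a single hit, which is consistent with the comment below that GausHitFinder is better at pulling apart hits that are bunched up together.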
Tom suspects that GausHitFinder is better at pulling apart multiple hits that are bunched up together, and that for 35T triggering purposes it may not be necessary to recover them all. But we do want basic timing resolution for the hits, and to make sure that hits are not simply being lost. CalWire is there to unpack the raw digits and to get the pedestal from the input data products. Tyler would prefer that the combined unpack/deconvolve/hit-find module Jonathan has been working on move forward so that we can start reconstructing 10 kt MC. As it stands it is quite slow and takes a lot of memory (8 GB). Even if it remains slow, if we can get it under 2 GB we can run the jobs on FermiGrid. Jonathan is putting that module together and is working on the time-domain deconvolution part.
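Returning to the wrapped-channel centroid discussed above, here is a minimal sketch of the charge-weighted circular mean Tyler proposed, assuming a simple linear map from channel number to angle on the unit circle. The function name, the (channel, charge) pair representation, and the nChannels parameter are illustrative assumptions, not the actual fuzzy-clustering code.

```cpp
// Charge-weighted circular mean of hit channel numbers on a wrapped
// induction plane.  All names here are illustrative assumptions.
#include <cmath>
#include <utility>
#include <vector>

// hits: (channel number, charge) for each hit in the cluster
// nChannels: total number of channels in the wrapped plane
double WrappedCentroidChannel(const std::vector<std::pair<double, double>>& hits,
                              double nChannels)
{
  const double twoPi = 2.0 * std::acos(-1.0);
  double sumX = 0.0, sumY = 0.0;
  for (const auto& hit : hits) {
    const double phi = twoPi * hit.first / nChannels;  // channel -> angle on the circle
    sumX += hit.second * std::cos(phi);                // charge-weighted x
    sumY += hit.second * std::sin(phi);                // charge-weighted y
  }
  double phiMean = std::atan2(sumY, sumX);             // angle of the summed vector
  if (phiMean < 0.0) phiMean += twoPi;                 // map into [0, 2*pi)
  return phiMean * nChannels / twoPi;                  // convert back to channel units
}
```

Note that a cluster spread nearly uniformly around the circle, as in Tom's concern about tracks that wrap all the way around, drives the summed vector toward zero length and leaves the centroid ill-defined.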
    • 13:00 - 13:20
      Disambiguation 20m
      Speaker: Tyler Alion (University of South Carolina)
    • 13:20 - 13:40
      Hit Finding 20m
      Speaker: Jonathan Insler (Louisiana State University)