To connect via Zoom: Meeting ID 630-840-2296
Password distributed with meeting announcement
(See instructions for setting Zoom default to join a meeting with audio and video off: https://larsoft.org/zoom-info/)
PC, Mac, Linux, iOS, Android: https://fnal.zoom.us/j/831443820
Phone:
https://fnal.zoom.us/zoomconference?m=SvP8nd8sBN4intZiUh6nLkW0-N16p5_b
H.323:
162.255.37.11 (US West)
162.255.36.11 (US East)
213.19.144.110 (EMEA)
See https://fnal.zoom.us/ for more information
At Fermilab: no in-person presence at the lab for this meeting
Erica: Release and project report
larg4#14 will be closed
Muve: Fast photon simulation based on GAN
Code in larsim/PhotonPropagation
Based on computable graph from TensorFlow
Interface similar to PDFastSimPVS or PDFastSimPAR modules
Compared fcl configs for PDFastSimGAN and PDFastSimPAR
A tool (TFLoader) was developed to load the computable graph within the PDFastSimGAN module
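A hedged sketch of how a producer like PDFastSimGAN might obtain and use such a tool through art's tool mechanism. The interface and its method names (Initialization/Predict/GetPrediction) and the fcl parameter name TFLoaderTool are assumptions for illustration, not taken from the PR:

```cpp
// Hypothetical sketch only: the real tool interface lives in
// larsim/PhotonPropagation/TFLoaderTools and may differ.
#include "art/Utilities/make_tool.h"
#include "fhiclcpp/ParameterSet.h"

#include <memory>
#include <vector>

namespace phot {
  // Assumed abstract interface for the TFLoader tool.
  class TFLoader {
  public:
    virtual ~TFLoader() = default;
    virtual void Initialization() = 0;                          // load the computable graph
    virtual void Predict(std::vector<double> const& pars) = 0;  // evaluate the graph for one step
    virtual std::vector<double> const& GetPrediction() const = 0;
  };
}

// In the PDFastSimGAN constructor, the tool would be created from its
// fcl configuration block, e.g. (parameter name "TFLoaderTool" assumed):
//   fTFLoaderTool = art::make_tool<phot::TFLoader>(
//       pset.get<fhicl::ParameterSet>("TFLoaderTool"));
//   fTFLoaderTool->Initialization();
```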
Computable graph
Checked into related products, e.g., dunetpc
Each takes into account the different geometry and optical detector configuration
Training samples built from full simulation
Currently trained and produced with TensorFlow 1.12
Training is done in a Python script; no requirement from LArSoft in the training step
TFLoader tool
In larsim/PhotonPropagation/TFLoaderTools
Used to load computable graph and generate "photon visibilities" for each step
Needs to call TensorFlow 1.12
Requires two includes from TF (see the TF session sketch after this block)
Updates to CMakeLists.txt
Added "include_directories" directive to pick up TF includes
Based on a related case, support issue #22504, which suggests adding include_directories (...)
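As context for what the tool wraps, a minimal sketch of loading and evaluating a frozen TensorFlow 1.x graph with the TF C++ session API. This is an assumed illustration, not code from the PR; the graph file path and the node names "input"/"output" are placeholders:

```cpp
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Evaluate a frozen TF 1.x graph for one energy-deposit step.
// "input" and "output" are placeholder node names.
std::vector<float> run_graph(std::string const& graph_pb,
                             std::vector<float> const& step_pars)
{
  // Read the serialized computable graph from disk.
  tensorflow::GraphDef graph_def;
  tensorflow::ReadBinaryProto(tensorflow::Env::Default(), graph_pb, &graph_def);

  std::unique_ptr<tensorflow::Session> session(
    tensorflow::NewSession(tensorflow::SessionOptions()));
  session->Create(graph_def);

  // Single input tensor holding, e.g., the (x, y, z) of the step.
  tensorflow::Tensor input(
    tensorflow::DT_FLOAT,
    tensorflow::TensorShape({1, static_cast<long long>(step_pars.size())}));
  for (std::size_t i = 0; i < step_pars.size(); ++i)
    input.matrix<float>()(0, i) = step_pars[i];

  std::vector<tensorflow::Tensor> outputs;
  session->Run({{"input", input}}, {"output"}, {}, &outputs);

  // Flatten the predicted photon visibilities into a std::vector.
  auto flat = outputs[0].flat<float>();
  return {flat.data(), flat.data() + flat.size()};
}
```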
During compilation
Works when both larsim and dunetpc are checked out
Fails when only larsim is checked out
Showed CMakeLists.txt snippet
Question: how can this be fixed?
Discussion
Lynn: LArSoft now requires anything that uses machine learning to be completely modular and go into special repositories dedicated to DL algorithms (so not larsim)
For instance larrecodnn, which is used to keep TF separated from other reconstruction algorithms
Otherwise, cannot guarantee that you will have a functioning TF product on any given platform
larsimrad has a similar modular dependency; should the code go there? From there, the linking / header problem can be addressed
A new package / repository with a more relevant name is preferred; it will need to be at the larsim level
Q: what about the observation that the build is OK with larsim + dunetpc, but not with larsim alone?
This can't be addressed in the current PR; the code needs to be in its final home first
Propose larsimdnn as the new repository name
SciSoft will set this up and check documentation to make sure that others can replicate the procedure for provisioning a new repository
Kyle Knoepfel: Complications in supporting TensorFlow
Noted that TF installations are best supported in non-relocatable environments
This allows for hardware optimization at build time, etc.
Producing a relocatable TF build via UPS is difficult
Bazel support
TF requires internal installations of software that are also supplied as UPS products
Building the library with the native scripts downloads libraries from the web
This is fragile when using UPS
The TF build scripts need to be changed to avoid binary incompatibilities. For instance, TF uses Eigen internally (embedded?), while we also provide Eigen as an external product, so namespaces in TF must be changed to avoid name conflicts
Hardware optimizations are hard to provide
SciSoft meeting later today to discuss how to use TF
Discussion
Erica: How does this picture change with Spack?
Build configurability will be easier; different build variants, for instance
But relocatability will likely remain a problem
Will have the same problems we now have with ProtoBuf and Eigen, for instance
Alex Himmel: has anyone engaged with Snowmass process computing groups to address this issue? Propose changes to TF to make it more supportable, for instance.
Not yet; the SciSoft team has had limited time
Other SCD people have been engaged with the Snowmass process
Generally, however, this type of issue differs across experiments. CMS, for instance, has a home-grown solution for configuring dependencies and is unlikely to want to go the Spack route
AH: Can TF be modified to make relocatability easier?
Yes, but there is no agreement within HEP that code should be relocatable
At a recent WLCG/HSF meeting, a speaker even claimed that the majority of products in HEP were not relocatable. This was challenged on the basis of the Fermilab experiments, which all require relocatability
But there is agreement that changes to TF and PyTorch would be useful, and that a focused community effort would be helpful in achieving those changes
Lynn: noted that there is a working group outside HEP working on the TensorFlow 2 build, but it is still working in a Bazel environment