Scaling ML meeting

Strong scaling seems to degrade time-to-solution: it takes longer to reach the same model performance even though more GPUs are being used.
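
One way to rationalize this, purely as an illustrative sketch (the cost model and every constant in it are assumptions, not numbers from the meeting): if scaling out also grows the global batch, as is common in data-parallel training, then past some critical batch size more optimizer steps are needed to reach the same accuracy, while per-step communication cost grows with the number of workers. Both effects can make time-to-accuracy worse even as time-per-step improves.

    # Hypothetical back-of-the-envelope model of time-to-accuracy vs. GPU count.
    # Every constant here (critical_batch, comm_overhead, ...) is made up.
    def time_to_accuracy(n_gpus, per_gpu_batch=64, base_steps=10_000,
                         step_time=1.0, comm_overhead=0.05):
        global_batch = n_gpus * per_gpu_batch
        critical_batch = 256  # assumed point where bigger batches stop helping
        # Assumed: steps-to-accuracy grows with the global batch past that point.
        steps = base_steps * max(1.0, global_batch / critical_batch)
        # Assumed: fixed per-step compute plus communication growing with workers.
        per_step = step_time + comm_overhead * n_gpus
        return steps * per_step

    for n in (1, 4, 16, 64):
        print(f"{n:3d} GPUs -> {time_to_accuracy(n):,.0f} time units")

Under these assumptions the totals worsen with scale; the real crossover depends on the model, optimizer, and interconnect.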

Mini-batch training, even on a single GPU, converges significantly faster.
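
A minimal toy demonstration (the problem, sizes, and hyperparameters below are all invented, not the meeting's setup): small mini-batches typically reach a target loss after processing far fewer samples than full-batch updates, which is the usual sense in which mini-batch training converges faster per unit of data or compute.

    import numpy as np

    # Toy least-squares problem; all sizes here are illustrative.
    rng = np.random.default_rng(0)
    n, d = 4096, 32
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    def steps_to_target(batch_size, lr=0.05, target=0.05, max_steps=20_000):
        # Plain mini-batch SGD; count steps until the full-data loss hits target.
        w = np.zeros(d)
        for step in range(1, max_steps + 1):
            idx = rng.integers(0, n, size=batch_size)
            grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
            w -= lr * grad
            if np.mean((X @ w - y) ** 2) < target:
                return step
        return max_steps

    for bs in (32, 4096):  # small mini-batch vs. the whole set as one batch
        s = steps_to_target(bs)
        print(f"batch={bs:5d}: {s:5d} steps, {s * bs:9d} samples processed")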


Agenda:
    1. Intro
       Speakers: Paolo Calafiura (LBNL), Walter Hopkins (Argonne National Laboratory)
    2. Distributed GNN training
       Speaker: Prof. Alina Lazar (Youngstown State University)
    3. AOB