Scaling ML meeting

All times US/Central.

Strong scaling appears to degrade training efficiency: it takes longer to reach the same model performance even though more GPUs are used.

Mini-batch training, even on a single GPU, converges significantly faster.
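
A minimal sketch of the distinction behind these two observations, assuming data-parallel training in PyTorch; the batch sizes, toy dataset, and launch setup are illustrative assumptions, not details from the meeting:

import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

# Assumes one process per GPU, launched e.g. with torchrun.
dist.init_process_group("nccl")
world_size = dist.get_world_size()

GLOBAL_BATCH = 1024   # hypothetical fixed global batch size
PER_GPU_BATCH = 128   # hypothetical fixed per-GPU batch size

# Strong scaling: the global batch (total work per step) is fixed, so each
# added GPU shrinks the per-GPU batch and raises the communication-to-compute
# ratio, one plausible reason time-to-accuracy degrades with more GPUs.
strong_batch = GLOBAL_BATCH // world_size

# Weak scaling: the per-GPU batch is fixed, so the effective global batch
# grows with the GPU count; large effective batches often need more epochs
# (or retuned learning rates) to match small mini-batch convergence.
weak_global_batch = PER_GPU_BATCH * world_size

dataset = TensorDataset(torch.randn(8192, 16))  # toy data
loader = DataLoader(
    dataset,
    batch_size=strong_batch,              # or PER_GPU_BATCH for weak scaling
    sampler=DistributedSampler(dataset),  # shards the data across ranks
)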

 

    • 15:00–15:15
      Intro 15m
      Speakers: Paolo Calafiura (LBNL), Walter Hopkins (Argonne National Laboratory)
    • 15:15–15:35
      Distributed GNN training 20m
      Speaker: Prof. Alina Lazar (Youngstown State University)
    • 15:35–15:50
      AOB 15m