Wednesday, May 13, 2020 • 9:30am - 10:00am
DLRM Workloads with Implications on Hardware and System Platforms - presented by Facebook

In this talk we discuss the computation and communication patterns exercised by deep learning recommendation models (DLRMs). DLRMs are widely deployed at scale for a number of business-critical applications and exercise all parts of the HW infrastructure: memory, compute, storage, and network. In particular, we highlight how their balance of compute, memory, and network differs from the more common computer vision and natural language processing workloads. We outline different strategies for asynchronous and synchronous distributed training, including the use of model and data parallelism, and their impact on system design. Synchronous training in particular requires high-performance interconnects with an optimal topology and an efficient fabric, supporting Alltoall communication in addition to Allreduce. Finally, we take a deep dive into the open-source implementation of DLRM in the PyTorch framework and its use for HW/SW co-design for this important class of applications.
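
To make the communication pattern concrete, below is a minimal sketch in PyTorch (not the talk's or the DLRM repository's actual code) of the hybrid-parallel training step the abstract describes, assuming a torch.distributed process group has already been initialized. The model-parallel embedding lookups are exchanged between ranks with Alltoall, while the data-parallel MLP gradients are synchronized with Allreduce; the helper names and tensor layout are illustrative.

    import torch
    import torch.distributed as dist

    def exchange_embeddings(local_pooled: torch.Tensor) -> torch.Tensor:
        # Model parallelism: each rank hosts a subset of the embedding tables
        # and computes pooled lookups for the global batch, laid out per
        # destination rank. Alltoall sends each slice to its owner and
        # receives, in return, the lookups this rank's local mini-batch
        # needs from the tables hosted on every other rank.
        output = torch.empty_like(local_pooled)
        dist.all_to_all_single(output, local_pooled)
        return output

    def sync_dense_grads(mlp: torch.nn.Module) -> None:
        # Data parallelism: the dense MLP is replicated on every rank, so
        # its gradients are averaged across ranks with Allreduce after the
        # backward pass.
        world_size = dist.get_world_size()
        for p in mlp.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad /= world_size

Note the asymmetry the abstract emphasizes: the embedding exchange moves activations (and their gradients in the backward pass) with Alltoall, whereas the dense layers only need the Allreduce pattern familiar from computer vision and NLP training.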

Speakers

Dheevatsa Mudigere

Research Scientist, Facebook
- Deep Learning / AI
- Scientific Computing
- Parallel computing / High performance numerical computing
- HW / SW co-design

Maxim Naumov

Research Scientist, Facebook
Maxim Naumov joined Facebook in January 2018. His interests include deep learning, parallel algorithms, and numerical methods. In the past, he held different positions at Nvidia, on the Research, Emerging Applications, and Platform teams. He has also worked at Intel Corporation Microprocessor...


Executive Tracks