Learn how distributed training works in PyTorch: data parallel, distributed data parallel, and automatic mixed precision. Train your deep learning models with massive speedups.
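A minimal sketch of the two techniques named above, combining `DistributedDataParallel` with autocast-based mixed precision. To stay runnable on one machine it assumes a single-process CPU setup (gloo backend, world size 1, bfloat16 autocast); a real multi-GPU job would launch one process per GPU via `torchrun` and use the NCCL backend with CUDA autocast instead.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group for illustration (assumed setup, not a
# production launch): rank 0 of world size 1 on the gloo CPU backend.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

# DDP wraps the model; during backward() it all-reduces gradients
# across ranks so every replica takes the same optimizer step.
model = DDP(torch.nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(16, 8), torch.randn(16, 1)

# Automatic mixed precision: autocast runs the forward pass in a lower
# precision (bfloat16 on CPU here; float16 on CUDA in practice).
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), y)

loss.backward()   # gradient all-reduce happens inside this call
opt.step()
dist.destroy_process_group()
print(loss.item())
```

On CUDA, float16 autocast is normally paired with `torch.cuda.amp.GradScaler` to avoid gradient underflow; bfloat16 has enough dynamic range that no scaler is needed.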
How to optimize the data processing pipeline using batching, prefetching, streaming, caching, and iterators.
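Batching and prefetching can be sketched framework-free with standard-library iterators: a generator that groups a stream into fixed-size batches, and a background thread that fills a bounded queue so the consumer overlaps data preparation with computation. The helper names `batched` and `prefetch` are illustrative, not from any particular library.

```python
import itertools
import queue
import threading


def batched(iterable, batch_size):
    """Lazily group a stream into lists of at most batch_size items."""
    it = iter(iterable)
    while batch := list(itertools.islice(it, batch_size)):
        yield batch


def prefetch(iterable, buffer_size=2):
    """Produce items on a background thread through a bounded queue,
    so the next batch is prepared while the consumer works on this one."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks end of the stream

    def producer():
        for item in iterable:
            q.put(item)      # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not sentinel:
        yield item


# The whole pipeline stays streaming: nothing is materialized up front.
batches = list(prefetch(batched(range(10), 3)))
print(batches)
```

Caching fits the same shape: wrap an expensive per-item transform in `functools.lru_cache`, or tee the first full pass into a list for reuse on later epochs.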
How to develop high-performance input pipelines in TensorFlow using the ETL pattern and functional programming.
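The ETL pattern maps directly onto `tf.data`'s functional, chainable API: extract a source dataset, transform it with pure functions, and load it to the accelerator with prefetching. A minimal sketch with a toy in-memory source (the `x * 2` transform is a placeholder for real preprocessing):

```python
import tensorflow as tf

# Extract: build a dataset from a source (here, an in-memory tensor).
raw = tf.data.Dataset.from_tensor_slices(tf.range(10))

# Transform + Load: each stage is a pure function chained onto the last.
ds = (
    raw.map(lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE)  # parallel transform
       .cache()                      # reuse transformed elements across epochs
       .batch(4)                     # group elements into batches
       .prefetch(tf.data.AUTOTUNE)   # overlap preprocessing with training
)

batches = [b.numpy().tolist() for b in ds]
print(batches)
```

`tf.data.AUTOTUNE` lets the runtime pick the parallelism and buffer sizes dynamically, which is usually preferable to hand-tuned constants.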