PyTorch: An Imperative Style, High-Performance Deep Learning Library
Abstract
Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it was designed from first principles to support an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several commonly used benchmarks.
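The "imperative and Pythonic programming style" the abstract describes means operations execute eagerly and the differentiation graph is recorded as ordinary Python code runs, rather than being compiled ahead of time. The toy sketch below (pure Python, not PyTorch's actual implementation; the `Var` class and its methods are hypothetical illustrations) shows this define-by-run idea: each arithmetic operation computes its result immediately and records a closure describing how to backpropagate through it.

```python
# Toy define-by-run autodiff sketch (hypothetical; not PyTorch's API).
# Each op runs eagerly and records a backward closure as it executes,
# mirroring the imperative style the abstract describes.

class Var:
    def __init__(self, value, grad_fn=None):
        self.value = value        # result computed immediately
        self.grad = 0.0           # accumulated gradient
        self.grad_fn = grad_fn    # closure recorded when the op ran

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def backward(g):
            # product rule: d(uv) = v du + u dv
            self.backward(g * other.value)
            other.backward(g * self.value)
        out.grad_fn = backward
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def backward(g):
            self.backward(g)
            other.backward(g)
        out.grad_fn = backward
        return out

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn is not None:
            self.grad_fn(g)

x = Var(3.0)
y = x * x + x    # ordinary Python control flow; graph built on the fly
y.backward()     # dy/dx = 2x + 1 = 7 at x = 3
print(x.grad)    # 7.0
```

Because the graph is just a chain of Python closures built while the code runs, standard tools (print statements, pdb) work on the model itself, which is one sense in which "code is a model" and debugging stays easy.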