TensorFlow, PyTorch, and manylinux1
As some of you know, there is a standard in Python called manylinux (
https://www.python.org/dev/peps/pep-0513/) to package binary executables
and libraries into a “wheel” in a way that allows the code to be run on a
wide variety of Linux distributions. This is very convenient for Python
users, since such libraries can be easily installed via pip.
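As a rough illustration of what "manylinux compatible" means to pip: the last component of a wheel's filename is its platform tag, and pip only installs wheels whose tag it considers compatible with the running system. The helper functions below are made up for illustration (they are not part of pip or any packaging library); they just split the filename per the wheel naming convention ({name}-{version}-{python}-{abi}-{platform}.whl) and inspect the tag:

```python
def platform_tag(wheel_filename):
    # Strip the ".whl" suffix and take the last dash-separated field,
    # which is the platform tag per the wheel filename convention.
    stem = wheel_filename[:-len(".whl")]
    return stem.split("-")[-1]

def is_manylinux(wheel_filename):
    # manylinux tags (e.g. manylinux1_x86_64) promise portability across
    # many Linux distributions; a plain linux_x86_64 tag promises nothing.
    return platform_tag(wheel_filename).startswith("manylinux")

print(is_manylinux("numpy-1.14.0-cp36-cp36m-manylinux1_x86_64.whl"))  # True
print(is_manylinux("tensorflow-1.5.0-cp36-cp36m-linux_x86_64.whl"))   # False
```

(The filename tag is of course only a claim; the auditwheel tool is what actually verifies it, by inspecting the shared-library dependencies bundled inside the wheel.)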
This standard is also important for a second reason: if many different
wheels are used together in a single Python process, adhering to manylinux
ensures that these libraries work well together and don't step on each
other's toes (this can easily happen if, for example, different versions of
libstdc++ are used). Therefore *even if support for only a single
distribution like Ubuntu is desired*, it is important to be manylinux
compatible to make sure everybody's wheels work together.
TensorFlow and PyTorch unfortunately don't produce manylinux compatible
wheels. The challenge is due, at least in part, to the need to use
nvidia-docker to build GPU binaries. This causes various levels of
pain for the rest of the Python community, see for example .
The purpose of this e-mail is to get a discussion started on how we can make
TensorFlow and PyTorch manylinux compliant. There is a new standard in the
works, so hopefully we can discuss what would be necessary to make sure
TensorFlow and PyTorch can adhere to this standard in the future.
It would make everybody's lives just a little bit better! Any ideas are
appreciated.
@soumith: Could you cc the relevant list? I couldn't find a pytorch dev
mailing list.