TensorFlow
GPU support on native Windows is only available for TensorFlow 2.10 and earlier; starting with TF 2.11, the CUDA build is no longer supported on Windows. To use TensorFlow with a GPU on Windows, you need to build/install TensorFlow in WSL2, or use tensorflow-cpu with the TensorFlow-DirectML-Plugin.
— from the TensorFlow website
According to the website, TensorFlow v2.10 is therefore the last version with native Windows GPU support. This version requires the following installations:
- Python 3.10
- CUDA v11.2
- cuDNN v8.1
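The Python requirement can be sanity-checked from inside the interpreter before going further. A minimal sketch (the CUDA and cuDNN versions still have to be verified manually, e.g. with `nvcc --version`):

```python
import sys

# TensorFlow 2.10 on native Windows needs Python 3.10 (per the list above).
REQUIRED_PYTHON = (3, 10)

def python_matches(required=REQUIRED_PYTHON) -> bool:
    """Return True if the running interpreter matches the required major.minor."""
    return sys.version_info[:2] == required

print(python_matches())
```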
The CUDA Toolkit installation package can be downloaded from this link. We only need to select the correct version and OS and follow the instructions.
For cuDNN, the library files can be found at this link; registration is required to download them. The archive contains three subfolders, namely `bin`, `include`, and `lib`. We can simply copy these folders into the corresponding subfolders of the CUDA installation. Alternatively, we can keep them in a separate folder and add the path of the `bin` folder to the `PATH` environment variable.
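If you go the separate-folder route, it is easy to forget the `PATH` step. A small helper can confirm the folder is actually on `PATH`; this is a sketch, and the cuDNN location shown is a hypothetical example — adjust it to wherever you placed the files:

```python
import os

def dir_on_path(directory: str) -> bool:
    """Return True if `directory` is listed in the PATH environment variable."""
    wanted = os.path.normcase(os.path.normpath(directory))
    entries = os.environ.get("PATH", "").split(os.pathsep)
    return any(os.path.normcase(os.path.normpath(e)) == wanted for e in entries if e)

# Hypothetical location -- replace with your actual cuDNN bin folder.
print(dir_on_path(r"C:\tools\cudnn\bin"))
```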
When the CUDA and cuDNN installation is done, we can install Python 3.10. Download Python from the official Python website; the one in the Microsoft Store may not work, since it reportedly runs in a sandbox and does not have access to GPU resources.
After Python installation, we can install TensorFlow v2.10 using the following command (double quotes, so that the version constraint also works in the Windows command prompt):

```
pip install "tensorflow<2.11"
```
After installation, we can test whether TensorFlow has access to the GPUs:

```python
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
```
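If the list comes back empty, it can help to distinguish "this wheel has no CUDA support" from "CUDA support is built in but no GPU was found". A short sketch using TensorFlow's public API:

```python
import tensorflow as tf

# True only if this TensorFlow build was compiled with CUDA support.
print(tf.test.is_built_with_cuda())
# Number of GPUs TensorFlow can actually see at runtime.
print(len(tf.config.list_physical_devices('GPU')))
```

If the first line prints False, reinstalling the correct wheel is the fix; if it prints True but no GPUs are listed, the CUDA/cuDNN setup is the likely culprit.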
PyTorch
The installation instructions for PyTorch can be found here. Choose the corresponding OS and CUDA version. The installation command for PyTorch with CUDA v11.8 on Windows is:

```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
If the above is to be specified in `requirements.txt`, the following lines should be added (using the same cu118 index URL as the command above):

```
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torchvision
torchaudio
```
After installation, it can be tested using the following commands:

```python
import torch
torch.cuda.is_available()
```
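Beyond the availability check, a common pattern in PyTorch scripts is to pick the device once and fall back to the CPU when no GPU is present, so the same code runs on both setups. A minimal sketch:

```python
import torch

# Use the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors created with device=... land on the selected device.
x = torch.ones(2, 2, device=device)
print(x.device)
```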