
CUDA Configuration for Windows


Step 1: NVIDIA Video Driver

You should install the latest version of your GPU's driver. First, check which GPU is present in your system.

You can download the drivers from NVIDIA's official driver download page.

How to check if a driver is already present

1. Press Win + X
2. Click Device Manager
3. Expand Display adapters
4. Look for an entry like: NVIDIA GeForce / RTX / GTX…
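Alternatively, once a driver is installed, the nvidia-smi tool (which ships with it) can report the GPU from the command line. A minimal Python sketch, assuming nvidia-smi is on your PATH:

```python
import subprocess

def detect_gpu():
    """Return the GPU name reported by nvidia-smi, or None if no driver is found."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None  # nvidia-smi missing -> no NVIDIA driver installed
    if out.returncode != 0:
        return None
    name = out.stdout.strip()
    return name or None

print(detect_gpu())
```

If this prints None, install (or reinstall) the driver before continuing.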

Step 2: Visual Studio C++

You will need Visual Studio with C++ support installed.

🛑IMP: By default, C++ is not installed with Visual Studio, so make sure you select the Desktop development with C++ workload and ALL of its C++ options.


Step 3: Anaconda/Miniconda

You will need Anaconda (or the lighter Miniconda) to manage environments and install all the deep learning packages.


Step 4: CUDA Toolkit

Which version to choose:

Go to https://pytorch.org/get-started/locally/

Check which CUDA versions the stable PyTorch build supports: for example, CUDA 11.8 and CUDA 12.1, but not CUDA 12.4.

So, if you plan to use PyTorch, install either CUDA 11.8 or CUDA 12.1 (whichever the selector recommends).

How to verify the installation: open C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA — you will see a folder for each installed version.
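The folder check can also be scripted. A minimal sketch, assuming the default install location (adjust the path if you installed elsewhere):

```python
import os

CUDA_ROOT = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA"

def installed_cuda_versions(root=CUDA_ROOT):
    """List version folders (e.g. 'v11.8') under the CUDA install root."""
    if not os.path.isdir(root):
        return []  # toolkit not installed, or installed to a custom path
    return sorted(d for d in os.listdir(root) if d.startswith("v"))

print(installed_cuda_versions())
```

An empty list means the toolkit is missing or lives at a non-default path.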


Step 5: cuDNN

cuDNN is a GPU-accelerated library of primitives for deep neural networks.

Download the version that matches your installed CUDA version.

Next steps:

Unzip the downloaded folder.

Copy all files from the cuDNN bin folder to the C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin folder --- [Note: I have CUDA 12.8, so my path includes v12.8; check yours]

Do the same copy-paste for all files in the include and lib folders too.
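The copy steps above can be sketched in Python (`shutil.copy2` preserves file timestamps). This is only a sketch under assumed paths — point the arguments at your unzipped cuDNN folder and your installed CUDA version folder (v12.8 in my case):

```python
import os
import shutil

def copy_cudnn_files(cudnn_dir, cuda_dir, subdirs=("bin", "include", "lib")):
    """Copy every file from cuDNN's bin/include/lib into CUDA's matching folders."""
    copied = []
    for sub in subdirs:
        src = os.path.join(cudnn_dir, sub)
        dst = os.path.join(cuda_dir, sub)
        os.makedirs(dst, exist_ok=True)
        for name in os.listdir(src):
            path = os.path.join(src, name)
            if os.path.isfile(path):  # top-level files only; skip subfolders
                shutil.copy2(path, dst)
                copied.append(name)
    return copied

# Example (hypothetical paths — adjust to your system):
# copy_cudnn_files(r"C:\Downloads\cudnn-windows-x86_64",
#                  r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8")
```

Run it from an elevated prompt, since writing into Program Files needs administrator rights.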


How to verify everything is correctly done

Check whether the CUDA environment variables are set.

If they were set automatically, everything is in order; if not, set them manually.
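You can check the relevant variables from Python. A minimal sketch — the CUDA installer normally sets CUDA_PATH and adds the toolkit's bin folder to PATH:

```python
import os

def check_cuda_env():
    """Report whether the CUDA-related environment variables are set."""
    return {
        "CUDA_PATH": os.environ.get("CUDA_PATH"),  # None if not set
        "cuda_on_PATH": any(
            "NVIDIA GPU Computing Toolkit" in p
            for p in os.environ.get("PATH", "").split(os.pathsep)
        ),
    }

for key, value in check_cuda_env().items():
    print(f"{key}: {value}")
```

If CUDA_PATH prints None, add it manually under System Properties → Environment Variables.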

Step 6: Verify the System Is Using CUDA

Install PyTorch

Select the CUDA version you installed and the other options as per your requirements.

Recommendation: You can test this in a virtual environment too.

Run the following script to test your GPU

import torch

print("Number of GPUs:", torch.cuda.device_count())

# get_device_name() raises an error when CUDA is unavailable, so guard it
if torch.cuda.is_available():
    print("GPU Name:", torch.cuda.get_device_name())

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)

-x- 😊 THANK YOU 😊 -x-