An important aspect of working with PyTorch is specifying the device on which tensors and models should be located, such as the CPU or a GPU. Setting the default device globally can be useful for several reasons. It simplifies code by eliminating the need to specify the device manually for every tensor or model, and it ensures that tensors and models end up on the appropriate device even if you forget to pass a device argument somewhere. This tutorial explains how to set the default device globally in PyTorch.
Code
In the following code, we set the default device globally to cuda, which refers to the GPU. Any tensor or model created after this line will be allocated on the GPU by default. For testing purposes, we create a simple tensor filled with random numbers and then print the device on which it is allocated.
import torch

# Make the GPU the default device for newly created tensors and modules.
torch.set_default_device('cuda')
# Created on the GPU without an explicit device argument; prints "cuda:0".
print(torch.randn(1).device)
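If the code should also run on machines without a GPU, one common pattern is to pick the device based on torch.cuda.is_available() before setting it as the default. The snippet below is a minimal sketch of that idea; the tensor shape and the Linear layer are only illustrative.

import torch
import torch.nn as nn

# Fall back to the CPU when no CUDA-capable GPU is present.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
torch.set_default_device(device)

# Both the tensor and the model's parameters are created on the chosen device.
x = torch.randn(4, 8)
model = nn.Linear(8, 2)
print(x.device, next(model.parameters()).device)

Note that set_default_device only affects tensors and modules created after the call; it does not move tensors that already exist.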