PyTorch tensors
Before information can be processed by algorithms, it needs to be converted to floating point numbers. Indeed, you don’t pass a sentence or an image through a model; instead you input numbers representing a sequence of words or pixel values.
All these floating point numbers need to be stored in a data structure. The most suited structure is multidimensional (to hold several layers of information) and homogeneous—all data of the same type—for efficiency.
Python already has several multidimensional array structures (e.g. NumPy’s ndarray) but the particularities of deep learning call for special characteristics such as the ability to run operations on GPUs and/or in a distributed fashion, the ability to keep track of computation graphs for automatic differentiation, and different defaults (lower precision for improved training performance).
The PyTorch tensor is a Python data structure with these characteristics that can easily be converted to/from NumPy’s ndarray and integrates well with other Python libraries such as Pandas.
In this section, we will explore the basics of PyTorch tensors.
Importing PyTorch
First of all, we need to import the torch library:

import torch
We can check its version with:
torch.__version__
'2.0.1'
Creating tensors
There are many ways to create tensors:
- torch.tensor: input individual values
- torch.arange: 1D tensor with a sequence of integers
- torch.linspace: 1D linear scale tensor
- torch.logspace: 1D log scale tensor
- torch.rand: random numbers from a uniform distribution on [0, 1)
- torch.randn: numbers from the standard normal distribution
- torch.randperm: random permutation of integers
- torch.empty: uninitialized tensor
- torch.zeros: tensor filled with 0
- torch.ones: tensor filled with 1
- torch.eye: identity matrix
From input values
t = torch.tensor(3)
Your turn:
Without using the shape descriptor, try to get the shape of the following tensors:

torch.tensor([0.9704, 0.1339, 0.4841])

torch.tensor([[0.9833, 0.2562],
              [0.9524, 0.0354],
              [0.0607, 0.6420]])

torch.tensor([[[0.8360, 0.0317],
               [0.4604, 0.2699],
               [0.3289, 0.1171]]])

torch.tensor([[[[0.2305, 0.4719],
                [0.0730, 0.8737],
                [0.0796, 0.2745]]],
              [[[0.3287, 0.9040],
                [0.1534, 0.9442],
                [0.0948, 0.1480]]]])
Let’s create a random tensor with a single element:
t = torch.rand(1)
t
tensor([0.2453])
We can extract the value from a tensor with one element:
t.item()
0.24530160427093506
All these tensors have a single element, but an increasing number of dimensions:
torch.rand(1)
tensor([0.0539])
torch.rand(1, 1)
tensor([[0.1673]])
torch.rand(1, 1, 1)
tensor([[[0.8521]]])
torch.rand(1, 1, 1, 1)
tensor([[[[0.4083]]]])
You can tell the number of dimensions of a tensor easily by counting the number of opening square brackets.
torch.rand(1, 1, 1, 1).dim()
4
Tensors can have multiple elements in one dimension:
torch.rand(6)
tensor([0.8350, 0.6043, 0.7573, 0.4312, 0.7121, 0.4655])
torch.rand(6).dim()
1
And multiple elements in multiple dimensions:
torch.rand(2, 3, 4, 5)
tensor([[[[0.5859, 0.2400, 0.3767, 0.4702, 0.9221],
[0.7233, 0.6061, 0.1631, 0.7917, 0.0443],
[0.1168, 0.4800, 0.7009, 0.7263, 0.1385],
[0.8393, 0.6878, 0.6244, 0.3847, 0.0753]],
[[0.8900, 0.3066, 0.4179, 0.4101, 0.2723],
[0.3421, 0.8357, 0.0095, 0.6633, 0.6996],
[0.5569, 0.8118, 0.5589, 0.1866, 0.7879],
[0.9157, 0.3878, 0.9922, 0.6381, 0.8169]],
[[0.0905, 0.1064, 0.2240, 0.4941, 0.5899],
[0.6486, 0.5417, 0.6501, 0.5308, 0.0921],
[0.7476, 0.6017, 0.0442, 0.8713, 0.7512],
[0.0471, 0.4094, 0.3592, 0.0056, 0.2595]]],
[[[0.0631, 0.2496, 0.8283, 0.6834, 0.0634],
[0.7097, 0.6103, 0.5564, 0.7229, 0.7310],
[0.3733, 0.0948, 0.7278, 0.6178, 0.1813],
[0.1389, 0.2747, 0.5813, 0.3785, 0.6673]],
[[0.9272, 0.8388, 0.9478, 0.9429, 0.4678],
[0.6505, 0.8999, 0.3476, 0.9522, 0.2604],
[0.0610, 0.6362, 0.9197, 0.1989, 0.9982],
[0.0363, 0.6803, 0.1684, 0.5105, 0.7155]],
[[0.6207, 0.5458, 0.0307, 0.0814, 0.8699],
[0.0394, 0.2205, 0.4426, 0.1263, 0.1217],
[0.6955, 0.6588, 0.8799, 0.8124, 0.3122],
[0.9162, 0.5047, 0.9164, 0.4405, 0.9604]]]])
torch.rand(2, 3, 4, 5).dim()
4
torch.rand(2, 3, 4, 5).numel()
120
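As a quick sanity check (a minimal sketch), numel() always equals the product of the sizes along every dimension:

```python
import math

import torch

t = torch.rand(2, 3, 4, 5)
# Total number of elements = product of the sizes of all dimensions
print(t.numel(), math.prod(t.size()))  # 120 120
```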
torch.ones(2, 4)
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.]])
t = torch.rand(2, 3)
torch.zeros_like(t)  # Matches the size of t
tensor([[0., 0., 0.],
[0., 0., 0.]])
torch.ones_like(t)
tensor([[1., 1., 1.],
[1., 1., 1.]])
torch.randn_like(t)
tensor([[-0.0162, -0.6145, 0.2569],
[-1.0712, -0.3585, 0.2040]])
torch.arange(2, 10, 3)  # From 2 (inclusive) to 10 (exclusive) in increments of 3
tensor([2, 5, 8])
torch.linspace(2, 10, 3)  # 3 elements from 2 to 10 on the linear scale
tensor([ 2., 6., 10.])
torch.logspace(2, 10, 3)  # Same on the log scale
tensor([1.0000e+02, 1.0000e+06, 1.0000e+10])
torch.randperm(3)
tensor([1, 0, 2])
torch.eye(3)
tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
Conversion to/from NumPy
PyTorch tensors can be converted to NumPy ndarrays and vice versa very efficiently, as both objects share the same memory.
From PyTorch tensor to NumPy ndarray
t = torch.rand(2, 3)
t
tensor([[0.3260, 0.9113, 0.9419],
[0.4003, 0.0047, 0.3370]])
t_np = t.numpy()
t_np
array([[0.32604218, 0.9112866 , 0.94185126],
[0.40029067, 0.00469434, 0.33698457]], dtype=float32)
From NumPy ndarray to PyTorch tensor
import numpy as np
a = np.random.rand(2, 3)
a
array([[0.17371149, 0.49316171, 0.12868488],
[0.17542802, 0.39272544, 0.27179452]])
a_pt = torch.from_numpy(a)
a_pt
tensor([[0.1737, 0.4932, 0.1287],
[0.1754, 0.3927, 0.2718]], dtype=torch.float64)
Note the different default data types.
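We can verify the shared memory directly (a small sketch for CPU tensors): modifying the tensor in place also changes the ndarray obtained from it.

```python
import torch

t_shared = torch.ones(2, 3)
arr = t_shared.numpy()   # A view on the same memory, not a copy

t_shared.add_(1)         # In-place modification of the tensor...
print(arr)               # ...is visible through the ndarray: all values are now 2
```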
Indexing tensors
t = torch.rand(3, 4)
t
tensor([[0.4746, 0.6279, 0.7102, 0.2380],
[0.0749, 0.7746, 0.6106, 0.1359],
[0.6712, 0.0485, 0.7267, 0.3550]])
t[:, 2]
tensor([0.7102, 0.6106, 0.7267])
t[1, :]
tensor([0.0749, 0.7746, 0.6106, 0.1359])
t[2, 3]
tensor(0.3550)
A word of caution about indexing
While indexing elements of a tensor to extract some of the data as a final step of some computation is fine, you should not use indexing to run operations on tensor elements in a loop as this would be extremely inefficient.
Instead, you want to use vectorized operations.
Vectorized operations
Since PyTorch tensors are homogeneous (i.e. made of a single data type), as with NumPy’s ndarrays, operations are vectorized and thus fast.
NumPy is mostly written in C, PyTorch in C++. With either library, when you run vectorized operations on arrays/tensors, you don’t use raw Python (slow) but compiled C/C++ code (much faster).
Here is an excellent post explaining Python vectorization & why it makes such a big difference.
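To make the difference concrete, here is a small timing sketch (the exact numbers depend on your hardware) comparing an element-by-element Python loop with the vectorized sum():

```python
import timeit

import torch

v = torch.rand(100_000)

def loop_sum(v):
    """Sum the elements one by one with Python-level indexing (slow)."""
    total = 0.0
    for x in v:
        total += x.item()
    return total

loop_time = timeit.timeit(lambda: loop_sum(v), number=1)
vec_time = timeit.timeit(lambda: v.sum().item(), number=1)
print(f"loop: {loop_time:.4f} s, vectorized: {vec_time:.6f} s")
```

The vectorized version is typically orders of magnitude faster because the whole reduction runs in compiled C++ code.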
Data types
Default data type
Since PyTorch tensors were built with efficiency in mind for neural networks, the default data type is 32-bit floating points.
This is sufficient for accuracy and much faster than 64-bit floating points.
By contrast, NumPy ndarrays use 64-bit as their default.
t = torch.rand(2, 4)
t.dtype
torch.float32
Setting data type at creation
The type can be set with the dtype
argument:
t = torch.rand(2, 4, dtype=torch.float64)
t
tensor([[0.5718, 0.4599, 0.2844, 0.5194],
[0.6349, 0.7234, 0.1780, 0.0882]], dtype=torch.float64)
Printed tensors only display attributes whose values differ from the defaults.
t.dtype
torch.float64
Changing data type
t = torch.rand(2, 4)
t.dtype
torch.float32
t2 = t.type(torch.float64)
t2.dtype
torch.float64
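The more general .to() method (which we will meet again for devices) can also convert dtypes; a small sketch:

```python
import torch

t = torch.rand(2, 4)        # torch.float32 by default
t2 = t.to(torch.float64)    # Same values, converted to double precision
print(t2.dtype)             # torch.float64
```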
List of data types
| dtype | Description |
|---|---|
| torch.float16 / torch.half | 16-bit / half-precision floating point |
| torch.float32 / torch.float | 32-bit / single-precision floating point |
| torch.float64 / torch.double | 64-bit / double-precision floating point |
| torch.uint8 | unsigned 8-bit integers |
| torch.int8 | signed 8-bit integers |
| torch.int16 / torch.short | signed 16-bit integers |
| torch.int32 / torch.int | signed 32-bit integers |
| torch.int64 / torch.long | signed 64-bit integers |
| torch.bool | Boolean |
Simple operations
t1 = torch.tensor([[1, 2], [3, 4]])
t1
tensor([[1, 2],
[3, 4]])
t2 = torch.tensor([[1, 1], [0, 0]])
t2
tensor([[1, 1],
[0, 0]])
The operation is performed between elements at corresponding locations:
t1 + t2
tensor([[2, 3],
[3, 4]])
An operation with a scalar is applied to each element of the tensor:
t1 + 1
tensor([[2, 3],
[4, 5]])
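Operations between tensors of different but compatible shapes are broadcast, following the same rules as NumPy (a small sketch; see PyTorch's broadcasting semantics for the full rules):

```python
import torch

t = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])      # Shape (2, 3)
row = torch.tensor([10, 20, 30])   # Shape (3,) is stretched across both rows

print(t + row)
# tensor([[11, 22, 33],
#         [14, 25, 36]])
```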
Reduction
t = torch.ones(2, 3, 4)
t
tensor([[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]])
t.sum()  # Reduction over all entries
tensor(24.)
Other reduction functions (e.g. mean) behave the same way.
Reduction over a specific dimension:
t.sum(0)
tensor([[2., 2., 2., 2.],
[2., 2., 2., 2.],
[2., 2., 2., 2.]])
t.sum(1)
tensor([[3., 3., 3., 3.],
[3., 3., 3., 3.]])
t.sum(2)
tensor([[4., 4., 4.],
[4., 4., 4.]])
Reduction over multiple dimensions:
t.sum((0, 1))
tensor([6., 6., 6., 6.])
t.sum((0, 2))
tensor([8., 8., 8.])
t.sum((1, 2))
tensor([12., 12.])
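Passing keepdim=True keeps the reduced dimension with size 1, which is convenient for broadcasting the result back against the original tensor (a small sketch):

```python
import torch

t = torch.ones(2, 3, 4)
s = t.sum(1, keepdim=True)   # Shape (2, 1, 4) instead of (2, 4)
print(s.shape)               # torch.Size([2, 1, 4])
```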
In-place operations
With operators suffixed with _:
t1 = torch.tensor([1, 2])
t1
tensor([1, 2])
t2 = torch.tensor([1, 1])
t2
tensor([1, 1])
t1.add_(t2)
tensor([2, 3])
t1.zero_()
tensor([0, 0])
While reassignments will use new addresses in memory, in-place operations will use the same addresses.
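One way to check this (a sketch using data_ptr(), which returns the memory address of the first element):

```python
import torch

a = torch.tensor([1, 2])
addr = a.data_ptr()          # Memory address of the tensor's data

a.add_(1)                    # In-place: the data stays at the same address
assert a.data_ptr() == addr

a = a + 1                    # Reassignment: a new tensor with new memory
assert a.data_ptr() != addr
```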
Tensor views
t = torch.tensor([[1, 2, 3], [4, 5, 6]])
t
t.size()
t.view(6)
t.view(3, 2)
t.view(3, -1)  # Same: with -1, the size is inferred from the other dimensions
Note the difference between t() (transpose) and view():
t1 = torch.tensor([[1, 2, 3], [4, 5, 6]])
t1
tensor([[1, 2, 3],
[4, 5, 6]])
t2 = t1.t()
t2
tensor([[1, 4],
[2, 5],
[3, 6]])
t3 = t1.view(3, 2)
t3
tensor([[1, 2],
[3, 4],
[5, 6]])
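Both t2 and t3 share memory with t1, but they arrange it differently: t() only swaps the strides (leaving t2 non-contiguous), while view() reinterprets the same contiguous layout. A small sketch:

```python
import torch

t1 = torch.tensor([[1, 2, 3], [4, 5, 6]])
t2 = t1.t()          # Transpose: same memory, swapped strides
t3 = t1.view(3, 2)   # View: same memory, same element order

print(t2.is_contiguous(), t3.is_contiguous())   # False True

t1[0, 0] = 99                                   # Both views see the change
print(t2[0, 0].item(), t3[0, 0].item())         # 99 99
```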
Logical operations
t1 = torch.randperm(5)
t1
tensor([1, 2, 0, 4, 3])
t2 = torch.randperm(5)
t2
tensor([1, 2, 4, 0, 3])
Test each element:
t1 > 3
tensor([False, False, False, True, False])
Test corresponding pairs of elements:
t1 < t2
tensor([False, False, True, False, False])
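Such boolean tensors can be used as masks to select elements, another vectorized pattern (a sketch reusing the values shown above):

```python
import torch

t1 = torch.tensor([1, 2, 0, 4, 3])
mask = t1 > 2          # tensor([False, False, False, True, True])
print(t1[mask])        # tensor([4, 3])
```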
Device attribute
Tensor data can be placed in the memory of various processor types:
- the RAM of the CPU,
- the RAM of a GPU with CUDA support,
- the RAM of a GPU with AMD’s ROCm support,
- the RAM of an XLA device (e.g. Cloud TPU) with the torch_xla package.
The values for the device attribute are:

- CPU: 'cpu',
- GPU (CUDA & AMD's ROCm): 'cuda',
- XLA: xm.xla_device().
This last option requires loading the torch_xla package first:
import torch_xla
import torch_xla.core.xla_model as xm
Creating a tensor on a specific device
By default, tensors are created on the CPU.
You can create a tensor on an accelerator by specifying the device attribute (our current training cluster does not have GPUs, so don’t run this on it):
t_gpu = torch.rand(2, device='cuda')
Copying a tensor to a specific device
You can also make copies of a tensor on other devices:
# Make a copy of t on the GPU
t_gpu = t.to(device='cuda')
t_gpu = t.cuda()  # Alternative syntax
# Make a copy of t_gpu on the CPU
t = t_gpu.to(device='cpu')
t = t_gpu.cpu()  # Alternative syntax
Multiple GPUs
If you have multiple GPUs, you can optionally specify which one a tensor should be created on or copied to:
t1 = torch.rand(2, device='cuda:0')  # Create a tensor on the 1st GPU
t2 = t1.to(device='cuda:0')  # Make a copy of t1 on the 1st GPU
t3 = t1.to(device='cuda:1')  # Make a copy of t1 on the 2nd GPU
Or the equivalent short forms:
t2 = t1.cuda(0)
t3 = t1.cuda(1)