Tools for Machine Learning and Natural Language Processing — PyTorch part 1

David Sasu
6 min read · Jul 15, 2021

In this lecture, we will be learning about an open source deep learning framework called PyTorch. PyTorch can be viewed as a library that provides us with packages that we can use to manipulate tensors.

A tensor is basically a mathematical object that is used to hold multidimensional data. Tensors can be represented as n-dimensional arrays of scalars. For instance, a tensor of order 0 is a scalar, a tensor of order 1 is a vector, a tensor of order 2 is a matrix and so on.
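To make the idea of tensor order concrete, here is a minimal sketch (assuming PyTorch is installed and imported as torch, and using the torch.tensor factory) that builds one tensor of each of the first three orders:

import torch

scalar = torch.tensor(3.14)             # order 0: a single value
vector = torch.tensor([1.0, 2.0, 3.0])  # order 1: a one-dimensional array
matrix = torch.tensor([[1.0, 2.0],
                       [3.0, 4.0]])     # order 2: a two-dimensional array
print(scalar.dim(), vector.dim(), matrix.dim())  # prints: 0 1 2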

In this tutorial, we will use the PyTorch framework to look at the following activities with regard to tensor manipulation:

  1. Creating tensors
  2. Operations with tensors
  3. Indexing, slicing and joining with tensors
  4. Computing gradients with tensors

Creating tensors:

Tensors can be created in PyTorch by executing the torch.Tensor(x,y) command, where x and y represent the dimensions of the tensor that is to be created. Note that a tensor created this way is uninitialised, so its cells contain whatever values happened to be in memory.

To create a tensor filled with numbers drawn uniformly at random from the interval [0, 1), we can execute the torch.rand(x,y) command, where x and y represent the dimensions of the tensor that we want to create.

To create a tensor filled with numbers drawn at random from the standard normal distribution (mean 0, variance 1), we can execute the torch.randn(x,y) command, where x and y represent the dimensions of the tensor that we want to create.

Note that after a tensor has been created, you can display its dimensions by inspecting its shape attribute or by calling its size() method.

You can create a tensor filled with zeros by executing the zeros function. For instance, torch.zeros(2,3) will generate a tensor of dimensions 2 by 3 with each cell in the tensor filled with a zero.

You can also create a tensor filled with ones by executing the ones function. For instance, torch.ones(2,3) will generate a tensor of dimensions 2 by 3 with each cell in the tensor filled with a one.

After a tensor has been created, all the values within the tensor can be replaced without creating a new tensor. This can be done using the fill_ function. The trailing underscore (_) follows PyTorch's convention for in-place operations: the tensor is modified directly and no new tensor is created. For instance, after a tensor T filled with zeros has been created with the T = torch.zeros(2,3) command, the command T.fill_(5) can be used to replace all of the zeros in T with 5.
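As a rough sketch of the creation commands described so far (the values produced by torch.Tensor, torch.rand and torch.randn will differ from run to run):

import torch

T = torch.Tensor(2, 3)    # 2 x 3 tensor, cells hold uninitialised memory
R = torch.rand(2, 3)      # 2 x 3 tensor of uniform random values in [0, 1)
N = torch.randn(2, 3)     # 2 x 3 tensor of standard-normal random values
print(T.shape)            # torch.Size([2, 3])
Z = torch.zeros(2, 3)     # 2 x 3 tensor of zeros
O = torch.ones(2, 3)      # 2 x 3 tensor of ones
Z.fill_(5)                # in place: every cell of Z becomes 5.0
print(Z)                  # tensor([[5., 5., 5.], [5., 5., 5.]])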

Another nice possibility when working with tensors is that you can initialise them explicitly with regular Python lists. For instance, a tensor housing the values 1, 2, 3, 4, 5, 6 arranged in two rows can be created by executing the command torch.Tensor([[1,2,3],[4,5,6]]) (note the nested lists, one inner list per row).

You can also create a tensor by converting an already existing numpy array into a tensor. This can be done by using the torch.from_numpy(na) function, where 'na' represents the numpy array that you wish to convert into a tensor. Note that the resulting tensor shares its memory with the original array, so modifying one modifies the other.
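A minimal sketch of both options, assuming numpy is available alongside PyTorch:

import numpy as np
import torch

L = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])   # from a nested Python list, one inner list per row
na = np.array([[1.0, 2.0], [3.0, 4.0]])
T = torch.from_numpy(na)        # shares memory with the numpy array
print(L.shape, T.shape)         # torch.Size([2, 3]) torch.Size([2, 2])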

Tensor types

Whenever you create a tensor, the default type of the elements in its cells is a 32-bit float (torch.float32). However, you can create a tensor to house elements of any type that you desire. You can do this either by specifying the type of the elements that you want your tensor to contain at the point of tensor initialisation, or by casting an already existing tensor from one type to another.

For instance, to initialise a tensor containing 'longs' (64-bit integers) instead of 'floats', you can execute the following command: torch.LongTensor(x,y). This command will create a tensor with dimensions x and y that contains longs instead of floats.

However, if you have already created a tensor T = torch.Tensor(x,y) and you want to cast it to contain longs, you can execute the following command: T.long(). Note that this returns a new tensor with the cast elements rather than modifying T in place.
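A short sketch of the two approaches (the dtype names in the comments assume the default PyTorch settings):

import torch

L = torch.LongTensor(2, 3)   # uninitialised 2 x 3 tensor of 64-bit integers
T = torch.Tensor(2, 3)       # uninitialised 2 x 3 tensor of 32-bit floats
T_long = T.long()            # new tensor with the elements cast to longs
print(L.dtype, T.dtype, T_long.dtype)   # torch.int64 torch.float32 torch.int64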

Tensor operations

Introduction

You can perform regular mathematical operations such as +, -, * and / using tensor objects as the operands of these operations. You can also perform these same operations by using functions from the PyTorch library.

For instance, consider the following 2 tensor objects, tensor1 and tensor2, where tensor1= torch.Tensor([1,2,3]) and tensor2 = torch.Tensor([1,1,2]). We can perform the addition operation on these 2 tensor objects in the following ways:

a) tensor1 + tensor2

b) torch.add(tensor1, tensor2)

c) tensor1.add(tensor2)
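A small sketch confirming that all three forms give the same result:

import torch

tensor1 = torch.Tensor([1, 2, 3])
tensor2 = torch.Tensor([1, 1, 2])
a = tensor1 + tensor2              # operator form
b = torch.add(tensor1, tensor2)    # library-function form
c = tensor1.add(tensor2)           # method form
print(a, b, c)                     # each is tensor([2., 3., 5.])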

Dimension-based tensor operations

  • the arange function: This operation returns a one-dimensional tensor populated with values within the range specified in the function's parameters. For instance, torch.arange(6) would yield tensor([0, 1, 2, 3, 4, 5]).
  • the view function: This operation reshapes the tensor that it is applied on according to the dimensions provided as its parameters (the total number of elements must stay the same). For instance, given the tensor T = torch.arange(6), T.view(2,3) will reshape T into a tensor that has 2 rows and 3 columns.
  • the sum function: This operation adds the values within a tensor along either the rows or the columns. In a two-dimensional tensor, the rows are dimension 0 and the columns are dimension 1. Summing along dimension 0 collapses the rows, adding the entries down each column, while summing along dimension 1 collapses the columns, adding the entries across each row. For example, consider the tensor T = torch.Tensor([[1,2,3],[4,5,6]]). If the command torch.sum(T, dim=0) is executed, the resulting tensor would be tensor([5., 7., 9.]). However, if using the same tensor T, the command torch.sum(T, dim=1) is executed instead, the resulting tensor would be tensor([6., 15.]).
  • the transpose function: This operation is used to swap two dimensions of a given tensor. For instance, consider the tensor T = torch.Tensor([[1,2,3],[4,5,6]]); torch.transpose(T, 0, 1) would yield tensor([[1., 4.], [2., 5.], [3., 6.]]). This is because the transpose function makes dimension 0 of the original tensor dimension 1 of the result, and dimension 1 of the original tensor dimension 0 of the result.
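The following sketch runs the four operations above on small example tensors:

import torch

T = torch.arange(6)          # tensor([0, 1, 2, 3, 4, 5])
M = T.view(2, 3)             # reshaped into 2 rows and 3 columns
F = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])
print(torch.sum(F, dim=0))   # tensor([5., 7., 9.])  (summed down each column)
print(torch.sum(F, dim=1))   # tensor([ 6., 15.])    (summed across each row)
print(torch.transpose(F, 0, 1))   # tensor([[1., 4.], [2., 5.], [3., 6.]])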

Slicing a tensor

Slicing enables us to obtain a particular segment of a tensor. Consider T = torch.Tensor([[1,2,3],[4,5,6]]). Suppose we wanted to obtain the first two elements in the first row of the tensor: we would first slice the tensor so that we obtain only the first row, and then slice that result to obtain just its first two elements. This is accomplished with the command T[:1, :2], where ':1' selects the first row and ':2' selects the first two columns. The command yields tensor([[1., 2.]]) (the row dimension is kept because ':1' is itself a slice; T[0, :2] would return the one-dimensional tensor([1., 2.])).
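A minimal sketch of the slicing example above:

import torch

T = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])
print(T[:1, :2])   # tensor([[1., 2.]])  (first row, first two columns)
print(T[0, :2])    # tensor([1., 2.])    (same values as a one-dimensional tensor)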

Indexing a tensor

Indexing enables us to select a single value from a particular row within the tensor. Consider T = torch.Tensor([[1,2,3],[4,5,6]]). To select the value 2 from the first row of the tensor, we index into row 0 and column 1 by executing the command T[0, 1], which returns tensor(2.).
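A minimal sketch of the indexing example (the .item() call, an extra convenience not mentioned above, extracts the value as a plain Python number):

import torch

T = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])
print(T[0, 1])         # tensor(2.)  (row 0, column 1)
print(T[0, 1].item())  # 2.0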

Complex indexing: noncontiguous indexing of a tensor

You can also use the index_select() function to select specific rows or columns of a tensor. Consider a tensor T = torch.Tensor([[1,2,3],[4,5,6],[7,8,9]]). We can use index_select to obtain the first and third rows of the tensor by executing the following commands:

indices = torch.LongTensor([0, 2])

torch.index_select(T, dim=0, index=indices)

Where the 'index' parameter of the index_select function indicates which indices should be selected from our tensor T, and the 'dim' parameter indicates along which dimension (row or column, with rows represented by 0 and columns by 1) of the tensor the selection should happen.

When the code is executed, the expected output is:

tensor([[1., 2., 3.], [7., 8., 9.]])

Concatenating tensors

You can concatenate a tensor T to another tensor A along the row dimension by using the torch.cat([T,A], dim=0) command.

Alternatively, you can concatenate the tensor T to the tensor A along the column dimension by using the torch.cat([T,A], dim=1) command.

You can also stack a tensor object T on top of another tensor object A along a new dimension by using the command torch.stack([T,A]). Unlike cat, which joins the tensors along an existing dimension, stack creates a new dimension, so stacking two 2 x 3 tensors yields a 2 x 2 x 3 tensor.
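A short sketch contrasting cat and stack on two hypothetical 2 x 3 tensors T and A:

import torch

T = torch.Tensor([[1, 2, 3],
                  [4, 5, 6]])
A = torch.Tensor([[7, 8, 9],
                  [10, 11, 12]])
print(torch.cat([T, A], dim=0).shape)   # torch.Size([4, 3])  (rows stacked)
print(torch.cat([T, A], dim=1).shape)   # torch.Size([2, 6])  (columns joined)
print(torch.stack([T, A]).shape)        # torch.Size([2, 2, 3])  (new dimension)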

I hope that this tutorial has given you a basic overview of the PyTorch library. We are going to be using this library to build fun projects together :)
