# Support Vector Machines — Lecture series — Karush-Kuhn-Tucker conditions part 6

In the last post, we looked at the idea of “complementary slackness” and how it showcases the relationship between the variables of the primal problem and the constraints of the dual problem, and vice versa. In this post, we will look at the reason why complementary slackness holds true.

Learning objective:

Understand why the concept of ‘complementary slackness’ is indeed true.

Main question:

How can we ensure that the concept of ‘complementary slackness’ holds every single time?

To answer this question, we first have to recall the duality theorem, which…

# Tools for Machine Learning and Natural Language Processing — PyTorch part 1

In this lecture, we will be learning about an open-source deep learning framework called PyTorch. PyTorch can be viewed as a library that provides packages for manipulating tensors.

A tensor is a mathematical object used to hold multidimensional data. Tensors can be represented as n-dimensional arrays of scalars. For instance, a tensor of order 0 is a scalar, a tensor of order 1 is a vector, a tensor of order 2 is a matrix, and so on.
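To make the orders concrete, here is a minimal sketch using PyTorch’s `torch.tensor` constructor (the values are arbitrary illustrations, and PyTorch is assumed to be installed):

```python
import torch

# Order 0: a scalar (a single number).
scalar = torch.tensor(3.14)

# Order 1: a vector (a 1-D array of scalars).
vector = torch.tensor([1.0, 2.0, 3.0])

# Order 2: a matrix (a 2-D array).
matrix = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# .dim() reports the order (number of dimensions) of each tensor.
print(scalar.dim(), vector.dim(), matrix.dim())  # 0 1 2
```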

In this tutorial, we will be using the PyTorch framework to look at…

# Natural Language Processing — Neural Networks and Neural Language Models Lecture series — Training a feed forward neural network part 2 (The cross-entropy loss function)

In the previous post, we spoke about what it means to ‘train’ a feed-forward neural network. We also briefly touched on the different tasks performed in the training process. In this post, we will focus solely on one loss function: the cross-entropy loss.

What is a loss function and what role does it play in training a neural network?

As briefly stated in the previous post, the main purpose of the loss function is to indicate how close the predicted output value of the neural network is to…
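As a preview of the computation itself, here is a minimal sketch in plain Python with a hypothetical one-hot target (the class probabilities below are illustrative, not values from this series):

```python
import math

def cross_entropy(true_dist, pred_dist):
    # Cross-entropy: minus the sum over classes of
    # (true probability) * log(predicted probability).
    # Terms where the true probability is zero contribute nothing.
    return -sum(t * math.log(p) for t, p in zip(true_dist, pred_dist) if t > 0)

# Hypothetical 3-class example: the correct class is index 1 (one-hot target).
y_true = [0.0, 1.0, 0.0]
y_pred = [0.1, 0.7, 0.2]   # the network's predicted probabilities

loss = cross_entropy(y_true, y_pred)  # equals -log(0.7), about 0.357
```

The closer the predicted probability of the correct class gets to 1, the smaller this loss becomes, which is exactly the “how close is the prediction” signal the training process needs.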

# Support Vector Machines — Lecture series — Karush-Kuhn-Tucker conditions part 5

In the previous post, we spoke about the duality theorem and gained an understanding of how it showcases the relationship between primal problems and their corresponding dual problems. In this post, we will look at the concept of ‘complementary slackness’.

Learning objective:

Understand the concept of ‘complementary slackness’.

Main question:

Is there a relationship between the variables in the primal problem and the constraints in the dual problem, and vice versa? And how is this relationship expressed?

First, to refresh your understanding of how primal problems and dual problems are related, you can read this…
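For orientation, the condition itself can be written compactly. A sketch in generic notation (here $x^{*}$ is the primal optimum and $\lambda_i^{*}$ the dual multipliers; these symbols are my own choice, not fixed by this series):

```latex
\lambda_i^{*} \, g_i(x^{*}) = 0 \quad \text{for all } i
```

That is, for each inequality constraint $g_i(x) \le 0$, either the multiplier $\lambda_i^{*}$ is zero, or the constraint is active ($g_i(x^{*}) = 0$) — at least one of the pair must be “slack-free”.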

# Natural Language Processing — Neural Networks and Neural Language Models Lecture series — Training a Feed Forward Neural Network part 1

In the previous post, we looked at an overview of a feed-forward neural network and its graphical representation. In this post, I will introduce you to the basic ideas behind training a feed-forward neural network.

What does it mean to ‘train’ a feed-forward neural network?

A feed-forward neural network is a supervised machine learning algorithm. …

# Support Vector Machines — Lecture series — Karush-Kuhn-Tucker conditions part 4

In the previous post, we looked at what primal and dual problems are. In this post, we will look at a theorem that captures what we can infer from the relationship between dual and primal linear programming problems. This theorem is called the ‘duality theorem’.

Learning objective:

The main objective of this post is to gain an understanding of the duality theorem.

Main question:

Can we draw any conclusions from the relationship between primal problems and their corresponding dual problems?

The answer is yes, and the duality theorem aids us in doing…
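As a hedged sketch of what the theorem asserts for a linear program (generic symbols, not notation fixed by this series): if $x$ is any feasible point of a primal minimization problem and $y$ any feasible point of its dual maximization problem, then

```latex
\underbrace{b^{\top} y}_{\text{dual objective}} \;\le\; \underbrace{c^{\top} x}_{\text{primal objective}}
```

This is weak duality; the strong form of the duality theorem says that when both problems have optimal solutions, the two objective values coincide at those optima.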

# Natural Language Processing — Neural Networks and Neural Language Models Lecture series — Feed-Forward Neural Networks

In this post, we will be learning about the concept of feed-forward neural networks and the mathematical representation of this concept.

Feed-Forward Neural Network:

A feed-forward neural network is simply a multi-layer network of neural units in which the outputs from the units in each layer are passed to the units in the next higher layer. These networks do not have any cycles within them. That is, the outputs within the network do not flow in a cyclical manner.

To gain a refreshed understanding of what neural units are and how they work, you can read about them here.

Graphical…
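To ground the “outputs pass to the next higher layer” idea, here is a small sketch of a forward pass in plain Python. The sigmoid activation, the weights, and the layer sizes are illustrative assumptions, not values from this series:

```python
import math

def sigmoid(z):
    # A common unit nonlinearity; squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each unit computes a weighted sum of ALL the previous layer's outputs,
    # adds its bias, and applies the nonlinearity. Outputs only flow forward,
    # never back -- there are no cycles.
    return [sigmoid(sum(w * x for w, x in zip(unit_weights, inputs)) + b)
            for unit_weights, b in zip(weights, biases)]

# Hypothetical network: 2 inputs -> 2 hidden units -> 1 output unit.
x = [1.0, 0.5]
hidden = layer(x, weights=[[0.2, 0.8], [0.5, -0.3]], biases=[0.1, 0.0])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
```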

# Support Vector Machines — Lecture series — Karush-Kuhn-Tucker conditions part 3

In the previous post, I explained what the various KKT conditions were about, with the exception of the ‘complementary slackness’ condition. In the subsequent series of posts, I intend to break this concept down so that you can understand it fully. To begin this quest, I will first talk about the concept of ‘Primal’ and ‘Dual’ linear programming problems.

Learning objective:

To build an understanding of the complementary slackness condition, I will first explain the concept of ‘Primal’ and ‘Dual’ problems.

Main question:

What are the ‘primal’ and ‘dual’ natures of linear programming problems?

Consider…
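As a preview, the standard primal/dual pairing for a linear program looks like this. A sketch in generic notation ($c$, $b$, $A$ are the cost vector, constraint vector, and constraint matrix; these symbols are my own, not fixed by this series):

```latex
\begin{aligned}
\textbf{Primal:}\quad & \min_{x}\; c^{\top} x \quad \text{s.t. } A x \ge b,\; x \ge 0 \\[4pt]
\textbf{Dual:}\quad  & \max_{y}\; b^{\top} y \quad \text{s.t. } A^{\top} y \le c,\; y \ge 0
\end{aligned}
```

Notice the symmetry: each primal constraint gives rise to one dual variable, and each primal variable gives rise to one dual constraint.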

# Natural Language Processing — Neural Networks and Neural Language Models Lecture series — The XOR problem part 2

In the previous post, we spoke about the XOR problem and mentioned how neural networks can be used to solve it. In this post, we will talk about how to formulate a neural network structure that solves the XOR problem.

The neural network structure that we will be discussing was formulated by Goodfellow and it is demonstrated in Fig. 1 below:

The architecture of the neural network in Fig. 1 is made up of an input layer, consisting of x1 and x2, a middle layer, consisting of h1 and h2, and an output layer…
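Goodfellow’s well-known XOR solution uses ReLU hidden units; the weights below are the ones commonly quoted from his Deep Learning textbook, though whether they match Fig. 1 exactly is an assumption on my part:

```python
def relu(z):
    # Rectified linear unit: passes positive values, clips negatives to zero.
    return max(0.0, z)

def xor_net(x1, x2):
    # Hidden layer: h = ReLU(W x + c), with W = [[1, 1], [1, 1]], c = [0, -1].
    h1 = relu(1.0 * x1 + 1.0 * x2 + 0.0)
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)
    # Output layer: y = 1*h1 - 2*h2 (weights w = [1, -2], bias b = 0).
    return h1 - 2.0 * h2

# The network reproduces the XOR truth table exactly.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))
```

The second hidden unit h2 only switches on when both inputs are 1, and its weight of -2 then cancels the “or-like” response of h1 — that cancellation is what makes the non-linearly-separable XOR mapping possible.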

# Support Vector Machines — Lecture series — Karush-Kuhn-Tucker conditions part 2

In the previous post, we looked at the various conditions that make up the Karush-Kuhn-Tucker (KKT) conditions. In this post, we will explore each of those conditions and explain what they mean.

Learning objective:

Understand the main ideas behind the various KKT conditions.

Main questions:

What do the following KKT conditions mean:

1. The stationary condition
2. The primal feasibility condition
3. The dual feasibility condition
4. The complementary slackness condition

The stationary condition:

The stationary condition simply states that the selected point must be a stationary point. A stationary point is a point at which the function is neither increasing nor decreasing, i.e., where its gradient is zero. When…
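For a constrained problem, stationarity is expressed through the Lagrangian rather than the objective alone. A sketch in generic notation (minimizing $f$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, with multipliers $\mu_i, \lambda_j$; the symbols are my own, not fixed by this series):

```latex
\nabla f(x^{*}) + \sum_{i} \mu_i \,\nabla g_i(x^{*}) + \sum_{j} \lambda_j \,\nabla h_j(x^{*}) = 0
```

In the unconstrained case all multipliers vanish and this collapses to the familiar $\nabla f(x^{*}) = 0$, i.e., the plain stationary-point condition described above.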