Support Vector Machines — Lecture series — Karush-Kuhn-Tucker conditions part 6
In the last post, we looked at the idea of “complementary slackness” and how it links the variables of the primal problem to the constraints of the dual problem, and vice versa. In this post, we will look at why complementary slackness holds.
Understand why the concept of ‘complementary slackness’ is indeed true.
How can we be sure that ‘complementary slackness’ holds every single time?
To answer this question, we first have to recall the weak duality theorem, which states that if x is feasible for the primal problem and y is feasible for the dual problem, then yb ≤ yAx ≤ cx, as showcased in Fig. 1.
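As a concrete sanity check, the weak-duality inequality can be verified numerically on a small linear program. The data below (c, A, b and the feasible points x and y) is an illustrative example chosen for this sketch, not taken from the lecture:

```python
import numpy as np

# Primal: min c.x  subject to  A x >= b, x >= 0
# Dual:   max y.b  subject to  y A <= c, y >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 4.0])

x = np.array([3.0, 1.0])  # feasible for the primal: A @ x = [4, 5] >= b
y = np.array([0.0, 1.0])  # feasible for the dual:   y @ A = [1, 2] <= c

# Weak duality: y.b <= y.A.x <= c.x
print(y @ b, y @ A @ x, c @ x)  # 4.0 5.0 9.0
```

Note the gap between 4.0 and 9.0: neither x nor y here is optimal, so the inequality is strict.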
However, if x* solves the primal problem and y* solves the dual problem, the first and last terms of the inequality in Fig. 1 are equal, so y*b = y*Ax* = cx*. This is demonstrated in Fig. 2.
The equation in Fig. 2 can be rewritten as:

(c - y*A) x* = 0

And further expanded in more detail as:

∑j (c - y*A)j x*j = 0
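To make the expansion concrete, here is a tiny LP where the summation can be evaluated directly. The problem data and the optimal x* and y* below were worked out by hand for illustration; they are not from the lecture:

```python
import numpy as np

# Illustrative LP:
# Primal: min x1 + 3*x2  subject to  x1 + x2 >= 2, x >= 0
# Dual:   max 2*y        subject to  y <= 1, y <= 3, y >= 0
c = np.array([1.0, 3.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x_star = np.array([2.0, 0.0])  # primal optimum, c.x* = 2
y_star = np.array([1.0])       # dual optimum,   y*.b = 2

slack = c - y_star @ A          # (c - y*A)_j, non-negative component-wise
print(slack)                    # [0. 2.]
print(slack @ x_star)           # 0.0 -- the summation above
```

Each term (c - y*A)j x*j is non-negative, yet they sum to exactly zero.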
Where (c-y*A)j represents the jth component of c - y*A. For each j, we know that (c-y*A)j is a non-negative number because y* is feasible for the dual, and x*j is also non-negative. So their product (c-y*A)j x*j must be non-negative as well.
The only way you can sum up a bunch of non-negative numbers and get zero is if all of the numbers in the summation are zero. Therefore, if ∑j (c-y*A)j x*j = 0, the product of (c-y*A)j and x*j must be zero for every j. This can only happen when (c-y*A)j is zero, x*j is zero, or both of them are zero.
And this is exactly what the complementary slackness condition says: if x*j > 0, then the jth dual constraint binds (that is, (c-y*A)j=0) and if the jth dual constraint does not bind (that is, (c-y*A)j > 0), then x*j = 0.
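The per-component dichotomy can be checked mechanically. Reusing the same hand-solved illustrative LP as the numerical sketch (again, example data, not from the lecture):

```python
import numpy as np

# Illustrative LP (hand-solved):
# Primal: min x1 + 3*x2  subject to  x1 + x2 >= 2, x >= 0   ->  x* = (2, 0)
# Dual:   max 2*y        subject to  y <= 1, y <= 3, y >= 0 ->  y* = 1
c = np.array([1.0, 3.0])
A = np.array([[1.0, 1.0]])
x_star = np.array([2.0, 0.0])
y_star = np.array([1.0])

slack = c - y_star @ A  # dual slacks (c - y*A)_j

for j in range(len(c)):
    # Complementary slackness: at least one of the pair vanishes.
    # j=0: x*_0 = 2 > 0, so the dual constraint binds (slack 0).
    # j=1: slack is 2 > 0, so x*_1 = 0.
    assert slack[j] == 0.0 or x_star[j] == 0.0
    print(f"j={j}: (c - y*A)_j = {slack[j]}, x*_j = {x_star[j]}")
```

For j=0 the variable is positive and the dual constraint binds; for j=1 the dual constraint is slack and the variable is zero — exactly the condition stated above.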
We are almost at the end of our lecture series on Support Vector Machines! :)
In the next post, we will look at a very important concept in the topic of Support Vector Machines called Kernels.