Support Vector Machines — Lecture series — Sequential Minimal Optimization Part 3
In the previous post, I described the two main steps of the sequential minimal optimization (SMO) algorithm: first, selecting the Lagrange multipliers to optimize, and second, optimizing the chosen multipliers analytically. In this post, I will focus on how to select which Lagrange multipliers to optimize. To follow along, it may help to revisit the earlier posts on the KKT conditions and Lagrange multipliers.
Understanding how to choose which Lagrange multipliers to optimize.
Suppose you are given a dataset in which each example has its own Lagrange multiplier. How would you know which multipliers to optimize?

The process for selecting the multipliers to optimize is as follows:
- Iterate through all of the examples in the dataset to find those whose Lagrange multipliers are not at a bound, since these are the most likely to violate the KKT conditions. A multiplier is not at a bound when it is neither 0 nor C, i.e. it lies strictly between the two.
- Go through the subset of examples obtained in step 1 and identify those that actually violate the KKT conditions.
- The examples identified in step 2 are the ones whose Lagrange multipliers should be chosen and optimized.
- After the Lagrange multipliers from step 3 have been optimized, make another pass over the entire dataset to check that no example violates the KKT conditions; if any still do, repeat the process.
These four steps ensure that the right Lagrange multipliers are chosen for optimization in the sequential minimal optimization algorithm. In the next post, I will delve into how exactly the chosen Lagrange multipliers are optimized.