Welcome to the 29th part of our machine learning tutorial series and the next part in our Support Vector Machine section. In this tutorial, we're going to talk about the concept of kernels with machine learning.
Recall that, back in the very beginning of our coverage of the Support Vector Machine, we asked whether or not you could use an SVM with data like:
At least with what we know so far, is it possible? No, it is not, at least not like this. One option, however, is to take a new perspective, which we can do by adding a new dimension. For example, with the data above, we could add a 3rd dimension using some sort of function, something like X3 = X1*X2. That might work here, but it might not. Also, what about cases like image analysis, where you might have hundreds of dimensions, or more? Performance is already an issue there, and adding a bunch more dimensions to data that is already highly dimensional will only slow things down further.
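To make the idea concrete, here is a minimal sketch with made-up, XOR-style data (four hypothetical points that no straight line can separate in 2D). Adding the third feature X3 = X1*X2 lifts the data into a space where a flat plane does separate the classes:

```python
import numpy as np

# Hypothetical XOR-style data: not linearly separable in 2D.
X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([1, 1, -1, -1])

# Add a third dimension: X3 = X1 * X2.
X3 = X[:, 0] * X[:, 1]
X_lifted = np.column_stack([X, X3])

# In the lifted space, the plane X3 = 0 separates the classes:
# the +1 points all have X3 = 1, the -1 points all have X3 = -1.
print(X_lifted)
```

This works for four toy points, but hand-picking a lifting function like this gets hopeless as the dimensionality grows, which is exactly the problem kernels address.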
What if I told you that you could do calculations in plausibly infinite dimensions, or, better yet, have those calculations done for you in those dimensions, without ever needing to work within them yourself, and still get the result back?
It turns out that we can actually do this with what are known as kernels. Many people first come into contact with kernels, and maybe last come into contact with them too, in the context of the Support Vector Machine. This can lead to thinking that kernels are mainly for use with Support Vector Machines, but that is actually not the case.
Kernels are similarity functions: they take two inputs and return a measure of similarity, computed using inner products. Since this is a machine learning tutorial, some of you might be wondering whether people use kernels in other machine learning algorithms, and I am here to tell you that they do! Not only can you create new machine learning algorithms with kernels, you can also translate existing machine learning algorithms to use kernels.
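To make "similarity via inner products" concrete, here is a minimal sketch using the degree-2 polynomial kernel and made-up input vectors. The kernel returns exactly the same number as an ordinary dot product taken in a larger, explicitly constructed feature space, but without ever building the larger vectors:

```python
import numpy as np

def poly_kernel(x, z):
    # Degree-2 polynomial kernel: K(x, z) = (1 + x.z)**2.
    return (1 + np.dot(x, z)) ** 2

def explicit_map(v):
    # The 6D feature space this kernel implicitly works in (for 2D input).
    x1, x2 = v
    return np.array([1,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

# Same number, computed two ways: the kernel never builds the 6D vectors.
print(poly_kernel(x, z))                          # kernel trick, in 2D
print(np.dot(explicit_map(x), explicit_map(z)))   # explicit 6D dot product
```

For a degree-2 kernel the explicit space is still small enough to write out by hand; for higher degrees, or the radial basis function kernel, it is not, and the kernel side of the equality is the only practical one.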
What kernels are going to allow us to do, possibly, is work in many dimensions without actually paying the processing cost of doing so. Kernels do have a requirement: they rely on inner products. For the purposes of this tutorial, "dot product" and "inner product" are entirely interchangeable.
What we need to do in order to verify whether or not we can get away with using kernels is confirm that every interaction with our feature space is an inner product. We'll start at the end and work our way back to confirm this.
First, how did we determine the classification of a featureset after training? It was: classification = sign(x.w + b). Is that interaction an inner product? Sure is! That means we can interchange vector x with a transformed vector z: classification = sign(z.w + b).
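As a sketch, with hypothetical (made-up, not actually trained) values for w and b, the decision rule touches the featureset only through one dot product, which is why a transformed vector z of matching length can be dropped in:

```python
import numpy as np

# Hypothetical "trained" values, just to show the shape of the decision rule.
w = np.array([0.5, -0.25])
b = -0.1

def classify(x, w, b):
    # The only interaction with the featureset x is the dot product x.w,
    # so x can be swapped for any transformed vector z of matching length.
    return np.sign(np.dot(x, w) + b)

print(classify(np.array([1.0, 1.0]), w, b))
print(classify(np.array([-1.0, 1.0]), w, b))
```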
Moving along, we're going to revisit our constraints. Recall the constraint equation: yi(xi.w+b) >= 1. How about here? Is the interaction an inner product? Yes! Recall that yi(xi.w+b)-1 >= 0 is identical to yi(xi.w+b) >= 1. So here, too, we can easily replace our x-sub-i value with our new z-sub-i.
Finally, what about the other formal optimization equation, the one for w? Recall that w = sum(alpha_i * y_i * x_i). There's yet another dot product/inner product once w meets a feature vector! Any other equations? How about the dual form of the optimization: L = sum(alpha_i) - 1/2 * sum_i sum_j (alpha_i * alpha_j * y_i * y_j * (x_i . x_j))? Once again, the features appear only inside an inner product.
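As a sketch, assuming the standard expansion w = sum(alpha_i * y_i * x_i) and entirely made-up alpha values, computing w.x reduces to a weighted sum of inner products between feature vectors, which is exactly the form a kernel can stand in for:

```python
import numpy as np

# Hypothetical Lagrange multipliers, labels, and training features.
alphas = np.array([0.5, 0.5])
y = np.array([1, -1])
X = np.array([[2.0, 1.0], [0.0, -1.0]])

# w built from the training data: w = sum_i alpha_i * y_i * x_i.
w = (alphas * y) @ X
x_new = np.array([1.0, 1.0])

# Two equivalent ways to compute w . x_new: directly, or purely through
# inner products x_i . x_new, with no explicit w at all.
direct = np.dot(w, x_new)
via_inner_products = np.sum(alphas * y * X.dot(x_new))
print(direct, via_inner_products)
```

The second form is the important one: replace each x_i . x_new with K(x_i, x_new) and the whole computation happens in the kernel's implicit feature space.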
All good! Awesome, we can use kernels. You're probably sitting there wondering, though: what about this whole "calculations in infinite dimensions for free" thing? Well, first we needed to make sure we could do it. As for the free processing, you'll have to stick around until the next tutorial to get that!