Machine Learning for OpenCV
In order to understand how SVMs work, we have to think about decision boundaries. When we used linear classifiers or decision trees in earlier chapters, our goal was always to minimize the classification error, which we measured with metrics such as accuracy or mean squared error. An SVM also tries to achieve a low classification error, but it does so only implicitly: an SVM's explicit objective is to maximize the margin between the data points of one class and those of the other. This is the reason SVMs are sometimes also called maximum-margin classifiers.
Let's look at a simple example. Consider some training samples with only two features (x and y values) and a corresponding target label (positive (+) or negative (-)). Since the labels are categorical, we know that this is a classification task. Moreover, because we only have two distinct classes (+ and -), it's a binary classification task.
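To make this concrete, here is a minimal hand-rolled sketch of a linear maximum-margin classifier on a toy dataset like the one just described: two features per sample and binary labels. The data points, learning rate, and regularization strength below are illustrative assumptions, not values from the book, and the by-hand subgradient loop is only a sketch of the idea; in practice you would use OpenCV's built-in SVM (created with `cv2.ml.SVM_create`) rather than training one manually.

```python
import numpy as np

# Toy training set: two features per sample, binary labels.
# Labels are encoded as +1 ("positive") and -1 ("negative") so the
# margin condition can be written uniformly for both classes.
X = np.array([[-2.0, -1.0], [-1.0, -2.0], [-2.0, -2.0],
              [2.0, 1.0], [1.0, 2.0], [2.0, 2.0]])
y = np.array([-1, -1, -1, 1, 1, 1], dtype=np.float64)

# A linear SVM looks for weights w and bias b such that
# y_i * (w . x_i + b) >= 1 for every sample, with ||w|| as small as
# possible -- a small ||w|| corresponds to a wide margin of 2 / ||w||.
# Here we minimize the regularized hinge loss by subgradient descent.
w = np.zeros(2)
b = 0.0
lam = 0.01   # regularization strength (illustrative choice)
lr = 0.1     # learning rate (illustrative choice)

for epoch in range(200):
    for xi, yi in zip(X, y):
        margin = yi * (np.dot(w, xi) + b)
        if margin < 1:
            # Sample is inside the margin (or misclassified):
            # push the boundary away from it.
            w += lr * (yi * xi - 2 * lam * w)
            b += lr * yi
        else:
            # Sample already satisfies the margin:
            # only apply the regularization shrinkage.
            w -= lr * 2 * lam * w

def predict(x):
    """Classify a point by the side of the hyperplane it falls on."""
    return 1 if np.dot(w, x) + b >= 0 else -1
```

After training, `predict([-1.5, -1.5])` lands on the negative side of the learned hyperplane and `predict([1.5, 1.5])` on the positive side, matching the clusters the points were drawn near.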
In a binary classification task, a decision boundary...