I know how it feels. SVMs are a tried-and-tested classification method that was in fashion before the deep learning craze.
> An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
One advantage of SVMs is that they don't use all data points to decide on the separating hyperplane, just the points closest to the gap (the support vectors), which makes the decision boundary insensitive to everything far from the margin.
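To make that concrete, here's a minimal sketch using scikit-learn's SVC; the dataset and parameters are illustrative choices of mine, not from any particular reference:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two clearly separated 2-D clusters (toy data, chosen for illustration).
X, y = make_blobs(n_samples=100, centers=[[-3, -3], [3, 3]],
                  cluster_std=1.0, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the points on the margin define the hyperplane.
print(f"{clf.support_vectors_.shape[0]} of {X.shape[0]} points are support vectors")

# Refitting on the support vectors alone recovers (essentially) the same
# hyperplane: the other points never influenced it.
clf_sv = SVC(kernel="linear", C=1.0).fit(X[clf.support_], y[clf.support_])
print("w, b (all points):     ", clf.coef_[0], clf.intercept_[0])
print("w, b (support vectors):", clf_sv.coef_[0], clf_sv.intercept_[0])
```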
Another advantage is that they can efficiently perform non-linear classification using the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces (here a kernel is a similarity function between two data points, i.e. an inner product in the implicit feature space).
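Here's a quick sketch of the kernel trick, again with scikit-learn's SVC and an illustrative dataset (concentric circles that no straight line can separate):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, noise=0.05, factor=0.4, random_state=0)

# A linear SVM fails on this data...
linear = SVC(kernel="linear").fit(X, y)
print("linear kernel accuracy:", linear.score(X, y))  # roughly chance level

# ...but an RBF kernel k(x, z) = exp(-gamma * ||x - z||^2) implicitly maps
# the points into an infinite-dimensional feature space where they become
# separable, without ever computing that mapping explicitly.
rbf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
print("RBF kernel accuracy:   ", rbf.score(X, y))     # near 1.0
```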
https://en.wikipedia.org/wiki/Support_vector_machine
https://en.wikipedia.org/wiki/Positive-definite_kernel
I recommend Andrew Ng's ML course for learning about SVMs.