

Wikipedia just updated their article on SVMs and made it a lot better and clearer.

SVMs are a way of classifying a bunch of objects into groups by measuring their attributes and treating each object as a vector of those measurements. This means you end up with an n-dimensional space, where n is the number of attributes you care about. The objects end up scattered through this space, and you classify them by putting a hyperplane between the clouds of objects. A good hyperplane doesn't touch any of the objects. You then slide marginal planes away from it on either side to find out how much empty n-dimensional space there is between the clusters along the hyperplane. The best hyperplane is the one with the widest margin around it.
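To make that concrete, here's a toy sketch in plain Python (not a real SVM trainer, and the hyperplane `w`, `b` is just a made-up example): it shows how the sign of the decision function puts a point on one side of the hyperplane or the other, and how a point's margin is its distance from that plane.

```python
import math

def classify(w, b, x):
    # The sign of w.x + b says which side of the hyperplane x falls on.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

def margin(w, b, x):
    # Geometric distance from x to the hyperplane; an SVM picks the w, b
    # that maximise the smallest such distance over the training points.
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

w, b = [1.0, 1.0], -3.0                  # hypothetical separating hyperplane
points = [[1.0, 1.0], [3.0, 3.0]]
print([classify(w, b, p) for p in points])   # → [-1, 1], one point each side
print(min(margin(w, b, p) for p in points))  # width of the tightest margin
```

Training an SVM is then just the search for the `w` and `b` that make that minimum margin as wide as possible.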

And of course, vectors can be represented as matrices.

Anyway, what handles and computes on large matrices really quickly? A graphics card.
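The connection is that classifying a whole batch of objects at once is a single matrix-vector product: stack the object vectors as rows of a matrix X, and X·w + b gives every decision value in one go. That bulk multiply is exactly the workload GPUs are built for. A minimal sketch (pure Python stand-in for what the card would do in parallel; the numbers are made up):

```python
def matvec(X, w):
    # One dot product per row -- on a GPU these all run in parallel.
    return [sum(xi * wi for xi, wi in zip(row, w)) for row in X]

X = [[1.0, 1.0],
     [3.0, 3.0],
     [0.0, 4.0]]          # three objects, two attributes each
w, b = [1.0, 1.0], -3.0    # hypothetical hyperplane from training

scores = [s + b for s in matvec(X, w)]
labels = [1 if s >= 0 else -1 for s in scores]
print(labels)  # → [-1, 1, 1]
```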

So that's why there's cuSVM: a CUDA implementation of Support Vector Machines that runs on graphics-card hardware.


Install it on a bank of Zotac ION Atom N330 mini-ITX mobos (just held apart with standoffs) and you'd have a very, very compact, low-power AI rig.



Use one x86_64 core to handle communication, leaving the other core free to manage the integrated ION graphics chip without blocking. Result: rapid machine learning and classification.

Could be fun.