It turns out blobs are very important to computer vision.
No, not the 50's horror classic starring Steve McQueen, but rather pixel blobs - groupings in an image that share some common feature, such as proximity, color, or a host of other properties.
I've spent most of this past week learning about a Python library called OpenCV, short for "Open Source Computer Vision," and it's amazing.
Within Python it's possible to call all sorts of functions to open and modify virtually any image source (in my case, thermal data being pulled from the Lepton camera).
I think for many people, even individuals like me who try to keep up on technology trends, the emerging power of computer vision is astounding.
Many of the underlying techniques have been around for a long time. What's new is the low cost and form of processors to run the software (such as the Raspberry Pi) and the availability of network access. Suddenly, it's very cheap and relatively easy to put sensors just about everywhere.
As an example, here's a quick video of me testing out some blob-tracking code. Specifically, I'm looking for moving hotspots in the Lepton's feed that are also roundish:
So, something that's interesting (other than my sweet dance moves) is the ability of the computer to keep a focus point on the human in the frame. And all this on a $35 Raspberry Pi!
I know, I know: if you work in computer science, this is probably yawn-inducing, and if you're under the age of 15 you've grown up expecting that semi-magical computer things "just work" all by themselves. But I can remember when, not too long ago, this was the stuff of movie magic, way beyond the actual state of the art. Yet here we are: $35 and 10 lines of open-source software will get you computer-aided object tracking.
Next: let's do something cool with it!