Using OpenCV Computer Vision to detect the colour of M&Ms
This post is part of our series on How we built an Internet of Things based M&M Sorter using Lego. See the overarching blog here.
Computer vision is really cool: the idea of a computer being able to "see" is fascinating, not just for tech heads like me but for everyone I speak to about it! I'm especially excited when I actually get to play with cool new technology and show non-tech friends a practical example of what I've built. It beats the other times I've talked about my work; I learnt the hard way that showing off logs from a blockchain isn't all that exciting.
When it comes to computer vision, however, showing how a facial-expression detector or a self-driving car works gets my friends just as excited as I am. So when I heard we were developing an IoT device to sort M&M's at Sheda, I got really excited and jumped on the project to develop the computer vision algorithm to sort the M&Ms.
To get a proof of concept up, we decided to use object detection with OpenCV (Open Source Computer Vision) and colour filtering on a Raspberry Pi. In this post, I will go through the technical details of how I set that up.
To follow along you will need a Raspberry Pi with a camera module; if you don't have one, you can follow along with any computer with a webcam. We will:

- Set up a Raspberry Pi with OpenCV
- Develop an algorithm for filtering colours and detecting the contours of an object using OpenCV
Assuming you are now working in the cv virtual environment, you'll need to install NumPy as a dependency. This can take around 10 minutes on a Raspberry Pi, so it may look like the install has hung; just let it run and it will finish. Remember, a Raspberry Pi is much slower than your computer.
After running make install, your OpenCV + Python bindings should be installed in /usr/local/lib/python3.5/site-packages. You can verify this with the ls command and then link the bindings into your virtual environment:
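As a sketch, that verification and linking step looks something like the commands below; the exact paths and the bindings filename depend on your Python and OpenCV versions and where your virtualenvs live, so adjust them for your machine:

```shell
# Confirm the bindings were built (filename varies by Python/OpenCV version)
ls -l /usr/local/lib/python3.5/site-packages/

# Link the cv2 bindings into the cv virtual environment
# so that `import cv2` works inside it
ln -s /usr/local/lib/python3.5/site-packages/cv2.so \
      ~/.virtualenvs/cv/lib/python3.5/site-packages/cv2.so
```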
Perfect, we are now officially all set up to start developing our colour detection algorithm.
For this example, we use colour filtering to sort M&M's by colour. To achieve this with OpenCV we will use the HSV colour space, which is widely used in computer vision because it separates an image's colour information from its brightness, so objects can be matched by colour even when shadows are being cast.
A bit of background on colour spaces:
RGB: This widely used colour space splits an image into three colour channels: Red, Green, and Blue. That is well suited to representing an image on a screen, since most displays use RGB or something close to it. However, RGB has limited capacity for detecting an object by colour range: the same object can produce very different channel values in different environments or conditions. When shadows are being cast or bright lights shine on an object, the channel values you are trying to zone in on shift, resulting in poor detection of the object's colour, e.g. red can be detected as orange.
HSV: This colour space also splits an image into three channels, but they are Hue (the colour's shade), Saturation (how "colourful" the colour is), and Value ("brightness": how far it is from black). This puts us in a better position when sorting by a colour range, because the shade of an object lives in a single channel. The hue of an object usually doesn't change when it is exposed to different kinds of light, so using HSV gives us a better chance of consistent colour detection: we can specify a range based mostly on shade and our algorithm should detect the object in any lighting or environment.
With OpenCV we can filter an image by colour range: if we give OpenCV two colours, it will black out (effectively ignore) every pixel that is not between them in the HSV spectrum. This lets us give a minimum and maximum hue for an object, and OpenCV will filter out anything outside that range, blocking out other objects.
Although OpenCV can detect the object in any lighting from the hue alone, there is a chance of also picking up other objects in the background with a similar colour. To prevent this, I suggest lightly tweaking the saturation and hue ranges to match your object as closely as possible; this lets you focus on just the object you want to detect.
We can fine-tune the processed image using OpenCV's erode and dilate functions. The erode function removes any small points from the image, while the dilate function takes the remaining points and expands them slightly. Together they reduce noise and increase the size of the object we want to detect.
So now we know how colour filtering works in OpenCV, but all we have done so far is filter an image by colour. To get our Raspberry Pi to detect an M&M's colour using a camera, we also need to take into account the size and outline, or shape, of the object we want to detect (in our case, a round object).
For this we use OpenCV contours:
Contours can be explained simply as a curve joining all the continuous points (along the boundary), having same color or intensity. The contours are a useful tool for shape analysis and object detection and recognition.
Contours are useful because they give us the ability to not only have an image with a colour range detected but also be able to show a clear outline of the detected object assuming we calibrated correctly.
Looking at the code above, I use the findContours function to find objects matching the colour range we want to detect, then loop through each contour and use OpenCV's minEnclosingCircle function to get the centre and radius of a circle enclosing it. I use the radius to filter out anything below a minimum size, which removes random noise in the image and lets us focus on our object.
If an object's radius matches our expected size, I draw a circle on top of the original image to highlight the detected object in the video shown on screen, and call an objectFound function.
The outline and code above provide the foundation of how the colour detection works. To make it work in real time, I cleaned the code up a little and wrapped the detection in a loop over video frames. We end up with something close to the script below, which processes every frame to detect an M&M's colour in real time.