- The method assumes that erosion will eliminate all significant noise, but that is not always the case. If two or more significant patches of white pixels remain in the image, the averaging will likely place the estimated target location in the blank region between them, where the target cannot be.
- It is difficult to estimate the radius of a given marker, because noise cannot be separated from what the method takes to be the bounds of the marker.
- Since the marker's radius is hard to estimate, its circularity is also hard to estimate. I would compute circularity by averaging the absolute difference between each radius along the four cardinal directions and the mean of those four radii (the average variation in radius length around the circle).
My new method would replace the location averaging that follows erosion. Instead, it would identify "blobs" (connected groups) of white pixels and then compare them. This would be done through recursion: the function looks for white pixels, then follows the pixels connected to each white pixel it finds until a blob is complete, then moves on to the next blob. By the end, the program would have an array of blobs, each containing an array of the pixels within it. Here are the improvements with this new approach:
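The blob-finding pass above can be sketched roughly as follows. This is my own minimal sketch, not the actual project code: the function name `findBlobs`, the row-major `Uint8Array` mask layout, and the choice of 4-connectivity are all assumptions. It also uses an explicit stack instead of literal recursion, which walks the same connected pixels but avoids overflowing the call stack on large blobs.

```javascript
// Connected-component ("blob") detection over a binary mask.
// mask is a row-major Uint8Array of 0/1 values, width * height long.
function findBlobs(mask, width, height) {
  const visited = new Uint8Array(width * height);
  const blobs = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const start = y * width + x;
      if (!mask[start] || visited[start]) continue;
      // Flood fill from this seed pixel: an explicit stack performs the
      // same traversal as the recursion described above.
      const pixels = [];
      const stack = [start];
      visited[start] = 1;
      while (stack.length) {
        const i = stack.pop();
        pixels.push(i);
        const px = i % width;
        const py = (i / width) | 0;
        // 4-connected neighbours (left, right, up, down)
        const neighbours = [
          [px - 1, py], [px + 1, py], [px, py - 1], [px, py + 1],
        ];
        for (const [nx, ny] of neighbours) {
          if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
          const ni = ny * width + nx;
          if (mask[ni] && !visited[ni]) {
            visited[ni] = 1;
            stack.push(ni);
          }
        }
      }
      blobs.push(pixels);
    }
  }
  return blobs; // array of blobs, each an array of pixel indices
}
```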
- This new function does not rely on erosion to eliminate all contending noise. Instead, it accepts that more than one significant blob of white pixels may remain, and it can choose which among them is most likely to be the marker. As a result, the reported marker location will always be surrounded by white pixels.
- Now that the given marker location is guaranteed to lie within a blob of white pixels, it is easy to estimate the radius of the marker.
- As with the radius estimate, it is now also possible to estimate:
- Circularity (average difference in radius length around the blob's edge)
- Mass (# of pixels in the blob)
- Density (how well each blob fits into the color range)
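The radius, circularity, and mass metrics above could be computed per blob along these lines. This is a hypothetical sketch: the function name `blobMetrics` and the `{x, y}` pixel representation are my own, and it approximates the blob's edge as the pixels with at least one 4-neighbour outside the blob. (Density is omitted, since it depends on the per-pixel color-fit scores, which are not shown here.)

```javascript
// Hypothetical per-blob metrics; a blob is an array of {x, y} pixels.
function blobMetrics(pixels) {
  const mass = pixels.length; // mass = number of pixels in the blob
  // Centroid of the blob = estimated marker location.
  let cx = 0, cy = 0;
  for (const p of pixels) { cx += p.x; cy += p.y; }
  cx /= mass;
  cy /= mass;
  // Edge pixels: those with at least one 4-neighbour outside the blob.
  const inBlob = new Set(pixels.map((p) => `${p.x},${p.y}`));
  const radii = [];
  for (const p of pixels) {
    const isEdge =
      !inBlob.has(`${p.x - 1},${p.y}`) || !inBlob.has(`${p.x + 1},${p.y}`) ||
      !inBlob.has(`${p.x},${p.y - 1}`) || !inBlob.has(`${p.x},${p.y + 1}`);
    if (isEdge) radii.push(Math.hypot(p.x - cx, p.y - cy));
  }
  // Radius = mean distance from centroid to edge pixels.
  const radius = radii.reduce((a, b) => a + b, 0) / radii.length;
  // Circularity = average |edge radius - mean radius|;
  // 0 for a perfect circle, larger for irregular shapes.
  const circularity =
    radii.reduce((a, r) => a + Math.abs(r - radius), 0) / radii.length;
  return { x: cx, y: cy, mass, radius, circularity };
}
```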
The only significant drawback to my new method is its decreased speed: the new function is MUCH slower than the old one. However, I tested the method myself in Processing with video input from my laptop's built-in camera and was able to bypass this problem. I simply checked every fifth pixel in each direction, then displayed the final image with each sampled value drawn as a 5x5 block of 25 pixels. Here is a video of my testing, holding a printed yellow-orange target:
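The every-fifth-pixel trick amounts to stride-based downsampling, which cuts the work by a factor of about 25. A rough sketch, assuming the same row-major mask layout (the function name and `stride` parameter are my own):

```javascript
// Hypothetical stride-based downsampling: keep every `stride`-th pixel
// in each direction, so blob detection runs on a much smaller mask.
function downsampleMask(mask, width, height, stride = 5) {
  const w = Math.floor(width / stride);
  const h = Math.floor(height / stride);
  const small = new Uint8Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      // Sample the top-left pixel of each stride x stride block.
      small[y * w + x] = mask[y * stride * width + x * stride];
    }
  }
  return small;
}
```

When displaying the result, each entry of the small mask is drawn back as a stride x stride block, matching the 25-pixels-per-sample display described above.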
I thought it worked really well, so I'll try incorporating the new algorithm into my image-processing function. I prototyped in Processing rather than node.js, where my project is written, because I am much more familiar with Processing, and because Processing makes it much easier to display output visually (like drawing a circle over the target).
NOTE: In the program I used here, I thresholded pixels based on proportions rather than a fixed number range, to account for lighting. For example, keep a pixel if (red/blue) > 2.2 AND (green/blue) > 1.8.
I will put the code for the program in the Google Doc for future reference.