Thursday, April 7, 2016

Presentation 5 Link

Presentation 4 Link

Presentation 3 Link

Plan to Fix Call Stack Error

  var blobs = [];
  // goodPixels[]: the white pixels that were marked during thresholding

  for (var i = 0; i < goodPixels.length; i++) {
    var goodPixel = goodPixels[i];
    if (!inAnyBlob(goodPixel)) {            // helper: pixel not yet part of any blob
      var links = [];                       // pixels confirmed to belong to this blob
      var news = [goodPixel];               // frontier: found, neighbors not yet checked
      while (news.length > 0) {
        var num = news.length;
        for (var index = num - 1; index > -1; index--) {
          checkForNewLinks(news[index]);    // appends any unvisited neighbors to news
          links.push(news[index]);
          news.splice(index, 1);            // this pixel's neighbors are now handled
        }
      }
      blobs.push(new Blob(links));
    }
  }

Since the frontier lives in an array on the heap instead of on the call stack, the nesting depth no longer grows with the size of a blob, which should sidestep the call stack limit.

Wednesday, April 6, 2016

Issues as of 4/5/16

Though I edited the program to find a line on the floor and hover over it, and I’m fairly certain everything I wrote makes logical sense, the drone still behaves in weird ways. There are two main problems we are having with controlling the drone:

  1. It drifts off course even when its image processing seems to be precise.
  2. It has been returning an error that reads "maximum call stack size exceeded" on almost every run now.

What I've learned about the second issue is that node only allows a certain number of nested function calls on the stack at one time. This limit exists to keep a runaway function from locking up the system and crashing it.

My current image processing method uses recursion to group image pixels into different blobs after thresholding, and this method is the one that has been returning the call stack error. I believe I have to rewrite this function to avoid recursion: when creating a blob, the grouping function currently calls itself hundreds of times, nesting one call deeper for each pixel it adds, before allowing the program to continue.
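For reference, the recursive grouping looks roughly like this (a simplified sketch, not my exact code; isWhite and visited stand in for my thresholding output):

  // Recursive flood fill: one stack frame per connected white pixel
  function grow(blob, x, y) {
    if (!isWhite(x, y) || visited[x][y]) return;  // stop at non-white or already-seen pixels
    visited[x][y] = true;
    blob.push({ x: x, y: y });
    grow(blob, x + 1, y);  // each neighbor recurses before this call returns,
    grow(blob, x - 1, y);  // so a blob of N pixels can nest up to ~N calls deep,
    grow(blob, x, y + 1);  // which is what blows node's call stack on big blobs
    grow(blob, x, y - 1);
  }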

My current goal, in conclusion, is to create a new findBlobs() function that avoids the use of recursion.

As to the first problem, I still don’t know how we’re going to fix it, so I’m open to suggestions.

Sunday, March 20, 2016

Progress Report: 3/20/16

Progress:

  • Figured out how to access the AR drone's navdata (cardinal orientation and angles, velocity, altitude, etc.)
  • Can read this information in and store it in local variables on the connected computer (a rough sketch of this is below)
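Reading the navdata looks roughly like this, assuming the node-ar-drone module is what talks to the drone from node. This is a sketch: the exact field names under navdata.demo depend on which navdata options are enabled, so treat those as assumptions.

  var arDrone = require('ar-drone');
  var client = arDrone.createClient();

  client.config('general:navdata_demo', 'FALSE');  // ask the drone to send the full navdata

  var velocity = null;
  var altitude = 0;

  client.on('navdata', function (navdata) {
    if (navdata.demo) {
      velocity = navdata.demo.velocity;        // assumed shape: { x, y, z }
      altitude = navdata.demo.altitudeMeters;  // assumed field name
    }
  });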


Problems:

  • Since I still haven’t tested out this new sensor data, I’m not sure how effective or accurate it will be
  • My program, which should find the most likely path and align the drone along it, doesn’t return the correct result when multiple lines are detected on the floor


Plans:

  • Test out the navdata and store all relevant information (for now, that’s mostly just the drone’s velocity and altitude)
  • Use the velocity data to counter drifting (see the sketch after this list)
  • Fix the program to return the correct path results when given multiple possibilities
  • Incorporate a second color-thresholding method to find junctions, where the drone will stop and switch paths (a turn or fork in the path)
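For the drift plan, the rough idea is a proportional correction: read the drone's horizontal velocity from the navdata and command a small movement against it. A minimal sketch, reusing the client from the sketch above, with a hand-tuned gain and assumed sign conventions:

  var GAIN = 0.0005;  // hand-tuned: maps drift speed to a command in [0, 1]

  client.on('navdata', function (navdata) {
    if (!navdata.demo || !navdata.demo.velocity) return;
    var vx = navdata.demo.velocity.x;  // assumed: forward/backward drift
    var vy = navdata.demo.velocity.y;  // assumed: left/right drift

    // Push back against whichever way the drone is drifting
    if (vx > 0) { client.back(GAIN * vx); } else { client.front(-GAIN * vx); }
    if (vy > 0) { client.left(GAIN * vy); } else { client.right(-GAIN * vy); }
  });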

Tuesday, March 15, 2016

Progress Report: 3/13/16

News:
- Fixed up path detection on ground to find the best path within the blob that looks most like a line

Plans:
- Still need to fix the algorithm so I find the direction of that path (pick one of the two possible directions)
- Then have the drone align itself to the path direction (rough sketch after this list)
- Then test drone alignment by flying it over a tape line and having the drone align to the path
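As a sketch of what the alignment step might look like: once the path's angle in the camera image is known (and one of the two directions has been picked), the drone just rotates until that angle reads as straight ahead. Everything here is an assumption about my eventual code, including the hypothetical pathAngle input; client is the node-ar-drone client as in the sketches above.

  var TURN_GAIN = 0.5;  // hand-tuned proportional gain

  // pathAngle: radians off the image's vertical axis, in (-PI, PI],
  // already disambiguated to one of the line's two directions
  function alignToPath(pathAngle) {
    if (pathAngle > 0) {
      client.clockwise(Math.min(1, TURN_GAIN * pathAngle));
    } else {
      client.counterClockwise(Math.min(1, -TURN_GAIN * pathAngle));
    }
  }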

Tuesday, March 8, 2016

Progress Report: 3/6/16

We have decided to switch our strategy of autonomous flight by using the bottom camera and following a line on the floor. This was for the following reasons:

  • The main camera couldn't detect markers that were beyond a certain distance, which would eliminate the possibility of a marker at the end of a hallway
  • Following markers on the floor could possibly increase the accuracy of the flight
  • Image processing could be more accurate, since the program is just finding a line rather than a shape
  • The drone will not fly as high, which will hopefully increase the accuracy of the flight and line detection

We are now focused on switching the existing image processing to detect lines rather than a circle, and on thinking of different ways to deal with intersections and turning. So far, we are thinking that different colored circles will be placed at intersections to determine which direction the drone turns. This will be helpful once we implement flight paths to different rooms. Multi-color detection will also have to be implemented for this to be successful.

One problem that we have is that when detecting a line, the algorithm will highlight the border of the image for unknown reasons. Hopefully this isn't a significant issue and will be resolved soon. 

Sunday, February 28, 2016

Progress Report: 2/28/16

This week I focused on trying to fix some of the unexpected behavior in the drone's flight path, given a marker in frame to follow. It still doesn't work 100% accurately, but I did have some ideas while tweaking the image processing programs:

  • The environment in which I tested resulted in a lot of unwanted noise. To help counter this, I want to start tracking two different colors, and use a marker that contains more than a single identifying color. I have tested this possibility and linked a picture below as a proof of concept: a screenshot of the output I got while trying 2-color detection at home.
  • I removed erosion from the algorithm, which was slower than it was worth.
  • I added in a function which marks all blobs that the drone finds within the frame. I noticed that the drone actually seems to skip over certain candidates for the marker sometimes. I think this may be an important part of the problem.
  • The distance from the drone to the marker cannot be too large, or the marker's footprint in the video becomes insignificant. Therefore, it will probably be necessary to use floor markers in between junctions, perhaps using the bottom camera and a separate color and color-detection algorithm.
  • Rethinking our whole approach, it may be better to keep all markers on the floor, with, for example, blue circles connected by red lines. It seems kind of ridiculous to rethink the approach now, but I think this new method has potential to be better...

Detecting 2 Colors

Tuesday, February 23, 2016

Notes: 2/23/16


  • Problem:
    • The drone, though it seems to locate the target accurately enough, does not fly in an expected manner, given a target placed in front of it.
  • Possible Improvements:
    • Remove erosion (it may be that slowness is more of a hindrance than noise reduction is an improvement) and detection at edges of the image
    • Introduce multiple-color detection (make target-identification more accurate), and make a target of concentric, differently-colored circles
    • Account for the tilt of the drone by rotating the X and Y axes of the incoming video images to fix what is “horizontal” and “vertical” (a sketch of this rotation follows these notes)
    • Find better, more distinct color (green-blue?)
  • Possible Diagnoses:
    • The drone is not given enough time to respond to commands, and is thus responding late
    • The drone is reacting to noise (unwanted areas that are registered as within the color range)
    • The algorithm for finding the marker does not consider every blob, or does not create a blob around every cluster of pixels
  • Plans for the Near Future: 
    • Move to an area which has no possibility of noise (maybe put white posters up on hallway lockers while testing)
    • Mark all blobs, to make sure every candidate is considered
    • Look over the program thoroughly, to see what commands are paired to which conditions
    • Try removing erosion and check for improvement
    • Try putting a lag on sending new commands, and check for improvement
    • Try using target’s radius to control Z-axis movement (forward, backward)
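On the tilt idea above: if the drone's roll angle is known from the navdata, each pixel coordinate can be rotated about the image center by the negative of that angle, so "horizontal" and "vertical" stay fixed relative to the ground. A sketch of the plain 2D rotation (the sign convention for roll is an assumption):

  // Rotate image coordinates (x, y) about the image center by -roll,
  // so the axes stay level even when the drone tilts
  function levelPoint(x, y, rollRadians, centerX, centerY) {
    var dx = x - centerX;
    var dy = y - centerY;
    var c = Math.cos(-rollRadians);
    var s = Math.sin(-rollRadians);
    return {
      x: centerX + dx * c - dy * s,
      y: centerY + dx * s + dy * c
    };
  }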

Tuesday, February 9, 2016

Progress Report 2/8/16


  • Progress:
    • Created a more accurate image processing program, which detects "blobs" of pixels. Blobs have to be a certain size or they will be disregarded, which helps to get rid of noise and unwanted detection (a sketch of this size filter follows the list).
    • Added a prompt script into the code, asking the user for a room before the drone proceeds with the movement.
    • The marker resembles a light skin color, which often confuses the drone when a face is present in the frame. Changing the color or creating a marker with two different colors could probably fix this issue.
  • Some future plans:
    • Blob detection through recursion
    • Creating pre-determined flight plans for rooms
  • Problems:
    • The image processing program is not consistently accurate
    • We can't get horizontal movement to work alongside vertical movement
    • Pre-determined flight plans, while successful, could take a long time to complete
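The size filter mentioned under progress is essentially one line once each blob stores its pixels in an array (blob.pixels and the threshold value are placeholders here, not our actual names):

  var MIN_BLOB_SIZE = 50;  // in pixels; tune to the camera resolution and marker size

  // Keep only blobs big enough to be marker candidates; smaller ones are noise
  var candidates = blobs.filter(function (blob) {
    return blob.pixels.length >= MIN_BLOB_SIZE;
  });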

Tuesday, January 19, 2016

Notes: 1/19/16

I had an idea for improving my image processing algorithm. Up until now, my program, after eroding the edge pixels of the regions that fall within the color range, has been averaging the white pixels' locations to return an estimate of the target's location within an image. There are a number of weaknesses to this approach:

  1. The method assumes that erosion will eliminate all significant noise, but that cannot always be the case. If there are 2 or 3 or 4 significant patches of white pixels in the image, the averaging will probably return an estimated target location in the blank region between them, where the target cannot be.
  2. It is difficult to estimate the radius of a given marker, because noise cannot be separated from what the program takes to be the bounds of the marker.
  3. Since it is hard to estimate the radius of the marker, it is also difficult to estimate its circularity, which I would find by averaging the absolute difference between each of the 4 cardinal radii and the mean of those 4 radii (the average variation in radius length around the circle).
My new method would replace the location averaging that follows erosion, and instead identify "blobs" (connected groups) of white pixels, then compare them. This would be done through recursion: the function looks for a white pixel, then looks for pixels connected to each white pixel found until the blob is finished, then moves on to find the next blob. By the end, the program would have an array of blobs stored, each containing an array of the pixels within it. Here are the improvements with this new approach:
  1. This new function does not rely on erosion to eliminate all contending noise. Instead, it accepts that there may be more than one significant blob of white pixels remaining, and can choose which among them is most likely to be the marker. So the marker location will always be surrounded by white pixels.
  2. Now that the given marker location will assuredly be within a blob of white pixels, it is easy to estimate the radius of the marker.
  3. Like #2, it is now also possible to estimate (see the sketch after this list):
    1. Circularity (average difference in radius length around the blob's edge)
    2. Mass (# of pixels in the blob)
    3. Density (how well the blob's pixels fit the color range)
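To make #2 and #3 concrete, here is a sketch of those measurements over a blob's array of {x, y} pixels. It measures radii from every pixel rather than just the edge pixels, which is a simplification of the circularity definition above:

  function measureBlob(pixels) {
    var mass = pixels.length;  // mass: # of pixels in the blob

    // Centroid = average pixel position
    var cx = 0, cy = 0;
    pixels.forEach(function (p) { cx += p.x; cy += p.y; });
    cx /= mass;
    cy /= mass;

    // Average radius = mean distance from the centroid to each pixel
    var radii = pixels.map(function (p) {
      return Math.sqrt((p.x - cx) * (p.x - cx) + (p.y - cy) * (p.y - cy));
    });
    var avgRadius = radii.reduce(function (a, b) { return a + b; }, 0) / mass;

    // Circularity = average absolute difference from the average radius
    // (closer to 0 means closer to a circle)
    var circularity = radii.reduce(function (sum, r) {
      return sum + Math.abs(r - avgRadius);
    }, 0) / mass;

    return { mass: mass, centroid: { x: cx, y: cy }, radius: avgRadius, circularity: circularity };
  }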
The only significant drawback to my new method is its decreased speed: the new function is MUCH slower than the old one. However, I tested the method myself in Processing, with video input from my laptop's built-in camera, and was able to work around the problem. I simply checked every 5th pixel, and then displayed the final image with each checked pixel drawn as a 5x5 block of 25 pixels. Here is a video of my testing, holding a printed yellow-orange target:



I thought it worked really well, so I'll try incorporating the new algorithm into my image-processing function. I prototyped it in Processing and not in node.js, where my project is written, because I am much more familiar with Processing, and because Processing has a much easier way to display output visually (like drawing a circle over the target).

NOTE: In the program I used here, I thresholded pixels based on proportions (for example, keep a pixel if red/blue > 2.2 AND green/blue > 1.8) instead of a fixed number range, to account for lighting.
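A sketch of that proportion-based threshold, combined with the every-5-pixels sampling described above (frame.getPixel is a hypothetical accessor, not a real API; the ratios 2.2 and 1.8 are the ones from my test):

  var STEP = 5;  // check every 5th pixel in each direction for speed

  function thresholdFrame(frame) {
    var good = [];  // the "good" white pixels handed to blob finding
    for (var y = 0; y < frame.height; y += STEP) {
      for (var x = 0; x < frame.width; x += STEP) {
        var p = frame.getPixel(x, y);  // hypothetical accessor: { r, g, b }
        // Ratios instead of fixed ranges: a lighting change scales all
        // three channels together, so the ratios stay roughly constant
        if (p.b > 0 && p.r / p.b > 2.2 && p.g / p.b > 1.8) {
          good.push({ x: x, y: y });
        }
      }
    }
    return good;
  }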

I will put the code for the program in the google doc for future reference.