Sunday, December 20, 2015

Progress Report - 12/20/15

  • Progress:
    • I downloaded the jimp module, an alternative/complement to opencv whose library seems to be more thoroughly documented and explained.
    • The file I required in the node test program is oddly named index.js; I couldn’t find a file named jimp.js, which would usually be the module’s main entry point. However, it seems to work; I’ve tried a few methods without getting a “not a function” error.
    • I created a rudimentary tracking function given a single image (the image type [.png] matches the images which the drone should feed to the computer):
      • 1) I use opencv to threshold the image for red
      • 2) Then I feed that output to jimp
      • 3) I use jimp to analyze the pixels and their neighbors (for now, horizontally) at a set distance
      • 4) I use jimp to find average X and Y of retained pixels
      • 5) I now have the marker’s location in the image, along with its horizontal and vertical limits (I draw a point at that location).
      • 6) I can now approximate the radius of the marker by finding the average difference between each limit and the center.
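The steps above could be sketched roughly as follows, operating directly on a raw RGBA pixel array (the layout jimp exposes via image.bitmap.data). The red-threshold values and the function name are my own assumptions for illustration, not the actual project code:

```javascript
// Sketch of the tracking steps above: keep red pixels, average their
// x/y for the marker center, and approximate the radius from the
// average difference between each limit and the center.
function trackMarker(data, width, height) {
  var sumX = 0, sumY = 0, count = 0;
  var minX = width, maxX = -1, minY = height, maxY = -1;
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var i = (y * width + x) * 4; // 4 bytes per pixel: r, g, b, a
      // Assumed threshold for "red"; the real values would come from tuning
      var isRed = data[i] > 150 && data[i + 1] < 100 && data[i + 2] < 100;
      if (isRed) {
        sumX += x; sumY += y; count++;
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
      }
    }
  }
  if (count === 0) return null; // no marker detected
  var cx = sumX / count, cy = sumY / count;
  // Approximate radius: average distance from the center to each limit
  var radius = ((maxX - cx) + (cx - minX) + (maxY - cy) + (cy - minY)) / 4;
  return { x: cx, y: cy, radius: radius };
}
```

In the real pipeline, steps 1–2 (opencv thresholding, handing the result to jimp) would replace the inline threshold test here.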

  • Problems: (Crossed out, because I resolved the problem)
    • I tried testing jimp, but when I read a png image that was the output of an opencv method, the pixel color came up undefined when I tried to read pixel (0,0).
    • I think it’s because png images can be formatted in different ways, with different numbers of bytes per pixel.
      • I also tried jpg, but that also came up as undefined. I haven’t researched jpg formatting, but perhaps there’s a similar problem.
      • Perhaps there’s a way to convert an image to rgba before reading it with jimp.
      • Perhaps there’s a way to output an image with a specific data format.
    • Solution:
      • The RGBA color values returned are not stored in a matrix; they are stored as properties of the color object.
      • So color[0] = undefined, but color.r = the amount of red in that pixel
      • :)

  • Plans:
    • Test
      • stream = new opencv.ImageStream()
      • stream.on('data', function(matrix) {
        • // image conversion
        • // image analysis (retrieve marker location)
      • })
      • ardrone.createPngStream().pipe(stream)
      • //Perhaps learn how to output the imageStream to a window display OR send the imageStream to a web browser to display. I know for a fact that the latter is possible; I’ve seen it done.
    • Improve Tracking Function
      • Incorporate vertical neighbors
      • Check neighboring pixels at intermediate distances to improve erosion function’s accuracy, so I can lower the distance
      • Incorporate ardrone.createPngStream() to input video
      • Display video output in browser?
        • This isn't really necessary, but would be nice
    • Drone
      • Test command loops in the drone by putting land() and takeoff() in a repeating loop
      • Use location and marker radius in command sequence for drone
        • put location.x in center of image by setting yaw value
        • move until marker.radius is greater than a certain value
          • I assume marker.radius graphed against distance from the marker would be a “1/x” kind of function...
      • Fix leaning in drone flight
        • HOW???
        • Try another drone?
        • Perhaps leaning is OK; once the drone starts following a marker path, it will correct its drifting
    • Pseudocode for Final Program
      • Store command string specified by user
      • Take off
      • variable: Image = empty
      • variable: radius = 0
      • variable: Y = constant //used to spin in a direction until marker is found
      • for (command in command string) {
        • while (no marker is detected in the following image) {
          • Get image from drone
          • Detect marker location, using the tracking function
          • Send image to user //Maybe; not necessary
          • Yaw += Y
        • }
        • while (radius < R) {
          • if (location.y is too far from equator) change height
          • if (location.x is too far from meridian) change yaw
          • if (neither of the above) move forward
          • while no marker is detected in the following image {
            • Get image from drone
            • Detect marker location
            • Send image to client
            • Yaw += Y
          • }
        • }
        • Stop
        • if (command == end) {
          • Land
        • }
        • else if (command == left/right) {
          • Change yaw left/right
        • }
      • }
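The per-frame decision step inside the loops above could be sketched as a simple proportional controller. The gains, deadbands, and action names here are hypothetical placeholders, not settled values:

```javascript
// Sketch of one decision step from the pseudocode above: given the
// marker location from the tracking function, pick a single correction.
// Gains, deadbands (10% of frame), and target radius R are assumptions.
function decideCommand(marker, imageWidth, imageHeight, R) {
  if (marker === null) return { action: 'yaw', amount: 0.3 }; // spin until found
  var dx = marker.x - imageWidth / 2;   // offset from the vertical meridian
  var dy = marker.y - imageHeight / 2;  // offset from the horizontal equator
  if (Math.abs(dy) > imageHeight * 0.1) {
    return { action: dy > 0 ? 'down' : 'up', amount: 0.2 }; // re-center vertically
  }
  if (Math.abs(dx) > imageWidth * 0.1) {
    return { action: 'yaw', amount: 0.001 * dx }; // proportional turn
  }
  if (marker.radius < R) return { action: 'forward', amount: 0.1 }; // approach
  return { action: 'stop' }; // close enough: execute the stored command
}
```

The height/yaw/forward ordering mirrors the "if location.y... / if location.x... / if neither" checks in the pseudocode.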

Sunday, December 6, 2015

RE: Patent Assignment

An excellent patent list! I like your idea of inducing positive buoyancy. Has anyone patented it? Is it feasible while people are looking for ways to miniaturize drones? How much volume is required to reduce the net weight of the drone? I remember that the German company Festo has done something similar to float their robots. Here are the links for AirJelly and AirPenguin.

Monday, November 30, 2015

Patent Assignment


1.      Methods for adaptive control in a UAV (drone) carrying a hanging load
a.      1000000800044
b.      with a feedback linearization controller (FLC)
2.      A micro-aerial vehicle with built-in methods for running on land, landing, takeoff, and flying
a.      US 20140319266 A1
3.      Method for automatically changing the altitude of a drone in anticipation of doing a flip, and then doing the flip
a.      US 20130006448 A1
4.      Method to send drone information to users connected via iOS devices
a.      US 20120299751 A1
5.      A device with a touchscreen used to communicate with the drone, and its associated methods
a.      US 20110221692 A1
6.      An application that allows user to receive real-time aerial imagery from drones
a.      US 20120299751 A1
7.      A modular drone
a.      US 20140277854 A1
8.      Method for deploying a parachute on a drone
a.      7643887
9.      A type of drone made for investigating closed environments
a.      51752823
10.   A device for launching and recovering a drone, and a drone designed to complement the device
a.      US 20120292430 A1
11.   Generic drone control system
a.      25449387
12.   System for sending virtual shots between drones fitted with ID beacons
a.      US 20110299732 A1
b.      the victim recognizes itself in the image of the attacker
13.   A device shaped like a ring which absorbs vibrations of the drone and can hold a sensory device within it
a.      US 20120234969 A1
14.   A drone designed for railway maintenance
a.      US 20130220162 A1
15.   Control system for drones using a pair of stick controls
a.      25021972
16.   Controlling drone’s acceleration through feedback from another accelerating object
a.      22577543
17.   Target Seeking Simulator
a.      25503929
18.   Fire Detection System Using AI
a.      24206901
19.   Precision parachute recovery system
a.      24961033
20.   Methods and apparatus for vision-based object-tracking
a.      US 20130272570 A1
b.      library of images
c.      optical flow module
d.      decision forest module
e.      color tracking module
21.   Systems and methods for vision-based target tracking, designed for use by a UAV
a.      54290321
22.   Methods and systems for swapping the battery in a drone, using an energy station
a.      54106948
23.   Methods for having a UAV extract energy from a transmission line (power line?)
a.      39596616
24.   Abstract idea for resources to be passed from a self-sacrificing drone to a main drone (i.e. a power supply)
a.      US 20130132317 A1
25.   Methods, systems, and apparatus for simulating swarm behaviour in drones
a.      US patent: 9,104,201
b.      also supports controlling the drones to move along a path while maintaining the formation through a minimum-distance-to-other-drones requirement

AREAS OF FUTURE INNOVATION

1.   References #23 and #22 remind me of the recent ideas relating to charging a battery remotely (probably something with a strong magnetic field around it). I’m not sure exactly how it works, but I think this method of wireless charging could be used to help drones fly without having to land.
2.   References #20, #21, #12, and #25 suggest that UAVs could collectively send data to a single control center, which could create maps, etc. with the combined data and send individualized controls to each drone based upon what the center knows about the environment. Basically, the drones could use each other's video feeds to see what they individually cannot.
3.   In general, a problem with drones seems to be the low amount of power they can store. Future innovation could offer energy-reducing methods for flying the drone, like
a.      making it lighter
i.       lighter materials
ii.      perhaps incorporate a form of induced positive buoyancy, like attaching a small zeppelin-like component filled with helium
b.      lessening motor friction
c.      shrinking components
d.      creating a more efficient battery

Progress Report - 12/01/15

Progress:
  • We successfully installed OpenCV to work with the corresponding node.js module
  • We successfully installed the opencv module for node.js, which can be combined with the ar-drone module
  • We’ve gone through some of the examples for image processing
    • conversion to grayscale
    • copying images
    • saving and importing images
    • creating a binary image highlighting pixels within a certain RGB range
    • finding faces
    • drawing circles
Problems:
  • The functions that enabled us to complete the image processing examples are very self-contained, and I have found it hard to learn to do things at a basic level:
    • How to get the color of a pixel at a given location in the image matrix
    • How to set the value of a pixel at a given location
    • The format in which the pixel values are stored
    • The coordinate system used to create the matrices
Plans:

  • Figure out the answers to the questions posed in the “Problems” section
  • Feed in the drone’s video into a method using the opencv module, and display the video
  • Run the image processing examples on the drone video pipeline
  • Create a command loop that runs forever
    • necessary to analyze images and send commands indefinitely
    • Perhaps this would be in a form as simple as:
      • for (var i = 0; i < N; i++) {
      •      delay 100 ms
      •      turn 10˚ clockwise
      • }
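Since node.js is event-driven, the blocking "delay" above would more likely become a timer that fires every 100 ms. A sketch of that loop with a stand-in client object (the turn() method is a placeholder; with the ar-drone module the call would be something like client.clockwise()):

```javascript
// The repeating command loop, node-style: instead of a blocking delay,
// a timer fires every intervalMs and issues one turn command, stopping
// after `steps` ticks. The client and its turn() method are stand-ins.
function spinLoop(client, steps, intervalMs) {
  var i = 0;
  var timer = setInterval(function () {
    client.turn(10); // turn 10 degrees clockwise each tick
    if (++i >= steps) clearInterval(timer); // stop after N turns
  }, intervalMs);
  return timer;
}
```

An infinite analysis loop would simply never clear the interval, reading a frame and sending a command on each tick.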

Sunday, November 1, 2015

Progress Report - 11/1/15

Progress:
  • Learned some more terminology for the ar-drone module of node.js
  • Downloaded OpenCV on Mac computers successfully
  • We were able to write complete programs using node.js and run them, so that the drone could do a sequence of actions simply by running the script.
  • Got further in the install process of opencv after realizing opencv 3.0 wasn’t going to work out
Problems:
  • The opencv module is proving difficult to download, much less integrate into a program with the ar-drone module
  • Downloading opencv 3.0 proved impossible; we came across errors before the install even started. After downloading opencv 2.4.1, the install seemed to be going well until the end, where Owen came across some warnings and Theo came across errors.
  • Still unaware how to fix the drifting problem of the drone, but we plan to look further into that as the programming is developed a little more.
Plans:

  • Check if Linux computer already has the opencv module installed — if so, try writing a program (as described in previous report) with it.
  • Hopefully fix the opencv problems on the Mac laptops; if that works, we will integrate opencv with node.js on OS X instead of Linux.
  • Start to learn and write programs using both opencv and node.js
    • Unsure of which platform this will ultimately take place on, Linux or OSX
  • Once we become experienced with the basic examples of opencv we can start to move towards the more complex programs that will allow autonomous flight.


Monday, October 26, 2015

Progress Report - 10/25/15

Progress:
  • We flew the drone using the computer through ardrone_navigation in the SDK with an XBox controller
  • We flew the drone using the computer through Node.js with the ar-drone module
    • took off
    • spun
    • flipped
    • landed
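The sequence above, written out with the ar-drone module's chainable after() calls. This is a sketch from the module's documented API (createClient, takeoff, clockwise, animate, land); the durations and speeds are guesses, and it requires an installed ar-drone module plus a connected drone:

```javascript
// Sketch of the flight sequence above using the ar-drone module.
// Wrapped in a function so nothing runs without a drone attached;
// durations/speeds are placeholder guesses.
function flySequence() {
  var arDrone = require('ar-drone');
  var client = arDrone.createClient();
  client.takeoff();
  client
    .after(5000, function () { this.clockwise(0.5); })            // spin
    .after(3000, function () { this.animate('flipLeft', 1000); }) // flip
    .after(3000, function () { this.stop(); this.land(); });      // land
}
```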
Problems:
  • The modules for node.js are fairly new, so there is a chance this won't work if the functions themselves are unreliable, but I think it can work.
    • If the above turns out to be a real problem, we may have to return to the original plan A with the SDK.
  • There is a small drifting problem when yaw is changed. This should be fixable by compensating for the movement (after a turn, the drone adjusts its position by putting the marker in the center of the image).
Plans:
  • Learn all of the terms and functions associated with the ar-drone module
  • Do the same for the opencv module
  • Create a basic program which goes through the following:
    • take-off
    • roll/pitch movement
    • yaw turning
    • getting image data
    • displaying data in browser window
  • If all this can be accomplished, then add in:
    • pass png stream to opencv
    • perform matrix transformations to the images (binary thresholding)
    • draw over the spot of highest concentration
    • pass new image to browser

Tuesday, September 29, 2015

RE: Gantt Chart Link —11/8

Feedback:
  1. The first part of the Gantt chart is decent. You can just start working on your project accordingly.
  2. The second part of the Gantt chart is obviously missing, probably due to a limited understanding of the task. I suggest that you revise it while you are working on the first part, and notify me through email when you are done.

Thursday, September 24, 2015

Gantt Chart Link —11/8

Gantt Chart — 11/8

Since so far we are both doing the same things, we are not assigned separate tasks.

Wednesday, September 23, 2015

The Air Force Collaboratory: Mind of a Quadrotor

The Air Force Collaboratory launched a quadrotor project two years ago. The goal is to develop an autonomous quadrotor that can think and make decisions by itself like a human.

The program has mostly closed already. However, there is some general information and discussion on the website, which may be of interest to you. Please click the link and browse through the site for further info.

Monday, September 21, 2015

RE: Initial Planning & Coordination

Project Description
  • We are aiming to create a system by which a drone can autonomously fly in an indoor environment and deliver packages between specified locations.
  • Though this has essentially already been done by others, this project could potentially help implement this system in ElRo and further the practical knowledge and experience of our group members.
  • You can still improve the performance of the vision-based navigation by improving the algorithms.
Communication
  • Team: Owen G., Theo C. We communicate by texting and google docs.
  • Group: Our group is fairly independent. Though the drone take-down teams are related to ours in that we share the same equipment, we shouldn’t really need to communicate very much (if needed, email should suffice).
  • You can copy Theo's comments here for the Prior Work/Resource Inventory.
Tech Analysis
  • Navigate Linux
  • Run AR Drone SDK examples
  • Edit AR Drone SDK examples
  • Navigate AR Drone SDK
  • Use OpenCV
  • Incorporate OpenCV in AR Drone SDK

Competence
  • What we already know:
    • Minor experience in C
  • What we don’t already know:
    • Experience in Linux
    • Experience with the AR Drone
    • Experience with the SDK
      • Running examples
      • Editing examples
    • Experience with OpenCV
    • Further knowledge of C programming

Safety
  • Flying the drone poses a safety hazard; we need open, unoccupied spaces to test the flight of the drone.

Equipment, Materials, Budget
  • Parrot AR Drone (already purchased)
  • AR Drone SDK (free)
  • OpenCV (free)
  • Computer with Linux OS (free)

Schedule
  • This week we want to focus on having everything working as it was when the previous group finished. We want to run the SDK examples sdk_demo and navigation, as well as run the OpenCV application created by last year’s group on both of our computers. If we run into any problems, we want to resolve them by the end of this week.
  • We also, at the same time, want to work on our Gantt chart. However, if it was not intended that we jump into the project yet, we can just try to finish the Gantt chart before starting.
  • The first task will be done by both of us, and the second (Gantt chart) will be edited by both of us, but I will share the Gantt chart.

  • Research project planning is more like an iterative process, especially for beginners. The more you jump into the project, the more solid your planning will be. So, go ahead!