Thursday, July 30, 2015

Some Notes

Choice between two methods:
    • Create an application (more desirable, but may offer fewer capabilities for automatic control)
    • Create software from scratch (pretty much anything is possible, but it’s much more difficult)

What am I doing?
    • Altering the interaction between the drone and the client controller so that the client sends a pre-compiled list of markers/commands to follow. Previously, the client controller sent individual instructions for maneuvers and changes in orientation.
      • Perhaps the user selects a path consisting of a number of markers, which is then sent to the drone as a list of commands.
        • The commands could contain a string of goals.
          • Markers have assigned locations (x, y, z).
          • Locations are defined relative to an origin point (e.g. the center of the music room).
          • We would compile a list of locations relative to the origin, which is used to write the commands list (see the sketch after these notes).
          • The drone would keep track of its position relative to the origin point.
          • This "mental image" could be aided by sensory input from the external environment (e.g. the drone pauses while obstructions block the way).
            • However, there is no distance sensor that faces forward.
          • Mr. Lin says that the self-derived location estimate will probably worsen (drift) over time; a short dead-reckoning sketch after these notes illustrates why.
            • TEST: accuracy of the altimeter (the sonar sensor).
            • TEST: accuracy of the accelerometer.
            • TEST: orientation of the sonar distance sensor.
              • ANSWER: pointed downwards, but perhaps it can be reoriented.
    • The commands could be IDs which correspond to physical markers.
      • In this case, the markers are placed throughout the school to be followed.
      • The markers may need to have recognizable ID tags, which would probably require a more sophisticated seeking function than simply finding the concentration of a certain color.
        • The ID could be a series of shapes and/or colors…?
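
A minimal sketch (Python) of what the pre-compiled command list could look like, assuming markers with (x, y, z) locations measured from an agreed origin; the marker names, coordinates, and the GOTO command format are invented for illustration, not actual project code:

    # Sketch of a pre-compiled command list. Each marker has a location (x, y, z),
    # in meters, measured from an agreed origin (e.g. the center of the music room).
    from collections import namedtuple

    Marker = namedtuple("Marker", ["name", "x", "y", "z"])

    # Hypothetical marker locations relative to the origin (0, 0, 0).
    MARKERS = {
        "music_room_center": Marker("music_room_center", 0.0, 0.0, 1.5),
        "hallway_door":      Marker("hallway_door",      4.0, 0.5, 1.5),
        "stairwell":         Marker("stairwell",        12.0, 3.0, 1.5),
    }

    def build_command_list(path):
        """Turn the user's selected path (a list of marker names) into
        a list of goal positions to send to the drone."""
        return [("GOTO", MARKERS[name].x, MARKERS[name].y, MARKERS[name].z)
                for name in path]

    commands = build_command_list(["music_room_center", "hallway_door", "stairwell"])
    # [("GOTO", 0.0, 0.0, 1.5), ("GOTO", 4.0, 0.5, 1.5), ("GOTO", 12.0, 3.0, 1.5)]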
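
To see why the self-derived location is expected to worsen, here is a small dead-reckoning sketch: position comes from integrating the accelerometer twice, so even a small constant sensor bias (the 0.05 m/s² figure below is invented) grows into a large position error:

    # Dead reckoning: integrate acceleration twice to get position.
    # A small uncorrected accelerometer bias produces position error that
    # grows with the square of the elapsed time.
    dt = 0.05             # 20 Hz sensor updates (assumed rate)
    bias = 0.05           # m/s^2 of constant bias (invented for illustration)
    velocity = 0.0
    position_error = 0.0

    for _ in range(int(60 / dt)):        # one minute of flight
        velocity += bias * dt            # first integration
        position_error += velocity * dt  # second integration

    print("%.1f m of drift after 60 s" % position_error)   # ~90 m

This is the drift Mr. Lin describes, and it is why the position estimate should be corrected by external references (markers, sonar height, the camera) rather than trusted on its own.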


Resources
- On simultaneous localization and mapping:
  http://www.nickd.nl/dl/thesis_Nick_Dijkshoorn.pdf
- On using OpenCV within the SDK to stream video from the drone to a computer (a minimal frame-grab sketch follows below):
  http://petrkout.com/linux/parrot-ardrone-2-0-video-streaming-through-opencv-in-linux/
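
Following the second link above, the AR.Drone 2.0 serves its camera feed over its own WiFi network, and OpenCV (built with FFmpeg) can usually open that stream directly. A minimal frame-grab sketch, assuming the drone's default address of tcp://192.168.1.1:5555 (check the article for your setup):

    import cv2

    # The AR.Drone 2.0 normally streams H.264 video at this address on its own WiFi network.
    capture = cv2.VideoCapture("tcp://192.168.1.1:5555")

    while True:
        ok, frame = capture.read()               # grab one frame from the stream
        if not ok:
            break
        cv2.imshow("drone camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop
            break

    capture.release()
    cv2.destroyAllWindows()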

Thursday, July 2, 2015

Re: Method Question

Sorry to have missed your post. Here are a few thoughts about your proposal:
  1. Since the drone will be navigating in a known indoor environment, you should take advantage of that knowledge and integrate the geometry (or map) of the building into your navigation algorithm (a small map sketch follows this list).
  2. The Inertial Measurement Unit (IMU) of the drone can definitely give you valuable real-time navigation information. Using it to control the drone's motion should be part of the learning curve of drone programming and is worth trying for our problem. We have used it indirectly (through third-party tools) before; somehow it did not seem very reliable, and the drone would drift gradually. However, that shouldn't prevent you from testing the idea under your direct control (i.e., with your own program).
  3. The ultrasound sensor on the drone is currently used to determine the height, not the distance to obstacles in front of the drone. It is possible to hack the drone and add an ultrasound sensor (plus supporting circuitry such as an Arduino board) pointing forward to detect objects (a serial-reading sketch follows this list).
  4. The camera on the drone is actually another powerful sensor which can provide a lot of information. Just like humans navigating an indoor environment, we rely heavily on vision. Color-mark-guided navigation is just the first step of our project. Putting color marks at strategic locations around the building will ease the task of visual detection; it is a simplified, intermediate step to test the image processing and navigation (a basic color-detection sketch follows this list). Once that has been done, we can explore more sophisticated algorithms for navigation without any marks, and even handle moving-obstacle avoidance.
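
For point 1, a hedged sketch of how the known building layout could feed the navigation: represent the markers/rooms as a small graph and pre-compute the marker sequence between any two locations with a breadth-first search. All names and connections below are placeholders, not an actual map of the school:

    from collections import deque

    # Hypothetical building map: each marker lists the markers reachable from it.
    BUILDING_MAP = {
        "music_room": ["hallway_a"],
        "hallway_a":  ["music_room", "hallway_b", "stairwell"],
        "hallway_b":  ["hallway_a", "library"],
        "stairwell":  ["hallway_a"],
        "library":    ["hallway_b"],
    }

    def plan_path(start, goal):
        """Breadth-first search over the known map; returns a marker sequence."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for neighbor in BUILDING_MAP[path[-1]]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    frontier.append(path + [neighbor])
        return None

    print(plan_path("music_room", "library"))
    # ['music_room', 'hallway_a', 'hallway_b', 'library']

The resulting marker sequence could then be turned into the pre-compiled command list sketched in the notes above.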
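
For point 3, if a forward-facing ultrasound sensor were added through an Arduino, the computer side could be as simple as reading one distance per line over a serial port and pausing the drone when something is too close. The port name, baud rate, message format, and threshold are all assumptions, not tested hardware code:

    import serial   # pyserial; assumes the Arduino prints one distance in cm per line

    PORT = "/dev/ttyUSB0"      # assumed serial port for the Arduino
    SAFE_DISTANCE_CM = 100     # arbitrary threshold: pause if closer than 1 m

    link = serial.Serial(PORT, 9600, timeout=1)

    def obstacle_ahead():
        """Return True if the latest reading is closer than the safe distance."""
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            return False       # no reading this cycle; treat as clear (or as unsafe, if cautious)
        return float(line) < SAFE_DISTANCE_CM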
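
For point 4 (and the color-marker idea in the notes above), a common first pass is to threshold each camera frame in HSV and take the centroid of the largest patch of the target color. The HSV range below is a rough guess for a red marker and would need tuning; this is a sketch of the idea, not the project's actual detector:

    import cv2
    import numpy as np

    def find_color_mark(frame, lower=(0, 120, 80), upper=(10, 255, 255)):
        """Return the (x, y) pixel centroid of the largest patch of the target
        color, or None if nothing is found. Default range roughly targets red."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
        # [-2] picks the contour list regardless of the OpenCV version's return format
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not contours:
            return None
        biggest = max(contours, key=cv2.contourArea)
        m = cv2.moments(biggest)
        if m["m00"] == 0:
            return None
        return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

Feeding frames from the video-capture sketch above into find_color_mark and steering toward the returned centroid would be one way to implement the "concentration of a certain color" approach mentioned in the notes.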