Using third-party tools to create applications
- You can refer to the "Third-Party Tools" section on the project resource page to find several tools you can use. Note that some of them are no longer actively maintained.
- Last year's students used AutoFlight (running on a Windows PC) from LBPC Labs to fly the drone and to write autonomous flight routines in its Python-like scripting language, AutoScript. They also used its functions to acquire ultrasound sensor data and perform an autonomous landing. (You can find the example at 3:21 of the City of STEM video.) However, based on the students' feedback, several flight-control functions are unreliable or not working, and documentation on using OpenCV with AutoFlight is missing.
- Last year's students also tried Nodecopter to program the drone's motion. It is a newly developed tool that has been gaining popularity; however, they ran into some problems and could not get it working.
- There are many existing AR Drone-based research projects/capstone projects, so, practically speaking, we don't need to start everything from scratch. Since Linux is probably the most popular platform in academia, many of the projects are Linux-based.
- Last year's students followed the dissertation A.R Drone Vision-Guided Searching (2013) by Derek Long and coded the computer-vision part of autonomous drone navigation. (Some results can be found at 4:12 of the City of STEM video.) The goal is to use a series of simple circular visual marks to guide the drone. They acquired the drone video, adapted it to OpenCV's format, transformed it to the HSV color space, and converted it to binary by thresholding. Then, based on the binary video, they calculated the size and center of mass of the white pixels and used them to estimate the distance and direction of the visual marks. (A rough OpenCV sketch of this pipeline is included right after this list.) What they did not have time to finish is marrying the computer-vision part with the motion-control part so the drone can navigate autonomously by following the marks.
- As for path planning, the drone should be able to navigate autonomously from any room to any other room on the second floor (starting from the middle of a room or right at the door are both acceptable). In other words, some UI for users to assign navigation tasks is required; a simple room-graph sketch also follows this list.
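To make the pipeline above concrete, here is a minimal OpenCV sketch of the steps the students implemented (HSV conversion, thresholding, then the size and center of mass of the white pixels). The HSV bounds, the `locate_mark` helper, and the video source are my assumptions; swap in last year's values and the actual drone stream.

```python
import cv2
import numpy as np

# Hypothetical HSV range for the colored circular marks; tune to the actual marks.
LOWER_HSV = np.array([0, 120, 120])
UPPER_HSV = np.array([10, 255, 255])

def locate_mark(frame):
    """Return (area, (cx, cy)) of the thresholded mark, or None if nothing is found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)          # BGR -> HSV color space
    binary = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)       # threshold to a binary image
    m = cv2.moments(binary, binaryImage=True)
    if m["m00"] == 0:                                     # no white pixels at all
        return None
    area = m["m00"]                                       # apparent size grows as the drone approaches
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]     # center of mass ~ direction of the mark
    return area, (cx, cy)

if __name__ == "__main__":
    # The AR Drone 2.0 front camera is commonly read from this address when the PC is on
    # the drone's Wi-Fi; a recorded video file works the same way for offline testing.
    cap = cv2.VideoCapture("tcp://192.168.1.1:5555")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = locate_mark(frame)
        if result is not None:
            area, (cx, cy) = result
            # Offset of the mark from the image center tells us which way to steer.
            dx = cx - frame.shape[1] / 2
            print(f"mark area={area:.0f}, horizontal offset={dx:.1f}px")
```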
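For the room-to-room requirement, one simple way to let users assign a task is a text prompt over a hand-built graph of the second floor, with breadth-first search producing the sequence of waypoints to fly. The room names and adjacency below are placeholders, not the real floor plan.

```python
from collections import deque

# Placeholder adjacency of second-floor rooms via the hallway; replace with the real floor plan.
FLOOR_GRAPH = {
    "Room 201": ["Hallway"],
    "Room 202": ["Hallway"],
    "Room 203": ["Hallway"],
    "Hallway": ["Room 201", "Room 202", "Room 203"],
}

def plan_path(start, goal):
    """Breadth-first search over the room graph; returns the list of rooms to pass through."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in FLOOR_GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    # Minimal "UI": ask the user for the start and destination rooms.
    start = input("Start room: ")
    goal = input("Destination room: ")
    print("Planned route:", plan_path(start, goal))
```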
Four Stages of the Project
- Stage I: As you suggested, after testing and calibrating all the relevant sensors, program the drone's motion based on the sensor data (Linux preferred). A sketch of such a sensor-driven control loop follows this list.
- Stage II: If the general sensors are not accurate enough, you can follow up on the work left from last year and do vision-based navigation (more precisely, visual-mark-based navigation).
- Stage III: After gaining some experience with vision-based navigation, we can let go of the visual marks and instead use pre-stored visual features to guide the drone's navigation. Those features are acquired and created in a training phase (we will hold the drone, walk through the hallway, and capture the scenes crucial for navigation). When the drone flies by itself, the program will process the real-time video and find matches to the stored features. Based on the matching information, the drone knows where it is and navigates toward the destination accordingly (see the feature-matching sketch after this list).
- Stage IV: If you reach Stage III and still have time, you can try to implement full SLAM (much more advanced math is involved). We would not even need to train the drone manually: the drone maps the unknown environment and localizes itself within it. Based on the map created by the drone, we can ask it to go wherever we want.
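For Stage I, the core idea is a loop that reads the navdata the drone streams back (attitude, altitude, velocities) and issues movement commands to correct toward a target. The sketch below assumes a hypothetical drone client with `get_navdata()` and `move()` methods; the real calls depend on which SDK or library you settle on, so treat those names and the gain value as placeholders to tune.

```python
import time

TARGET_ALTITUDE_MM = 1200   # hold about 1.2 m off the ground
KP_ALTITUDE = 0.0005        # proportional gain, to be tuned on the real drone

def hold_altitude(drone, duration_s=10.0):
    """Simple proportional altitude hold based on the ultrasound altimeter reading.

    `drone` is a hypothetical client object: get_navdata() is assumed to return a dict
    with an 'altitude' field in millimetres, and move(...) to send a normalized
    velocity command. Replace both with the real SDK calls.
    """
    start = time.time()
    while time.time() - start < duration_s:
        navdata = drone.get_navdata()
        error = TARGET_ALTITUDE_MM - navdata["altitude"]
        # Positive error -> the drone is too low -> climb; clamp the command to [-1, 1].
        vz = max(-1.0, min(1.0, KP_ALTITUDE * error))
        drone.move(forward=0.0, right=0.0, up=vz, turn=0.0)
        time.sleep(0.05)    # ~20 Hz control loop
    drone.move(forward=0.0, right=0.0, up=0.0, turn=0.0)   # stop climbing/descending
```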
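For Stage III, the training phase can be as simple as storing ORB descriptors of a few key hallway scenes; in flight, the program extracts descriptors from each camera frame and counts good matches against every stored scene to decide which stored view it is looking at. This is only a rough sketch under those assumptions (the image paths, match threshold, and distance cutoff are illustrative), not a definitive design.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(image):
    """Extract ORB keypoints and descriptors from a grayscale image."""
    return orb.detectAndCompute(image, None)

def build_reference_set(paths):
    """Training phase: compute and store descriptors for each captured hallway scene."""
    refs = {}
    for path in paths:  # e.g. images captured while walking the drone through the hallway
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = describe(img)
        refs[path] = des
    return refs

def best_matching_scene(frame_gray, refs, min_matches=25):
    """Return the stored scene that the current camera frame matches best, if any."""
    _, des = describe(frame_gray)
    if des is None:
        return None
    best_name, best_count = None, 0
    for name, ref_des in refs.items():
        matches = matcher.match(des, ref_des)
        good = [m for m in matches if m.distance < 40]   # crude distance cutoff
        if len(good) > best_count:
            best_name, best_count = name, len(good)
    return best_name if best_count >= min_matches else None
```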
If you agree with the picture I just painted, you can start studying/coding/experimenting:
- Sensors: Study the information about all the relevant sensors (accelerometers, gyroscopes, magnetometers, ultrasound sensors, the pressure sensor, etc.) in the AR Drone Developer Guide.
- AR Drone Linux SDK
1) Read and take notes on Chapter 10 of the AR Drone Developer Guide.
2) Download and install Ubuntu 12.04 (LTS).
3) Install AR Drone SDK 2.0.1
4) Run the examples "navigation" & "sdk_demo", and study their code.
- Start coding based on those examples.