Discipline: Technology and Engineering
Subcategory: Computer Engineering
Session: 3
Room: Exhibit Hall A
Conor Green - Loyola Marymount University
Co-Author(s): Brenden Stevens, University of California, Los Angeles, Los Angeles, California
Small quadrotor helicopter, or quadcopter, unmanned aerial vehicles (UAVs) have unique abilities to map environments, particularly using 3D flash light detection and ranging (LiDAR) technologies. However, the majority of applications to date use global navigation satellite systems (GNSS) to determine location in order to stitch together LiDAR frames, which precludes mapping in environments without readily available or reliable GNSS (e.g., inside concrete buildings or underground). Previous projects have confirmed the viability of mapping with LiDAR and have achieved simultaneous localization and mapping (SLAM) without GNSS. The project presented equipped a quadcopter UAV with a LiDAR sensor and manually navigated it through an interior environment to collect navigation and point cloud data. Using estimates of pose, a virtual mock-up was generated in post-processing from the point cloud data to provide a comprehensive 3D representation of the interior. The proposed system was able to create extensive and accurate models of the interior of a building under constrained circumstances: comprehensive models could be created when the UAV was not flying but was instead placed on a chair and manually rotated and translated. Due to the extra weight of the LiDAR sensor and single-board computer, the UAV could not fly properly, and the data collected in flight were of too low quality to generate a reasonable model. Despite these difficulties in flight, the project clearly demonstrates the efficacy of LiDAR mapping and justifies further research and development in the area using new 3D time-of-flight (ToF) LiDAR cameras. Improved position estimation through a Kalman filter may yield more accurate pose estimates. Furthermore, kernel techniques applied to the point cloud data could significantly improve, or even replace, iterative point cloud registration techniques (such as iterative closest point) for estimating transforms from the point cloud.
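The frame-stitching step the abstract refers to, estimating the rigid transform between consecutive LiDAR point clouds, can be illustrated with a minimal point-to-point iterative closest point (ICP) loop. The NumPy/SciPy sketch below is illustrative only, not the project's actual implementation; the function names, iteration counts, and tolerances are assumptions.

import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform mapping points A (N,3) onto B (N,3) via SVD."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)            # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, iters=30, tol=1e-6):
    """Align `source` (N,3) to `target` (M,3); returns a 4x4 homogeneous transform."""
    tree = cKDTree(target)
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)      # nearest-neighbor correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                     # accumulate the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:    # stop when alignment error plateaus
            break
        prev_err = err
    return T

Applying the returned transform to each new frame accumulates the frames into a single point cloud, which is the registration step a kernel-based method might improve or replace.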
Funder Acknowledgement(s): I would like to thank the National Science Foundation and Department of Defense for funding the REU for Smart UAVs at Auburn University. I would also like to thank Dr. Chapman, Dr. Biaz, and the host university, Auburn University, for running the program.
Faculty Advisor: Dr. Richard Chapman, chapmro@auburn.edu
Role: The work was evenly divided between two subjects: GPS-less location/orientation (pose) estimation and point cloud registration. I focused on drone control and interfacing, sensor filtering, and pose estimation. We used a small quadcopter, for which I wrote an extensive Python script to fly the drone and query sensor data over a wireless connection. I filtered the sensor values by characterizing their noise and applying a complementary filter and various low-pass filters. Finally, I wrote an algorithm to fuse the sensor values and estimate pose.
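As an illustration of the filtering described above, the following is a minimal Python sketch (not the project's actual script) of a first-order low-pass filter and a complementary-filter step that blends an integrated gyroscope rate with an accelerometer-derived tilt angle; the function names and the alpha/beta weights are assumptions.

import math

def low_pass(prev_filtered, raw_sample, beta=0.9):
    """First-order IIR low-pass filter: smooths a noisy sensor stream."""
    return beta * prev_filtered + (1.0 - beta) * raw_sample

def complementary_roll(prev_roll, gyro_rate_x, ax, ay, az, dt, alpha=0.98):
    """One complementary-filter update for the roll angle (radians).

    Short term: integrate the gyro rate, which is smooth but drifts.
    Long term: trust the accelerometer's gravity direction, which is
    noisy but drift-free; alpha sets the blend between the two.
    """
    gyro_roll = prev_roll + gyro_rate_x * dt
    accel_roll = math.atan2(ay, math.sqrt(ax * ax + az * az))
    return alpha * gyro_roll + (1.0 - alpha) * accel_roll

Running the complementary filter on each sensor sample yields the drift-corrected orientation estimates that the fusion algorithm combines into a pose.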