Discipline: Technology and Engineering
Subcategory: Computer Engineering
Jeury Mejia - Borough of Manhattan Community College
Co-Author(s): Feng Hu, The Graduate Center, CUNY Hao Tang, Borough of Manhattan Community College, CUNY
Localizing visually impaired people within an indoor environment is of major importance, since this information can help reduce the risks they face while moving and navigating. In this work, we implement an iPhone-based indoor navigation system that helps blind users with localization and navigation. The system consists of a smartphone with a specially designed omnidirectional lens mounted on its case. The lens captures omnidirectional images, each of which represents panoramic information about the environment in a single shot. The indoor environment information covered in each shot is compacted: concise features are extracted from the images and stored on a remote GPU-enabled server. To localize themselves, users simply capture a short video; the app sends the compressed information to the server, which computes the location and returns it to the user. On the server, this information is compared against a large frame-based database, and the current location is identified by finding the best-matched frame. We use TCP/IP socket communication to establish the connection between the front-end iPhone app and the server; the latter can potentially serve multiple users simultaneously, which makes the system scalable. The application provides real-time data acquisition, transmission, and querying, and requires few of the phone's resources, since most of the processing takes place on the server. Currently, the app's UI (user interface) consists of three main panels: a user interface, an admin interface, and a network interface. The user panel, intended for the visually impaired, is simple and intuitive: the blind user records a short video by pressing a large button in the middle of the screen, and upon releasing the button the app provides audio feedback announcing the current location in the environment.
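The client-server exchange described above can be sketched as a simple length-prefixed request/reply over a TCP socket. This is an illustrative sketch only: the payload contents, the `LOCATIONS` lookup table, and the message framing are assumptions for demonstration, not the system's actual wire protocol, and the real server matches features on a GPU rather than doing a dictionary lookup.

```python
import socket
import struct
import threading

# Hypothetical lookup table standing in for the large frame-based database.
LOCATIONS = {b"frame-lobby": "Lobby", b"frame-lab": "Computer Lab"}

def recv_exact(conn, n):
    """Read exactly n bytes from the socket (recv may return short reads)."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def handle_client(conn):
    """Serve one query: read a length-prefixed payload, reply with a location."""
    with conn:
        (size,) = struct.unpack("!I", recv_exact(conn, 4))  # 4-byte length prefix
        payload = recv_exact(conn, size)                    # compressed frame features
        reply = LOCATIONS.get(payload, "Unknown").encode()
        conn.sendall(struct.pack("!I", len(reply)) + reply)

def serve(sock):
    """Accept loop: one thread per client keeps the server available to many users."""
    while True:
        conn, _ = sock.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

def query_location(host, port, features):
    """Client side: send the compressed features, return the location string."""
    with socket.create_connection((host, port)) as s:
        s.sendall(struct.pack("!I", len(features)) + features)
        (size,) = struct.unpack("!I", recv_exact(s, 4))
        return recv_exact(s, size).decode()

if __name__ == "__main__":
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    server.listen()
    port = server.getsockname()[1]
    threading.Thread(target=serve, args=(server,), daemon=True).start()
    print(query_location("127.0.0.1", port, b"frame-lobby"))  # -> Lobby
```

Because each accepted connection runs in its own thread, the same pattern lets one server answer queries from multiple users simultaneously, which is the scalability property the abstract highlights.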
The admin and network interfaces are intended for research purposes and help the developers tune the various parameters (e.g., video quality, video recording, IP address) involved in the whole process. Our system is low-cost and easy to use, factors that make it a viable solution to issues regarding the safety and accessibility of blind people.
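The "best-matched frame" step can be illustrated as a nearest-neighbor search over stored feature vectors. This is a minimal sketch under assumed inputs: the three-element feature vectors, the location labels, and Euclidean distance are placeholders for illustration; the actual system extracts concise features from omnidirectional frames and matches them on a GPU-enabled server.

```python
import math

# Hypothetical frame database: (feature vector, location label) per stored frame.
DATABASE = [
    ([0.9, 0.1, 0.0], "Entrance"),
    ([0.1, 0.8, 0.1], "Hallway"),
    ([0.0, 0.2, 0.9], "Elevator bank"),
]

def best_match(query):
    """Return the location of the stored frame nearest to the query features."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(DATABASE, key=lambda entry: dist(entry[0], query))[1]

print(best_match([0.05, 0.75, 0.2]))  # -> Hallway
```

A brute-force linear scan like this is adequate for small databases; a production server covering a large building would use an indexed or GPU-parallel search over the same idea.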
Funder Acknowledgement(s): This work is supported by NSF EFRI-REM under Award # EFRI-1137172, to the City College of New York.
Faculty Advisor: Zhigang Zhu