Emerging Researchers National (ERN) Conference

Real-Time Facial Computing Web Service Construction and Mobile Device Implementation

Undergraduate #64
Discipline: Computer Sciences and Information Management
Subcategory: Computer Science & Information Systems

Christina Tsangouri - City College of New York
Co-Author(s): Wei Li, City College of New York, NY

Emotions are an important aspect of human communication. Visually impaired people cannot see the emotions of those around them, which severely limits their social interaction. To help visually impaired users recognize the emotions of the people around them, we designed a system that reports those emotions in real time. The system recognizes seven emotions: anger, surprise, neutral, happiness, disgust, fear, and sadness. It is implemented as a framework consisting of a mobile application for Google's Android operating system, developed in Java with the Android Software Development Kit; a web service on the main server, built with Ruby on Rails; and an emotion-computing server.
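As a rough sketch of the kind of result this framework passes between its components (the seven labels are those listed above; the class, field, and method names are illustrative assumptions, not the actual implementation):

    // A minimal sketch of the emotion-probability result exchanged in the
    // framework. The seven labels are those listed in the abstract; all
    // names here are illustrative, not the real API.
    import java.util.Collections;
    import java.util.Map;

    public class EmotionResult {

        public enum Emotion { ANGER, SURPRISE, NEUTRAL, HAPPY, DISGUST, FEAR, SAD }

        // One probability per emotion, e.g. from the CNN's output layer.
        private final Map<Emotion, Double> probabilities;

        public EmotionResult(Map<Emotion, Double> probabilities) {
            this.probabilities = probabilities;
        }

        // The mobile app displays (and speaks) the most probable emotion.
        public Emotion mostProbable() {
            return Collections.max(probabilities.entrySet(),
                                   Map.Entry.comparingByValue()).getKey();
        }
    }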

The framework is built around a loop between a mobile device and a server. The mobile device captures real-time scenes through its built-in camera and continuously detects faces using OpenCV. When a face is detected, the face region is sent over HTTP to the main server, which handles all requests from the mobile device and assigns the image to the emotion-computing server for emotion recognition. The emotion-computing server computes the probability of each of the seven emotions in the image using a Convolutional Neural Network we trained. The main server then returns these probabilities to the mobile application, which displays the most probable emotion to the user. The entire loop executes at about 2 frames per second. The application is designed with visually impaired users in mind: it vibrates when a face is detected and communicates the most probable emotion through audio, using musical tones, in addition to displaying the result visually. The application, EmoComputing, is available on Android's Google Play store. Future work includes implementing the framework on Apple's iOS platform to aid data collection for a more accurate emotion recognition model, evaluating the performance of the emotion recognition system in a real-time environment, and evaluating the user experience.
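A minimal sketch of the client side of this loop, assuming OpenCV's Java bindings for face detection and a plain HttpURLConnection for the upload; the cascade file, endpoint URL, and class names are assumptions, and the real app drives this from the Android camera pipeline rather than one-off calls:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfByte;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.objdetect.CascadeClassifier;

    public class FaceUploader {

        // Stock OpenCV frontal-face Haar cascade; the path is illustrative.
        private final CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");

        // Detect faces in one camera frame and send each face region to the
        // main server, mirroring the capture-detect-send step of the loop.
        public void processFrame(Mat frame) throws Exception {
            MatOfRect faces = new MatOfRect();
            detector.detectMultiScale(frame, faces);
            for (Rect r : faces.toArray()) {
                Mat face = frame.submat(r);           // crop the face region
                MatOfByte jpeg = new MatOfByte();
                Imgcodecs.imencode(".jpg", face, jpeg);
                post(jpeg.toArray());
            }
        }

        // POST the encoded face region over HTTP; the endpoint is
        // hypothetical (the real main server is a Ruby on Rails service).
        private void post(byte[] body) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://example.com/api/emotions").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "image/jpeg");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            conn.getResponseCode(); // response carries the 7 emotion probabilities
            conn.disconnect();
        }
    }

In the actual application, the probabilities returned by the main server would then drive the vibration and audio feedback described above.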

References: Li, Wei, Min Li, Zhong Su, and Zhigang Zhu. "A deep learning approach to facial expression recognition with candid images." In Proceedings of the 14th IAPR International Conference on Machine Vision Applications (MVA 2015), pp. 279-282. IEEE, 2015.

Funder Acknowledgement(s): This work is supported by the National Science Foundation under Award #EFRI-1137172 and by the 2015 NSF EFRI-REM pilot program at the City College of New York.

Faculty Advisors: Zhigang Zhu, Tony Ro
