ERN: Emerging Researchers National Conference in STEM

Developing a License Plate Detection and Character Recognition Model for Performance Evaluation and Optimization

Undergraduate #18
Discipline: Computer Sciences and Information Management
Subcategory: Computer Science & Information Systems
Session: 1
Room: Private Dining

Jarred Johnson - Fort Valley State University


In recent years, demand for reliable automatic license plate recognition technology has grown in both private and governmental organizations. Based on recent innovations in machine learning and image processing, we hypothesized that a smart detection technology could identify vehicle license plates with relatively good accuracy under both normal and adverse environmental conditions. To investigate this hypothesis, we developed a license plate recognition model (LPRM) and examined its reliability under these conditions.

The LPRM was written in Python. Libraries including TensorFlow, NumPy, Matplotlib, and OpenCV were used in conjunction with a previously developed object detection model, and an Optical Character Recognition (OCR) engine, EasyOCR, was integrated into the model to extract the characters from the license plate.

The license plate detection stage resizes each image to a width of 600 pixels while maintaining its aspect ratio. The resized image is then processed to create detection boxes around all objects detected within the image, and the detected objects are filtered to isolate the license plate specifically.

The OCR stage receives the processed images from the license plate detection model, extracts the detection boxes, classifies every detected object with a confidence score, and keeps only the detection boxes that meet or exceed a predefined confidence score threshold. The retained boxes are then passed to EasyOCR’s readtext function, which extracts all text from the image using a bidirectional Long Short-Term Memory (LSTM) neural network. A final filtering step removes peripheral text, leaving only the license plate characters.

For real-time license plate detection, live webcam footage is passed through the model instead of static images. The detection and OCR steps remain the same; the only difference is that the extracted characters are recorded to a .csv file.

To determine the accuracy of the LPRM, we tested it on 22 images of vehicle license plates taken under normal imaging conditions; the model recognized the license plate characters with 75.52% accuracy. We then distorted some images by applying a Gaussian blur along the X and Y axes to simulate adverse environmental conditions, and the model recognized the characters with 67% accuracy.

In conclusion, we found that our license plate model was reasonably accurate. Although our OCR engine was not as advanced as many others, we were still able to produce relatively accurate results, even under non-ideal conditions. Mentor: Dr. Naghedolfeizi.
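The Python sketch below illustrates the pipeline described in the abstract, under several assumptions: the object detector is treated as a generic callable returning boxes and scores (the abstract does not name the specific pretrained model), the 0.5 confidence threshold and blur kernel size are illustrative placeholders, and all function names are hypothetical rather than taken from the authors’ code.

```python
# Minimal sketch of the described pipeline. The detector is a placeholder:
# any object-detection model returning (boxes, scores) would work here.
import csv
import cv2
import easyocr

CONFIDENCE_THRESHOLD = 0.5   # assumed value; the abstract only says "predefined"
TARGET_WIDTH = 600           # resize width stated in the abstract

reader = easyocr.Reader(['en'], gpu=False)  # EasyOCR with the English character set


def resize_keep_aspect(image, width=TARGET_WIDTH):
    """Resize the image to a fixed width while preserving its aspect ratio."""
    h, w = image.shape[:2]
    scale = width / float(w)
    return cv2.resize(image, (width, int(h * scale)))


def detect_plates(image, detector):
    """Run the detector and keep crops whose score meets the threshold.

    Assumes `detector` returns boxes in normalized [ymin, xmin, ymax, xmax]
    order, as TensorFlow detection models commonly do.
    """
    boxes, scores = detector(image)
    h, w = image.shape[:2]
    crops = []
    for box, score in zip(boxes, scores):
        if score >= CONFIDENCE_THRESHOLD:
            ymin, xmin, ymax, xmax = box
            crops.append(image[int(ymin * h):int(ymax * h),
                               int(xmin * w):int(xmax * w)])
    return crops


def read_plate_text(plate_crop):
    """Extract text from a cropped plate with EasyOCR; keeping the longest
    string is a simple stand-in for filtering out peripheral text."""
    results = reader.readtext(plate_crop, detail=0)
    return max(results, key=len) if results else ''


def gaussian_distort(image, kernel=(15, 15)):
    """Blur along the X and Y axes to simulate adverse imaging conditions."""
    return cv2.GaussianBlur(image, kernel, 0)


def log_plates_from_webcam(detector, csv_path='plates.csv'):
    """Real-time variant: run the same pipeline on live webcam frames and
    record each recognized plate string to a .csv file."""
    cap = cv2.VideoCapture(0)
    with open(csv_path, 'w', newline='') as f:
        writer = csv.writer(f)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            frame = resize_keep_aspect(frame)
            for crop in detect_plates(frame, detector):
                text = read_plate_text(crop)
                if text:
                    writer.writerow([text])
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    cap.release()
```

For a static test image, the same functions can be chained directly: resize, detect, then read each crop, applying `gaussian_distort` first to reproduce the adverse-condition trials.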

Funder Acknowledgement(s): The author gratefully acknowledges the Fort Valley State University NSF HBCU-UP program, directed by Dr. Dhir, for its financial support of this project.

Faculty Advisor: Masoud Naghedolfeizi, feizim@fvsu.edu

Role: Developed code, performed model testing, and analyzed results.
