ERN: Emerging Researchers National Conference in STEM


Biomedical Image Region-of-Interest Detection Based on Transfer Learning and Deep Feature Extraction and Classification

Undergraduate #240
Discipline: Computer Sciences and Information Management
Subcategory: Computer Science & Information Systems
Session: 1
Room: Exhibit Hall A

Jose Dixon - Morgan State University
Co-Author(s): Md Mahmudur Rahman, Morgan State University, Baltimore, Maryland



Biomedical images are frequently used in articles to illustrate medical concepts or to highlight regions of interest (ROIs) with annotation markers (pointers) such as arrows, letters, or symbols overlaid on figures. Specific local patches in medical images can be identified as ROIs that are perceptually and/or semantically distinguishable, such as homogeneous texture patterns in grey-level radiological images or distinct color and texture structures in microscopic pathology and dermoscopic images. The variation in these local patches can be effectively modeled as 'concepts' using supervised learning techniques. Deep learning algorithms, and in particular convolutional neural networks (CNNs), have achieved unprecedented accuracy and speed across a large variety of image classification tasks. The superior performance of CNNs comes from their ability to recognize visual patterns directly from raw pixels with little to no pre-processing. Using a pre-trained CNN as a feature extractor also provides an alternative to the handcrafted features used in general machine learning classifiers, which are not learned directly from observation of the images. This work focuses on extracting such features from medical image patches by applying deep transfer learning with pre-trained VGGNet, ResNet, GoogLeNet, and AlexNet networks, implemented in Python with the OpenCV and Keras libraries. For this experiment, we used two datasets: the first (CT-Chest) has 2,645 patches (64×64) spanning 12 different segmented CT lung patch categories (such as honeycomb, cyst, and bronchi), and the second (MedPatch60) consists of 18,854 patches from different medical imaging modalities in 60 categories. In both datasets, 80% of the patches were used as the testing set and 20% as the training set.
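The feature-extraction step described above can be sketched with Keras, as the abstract names that library. This is a minimal illustration, not the authors' actual pipeline: the batch of patches is random data standing in for the CT-Chest/MedPatch60 patches, and `weights=None` is used only so the sketch runs without downloading the ImageNet weights; in real transfer learning one would pass `weights="imagenet"` to get the pre-trained filters.

```python
# Sketch: using a CNN (VGG16 here) as a fixed deep-feature extractor
# for 64x64 image patches. Patch data is synthetic for illustration.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# include_top=False drops the classification head; pooling="avg" collapses
# the final convolutional maps into one fixed-length vector per patch.
# Use weights="imagenet" for actual transfer learning (requires a download).
extractor = VGG16(weights=None, include_top=False,
                  input_shape=(64, 64, 3), pooling="avg")

# Stand-in batch of four 64x64 RGB patches.
patches = np.random.randint(0, 256, size=(4, 64, 64, 3)).astype("float32")

# Each patch becomes a 512-dimensional deep feature vector.
features = extractor.predict(preprocess_input(patches), verbose=0)
print(features.shape)
```

The resulting feature vectors can then be fed to a conventional classifier (e.g., an SVM or a small softmax layer) trained on the labeled training split.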
For the CT-Chest dataset, we obtained accuracies ranging from 50% (AlexNet) to 100% (GoogLeNet); for the MedPatch60 dataset, accuracies ranged from 50% (AlexNet) to 97% (ResNet). In the future, we will explore classifier-combination techniques with different deep features as input to further improve our classification system, and ultimately integrate it with a medical image retrieval system. This work would eventually improve the retrieval of biomedical literature by targeting the visual content in articles, a rich source of information not typically exploited by conventional bibliographic or full-text databases.

Funder Acknowledgement(s): Funding was provided by an NSF HBCU-UP Research Initiation Award, Grant No. 1601044, to Dr. Rahman.

Faculty Advisor: Md Mahmudur Rahman, md.rahman@morgan.edu

Role: I performed the research myself, installing the necessary libraries for Python and OpenCV, writing the programs, and validating the results. My research mentor assisted by providing datasets and guidance.

This material is based upon work supported by the National Science Foundation (NSF) under Grant No. DUE-1930047. Any opinions, findings, interpretations, conclusions or recommendations expressed in this material are those of its authors and do not represent the views of the AAAS Board of Directors, the Council of AAAS, AAAS’ membership or the National Science Foundation.
