ERN: Emerging Researchers National Conference in STEM

Musical Instrument Classification Using a Neural Network

Undergraduate #211
Discipline: Computer Sciences and Information Management
Subcategory: Computer Science & Information Systems

Therrick-Ari Anderson - Lincoln University of Missouri


Musical instrument classification provides a framework for creating and assessing features in an overall examination of musical signals. By examining and analyzing these signals, key differences between instruments can be identified. Identifying the most important features is what distinguishes this study from previous work. I propose to identify the most significant features to evaluate within my neural network, and to create a neural network that quickly distinguishes one instrument from another based on descriptive features. Feature extraction and selection are crucial steps in distinguishing musical signals. Feature extraction is the process of obtaining specific characteristics from a data sample that help distinguish it from other samples; feature selection follows extraction, choosing the most relevant features to represent each sample. Once relevant features are selected, they are applied to the neural network as possible inputs. In this work, the neural network will distinguish between two classes of instrument (e.g., trumpet or tuba). Based on the results of applying new data, the feature set will go through numerous revisions to identify which features work best. Utilizing a well-known data set such as the University of Iowa Musical Instrument Samples is essential, as is specifying the number of samples to be analyzed. In this study, the data are limited to a total of twelve samples: six notes from the trumpet and six notes from the tuba. These musical instrument sample WAV files have been evaluated using MIRtoolbox and Sound Analysis Tools, and a collection of audio features (numerical and graphical) has been recorded and documented. MIRtoolbox is a musical information retrieval toolbox that runs within MATLAB; Sound Analysis Tools is a collection of tools that allows the extraction of audio features and the comparison of samples. I am currently determining how to represent these features as inputs to my neural network. Once this process is complete, implementation of the neural network will begin. Following the basic steps of building a neural network, the inputs and outputs are specified, the number of hidden neurons is selected, the network is trained until its performance is satisfactory, and finally new data are applied to the network. I intend to gather results from the network and identify which features worked best. I also plan to apply my own data, using recordings of my own trumpet playing. Future research aims to culminate in a trained neural network that can detect which instrument is being played.
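
To make the planned pipeline concrete, here is a minimal illustrative sketch (not the author's actual code) of how such a classifier could be put together in MATLAB: it extracts three example features (spectral centroid, brightness, and zero-crossing rate) from each sample with MIRtoolbox and trains a small feedforward classifier (patternnet, from MATLAB's Neural Network Toolbox) to separate trumpet from tuba. The file names are hypothetical placeholders, and the choice of features and hidden-layer size is only for illustration.

% Illustrative sketch only: hypothetical file names, example features.
files = {'trumpet_C4.wav','trumpet_D4.wav','trumpet_E4.wav', ...
         'tuba_C2.wav','tuba_D2.wav','tuba_E2.wav'};

% One-hot targets: row 1 = trumpet, row 2 = tuba.
target = [1 1 1 0 0 0;
          0 0 0 1 1 1];

X = zeros(3, numel(files));                % one feature column per sample
for k = 1:numel(files)
    a = miraudio(files{k});                % load the WAV sample
    X(1,k) = mirgetdata(mircentroid(a));   % spectral centroid
    X(2,k) = mirgetdata(mirbrightness(a)); % brightness
    X(3,k) = mirgetdata(mirzerocross(a));  % zero-crossing rate
end

net  = patternnet(5);                      % one hidden layer, 5 neurons
net  = train(net, X, target);              % train on the feature matrix
pred = vec2ind(net(X));                    % predicted class: 1 = trumpet, 2 = tuba

In practice the feature set and the number of hidden neurons would be revised over several iterations, as described above, and the trained network would then be evaluated on new recordings.
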
References: Fritts, L. (n.d.). Musical Instrument Samples. Retrieved October 13, 2016; Lartillot, O., Toiviainen, P., & Eerola, T. (n.d.). MIRtoolbox. Retrieved October 13, 2016; Sound Analysis Tools for Matlab. (n.d.). Retrieved October 13, 2016.

Funder Acknowledgement(s): Funding was provided by the National Science Foundation (HBCU-UP), Award #HRD-1410586, to David Heise, Lincoln University, Jefferson City, MO.

Faculty Advisor: David Heise, heised@lincolnu.edu

Role: For my research, I collected data using MIRtoolbox and Sound Analysis Tools within MATLAB.
