Stefan Kahl, Ph.D.


How can computers learn to recognize birds from their sounds? As the Lead for BirdNET Technology within the K. Lisa Yang Center for Conservation Bioacoustics, I am trying to answer this question. My research focuses on the detection and classification of avian sounds using machine learning. Automated observation of avian vocal activity and species diversity can be a transformative tool for ornithologists, conservation biologists, and birdwatchers, assisting in the long-term monitoring of critical environmental niches.

With a background in computer vision and deep learning, I develop new methods for processing large collections of environmental sound recordings. After completing my master’s degree in Applied Computer Science in 2014, I became a research assistant at Chemnitz University of Technology, Germany, where I worked on research projects covering human-computer and human-robot interaction, multimodal media retrieval, and mobile application development.

I joined the Yang Center in 2019, continuing my work on a bird sound recognition system called BirdNET. My goal is to assist experts and citizen scientists in monitoring and protecting birds by developing a wide range of applications, including smartphone apps, public demonstrators, web interfaces, and robust analysis frameworks.

Year Hired: 2019

Contact Information
K. Lisa Yang Center for Conservation Bioacoustics
Cornell Lab of Ornithology
159 Sapsucker Woods Road, Ithaca, NY 14850, USA.

Social Media: Google Scholar, GitHub, Twitter: @kahst

Ph.D. Deep learning for bioacoustics, Chemnitz University of Technology, Germany, in progress
M.Sc. Applied Computer Science, Chemnitz University of Technology, Germany, 2013
B.Sc. Applied Computer Science, Chemnitz University of Technology, Germany, 2011

Recent Publications

Kelly, K.G. et al. (2023) ‘Estimating population size for California spotted owls and barred owls across the Sierra Nevada ecosystem with bioacoustics’, Ecological Indicators, 154, p. 110851.
McGinn, K. et al. (2023) ‘Feature embeddings from the BirdNET algorithm provide insights into avian ecology’, Ecological Informatics, 74, p. 101995.
Brunk, K.M. et al. (2023) ‘Quail on fire: changing fire regimes may benefit mountain quail in fire-adapted forests’, Fire Ecology, 19(1), p. 19.
Kahl, S. et al. (2022) ‘Overview of BirdCLEF 2022: Endangered bird species recognition in soundscape recordings’, in CLEF 2022 - Conference and Labs of the Evaluation Forum, p. 1929.
Joly, A. et al. (2022) ‘Overview of LifeCLEF 2022: an evaluation of Machine-Learning based Species Identification and Species Distribution Prediction’, CLEF 2022 - Conference and Labs of the Evaluation Forum [Preprint].
Wood, C.M. et al. (2022) ‘The machine learning–powered BirdNET App reduces barriers to global bird research by enabling citizen science participation’, PLOS Biology, 20(6), p. e3001670.
Kahl, S. et al. (2021) ‘Overview of BirdCLEF 2021: Bird call identification in soundscape recordings’, CLEF 2021 [Preprint].
Wood, C.M. et al. (2021) ‘Survey coverage, recording duration and community composition affect observed species richness in passive acoustic surveys’, Methods in Ecology and Evolution [Preprint].
Kahl, S. et al. (2021) ‘BirdNET: A deep learning solution for avian diversity monitoring’, Ecological Informatics [Preprint].
Kahl, S. et al. (2018) ‘Recognizing Birds from Sound - The 2018 BirdCLEF Baseline System’, Computer Vision and Pattern Recognition [Preprint].
Kahl, S. et al. (2017) ‘Large-Scale Bird Sound Classification using Convolutional Neural Networks’, LifeCLEF 2017, Dublin, Ireland, 12-13 September 2017.