{"id":11636,"date":"2026-02-13T16:49:14","date_gmt":"2026-02-13T16:49:14","guid":{"rendered":"https:\/\/www.birds.cornell.edu\/ccb\/?p=11636"},"modified":"2026-04-02T19:20:33","modified_gmt":"2026-04-02T19:20:33","slug":"getting-started-with-raven-intelligence","status":"publish","type":"post","link":"https:\/\/www.birds.cornell.edu\/ccb\/getting-started-with-raven-intelligence\/","title":{"rendered":"Getting started with Raven Intelligence"},"content":{"rendered":"\n<div class=\"wp-block-group has-sand-background-color has-background is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading\">Installing<\/h2>\n\n\n\n<p>Download the latest version (1.0.5) for your operating system:<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link has-white-background-color has-background wp-element-button\" href=\"https:\/\/updates.ravensoundsoftware.com\/updates\/workbench\/raven_intelligence\/win_x64\/Raven-Intelligence-1.0.5.exe\">Windows<\/a><\/div>\n\n\n\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link has-white-background-color has-background wp-element-button\" href=\"https:\/\/updates.ravensoundsoftware.com\/updates\/workbench\/raven_intelligence\/mac_arm64\/Raven-Intelligence-1.0.5.dmg\">MacOS<\/a><\/div>\n\n\n\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link has-white-background-color has-background wp-element-button\" href=\"https:\/\/updates.ravensoundsoftware.com\/updates\/workbench\/raven_intelligence\/linux_x64\/raven-intelligence_1.0.5-1_amd64.deb\">Linux<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-Part1:Runningamodel\"><strong>Part 1: Running a model<\/strong><\/h2>\n\n\n\n<p>To run a model, simply select the model settings, choose your audio data and postprocessing steps, and click run. 
Part 1 of the documentation provides details on each of these steps.<\/p>\n\n\n\n<p>Note: throughout this documentation we will refer to the&nbsp;<strong>Models folder<\/strong>. This folder is located under the user home directory, under &#8220;Raven Workbench\/Raven Intelligence\/Models&#8221;. For example, on a Windows machine it might be on the C drive at:&nbsp;C:\\Users\\yourusername\\Raven Workbench\\Raven Intelligence\\Models<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-ModelSettings\">Model Settings<\/h3>\n\n\n\n<p><strong>Select .ravenmodel file<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Indicate which model you&#8217;d like to use by selecting its corresponding .ravenmodel configuration file. The dropdown also includes options to add an existing .ravenmodel file or create a new one. To create new .ravenmodel files, see Part 2: Configuring a new model.<\/li>\n\n\n\n<li>Allowed Values: .ravenmodel files located in the Models folder (the dropdown automatically shows all .ravenmodel files in the Models folder)<\/li>\n<\/ul>\n\n\n\n<p><strong>Threshold<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Specify the threshold applied to classification scores. Check the &#8220;skip thresholding&#8221; checkbox to bypass the threshold and show all scores.<\/li>\n\n\n\n<li>Allowed Values: Float (decimal) values greater than 0<\/li>\n<\/ul>\n\n\n\n<p><strong>Segment Duration<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Specify the length of audio segments to feed into a model. Currently, segments do not span across files. If there is a partial segment at the end of the file and it&#8217;s at least half the segment duration, we pad with zeroes and include the segment. 
If it&#8217;s less than half the segment duration, we discard the partial segment.<\/li>\n\n\n\n<li>Allowed Values: Float (decimal) values greater than 0<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1280\" height=\"113\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-41-5-1280x113.png\" alt=\"\" class=\"wp-image-11637 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-41-5-1280x113.png 1280w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-41-5-720x64.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-41-5-768x68.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-41-5-1536x136.png 1536w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-41-5-2048x181.png 2048w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-41-5-480x42.png 480w\" data-sizes=\"(max-width: 1280px) 100vw, 1280px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1280px; --smush-placeholder-aspect-ratio: 1280\/113;\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-AudioData\">Audio Data<\/h3>\n\n\n\n<p>Choose the folder where your audio data are located. 
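<\/p>

<p>The partial-segment rule described above under Segment Duration can be sketched as follows. This is an illustrative Python sketch of the stated rule (hypothetical function name, not Raven Intelligence&#8217;s actual implementation):<\/p>

```python
import numpy as np

def segment_audio(samples, sr, segment_s):
    # Split one file into fixed-length segments; segments never span files.
    seg_len = int(segment_s * sr)
    segments = []
    for start in range(0, len(samples), seg_len):
        chunk = samples[start:start + seg_len]
        if len(chunk) == seg_len:
            segments.append(chunk)
        elif 2 * len(chunk) >= seg_len:
            # partial segment of at least half the duration: zero-pad and keep it
            segments.append(np.pad(chunk, (0, seg_len - len(chunk))))
        # shorter partial segments are discarded
    return segments
```

<p>With a 3-second segment duration, for example, a trailing 1.2-second remainder is discarded, while a 1.8-second remainder is zero-padded to 3 seconds and scored.<\/p>

<p>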
Information about the selected data is shown below for reference.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1280\" height=\"264\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-53-8-1280x264.png\" alt=\"\" class=\"wp-image-11638 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-53-8-1280x264.png 1280w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-53-8-720x148.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-53-8-768x158.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-53-8-1536x317.png 1536w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-53-8-2048x422.png 2048w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-53-8-480x99.png 480w\" data-sizes=\"(max-width: 1280px) 100vw, 1280px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1280px; --smush-placeholder-aspect-ratio: 1280\/264;\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-PostprocessingSteps\">Postprocessing Steps<\/h3>\n\n\n\n<p>Choose what is done with the model outputs. Currently there are only three options &#8211; writing detections to a Raven Workbench table, CSV file, or the debug log. If you want to save the results of your run, you should include either CSV or table export, or both. 
You can select which steps are included by moving them from &#8220;available steps&#8221; into &#8220;included steps&#8221; and also reorder them by dragging.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1280\" height=\"304\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.15.19-AM-1280x304.png\" alt=\"\" class=\"wp-image-11639 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.15.19-AM-1280x304.png 1280w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.15.19-AM-720x171.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.15.19-AM-768x182.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.15.19-AM-1536x364.png 1536w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.15.19-AM-2048x486.png 2048w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.15.19-AM-480x114.png 480w\" data-sizes=\"(max-width: 1280px) 100vw, 1280px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1280px; --smush-placeholder-aspect-ratio: 1280\/304;\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-Run\">Run<\/h3>\n\n\n\n<p>Click Run to start running inference! 
You should see a progress bar showing how many audio segments are completed and how many remain.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1280\" height=\"40\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-56-22-1280x40.png\" alt=\"\" class=\"wp-image-11640 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-56-22-1280x40.png 1280w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-56-22-720x22.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-56-22-768x24.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-56-22-1536x48.png 1536w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-56-22-2048x64.png 2048w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-56-22-480x15.png 480w\" data-sizes=\"(max-width: 1280px) 100vw, 1280px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1280px; --smush-placeholder-aspect-ratio: 1280\/40;\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"612\" height=\"504\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.17.37-AM.png\" alt=\"\" class=\"wp-image-11641 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.17.37-AM.png 612w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Screenshot-2025-11-25-at-11.17.37-AM-480x395.png 480w\" data-sizes=\"(max-width: 612px) 100vw, 612px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 612px; --smush-placeholder-aspect-ratio: 612\/504;\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-Part2:ConfiguringanewModel\"><strong>Part 2: Configuring a new Model<\/strong><\/h2>\n\n\n\n<p>The Model Configuration Wizard is a guided interface that helps you configure Raven Intelligence to work with various detection models. This should only need to be done once for each new model. The output of the wizard is a .ravenmodel file containing the configuration information, which you can share with other users.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1026\" height=\"329\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-57-36.png\" alt=\"\" class=\"wp-image-11642 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-57-36.png 1026w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-57-36-720x231.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-57-36-768x246.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/image-2025-9-12_16-57-36-480x154.png 480w\" data-sizes=\"(max-width: 1026px) 100vw, 1026px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1026px; --smush-placeholder-aspect-ratio: 1026\/329;\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-GeneralComments\">General Comments<\/h3>\n\n\n\n<p>Configuring a new model for Raven Intelligence requires detailed information about the model&#8217;s format, inputs, 
and outputs. This information is generally found in model documentation.<br>.ravenmodel files can be shared between team members, so not everyone has to do the configuration procedure. The .ravenmodel file for BirdNET 2.4 is already included with your install!<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The files for the model you wish to configure should be located in the Models folder. This directory is also where you should save the .ravenmodel files.<\/li>\n\n\n\n<li>The only supported model types are TensorFlow&#8217;s SavedModel format, PyTorch&#8217;s TorchScript format, and ONNX.\n<ul class=\"wp-block-list\">\n<li>If you have a model that doesn&#8217;t work with the existing configuration UI, please let us know! (k.reed@<a href=\"http:\/\/cornell.edu\/\">cornell.edu<\/a>&nbsp;or Kate Reed on Slack!)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-Screen1:CreateNeworEdit\">Screen 1: Create New or Edit<\/h3>\n\n\n\n<p>Start by choosing whether you want to create a new .ravenmodel file or edit an existing one.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"931\" height=\"635\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104358.png\" alt=\"\" class=\"wp-image-11643 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104358.png 931w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104358-720x491.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104358-768x524.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104358-480x327.png 480w\" data-sizes=\"(max-width: 931px) 100vw, 931px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 931px; --smush-placeholder-aspect-ratio: 931\/635;\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"940\" height=\"648\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104432.png\" alt=\"\" class=\"wp-image-11644 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104432.png 940w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104432-720x496.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104432-768x529.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104432-480x331.png 480w\" data-sizes=\"(max-width: 940px) 100vw, 940px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 940px; --smush-placeholder-aspect-ratio: 940\/648;\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-Screen2:GeneralConfiguration\">Screen 2: General Configuration<\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"943\" height=\"648\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104519.png\" alt=\"\" class=\"wp-image-11645 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104519.png 943w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104519-720x495.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104519-768x528.png 
768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912104519-480x330.png 480w\" data-sizes=\"(max-width: 943px) 100vw, 943px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 943px; --smush-placeholder-aspect-ratio: 943\/648;\" \/><\/figure>\n\n\n\n<p><strong>Model Directory<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Specifies the directory that contains your model files.<\/li>\n\n\n\n<li>Allowed Values: This directory must be under the Models folder<\/li>\n\n\n\n<li>Autofill: None<\/li>\n<\/ul>\n\n\n\n<p><strong>Model Type<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Indicates which ML framework your model uses.<\/li>\n\n\n\n<li>Allowed Values: TensorFlow, PyTorch, or ONNX (Note: we only support the SavedModel format for TensorFlow and the TorchScript format for PyTorch).<\/li>\n\n\n\n<li>Autofill: When you choose a model directory, the framework is autofilled based on the files in the directory.<\/li>\n<\/ul>\n\n\n\n<p><strong><em>Sample Rate<br><\/em>Min Rate:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Minimum sample rate for input signals (measured in Hz).<\/li>\n\n\n\n<li>Allowed Values: Integer greater than zero.<\/li>\n\n\n\n<li>Autofill: None<\/li>\n<\/ul>\n\n\n\n<p><strong>Max Rate:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Maximum allowable sample rate for input signals.<\/li>\n\n\n\n<li>Allowed Values: Integer greater than or equal to the minimum rate.<\/li>\n\n\n\n<li>Autofill: None<\/li>\n<\/ul>\n\n\n\n<p><strong><em>Segment Duration<br><\/em>Min Duration:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Specifies the minimum length (in seconds) of training samples.<\/li>\n\n\n\n<li>Allowed Values: Integer greater than zero.<\/li>\n\n\n\n<li>Autofill: 
None<\/li>\n<\/ul>\n\n\n\n<p><strong>Max Duration:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Specifies the maximum length (in seconds) of training samples.<\/li>\n\n\n\n<li>Allowed Values: Integer greater than or equal to the minimum duration.<\/li>\n\n\n\n<li>Autofill: None<\/li>\n<\/ul>\n\n\n\n<p><strong>Labels File<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Path to the file containing the labels for the model. There must be exactly one label per line with no additional header info or blank lines.<\/li>\n\n\n\n<li>Allowed Values: Path to a valid .txt or .csv file.<\/li>\n\n\n\n<li>Autofill: When you choose a model directory, Intelligence will check for .txt and .csv files within the directory and ask if any of them is your labels file.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"478\" height=\"868\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912111245.png\" alt=\"\" class=\"wp-image-11646 lazyload\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 478px; --smush-placeholder-aspect-ratio: 478\/868;\" \/><\/figure>\n\n\n\n<p><strong>Number of Labels<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Configures the number of unique labels\/classes in the dataset.<\/li>\n\n\n\n<li>Allowed Values: Must match the number of lines in the labels file.<\/li>\n\n\n\n<li>Autofill: When a labels file is chosen, autofilled as the number of lines in that file.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-Screens3-5:TensorFlow-SpecificConfiguration\">Screens 3-5: TensorFlow-Specific Configuration<\/h3>\n\n\n\n<p>If you don&#8217;t have a TensorFlow model, the wizard will skip these screens and proceed to Screen 6.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" 
id=\"RavenIntelligenceDocumentation-SignatureSelection\">Signature Selection<\/h4>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"940\" height=\"650\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912111626.png\" alt=\"\" class=\"wp-image-11647 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912111626.png 940w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912111626-720x498.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912111626-768x531.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912111626-480x332.png 480w\" data-sizes=\"(max-width: 940px) 100vw, 940px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 940px; --smush-placeholder-aspect-ratio: 940\/650;\" \/><\/figure>\n\n\n\n<p>The model signature defines the data type and dimension of the input and output data of the model. Some models provide more than one signature. On this screen, use the dropdown to select the signature that contains classification scores. It may be called something like &#8220;basic&#8221;, &#8220;scores&#8221;, or &#8220;labels&#8221;. 
It should have at least one output tensor with a last dimension equal to the number of labels (e.g., 6522).<\/p>\n\n\n\n<p>The text area below the dropdown provides some metadata information for each signature to help you choose.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-ModelInputs\">Model Inputs<\/h4>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"945\" height=\"780\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912112133.png\" alt=\"\" class=\"wp-image-11648 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912112133.png 945w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912112133-720x594.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912112133-768x634.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912112133-480x396.png 480w\" data-sizes=\"(max-width: 945px) 100vw, 945px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 945px; --smush-placeholder-aspect-ratio: 945\/780;\" \/><\/figure>\n\n\n\n<p>On this screen, check the configuration of the input tensors defined by the model signature. If your model only takes one tensor of audio data, all info on this screen should be automatically filled. 
Otherwise, you may need to manually input some values.<\/p>\n\n\n\n<p>Note: The only input tensors we currently support are audio data or scalar values, which are sometimes included in signatures to provide additional parameters.<\/p>\n\n\n\n<p><strong>Tensor Name<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Specifies the name identifier for the input tensor.<\/li>\n\n\n\n<li>Allowed Value: Must match the tensor name defined in the signature.<\/li>\n\n\n\n<li>Autofill: Pre-populated from the selected model signature. You should not change this unless it&#8217;s autofilled incorrectly.<\/li>\n<\/ul>\n\n\n\n<p><strong>Dimensions<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Defines the shape and size of the input tensor.<\/li>\n\n\n\n<li>Allowed Values: Any number of dimensions from 0 to 5 (0 indicates a scalar). Can be edited with the &#8220;Edit&#8221; button.<\/li>\n\n\n\n<li>Autofill: Pre-populated from the selected model signature. You should not change this unless it&#8217;s autofilled incorrectly.<\/li>\n<\/ul>\n\n\n\n<p><strong>Data Type<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Specifies the data type expected by the input tensor.<\/li>\n\n\n\n<li>Allowed Values: INT32, INT64, FLOAT32, FLOAT64, STRING.<\/li>\n\n\n\n<li>Autofill: Pre-populated from the selected model signature. 
You should not change this unless it&#8217;s autofilled incorrectly.<\/li>\n<\/ul>\n\n\n\n<p><strong>Contents<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Purpose: Indicates the contents of the tensor (audio data or a scalar), determined automatically from the dimensions (empty dimensions indicate a scalar, non-empty dimensions indicate audio).<\/li>\n\n\n\n<li>Allowed Values: Descriptive text (e.g., &#8220;Audio Data&#8221;) or a text field to input the needed scalar value.<\/li>\n\n\n\n<li>Autofill: None<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-ModelOutputs\">Model Outputs<\/h4>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"935\" height=\"820\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120458.png\" alt=\"\" class=\"wp-image-11649 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120458.png 935w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120458-720x631.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120458-768x674.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120458-480x421.png 480w\" data-sizes=\"(max-width: 935px) 100vw, 935px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 935px; --smush-placeholder-aspect-ratio: 935\/820;\" \/><\/figure>\n\n\n\n<p>This screen works like the previous Model Inputs screen, with one addition: a dropdown at the top where you select the tensor that contains the classification scores. 
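<\/p>

<p>In other words, the score tensor is whichever output has a final dimension matching the label count. A minimal sketch of that check, using hypothetical tensor names and the example label count of 6522:<\/p>

```python
import numpy as np

NUM_LABELS = 6522  # number of lines in the labels file

# hypothetical model outputs: tensor name -> array (names are illustrative)
outputs = {
    'embeddings': np.zeros((1, 1024)),
    'scores': np.zeros((1, NUM_LABELS)),
}

# candidate score tensors: last dimension equals the number of labels
candidates = [name for name, t in outputs.items() if t.shape[-1] == NUM_LABELS]
```

<p>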
It may be called something like &#8220;scores&#8221; or &#8220;classes&#8221; and should have a final dimension equal to the number of labels.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"RavenIntelligenceDocumentation-Screen6:Save\">Screen 6: Save<\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"934\" height=\"810\" data-src=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120624.png\" alt=\"\" class=\"wp-image-11650 lazyload\" data-srcset=\"https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120624.png 934w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120624-720x624.png 720w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120624-768x666.png 768w, https:\/\/www.birds.cornell.edu\/ccb\/wp-content\/uploads\/2026\/02\/Pasted-image-20250912120624-480x416.png 480w\" data-sizes=\"(max-width: 934px) 100vw, 934px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 934px; --smush-placeholder-aspect-ratio: 934\/810;\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Save your .ravenmodel file inside the Models directory, and you&#8217;re done!<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Installing Download the latest version (1.0.5) for your operating system: Part 1: Running a model To run a model, simply select the model settings, choose your audio data and postprocessing steps, and click run. Part 1 of the documentation provides details on each of these steps. 
Note: throughout this documentation we will refer to the&nbsp;Models<a class=\"excerpt-read-more\" href=\"https:\/\/www.birds.cornell.edu\/ccb\/getting-started-with-raven-intelligence\/\" title=\"ReadGetting started with Raven Intelligence\">&#8230; Read more &raquo;<\/a><\/p>\n","protected":false},"author":11,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_birdpress_hero_toggle":false,"_birdpress_hero_type":"image","_birdpress_hero_image_type":"image","_birdpress_hero_style":"default","_birdpress_hero_ratio":"","_birdpress_hero_h1":"","_birdpress_hero_media_id":0,"_birdpress_hero_media_array_id":[],"_birdpress_hero_media_array":[],"_birdpress_hero_media":0,"_birdpress_hero_video_id":0,"_birdpress_hero_video":0,"_birdpress_hero_youtube":"","_birdpress_hero_content":true,"_birdpress_hero_byline":"","_birdpress_hero_byline_bottom":"","_birdpress_hero_button_link":"","_birdpress_hero_button_text":"","_birdpress_hero_button_color":"","_birdpress_hero_date":false,"original_guid":"","_birdpress_hide_search":false,"_birdpress_page_width":"","_birdpress_global_cta":false,"_birdpress_widget_sidebar":"","_birdpress_next_article":0,"_birdpress_next_article_title":"","_birdpress_prev_article":0,"_birdpress_prev_article_title":"","_birdpress_sub_navigation_id":96,"_birdpress_sub_navigation":"Raven Knowledge 
Base","_birdpress_sub_navigation_title":true,"_birdpress_anchor_navigation_id":0,"_birdpress_anchor_navigation":"","_birdpress_postType":"both","_birdpress_categoryID":0,"_birdpress_tagID":0,"_birdpress_parentPostID":0,"_birdpress_parentPostTitle":"","_birdpress_menuID":0,"_birdpress_menuName":"","_birdpress_listHeader":"","_birdpress_listLayout":"card-display","_birdpress_listColumns":"","_birdpress_maxItems":12,"_birdpress_listPaginate":true,"_birdpress_displaySort":true,"_birdpress_sortOrder":"DESC","_birdpress_sortBy":"date","_birdpress_listID":"","_birdpress_listClass":"","_birdpress_displayImages":true,"_birdpress_displayCaptions":false,"_birdpress_displayExcerpts":false,"_birdpress_attTop":"","_birdpress_attBottom":"","_birdpress_showLogos":false,"_birdpress_post_logo":0,"footnotes":""},"categories":[1],"tags":[],"content-format":[],"class_list":["post-11636","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/posts\/11636","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/comments?post=11636"}],"version-history":[{"count":3,"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/posts\/11636\/revisions"}],"predecessor-version":[{"id":11751,"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/posts\/11636\/revisions\/11751"}],"wp:attachment":[{"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/media?parent=11636"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/categories?post=11636"},{"taxonomy":
"post_tag","embeddable":true,"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/tags?post=11636"},{"taxonomy":"content-format","embeddable":true,"href":"https:\/\/www.birds.cornell.edu\/ccb\/wp-json\/wp\/v2\/content-format?post=11636"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}