
Cornell Lab of Ornithology


Measure Effects: Resources

 

There are many resources on program and project evaluation. Here we have compiled a sample list of some of the available resources, divided into three categories:

Evaluation Guides

Data Collection Methodologies

Evaluation Theory & Discussion


EVALUATION GUIDES


The 2002 User-Friendly Handbook for Project Evaluation (PDF)* The National Science Foundation. Directorate for Education & Human Resources, Division of Research, Evaluation, and Communication. (January 2002).

This handbook describes the evaluation process in four sections. Section I provides background on evaluation and describes the different kinds of evaluation. Section II gives an overview of the steps in performing an evaluation, from getting started to disseminating the information collected. Section III focuses on quantitative and qualitative data collection methods. Section IV discusses strategies for culturally responsive evaluation.

*2010 edition now available (PDF)

 

Basic Guide to Program Evaluation. Carter McNamara, Authenticity Consulting, LLC.

This document provides information on planning and implementing an outcomes-based evaluation for nonprofit and for-profit organizations. It includes an overview of the evaluation process, including the steps necessary to complete an evaluation, methodologies used in the process, and how to analyze and report the results.

Key Evaluation Checklist (PDF). Michael Scriven. (February 2007). Evaluation Checklists Project.

This document provides a list of key terms along with a definition and a detailed explanation of their role in the evaluation process. It is divided into four parts, including “Preliminaries”, “Foundations”, “Subevaluations”, and “Conclusions & Implications”.

 

Performance Measurement and Evaluation: Definitions and Relationships (PDF). United States General Accounting Office. (April 1998).

This document provides a concise glossary of terms related to performance evaluation.

 

CIPP Evaluation Model Checklist. Prepared by Daniel L. Stufflebeam. (March, 2007). Evaluation Checklists Project. 

The CIPP Evaluation Model comprises ten checklists for guiding different types of evaluations. For example, the checklists cover evaluations of input, impact, sustainability, and effectiveness, among others.

 

Guideline for Conducting a Knowledge, Attitude and Practice (KAP) Study (PDF). K. Kaliyaperumal, I.E.C. Expert, Diabetic Retinopathy Project. (Jan-Mar 2004). AECS Illumination, Vol. 4 (1).

This guide provides information on how to conduct a Knowledge, Attitude, and Practice (KAP) study. Included is a set of guidelines detailing the steps required to conduct the study, covering background information on the KAP study, preparation of the questionnaires, selection of the sample group, the process of conducting the study, and analysis of the data.

 

Evaluator Competencies for Professional Development (PDF). Visitor Studies Association.

This document provides a list and explanation of the competencies necessary for visitor studies evaluations and visitor studies professional development. It is specifically geared toward those who conduct evaluations for visitor studies programs and projects, and professionals who work in the visitor studies field.

 

Communicating Climate Change Evaluation Webinar (PDF) David Heil & Associates, Inc. 4614 SW Kelly Avenue, Suite 100, Portland, Oregon 97239

This webinar provides an annotated background of impact evaluation for Informal Science Education (ISE) programs and the development of a C3 (Communicating Climate Change) Evaluation Manual. It offers a "how to" for those looking to evaluate ISE projects, as well as advice on survey methods.

 

My Environmental Education Evaluation Resource Assistant (MEERA)

"...an on-line evaluation resource for environmental educators, offering step-by-step evaluation guidance, sample environmental education evaluations, and links to other evaluation resources for those with novice to advanced evaluation experiences.  An evaluation of MEERA showed that the site can help environmental educators with limited evaluation experience complete evaluations that benefit their programs.."

 

Best Practices Guide to Program Evaluation For Aquatic Educators (PDF) Lead Author: Susan Marynowski. Contributing Authors: Christine Denny and Peter Colverson. Editor: Karen Hill. Layout/design: Christine Denny. Pandion Systems, Inc. Gainesville, Florida. www.pandionsystems.com

This guide gives a detailed overview of the evaluation process specific to aquatic educators. However, it could be used as a tool for any educational evaluation. The authors cover the entire life cycle of the evaluation, from “Creating a Climate for Evaluation” to “Creating Useful Results from the Data”. There is also a comprehensive set of tools listed to assist those looking to execute an evaluation.  

 

Measuring Progress: An Evaluation Guide for Ecosystem and Community-Based Projects (PDF) Management Initiative, University of Michigan, Ann Arbor, MI. (2004). Measuring Progress VERSION 3.0.

Worksheets and templates covering everything from logic model development and choosing indicators to collecting data and acting on results.

 

Evaluation Sourcebook: Measures of Progress for Ecosystem & Community-Based Projects (PDF) Schueller, S.K., S.L. Yaffee, S. J. Higgs, K. Mogelgaard and E. A. DeMattia. (2006). Management Initiative, University of Michigan, Ann Arbor, MI.

This sourcebook is tailored specifically for evaluating ecosystem and community-based projects. It is not a book to be read cover-to-cover, but rather a tool you can adapt to your specific program. It is divided into four sections: objectives (including ecological concerns and species viability), threats (to ecosystems, or to the project as a whole), assets (positive circumstances or opportunities), and strategies to achieve your objectives.

 

Evaluating Environmental Education in Schools (PDF) Prepared by Dean B. Bennett. (1989). UNESCO-UNEP International Environmental Education Programme: Division of Science, Technical, and Environmental Education. Environmental Educational Series 12.

This guide is specifically for educators who want to evaluate environmental education programs in schools. The author provides background on evaluation and then describes his recommended four steps in the evaluation process (deciding what to evaluate, planning how to do it, carrying it out, and using the results). This is a detailed guide with checklists for each of the steps, as well as suggested schedules for various parts of the evaluation process.

 

Evaluating Environmental Education (PDF) Stokking, K., van Aert, L., Meijberg,W., Kaskens, A., (1999). IUCN, Gland, Switzerland and Cambridge, UK. x + 134 p.

This is a handbook designed to provide a thorough overview of program evaluation, specifically aimed at environmental education programs. This handbook covers the purpose of evaluation, how evaluation is introduced as a regular activity for organizations, and then outlines the 13 steps behind the evaluation process that the authors propose.  The appendices provide examples of the design of instruments and models for the reader to use.

 

Measuring the Success of Environmental Education Programs (PDF) Thomson, Gareth and Jenn Hoffman. Canadian Parks and Wilderness Society, Calgary-Banff Chapter.

This report outlines and describes educational evaluation methodologies and tools. The authors describe a program logic model and an evaluation scheme by using illustrations, graphs, and charts from existing environmental education programs.

 

Does Your Project Make a Difference: A guide to evaluating environmental education projects and programs (PDF) The Department of Environment and Conservation (NSW). (2004). Sydney, NSW.    

This guide provides a description of the evaluation process for environmental education projects and programs. It is a beginner guide that includes a basic overview of the process to start, and then moves through each step of the evaluation process.

 

Framework for Evaluating Impacts of Informal Science Education Projects (PDF). Friedman, A. (Ed.). (March 12, 2008).

This guide is a framework for conducting evaluations of informal science education projects. It includes advice for current evaluation techniques specific to informal science education programs, as well as references to case studies, tools, and instruments.

 

Measuring and Evaluating Stewardship and Innovation Programs: Learning From the PART PHASE I REPORT. A Review of Federal Agency (non-EPA) Performance Measures for Stewardship and Innovation (PDF) Prepared by: Industrial Economics, Incorporated, Dr. Shelley Metzenbaum, Ross & Associates Environmental Consulting, Ltd. (December, 2006).

This report evaluates US Government stewardship programs and their effectiveness. It is most useful for government agencies and is not a tool or framework for evaluation.

 

Measuring and Evaluating Stewardship and Innovation Programs: Learning From the PART: A Review (PDF) National Center for Environmental Innovation. (September 2007).

This report provides an overview of the findings from the "Measuring and Evaluating Stewardship and Innovation Programs: Learning From the PART Phase I Report," as well as recommendations for stewardship and innovation programs.

 

Guidelines for Evaluating Nonprofit Communications Efforts (PDF) Communications Consortium Media Center’s Media Evaluation Project. (April 2004). Washington D.C.

This guide provides a set of evaluation guidelines for nonprofit organizations and foundations that wish to assess their investments in communications strategies and the impacts of their investments. This comprehensive guide offers practical strategies for the evaluation process, a list of the vital elements of a communications strategy, and definitions of communication-related theories and concepts.

 

Building Capacity in Evaluating Outcomes: A teaching and facilitating resource for community-based programs and organizations. University of Wisconsin-Extension, Cooperative Extension (2008). Madison, WI: UW- Extension, Program Development and Evaluation.

This guide provides numerous activities and materials for those working in community-based organizations who wish to build the capacity of individuals, groups, and organizations during evaluations. The document offers tools, instruments, and practical activities that can be used in evaluating community-based activities, and includes a facilitator's guide with strategies for evaluating an individual's work. It also offers example timelines, management plans, and budget worksheets.

 

Designing Evaluation for Education Projects (PDF) Office of Education and Sustainable Development, NOAA.

This guide covers the basics of evaluation and discusses the numerous types of evaluations, the advantages and disadvantages of internal versus external evaluation, ways of collecting data, and the ethical considerations of gathering data from evaluation participants. Especially notable in this guide is an in-depth discussion of the various types of evaluation.

 

An evaluation of NPASS- National Partnerships for Afterschool Science: Year 3 Final Report (PDF) Prepared by Peggy Vaughan, Colleen Manning, Miriam Kochman, and Irene Goodman. (March 2009). Goodman Research Group, Inc.

This evaluation report is focused on determining whether science centers and 4-H centers could effectively lead training programs for after-school community-based staff. It contains the instruments used in the evaluation process, including all the survey forms, activity logs, feedback forms, and interview protocols.

 

Measurement Tools for Evaluating Out-of-School Time Programs: An Evaluation Resource (PDF) Christopher Wimer, Suzanne Bouffard, and Priscilla Little. (2008). Harvard Family Research Project- Harvard Graduate School of Education. Cambridge, MA.

This guide describes tools and instruments that can be used to conduct a large variety of program and project evaluations. Included in this document are sample checklists of program components, survey questions, assessments of academic skills, and more. 

 

Framework for program evaluation in public health (PDF) Centers for Disease Control and Prevention. MMWR. (1999). 48(No.RR-11):1-42.

This guide combines a step-by-step process with standards for effective evaluation practice. A template is provided for designing evaluations for many different program types, and each step is broken down into a definition, its role, and the activities involved throughout the evaluation process.

 

Shaping Outcomes

"Provides a free, online curriculum in outcomes-based planning and evaluation"

 

Outcome Measurement Resource Network (United Way)

"...offers information, downloadable documents, and links to resources related to the identification and measurement of program- and community-level outcomes."

 

Online Evaluation Resource Library

"...includes professional development modules that can be used to better understand and utilize the [other copious] materials made available."


DATA COLLECTION METHODOLOGIES

Collecting Evaluation Data: An Overview of Sources and Methods. Ellen Taylor-Powell, Sara Steele. (June 1996). Cooperative Extension Publications: Madison, WI.

This document is an overview of the evaluation process aimed at those in extension service professions, though it is applicable to any profession or program wishing to conduct an evaluation. It is a good beginner tool for people wishing to learn more about what the evaluation process entails.

 

Collecting Evaluation Data: Surveys. Ellen Taylor-Powell, Carol Hermann. (May 2000). Cooperative Extension Publications: Madison, WI.

This is an article for community-based educators designed to serve as a guide for formulating surveys during the evaluation process. It discusses numerous types of survey methods, their applicability, and the survey process. Included is a useful Example Management Planning Worksheet, as well as other practical tools for educators and beginner evaluators.

 

Whitepaper: 7 Habits of Highly Successful Surveys. Vovici.

 This document provides a quick guide to completing successful surveys. 

 

Performance Monitoring and Evaluation TIPS: Selecting Performance Indicators (PDF) USAID Center for Development Information and Evaluation. 1996, Number 6.

This article is aimed at those seeking help with their evaluations. It provides a guide to choosing specific indicators for measuring how well a program is achieving its objectives.

 

Performance Monitoring and Evaluation TIPS: Conducting Key Informant Interviews (PDF) USAID Center for Development Information and Evaluation. (1996). Number 2.

This article focuses specifically on conducting key informant interviews during the data collection phase of an evaluation. It is a practical guide that provides recommendations for every phase of the interview process, and also lists the advantages and disadvantages of conducting key informant interviews.

 

Questionnaire Design: Asking questions with a purpose. Ellen Taylor-Powell. (May 1998). Cooperative Extension Publications: Madison, WI.

This guide provides an in-depth look at the process of constructing, formatting, and implementing questionnaires for evaluation or research purposes.  


Sampling. Ellen Taylor-Powell. (1996). Cooperative Extension Publications: Madison, WI.

This is a guide to sampling as a data collection method. It includes detailed descriptions of both probability and nonprobability sampling techniques.

 

Quick Tips: Focus Group Interviews. University of Wisconsin-Extension, Cooperative Extension (2002). Madison, WI: UW- Extension, Program Development and Evaluation.

This document is a quick guide to preparing and conducting focus group discussions. 

 

Collecting Evaluation Data: Direct Observation. Ellen Taylor-Powell, Sara Steele. (1996). Cooperative Extension Publications: Madison, WI.

This article serves as a guide for evaluators wanting to use direct observation data collection techniques during their evaluation. It provides descriptions of the direct observation technique, as well as a “how to” guide and checklists for incorporating observation research techniques into evaluation.

 

Ways to make your evaluations more culturally sensitive. Prepared by Ellen Taylor-Powell, Evaluation Specialist, from Preskill, H., & Russ-Eft, D. (2005).  

This is a short numbered list of 15 concepts for making program evaluations more culturally sensitive.

 

Culturally Appropriate Data Collection Methods: How can we be respectful and culturally sensitive when collecting information? (2009). University of Wisconsin-Extension, Cooperative Extension, Program Development and Evaluation.

This is a PowerPoint presentation designed to inform evaluators on how to collect evaluation data while maintaining cultural sensitivity. It does not offer tools or guides for the evaluation, but rather a list of issues to consider in producing a culturally appropriate evaluation.

 

Chapter 4: Research Ethics. R. Burke Johnson. 

This is a lecture from R. Burke Johnson on research ethics. It provides information on ethical guidelines for research with human subjects, including ethical concerns, creating an informed consent form, the institutional review board, and other topics for researchers.

 

RWJF Research Primer

"...an orientation guide to some handbooks and basic primers on program evaluation directed toward the nonexpert, explaining some of the central issues in evaluation and why they are important."

 

Building Credibility: Quality Assurance and Quality Control... Volunteer Water Quality Monitoring National Facilitation Project

"This factsheet provides an overview of quality assurance and quality control issues and provides examples of methods used..."


EVALUATION THEORY AND DISCUSSION


Evaluating and Improving the Project  Broadening Participation in Biological Monitoring: Guidelines...

See Evaluation in the left navigation bar: "Any action plan requires periodic evaluation and revision to remain relevant to, and effective at, meeting its stated goals."


Framework for Evaluating Impacts of Informal Science Education Projects National Science Foundation

"...provide overviews of impact evaluation and a look at some of the common issues, concerns, and opportunities in evaluation practice."

 

Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education  NSF's Center for the Advancement of Informal Science Education (CAISE)

"...describes how Public Participation in Scientific Research (PPSR), in the context of informal science education (ISE), can provide multiple opportunities to increase public science literacy." This page is an overview, with links to download the Executive Summary or Full Report.

 

A fundamental choice: internal or external evaluation (PDF) Conley-Tyler, Melissa. (March/April 2005). Evaluation Journal of Australasia, Vol. 4 (new series), Nos. 1 & 2, pp. 3–11.

This journal article discusses the trade-offs between hiring an external evaluator and having someone within the organization conduct the evaluation. A set of guidelines is included to help an organization choose between internal and external evaluation; the guidelines take into consideration cost, knowledge, flexibility, objectivity, accountability, willingness to criticize, ethics, and utilization of results.

 

FOOTPRINTS: Strategies for Non-Traditional Program Evaluation (PDF) Edited by Joy A Frechtling and Westat, Inc. (January 1995). The National Science Foundation. 

This edited volume collects papers and discussions presented at a conference on program evaluation. The papers propose new ideas and methodologies that inform the design of evaluations for projects and programs.

 

Indicators and Information Systems for Sustainable Development (PDF) Donella Meadows. (1998). The Sustainability Institute. Hartland, VT.

This report provides a theoretical discussion of indicators and the indicator selection process for evaluating sustainable development projects. It also includes a guide for implementing, monitoring, testing, evaluating, and improving indicators.

 

Toward a Systematic Evidence-Base for Science in Out-of-School Time: The Role of Assessment (PDF) Hussar, Karen; Sarah Schwartz; Ellen Boiselle; and Gil Noam. (August 2008). Program in Education, Afterschool, & Resiliency. Harvard University and McLean Hospital.

This report reviews the current state and needs of after-school science assessment. A few practical guidelines are scattered throughout, but it is mostly a theoretical review of the literature on evaluating after-school programs.

 

  


Know of other resources for this step? Please send them to be posted!


 


Citizen science, volunteer monitoring, participatory action research... this site supports organizers of all initiatives where public participants are involved in scientific research.
