Principal Investigator's Guide: Managing Evaluation in Informal STEM Education Projects
An initiative of the Visitor Studies Association, produced in partnership with the Center for Advancement of Informal Science Education.
This Guide is designed to help principal investigators and other leaders of informal STEM (Science, Technology, Engineering, and Math) education projects integrate evaluation into all phases of project design, development, and implementation. Such projects include exhibits, media projects, websites, community science projects, afterschool programs, festivals, family activities, online games, citizen science projects, and other efforts to help people learn about science in the course of their everyday lives.
Project evaluation, carefully designed and conducted in collaboration with project developers, can help any informal STEM project improve its deliverables, achieve its goals, and measure the degree to which its targeted objectives have been met. Also, when results are disseminated widely, evaluations can provide critical information for helping projects throughout the STEM education field improve their overall practice.
By design, most of the authors of this guide are not professional evaluators. Rather, they are informal STEM education project leaders who have, through many years of experience, developed effective means of collaborating with evaluators. Also by design, this guide is not a how-to manual on developing and implementing project evaluations. Instead, it explores the use and application of evaluation in informal STEM education projects with the goal of improving partnerships between project teams and their evaluators. References and links to the many valuable resources that do provide information on how to conduct project evaluations are included along the way.
This Guide should prove particularly useful for Principal Investigators who have received funding from any directorate of the National Science Foundation (NSF) that addresses informal STEM education and requires external project evaluation. The Guide will also inform prospective Principal Investigators who are preparing proposals for submission to NSF. However, the Guide is not an NSF policy document. While the authors and editors have done their best to reflect current thinking of NSF staff, all questions about NSF policy should be directed to an appropriate NSF program officer.
While the Guide is rooted in experiences and stories that are drawn mainly from NSF-funded projects, the process of developing a proposal, implementing a project, and working with an evaluator described in its pages should be relevant to practitioners working in most informal education environments regardless of the funding source.
We welcome feedback on the Guide and encourage you to comment on its overall quality, its value to you, missing information, or other ways in which we can improve future editions. Please send feedback to caise@informalscience.org.
About the Editors
Rick Bonney is the director of program development and evaluation at the Cornell Lab of Ornithology, where he has worked since 1983. Some people think he was born there. He is co-founder of the Lab's citizen science program, and since 1991 has been PI, co-PI, consultant, advisor, or evaluator on more than 40 projects funded by the National Science Foundation. As a result he has extensive experience in developing partnerships between practitioners and evaluators to design and execute evaluation plans and disseminate their findings. Rick has been deeply involved in CAISE since its inception and was lead of the CAISE inquiry group that produced the report Public Participation in Scientific Research: Defining the Field. He is also on the board of directors of the Visitor Studies Association and is co-chair of VSA's communications committee. Rick received his BS and MPS degrees from Cornell University's natural resources department.
Kirsten Ellenbogen As co-Principal Investigator of CAISE, Kirsten works in collaboration with the NSF to strengthen and advance the field of informal STEM education. Her work in evaluation and learning research has included service in several positions: founding officer of the Informal Learning Environments Research SIG of the American Educational Research Association; affiliated researcher of the Museum Learning Collaborative; project director at the Center for Informal Learning & Schools, King's College London; senior associate at the Institute for Learning Innovation; and senior director for lifelong learning at the Science Museum of Minnesota. She was appointed to the National Academies committee that produced the book Learning Science in Informal Environments and is past-president of the Visitor Studies Association, a network of professionals committed to understanding and enhancing visitor experience in informal learning settings through research, evaluation, and dialogue. Currently, Kirsten is President of Great Lakes Science Center in Cleveland, OH. Kirsten holds a Ph.D. in Science Education from Vanderbilt University and a B.A. from the University of Chicago.
Leslie Goodyear is passionate about the value that evaluation can bring to program planning, decision making, organizational learning, and our understanding of human endeavors. She holds an MS and PhD in Human Service Studies from Cornell University, where her concentration was in program evaluation and research methods. Leslie's evaluation work has focused on building stakeholder capacity to manage and use evaluations effectively and on helping evaluators communicate their findings in dynamic and credible ways. She has evaluated programs ranging from HIV prevention curricula to services for adoptive families to civic engagement programs for youth to international youth media programs and afterschool initiatives. Most recently her work has focused on STEM education initiatives in both formal and informal settings. From 2009-2012 she served as a program officer in the Division of Research on Learning at the National Science Foundation, where she worked with the Informal Science Education (ISE), Innovative Technologies for Students and Teachers (ITEST), and Promoting Research and Innovation in Methodologies for Evaluation (PRIME) programs. She also contracted and managed evaluation studies for the DRL programs. She has served in leadership positions in the American Evaluation Association and is currently the Ethics Section editor for the American Journal of Evaluation.
Rachel Hellenga develops exhibitions and performs exhibit-specific strategic planning and fundraising. She is a Chicago-based consultant and self-professed "Sam-I-Am" of evaluation, owing to her many positive experiences working with professional evaluators over the course of a twenty-year career in the museum field. She has an insatiable appetite for visitor input, which has been reinforced by the results of integrating evaluation into projects such as the NSF-funded Inventing Lab and Skyline exhibitions at the Chicago Children's Museum, featuring a flying machine tower and construction materials replicated by other museums around the country; and the Science Storms exhibition at the Museum of Science and Industry, winner of the 2011 AAM Excellence in Exhibitions Award and the ASTC 2011 Roy L. Shafer Leading Edge Award. Rachel received her B.A. in psychology from Harvard University, and her particular areas of interest include education research on encouraging persistence; tinkering/making/engineering themes; Reggio-inspired design; bullying prevention; and novel uses of technology in exhibitions.
- Chapter 1: True Stories from Your Peers: The Interplay Between Evaluation and Project Implementation
- Chapter 2: Definitions and Principles: Guiding Ideas for Principal Investigators to Know
- Chapter 3: Choosing an Evaluator: Matching Project Needs with Evaluator Skills and Competencies
- Chapter 4: Working as a Team: Collaboration Through All Phases of Project Development
- Chapter 5: Planning for Success: Supporting the Development of an Evaluation Plan
- Chapter 6: Reporting and Dissemination: Building in Dissemination from the Start