Evaluation Capacity Building

Evaluation produces evidence that is critical to improving our work, driving innovation, and making the case for the outcomes and impacts of informal STEM education (ISE). Many complexities are inherent in evaluating free-choice informal STEM learning settings and experiences. Evaluators working in these environments address those complexities by drawing on many different disciplines, including developmental psychology, classroom-based assessment, and health education evaluation. Yet challenges remain, and are perhaps growing, in this era of increasing accountability.

The Center for Advancement of Informal Science Education held a convening on June 20-21, 2013, designed to facilitate discussion about the resources needed to improve the quality of evaluation in ISE. Participants included evaluators currently practicing in the field, as well as those working in other disciplines; learning researchers; experience and setting designers; organizational leaders; and program officers from the National Science Foundation (NSF), other federal funding agencies, and private philanthropic foundations. The context-setting for the convening included research and development frameworks recently introduced at the federal level, including the National Research Council’s Science, Technology and Innovation Indicators and a preview of the since-released Common Guidelines for Education Research and Development, developed by the National Science Foundation and the U.S. Department of Education. In a pre-meeting online forum, participants from the larger community began to identify critical needs in the practice of ISE evaluation and made suggestions for resources, training, and other supports to advance the profession. By design, however, this convening raised critical questions for the field rather than making definitive recommendations. The archive of this forum is available upon request: caise@informalscience.org

A summary of the meeting is available in our repository.

A group of inspired convening participants proposed and received funding to implement a smaller follow-up meeting on the idea of sharing measures across programs and projects. The synthesis of that event is also available in our repository.


Pre-Convening Thought Papers