Learn More About Evaluation
The field of evaluation within informal science education has grown and evolved over the years. How can you learn more about the current state of informal science education evaluation and where the field is headed?
Several professional associations provide opportunities for evaluators to learn more about evaluation in the informal science education field and share their own work.
Visitor Studies Association (VSA): The Visitor Studies Association is a membership organization dedicated to understanding and enhancing learning experiences in informal settings through research, evaluation, and dialogue.
American Evaluation Association (AEA): The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. The AEA also hosts the AEA Public Library where members of the community post instruments, presentations, reports, and more. AEA has a number of topical interest groups (TIGs) that may be of particular interest to informal science education evaluators.
- Art, Culture, and Audiences TIG
- Environmental Program Evaluation TIG
- STEM Education and Training TIG
- Youth Focused Evaluation TIG
Committee on Audience Research and Evaluation (CARE): The Committee on Audience Research and Evaluation (CARE) of the American Alliance of Museums provides a forum for museum professionals who believe that understanding the visitor is an essential part of museum planning and operation, and disseminates information about systematic research and evaluation pertaining to museum audiences.
Association of Science and Technology Centers (ASTC): The Association of Science and Technology Centers hosts the Research & Evaluation Community of Practice, which provides a forum through which institutions can discuss questions and challenges about evaluation, and receive support and resources from peer institutions and experienced professional evaluators.
American Educational Research Association (AERA): The American Educational Research Association (AERA) is an international professional organization whose primary goal is advancing educational research and its practical application. AERA seeks to improve the educational process by encouraging scholarly inquiry related to education and evaluation and by promoting the dissemination and practical application of research results. It has a special interest group focused on research on evaluation.
A wide range of professional journals is available for learning about evaluation theory, methods, practice, and research.
Visitor Studies: Theory, Research, and Practice is the peer-reviewed research journal of the Visitor Studies Association. Appearing bi-annually, Visitor Studies publishes articles focusing on visitor research, visitor studies, evaluation studies, and research methodologies. The journal also covers subjects related to museums and out-of-school learning environments, such as zoos, nature centers, visitor centers, historic sites, parks, and other informal learning settings.
Journal of Museum Education 40.1: Empowering Museum Educators to Evaluate is a guest-edited issue of the Journal of Museum Education that gives museum educators, and those who support them, practical evaluation tools and techniques to maximize evaluation efforts. From building staff capacity, to developing standardized evaluation methods, to communicating results, this issue can serve as a real-world guide and inspiration for those in the field who need to demonstrate the impact of their work.
American Journal of Evaluation explores the complex and difficult challenges related to conducting evaluations. From choosing program theories to implementing an evaluation to presenting the final report to managing an evaluation's consequences, AJE offers original, peer-reviewed, often highly cited articles about the methods, theory, and practice of evaluation.
Evaluation and Program Planning is based on the principle that the techniques and methods of evaluation and planning transcend the boundaries of specific fields and that relevant contributions to these areas come from people representing many different positions, intellectual traditions, and interests. The primary goals of the journal are to assist evaluators and planners to improve the practice of their professions, to develop their skills, and to improve their knowledge base.
New Directions for Evaluation is a quarterly thematic journal, and an official publication of the American Evaluation Association. The journal publishes empirical, methodological, and theoretical works on all aspects of evaluation.
The Evaluation Exchange is Harvard Family Research Project's evaluation periodical, which addresses current issues facing program evaluators of all levels. Designed as an ongoing discussion among evaluators, program practitioners, funders, and policymakers, The Evaluation Exchange highlights innovative methods and approaches to evaluation, emerging trends in evaluation practice, and practical applications of evaluation theory.
Practical Assessment, Research & Evaluation is a peer-reviewed online journal whose purpose is to provide free access to articles that can have a positive impact on assessment, research, evaluation, and teaching practice.
Journal of MultiDisciplinary Evaluation is a free, online journal published by the Evaluation Center at Western Michigan University. The journal focuses on news and thinking of the profession and discipline of evaluation.
Evaluation Review: A Journal of Applied Social Research is an interdisciplinary forum for social science researchers, planners, and policy makers who develop, implement, and utilize studies designed to improve the human condition. Evaluation Review brings together the latest applied evaluation methods used in a wide range of disciplines. It presents the latest quantitative and qualitative methodological developments, as well as related applied research issues.
Evaluation: The International Journal of Theory, Research and Practice is an interdisciplinary, international peer-reviewed journal. Evaluation’s purpose is to promote dialogue internationally and to build bridges within the expanding field of evaluation.
Canadian Journal of Program Evaluation seeks to promote the theory and practice of program evaluation. The journal publishes full-length articles on all aspects of the theory and practice of evaluation, real-life cases written by evaluation practitioners, and practice notes that share practical knowledge, experiences and lessons learned.
Research on Evaluation
One way to advance the field of informal science education evaluation is through research on evaluation practices, processes, and outcomes. Featured here are just a few of the research on evaluation studies happening in the ISE field.
The NSF-funded BISE project set out to answer the research question: In what ways might the evaluation and research community use a collection of evaluation reports to generate and share useful new knowledge? To answer this question, the BISE team developed an extensive framework for coding ISE evaluation reports and applied the framework to 520 evaluation reports posted to informalscience.org. The BISE research resulted in a wide range of freely available evaluation resources, including the BISE coding framework, an NVivo database with the 520 coded reports from informalscience.org, and five synthesis papers that represent what can be learned about evaluation in the ISE field from a collection of evaluation reports.
SK Partners examined methodological characteristics of summative evaluations in informal science education by asking: 1) What are the major types of designs used in summative evaluations, and what kinds of questions can they answer? and 2) What are the types of data collection methods and measures used, and how many are self-reports or direct measures? Through this research they developed and revised a framework for summative evaluation based on an extensive literature review, a critical review of summative evaluation reports posted to informalscience.org, case studies of what they judged to be exemplary summative evaluations, and interviews with eight leading individuals in ISE evaluation-related fields. Their research resulted in a Framework for Summative Evaluation in Informal Science Education, which synthesizes key elements that comprise a high-quality summative evaluation. The Framework has three dimensions: (a) the rationale underlying the intervention (e.g., program, exhibition); (b) methodological rigor balanced by contextual appropriateness; and (c) usefulness to stakeholders.
EvalFest is an NSF-funded community of practice designed to meet the evaluation-related needs of the growing science festival sector in the United States. As a community of practitioners, evaluation professionals, and researchers, EvalFest will develop, test, and share evaluation approaches that will help science festivals understand and explain their impact. EvalFest is also a research project designed to assess how a community of practice focused on evaluation might lead to better capacity for conducting evaluations, evaluation use, and science festival outcomes. EvalFest will answer the following research questions: 1) How are evaluations used in relation to science festivals, and how does evaluation use change within the context of a community of practice that creates its own multisite evaluation? 2) Which methods and reporting formats are associated with the greatest value in building capacity of individual festivals? 3) In what ways can a community-created multisite evaluation yield additional learning about public science events in particular and informal science education events in general?
The NSF-funded CASNET research project used a multiple case study approach to examine evaluation capacity building (ECB) in a complex adaptive system (CAS): the Nanoscale Informal Science Education Network (NISE Net). Two subcase studies were conducted of Tier 1 (core, funded partners) and Tier 2 (nano-infused partners) members of the NISE Net. The Tier 1 study focused on work groups across Tier 1 institutions, while the Tier 2 study focused on individuals across Tier 2 institutions. Project personnel conducted numerous interviews about, and observations of, ECB activities within NISE Net, focusing mostly on Team-Based Inquiry, an ECB approach developed by the NISE Net. The project developed coding frameworks related to ECB and CAS and produced four research reports as well as additional practitioner-oriented materials.
Advancing the Field of ISE Evaluation
In the last few years, a wide range of professionals have gathered at field-wide summits and convenings to chart a path for the advancement of evaluation within the ISE field.
Summit on Assessment of Informal and Afterschool Science Learning
In June 2012, the Board on Science Education at the National Academies, in cooperation with the Harvard Program in Education, Afterschool and Resiliency, convened a Summit on Assessment of Informal and Afterschool Science Learning in Irvine, CA. The goals of the summit included: 1) Identify the greatest needs and major gaps in the informal science assessment field; 2) Create approaches to build consensus in the field and develop strategies to overcome the identified major gaps; 3) Establish topics and themes for in-depth and long-term work; and 4) Identify and analyze existing examples of informal science assessment tools in relationship to the NSF Evaluation Framework for Informal Science Education, the six strands of science proficiency outlined in the NRC report Learning Science in Informal Environments, and other similar frameworks, and discuss their potential for improving the assessments used in research and evaluation of informal science education. To read the commissioned background papers developed for the summit, visit the National Academies web site.
CAISE Convening on Building Capacity for Evaluation in Informal Science, Technology, Engineering and Math (STEM) Education
In June 2013, CAISE held a convening designed to facilitate discussion about the resources needed to improve the quality of evaluation in informal STEM education. Participants included evaluators currently practicing in the field, as well as those working in other disciplines; learning researchers; experience and setting designers; organizational leaders; program officers from the National Science Foundation (NSF) and other federal funding agencies; and private philanthropic foundations. Three dominant themes emerged during the convening: (1) shared use of evaluation measures and aggregation of findings; (2) access to and coordination of resources; and (3) professional development. Learn more about these themes and corresponding follow-up activities in this convening white paper.
The Palo Alto Convening on Assessment in Informal Settings
In December 2013, a group of leaders of six informal science education assessment projects met in Palo Alto, California for a 2-day exploration of the state of the art of measuring the impact of informal STEM education experiences. The goals for this meeting were to explore in-depth the technical and practical details of the assessments, share and critique findings, and review plans for ongoing work to validate and refine measures. View this update from the field for a quick overview of the context of the meeting, projects involved, and what’s next. Want more detailed information? Check out this synthesis report of the meeting.