Building Informal Science Education: Context and Synthesis

How do we know that informal STEM learning happens? Where’s the evidence to support our impact? What are the design principles underlying effective practice? What’s the best way to conceptualize, support, and measure learning in different learning environments and with different audiences?

Most informal science learning projects funded by the National Science Foundation (NSF) are required to conduct summative evaluation studies. This represents a sizeable investment of time, money, and intellectual energy by the informal science education (ISE) field. Hundreds of such studies have been conducted over the last decade alone, each aimed directly at determining the educational effectiveness of an individual informal STEM learning project. So, fortunately, the field has collected extensive evidence about outcomes in settings as varied as museums, television shows, large-format cinema, digital gaming, websites, community programs, citizen science projects, and more.

But while research studies are published in journals and accessible to all, evaluation studies have often been much harder to access. The intended audience for evaluation reports of all types, summative as well as front-end, remedial, and formative, is often quite small; the project personnel and the funder are frequently the only people who read these reports in detail. Unlike research, summative evaluation is not conducted with the goal of discovering generalizable knowledge and communicating it broadly to the field. Summative evaluations mostly ask very specific questions about a single project, with the goal of collecting evidence that the project team and funder can use to reflect on the successes and challenges of a completed project. As a result, evaluation reports are often destined for our archived inboxes and filing cabinets: relevant when they are prepared and submitted, but too often filed and forgotten.

Background on the BISE Project

This situation began to change when NSF first funded the original informalscience.org website in 2006. An important goal of the site from the beginning has been to host a publicly accessible collection of evaluation studies of informal STEM education projects. In any research and development enterprise, it is essential to build upon prior work, a difficult task when previous work in ISE was not readily available to those entering the field. So, as a first step towards building shared empirical knowledge, ISE projects have been posting their evaluations in our searchable and downloadable database of reports. And that step has been highly successful: in 2013 alone, evaluation report records were viewed 19,388 times from our collection of almost 700 evaluation studies.

The Building Informal Science Education (BISE) project represents a second step in helping the field access, understand, and learn from prior work. The project’s objective was to explore what might be learned by conducting secondary analyses and syntheses of the existing evaluation findings in the database. The BISE team first located and categorized available data within the reports, identified the audiences and contexts studied, and identified the range of methods that have been used to study informal STEM learning projects and programs. This work resulted in a coding scheme for the reports that Amy Grack Nelson will describe in an upcoming blog post.

The BISE Synthesis Papers

Using this coding scheme as a roadmap to the findings included in the database, separate research teams then mined the data to find potential topics for synthesis studies. Each of the studies explores a particular issue within ISE, seeking general lessons that might be learned by systematically combining or comparing findings across individual studies.

Together, the studies represent a kind of feasibility test for using databases of evaluation reports as a source of field-wide, generalizable knowledge. Because evaluation reports, as distinct from research studies, have not typically been prepared with the goal of identifying broader lessons learned, some of the questions the BISE project has been asking are: What are the limits of what we can learn from evaluation reports? Are there common questions that evaluators are already asking, and might we be able to compare their answers across projects? How should we help funders, practitioners, and researchers make use of the information in reports? Are there ways we can change our evaluation practices to make them more amenable to cross-project generalization?

Each synthesis study, then, has three goals: 1) to generate and communicate new knowledge about a key ISE issue; 2) to demonstrate ways in which a synthetic study of evaluation reports might have value in promoting a broader conversation about impacts and evaluation practice; and 3) to explore ideas about what these trends in informal science evaluation might suggest about future practice.

Future articles will include more details about:

  • BISE codes
  • The BISE data sample
  • Follow-up studies that may result from BISE
  • The synthesis papers themselves, along with commentary on the topics explored in the papers

We look forward to sharing these studies with you, and we invite your commentary, criticism, and suggestions in the comments following each article. The goal of our synthesis project is to begin a conversation about evidence with our colleagues in the field. We hope that you find our results intriguing and that you might consider diving deep into the evaluation studies yourself. This spring, the BISE team will begin disseminating our database to anyone who is interested. Please contact me or Amy Grack Nelson for more information.

Stay tuned for the post in our BISE blog series where Amy Grack Nelson will dive into the coding framework we developed for the project.

Posted by Kevin Crowley