
Introducing the Building Informal Science Education Blog Series

Within the field of evaluation, there are few places where evaluators can share their reports. We are fortunate that one such resource exists in the informal learning community: informalscience.org. Informalscience.org provides evaluators with access to a rich collection of reports they can use to inform their practice and learn about the wide variety of designs, methods, and measures used in evaluating informal education projects. In what ways might the evaluation and research community use a collection of evaluation reports to generate and share useful new knowledge? The Building Informal Science Education project set out to answer this question.

Building Informal Science Education (BISE) is an NSF-funded collaboration among the University of Pittsburgh Center for Learning in Out-of-School Environments, the Science Museum of Minnesota, and the Visitor Studies Association. We’ve spent the last few years diving deep into the evaluation reports uploaded to informalscience.org in order to begin to understand what the field can learn from such a rich resource. To date, BISE has produced an extensive coding framework, coded 521 reports, and commissioned five synthesis papers. We are now working to integrate some of our coding into the new informalscience.org website and exploring ways to make our NVivo database freely available so that others can conduct their own analyses.

Our Coding Framework

All evaluation reports posted to informalscience.org through this spring have been coded using the framework that BISE developed. We created coding categories to align with key features of evaluation reports and the potential coding needs of synthesis authors. We also aligned our codes with those used by other initiatives in the informal science education field. One of these was the Center for Advancement of Informal Science Education (CAISE) Portfolio Inquiry Group’s codebook, which was developed to analyze publicly available National Science Foundation (NSF) informal science education program award data (the program has since been renamed the Advancing Informal STEM Learning, or AISL, program). We also referred to the categories used in the NSF ISE Online Project Monitoring System’s Baseline Survey, which NSF Principal Investigators complete about their projects and related evaluation activities. Our final overarching coding categories are listed below. Within each category are a number of themes. We’ll discuss findings related to these categories in future BISE Blog posts.

  • Evaluand
  • Evaluation type
  • Funding source
  • Evaluator type
  • Evaluation purpose
  • Inclusion of instruments and instrument type provided
  • Project setting
  • Sample characteristics
  • Age of individuals sampled
  • Data collection methods
  • Pre-, post-, and follow-up measures
  • Statistical tests
  • Accessibility
  • Language translation
  • Recommendations
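
Because we’re exploring ways to make the coding database available for others’ analyses, here is a minimal sketch of how one might tally reports by a coding category once the data are exported. This is purely illustrative, not the BISE project’s actual tooling: the file name bise_codes.csv and the column name evaluation_type are assumptions standing in for whatever form a real export would take.

    # Hypothetical sketch: assumes the coding database has been exported to a
    # CSV file with one row per report and one column per coding category.
    # The file and column names below are invented for illustration.
    import csv
    from collections import Counter

    def count_by_category(path, column):
        """Tally how many coded reports fall under each value of one category."""
        with open(path, newline="", encoding="utf-8") as f:
            return Counter(row[column] for row in csv.DictReader(f) if row.get(column))

    if __name__ == "__main__":
        # For example: how many reports were coded as each evaluation type.
        for value, n in count_by_category("bise_codes.csv", "evaluation_type").most_common():
            print(f"{value}: {n}")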

Synthesis Papers

Authors from the three collaborating organizations conducted syntheses based on a subset of the reports: the 431 evaluations of STEM informal learning projects posted to informalscience.org on or before January 31, 2012. Questions addressed by the synthesis papers include:

  • What are common methods, practices, and outcomes of media projects and evaluations?
  • What can we learn from the recommendations evaluators provide in summative evaluations of exhibitions?
  • What are the practices of evaluating informal science education websites and how have these practices changed over time?
  • What are the intended impacts of socially relevant practices in museums and how are those impacts measured?
  • If the visitor studies field wants to learn from evaluation reports on informalscience.org and similar types of collections, how can evaluators help to ensure the reports they post are useful to other evaluators?

Future BISE Blog Posts

Stay tuned for future BISE blog posts, where members of the BISE team will discuss key insights and findings from our coding and syntheses. We’ll share examples of innovative data collection methods being used in the ISE field, alternative ways to communicate the evaluation process and findings to stakeholders, snapshots of findings related to the various coding categories, and findings from the five synthesis papers. The BISE project team includes Kevin Crowley (PI), Karen Knutson (Co-PI), Kirsten Ellenbogen (Co-PI), Amy Grack Nelson, and Sarah Cohn. A very special thanks to the Science Museum of Minnesota staff who spent many hours coding all of the reports: Zdanna Tranby, Gayra Ostgaard, Gretchen Haupt, and Al Onkka.
Posted by Amy Grack Nelson