
Digging Into the BISE Coding Framework

This post is part of a series on the Building Informal Science Education (BISE) project. For previous posts in this series, click here.

How did the Building Informal Science Education (BISE) project start to make sense of the content in over 500 evaluation reports on InformalScience.org? It all began with our coding framework. Coding categories and related codes were created to align with key features of evaluation reports and the potential coding needs of the five BISE synthesis authors.

Creating the Coding Framework

Throughout our iterative process of developing the Coding Framework, we looked to a number of resources and individuals in the evaluation and informal education fields.

  • The evaluation reporting literature provided guidance on what report elements are useful for understanding how an evaluation is carried out and interpreting the findings (American Evaluation Association, 2004; Fitzpatrick, Sanders, & Worthen, 2011; Miron, 2004; Yarbrough, Shulha, Hopson, & Caruthers, 2011).
  • We looked to the Center for Advancement of Informal Science Education (CAISE) Portfolio Inquiry Group’s codebook, which was developed to analyze publicly available NSF informal science education (ISE) award data (Baek, 2011).
  • We referred to the categories used in the NSF ISE Online Project Monitoring System’s (OPMS) Baseline Survey, which NSF principal investigators complete annually about their project and related evaluation activities.
  • We conducted preliminary coding of a sample of the reports on InformalScience.org to refine code definitions and identify additional codes that emerged from the data.
  • The codes were further refined based on feedback from evaluators during a presentation and discussion at the 2011 Visitor Studies Association conference.
  • As we brought in the five synthesis authors, code definitions and categories were further revised to ensure their clarity and relevance to the authors.
  • As the Science Museum of Minnesota coding team coded reports, definitions continued to be strengthened through the addition of examples and non-examples in the codebook.

The process of creating the coding framework was a long and, at times, difficult endeavor. Within the ISE field, people sometimes use different terminology, so our codes were continually refined and expanded to make sure we recognized and included the variations in language across the field. There was also variation in what people included in their evaluation reports, which sometimes made it difficult to establish adequate percent agreement among coders. This led us to further refine our code definitions based on what was explicit in reports. We recognize that within any qualitative study there are variations in how people might define codes or which codes they see as important. We hope you’ll find our final Coding Framework, and related BISE products, useful for gaining a deeper understanding of the characteristics of evaluations in ISE.
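For readers less familiar with percent agreement, the sketch below shows the basic calculation we have in mind: the share of items on which two coders assign the same code. The codes and values are invented for illustration only; they are not drawn from the BISE data and do not reflect our actual reliability procedure.

```python
# Minimal sketch of simple percent agreement between two coders.
# All codes and values below are hypothetical, not taken from BISE reports.

def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes applied to five reports for one category (e.g., evaluation type).
coder_a = ["summative", "formative", "summative", "front-end", "summative"]
coder_b = ["summative", "formative", "remedial", "front-end", "summative"]

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 80%
```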

Our Final Coding Categories

Below are the final coding categories in the BISE Coding Framework and short descriptions of these categories. A list of the codes included in each category can be found here. Later this spring, we will have a more detailed codebook ready to share with the field. That document will include the coding categories, codes within each category, and detailed definitions and examples of each code.

BISE Coding Categories

  • Evaluand: The object(s) being evaluated.
  • Evaluation type: The type of evaluation, such as formative or summative.
  • Author: The individual(s) who wrote the report.
  • Evaluation organization: The organization that the author(s) is/are part of.
  • Evaluator type: Whether the evaluator was considered internal or external.
  • National Science Foundation number: The NSF number associated with a project, if it was NSF-funded.
  • Other funding source: Sources, other than NSF, that funded the project being evaluated.
  • Funding start & expiration date: For NSF-funded projects, the time span when the project took place.
  • Year of written report: The year the report was written.
  • Evaluation purpose and/or questions: The focus of the evaluation, including evaluation questions.
  • Project setting: The location of the project being evaluated.
  • Sample size: The description of sample size for each data collection method.
  • Sample for the evaluation: The type of individuals that composed the sample or samples of the evaluation.
  • Age of individuals sampled: The age(s) of the individuals sampled.
  • Special types of adults sampled: Whether the evaluation included specific types of adult audiences.
  • Sampled a school group: Whether the evaluation included a preK-12 school group.
  • Accessibility issues: Whether the evaluation looked at physical and/or cognitive accessibility issues.
  • Language translation: Whether the evaluation dealt with language translation in any way.
  • Data collection methods: The method(s) used to collect data for the evaluation.
  • Instruments & instrument type provided: Whether data collection instruments were included with the report, and the type of instrument.
  • Pre/post measures: Whether the evaluation used pre/post measures.
  • Follow-up: Whether the evaluation used any type of follow-up data collection.
  • Recommendations: The inclusion of evaluator-generated recommendations, or suggestions, based on the evaluation data.
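To make the categories concrete, here is a minimal sketch of how a single coded report might be represented as a record. All field names and values are invented for illustration; they are not drawn from an actual BISE report, and the real BISE data live in NVivo and spreadsheets rather than in this form.

```python
# Hypothetical coded-report record using a subset of the BISE coding categories.
# All values are invented; the actual data are stored in NVivo and spreadsheets.
coded_report = {
    "evaluand": "traveling exhibition",
    "evaluation_type": "summative",
    "evaluator_type": "external",
    "year_of_report": 2009,
    "project_setting": "science center",
    "data_collection_methods": ["timing and tracking", "exit interviews"],
    "sampled_school_group": False,
    "pre_post_measures": False,
    "recommendations_included": True,
}
```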

You might wonder why our coding categories don’t include subject area (i.e., science content). We tried to code reports by subject area and revised our coding structure multiple times in the process, but we were unable to reach adequate percent agreement. It seemed like it shouldn’t have been that hard to do, but many reports didn’t include enough information about the topic of the object being evaluated for us to reliably categorize it into a particular subject area.

How You Might Use Our Resources

As I mentioned earlier, we’ve coded over 500 reports based on our Coding Framework. In addition to sharing our Framework, we will share our NVivo database and related spreadsheets this spring. You’ll be able to search these resources to find evaluation reports that have certain characteristics.

Looking for a summative evaluation report that uses timing and tracking in an exhibition? We’ve made it easy. How about evaluation questions people ask when evaluating media with middle school kids? You can find that, too!
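As a hint of what that searching might look like, here is a minimal sketch that filters a spreadsheet export with pandas to find summative evaluations of exhibitions that used timing and tracking. The file name and column names are hypothetical placeholders; the actual column headings will follow the coding categories in the spreadsheets we release.

```python
import pandas as pd

# Hypothetical example: filter a spreadsheet export for summative evaluations
# of exhibitions that used timing and tracking. The file and column names are
# placeholders, not the actual BISE release format.
reports = pd.read_csv("bise_coded_reports.csv")

matches = reports[
    (reports["evaluation_type"] == "summative")
    & (reports["evaluand"] == "exhibition")
    & (reports["data_collection_methods"].str.contains("timing and tracking", case=False, na=False))
]

print(matches[["report_title", "year_of_report"]])
```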

Stay tuned for our next BISE Blog post, where we’ll share more about what you can find in the database based on our coding categories.


References

American Evaluation Association. (2004). American Evaluation Association guiding principles for evaluators. Retrieved from http://www.eval.org/p/cm/ld/fid=51.

Baek, J. Y. (2011). CAISE Portfolio Inquiry Group: Codebook. Paper presented at the CAISE Informal Commons Working Group, Corrales, NM.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River, NJ: Pearson Education.

Miron, G. (2004). Evaluation report checklist. Kalamazoo, MI.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.

Posted by Amy Grack Nelson