Updates from the Field: Meeting on Assessment in Informal Science Education
What is the state of the art in how we assess outcomes of informal STEM experiences? In early December 2013, a group of six informal STEM learning assessment projects met for a two-day deep dive on this question at the offices of the Gordon and Betty Moore Foundation in Palo Alto, California. The projects shared a focus on developing validated instruments for assessing outcomes and were funded by the National Science Foundation, the Noyce Foundation, the Bechtel Foundation, and the Moore Foundation. The goals for the convening were to strengthen the community of researchers working on common measures, begin to map the current set of measures, and identify gaps. The gathering also provided a rare opportunity to compare measures across projects, explore the assessments in depth, share and critique findings, and review plans for ongoing work to validate and refine measures.
Context for the Meeting
The six projects (see descriptions below) are all actively developing, validating, and sharing assessments targeted at informal STEM settings. In addition to the researchers from the six assessment projects, the meeting also included a group of “critical friends” who were informal science researchers, evaluators, former NSF AISL program officers, and leading practitioners. As assessments were presented and common issues discussed, these critical friends provided critique, feedback, and diverse perspectives from the broader field of informal STEM education.
In her welcoming remarks to the meeting, Janet Coffey of the Gordon and Betty Moore Foundation reminded the projects that they share the goal of establishing whether and in what ways informal learning experiences are consequential for STEM education. The field needs assessments that consistently measure outcomes in order to inform new and better program design. We also need compelling ways to characterize the impact of our field. Assessment has long been prominent in formal education, and the time is ripe for informal STEM education researchers to design instruments sensitive enough to the distinctive features of informal learning experiences to secure their unique place in the STEM educational ecology.
Ann Bowers of the Noyce Foundation extended these remarks, pointing out that the primary purpose of assessments is not to satisfy funders or stakeholders, but to guide new and innovative STEM programming with feedback that helps us understand the real difference we are making in children’s lives.
In presentations and small group breakouts, the six projects shared their research and instruments. The meeting touched on the substance of the measures, including the particular STEM topics they focus on and the underlying constructs they measure, such as STEM interest, STEM motivation, and scientific sense-making skills, among others. Participants also discussed the utility of the measures, including their original purposes (e.g., formative assessment, program evaluation, individual assessment for research purposes), the settings for the assessment (museum, classroom, afterschool, etc.), and the particular ages and populations targeted.
A matrix was developed during the meeting that participants are now refining to provide a map of these resources for the field. In the meantime, CAISE can report that the energy and optimism of this group were encouraging. In response to increasing calls for assessment in informal STEM education, this convening provided evidence that the field is already making excellent progress, and within a year we will be reporting on the emergence and use of several more valid and reliable assessments that are specifically tailored for informal STEM learning.
Some of the take-home messages of the meeting were:
1. The outcomes of informal STEM education experiences can be assessed in ways that are valid, reliable, and useful to the field. Even though the projects represented at the meeting are all still in progress, early findings provide unambiguous evidence of success.
2. Once we develop assessments that target the outcomes we care most about in the informal STEM field, we will be in a much better position to conduct longitudinal studies of the cumulative and collective impact of informal STEM experiences on people’s lives.
3. One of the challenges of informal STEM learning assessment is the extent to which assessments need to be customized for different contexts or audiences. To be of most use to practice, assessments should measure constructs that practitioners can directly design into their work.
4. Developing assessments is a complex, non-linear, and challenging process. Research progresses through peer review, debate, and argument. Perhaps one of the most important uses of this convening was to provide a setting for rigorous scientific debate and critique that helped to sharpen the participants’ and funders’ collective understanding of the individual projects and our evolving knowledge about the state of the art.
The community of researchers developing common measurements in informal STEM education is beginning to converge. It is a relatively small but diverse group, with members coming to the informal learning problem space from different backgrounds. As an interdisciplinary network, they now have the opportunity to leverage this diverse expertise to determine the greatest community needs, who has which skills and strengths, which instruments rest on which assumptions, and how those instruments connect to outcomes. Strong assessments will not emerge from a process where we work in silos. A growing body of projects is exploring what it means to become a community that strengthens and leverages its work through peer critique, direct collaboration, and knowledge sharing. It was an important point of discussion in Palo Alto that such a community would be open to all informal STEM education assessment projects, not just those who participated in this convening.
Informal STEM Education Assessment Projects
Advancing Technology Fluency (PI: Brigid Barron, Stanford University): This project, which advances the tech fluency of underrepresented youth and their teachers through Project-Based Learning (PBL), is designed to contribute to a basic understanding of how aspects of technological fluency develop in both formal and informal settings.
Developing, Validating, and Implementing Situated Evaluation Instruments (DEVISE) (PI: Rick Bonney, Cornell University): DEVISE is a project aimed at helping professional science educators obtain strategies and tools for evaluating the educational and social impacts of informal science education projects with an emphasis on projects that engage the public in scientific research.
Common Instrument (PI: Gil Noam, Harvard University): The Common Instrument is a survey for youth 10 years or older that includes 18 self-report items to assess child and adolescent interest and engagement in science. It is designed to address a need for a reliable tool that measures these constructs in informal science and Out-of-School Time programs.
Framework for Observing and Categorizing Instructional Strategies (FOCIS) (PI: Robert Tai, University of Virginia): FOCIS has developed an instrument that identifies learning activity typologies such as collaborating, creating, and discovering. It has supported research on gender differences in career aspirations and a study on whether youth with preferences for particular types of learning activities were more likely to select STEM-related career choices than youth who have different preferences.
Science Learning Activation Lab (PI: Rena Dorph, University of California, Berkeley): The Learning Activation Lab is a national research and design effort to dramatically strengthen learning in the United States and beyond. The goal is to learn and demonstrate how we can activate children’s interest and curious minds in ways that ignite persistent engagement in learning and innovation. This work spans multiple disciplines and considers the combinations of dispositions, skills, and knowledge that position individuals for success in learning in those disciplines.
SYNERGIES (PI: John Falk, Oregon State University): The Understanding and Connecting STEM Learning in the Community project is designed to understand how, when, where, why, and with whom children access and use STEM resources in their daily lives.