Community Repository Search Results

resource project Exhibitions
This project is designed to support collaboration between informal STEM learning (ISL) researchers, designers, and educators and sound researchers and acoustic ecologists to jointly explore the role of auditory experiences, or soundscapes, in learning. In informal STEM learning spaces, where conversation advances STEM learning and is a vital part of exploring STEM phenomena with family and friends, attention to soundscapes can have an important bearing on learning. Understanding how soundscapes may facilitate, spark, distract from, or even overwhelm thinking and conversation will give ISL educators and designers evidence to inform their practice. The project is structured to reflect the complexity of ISL audiences and experiences; thus, partners include the North Park Village Nature Center, located in a diverse immigrant neighborhood in Chicago; Wild Indigo, a Great Lakes Audubon program primarily serving African American visitors in Midwest cities; STEAMing Ahead New Mexico, an after-school and summer camp provider serving families in the rural southwest corner of New Mexico; and four sites in Ohio: MetroParks, the Columbus Zoo and Aquarium, the Franklin Park Conservatory and Botanical Gardens, and the Center of Science and Industry.

Investigators will conduct large-scale exploratory research to answer an understudied research question: How do environmental sounds impact STEM learning in informal learning spaces? Researchers and practitioners will characterize and describe the soundscapes throughout the different outdoor and indoor exhibit and learning spaces. Researchers will observe 800 visitors, tracking attraction, attention, dwell time, and shared learning. In addition to observations, researchers will join another 150 visitors for think-aloud interviews, walking alongside visitors and capturing pertinent notes while visitors describe their experience in real time. Correlational and cluster analyses using machine learning algorithms will be used to identify patterns across different sounds, soundscapes, and participants' responses and reflections. In particular, the analyses will identify characteristics of sounds that correlate with increased attention and shared learning. Throughout the project, a team of evaluators will monitor progress and support continuous improvement, including guidance for developing culturally responsive research metrics co-defined with project partners. Evaluators will also document the extent to which the project builds capacity and influences planning and design considerations for project partners. This exploratory study is the first in a larger research agenda, laying the groundwork for future experimental designs that test causal claims about the relationships between specific soundscapes and visitor learning. Results of this study will be disseminated widely to informal learning researchers and practitioners through workshops, presentations, journal articles, facilitated conversations, and a short film aligned with the focus and findings of the research.
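
To make the analytic approach more concrete, the following is a minimal, hypothetical sketch of the kind of cluster analysis described above, applied to invented visitor-observation features (dwell time, attention episodes, shared-learning talk turns, ambient sound level) using scikit-learn. The abstract does not specify the project's actual variables, instruments, or algorithms, so everything in this sketch is illustrative only.

# Illustrative sketch only: clustering hypothetical visitor-observation features
# to find response profiles. Feature names and data are invented; the project's
# actual measures and methods may differ.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-visitor features: dwell time (s), attention episodes (count),
# shared-learning talk turns (count), ambient sound level (dBA).
observations = rng.normal(loc=[120, 4, 6, 55], scale=[40, 2, 3, 8], size=(800, 4))

# Standardize so no single unit dominates the distance metric.
scaler = StandardScaler()
features = scaler.fit_transform(observations)

# Group visitors into a small number of response profiles.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)

# Convert cluster centers back to original units to characterize each profile.
centers = scaler.inverse_transform(kmeans.cluster_centers_)
for i, center in enumerate(centers):
    print(f"cluster {i}: dwell={center[0]:.0f}s, attention={center[1]:.1f}, "
          f"talk_turns={center[2]:.1f}, sound={center[3]:.1f} dBA")

In a real analysis, the cluster profiles would be inspected alongside the soundscape characterizations to see which acoustic conditions co-occur with higher attention and shared learning.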
DATE: -
TEAM MEMBERS: Martha Merson, Justin Meyer, Daniel Shanahan
resource project Exhibitions
The AI behind Virtual Humans exhibit aims to communicate to the public the capabilities and impact of artificial intelligence (AI) through the AI technologies used in Virtual Humans, including facial recognition and natural language processing. AI has profoundly impacted, and will continue to impact, society in the United States and around the globe, so it is important to prepare the nation’s youth and the future workforce with fundamental knowledge of AI. Informal settings, such as museums, offer open and flexible opportunities to help youth and the general public learn about AI. Virtual Humans provide an ideal vehicle for illustrating many fields of AI, as AI is arguably the science of building intelligence that thinks and acts like humans. Led by a multidisciplinary team of researchers with expertise in AI, learning design, and assessment from the Institute for Creative Technologies at the University of Southern California and the Lawrence Hall of Science at the University of California, Berkeley, this project will develop a Virtual Human exhibit that engages visitors in structured conversations with a Virtual Human while showcasing how AI drives the Virtual Human’s behavior behind the scenes. The exhibit will include collaborative learning experiences for visitor groups such as parent-child pairs, siblings, and peers to explore what AI is and is not, what AI is and is not capable of, and what impact it will have on their lives.
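
For readers unfamiliar with how software can drive a structured conversation, the toy sketch below shows the general idea in miniature using simple keyword matching. It is emphatically not the Virtual Human technology referenced above; every intent and response in it is invented for illustration.

# Purely illustrative sketch: a toy keyword-based dialogue turn. This is NOT the
# USC ICT Virtual Human technology; intents and responses are invented.
INTENTS = {
    "what_is_ai": (["what is ai", "define ai"],
                   "AI is software that performs tasks we associate with human thinking."),
    "ai_limits": (["can ai feel", "is ai conscious"],
                  "I can recognize patterns in data, but I don't have feelings or awareness."),
}

def respond(utterance: str) -> str:
    """Match a visitor utterance to a canned response by simple keyword lookup."""
    text = utterance.lower()
    for keywords, reply in INTENTS.values():
        if any(keyword in text for keyword in keywords):
            return reply
    return "That's a great question. Let's explore it together at the exhibit."

if __name__ == "__main__":
    print(respond("What is AI, really?"))
    print(respond("Can AI feel emotions?"))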

The project will investigate three research questions: (1) How can a museum exhibit be designed to engage visitor dyads in collaborative learning about AI? (2) How can the complex AI concepts underlying the Virtual Human be communicated in a way the general public can understand? (3) To what extent, and how, does the Virtual Human exhibit increase knowledge and reduce misconceptions about AI?

The project leverages existing conversational Virtual Human technology developed through decades of collaborative research in AI, including machine vision, natural language processing, automated reasoning, character animation, and machine learning. Situated in the informal setting of a museum, the exhibit will be designed following evidence-based research in Computer-Supported Collaborative Learning. The project team will use a mixed-methods design, drawing on design-based research methodologies and experimental studies. The research team will analyze visitor observations and interviews to drive iterative formative improvement. Randomized experimental studies will be conducted in both lab and naturalistic environments to gauge visitor knowledge about AI, and quasi-experimental analyses will examine the relationship between engagement with exhibit features and AI knowledge. The project will produce an interactive exhibit with a Virtual Human installed at the Lawrence Hall of Science and other participating museums, along with instruments to measure AI learning. The project will also produce a website where visitors can experience parts of the exhibit online and continue more in-depth learning about AI and the Virtual Human technology. The project holds the potential to produce theoretical and practical advances in helping the general public develop an understanding of AI capabilities and ethics, advancing knowledge of the process through which young learners develop understanding of AI, and formulating design principles for creating collaborative learning experiences in informal settings. The results will be disseminated through conference presentations, scholarly publications, and social media. The Virtual Human exhibit will be designed for dissemination and made available for installation within informal science education communities.
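
As a rough illustration of the quasi-experimental analysis mentioned above, the sketch below fits an ordinary least squares regression relating simulated exhibit engagement to a post-visit AI knowledge score while adjusting for prior knowledge. All variable names, scores, and effect sizes are hypothetical; the project's actual instruments and models are not described in this abstract.

# Illustrative sketch only: a simple regression of the kind a quasi-experimental
# analysis might use. Variables and data are invented for demonstration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300

# Hypothetical visitor-level measures.
prior_knowledge = rng.normal(50, 10, n)      # pre-visit AI knowledge score
engagement = rng.integers(1, 10, n)          # count of interactions with exhibit features
post_knowledge = (0.6 * prior_knowledge + 2.0 * engagement
                  + rng.normal(0, 5, n))     # simulated post-visit score

data = pd.DataFrame({
    "post": post_knowledge,
    "prior": prior_knowledge,
    "engagement": engagement,
})

# Estimate the engagement effect while adjusting for prior knowledge.
model = smf.ols("post ~ prior + engagement", data=data).fit()
print(model.summary().tables[1])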
DATE: -
TEAM MEMBERS: Ning Wang, Eric Greenwald, Ari Krakowski