Recent advances in multimodal learning analytics show significant promise for addressing these challenges by combining multi-channel data streams from fully instrumented exhibit spaces with multimodal machine learning techniques to model patterns in visitor experience data. We describe initial work on creating a multimodal learning analytics framework for investigating visitor engagement with Future Worlds, a game-based interactive surface exhibit for science museums.
DATE:
TEAM MEMBERS:
Jonathan Rowe, Wookhee Min, Seung Lee, Bradford Mott, James Lester
Museum and Science Center Exhibits
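The abstract above describes fusing multi-channel data streams from an instrumented exhibit space into features for modeling visitor experience. As a minimal sketch of one common approach (early fusion of windowed summary statistics), the snippet below combines two hypothetical synchronized streams; the stream names, window size, and sampling rate are illustrative assumptions, not details from the Future Worlds exhibit itself.

```python
import numpy as np

# Hypothetical stand-ins for two synchronized exhibit data streams:
# per-second touch-interaction counts and per-second posture scores.
rng = np.random.default_rng(1)
touch = rng.poisson(2.0, size=60)        # 60 s of touch events
posture = rng.normal(0.0, 1.0, size=60)  # 60 s of posture scores

def window_features(stream, win=10):
    """Summarize a stream as means over non-overlapping windows."""
    usable = len(stream) // win * win
    return stream[:usable].reshape(-1, win).mean(axis=1)

# Early fusion: concatenate windowed features from each channel into a
# single visitor-level feature vector for a downstream engagement model.
fused = np.concatenate([window_features(touch), window_features(posture)])
print(fused.shape)  # 6 windows x 2 channels -> a 12-dimensional vector
```

A downstream classifier or sequence model would consume `fused` (or a sequence of such vectors) to predict engagement-related outcomes.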
Multimodal models often utilize video data to capture learner behavior, but video cameras are not always feasible, or even desirable, to use in museums. To address this issue while still harnessing the predictive capacities of multimodal models, we investigate adversarial discriminative domain adaptation for generating modality-invariant representations of both unimodal and multimodal data captured from museum visitors as they engage with interactive science museum exhibits.
DATE:
TEAM MEMBERS:
Nathan Henderson, Wookhee Min, Andrew Emerson, Jonathan Rowe, Seung Lee, James Minogue, James Lester
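The second abstract applies adversarial discriminative domain adaptation to produce modality-invariant representations. The sketch below illustrates the core adversarial idea with deliberately simplified pieces: linear encoders, a linear logistic discriminator, and random placeholder data. All dimensions and variable names are assumptions for illustration; the authors' actual architectures and data are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder features for two modalities with different dimensions,
# e.g. a "source" modality (video-based) and a "target" modality (logs).
n, d_src, d_tgt, d_z = 200, 8, 5, 4
X_src = rng.normal(size=(n, d_src))
X_tgt = rng.normal(size=(n, d_tgt))

W_src = rng.normal(scale=0.1, size=(d_src, d_z))  # frozen source encoder
W_tgt = rng.normal(scale=0.1, size=(d_tgt, d_z))  # target encoder (trained)
w_d = rng.normal(scale=0.1, size=d_z)             # domain discriminator

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

lr = 0.05
for _ in range(300):
    Z_src = X_src @ W_src  # source embeddings (domain label 1)
    Z_tgt = X_tgt @ W_tgt  # target embeddings (domain label 0)

    # 1) Discriminator step: learn to tell the two domains apart.
    p_src, p_tgt = sigmoid(Z_src @ w_d), sigmoid(Z_tgt @ w_d)
    grad_d = (Z_src.T @ (p_src - 1) + Z_tgt.T @ p_tgt) / n
    w_d -= lr * grad_d

    # 2) Encoder step: update the target encoder to fool the
    #    discriminator (target embeddings pushed toward "source").
    p_tgt = sigmoid(X_tgt @ W_tgt @ w_d)
    grad_e = X_tgt.T @ np.outer(p_tgt - 1, w_d) / n
    W_tgt -= lr * grad_e

# Average discriminator confidence that target embeddings are source-domain;
# adversarial training pushes this away from a confident 0.
p_final = sigmoid(X_tgt @ W_tgt @ w_d).mean()
print(float(p_final))
```

In full ADDA the encoders and discriminator are neural networks and the source encoder is pretrained on a supervised task before the adversarial stage; the alternating two-step update above is the part this sketch preserves.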