
Community Repository Search Results

resource research Media and Technology
Through the Digital Media in Everyday Life research initiative, The Museum of Science and Industry, Chicago seeks to better understand our audience and their relationship to technology and digital media in order to inform the development of our own digital initiatives. Our definition of “audience” is necessarily broad, and includes visitors to the Museum as well as users of all our online, mobile, and social media experiences. Therefore it is important for us to understand not only what mobile devices visitors might bring into the Museum, but also how users behave online and in social networks.
DATE:
TEAM MEMBERS: Steven Beasley Annie Conway
resource research Media and Technology
Digital media and technology have become culturally and economically powerful parts of contemporary middle-class American childhoods. Immersed in various forms of digital media as well as mobile and Web-based technologies, young people today appear to develop knowledge and skills through participation in media. This MacArthur Report examines the ways in which afterschool programs, libraries, and museums use digital media to support extracurricular learning. It investigates how these three varieties of youth-serving organizations have incorporated technological infrastructure and digital
DATE:
TEAM MEMBERS: Joan Ganz Cooney Center Becky Herr-Stephenson Diana Rhoten Dan Perkel Christo Sims
resource research Media and Technology
Finger-based touch input has become a major interaction modality for mobile user interfaces. However, due to the low precision of finger input, small user interface components are often difficult to acquire and operate on a mobile device. It is even harder when the user is on the go and unable to pay close attention to the interface. In this paper, we present Gesture Avatar, a novel interaction technique that allows users to operate existing arbitrary user interfaces using gestures. It leverages the visibility of graphical user interfaces and the casual interaction of gestures. Gesture Avatar
DATE:
TEAM MEMBERS: Hao Lü Yang Li
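The entry above describes operating small UI targets indirectly through drawn gestures. As a generic illustration of gesture-based target acquisition (a hedged sketch of the general idea only, not the authors' Gesture Avatar implementation; all names here are hypothetical), a drawn stroke can act as a large proxy that is bound to the on-screen widget nearest it:

```python
# Generic sketch of gesture-based target acquisition (hypothetical names,
# not the Gesture Avatar implementation): a sloppy stroke's bounding-box
# center is bound to the nearest widget, so the stroke acts as a large,
# easy-to-hit proxy for a small target.

from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    x: float  # widget center x
    y: float  # widget center y

def stroke_center(points):
    """Center of the stroke's bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

def bind_avatar(points, widgets):
    """Bind the drawn stroke to the widget nearest its center."""
    cx, cy = stroke_center(points)
    return min(widgets, key=lambda w: (w.x - cx) ** 2 + (w.y - cy) ** 2)

widgets = [Widget("back", 10, 5), Widget("play", 50, 5), Widget("fwd", 90, 5)]
stroke = [(45, 40), (55, 42), (52, 60)]  # imprecise stroke drawn below "play"
print(bind_avatar(stroke, widgets).name)  # prints "play"
```

The point of the proxy is that the stroke can be drawn anywhere with low precision; only its position relative to the candidate targets matters.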
resource research Media and Technology
This paper explores the interactive possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed using the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and
DATE:
TEAM MEMBERS: Jim Spadaccini Hyunyoung Song Hrvoje Benko Francois Guimbretiere Shahram Izadi Xiang Cao Ken Hinckley
resource research
In this paper we describe two projects that utilize reality-based interaction to advance collaborative scientific inquiry and discovery. We discuss the relation between reality-based and embodied interaction, and present findings from an experimental study that illustrate benefits of reality-based tabletop interaction for collaborative inquiry-based learning.
DATE:
TEAM MEMBERS: Orit Shaer
resource research Media and Technology
New mobile devices with large multi-touch displays, such as the iPad, have brought revolutionary changes to the ways users interact with computers. Instead of traditional input devices such as keyboards, touchpads and mice, multi-touch gestures are used as the primary means of interacting with mobile devices. Surprisingly, body-motion gestures are evolving to become a new, natural, and effective way for game players to interact with game consoles in a very similar fashion: in Kinect for Xbox 360, a controller-free gaming experience is made possible by using body-motion gestures to play games.
DATE:
TEAM MEMBERS: Yuan Feng Zimu Liu Baochun Li
resource evaluation
Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons
DATE:
TEAM MEMBERS: Andrew Bragdon Eugene Nelson Yang Li Ken Hinckley
resource evaluation
Recent advances in touch screen technology have increased the prevalence of touch screens and have prompted a wave of new touch screen-based devices. However, touch screens are still largely inaccessible to blind users, who must adopt error-prone compensatory strategies to use them or find accessible alternatives. This inaccessibility is due to interaction techniques that require the user to visually locate objects on the screen. To address this problem, we introduce Slide Rule, a set of audio-based multi-touch interaction techniques that enable blind users to access touch screen applications
DATE:
TEAM MEMBERS: Jim Spadaccini Jeffrey Bigham Jacob Wobbrock
resource research
Many tasks in graphical user interfaces require users to interact with elements at various levels of precision. We present FingerGlass, a bimanual technique designed to improve the precision of graphical tasks on multitouch screens. It enables users to quickly navigate to different locations and across multiple scales of a scene using a single hand. The other hand can simultaneously interact with objects in the scene. Unlike traditional pan-zoom interfaces, FingerGlass retains contextual information during the interaction. We evaluated our technique in the context of precise object selection
DATE:
TEAM MEMBERS: Dominik Käser Maneesh Agrawala Mark Pauly
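The FingerGlass entry above describes interacting with a magnified copy of a scene region for precision. As a generic illustration of the zoom-lens idea (a hedged sketch under assumed coordinates and names, not the FingerGlass implementation), the core geometry is mapping a touch inside the magnified callout back to precise scene coordinates:

```python
# Generic zoom-lens coordinate mapping (hypothetical names, not the
# FingerGlass implementation): a region of interest in the scene is shown
# magnified in a callout; a touch inside the callout is divided by the
# zoom factor and offset back into scene space, giving sub-pixel-scale
# precision relative to the raw finger position.

def to_scene(touch, callout_origin, roi_origin, zoom):
    """Map a touch in magnified-callout space back to scene space."""
    tx, ty = touch
    cx, cy = callout_origin
    rx, ry = roi_origin
    return (rx + (tx - cx) / zoom, ry + (ty - cy) / zoom)

# A region of interest at scene position (100, 100), magnified 4x into a
# callout drawn at screen position (300, 50):
p = to_scene((340, 90), callout_origin=(300, 50), roi_origin=(100, 100), zoom=4)
print(p)  # (110.0, 110.0)
```

Because finger movement in the callout is divided by the zoom factor, a 4x lens reduces the effect of finger jitter on the scene position by the same factor, which is what makes small targets acquirable.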
resource research Media and Technology
Modern smartphones contain sophisticated sensors to monitor three-dimensional movement of the device. These sensors permit devices to recognize motion gestures: deliberate movements of the device by end users to invoke commands. However, little is known about best practices in motion gesture design for the mobile computing paradigm. To address this issue, we present the results of a guessability study that elicits end-user motion gestures to invoke commands on a smartphone device. We demonstrate that consensus exists among our participants on parameters of movement and on mappings of motion
DATE:
TEAM MEMBERS: Jim Spadaccini Jaime Ruiz Yang Li Edward Lank