
Community Repository Search Results

resource project Media and Technology
This project aims to create a language- and framework-independent gesture recognition toolkit that takes OSC messages formatted according to the TUIO specification as input and outputs recognized gestures via the OSC protocol. I will use the gesture recognition toolkit AMELiA to describe models specifically for the domain of multitouch gestures. This project will enable multitouch application developers to easily define a gesture and utilize it within their applications, creating more engaging experiences.
DATE:
TEAM MEMBERS: Sashikanth Damaraju
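The toolkit above consumes TUIO-formatted OSC messages. As a minimal sketch of what that input looks like, the function below decodes the argument list of a TUIO 1.1 `/tuio/2Dcur` "set" message (the 2D cursor profile: session id, normalized position, velocity, and motion acceleration). The function name and dict layout are illustrative, not from the project itself:

```python
def parse_2dcur_set(args):
    """Decode the argument list of a TUIO 1.1 /tuio/2Dcur 'set' message.

    Per the TUIO 1.1 spec the layout is: set s x y X Y m
    (session id, position, velocity, motion acceleration).
    """
    if not args or args[0] != "set":
        raise ValueError("not a 'set' message")
    s, x, y, vx, vy, m = args[1:7]
    return {
        "session_id": int(s),
        "pos": (float(x), float(y)),       # normalized to [0, 1]
        "vel": (float(vx), float(vy)),
        "accel": float(m),
    }

# Example: one finger at the center-right of the surface, at rest.
cursor = parse_2dcur_set(["set", 7, 0.75, 0.5, 0.0, 0.0, 0.0])
```

A real implementation would receive these messages over UDP with an OSC library and feed the decoded cursors into the gesture models.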
resource research Media and Technology
Creating and editing large graphs and node-link diagrams are crucial activities in many application areas. For these tasks, we consider multi-touch and pen input on interactive surfaces very promising. This fundamental work presents a user study investigating how people edit node-link diagrams on an interactive tabletop. The study covers a set of basic operations, such as creating, moving, and deleting diagram elements. Participants were asked to perform spontaneous gestures for 14 given tasks. They could interact in three different ways: using one hand, using both hands, or using pen and hand together.
DATE:
TEAM MEMBERS: Mathias Frisch, Jens Heydekorn, Raimund Dachselt
resource research Media and Technology
Zooming user interfaces are increasingly popular on mobile devices with touch screens. Swiping and pinching finger gestures anywhere on the screen manipulate the displayed portion of a page, and taps open objects within the page. This makes navigation easy but limits other manipulations of objects that would be supported naturally by the same gestures, notably cut and paste, multiple selection, and drag and drop. A popular device that suffers from this limitation is Apple's iPhone. In this paper, we present Bezel Swipe, an interaction technique that supports multiple selection, cut, and copy.
DATE:
TEAM MEMBERS: Volker Roth, Thea Turner
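The core idea of a bezel swipe is that a gesture starting at the screen edge is disambiguated from ordinary pans and taps. A minimal sketch of that discrimination, assuming normalized touch coordinates and an arbitrary bezel margin (the threshold value and function name are illustrative, not from the paper):

```python
BEZEL_MARGIN = 0.05  # assumed width of the bezel zone, as a fraction of screen width

def is_bezel_swipe(touch_path, margin=BEZEL_MARGIN):
    """Classify a touch trajectory as a bezel swipe.

    A touch qualifies if it begins inside the left or right bezel zone
    and ends inward on the content area; gestures that start on the
    content area keep their normal meaning (pan, pinch, tap).
    """
    (x0, _), (x1, _) = touch_path[0], touch_path[-1]
    started_on_bezel = x0 <= margin or x0 >= 1.0 - margin
    ended_on_content = margin < x1 < 1.0 - margin
    return started_on_bezel and ended_on_content
```

This routing is what lets selection coexist with zooming and panning: only edge-originating strokes are diverted to the selection mode.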
resource research Media and Technology
Most current multi-touch capable interactive user interfaces for tabletops are built from custom toolkits that are decoupled from, and sit on top of, the "Desktop" provided by the underlying operating system. However, this approach requires that each individual touch system builds its own suite of touch-capable custom applications (such as photo browsers), usually resulting in limited functionality. In this paper, we propose a software architecture for supporting and integrating multi-touch capability on existing desktop systems, where multi-touch and multiple single-pointer input can be used.
DATE:
TEAM MEMBERS: Kelvin Cheng, Benjamin Itzstein, Paul Sztajer, Markus Rittenbruch
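One way to integrate multi-touch with an existing desktop, as the architecture above suggests, is to deliver raw touches to applications that opt in and degrade input to a single pointer for legacy windows. The sketch below illustrates that routing decision; all names (`route_touch`, the dict fields) are illustrative assumptions, not the paper's actual API:

```python
def route_touch(touch, window):
    """Decide what event a window receives for one raw touch.

    Multi-touch-aware windows get the touch as-is; for legacy windows
    only the primary (first) finger is synthesized into a pointer event,
    and additional fingers are dropped.
    """
    if window.get("multitouch"):              # window opted in to raw touches
        return ("touch", touch["id"])
    if touch["id"] == touch["primary_id"]:    # first finger drives the pointer
        return ("pointer", 0)
    return None                               # extra fingers have no legacy mapping

legacy_window = {"multitouch": False}
aware_window = {"multitouch": True}
```

The design choice here is backward compatibility: existing single-pointer applications keep working unchanged, while new applications can consume the full touch stream.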