Research

My current research focuses on multitouch and gesture-based interaction. I also still develop the video annotation software ANVIL from time to time. ANVIL is primarily used for the analysis of multimodal communication.

In the past I have done research on intelligent, emotional virtual agents and on gesture and sign language generation. During my time at DFKI (German Research Center for AI) I worked on dialog systems in the area of computational linguistics. My diploma thesis dealt with neural networks.

As a scientific reviewer I have been active for the EU and national research agencies (Germany, Switzerland, the Netherlands, and Iceland), for international conferences (CHI, IROS, UbiComp, IUI, LREC, SIGGRAPH), and for journals (e.g. ACM TIIS, ACM TOG, ACM TACCESS, Speech Communication). I am also a member of the editorial boards of the Journal on Multimodal User Interfaces (Springer) and the journal Multimodal Technologies and Interaction (MDPI).

In 2007 and 2015, my co-authors and I won the Best Paper Awards of the conferences IVA and INTERACT, respectively.

You can find more about my research on my Google Scholar and ResearchGate pages.

Google Scholar → ResearchGate →

Topics

  • Human-Computer Interaction
  • Multitouch
  • Gesture-based Interaction
  • Multimodal Interaction
  • Annotation Tools

Anvil

The video annotation research tool ANVIL supports multi-level annotation of videos for systematic analysis, e.g. of multimodal communication.

Anvil Homepage →

Publications

Selected publications:

Nguyen, Q., and Kipp, M. (2015) Where to Start? Exploring the efficiency of translation movements on multitouch devices. In: Proceedings of INTERACT. Best Paper Award

Kipp, M. (2014) ANVIL: The Video Annotation Research Tool. In: Durand, J., Gut, U., Kristoffersen, G. (eds.) Handbook of Corpus Phonology, Oxford University Press, Chapter 21, pp. 420-436.

Nguyen, Q., Kipp, M. (2014) Orientation Matters: Efficiency of translation-rotation multitouch tasks. In: Proceedings of CHI.

Kipp, M., Martin, J.-C. (2009) Gesture and Emotion: Can basic gestural form features discriminate emotions? In: Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII-09), IEEE Press. Nominated for Best Paper

Kipp, M., Neff, M., Kipp, K.H. and Albrecht, I. (2007) Toward Natural Gesture Synthesis: Evaluating gesture units in a data-driven approach. In: Proceedings of the 7th International Conference on Intelligent Virtual Agents, Springer, pp. 15-28. Best Paper Award

Kipp, M. (1998) The Neural Path to Dialogue Acts. In: Proceedings of the 13th European Conference on Artificial Intelligence (ECAI).

Show all →

Talks

Selected talks:

01/2016: Digital vs. In-Person: Examples from Computer Science Education, Tag der Lehre, Hochschule Augsburg

03/2015: Anvil - Efficient Annotation, Videos and Motion Capture, LIMSI, Paris

11/2012: Keynote Tools for Multimodal Behavior Analysis and Synthesis, 4th Nordic Symposium on Multimodal Communication, Gothenburg, Sweden

09/2012: Sign Language Avatars on the Web: Possibilities and Limits, Di-Ji Congress "Verständlich informiert - im Job integriert", Aktionsbündnis für barrierefreie Informationstechnik, Berlin

Show all →