An Exploratory Study of Marking Menu Selection by Visually Impaired People
Despite recent advances in smartphone accessibility for blind people, current solutions rely on screen readers and voice commands, which are not ideal for visually impaired users in mobile situations. By contrast, recent research has shown that marking menus (Kurtenbach, 1993) can benefit users' eyes-free interaction. However, the literature lacks accessibility implications and adaptations to the needs of blind people. This talk examines blind people's ability to perform marking menu selections using the 3D motion of a smartphone to invoke smartphone functions. I present the range of marking gestures that a blind person can perform at each menu level, and the number of levels that a blind person can successfully cope with. Based on the experimental results, design guidelines are presented.
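To make the interaction concrete, a hierarchical marking menu resolves each stroke to one of several angular slices, one stroke per menu level. The sketch below is a minimal, hypothetical illustration of that idea in Python; the slice count, labels, and function names are illustrative assumptions, not details taken from the study.

```python
import math

# Illustrative sketch of hierarchical marking-menu selection.
# An 8-slice menu: each stroke direction maps to one slice, and a
# sequence of strokes (one per level) walks down the menu hierarchy.
# Labels and parameters are hypothetical, not from the talk.

LABELS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def classify_stroke(dx, dy, n_slices=8):
    """Map a stroke vector (dx, dy) to a slice index (0 = East, counterclockwise)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    width = 2 * math.pi / n_slices
    # Offset by half a slice so each slice is centered on its cardinal direction.
    return int(((angle + width / 2) % (2 * math.pi)) // width)

def select_path(strokes):
    """Resolve one stroke per menu level into a list of slice labels."""
    return [LABELS[classify_stroke(dx, dy)] for dx, dy in strokes]

# Example: a two-level selection -- stroke up, then stroke right.
print(select_path([(0, 1), (1, 0)]))  # -> ['N', 'E']
```

The study's central questions map directly onto this sketch's parameters: how many slices per level (`n_slices`) and how many levels (the length of the stroke sequence) a blind user can reliably handle.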
In this talk, I will also outline other projects that I conducted in industry as well as in academia. These projects include (1) the effects of motion parallax and stereoscopic cues on eye gaze and pose estimation with a life-size cylindrical telepresence pod, (2) the tradeoffs between preserving privacy and sharing workspaces via display curvature and peek user interfaces, (3) a proximity-based game of tag using e-textile displays, (4) interacting with 3D human anatomy via a 360° cylindrical display, and (5) the development of a large wall-mounted display interface for visually impaired people, granting them equal access to the public information available to sighted people.
Kibum Kim has been an assistant professor in the Department of Game and Mobile Engineering at Keimyung University, Daegu, since September 2016. He was an assistant professor at the Center for Human-Engaged Computing (CHEC) at Kochi University of Technology, Japan, from 2012 to 2014, and an adjunct assistant professor in the School of Computing at Queen's University, Canada, from 2010 to 2012. He earned his PhD in computer science at Virginia Tech in 2006, then spent four years (2006–2010) as a senior human interaction research engineer at the Motorola Applied Research Center in a Chicago suburb. His research interests span human-computer interaction (HCI), computer-supported cooperative work (CSCW), computer-supported collaborative learning (CSCL), and virtual, augmented, and mixed reality (VR/AR/MR). He has written over 30 research publications, and his doctoral work advanced to the semi-finals of the ACM Student Research Competition at SIGCSE 2007. He served on the program committee of the ACM Conference on Tangible, Embedded and Embodied Interaction (TEI 2012). He was awarded a Canada NSERC Discovery grant with an Early Career Researcher supplement, and a Japan Society for the Promotion of Science KAKEN grant. He received his master's degree in computer science from the University of Illinois at Urbana-Champaign and his bachelor's degree in computer science from Korea University in Seoul. Currently, Dr. Kim is working on projects supporting collaboration through virtual and augmented reality technology. In his spare time, he enjoys mountain hiking and running.