Realizing Multimodality for Mobile Users

Dr. Inderpal Singh Mumick, Chairman and CEO
Kirusa


Abstract

Multimodal Solutions and Technologies

Today's user interfaces on mobile devices are severely limited, hindering the growth and mass adoption of wireless mobile applications. While mobile operators have made major investments in network infrastructure and technology, usability problems unique to mobile devices have prevented these operators from realizing a return on this investment. Visual interfaces such as WAP and i-Mode, as well as the pocket browsers on MS Smartphone, PalmOS, and Symbian devices, are limited by the physical form factor of the screens and keypads on mobile devices. Voice interfaces overcome many of the problems of visual interfaces but are not practical for conveying graphical information, structured data such as tables, or memory-straining information such as long lists, complex instructions, or numbers. For example, it may be difficult for a user to type a reply to an SMS message on a phone keypad, yet far easier to speak the reply. In this presentation, we will introduce the "multimodal user interface," which combines voice and visual interfaces so that mobile applications can exploit the strengths and mitigate the weaknesses of both modes. This new interface technology, known as multimodality, enables a more natural and efficient way to interact with mobile applications. A user can employ the keypad, stylus, screen, speech, and hearing within the same application, choosing the mode most appropriate for each interaction. We will show examples of multimodal applications, discuss the technical challenges in enabling multimodality on mobile devices and in developing multimodal applications, and present the solutions being developed in the industry, focusing on Kirusa's multimodal solutions.