"This paper presents description and evaluation of input and output modalities used in a sign-language-enabled information kiosk. The kiosk was developed for experiments on interaction between computers and deaf users. The input modalities used are automatic computer-vision-based sign language recognition, automatic speech recognition (ASR) and a touchscreen. The output modalities are presented on a screen displaying 3D signing avatar, and on a touchscreen showing special graphical user interface for the Deaf. The kiosk was tested on a dialogue providing information about train connections, but the scenario can be easily changed to e.g. SL tutoring tool, SL dictionary or SL game. This scenario expects that both deaf and hearing people can use the kiosk. This is why both automatic speech recognition and automatic sign language recognition are used as input modalities, and signing avatar and written text as output modalities (in several languages). The human-computer interaction is controlled by a comput" . "Input and output modalities used in a sign-language-enabled information kiosk"@en . . "Input and output modalities used in a sign-language-enabled information kiosk"@en . . "information kiosk; sign language; multimodal human-computer interfaces"@en . "RIV/49777513:23520/09:00504614!RIV11-AV0-23520___" . "Hr\u00FAz, Marek" . "978-5-8088-0442-5" . "This paper presents description and evaluation of input and output modalities used in a sign-language-enabled information kiosk. The kiosk was developed for experiments on interaction between computers and deaf users. The input modalities used are automatic computer-vision-based sign language recognition, automatic speech recognition (ASR) and a touchscreen. The output modalities are presented on a screen displaying 3D signing avatar, and on a touchscreen showing special graphical user interface for the Deaf. 
The kiosk was tested on a dialogue providing information about train connections, but the scenario can be easily changed to e.g. SL tutoring tool, SL dictionary or SL game. This scenario expects that both deaf and hearing people can use the kiosk. This is why both automatic speech recognition and automatic sign language recognition are used as input modalities, and signing avatar and written text as output modalities (in several languages). The human-computer interaction is controlled by a comput"@en . . "23520" . "3"^^ . . . "Petrohrad" . "4"^^ . "Input and output modalities used in a sign-language-enabled information kiosk" . "Karpov, Alexey" . "P(1ET101470416)" . . "Input and output modalities used in a sign-language-enabled information kiosk" . "Campr, Pavel" . "319688" . "Santemiz, Pinar" . . "SPECOM'2009 : proceedings" . "\u017Delezn\u00FD, Milo\u0161" . "6"^^ . . "2009-01-01+01:00"^^ . "Aran, Oya" . . . . . . "RIV/49777513:23520/09:00504614" . . "SPIIRAS" . . . "St. Petersburg" . . . "[9B511644CB85]" .
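The abstract describes a multimodal architecture: several input recognizers (sign, speech, touch) feed a single dialogue manager, which then renders its reply on the modalities suited to the user. The sketch below illustrates that routing idea only; it is a minimal hypothetical example, and every class, function, and string in it is an assumption for illustration, not the paper's actual API or data.

```python
from dataclasses import dataclass

# Hypothetical sketch of the kiosk's control flow: each recognizer
# (vision-based SL recognition, ASR, touchscreen) normalizes the user's
# request to text before it reaches a shared dialogue manager.
# All names here are illustrative assumptions, not the paper's code.

@dataclass
class UserInput:
    modality: str   # "sign", "speech", or "touch"
    text: str       # recognizer output, already normalized to text

def dialogue_manager(inp: UserInput) -> str:
    """Toy train-connection dialogue: map a recognized query to a reply."""
    if "train" in inp.text.lower():
        return "Next train departs at 14:32 from platform 3."
    return "Please ask about train connections."

def render(reply: str, user_is_deaf: bool) -> dict:
    """Pick output modalities: written text always, signing avatar for deaf users."""
    outputs = {"text": reply}
    if user_is_deaf:
        outputs["avatar"] = f"[3D avatar signs: {reply}]"
    return outputs

# Example: a deaf user signs a query; after recognition it is routed
# through the same dialogue manager as speech or touch input would be.
query = UserInput(modality="sign", text="When is the next train?")
print(render(dialogue_manager(query), user_is_deaf=True))
```

The design point mirrored here is that modality fusion happens before dialogue management, so the dialogue logic stays identical for deaf and hearing users; only the rendering step differs.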