This HTML5 document contains 46 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.
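To illustrate how such a processor can pick the RDF statements out of the page, here is a minimal sketch using only Python's standard-library HTML parser. The markup in the example string is hypothetical (modeled on two of the statements listed below); a full Microdata processor would additionally resolve `itemscope`/`itemtype` nesting and relative property names.

```python
from html.parser import HTMLParser

class MicrodataScanner(HTMLParser):
    """Collects (itemprop, content) pairs from HTML5 Microdata markup.
    Illustrative only: ignores itemscope nesting and non-meta value rules."""
    def __init__(self):
        super().__init__()
        self.props = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            # For meta/link-style properties the value is in 'content'
            self.props.append((attrs["itemprop"], attrs.get("content")))

# Hypothetical fragment mirroring two statements from this document
html = """
<div itemscope itemtype="http://www.w3.org/2004/02/skos/core#Concept">
  <meta itemprop="http://purl.org/dc/terms/title"
        content="Input and output modalities used in a sign-language-enabled information kiosk">
  <meta itemprop="http://www.w3.org/2004/02/skos/core#notation"
        content="RIV/49777513:23520/09:00504614!RIV11-AV0-23520___">
</div>
"""

scanner = MicrodataScanner()
scanner.feed(html)
for prop, value in scanner.props:
    print(prop, "=>", value)
```

Each `itemprop`/`content` pair corresponds to one predicate/object of an embedded RDF statement; the enclosing `itemscope` element supplies the subject.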

Namespace Prefixes

Prefix    IRI
n7        http://linked.opendata.cz/ontology/domain/vavai/riv/typAkce/
dcterms   http://purl.org/dc/terms/
n19       http://purl.org/net/nknouf/ns/bibtex#
n10       http://localhost/temp/predkladatel/
n9        http://linked.opendata.cz/resource/domain/vavai/projekt/
n5        http://linked.opendata.cz/resource/domain/vavai/riv/tvurce/
n21       http://linked.opendata.cz/ontology/domain/vavai/
n8        https://schema.org/
s         http://schema.org/
skos      http://www.w3.org/2004/02/skos/core#
n4        http://linked.opendata.cz/ontology/domain/vavai/riv/
n17       http://linked.opendata.cz/resource/domain/vavai/vysledek/RIV%2F49777513%3A23520%2F09%3A00504614%21RIV11-AV0-23520___/
n2        http://linked.opendata.cz/resource/domain/vavai/vysledek/
rdf       http://www.w3.org/1999/02/22-rdf-syntax-ns#
n11       http://linked.opendata.cz/ontology/domain/vavai/riv/klicoveSlovo/
n13       http://linked.opendata.cz/ontology/domain/vavai/riv/duvernostUdaju/
xsdh      http://www.w3.org/2001/XMLSchema#
n18       http://linked.opendata.cz/ontology/domain/vavai/riv/jazykVysledku/
n16       http://linked.opendata.cz/ontology/domain/vavai/riv/aktivita/
n20       http://linked.opendata.cz/ontology/domain/vavai/riv/druhVysledku/
n15       http://linked.opendata.cz/ontology/domain/vavai/riv/obor/
n14       http://reference.data.gov.uk/id/gregorian-year/

Statements

Subject Item
n2:RIV%2F49777513%3A23520%2F09%3A00504614%21RIV11-AV0-23520___
rdf:type
skos:Concept n21:Vysledek
dcterms:description
This paper presents description and evaluation of input and output modalities used in a sign-language-enabled information kiosk. The kiosk was developed for experiments on interaction between computers and deaf users. The input modalities used are automatic computer-vision-based sign language recognition, automatic speech recognition (ASR) and a touchscreen. The output modalities are presented on a screen displaying 3D signing avatar, and on a touchscreen showing special graphical user interface for the Deaf. The kiosk was tested on a dialogue providing information about train connections, but the scenario can be easily changed to e.g. SL tutoring tool, SL dictionary or SL game. This scenario expects that both deaf and hearing people can use the kiosk. This is why both automatic speech recognition and automatic sign language recognition are used as input modalities, and signing avatar and written text as output modalities (in several languages). The human-computer interaction is controlled by a comput
dcterms:title
Input and output modalities used in a sign-language-enabled information kiosk
skos:prefLabel
Input and output modalities used in a sign-language-enabled information kiosk
skos:notation
RIV/49777513:23520/09:00504614!RIV11-AV0-23520___
n4:aktivita
n16:P
n4:aktivity
P(1ET101470416)
n4:dodaniDat
n14:2011
n4:domaciTvurceVysledku
n5:3572072 n5:7180659 n5:4051351
n4:druhVysledku
n20:D
n4:duvernostUdaju
n13:S
n4:entitaPredkladatele
n17:predkladatel
n4:idSjednocenehoVysledku
319688
n4:idVysledku
RIV/49777513:23520/09:00504614
n4:jazykVysledku
n18:eng
n4:klicovaSlova
information kiosk; sign language; multimodal human-computer interfaces
n4:klicoveSlovo
n11:information%20kiosk n11:multimodal%20human-computer%20interfaces n11:sign%20language
n4:kontrolniKodProRIV
[9B511644CB85]
n4:mistoKonaniAkce
Petrohrad
n4:mistoVydani
St. Petersburg
n4:nazevZdroje
SPECOM'2009 : proceedings
n4:obor
n15:JD
n4:pocetDomacichTvurcuVysledku
3
n4:pocetTvurcuVysledku
6
n4:projekt
n9:1ET101470416
n4:rokUplatneniVysledku
n14:2009
n4:tvurceVysledku
Hrúz, Marek; Karpov, Alexey; Campr, Pavel; Santemiz, Pinar; Železný, Miloš; Aran, Oya
n4:typAkce
n7:WRD
n4:zahajeniAkce
2009-01-01+01:00
s:numberOfPages
4
n19:hasPublisher
SPIIRAS
n8:isbn
978-5-8088-0442-5
n10:organizacniJednotka
23520
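The same statements can be written in any RDF serialization. As a sketch, a few of the triples above expressed in Turtle (prefixes as in the table at the top of this page):

```turtle
@prefix skos:    <http://www.w3.org/2004/02/skos/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix n4:      <http://linked.opendata.cz/ontology/domain/vavai/riv/> .

<http://linked.opendata.cz/resource/domain/vavai/vysledek/RIV%2F49777513%3A23520%2F09%3A00504614%21RIV11-AV0-23520___>
    a skos:Concept, <http://linked.opendata.cz/ontology/domain/vavai/Vysledek> ;
    dcterms:title "Input and output modalities used in a sign-language-enabled information kiosk" ;
    skos:notation "RIV/49777513:23520/09:00504614!RIV11-AV0-23520___" ;
    n4:pocetTvurcuVysledku 6 ;
    n4:jazykVysledku <http://linked.opendata.cz/ontology/domain/vavai/riv/jazykVysledku/eng> .
```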