About: Semantic Spaces for Improving Language Modeling

An Entity of Type: http://linked.opendata.cz/ontology/domain/vavai/Vysledek, within Data Space: linked.opendata.cz, associated with source document(s)

Attributes | Values
rdf:type
Description
  • Language models are crucial for many tasks in NLP (Natural Language Processing), and n-grams are the best way to build them. Huge effort is being invested in improving n-gram language models. By introducing external information (morphology, syntax, partitioning into documents, etc.) into the models, a significant improvement can be achieved. The models can, however, be improved with no external information; smoothing is an excellent example of such an improvement. In this article we show another way of improving the models that also requires no external information. We examine patterns that can be found in large corpora by building semantic spaces (HAL, COALS, BEAGLE and others described in this article). These semantic spaces have never been tested in language modeling before. Our method uses semantic spaces and clustering to build classes for a class-based language model. The class-based model is then coupled with a standard n-gram model to create a very effective language model. Our experiments show that our models reduce the perplexity and improve the accuracy of n-gram language models with no external information added. Training of our models is fully unsupervised. Our models are very effective for inflectional languages, which are particularly hard to model. We show results for five different semantic spaces with different settings and different numbers of classes. The perplexity tests are accompanied by machine translation tests that prove the ability of the proposed models to improve the performance of a real-world application. (en)
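The pipeline the abstract outlines (build a semantic space, cluster word vectors into classes, couple the resulting class-based model with a standard n-gram model) can be sketched compactly. The following is a minimal illustrative sketch, not the authors' implementation: the toy corpus, the HAL-style window weighting, the class count `n_classes`, and the interpolation weight `lam` are all assumptions made for the example.

```python
# Minimal sketch of the abstract's pipeline: HAL-style semantic space ->
# k-means clustering into word classes -> class-based bigram model
# interpolated with a word bigram model. All settings here are illustrative.
import math
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# 1. HAL-style semantic space: distance-weighted co-occurrence counts
#    within a sliding window (weight = window - distance + 1).
window = 2
space = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for d in range(1, window + 1):
        if i + d < len(corpus):
            space[idx[w], idx[corpus[i + d]]] += window - d + 1

# 2. Cluster the word vectors into classes for the class-based model.
n_classes = 3  # assumed; the paper experiments with varying class counts
labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(space)
word_class = {w: labels[idx[w]] for w in vocab}

# 3. Collect word- and class-level bigram statistics.
word_bi, word_uni = Counter(zip(corpus, corpus[1:])), Counter(corpus)
cseq = [word_class[w] for w in corpus]
cls_bi, cls_uni = Counter(zip(cseq, cseq[1:])), Counter(cseq)

def p_word(w2, w1):  # add-one smoothed word bigram probability
    return (word_bi[(w1, w2)] + 1) / (word_uni[w1] + len(vocab))

def p_class(w2, w1):  # P(c2 | c1) * P(w2 | c2), the class-based component
    c1, c2 = word_class[w1], word_class[w2]
    p_cc = (cls_bi[(c1, c2)] + 1) / (cls_uni[c1] + n_classes)
    p_wc = word_uni[w2] / cls_uni[c2]
    return p_cc * p_wc

lam = 0.7  # interpolation weight (assumed, normally tuned on held-out data)
def p(w2, w1):
    return lam * p_word(w2, w1) + (1 - lam) * p_class(w2, w1)

# Perplexity of the interpolated model over the bigrams of the sequence.
logp = sum(math.log2(p(w2, w1)) for w1, w2 in zip(corpus, corpus[1:]))
print("perplexity:", 2 ** (-logp / (len(corpus) - 1)))
```

Linear interpolation is only one common way to couple the class-based model with the standard n-gram model; the point of the sketch is the structure of the pipeline, with the perplexity computation mirroring the evaluation the abstract describes.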
Title
  • Semantic Spaces for Improving Language Modeling (en)
skos:prefLabel
  • Semantic Spaces for Improving Language Modeling (en)
skos:notation
  • RIV/49777513:23520/14:43918497!RIV15-MSM-23520___
http://linked.open...avai/riv/aktivita
http://linked.open...avai/riv/aktivity
  • P(ED1.1.00/02.0090), S
http://linked.open...iv/cisloPeriodika
  • 1
http://linked.open...vai/riv/dodaniDat
http://linked.open...aciTvurceVysledku
http://linked.open.../riv/druhVysledku
http://linked.open...iv/duvernostUdaju
http://linked.open...titaPredkladatele
http://linked.open...dnocenehoVysledku
  • 44380
http://linked.open...ai/riv/idVysledku
  • RIV/49777513:23520/14:43918497
http://linked.open...riv/jazykVysledku
http://linked.open.../riv/klicovaSlova
  • Machine translation; Inflectional languages; Clustering; Purandare&Pedersen; Random Indexing; BEAGLE; COALS; HAL; Semantic spaces; Class-based language models (en)
http://linked.open.../riv/klicoveSlovo
http://linked.open...odStatuVydavatele
  • US - United States of America
http://linked.open...ontrolniKodProRIV
  • [9D750870FC81]
http://linked.open...i/riv/nazevZdroje
  • Computer Speech and Language
http://linked.open...in/vavai/riv/obor
http://linked.open...ichTvurcuVysledku
http://linked.open...cetTvurcuVysledku
http://linked.open...vavai/riv/projekt
http://linked.open...UplatneniVysledku
http://linked.open...v/svazekPeriodika
  • 28
http://linked.open...iv/tvurceVysledku
  • Konopík, Miloslav
  • Brychcín, Tomáš
issn
  • 0885-2308
number of pages
http://bibframe.org/vocab/doi
  • 10.1016/j.csl.2013.05.001
http://localhost/t...ganizacniJednotka
  • 23520