
Compositional distributional semantics for modelling natural language

Description

Distributional semantic word representations have become an integral part of numerous natural language processing pipelines in academia and industry. An open question is how these elementary representations can be composed to capture the meaning of longer units of text. In this talk, I will give an overview of compositional distributional models, their applications, and current research directions.

Abstract

Representing words as vectors in a high-dimensional space has a long history in natural language processing. Recently, neural network based approaches such as word2vec and GloVe have gained a substantial amount of popularity and have become a ubiquitous part of many NLP pipelines for a variety of tasks, ranging from sentiment analysis and text classification to machine translation, recognising textual entailment, and parsing.
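For intuition, here is a minimal sketch of the core idea behind such representations. The vectors below are toy, hand-picked values rather than real word2vec or GloVe embeddings, which are typically learned from large corpora; the point is only that semantic relatedness surfaces as geometric closeness, measured here with cosine similarity:

```python
import numpy as np

# Toy 4-dimensional "word vectors" (hypothetical values for illustration);
# real embeddings are usually 50-300 dimensional and corpus-trained.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.8, 0.9, 0.1, 0.3]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 for similar directions, near 0.0 for unrelated ones."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors["king"], vectors["queen"]))  # high: related words
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated words
```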

An important research problem is how to best leverage these word representations to build representations of longer units of text such as phrases and full sentences. Proposals range from simple pointwise vector operations to approaches inspired by formal semantics, deep learning methods that learn composition as part of an end-to-end system, and more structured formalisms such as anchored packed dependency trees.
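The simplest of these proposals, pointwise composition, can be sketched in a few lines. The vectors below are again placeholders; the two variants correspond to the well-known additive and multiplicative composition models from the compositional distributional semantics literature:

```python
import numpy as np

# Placeholder embeddings for the two words of a phrase like "red car".
red = np.array([0.2, 0.9, 0.1])
car = np.array([0.7, 0.3, 0.8])

# Additive composition: the phrase vector is the sum of the word vectors.
additive = red + car

# Multiplicative composition: the elementwise product emphasises
# dimensions on which both words are strongly active.
multiplicative = red * car

print(additive)        # [0.9 1.2 0.9]
print(multiplicative)  # [0.14 0.27 0.08]
```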

In this talk I will introduce a variety of compositional distributional models and outline how effective meaning representations beyond the word level can be built. I will furthermore provide an overview of the advantages of compositional distributional approaches, as well as their limitations. Lastly, I will discuss their merit for applications such as aspect-oriented sentiment analysis and question answering.
