How to Use Word Embeddings for Natural Language Processing

Abstract

This How-to Guide introduces computational methods for the analysis of word meaning, covering the main concepts of distributional semantics and word embedding models. At their core, distributional approaches to lexical semantics (i.e., the study of word meanings) are based on the idea that by looking at the context of use of a word (for instance, in terms of the other words it occurs with in a text), we can infer important features of its meaning. This simple idea has turned out to be extremely powerful because it can be directly implemented computationally from text data. Word embeddings have been used extensively in social science research, and they can highlight both expected and unexpected similarities between words, as well as how these change across factors such as time. This guide will summarise not only the strengths of word embeddings for applied text-based research but also their limitations and the features that need to be considered when using or training them in your own research.
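To make the distributional idea concrete, here is a minimal sketch of how context-of-use can be implemented computationally: each word is represented by counts of the words that appear around it, and similarity of meaning is then approximated by the cosine similarity of these count vectors. The toy corpus, the window size, and all function names below are illustrative assumptions, not part of the guide itself; real word embedding models (e.g., word2vec) learn dense vectors rather than raw counts, but the underlying intuition is the same.

```python
from collections import Counter
from math import sqrt

# Toy corpus (an illustrative assumption): each sentence is a list of tokens.
corpus = [
    "the cat drinks milk".split(),
    "the dog drinks water".split(),
    "the cat chases the dog".split(),
    "the dog chases the cat".split(),
]

# Fixed vocabulary so every word gets a vector of the same length.
vocab = sorted({w for sent in corpus for w in sent})

def context_vector(word, window=2):
    """Count words co-occurring with `word` within +/- `window` tokens."""
    counts = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == word:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[sent[j]] += 1
    return [counts[v] for v in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

cat, dog, milk = (context_vector(w) for w in ("cat", "dog", "milk"))
print(cosine(cat, dog))   # high: "cat" and "dog" occur in similar contexts
print(cosine(cat, milk))  # lower: "cat" and "milk" share few contexts
```

Because "cat" and "dog" occur with the same surrounding words ("the", "drinks", "chases"), their vectors end up close together, even though the two words never co-occur in the same role; this is exactly the kind of similarity that trained embeddings capture at scale.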
