
Natural Language Processing

FOUNDATION
By: Patrick Rafail & Isaac Freitas | Edited by: Paul Atkinson, Sara Delamont, Alexandru Cernat, Joseph W. Sakshaug & Richard A. Williams | Published: 2020 | Length: 10 | DOI:

Natural language processing (NLP) is the use of computer technology to assist in, or complete, tasks involving processing, categorizing, analyzing, or interpreting the meaning of human language. NLP is an interdisciplinary body of research drawing from linguistics, computer science, artificial intelligence, machine learning, and the social sciences. This entry introduces NLP, with emphasis on its strengths and weaknesses for social scientific applications. It reviews major concepts in NLP, such as the representation of natural language as strings, and then discusses commonly used data collection strategies for rapidly building large databases of natural language for analysis. The entry also introduces major techniques for efficiently processing natural language using computational routines, including counting strings and substrings, case manipulation, string substitution, tokenization, stemming and lemmatizing, part-of-speech tagging, chunking, named entity recognition, feature extraction, and sentiment analysis. It also discusses a variety of dimensionality reduction and statistical learning techniques, including principal components analysis, topic models, hidden Markov models, and support vector machines, that are widely used in NLP-based research projects. In addition, the entry provides a discussion of Python and R, two computer programming languages well suited for NLP, with specific recommendations for add-on packages designed to streamline and simplify research. The entry concludes with a more general discussion of the strengths and weaknesses of NLP along with important directions for future research.
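Several of the string-processing routines named above (counting strings, case manipulation, string substitution, and tokenization) can be illustrated with a minimal sketch in Python, one of the two languages the entry recommends. This sketch uses only the standard library, not the specialized add-on packages the entry discusses, and the sample text is invented for illustration.

```python
import re
from collections import Counter

text = ("Natural language processing (NLP) helps analyze human language. "
        "NLP draws on linguistics and computer science.")

# Case manipulation: normalize everything to lowercase so that
# "NLP" and "nlp" count as the same string.
normalized = text.lower()

# String substitution: replace punctuation with spaces via a regex.
cleaned = re.sub(r"[^\w\s]", " ", normalized)

# Tokenization: split the cleaned text into word tokens on whitespace.
tokens = cleaned.split()

# Counting strings: tally how often each token occurs.
counts = Counter(tokens)
print(counts["nlp"])       # occurs twice in the sample text
print(counts["language"])  # occurs twice in the sample text
```

Real projects would typically replace the whitespace split with a proper tokenizer and add stemming or lemmatizing before counting, so that inflected forms such as "analyze" and "analyzing" collapse to a single term.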
