The majority of natural language processing (NLP) is English language processing, and while there is good language technology support for (standard varieties of) English, support for Albanian, Burmese, or Cebuano, and for most other languages, remains limited. Bridging this digital divide is important for scientific and democratic reasons and also represents enormous growth potential. A key challenge is learning to align the basic meaning-bearing units of different languages. In this book, the authors survey and discuss recent and historical work on supervised and unsupervised learning of such alignments. Specifically, the book focuses on so-called cross-lingual word embeddings.
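One classic way to obtain such embeddings is to learn a linear map between two independently trained monolingual embedding spaces using a small seed dictionary of translation pairs. The sketch below illustrates the orthogonal Procrustes solution to that mapping problem; the random vectors, the 50-dimensional space, and the synthetic "dictionary" are invented stand-ins for real pre-trained embeddings.

```python
# A minimal sketch of supervised cross-lingual embedding alignment via
# orthogonal Procrustes. All data here is synthetic and for illustration only.
import numpy as np

def procrustes_align(X, Y):
    """Return the orthogonal map W minimizing ||XW - Y||_F, where the rows
    of X and Y are embeddings of translation pairs from a seed dictionary."""
    U, _, Vt = np.linalg.svd(X.T @ Y)  # SVD of the cross-covariance matrix
    return U @ Vt

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 50))                     # "source-language" vectors
true_W = np.linalg.qr(rng.normal(size=(50, 50)))[0]  # hidden orthogonal map
tgt = src @ true_W + 0.01 * rng.normal(size=(100, 50))  # noisy "translations"

W = procrustes_align(src, tgt)
# Mapped source vectors now live in the target space, so translations can be
# retrieved by nearest-neighbour search over the target vocabulary.
print(np.allclose(src @ W, tgt, atol=0.1))  # True: the hidden map is recovered
```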
This book provides a comprehensive introduction to Conversational AI. While the idea of interacting with a computer using voice or text goes back a long way, it is only in recent years that this idea has become a reality with the emergence of digital personal assistants, smart speakers, and chatbots. Advances in AI, particularly in deep learning, along with the availability of massive computing power and vast amounts of data, have led to a new generation of dialogue systems and conversational interfaces. Current research in Conversational AI focuses mainly on the application of machine learning and statistical data-driven approaches to the development of dialogue systems. However, it is important...
Thanks to the availability of texts on the Web in recent years, increased knowledge and information have been made available to broader audiences. However, the way in which a text is written—its vocabulary, its syntax—can make it difficult for many people to read and understand, especially those with poor literacy, cognitive or linguistic impairment, or limited knowledge of the language of the text. Texts containing uncommon words or long and complicated sentences can be difficult for people to read and understand, as well as difficult for machines to analyze. Automatic text simplification is the process of transforming a text into another text which, while ideally conveying the same message, will be easier to read and understand by a broader audience.
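As a concrete, if drastically reduced, illustration of the lexical side of this process, the sketch below replaces infrequent words with more frequent synonyms. The frequency counts, the synonym table, and the threshold are toy assumptions standing in for the corpus statistics and lexical resources (e.g. WordNet) that real simplification systems rely on.

```python
# A minimal sketch of lexical simplification: swap uncommon words for more
# frequent synonyms. All resources below are toy stand-ins for illustration.
word_freq = {"help": 9000, "assist": 300, "buy": 8000, "purchase": 500}
synonyms = {"assist": ["help"], "purchase": ["buy"]}

def simplify(sentence, threshold=1000):
    out = []
    for word in sentence.split():
        if word_freq.get(word, 0) < threshold and word in synonyms:
            # Choose the most frequent candidate as the "simpler" alternative.
            candidates = synonyms[word] + [word]
            word = max(candidates, key=lambda w: word_freq.get(w, 0))
        out.append(word)
    return " ".join(out)

print(simplify("please assist me to purchase it"))  # -> please help me to buy it
```

A full system would also need to preserve grammaticality and meaning when substituting, and to simplify syntax (for example, by splitting long sentences), which is where the harder research problems lie.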
The attempt to spot deception through its correlates in human behavior has a long history. Until recently, these efforts concentrated on identifying individual "cues" that might occur with deception. However, with the advent of computational means to analyze language and other human behavior, we now have the ability to determine whether there are consistent clusters of differences in behavior that might be associated with a false statement as opposed to a true one. While its focus is on verbal behavior, this book describes a range of behaviors—physiological and gestural as well as verbal—that have been proposed as indicators of deception. An overview of the primary psychological and cognitive...
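A minimal sketch of what such cluster-based analysis can look like computationally: each statement is reduced to a vector of shallow verbal cues, and a linear classifier is fit over those vectors. The cue set, the two toy statements, and their labels are invented for illustration and are not drawn from the book.

```python
# A minimal sketch of cue-based deception classification on invented data.
from sklearn.linear_model import LogisticRegression

def cue_features(text):
    """Map a statement to a few shallow verbal cues (toy feature set)."""
    words = text.lower().split()
    return [
        len(words),                                       # statement length
        sum(w in ("i", "me", "my") for w in words),       # first-person pronouns
        sum(w in ("never", "not", "no") for w in words),  # negations
    ]

statements = ["i never took the money", "we left at noon and came back at two"]
labels = [1, 0]  # 1 = deceptive, 0 = truthful (toy labels)

clf = LogisticRegression().fit([cue_features(s) for s in statements], labels)
print(clf.predict([cue_features("i did not see anyone")]))
```

A real study would of course need far more data and validated cue inventories; the point is that classifiers over clusters of cues replace the older one-cue-at-a-time approach.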
This book explores the cognitive plausibility of computational language models and why it is an important factor in their development and evaluation. The authors present the idea that more can be learned about the cognitive plausibility of computational language models by linking signals of cognitive processing load in humans to interpretability methods that allow for exploration of the hidden mechanisms of neural models. The book identifies limitations in applying the existing methodology for representational analyses to contextualized settings and critiques the current emphasis on form over more grounded approaches to modeling language. The authors discuss how novel techniques for transfer and curriculum learning could lead to cognitively more plausible generalization capabilities in models. The book also highlights the importance of instance-level evaluation and includes a thorough discussion of the ethical considerations that may arise throughout the various stages of cognitive plausibility research.
This collection critically examines the practical impacts of machine translation (MT) through the lens of an ethics of care. It addresses the ideological issues in MT development linked to social hierarchies and explores the transformative potential of care ethics for more equitable technological progress. The volume examines the ideological constructs behind MT as a labor-saving technology, how these constructs are embedded in both its development and its social reception, and how they manifest in biased outputs. The chapters cover the cultural roots of translation automation, its legal and political implications, and the needs of various stakeholders. These stakeholders include lay users, Indigenous...
Ruslan Mitkov's highly successful Oxford Handbook of Computational Linguistics has been substantially revised and expanded in this second edition. Alongside updated accounts of the topics covered in the first edition, it includes 17 new chapters on subjects such as semantic role-labelling, text-to-speech synthesis, translation technology, opinion mining and sentiment analysis, and the application of Natural Language Processing in educational and biomedical contexts, among many others. The volume is divided into four parts that examine, respectively: the linguistic fundamentals of computational linguistics; the methods and resources used, such as statistical modelling, machine learning, and corpus annotation; key language processing tasks including text segmentation, anaphora resolution, and speech recognition; and the major applications of Natural Language Processing, from machine translation to author profiling. The book will be an essential reference for researchers and students in computational linguistics and Natural Language Processing, as well as those working in related industries.
This book constitutes the refereed proceedings of the 9th International Conference on Advances in Natural Language Processing, PolTAL 2014, held in Warsaw, Poland, in September 2014. The 27 revised full papers and 20 revised short papers presented were carefully reviewed and selected from 83 submissions. The papers are organized in topical sections on morphology, named entity recognition, and term extraction; lexical semantics; sentence-level syntax, semantics, and machine translation; discourse, coreference resolution, automatic summarization, and question answering; text classification, information extraction, and information retrieval; and speech processing, language modelling, and spell- and grammar-checking.
Natural language processing (NLP) went through a profound transformation in the mid-1980s, when it shifted to make heavy use of corpora and data-driven techniques to analyze language. Since then, the use of statistical techniques in NLP has evolved in several ways. One such evolution took place in the late 1990s and early 2000s, when full-fledged Bayesian machinery was introduced to NLP. This Bayesian approach addresses various shortcomings of the frequentist approach and enriches it, especially in the unsupervised setting, where statistical learning is done without target prediction examples. In this book, we cover the methods and algorithms that are needed to fluently read Bayesian learning papers in NLP and to do research in the area.
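To make the frequentist-Bayesian contrast concrete, consider estimating word probabilities from sparse counts: the maximum-likelihood estimate assigns zero probability to any unseen word, while a symmetric Dirichlet prior over the distribution yields a smoothed posterior-mean estimate. The counts and the concentration parameter alpha below are toy values chosen purely for illustration.

```python
# A minimal sketch of the frequentist vs. Bayesian contrast on toy data:
# MLE word probabilities vs. the posterior mean under a Dirichlet prior.
counts = {"the": 5, "cat": 2, "sat": 1, "mat": 0}  # toy corpus counts
total = sum(counts.values())
vocab = len(counts)
alpha = 0.5  # Dirichlet concentration (an assumed, illustrative value)

for word, c in counts.items():
    mle = c / total                                # frequentist estimate
    bayes = (c + alpha) / (total + alpha * vocab)  # Dirichlet posterior mean
    print(f"{word}: MLE={mle:.3f}  Bayes={bayes:.3f}")
# "mat" gets probability 0 under the MLE but 0.05 under the posterior mean,
# one reason the Bayesian machinery is attractive in unsupervised settings.
```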