Learn how to build machine translation systems with deep learning from the ground up, from basic concepts to cutting-edge research.
This book conveys the fundamentals of Linked Lexical Knowledge Bases (LLKBs) and sheds light on their different aspects from various perspectives, focusing on their construction and use in natural language processing (NLP). It characterizes a wide range of both expert-built and collaboratively constructed lexical knowledge bases. Only basic familiarity with NLP is required; the book is written for both students and researchers in NLP and related fields who are interested in knowledge-based approaches to language analysis and their applications. Lexical Knowledge Bases (LKBs) are indispensable in many areas of natural language processing, as they encode human knowledge of language in...
Embeddings have undoubtedly been one of the most influential research areas in Natural Language Processing (NLP). Encoding information into a low-dimensional vector representation that integrates easily into modern machine learning models has played a central role in the development of NLP. Embedding techniques initially focused on words, but attention soon shifted to other forms: from graph structures, such as knowledge bases, to other types of textual content, such as sentences and documents. This book provides a high-level synthesis of the main embedding techniques in NLP, broadly construed. It starts by explaining conventional word vector space models and word embeddings (e.g., Word2Vec and GloVe) and then moves to other types of embeddings, such as word sense, sentence, document, and graph embeddings. The book also provides an overview of recent developments in contextualized representations (e.g., ELMo and BERT) and explains their potential in NLP. Throughout the book, the reader can find both the essential information needed to understand a topic from scratch and a broad overview of the most successful techniques developed in the literature.
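As a minimal sketch of the core idea described above (not taken from the book), words are represented as low-dimensional vectors and semantic relatedness is estimated with cosine similarity. The vectors below are hypothetical toy values; real Word2Vec or GloVe embeddings typically have 100-300 dimensions and are learned from large corpora.

```python
import numpy as np

# Hypothetical toy 4-dimensional word embeddings (for illustration only).
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.78, 0.70, 0.12, 0.08]),
    "apple": np.array([0.05, 0.10, 0.90, 0.70]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; close to 1.0 means similar direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low  (~0.19)
```

The same similarity computation carries over to the other embedding types the book surveys (sense, sentence, document, and graph embeddings), since all of them live in comparable vector spaces.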
The majority of natural language processing (NLP) is English language processing, and while there is good language technology support for (standard varieties of) English, support for Albanian, Burmese, or Cebuano--and most other languages--remains limited. Bridging this digital divide is important for scientific and democratic reasons, and it also represents enormous growth potential. A key challenge is learning to align the basic meaning-bearing units of different languages. In this book, the authors survey and discuss recent and historical work on supervised and unsupervised learning of such alignments. Specifically, the book focuses on so-called cross-lingual word embeddings.
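One standard supervised approach to such alignment, which the sketch below illustrates under assumed random placeholder data rather than real pretrained embeddings, is to learn an orthogonal mapping between two monolingual embedding spaces from a small seed dictionary (the orthogonal Procrustes solution).

```python
import numpy as np

# X: source-language embeddings, Y: target-language embeddings for the same
# seed-dictionary word pairs (row i of X translates to row i of Y).
# Random placeholders here; a real setting would load pretrained vectors.
rng = np.random.default_rng(0)
dim, n_pairs = 50, 200
X = rng.normal(size=(n_pairs, dim))   # e.g., English vectors
Y = rng.normal(size=(n_pairs, dim))   # e.g., German vectors

# Solve min_W ||XW - Y||_F subject to W orthogonal (Procrustes):
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# Map source vectors into the target space; nearest-neighbour search there
# then retrieves translation candidates for unseen words.
mapped = X @ W
```

Unsupervised variants discussed in this line of work replace the seed dictionary with adversarial or distribution-matching objectives, but the mapping step itself is typically the same.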
This is the first book to provide an integrated view of prepositions from morphology to reasoning, via syntax and semantics. It offers new insights into applied and formal linguistics and cognitive science. It underlines the importance of prepositions in a number of computational linguistics applications, such as information retrieval and machine translation. The book presents a wide range of views and applications across various linguistic frameworks.
This book constitutes the refereed proceedings of the 21st International Conference on Text, Speech, and Dialogue, TSD 2018, held in Brno, Czech Republic, in September 2018. The 56 regular papers were carefully reviewed and selected from numerous submissions. They focus on topics such as corpora and language resources, speech recognition, tagging, classification and parsing of text and speech, speech and spoken language generation, semantic processing of text and search, integrating applications of text and speech processing, machine translation, automatic dialogue systems, multimodal techniques and modeling.
This book constitutes the refereed proceedings of the 7th International Conference on Computational Linguistics and Intelligent Text Processing, held in February 2006. The 43 revised full papers and 16 revised short papers presented together with three invited papers were carefully reviewed and selected from 176 submissions. The papers are structured into two parts and organized in topical sections on computational linguistics research.