The majority of natural language processing (NLP) is English language processing, and while there is good language technology support for (standard varieties of) English, support for Albanian, Burmese, or Cebuano, and most other languages, remains limited. Being able to bridge this digital divide is important for scientific and democratic reasons but also represents an enormous growth potential. A key challenge for this to happen is learning to align basic meaning-bearing units of different languages. In this book, the authors survey and discuss recent and historical work on supervised and unsupervised learning of such alignments. Specifically, the book focuses on so-called cross-lingual word embeddings.
This book presents a taxonomy framework and survey of methods relevant to explaining the decisions and analyzing the inner workings of Natural Language Processing (NLP) models. The book is intended to provide a snapshot of Explainable NLP, though the field continues to rapidly grow. The book is intended to be both readable by first-year M.Sc. students and interesting to an expert audience. The book opens by motivating a focus on providing a consistent taxonomy, pointing out inconsistencies and redundancies in previous taxonomies. It goes on to present (i) a taxonomy or framework for thinking about how approaches to explainable NLP relate to one another; (ii) brief surveys of each of the classes in the taxonomy, with a focus on methods that are relevant for NLP; and (iii) a discussion of the inherent limitations of some classes of methods, as well as how to best evaluate them. Finally, the book closes by providing a list of resources for further research on explainability.
This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason such improvements are possible is data sparsity, i.e., the limited amount of labeled data available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. This book is intended to be both readable by first-year students and interesting to an expert audience. My intention was to introduce...
This book constitutes the proceedings of the 7th International Conference on Advances in Natural Language Processing held in Reykjavik, Iceland, in August 2010.
With the release of ChatGPT, large language models (LLMs) have become a prominent topic of international public and scientific debate. The genie is out of the bottle, but does it have a mind? Can philosophical considerations help us to work out how we can live with such smart machines? In this book, distinguished philosophers explore questions such as whether these new machines are able to act, whether they are social agents, whether they have communicative skills, and whether they might even become conscious. The book includes contributions from Syed AbuMusab, Constant Bonard, Stephen Butterfill, Daniel Dennett, Paula Droege, Keith Frankish, Frederic Gilbert, Ying-Tung Lin, Sven Nyholm, Joshua Rust, Eric Schwitzgebel, Henry Shevlin, Anna Strasser, Alessio Tacca, Michael Wilby, and, as a bonus, a graphic novel by Anna and Moritz Strasser.
What would the history of ideas look like if we were able to read the entire archive of printed material of a historical period? Would our 'great men (usually)' story of how ideas are formed and change over time begin to look very different? This book explores these questions through case studies on ideas such as 'liberty', 'republicanism', or 'government', using digital humanities approaches to large-scale text data sets. It sets out the methodologies and tools created by the Cambridge Concept Lab as exemplifications of how new digital methods can open up the history of ideas to heretofore unseen avenues of enquiry and evidence. By applying text mining techniques to intellectual history or the history of concepts, this book explains how computational approaches to text mining can substantially increase the power of our understanding of ideas in history.