Biography of Walter Daelemans, currently Professor at the University of Antwerp and previously researcher, lecturer, and professor at Tilburg University.
Memory-based language processing - a machine learning and problem solving method for language technology - is based on the idea that the direct reuse of examples using analogical reasoning is more suited for solving language processing problems than the application of rules extracted from those examples. This book discusses the theory and practice of memory-based language processing, showing its comparative strengths over alternative methods of language modelling. Language is complex, with few generalizations, many sub-regularities and exceptions, and the advantage of memory-based language processing is that it does not abstract away from this valuable low-frequency information. By applying the model to a range of benchmark problems, the authors show that for linguistic areas ranging from phonology to semantics, it produces excellent results. They also describe TiMBL, a software package for memory-based language processing. The first comprehensive overview of the approach, this book will be invaluable for computational linguists, psycholinguists and language engineers.
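As a concrete illustration of the "direct reuse of examples" idea, the sketch below implements a bare-bones memory-based classifier in Python: training instances are stored verbatim, and a new case is labelled by majority vote over its k nearest neighbors under a simple overlap metric. This is a minimal, hypothetical sketch, not TiMBL or its API; the last-three-letters feature encoding and the tiny Dutch diminutive memory are assumptions made purely for illustration.

```python
"""Minimal sketch of memory-based classification (not TiMBL itself)."""
from collections import Counter

def overlap_distance(a, b):
    # Count mismatching feature values; fewer mismatches means more similar.
    return sum(1 for x, y in zip(a, b) if x != y)

def classify(memory, features, k=3):
    # memory: list of (feature_tuple, label) pairs stored without abstraction.
    neighbors = sorted(memory, key=lambda ex: overlap_distance(ex[0], features))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy memory: last three letters of a Dutch noun -> its diminutive suffix class.
memory = [
    (("m", "a", "n"), "-etje"),   # man  -> mannetje
    (("b", "a", "l"), "-etje"),   # bal  -> balletje
    (("u", "i", "n"), "-tje"),    # tuin -> tuintje
    (("a", "a", "m"), "-pje"),    # raam -> raampje
]
print(classify(memory, ("p", "a", "n")))  # -> "-etje" (cf. pannetje), by analogy with its nearest neighbors
```

TiMBL builds on this basic scheme with feature weighting (for example information gain), alternative distance metrics, and efficient tree-based indexing, but the underlying principle is the same: keep the examples and classify by analogy.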
NLDB 2005, the 10th International Conference on Applications of Natural Language to Information Systems, was held on June 15–17, 2005 at the University of Alicante, Spain. Since the first NLDB conference in 1995 the main goal has been to provide a forum to discuss and disseminate research on the integration of natural language resources in information system engineering. The development and convergence of computing, telecommunications and information systems has already led to a revolution in the way that we work, communicate with each other, buy goods and use services, and even in the way that we entertain and educate ourselves. The revolution continues, and one of its results is that large volume...
Analogical Modeling (AM) is an exemplar-based general theory of description that uses both neighbors and non-neighbors (under certain well-defined conditions of homogeneity) to predict language behavior. This book provides a basic introduction to AM, compares the theory with nearest-neighbor approaches, and discusses the most recent advances in the theory, including psycholinguistic evidence, applications to specific languages, the problem of categorization, and how AM relates to alternative approaches to language description (such as instance families, neural nets, connectionism, and optimality theory). The book closes with a thorough examination of the problem of the exponential explosion, an inherent difficulty in AM (and in fact in all theories of language description). Quantum computing (based on quantum mechanics with its inherent simultaneity and reversibility) provides a precise and natural solution to the exponential explosion in AM. Finally, an extensive appendix provides three tutorials for running the AM computer program (available online).
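To make the contrast with nearest-neighbor methods tangible, here is a deliberately simplified sketch of AM-style prediction in Python. It is an assumption-laden toy rather than the full theory: homogeneity is reduced to "all exemplars in a supracontext share one outcome", pointer counting is collapsed to the square of the supracontext size, and the three-exemplar dataset is invented. The actual homogeneity test, and the quantum-computing treatment discussed in the book, go well beyond this.

```python
"""Simplified sketch of Analogical Modeling prediction (not the full theory)."""
from itertools import product
from collections import Counter

def supracontexts(test):
    # Every way of keeping a test feature or ignoring it ("*").
    for mask in product((True, False), repeat=len(test)):
        yield tuple(t if keep else "*" for t, keep in zip(test, mask))

def matches(supra, exemplar):
    return all(s == "*" or s == e for s, e in zip(supra, exemplar))

def predict(dataset, test):
    # dataset: list of (feature_tuple, outcome) exemplars, used whether or not they are near neighbors.
    scores = Counter()
    for supra in supracontexts(test):
        inside = [(f, o) for f, o in dataset if matches(supra, f)]
        outcomes = {o for _, o in inside}
        if inside and len(outcomes) == 1:           # simplified homogeneity test
            scores[outcomes.pop()] += len(inside) ** 2   # simplified pointer count
    return scores

data = [(("a", "b", "c"), "X"), (("a", "b", "d"), "X"), (("e", "f", "d"), "Y")]
print(predict(data, ("a", "b", "e")))  # outcomes weighted by homogeneous supracontexts
```

Even this toy makes the exponential explosion visible: the number of supracontexts doubles with every added feature, which is exactly the difficulty the book's closing chapters address.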
In both the linguistic and the language engineering community, the creation and use of annotated text collections (or annotated corpora) is currently a hot topic. Annotated texts are of interest for research as well as for the development of natural language processing (NLP) applications. Unfortunately, the annotation of text material, especially more interesting linguistic annotation, is as yet a difficult task and can entail a substantial amount of human involvement. All over the world, work is being done to replace as much as possible of this human effort by computer processing. At the frontier of what can already be done (mostly) automatically we find syntactic wordclass tagging, the an...
This first review of a new field covers all areas of speech synthesis from text, ranging from text analysis to letter-to-sound conversion. At the leading edge of current research, this concise and accessible book is written by well-respected experts in the field.
This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for this is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. The book therefore introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. This book is intended to be both readable by first-year students and interesting to the expert audience. My intention was to introd...
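One generic way to exploit the marginal distribution of unlabeled data is self-training: fit a model on the labeled seed set, auto-label the unlabeled pool where the model is confident, and refit. The sketch below, using scikit-learn's LogisticRegression on synthetic data, only illustrates that family of methods under an assumed confidence threshold; it is not a specific algorithm from the book.

```python
"""Hedged sketch of self-training as one way to use unlabeled data."""
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    # Start from the labeled seed set and a pool of unlabeled examples.
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    model = LogisticRegression()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        model.fit(X, y)
        proba = model.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Move confidently auto-labelled examples from the pool into the training set.
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, model.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    model.fit(X, y)
    return model

# Synthetic usage: 20 labeled and 200 unlabeled 5-dimensional examples.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 5))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(200, 5))
model = self_train(X_lab, y_lab, X_unlab)
```

Coping with sampling bias typically calls for different machinery, such as reweighting the training examples, which this loop does not attempt.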
This book constitutes the refereed proceedings of the joint conference on Machine Learning and Knowledge Discovery in Databases: ECML PKDD 2009, held in Bled, Slovenia, in September 2009. The 106 papers presented in two volumes, together with 5 invited talks, were carefully reviewed and selected from 422 submissions. In addition to the regular papers, the volume contains 14 abstracts of papers appearing in full version in the Machine Learning Journal and the Knowledge Discovery and Databases Journal of Springer. The conference intends to provide an international forum for the discussion of the latest high-quality research results in all areas related to machine learning and knowledge discovery in databases. The topics addressed are the application of machine learning and data mining methods to real-world problems, particularly exploratory research that describes novel learning and mining tasks and applications requiring non-standard techniques.
Memory-Based Learning (MBL), one of the most influential machine learning paradigms, has been applied with great success to a variety of NLP tasks. This monograph describes the application of MBL to robust parsing. Robust parsing using MBL can provide added functionality for key NLP applications, such as Information Retrieval, Information Extraction, and Question Answering, by facilitating more complex syntactic analysis than is currently available. The text presupposes no prior knowledge of MBL. It provides a comprehensive introduction to the framework and goes on to describe and compare applications of MBL to parsing. Since parsing is not easily characterizable as a classification task, adaptations of standard MBL are necessary. These adaptations can either take the form of a cascade of local classifiers or of a holistic approach for selecting a complete tree. The text provides excellent course material on MBL. It is equally relevant for any researcher concerned with symbolic machine learning, Information Retrieval, Information Extraction, and Question Answering.
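To give a flavour of the cascade-of-local-classifiers reduction, the sketch below assumes an earlier stage has already produced part-of-speech tags and uses a memory-based (k-nearest-neighbor) classifier over a sliding window to assign IOB chunk tags. It is a hypothetical, minimal illustration with an invented five-example memory, not the monograph's actual systems, which add feature weighting, deeper cascades, and the holistic tree-selection alternative.

```python
"""Toy cascade stage: memory-based IOB chunking over a POS-tag window."""
from collections import Counter

def knn_label(memory, features, k=3):
    # 1-feature-overlap distance plus majority vote, as in basic MBL.
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    nearest = sorted(memory, key=lambda ex: dist(ex[0], features))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def window_features(tags, i, width=1):
    # Local context of POS tags around position i, padded at the sentence edges.
    padded = ["<s>"] * width + list(tags) + ["</s>"] * width
    return tuple(padded[i:i + 2 * width + 1])

def chunk(memory, pos_tags):
    # Earlier cascade stage (POS tagging) is assumed done; this stage labels chunks.
    return [knn_label(memory, window_features(pos_tags, i)) for i in range(len(pos_tags))]

# Tiny invented memory of (POS window -> IOB chunk tag) examples.
memory = [
    (("<s>", "DT", "NN"), "B-NP"),
    (("DT", "NN", "VBZ"), "I-NP"),
    (("NN", "VBZ", "DT"), "B-VP"),
    (("VBZ", "DT", "NN"), "B-NP"),
    (("DT", "NN", "</s>"), "I-NP"),
]
print(chunk(memory, ["DT", "NN", "VBZ", "DT", "NN"]))
# -> ['B-NP', 'I-NP', 'B-VP', 'B-NP', 'I-NP']: two noun-phrase chunks around a verb chunk
```

Each decision here is purely local; the holistic alternative discussed in the monograph instead selects a complete tree for the whole sentence in one step.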