Information extraction (IE) is a new technology enabling relevant content to be extracted from textual information available electronically. IE essentially builds on natural language processing and computational linguistics, but it is also closely related to the well-established area of information retrieval and involves learning. In concert with other promising and emerging information engineering technologies such as data mining, intelligent data analysis, and text summarization, IE will play a crucial role for scientists, professionals, and other end-users who have to deal with vast amounts of information, for example from the Internet. As the first book solely devoted to IE, it is of relevance to anybody interested in new and emerging trends in information processing technology.
This work offers a survey of methods and techniques for structuring, acquiring, and maintaining lexical resources for speech and language processing. The first chapter provides a broad survey of the field of computational lexicography, introducing most of the issues, terms, and topics that are addressed in more detail in the rest of the book. The next two chapters focus on the structure and content of man-made lexicons, concentrating respectively on (morpho-)syntactic and (morpho-)phonological information. Both chapters adopt a declarative constraint-based methodology and pay ample attention to the various ways in which lexical generalizations can be formalized and exploited to enhance the consistency and reduce the redundancy of lexicons. A complementary perspective is offered in the following two chapters, which present techniques for automatically deriving lexical resources from text corpora. These chapters adopt an inductive, data-oriented methodology and also cover methods for tokenization, lemmatization, and shallow parsing. The next three chapters focus on speech synthesis and speech recognition.
In both the linguistic and the language engineering community, the creation and use of annotated text collections (or annotated corpora) is currently a hot topic. Annotated texts are of interest for research as well as for the development of natural language processing (NLP) applications. Unfortunately, the annotation of text material, especially more interesting linguistic annotation, is as yet a difficult task and can entail a substantial amount of human involvement. All over the world, work is being done to replace as much as possible of this human effort by computer processing. At the frontier of what can already be done (mostly) automatically we find syntactic wordclass tagging, the an...
Statistical approaches to processing natural language text have become dominant in recent years. This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations. The book covers collocation finding, word sense disambiguation, probabilistic parsing, information retrieval, and other applications.
Information extraction (IE) is a new technology enabling relevant content to be extracted from textual information available electronically. IE essentially builds on natural language processing and computational linguistics, but it is also closely related to the well-established area of information retrieval and involves learning. In concert with other promising intelligent information processing technologies such as data mining, intelligent data analysis, text summarization, and information agents, IE plays a crucial role in dealing with the vast amounts of information accessible electronically, for example from the Internet. The book is based on the Second International School on Information Extraction, SCIE-99, held in Frascati, near Rome, Italy, in June/July 1999.
Knowledge-Based Information Retrieval and Filtering from the Web contains fifteen chapters, contributed by leading international researchers, addressing information retrieval, filtering, and management of information on the Internet. The research presented deals with the need to find proper solutions for describing the information found on the Internet, describing the information consumers need, algorithms for retrieving documents (and, indirectly, the information embedded in them), and presenting the information found. The chapters include: -Ontological representation of knowledge on the WWW; -Information extraction; -Information retrieval and ad...
An investigation of antonyms in English, offering a model of how we mentally organize concepts and perceive contrasts between them.
This book constitutes the proceedings of the 18th International Symposium on String Processing and Information Retrieval, SPIRE 2011, held in Pisa, Italy, in October 2011. The 30 long papers and 10 short papers presented together with 1 keynote were carefully reviewed and selected from 102 submissions. The papers are organized in topical sections on introduction to web retrieval, sequence learning, computational geography, space-efficient data structures, algorithmic analysis of biological data, compression, text and algorithms.
From the contents: Stig JOHANSSON: Towards a multilingual corpus for contrastive analysis and translation studies. - Anna SÅGVALL HEIN: The PLUG project: parallel corpora in Linköping, Uppsala, Göteborg: aims and achievements. - Raphael SALKIE: How can linguists profit from parallel corpora? - Trond TROSTERUD: Parallel corpora as tools for investigating and developing minority languages.