This edited collection of previously unpublished papers focuses on Centering Theory, an account of local discourse structure. Developed in the context of computational linguistics and cognitive science, Centering Theory has attracted the attention of an international interdisciplinary audience. As the authors focus on naturally occurring data, they join the general trend towards empiricism in research on computational models of discourse, providing a significant contribution to a fast-moving field.
This collection of papers examines the theoretical, psychological and descriptive approaches to focus.
This volume is a direct result of the International Symposium on Japanese Sentence Processing held at Duke University. The symposium provided the first opportunity for researchers in three disciplinary areas from both Japan and the United States to participate in a conference where they could discuss issues concerning Japanese syntactic processing. The goals of the symposium were three-fold:
* to illuminate the mechanisms of Japanese sentence processing from the viewpoints of linguistics, psycholinguistics and computer science;
* to synthesize findings about the mechanisms of Japanese sentence processing by researchers in these three fields in Japan and the United States;
* to lay foundation...
This book constitutes the refereed proceedings of the Second International Workshop on Electronic Commerce, WELCOM 2001, held in Heidelberg, Germany in November 2001. The 17 revised full papers presented together with two invited contributions were carefully reviewed and selected from 34 submissions. The papers are organized in topical sections on trade and markets, security and trust, auctions, profiling, and business interaction.
A new perspective on phonetic variation is achieved in this volume through the construction of a series of models of spoken American English. In the past, computer theorists and programmers investigating pronunciation have often relied on their own knowledge of the language or on limited transcription data. Speech recognition researchers, on the other hand, have drawn on a great deal of data but without examining in detail the information about pronunciation the data contains. The authors combine the best of each approach to develop probabilistic and rule-based computational models of transcription data. An ongoing controversy in studies of phonetic variation is the existence and proper defi...
This book presents revised versions of the lectures given at the 8th ELSNET European Summer School on Language and Speech Communication held on the Island of Chios, Greece, in summer 2000. Besides an introductory survey, the book presents lectures on data analysis for multimedia libraries, pronunciation modeling for large vocabulary speech recognition, statistical language modeling, very large scale information retrieval, reduction of information variation in text, and a concluding chapter on open questions in research for linguistics in information access. The book gives newcomers to language and speech communication a clear overview of the main technologies and problems in the area. Researchers and professionals active in the area will appreciate the book as a concise review of the technologies used in text- and speech-triggered information access.
Most dialogues are multimodal. When people talk, they use not only their voices, but also facial expressions and other gestures, and perhaps even touch. When computers communicate with people, they use pictures and perhaps sounds, together with textual language, and when people communicate with computers, they are likely to use mouse gestures almost as much as words. How are such multimodal dialogues constructed? This is the main question addressed in this selection of papers of the second Venaco Workshop, sponsored by the NATO Research Study Group RSG-10 on Automatic Speech Processing, and by the European Speech Communication Association (ESCA).
Situation theory is the result of an interdisciplinary effort to create a full-fledged theory of information. Created by scholars and scientists from cognitive science, computer science and AI, linguistics, logic, philosophy, and mathematics, it aims to provide a common set of tools for the analysis of phenomena from all these fields. Unlike Shannon-Weaver type theories of information, which are purely quantitative theories, situation theory aims at providing tools for the analysis of the specific content of a situation (signal, message, data base, statement, or other information-carrying situation). The question addressed is not how much information is carried, but what information is carried.
Stringently reviewed papers presented at the October 1992 meeting held in Cambridge, Mass., address such topics as nonmonotonic logic; taxonomic logic; specialized algorithms for temporal, spatial, and numerical reasoning; and knowledge representation issues in planning, diagnosis, and natural language.