The previous conference in this series (AMTA 2002) took up the theme “From Research to Real Users”, and sought to explore why recent research on data-driven machine translation didn’t seem to be moving to the marketplace. As it turned out, the first commercial products of the data-driven research movement were just over the horizon, and in the intervening two years they have begun to appear in the marketplace. At the same time, rule-based machine translation systems are introducing data-driven techniques into the mix in their products. Machine translation as a software application has a 50-year history. There are an increasing number of exciting deployments of MT, many of which will be exhibited and discussed ...
This comprehensive, interdisciplinary handbook reviews the latest methods and technologies used in automated essay evaluation (AEE). Highlights include the latest in the evaluation of performance-based writing assessments and recent advances in the teaching of writing, language testing, cognitive psychology, and computational linguistics. This greatly expanded follow-up to Automated Essay Scoring reflects the numerous advances that have taken place in the field since 2003, including automated essay scoring and diagnostic feedback. Each chapter features a common structure, including an introduction and a conclusion. Ideas for diagnostic and evaluative feedback are sprin...
While current educational technologies have the potential to fundamentally enhance literacy education, many of these tools remain unknown to or unused by today’s practitioners due to a lack of access and support. Adaptive Educational Technologies for Literacy Instruction presents actionable information to educators, administrators, and researchers about available educational technologies that provide adaptive, personalized literacy instruction to students of all ages. These accessible, comprehensive chapters, written by leading researchers who have developed systems and strategies for classrooms, introduce effective technologies for reading comprehension and writing skills.
This two-volume set, consisting of LNCS 7181 and LNCS 7182, constitutes the thoroughly refereed proceedings of the 13th International Conference on Computational Linguistics and Intelligent Text Processing, held in New Delhi, India, in March 2012. The 92 full papers presented were carefully reviewed and selected for inclusion in the proceedings. The contents have been ordered according to the following topical sections: NLP system architecture; lexical resources; morphology and syntax; word sense disambiguation and named entity recognition; semantics and discourse; sentiment analysis, opinion mining, and emotions; natural language generation; machine translation and multilingualism; text categorization and clustering; information extraction and text mining; information retrieval and question answering; document summarization; and applications.
Post-editing is possibly the oldest form of human-machine cooperation for translation. It has been a common practice for just about as long as operational machine translation systems have existed. Recently, however, there has been a surge of interest in post-editing among the wider user community, partly due to the increasing quality of machine translation output, but also to the availability of free, reliable software for both machine translation and post-editing. As a result, the practices and processes of the translation industry are changing in fundamental ways. This volume is a compilation of work by researchers, developers and practitioners of post-editing, presented at two recent events on post-editing: the first Workshop on Post-editing Technology and Practice, held in conjunction with the 10th Conference of the Association for Machine Translation in the Americas in San Diego in 2012; and the International Workshop on Expertise in Translation and Post-editing Research and Application, held at the Copenhagen Business School in 2012.
The fields of Artificial Intelligence (AI) and Machine Learning (ML) have grown dramatically in recent years, with an increasingly impressive spectrum of successful applications. This book represents a key reference for anybody interested in the intersection between mathematics and AI/ML and provides an overview of the current research streams. Engineering Mathematics and Artificial Intelligence: Foundations, Methods, and Applications discusses the theory behind ML and shows how mathematics can be used in AI. The book illustrates how to improve existing algorithms by using advanced mathematics and surveys cutting-edge AI technologies. It goes on to discuss how ML can support mathematical modeling and how to simulate data by using artificial neural networks. Future integration between ML and complex mathematical techniques is also highlighted within the book. This book is written for researchers, practitioners, engineers, and AI consultants.
From early answer sheets filled in with number 2 pencils, to tests administered by mainframe computers, to assessments wholly constructed by computers, it is clear that technology is changing the field of educational and psychological measurement. The numerous and rapid advances have immediate impact on test creators, assessment professionals, and those who implement and analyze assessments. This comprehensive new volume brings together leading experts on the issues posed by technological applications in testing, with chapters on game-based assessment, testing with simulations, video assessment, computerized test development, large-scale test delivery, model choice, validity, and error issue...
The Routledge Encyclopedia of Translation Technology provides a state-of-the art survey of the field of computer-assisted translation. It is the first definitive reference to provide a comprehensive overview of the general, regional and topical aspects of this increasingly significant area of study. The Encyclopedia is divided into three parts: Part One presents general issues in translation technology, such as its history and development, translator training and various aspects of machine translation, including a valuable case study of its teaching at a major university; Part Two discusses national and regional developments in translation technology, offering contributions covering the cruc...
Thanks to the availability of texts on the Web in recent years, increased knowledge and information have been made available to broader audiences. However, the way in which a text is written—its vocabulary, its syntax—can be difficult to read and understand for many people, especially those with poor literacy, cognitive or linguistic impairment, or limited knowledge of the language of the text. Texts containing uncommon words or long and complicated sentences can be difficult for people to read and understand, as well as difficult for machines to analyze. Automatic text simplification is the process of transforming a text into another text which, ideally conveying the same messag...
Many applications within natural language processing involve performing text-to-text transformations, i.e., given a text in natural language as input, systems are required to produce a version of this text (e.g., a translation), also in natural language, as output. Automatically evaluating the output of such systems is an important component in developing text-to-text applications. Two approaches have been proposed for this problem: (i) to compare the system outputs against one or more reference outputs using string matching-based evaluation metrics and (ii) to build models based on human feedback to predict the quality of system outputs without reference texts. Despite their popularity, ref...