Extraordinary advances in machine translation over the last three quarters of a century have profoundly affected many aspects of the translation profession. The widespread integration of adaptive “artificially intelligent” technologies has radically changed the way many translators think and work. In turn, groundbreaking empirical research has yielded new perspectives on the cognitive basis of the human translation process. Translation is in the throes of radical transition on both professional and academic levels. The game-changing introduction of neural machine translation engines almost a decade ago accelerated these transitions. This volume takes stock of the depth and breadth of the resulting developments, highlighting the emerging rivalry of human and machine intelligence. The gathering and analysis of big data is a common thread that has opened up new insights in widely divergent areas, from literary translation to movie subtitling to consecutive interpreting to the development of flexible and powerful new cognitive models of translation.
Post-editing is possibly the oldest form of human-machine cooperation for translation. It has been a common practice for just about as long as operational machine translation systems have existed. Recently, however, there has been a surge of interest in post-editing among the wider user community, partly due to the increasing quality of machine translation output, but also to the availability of free, reliable software for both machine translation and post-editing. As a result, the practices and processes of the translation industry are changing in fundamental ways. This volume is a compilation of work by researchers, developers and practitioners of post-editing, presented at two recent events on post-editing: the first Workshop on Post-editing Technology and Practice, held in conjunction with the 10th Conference of the Association for Machine Translation in the Americas in San Diego in 2012; and the International Workshop on Expertise in Translation and Post-editing Research and Application, held at the Copenhagen Business School in 2012.
Word storage and processing define a multi-factorial domain of scientific inquiry whose thorough investigation goes well beyond the boundaries of traditional disciplinary taxonomies and requires the synergistic integration of a wide range of methods, techniques, and empirical and experimental findings. The present book approaches a few central issues concerning the organization, structure and functioning of the mental lexicon by asking domain experts to look at common, central topics from complementary standpoints and to discuss the advantages of developing converging perspectives. The book explores the connections between computational and algorithmic models of the mental lexicon, word f...
Empirical research is carried out in a cyclic way: approaching a research area bottom-up, data lead to interpretations and ideally to the abstraction of laws, on the basis of which a theory can be derived. Deductive research is based on a theory, on the basis of which hypotheses can be formulated and tested against empirical data. Looking at the state of the art in translation studies, either theories and models are designed, or empirical data are collected and interpreted. However, the final step is still lacking: so far, empirical data have not led to the formulation of theories or models, while existing theories and models have not yet been comprehensively tested...
Artificial intelligence is changing and will continue to change the world we live in. These changes also affect the translation market. Machine translation (MT) systems automatically translate from one language into another within seconds. However, MT systems are still very often incapable of producing perfect translations. To achieve high-quality translations, the MT output first has to be corrected by a professional translator. This procedure is called post-editing (PE). PE has become an established task on the professional translation market. The aim of this textbook is to provide basic knowledge about the most relevant topics in professional PE. The textbook comprises ten chapters on both theoretical and practical aspects, including topics such as MT approaches and development, guidelines, integration into CAT tools, risks in PE, data security, practical decisions in the PE process, competences for PE, and new job profiles.
The present volume contains a collection of papers on spoken language: how to represent it, analyse it, and explain it without resorting to preconceived notions from text-based linguistics. The papers were presented at an exploratory symposium held in Mullsjö, Sweden, in August 2009.
Eyetracking has become a powerful tool in scientific research and has finally found its way into disciplines such as applied linguistics and translation studies, paving the way for new insights and challenges in these fields. The aim of the first International Conference on Eyetracking and Applied Linguistics (ICEAL) was to bring together researchers who use eyetracking to empirically answer their research questions. It was intended to bridge the gaps between applied linguistics, translation studies, cognitive science and computational linguistics on the one hand, and to encourage innovative research methodologies and data triangulation on the other. These challenges are also addressed in this proceedings volume: while the studies described here deal with a wide range of topics, they all share the view that eyetracking is an appropriate methodology for empirical research.
The Development of Translation Competence: Theories and Methodologies from Psycholinguistics and Cognitive Science presents cutting-edge research in translation studies from the perspectives of psycholinguistics and cognitive science, providing a better understanding of translation and of the development of the linguistic competence that translators need to be effective professionals. It presents original theories and empirical tests with significant implications for advancing the field of translation studies and what researchers know about the development of linguistic competence. The book is divided into three parts. Part I consists of a state-of-the-art introductory chapter which se...
The present work explores computer-assisted simultaneous interpreting (CASI) from a primarily cognitive perspective. Despite concerns over the potentially negative impact of computer-assisted interpreting (CAI) tools on interpreters’ cognitive load (CL), this hypothesis remains untested. Previous research has been restricted to evaluating the CASI product; a methodology for the process-oriented evaluation of CASI, as well as empirical evidence for its cognitive modelling, is still missing. Overcoming these limitations appears essential to advance CAI research, particularly to foster a deeper understanding of the cognitive aspects of CAI through a validated research methodology and to determine t...
This book presents the first major study of ditransitives in Swedish. Using a combination of well-established and innovative corpus-based methods, the book reveals considerable changes in the constructional behaviour of ditransitive verbs over the course of the last 200 years. The key finding is that the use of the so-called double object construction has decreased dramatically in terms of frequency, lexical richness and semantic range. This development is paralleled by a decisive increase in prepositional object constructions. The results are highly relevant to the ongoing debate within construction grammar on constructional productivity and on the nature of horizontal links.