Empirical research is carried out in a cyclic way: approaching a research area bottom-up, data lead to interpretations and ideally to the abstraction of laws, on the basis of which a theory can be derived. Deductive research is based on a theory, on the basis of which hypotheses can be formulated and tested against the background of empirical data. Looking at the state of the art in translation studies, either theories and models are designed, or empirical data are collected and interpreted. However, the final step is still lacking: so far, empirical data have not led to the formulation of theories or models, and existing theories and models have not yet been comprehensively tested...
Extraordinary advances in machine translation over the last three quarters of a century have profoundly affected many aspects of the translation profession. The widespread integration of adaptive “artificially intelligent” technologies has radically changed the way many translators think and work. In turn, groundbreaking empirical research has yielded new perspectives on the cognitive basis of the human translation process. Translation is in the throes of radical transition on both professional and academic levels. The game-changing introduction of neural machine translation engines almost a decade ago accelerated these transitions. This volume takes stock of the depth and breadth of the resulting developments, highlighting the emerging rivalry of human and machine intelligence. The gathering and analysis of big data is a common thread that has given access to new insights in widely divergent areas, from literary translation to movie subtitling to consecutive interpreting to the development of flexible and powerful new cognitive models of translation.
The present work explores computer-assisted simultaneous interpreting (CASI) from a primarily cognitive perspective. Despite concerns that computer-assisted interpreting (CAI) tools may negatively affect interpreters’ cognitive load (CL), this hypothesis remains untested. Previous research is restricted to the evaluation of the CASI product; a methodology for the process-oriented evaluation of CASI and empirical evidence for its cognitive modelling are still missing. Overcoming these limitations appears essential to advance CAI research, particularly to foster a deeper understanding of the cognitive aspects of CAI through a validated research methodology and to determine t...
Ruslan Mitkov's highly successful Oxford Handbook of Computational Linguistics has been substantially revised and expanded in this second edition. Alongside updated accounts of the topics covered in the first edition, it includes 17 new chapters on subjects such as semantic role-labelling, text-to-speech synthesis, translation technology, opinion mining and sentiment analysis, and the application of Natural Language Processing in educational and biomedical contexts, among many others. The volume is divided into four parts that examine, respectively: the linguistic fundamentals of computational linguistics; the methods and resources used, such as statistical modelling, machine learning, and corpus annotation; key language processing tasks including text segmentation, anaphora resolution, and speech recognition; and the major applications of Natural Language Processing, from machine translation to author profiling. The book will be an essential reference for researchers and students in computational linguistics and Natural Language Processing, as well as those working in related industries.
This is a book in the classical Quaestiones genre, like the Tusculanae Quaestiones (“Tusculan questions”) of Cicero (around 45 BCE) and the Quæstiones disputatæ de Veritate (“disputed questions on truth”) of St. Thomas Aquinas (1256-1259). It seeks to ask seven series of questions about key theoretical approaches to the study of translation: three on equivalence theories (semantic equivalence, dynamic equivalence, and deverbalization), three on Descriptive Translation Studies (norms, Toury’s laws, and the translator’s narratoriality), and one on the translator’s visibility. Each “Question” (chapter) charts a circuitous course through past answers to new questions and new answers, drawing especially on the theoretical traditions of hermeneutics, phenomenology, and 4EA cognitive science. The book will guide both veteran and novice scholars of translation deep into the complexities besetting the seven keywords.
This volume offers a snapshot of current perspectives on translation studies within the specific historical and socio-cultural framework of Anglo-Italian relations. It addresses research questions relevant to English historical, literary, cultural and language studies, as well as empirical translation studies. The book is divided into four chapters, each covering a specific research area in the scholarly field of translation studies: namely, historiography, literary translation, specialized translation and multimodality. Each case study selected for this volume has been conducted with critical insight and methodological rigour, and makes a valuable contribution to scientific knowledge in the descriptive and applied branches of a discipline that, since its foundation nearly 50 years ago, has concerned itself with the description, theory and practice of translating and interpreting.
The development of translation memories and machine translation has led to new quality assurance practices, in which translators find themselves checking not only human translations but also machine translation output. As a result, the notions of revision and interpersonal competences have gained great importance, with international projects recognizing them as high priorities. Quality Assurance and Assessment Practices in Translation and Interpreting is a critical scholarly resource that serves as a guide to overcoming the challenge of how translation and interpreting results should be observed, given feedback, and assessed. It also informs the design of new ways of evaluating students and suggests criteria for professional quality control. Featuring coverage of a broad range of topics such as quality management, translation tests, and competency-based assessments, this book is geared towards translators, interpreters, linguists, academicians, translation and interpreting researchers, and students seeking current research in this area.
Artificial intelligence is changing, and will continue to change, the world we live in. These changes are also influencing the translation market. Machine translation (MT) systems automatically translate text from one language to another within seconds. However, MT systems are still often incapable of producing perfect translations. To achieve high-quality translations, the MT output first has to be corrected by a professional translator; this procedure is called post-editing (PE). PE has become an established task on the professional translation market. The aim of this textbook is to provide basic knowledge about the most relevant topics in professional PE. It comprises ten chapters on both theoretical and practical aspects, including topics such as MT approaches and development, guidelines, integration into CAT tools, risks in PE, data security, practical decisions in the PE process, competences for PE, and new job profiles.
Cheung, Liu, Moratto, and their contributors examine how corpora can be effectively harnessed to benefit interpreting practice and research in East Asian settings. In comparison to the achievements made in the field of corpus-based translation studies, the use of corpora in interpreting is not comparable in terms of scope, methods, and agenda. One of the predicaments that hampers this line of inquiry is the lack of systematic corpora to document spoken language. This issue is even more pronounced when dealing with East Asian languages such as Chinese, Japanese, and Korean, which are typologically different from European languages. As language plays a pivotal role in interpreting research, the use of corpora in interpreting within East Asian contexts has its own distinct characteristics as well as methodological constraints and concerns. However, it also generates new insights and findings that can significantly advance this research field. A valuable resource for scholars focusing on corpus-based interpreting, particularly those dealing with East Asian languages.
Cognitive aspects of the translation process have become central in Translation and Interpreting Studies in recent years, further establishing the field of Cognitive Translatology. Empirical and interdisciplinary studies investigating translation and interpreting processes promise a hitherto unprecedented predictive and explanatory power. This collection contains such studies, each observing behaviour during translation and interpreting. The contributions cover a vast area, with a focus on the training of future professionals, on language processing more generally, on the role of technology in the practice of translation and interpreting, on the translation of multimodal media texts, on aspects of ergonomics and usability, on emotions, self-concept and psychological factors, and finally on revision and post-editing. For the present publication, we selected a number of contributions presented at the Second International Congress on Translation, Interpreting and Cognition hosted by the Tra&Co Lab at the Johannes Gutenberg University of Mainz.