This work in the fields of digital literary stylistics and computational literary studies addresses theoretical questions of literary genre, the design of a corpus of nineteenth-century Spanish-American novels, and the empirical analysis of that corpus in terms of subgenres of the novel. The digital text corpus consists of 256 Argentine, Cuban, and Mexican novels from the period between 1830 and 1910. It was created with the goal of analyzing, by means of computational text categorization methods, the thematic subgenres and literary currents that were represented in numerous novels in the nineteenth century. To categorize the texts, statistical classification and a family resemblance analysis based on network analysis are used, with the aim of examining how the subgenres, understood as communicative, conventional phenomena, can be captured on the stylistic, textual level of the novels that participate in them.
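The study's own pipeline is not reproduced here, but as a rough, hypothetical illustration of the kind of computational text categorization the blurb refers to, the following Python sketch trains a simple bag-of-words classifier on toy stand-ins for novels and subgenre labels (all texts, labels, and parameters below are invented for the example and are not taken from the corpus).

```python
# Minimal, illustrative sketch of text categorization by thematic subgenre.
# Toy data only; this is not the corpus, labels, or method of the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for novel texts and their thematic subgenre labels.
texts = [
    "battles, generals and the fate of the young republic",
    "letters, tears and an impossible love between cousins",
    "the conspiracy against the viceroy and the siege of the capital",
    "her heart broke quietly in the provincial garden",
]
labels = ["historical", "sentimental", "historical", "sentimental"]

# TF-IDF word features feeding a linear classifier, scored by cross-validation.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, texts, labels, cv=2)
print("mean cross-validated accuracy:", scores.mean())
```

A family resemblance analysis of the kind mentioned above would instead compare pairwise similarities between novels and examine the resulting network of resemblances, rather than fitting a supervised classifier to predefined labels.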
This volume presents the state of the art in digital scholarly editing. Drawing together the work of established and emerging researchers, it gives pause at a crucial moment in the history of technology in order to offer a sustained reflection on the practices involved in producing, editing and reading digital scholarly editions—and the theories that underpin them. The unrelenting progress of computer technology has changed the nature of textual scholarship at the most fundamental level: the way editors and scholars work, the tools they use to do such work and the research questions they attempt to answer have all been affected. Each of the essays in Digital Scholarly Editing approaches th...
This book provides an up-to-date, coherent and comprehensive treatment of digital scholarly editing, organized according to the typical timeline and workflow of preparing an edition: from the choice of the object to edit, through the editorial work, post-production and publication, and the use of the published edition, to long-term issues and the ultimate significance of the published work. The author also examines, from a theoretical and methodological point of view, the issues and problems that emerge at these stages when computational techniques and methods are applied. Building on previous publications on the topic, the book discusses the most significant developments in digital textual scholarship, arguing that the alterations to traditional editorial practices necessitated by the use of computers impose radical changes in the way we think about and manage texts, documents, editions and the public. It is of interest not only to scholarly editors, but to everyone involved in publishing and readership in a digital environment in the humanities.
Scholarly editions contextualize our cultural heritage. Traditionally, methodologies from the field of scholarly editing are applied to works of literature, e.g. in order to trace their genesis or present their varied history of transmission. What do we make of the variance in other types of cultural heritage? How can we describe, record, and reproduce it systematically? From medieval to modern times, from image to audiovisual media, the book traces discourses across different disciplines in order to develop a conceptual model for scholarly editions on a broader scale. By doing so, it also delves into the theory and philosophy of the (digital) humanities as such.
Between 2016 and 2020, the federally funded project "KONDE - Kompetenznetzwerk Digitale Edition" created a network of collaboration between Austrian institutions and researchers working on digital scholarly editions. With the present volume, the editors provide a space in which researchers and editors from Austrian institutions theorize on their work and present their editing projects. The collection offers a snapshot of the interests and main research areas in digital scholarly editing in Austria at the time of the project.
In scholarly digital editing, the established practice for semantically enriching digital texts is to add markup to a linear string of characters. Graph data models provide an alternative approach that is increasingly being given serious consideration. Labelled-property-graph databases and the W3C's Semantic Web recommendations and associated standards (RDF and OWL) are powerful and flexible solutions to many of the problems that come with embedded markup. This volume explores the combination of scholarly digital editions, the graph data model, and the Semantic Web from three perspectives: infrastructures and technologies, formal models, and projects and editions.
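As a hedged illustration of the contrast the blurb draws, the short Python sketch below records an editorial annotation as RDF triples that point into the source text by character offsets, rather than embedding markup in the character string. The namespace and property names are hypothetical and are not drawn from any particular edition or vocabulary beyond RDF itself.

```python
# Illustrative sketch only: an annotation modelled as a graph node with
# hypothetical properties, instead of a tag wrapped around the characters.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/edition/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# A text segment is a node that references the source document by offsets.
segment = URIRef(EX["segment/1"])
g.add((segment, RDF.type, EX.TextSegment))
g.add((segment, EX.inDocument, URIRef(EX["doc/witness-A"])))
g.add((segment, EX.startOffset, Literal(120)))
g.add((segment, EX.endOffset, Literal(134)))
g.add((segment, EX.annotatedAs, EX.PersonName))

print(g.serialize(format="turtle"))
```

The same segment could equally be stored as a node in a labelled property graph; the point of the sketch is that the annotation lives alongside the text rather than inside it, which is what makes overlapping or competing annotations easier to express than with embedded markup.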
Expert Bytes: Computer Expertise in Forensic Documents — Players, Needs, Resources and Pitfalls — introduces computer scientists and forensic document examiners to the computer expertise of forensic documents and assists them with the design of research projects in this interdisciplinary field. This is not a textbook on how to perform actual forensic document expertise or program expertise software, but a project design guide, an anthropological inquiry, and a review of technology, markets, and policies. After reading this book you will have deepened your knowledge of: what computational expertise of forensic documents is; what has been done in the field so far and what the future looks li...
Literature and Computation presents some of the most relevant and innovative recent approaches to literary practice, theory, and criticism as driven by computation and situated in digital environments. These approaches rely on automated analyses but use them creatively, engage in text modeling but inform it with qualitative-interpretive critical possibilities, and contribute to present-day platform culture in revolutionizing intermedial ways. While such new directions involve ever more sophisticated machine learning and artificial intelligence, they also mark a spectacular return of the (trans)human(istic) and of traditional-modern literary as well as urgent political, gender, and minority-related concerns and modes, now addressed in ever subtler and more nuanced ways within human-computer interaction frameworks. Expanding the boundaries of literary and data studies, digital humanities, and electronic literature, the featured contributions unveil an emerging landscape of trailblazing practice and theoretical crossovers ready and able to spawn and chart the witness literature of our age and cultures.
"This 10-volume compilation of authoritative, research-based articles contributed by thousands of researchers and experts from all over the world emphasized modern issues and the presentation of potential opportunities, prospective solutions, and future directions in the field of information science and technology"--Provided by publisher.
This volume brings together twenty-two authors from various countries who analyze travelogues on the Ottoman Empire between the fifteenth and nineteenth centuries. The travelogues reflect the colorful diversity of the genre, presenting the experiences of individuals and groups from China to Great Britain. The spotlight falls on interdependencies of travel writing and historiography, geographic spaces, and specific practices such as pilgrimages, the hajj, and the harem. Other points of emphasis include the importance of nationalism, the place and time of printing, representations of fashion, and concepts of masculinity and femininity. By displaying close, comparative, and distant readings, the volume offers new insights into perceptions of "otherness", the circulation of knowledge, intermedial relations, gender roles, and digital analysis.