This book provides a systematic and comparative description of the vast number of research issues related to the quality of data and information. It delivers a sound, integrated, and comprehensive overview of the state of the art and future development of data and information quality in databases and information systems. To this end, it presents an extensive description of the techniques that constitute the core of data and information quality research, including record linkage (also called object identification), data integration, and error localization and correction, and it examines these techniques within a comprehensive and original methodological framework. Quality dimension def...
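As a rough illustration of what record linkage involves (this is a generic sketch, not the book's own algorithm), the following Python fragment declares two records to be the same real-world entity when a normalized string similarity of an assumed "name" field exceeds a chosen cutoff; the field name and the 0.85 threshold are hypothetical.

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        # Normalized similarity in [0, 1] between two strings.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def is_match(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
        # Treat two records as the same entity when their (assumed)
        # "name" fields are sufficiently similar.
        return similarity(rec_a["name"], rec_b["name"]) >= threshold

    print(is_match({"name": "Jon Smith"}, {"name": "John Smith"}))   # True
    print(is_match({"name": "Jon Smith"}, {"name": "Mary Jones"}))   # False

Real record-linkage systems compare many fields, weight the evidence, and use blocking to avoid comparing all pairs of records; the sketch only conveys the core similarity-plus-threshold idea.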
Poor data quality can seriously hinder or damage the efficiency and effectiveness of organizations and businesses. The growing awareness of such repercussions has led to major public initiatives such as the "Data Quality Act" in the USA and the European Directive 2003/98 of the European Parliament. Batini and Scannapieco present a comprehensive and systematic introduction to the wide set of issues related to data quality. They start with a detailed description of different data quality dimensions, such as accuracy, completeness, and consistency, and their importance in different types of data, such as federated data, web data, or time-dependent data, and in different data categories classified acco...
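To make one such dimension concrete, completeness is often operationalized as the fraction of populated values in a column. The sketch below uses invented column data, not an example from the book:

    from typing import Optional, Sequence

    def completeness(values: Sequence[Optional[str]]) -> float:
        # Ratio of non-null, non-empty values to total values
        # (1.0 means the column is fully complete).
        if not values:
            return 1.0  # an empty column is vacuously complete
        populated = sum(1 for v in values if v not in (None, ""))
        return populated / len(values)

    emails = ["a@example.com", None, "b@example.com", ""]
    print(completeness(emails))  # 0.5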
This book explains the development of theoretical computer science in its early stages, specifically from 1965 to 1990. The author is among the pioneers of theoretical computer science, and he guides the reader through the early stages of development of this new discipline. He explains the origins of the field in disciplines such as logic, mathematics, and electronics, and he describes the evolution of its key principles in strands such as computability, algorithms, and programming. But mainly it is a story about people: pioneers with diverse backgrounds and characters who came together to overcome philosophical and institutional challenges and build a community. They collaborated on research efforts, established schools and conferences, developed the first related university courses, taught generations of future researchers and practitioners, and set up the key publications to communicate and archive their knowledge. The book offers a fascinating insight into the field as it existed and evolved, and it will be valuable reading for anyone interested in the history of computing.
Description Logics are a family of knowledge representation languages that have been studied extensively in Artificial Intelligence over the last two decades. They are embodied in several knowledge-based systems and are used to develop various real-life applications. The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically oriented readers to those with more practical interests who need a sound and modern understanding of knowledge representation systems based on Description Logics. The chapters are written by some of the most prominent researchers in the field; they introduce the basic technical material before taking the reader to the current state of the subject, and they include comprehensive guides to the literature. In sum, the book serves as a unique reference for the subject and can also be used for self-study or in conjunction with Knowledge Representation and Artificial Intelligence courses.
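For readers unfamiliar with the notation, a classic textbook-style Description Logic definition (a generic example, not taken from the Handbook) combines atomic concepts with constructors such as conjunction and existential restriction:

    % "A parent is a person who has at least one child who is a person."
    \mathit{Parent} \equiv \mathit{Person} \sqcap \exists \mathit{hasChild}.\mathit{Person}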
The LNCS Journal on Data Semantics is devoted to the presentation of notable work that, in one way or another, addresses research and development on issues related to data semantics. The scope of the journal ranges from theories supporting the formal definition of semantic content to innovative domain-specific applications of semantic knowledge. Topics of interest include:
– semantic caching
– data warehousing and semantic data mining
– spatial, temporal, multimedia and multimodal semantics
– semantics in data visualization
– semantic services for mobile users
– supporting tools
– applications of semantic-driven approaches
These topics are to be understood as specifically related to semantic issues. Contributions submitted to the journal and dealing with semantics of data will be considered even if they do not fall within the topics in the list. While the physical appearance of the journal issues resembles the books of the well-known Springer LNCS series, the mode of operation is that of a journal. Contributions can be freely submitted by authors and are reviewed by the Editorial Board. Contributions may also be invited, and are nevertheless carefully reviewed, as is the case for issues that contain extended versions of the best papers from major conferences addressing data semantics issues. Special issues, each focusing on a specific topic, are coordinated by guest editors once the proposal for a special issue is accepted by the Editorial Board. Finally, a journal issue may also be devoted to a single text.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops held at the 11th International Conference on Web Engineering, ICWE 2011, in Paphos, Cyprus, in June 2011. The 42 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in sections on the Third International Workshop on Lightweight Composition on the Web (ComposableWeb 2011); First International Workshop on Search, Exploration and Navigation of Web Data Sources (ExploreWeb 2011); Second International Workshop on Enterprise Crowdsourcing (EC 2011); Seventh Model-Driven Web Engineering Workshop (MDWE 2011); Second International Workshop on Quality in Web Engineering (QWE 2011); Second Workshop on the Web and Requirements Engineering (WeRE 2011); as well as the Doctoral Symposium 2011 and the ICWE 2011 Tutorials.
The issue of data quality is as old as data itself. However, the proliferation of diverse, large-scale, and often publicly available data on the Web has increased the risk of poor data quality and misleading data interpretations. At the same time, data is now exposed at a much more strategic level, e.g., through business intelligence systems, multiplying the stakes for individuals, corporations, and government agencies alike. In this setting, a lack of knowledge about data accuracy, currency, or completeness can lead to erroneous and even catastrophic results. With these changes, traditional approaches to data management in general, and data quality control specifically, are challenged....
The term “software visualisation” refers to the graphical display of the characteristics and behaviour of all aspects of software: design and analysis methods, systems, programs, and algorithms. The purpose of this book is to collect and compare different experiences of software visualisation, from both fundamental and applied viewpoints. The book is divided into four parts, covering important aspects of software visualisation. Part 1 surveys existing software visualisation tools and environments, strategies for making a software visualisation system language-independent, and program animation for the C language. Part 2 presents topics and techniques in graph drawing, which supports efficient and aesthetically pleasing visualisation; several recently developed graph-drawing systems and techniques are described. Part 3 discusses visual programming concepts and techniques for supporting parallel and heterogeneous distributed programming. Part 4 includes several case studies of software visualisation, concentrating on the broader field of software engineering, ranging from software metrics to reverse engineering.