The big data era is upon us: data are being generated, analyzed, and used at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Since the value of data explodes when they can be linked and fused with other data, addressing the big data integration (BDI) challenge is critical to realizing the promise of big data. BDI differs from traditional data integration along the dimensions of volume, velocity, variety, and veracity. First, not only can individual data sources contain huge volumes of data, but the number of data sources now runs into the millions. Second, because of the rate at which newly collected data are made available, many of the data sources a...
This book surveys fundamental concepts and practical methods for creating and curating large knowledge bases.
The issue of data quality is as old as data itself. However, the proliferation of diverse, large-scale, and often publicly available data on the Web has increased the risk of poor data quality and misleading data interpretations. At the same time, data is now exposed at a much more strategic level, e.g., through business intelligence systems, raising the stakes manifold for individuals, corporations, and government agencies. There, a lack of knowledge about data accuracy, currency, or completeness can lead to erroneous and even catastrophic results. With these changes, traditional approaches to data management in general, and data quality control specifically, are challenged....
Data quality is one of the most important problems in data management. A database system typically aims to support the creation, maintenance, and use of large amounts of data, focusing on the quantity of data. However, real-life data are often dirty: inconsistent, duplicated, inaccurate, incomplete, or stale. Dirty data in a database routinely generate misleading or biased analytical results and decisions, and lead to loss of revenue, credibility, and customers. With this comes the need for data quality management. In contrast to traditional data management tasks, data quality management enables the detection and correction of errors in the data, syntactic or semantic, in order to improve the...
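To make the detection task concrete, here is a minimal sketch of rule-based checks over a toy table. The records, the functional dependency zip -> city, and the helper names (check_fd, find_duplicates, find_incomplete) are illustrative assumptions, not taken from the book:

```python
# Minimal sketch of rule-based dirty-data detection over toy records.
# The FD zip -> city and the record set are illustrative assumptions.

records = [
    {"id": 1, "name": "Alice", "zip": "10001", "city": "New York"},
    {"id": 2, "name": "Bob",   "zip": "10001", "city": "Boston"},    # inconsistent
    {"id": 3, "name": "Alice", "zip": "10001", "city": "New York"},  # duplicate of 1
    {"id": 4, "name": "Carol", "zip": "94105", "city": None},        # incomplete
]

def check_fd(rows, lhs, rhs):
    """Flag rows that violate the functional dependency lhs -> rhs."""
    seen, violations = {}, []
    for r in rows:
        key, val = r[lhs], r[rhs]
        if val is None:
            continue
        if key in seen and seen[key] != val:
            violations.append(r["id"])
        seen.setdefault(key, val)
    return violations

def find_duplicates(rows, keys):
    """Flag rows whose values on `keys` exactly repeat an earlier row."""
    seen, dups = set(), []
    for r in rows:
        sig = tuple(r[k] for k in keys)
        if sig in seen:
            dups.append(r["id"])
        seen.add(sig)
    return dups

def find_incomplete(rows):
    """Flag rows with at least one missing value."""
    return [r["id"] for r in rows if any(v is None for v in r.values())]

print("FD violations:", check_fd(records, "zip", "city"))                  # [2]
print("Duplicates:   ", find_duplicates(records, ["name", "zip", "city"])) # [3]
print("Incomplete:   ", find_incomplete(records))                          # [4]
```

Correction, the second half of the task, is harder than detection: once row 2 is flagged, one still has to decide whether the zip or the city is the wrong value.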
Enabling heterogeneous information systems to cooperate and communicate has become crucial, especially in application areas such as e-business, Web-based mash-ups, and the life sciences. Such cooperating systems have to match, exchange, transform, and integrate large data sets from different sources and of different structure automatically and efficiently, in order to enable seamless data exchange and transformation. The book edited by Bellahsene, Bonifati, and Rahm provides an overview of the ways in which schema and ontology matching and mapping tools have addressed these requirements and points to the open technical challenges. The contributions from leading experts are structured i...
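As a flavor of the simplest technique in this space, the sketch below matches attribute names between two toy schemas using Jaccard similarity over character trigrams; the schemas and the similarity measure are illustrative assumptions, not any particular tool from the book:

```python
# Minimal sketch of name-based schema matching between two toy schemas.
# Trigram Jaccard similarity is one of the simplest matchers; real tools
# combine many such signals (structure, instances, constraints).

def trigrams(s):
    s = f"  {s.lower()} "                       # pad so short names still yield trigrams
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)          # Jaccard over character trigrams

source = ["CustomerName", "PhoneNo", "ZipCode"]
target = ["cust_name", "telephone", "postal_code", "zip"]

for s in source:
    best = max(target, key=lambda t: similarity(s, t))
    print(f"{s:13s} -> {best:12s} (score {similarity(s, best):.2f})")
```

Even this crude matcher pairs CustomerName with cust_name and ZipCode with zip; the hard cases (PhoneNo vs. telephone, where names share almost no characters) are what motivate the richer matching and mapping techniques the book covers.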
On the Web, a massive amount of user-generated content is available through various channels (texts, tweets, Web tables, databases, multimedia-sharing platforms, etc.). Conflicting information, rumors, and erroneous or fake content can spread easily across multiple sources, making it hard to distinguish what is true from what is not. This book gives an overview of fundamental issues and recent contributions to ascertaining the veracity of data in the era of Big Data. The text is organized into six chapters, focusing on structured data extracted from texts. Chapter 1 introduces the problem of ascertaining the veracity of data in a multi-source and evolving context. Issues relate...
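A minimal sketch of the voting-style truth discovery such books survey is given below: it alternates between scoring candidate values by the trust of the sources asserting them and re-estimating source trust from agreement with the current answers. The claims and the fixed-point scheme are illustrative assumptions, not the book's algorithm:

```python
# Minimal sketch of iterative truth discovery over conflicting source claims.
# claims[source][object] = value that source asserts for that object.

claims = {
    "siteA": {"capital_of_fr": "Paris", "capital_of_au": "Canberra"},
    "siteB": {"capital_of_fr": "Paris", "capital_of_au": "Sydney"},
    "siteC": {"capital_of_fr": "Lyon",  "capital_of_au": "Canberra"},
}

trust = {s: 1.0 for s in claims}                 # start with uniform source trust

for _ in range(10):                              # alternate until scores stabilize
    # Score each candidate value by the total trust of sources asserting it.
    scores = {}
    for s, asserted in claims.items():
        for obj, val in asserted.items():
            scores.setdefault(obj, {})
            scores[obj][val] = scores[obj].get(val, 0.0) + trust[s]
    # Pick the currently best-supported value per object.
    truths = {obj: max(vals, key=vals.get) for obj, vals in scores.items()}
    # Re-estimate trust as each source's agreement rate with the current truths.
    for s, asserted in claims.items():
        agree = sum(truths[o] == v for o, v in asserted.items())
        trust[s] = agree / len(asserted)

print(truths)   # {'capital_of_fr': 'Paris', 'capital_of_au': 'Canberra'}
print(trust)    # siteA ends up most trusted; siteB and siteC less so
```

Naive majority voting breaks down when many unreliable sources copy the same mistake; weighting votes by estimated source trust, as above, is the basic remedy the truth discovery literature builds on.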
This two-volume set, LNCS 7238 and LNCS 7239, constitutes the refereed proceedings of the 17th International Conference on Database Systems for Advanced Applications, DASFAA 2012, held in Busan, South Korea, in April 2012. The 44 revised full papers and 8 short papers, presented together with 2 invited keynote papers, 8 industrial papers, 8 demo presentations, 4 tutorials, and 1 panel paper, were carefully reviewed and selected from a total of 159 submissions. The topics covered are query processing and optimization; data semantics; XML and semi-structured data; data mining and knowledge discovery; privacy and anonymity; data management on the Web; graphs and data mining applications; temporal and spatial data; top-k and skyline query processing; information retrieval and recommendation; indexing and search systems; cloud computing and scalability; memory-based query processing; semantic and decision support systems; social data; and data mining.
Data profiling refers to the activity of collecting data about data, i.e., metadata. Most IT professionals and researchers who work with data have engaged in data profiling, at least informally, to understand and explore an unfamiliar dataset or to determine whether a new dataset is appropriate for a particular task at hand. Data profiling results are also important in a variety of other situations, including query optimization, data integration, and data cleaning. Simple metadata are statistics such as the number of rows and columns, schema and datatype information, the number of distinct values, statistical value distributions, and the number of null or empty values in each column. More...
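A minimal sketch of single-column profiling over a toy list-of-dicts table is shown below, computing the simple metadata named above (row/column counts, datatypes, distinct values, value distributions, null counts); the profile function and its report layout are illustrative assumptions:

```python
# Minimal sketch of single-column data profiling over a toy table.
from collections import Counter
import pprint

rows = [
    {"name": "Alice", "age": 34,   "email": "a@example.com"},
    {"name": "Bob",   "age": None, "email": "b@example.com"},
    {"name": "Alice", "age": 34,   "email": None},
]

def profile(table):
    columns = list(table[0].keys())
    report = {"num_rows": len(table), "num_columns": len(columns), "columns": {}}
    for col in columns:
        values = [r[col] for r in table]
        non_null = [v for v in values if v is not None]
        report["columns"][col] = {
            "nulls": len(values) - len(non_null),            # null/empty count
            "distinct": len(set(non_null)),                  # distinct values
            "datatypes": sorted({type(v).__name__ for v in non_null}),
            "top_values": Counter(non_null).most_common(2),  # value distribution
        }
    return report

pprint.pprint(profile(rows))
```

Harder profiling tasks, such as discovering keys, functional dependencies, or inclusion dependencies across columns, require searching an exponential space of column combinations, which is why efficient profiling algorithms are a research topic in their own right.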