Query reformulation refers to the process of translating a source query (a request for information in some high-level logic-based language) into a target plan that abides by certain interface restrictions. Many practical problems in data management can be seen as instances of the reformulation problem: for example, translating an SQL query written over a set of base tables into another query written over a set of views; implementing a query by translating it into a program that calls a set of database APIs; or implementing a query using a collection of web services. In this book we approach query reformulation in a very general setting that encompasses all ...
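To make the first of these examples concrete, here is a minimal sketch of a source query over a base table and an equivalent target rewriting over a view, using Python's sqlite3 module. The orders schema, the big_orders view, and all column names are made up for illustration; they are not from the book.

```python
import sqlite3

# Hypothetical schema: a base table plus a view that a restricted
# interface might expose instead of the table itself.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(order_id INTEGER, customer TEXT, amount REAL);
    CREATE VIEW big_orders AS
        SELECT customer, amount FROM orders WHERE amount > 100;
    INSERT INTO orders VALUES (1, 'ada', 250.0), (2, 'bob', 40.0);
""")

# Source query, written against the base table ...
source = "SELECT customer FROM orders WHERE amount > 100"
# ... and an equivalent rewriting that only touches the view: the kind
# of target plan a reformulation engine would have to produce when the
# interface exposes big_orders but not orders.
target = "SELECT customer FROM big_orders"

assert conn.execute(source).fetchall() == conn.execute(target).fetchall()
```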
The topic of using views to answer queries has been popular for a few decades now, as it cuts across domains such as query optimization, information integration, data warehousing, website design and, recently, database-as-a-service and data placement in cloud systems. This book assembles foundational work on answering queries using views in a self-contained manner, with an effort to choose material that constitutes the backbone of the research. It presents efficient algorithms and covers the following problems: query containment; rewriting queries using views in various logical languages; equivalent rewritings and maximally contained rewritings; and computing certain answers in the data-inte...
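As an illustration of the query-containment problem listed above, the following sketch implements the classical homomorphism test for conjunctive queries (Q1 is contained in Q2 iff there is a homomorphism from Q2 into Q1 that preserves head variables). The query encoding, relation name R, and the sample queries are assumptions made for this example, not the book's own code.

```python
from itertools import product

# A conjunctive query is (head_variables, body_atoms);
# an atom is (relation_name, tuple_of_variable_names).

def contains(q1, q2):
    """True if Q1 is contained in Q2, i.e., every answer of Q1 is an
    answer of Q2, via the homomorphism criterion: map each atom of Q2
    onto an atom of Q1 (Q1's body acts as a canonical database)."""
    head1, body1 = q1
    head2, body2 = q2
    # Candidate targets: atoms of q1 with matching relation and arity.
    candidates = [
        [a1 for a1 in body1 if a1[0] == a2[0] and len(a1[1]) == len(a2[1])]
        for a2 in body2
    ]
    for choice in product(*candidates):
        mapping, ok = {}, True
        for (_, args2), (_, args1) in zip(body2, choice):
            for v2, v1 in zip(args2, args1):
                if mapping.setdefault(v2, v1) != v1:  # conflicting image
                    ok = False
                    break
            if not ok:
                break
        # Head variables of q2 must map onto q1's head variables.
        if ok and all(mapping.get(h2) == h1 for h1, h2 in zip(head1, head2)):
            return True
    return False

# Q1(x) :- R(x, y), R(y, z)   "x starts a path of length two"
q1 = (("x",), [("R", ("x", "y")), ("R", ("y", "z"))])
# Q2(u) :- R(u, v)            "u starts an edge"
q2 = (("u",), [("R", ("u", "v"))])
print(contains(q1, q2))  # True: every answer of Q1 is an answer of Q2
print(contains(q2, q1))  # False: the converse does not hold
```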
This book constitutes the refereed proceedings of the 4th International Conference on Data Management in Grid and Peer-to-Peer Systems, Globe 2011, held in Toulouse, France, in September 2011 in conjunction with DEXA 2011. The 12 revised full papers presented were carefully reviewed and selected from 18 submissions. The papers are organized in topical sections on data storage and replication, semantics for P2P systems and performance evaluation, resource discovery and routing in mobile P2P networks, and data stream systems and large-scale distributed applications.
This edition presents the most prominent topics and applications of digital image processing, analysis, and computer graphics in the field of cultural heritage preservation. The text assumes prior knowledge of digital image processing and computer graphics fundamentals. Each chapter contains a table of contents, illustrations, and figures that elucidate the presented concepts in detail, as well as a chapter summary and a bibliography for further reading. Well-known experts cover a wide range of topics and related applications, including spectral imaging, automated restoration, computational reconstruction, digital reproduction, and 3D models.
Communities serve as basic structural building blocks for understanding the organization of many real-world networks, including social, biological, collaboration, and communication networks. Recently, community search over graphs has attracted increasing attention, with the focus shifting from small, simple, and static graphs to big, evolving, attributed, and location-based graphs. In this book, we first review the basic concepts of networks, communities, and various kinds of dense subgraph models. We then survey the state of the art in community search techniques on various kinds of networks across different application areas. Specifically, we discuss cohesive community search, attributed community s...
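As a toy illustration of one dense-subgraph model of the kind surveyed here, the following sketch computes a k-core (the maximal subgraph in which every node has degree at least k) with networkx. The graph and the choice k = 2 are made up for the example.

```python
import networkx as nx

# A dense triangle plus a sparse tail hanging off it.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("a", "c"), ("b", "c"),   # triangle
    ("c", "d"), ("d", "e"),               # tail
])

# The 2-core peels away low-degree nodes (e, then d), leaving the
# cohesive community {a, b, c}.
core = nx.k_core(G, k=2)
print(sorted(core.nodes()))  # ['a', 'b', 'c']
```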
This two-volume set LNCS 10827 and LNCS 10828 constitutes the refereed proceedings of the 23rd International Conference on Database Systems for Advanced Applications, DASFAA 2018, held in Gold Coast, QLD, Australia, in May 2018. The 83 full papers, 21 short papers, 6 industry papers, and 8 demo papers were carefully selected from a total of 360 submissions. The papers are organized around the following topics: network embedding; recommendation; graph and network processing; social network analytics; sequence and temporal data processing; trajectory and streaming data; RDF and knowledge graphs; text and data mining; medical data mining; security and privacy; search and information retrieval; query processing and optimizations; data quality and crowdsourcing; learning models; multimedia data processing; and distributed computing.
This book contains a number of chapters on transactional database concurrency control. The volume's sequence of chapters can be summarized in two sentences. First, traditional locking techniques can be improved in multiple dimensions, notably in lock scopes (sizes), lock modes (increment, decrement, and more), lock durations (late acquisition, early release), and lock acquisition sequence (to avoid deadlocks). Second, even if some of these improvements can be transferred to optimistic concurrency control, notably a fine granularity of concurrency control with serializable transaction isolation including phantom protection, pessimistic concurrency control is categorically superior to optimistic concurrency control, i.e., independent of application, workload, deployment, hardware, and software implementation.
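One of the lock-acquisition-sequence improvements mentioned above, acquiring locks in a fixed global order so that a circular wait can never form, can be sketched in a few lines of Python. The account names and transfer amounts are illustrative only; this is not the book's own code.

```python
import threading

locks = {"acct_a": threading.Lock(), "acct_b": threading.Lock()}
balances = {"acct_a": 100, "acct_b": 100}

def transfer(src, dst, amount):
    # Sort the resources by name: every thread touching the same pair
    # of accounts acquires their locks in the same sequence, which
    # rules out the circular wait a deadlock requires.
    first, second = sorted((src, dst))
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

t1 = threading.Thread(target=transfer, args=("acct_a", "acct_b", 10))
t2 = threading.Thread(target=transfer, args=("acct_b", "acct_a", 20))
t1.start(); t2.start(); t1.join(); t2.join()
print(balances)  # {'acct_a': 110, 'acct_b': 90}
```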
The Resource Description Framework (RDF, for short) is set to deliver many of the original promises of semi-structured data: flexible structure, optional schema, and rich, flexible Universal Resource Identifiers as a basis for information sharing. Moreover, RDF is uniquely positioned to benefit from the efforts of scientific communities studying databases, knowledge representation, and Web technologies. As a consequence, the RDF data model is used in a variety of applications today for integrating knowledge and information: in open Web or government data via the Linked Open Data initiative, in scientific domains such as bioinformatics, and more recently in search engines and personal assistants o...
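A minimal sketch of the data model this blurb describes: RDF facts are (subject, predicate, object) triples, with URIs serving as globally shared names. The example uses the rdflib library and a made-up resource URI.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

g = Graph()
ada = URIRef("http://example.org/people/ada")  # illustrative URI
g.add((ada, FOAF.name, Literal("Ada Lovelace")))

# Optional-schema flexibility: a second triple can be attached to the
# same subject without declaring any structure up front.
g.add((ada, FOAF.mbox, URIRef("mailto:ada@example.org")))

for subj, pred, obj in g:
    print(subj, pred, obj)
```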
This book constitutes the refereed proceedings of the 20th International Conference on The Semantic Web, ESWC 2023, held in Hersonissos, Crete, Greece, during May 28–June 1, 2023. The 41 full papers included in this book were carefully reviewed and selected from 167 submissions. They are organized in topical sections as follows: research, resource and in-use.
Data profiling refers to the activity of collecting data about data, i.e., metadata. Most IT professionals and researchers who work with data have engaged in data profiling, at least informally, to understand and explore an unfamiliar dataset or to determine whether a new dataset is appropriate for a particular task at hand. Data profiling results are also important in a variety of other situations, including query optimization, data integration, and data cleaning. Simple metadata are statistics, such as the number of rows and columns, schema and datatype information, the number of distinct values, statistical value distributions, and the number of null or empty values in each column. More...
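The simple column statistics listed above are easy to sketch with pandas; the dataset here is made up for illustration.

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["Berlin", "Paris", "Berlin", None],
    "population": [3_600_000, 2_100_000, 3_600_000, None],
})

# Per-column profile: datatype, distinct-value count, null count.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "distinct": df.nunique(),      # number of distinct values
    "nulls": df.isna().sum(),      # null or empty values per column
})
print(f"rows={len(df)}, columns={len(df.columns)}")
print(profile)
```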