This book introduces the various notions that the research community has studied for defining the correctness of a query answer, and presents authentication mechanisms for a wide variety of queries in the context of relational and spatial databases, text retrieval, and data streams. It also explains the cryptographic protocols from which the authentication mechanisms derive their security properties.
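One widely used primitive in this area is the Merkle hash tree, which lets a client verify that a returned record really belongs to the server's data using only a small proof. The sketch below is illustrative only; the book's own constructions may differ, and the record values are invented.

```python
import hashlib

# Minimal Merkle hash tree: a common building block for authenticated
# query answers (illustrative sketch, not the book's specific scheme).

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])      # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def proof(leaves, idx):
    """Sibling hashes needed to recompute the root for leaves[idx]."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1                    # sibling index at this level
        path.append((level[sib], sib < idx))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

records = [b"r1", b"r2", b"r3", b"r4"]
root = merkle_root(records)
ok = verify(records[1], proof(records, 1), root)
```

The client needs only the signed root and a logarithmic number of sibling hashes to check each answer record.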
The chase has long been used as a central tool to analyze dependencies and their effect on queries. It has been applied to different relevant problems in database theory such as query optimization, query containment and equivalence, dependency implication, and database schema design. Recent years have seen a renewed interest in the chase as an important tool in several database applications, such as data exchange and integration, query answering in incomplete data, and many others. It is well known that the chase algorithm might be non-terminating and thus, in order for it to find practical applicability, it is crucial to identify cases where its termination is guaranteed. Another important ...
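The core chase step for an equality-generating dependency (such as a functional dependency) can be sketched as follows; the relation, the FD A → B, and the labeled-null convention are toy choices for illustration, not drawn from the book.

```python
# Sketch of the chase with a functional dependency A -> B (an
# equality-generating dependency): whenever two tuples agree on A but
# disagree on B, equate their B-values, preferring constants over
# labeled nulls (strings starting with "_").

def chase_fd(tuples, lhs, rhs):
    """Apply the FD lhs -> rhs to a list of dict-tuples until fixpoint."""
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(tuples):
            for u in tuples[i + 1:]:
                if all(t[a] == u[a] for a in lhs) and t[rhs] != u[rhs]:
                    keep, drop = t[rhs], u[rhs]
                    if isinstance(keep, str) and keep.startswith("_"):
                        keep, drop = drop, keep  # prefer the constant
                    # (a full chase would *fail* here if both values
                    #  were distinct constants)
                    for v in tuples:
                        for a in v:
                            if v[a] == drop:
                                v[a] = keep
                    changed = True
    return tuples

# Two tuples agree on A; the labeled null "_x" gets equated with 2.
r = chase_fd([{"A": 1, "B": "_x"}, {"A": 1, "B": 2}], ["A"], "B")
```

With equality-generating dependencies alone this loop always terminates; the termination problems the text mentions arise once tuple-generating dependencies can introduce fresh labeled nulls.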
While classic data management focuses on the data itself, research on Business Processes also considers the context in which this data is generated and manipulated, namely the processes, users, and goals that this data serves. This gives analysts a better perspective on the organizational needs centered around the data. As such, this research is of fundamental importance. Much of the success of database systems in the last decade is due to the beauty and elegance of the relational model and its declarative query languages, combined with a rich spectrum of underlying evaluation and optimization techniques, and efficient implementations. Much like the case for traditional database resea...
There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art techniques for extracting, understanding, and integrating the query interfaces of deep Web data sources....
After the traditional document-centric Web 1.0 and the user-generated-content-focused Web 2.0, Web 3.0 has become a repository of an ever-growing variety of Web resources that include data and services associated with enterprises, social networks, sensors, cloud, as well as mobile and other devices that constitute the Internet of Things. These pose unprecedented challenges in terms of heterogeneity (variety), scale (volume), and continuous changes (velocity), as well as present corresponding opportunities if they can be exploited. Just as semantics has played a critical role in dealing with data heterogeneity in the past to provide interoperability and integration, it is playing an even more cri...
This book constitutes the refereed proceedings of the First International Information Security Practice and Experience Conference, ISPEC 2005, held in Singapore in April 2005. The 35 revised full papers presented were carefully reviewed and selected from more than 120 submissions. The papers are organized in topical sections on network security, cryptographic techniques, secure architectures, access control, intrusion detection, data security, and applications and case studies.
Traditional theory and practice of write-ahead logging and of database recovery focus on three failure classes: transaction failures (typically due to deadlocks) resolved by transaction rollback; system failures (typically power or software faults) resolved by restart with log analysis, "redo," and "undo" phases; and media failures (typically hardware faults) resolved by restore operations that combine multiple types of backups and log replay. The recent addition of single-page failures and single-page recovery has opened new opportunities far beyond the original aim of immediate, lossless repair of single-page wear-out in novel or traditional storage hardware. In the contexts of system and ...
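The three restart phases named above can be sketched over a toy write-ahead log; the log-record format, page model, and values below are invented for illustration and are far simpler than a real recovery implementation.

```python
# Toy sketch of restart recovery over a write-ahead log, illustrating
# the analysis, "redo," and "undo" phases. Record formats are invented:
# ("T1", "update", page, before, after) or ("T1", "commit").

def recover(log):
    # Analysis: committed transactions are winners; the rest are losers.
    winners = {rec[0] for rec in log if rec[1] == "commit"}
    pages = {}
    # Redo: repeat history by replaying every update in log order.
    for rec in log:
        if rec[1] == "update":
            _txn, _, page, _before, after = rec
            pages[page] = after
    # Undo: roll back loser transactions in reverse log order.
    for rec in reversed(log):
        if rec[1] == "update" and rec[0] not in winners:
            _txn, _, page, before, _after = rec
            pages[page] = before
    return pages

log = [
    ("T1", "update", "P1", 0, 10),
    ("T2", "update", "P2", 0, 20),
    ("T1", "commit"),
]
pages = recover(log)  # T1's update survives; T2's is undone
```

Media failures and the single-page recovery the text highlights would add restore-from-backup and per-page log replay on top of this skeleton.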
This book constitutes the refereed proceedings of the 9th International Conference on Extending Database Technology, EDBT 2004, held in Heraklion, Crete, Greece, in March 2004. The 42 revised full papers presented together with 2 industrial application papers, 15 software demos, and 3 invited contributions were carefully reviewed and selected from 294 submissions. The papers are organized in topical sections on distributed, mobile and peer-to-peer database systems; data mining and knowledge discovery; trustworthy database systems; innovative query processing techniques for XML data; data and information on the web; query processing techniques for spatial databases; foundations of query processing; advanced query processing and optimization; query processing techniques for data and schemas; multimedia and quality-aware systems; indexing techniques; and imprecise sequence pattern queries.
This book constitutes the refereed proceedings of the Second International Workshop on Data Integration in the Life Sciences, DILS 2005, held in San Diego, CA, USA in July 2005. The 20 revised full papers presented together with 8 revised posters and demonstration papers, 2 keynote articles and 5 invited position statements were carefully reviewed and selected from 50 initial submissions. The papers are organized in topical sections on user applications, ontologies, data integration, and others and address all current issues in data integration from the life science point of view.
Large-scale, highly interconnected networks, which are often modeled as graphs, pervade both our society and the natural world around us. Uncertainty, on the other hand, is inherent in the underlying data due to a variety of reasons, such as noisy measurements, lack of precise information needs, inference and prediction models, or explicit manipulation, e.g., for privacy purposes. Therefore, uncertain, or probabilistic, graphs are increasingly used to represent noisy linked data in many emerging application scenarios, and they have recently become a hot topic in the database and data mining communities. Many classical algorithms such as reachability and shortest path queries become #P-comple...
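Because exact reachability probabilities are intractable in general, a standard workaround is Monte Carlo sampling over possible worlds: keep each edge independently with its probability and count the worlds in which the target is reached. The graph below is a toy example (two disjoint paths, each surviving with probability 0.5, so the exact answer is 0.75), not data from the book.

```python
import random

# Monte Carlo sketch of reachability in a probabilistic graph: sample
# possible worlds and estimate Pr[dst reachable from src].

def reachable(edges, src, dst):
    seen, frontier = {src}, [src]
    while frontier:
        u = frontier.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return dst in seen

def reach_prob(pedges, src, dst, samples=20000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        # One possible world: keep each edge with its probability.
        world = [(a, b) for a, b, p in pedges if rng.random() < p]
        hits += reachable(world, src, dst)
    return hits / samples

# Two independent s->t paths, each present with probability 0.5:
# exact reachability probability is 1 - 0.5 * 0.5 = 0.75.
g = [("s", "a", 0.5), ("a", "t", 1.0), ("s", "b", 0.5), ("b", "t", 1.0)]
estimate = reach_prob(g, "s", "t")
```

The estimate converges at the usual 1/sqrt(samples) rate, which is why sampling and sketching techniques dominate practical systems for uncertain graphs.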