Data-intensive science has the potential to transform scientific research and to translate scientific progress rapidly into complete solutions, policies, and economic success. But this collaborative science still lacks effective means of accessing and exchanging knowledge among scientists, researchers, and policy makers across a range of disciplines. Bringing together leaders from multiple scientific disciplines, Data-Intensive Science shows how a comprehensive integration of various techniques and technological advances can effectively harness the vast amounts of data being generated and significantly accelerate scientific progress to address some of the world's most challenging problems. In the ...
The workshop was organized by the San Diego Supercomputer Center (SDSC) and took place July 20–22, 2005, at the University of California, San Diego.
The 7 revised full papers, 11 revised medium-length papers, 6 revised short papers, and 7 demo papers, presented together with 10 poster/abstract papers describing late-breaking work, were carefully reviewed and selected from numerous submissions. Provenance has been recognized as important in a wide range of areas, including databases, workflows, knowledge representation and reasoning, and digital libraries. Accordingly, many disciplines have proposed a wide range of provenance models, techniques, and infrastructure for encoding and using provenance. The papers investigate many facets of data provenance, process documentation, data derivation, and data annotation.
As digital technologies expand the power and reach of research, they are also raising complex issues. These include complications in ensuring the validity of research data; standards that do not keep pace with the high rate of innovation; restrictions on data sharing that reduce researchers' ability to verify results and build on previous work; and huge increases in the amount of data being generated, creating severe challenges in preserving that data for long-term use. Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age examines the consequences of these changes with respect to three issues: integrity, accessibil...
Life science data integration and interoperability is one of the most challenging problems facing bioinformatics today. In the current age of the life sciences, investigators must interpret many types of information from a variety of sources: lab instruments, public databases, gene expression profiles, raw sequence traces, single nucleotide polymorphisms, chemical screening data, proteomic data, putative metabolic pathway models, and many others. Unfortunately, scientists cannot currently identify and access this information easily because of the variety of semantics, interfaces, and data formats used by the underlying data sources. Bioinformatics: Managing Scientific Data tackles ...
This book constitutes the joint refereed proceedings of the three confederated conferences, CoopIS 2003, DOA 2003, and ODBASE 2003, held in Catania, Sicily, Italy, in November 2003. The 95 revised full papers presented were carefully reviewed and selected from a total of 360 submissions. The papers are organized in topical sections on information integration and mediation, Web services, agent systems, cooperation and evolution, peer-to-peer systems, cooperative systems, trust management, workflow systems, information dissemination systems, data management, the Semantic Web, data mining and classification, ontology management, temporal and spatial data, data semantics and metadata, real-time systems, ubiquitous systems, adaptability and mobility, systems engineering, software engineering, and transactions.
Introduction to Computational Modeling Using C and Open-Source Tools presents the fundamental principles of computational models from a computer science perspective. It explains how to implement these models using the C programming language. The software tools used in the book include the GNU Scientific Library (GSL), which is a free software library ...
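To give a flavour of the approach, here is a minimal sketch of the kind of program the book builds: numerically integrating a function with GSL. The integrand, interval, and tolerances are illustrative choices, not examples taken from the book.

```c
/* A minimal sketch, assuming GSL is installed.
 * Compile with: gcc sketch.c -lgsl -lgslcblas -lm */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_integration.h>

/* Integrand f(x) = exp(-x^2), chosen only for illustration. */
static double f(double x, void *params) {
    (void)params;                 /* no parameters needed here */
    return exp(-x * x);
}

int main(void) {
    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);
    gsl_function F = { .function = &f, .params = NULL };
    double result, abserr;

    /* Adaptive quadrature on [0, 1] with absolute/relative tolerances. */
    gsl_integration_qags(&F, 0.0, 1.0, 1e-8, 1e-8, 1000, w, &result, &abserr);
    printf("integral = %.10f (estimated error %.2e)\n", result, abserr);

    gsl_integration_workspace_free(w);
    return 0;
}
```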
Gain critical insight into the parallel I/O ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially for storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware, middleware, and applications. The book then traverses up the I/O software stack. The second ...
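To make the middleware layer concrete, here is a hedged sketch, not drawn from the book, of the canonical MPI-IO pattern: every rank writes its own slice of a shared file through a collective call, letting the I/O middleware aggregate requests. The file name and buffer size are invented for illustration.

```c
/* A minimal MPI-IO sketch.
 * Compile and run: mpicc pario.c -o pario && mpirun -np 4 ./pario */
#include <mpi.h>
#include <stdlib.h>

#define N 1024  /* elements per rank, an arbitrary choice */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Fill a local buffer with rank-dependent values. */
    double *buf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) buf[i] = rank + i * 1e-3;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset; the _all (collective) variant
     * lets the middleware coordinate and aggregate the requests. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```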
This set compiles more than 240 chapters from the world's leading experts to provide a foundational body of research to drive further evolution and innovation in these next-generation technologies and their applications, the surface of which scientific, technological, and commercial communities have only begun to scratch.
Contents: Mathematical preliminaries for lossless compression -- Huffman coding -- Arithmetic coding -- Dictionary techniques -- Context-based compression -- Lossless image compression -- Mathematical preliminaries for lossy coding -- Scalar quantization -- Vector quantization -- Differential encoding -- Mathematical preliminaries for transforms, subbands, and wavelets -- Transform coding -- Subband coding -- Wavelets -- Wavelet-based image compression -- Audio coding -- Analysis/synthesis and analysis-by-synthesis schemes -- Video compression -- Probability and random processes -- A brief review of matrix concepts -- The root lattices.
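As a small, self-contained illustration of one topic in this list, the sketch below implements a uniform (midrise) scalar quantizer and checks the textbook rule of thumb that its mean squared error is roughly delta^2/12. The test signal and step size are invented for the example; the book develops the theory far more carefully.

```c
/* A toy uniform scalar quantizer. Compile with: gcc quant.c -lm */
#include <stdio.h>
#include <math.h>

/* Midrise quantizer: map x to the centre of its bin of width delta. */
static double quantize(double x, double delta) {
    return delta * (floor(x / delta) + 0.5);
}

int main(void) {
    const double delta = 0.25;   /* arbitrary step size */
    double mse = 0.0;
    for (int i = 0; i < 100; i++) {
        double x = sin(0.1 * i);           /* toy input signal */
        double q = quantize(x, delta);
        mse += (x - q) * (x - q);
    }
    mse /= 100.0;
    /* For a fine uniform quantizer, MSE is approximately delta^2 / 12. */
    printf("measured MSE = %.6f, delta^2/12 = %.6f\n",
           mse, delta * delta / 12.0);
    return 0;
}
```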