Data-intensive science has the potential to transform scientific research and quickly translate scientific progress into complete solutions, policies, and economic success. But this collaborative science still lacks effective access to, and exchange of, knowledge among scientists, researchers, and policy makers across a range of disciplines. Bringing together leaders from multiple scientific disciplines, Data-Intensive Science shows how a comprehensive integration of various techniques and technological advances can effectively harness the vast amounts of data being generated and significantly accelerate scientific progress to address some of the world's most challenging problems.
Data-intensive computing refers to capturing, managing, analyzing, and understanding data at volumes and rates that push the frontiers of current technologies. The challenge of data-intensive computing is to provide the hardware architectures and related software systems and techniques capable of transforming ultra-large data into valuable knowledge. Handbook of Data Intensive Computing is written by leading international experts in the field; contributors from academia, research laboratories, and private industry address both theory and application. Data-intensive computing demands a fundamentally different set of principles from mainstream computing. Data-intensive applications are typically well suited to large-scale parallelism over the data and also require an extremely high degree of fault tolerance, reliability, and availability. Real-world examples are provided throughout the book. Handbook of Data Intensive Computing is designed as a reference for practitioners and researchers, including programmers, computer and system infrastructure designers, and developers. The book can also be of benefit to business managers, entrepreneurs, and investors.
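To make that data-parallel pattern concrete, here is a minimal C sketch (an illustration under assumed inputs, not code from the handbook) in which OpenMP splits a large array across threads and merges per-thread partial sums with a reduction; real data-intensive systems layer distribution and fault tolerance on top of this basic pattern.

    /* Minimal sketch of parallelism over the data: each thread reduces a
       disjoint slice of a large array. Dataset size and contents are
       arbitrary assumptions. Compile with: gcc -fopenmp sketch.c */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const long n = 100000000L;               /* hypothetical dataset size */
        double *data = malloc(n * sizeof *data);
        if (!data) return 1;

        for (long i = 0; i < n; i++)             /* synthetic stand-in data */
            data[i] = (i % 1000) / 1000.0;

        double sum = 0.0;
        /* Iterations are independent, so the runtime can split the data
           across all available cores and combine the partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += data[i];

        printf("mean = %f\n", sum / n);
        free(data);
        return 0;
    }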
Introduction to Computational Modeling Using C and Open-Source Tools presents the fundamental principles of computational models from a computer science perspective. It explains how to implement these models using the C programming language. The software tools used in the book include the GNU Scientific Library (GSL), which is a free software library.
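As a flavor of the GSL-based modeling the book teaches, the sketch below integrates a toy exponential-decay model dy/dt = -k*y with GSL's ODE driver. The model, parameters, and output format are assumptions chosen for illustration, not an example reproduced from the book; it builds with gcc model.c -lgsl -lgslcblas -lm.

    #include <stdio.h>
    #include <gsl/gsl_errno.h>
    #include <gsl/gsl_odeiv2.h>

    /* Toy computational model: exponential decay dy/dt = -k*y
       (k is an assumed illustrative parameter). */
    static int decay(double t, const double y[], double dydt[], void *params) {
        double k = *(double *)params;
        (void)t;                          /* autonomous system; t unused */
        dydt[0] = -k * y[0];
        return GSL_SUCCESS;
    }

    int main(void) {
        double k = 0.5;
        gsl_odeiv2_system sys = { decay, NULL, 1, &k };
        gsl_odeiv2_driver *d = gsl_odeiv2_driver_alloc_y_new(
            &sys, gsl_odeiv2_step_rkf45, 1e-6, 1e-8, 0.0);

        double t = 0.0, y[1] = { 1.0 };   /* initial condition y(0) = 1 */
        for (int i = 1; i <= 10; i++) {
            if (gsl_odeiv2_driver_apply(d, &t, (double)i, y) != GSL_SUCCESS)
                break;                    /* stop if the stepper fails */
            printf("t = %4.1f  y = %.6f\n", t, y[0]);
        }
        gsl_odeiv2_driver_free(d);
        return 0;
    }

A useful sanity check for a model like this is the closed-form solution y(t) = exp(-k*t), which the numerical output should match to within the requested tolerances.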
Gain Critical Insight into the Parallel I/O Ecosystem Parallel I/O is an integral component of modern high performance computing (HPC), especially for storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware, middleware, and applications. The book then traverses up the I/O software stack.
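For orientation, the sketch below shows the collective-write pattern that sits near the bottom of that stack, using MPI-IO: each rank writes a disjoint block of one shared file. The file name and block size are arbitrary assumptions, and this is an illustrative sketch rather than an example from the book.

    #include <mpi.h>

    #define BLOCK 1024   /* doubles per rank; size chosen for illustration */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double buf[BLOCK];
        for (int i = 0; i < BLOCK; i++)
            buf[i] = rank + i / (double)BLOCK;   /* rank-identifiable data */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Disjoint offsets per rank; the collective variant lets the
           MPI-IO layer aggregate small requests into large ones. */
        MPI_Offset offset = (MPI_Offset)rank * BLOCK * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

Run with, e.g., mpirun -np 4 ./a.out; higher layers such as HDF5 and PnetCDF ultimately funnel into collective calls of this kind.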
By using computer simulations in research and development, computational science and engineering (CSE) enables empirical inquiry where traditional experimentation is difficult, inefficient, or prohibitively expensive. The Handbook of Research on Computational Science and Engineering: Theory and Practice is a reference for interested researchers and decision makers who want a timely introduction to the possibilities in CSE, whether to advance their ongoing research and applications or to discover new resources and cutting-edge developments. Rather than reporting results obtained using CSE models, this comprehensive survey captures the architecture of the cross-disciplinary field, explores the long-term implications of technology choices, alerts readers to the hurdles facing CSE, and identifies trends in future development.
Industrial Applications of High-Performance Computing: Best Global Practices offers a global overview of high-performance computing (HPC) for industrial applications, along with a discussion of software challenges, business models, access models (e.g., cloud computing), public-private partnerships, simulation and modeling, visualization, and big data analytics.
Geosciences, and in particular numerical weather prediction, demand the highest levels of available computer power. The European Centre for Medium-Range Weather Forecasts, with its experience in using supercomputers in this field, organizes a workshop every other year that brings together manufacturers, computer scientists, researchers, and operational users to share their experiences and to learn about the latest developments. This volume provides an excellent overview of the latest achievements and plans for the use of new parallel techniques in the fields of meteorology, climatology, and oceanography.
Unmatched: 50 Years of Supercomputing: A Personal Journey Accompanying the Evolution of a Powerful Tool The rapid and extraordinary progress of supercomputing over the past half-century is a powerful demonstration of our relentless drive to understand and shape the world around us. In this book, David Barkai offers a unique and compelling account of this remarkable technological journey, drawing on his own rich experience working at the forefront of high-performance computing (HPC). The book is a journey delineated as five decade-long ‘epochs’ defined by the systems’ architectural themes: vector processors, multi-processors, microprocessors, clusters, and accelerators and cloud computing.
State-of-the-Art Approaches to Advance the Large-Scale Green Computing Movement Edited by one of the founders and the lead investigator of the Green500 list, The Green Computing Book: Tackling Energy Efficiency at Large Scale explores seminal research in large-scale green computing. It begins with low-level, hardware-based approaches and then traverses up the software stack with increasingly higher-level, software-based approaches. In the first chapter, the IBM Blue Gene team illustrates how to improve the energy efficiency of a supercomputer by an order of magnitude without any system performance loss in parallelizable applications. The next few chapters explain how to enhance energy efficiency at successively higher levels of the stack.
Cloud computing offers many advantages to researchers and engineers who need access to high performance computing (HPC) facilities for solving particular compute-intensive and/or large-scale problems, but whose overall HPC needs do not justify the acquisition and operation of dedicated HPC facilities. There are, however, a number of fundamental problems that must be addressed before these advantages can be fully exploited, such as the limitations imposed by accessibility, security, and communication speed. This book presents 14 contributions selected from the International Research Workshop on Advanced High Performance Computing Systems, held in Cetraro, Italy.