Based on a course developed by the author, Introduction to High Performance Scientific Computing introduces methods for adding parallelism to numerical methods for solving differential equations. It contains exercises and programming projects that facilitate learning as well as examples and discussions based on the C programming language, with additional comments for those already familiar with C++. The text provides an overview of concepts and algorithmic techniques for modern scientific computing and is divided into six self-contained parts that can be assembled in any order to create an introductory course using available computer hardware. Part I introduces the C programming language for...
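To give a flavor of the kind of material such a course covers (this sketch is not taken from the book, and the grid size, time step, and diffusion coefficient are made-up placeholders), here is one way to add parallelism to an explicit time-stepping scheme for the 1D heat equation, using an OpenMP loop in C++:

```cpp
// Minimal sketch (not from the book): explicit Euler steps of the 1D heat
// equation u_t = alpha * u_xx on a uniform grid, parallelized with OpenMP.
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    const int n = 1000;                       // number of grid points (placeholder)
    const double alpha = 1.0;                 // diffusion coefficient (placeholder)
    const double dx = 1.0 / (n - 1);          // grid spacing
    const double dt = 0.4 * dx * dx / alpha;  // within the explicit stability limit
    std::vector<double> u(n, 0.0), unew(n, 0.0);
    u[n / 2] = 1.0;                           // initial spike in the middle

    for (int step = 0; step < 100; ++step) {
        // Each interior point is updated independently, so the loop
        // parallelizes directly across threads.
        #pragma omp parallel for
        for (int i = 1; i < n - 1; ++i) {
            unew[i] = u[i] + alpha * dt / (dx * dx) * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        }
        std::swap(u, unew);                   // boundary values stay fixed at zero
    }
    std::printf("u at midpoint after 100 steps: %g\n", u[n / 2]);
    return 0;
}
```

Compiled with an OpenMP-enabled compiler (for example, g++ -fopenmp) the inner loop runs in parallel; without OpenMP the pragma is simply ignored and the code runs serially.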
This is the first entry-level book on algorithmic (also known as automatic) differentiation (AD), providing fundamental rules for the generation of first- and higher-order tangent-linear and adjoint code. The author covers the mathematical underpinnings as well as how to apply these observations to real-world numerical simulation programs. Readers will find: examples and exercises, including hints to solutions; the prototype AD tools dco and dcc for use with the examples and exercises; first- and higher-order tangent-linear and adjoint modes for a limited subset of C/C++, provided by the derivative code compiler dcc; a supplementary website containing sources of all software discussed in the book, additional exercises and comments on their solutions (growing over the coming years), links to other sites on AD, and errata.
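To give a rough sense of what tangent-linear and adjoint code mean, the following hand-written sketch differentiates the made-up toy function y = sin(x1) * x2; it is illustrative only and does not use the dco/dcc tools shipped with the book or their APIs:

```cpp
// Hand-written first-order AD for y = sin(x1) * x2 (illustrative only).
#include <cmath>
#include <cstdio>

// Tangent-linear mode: propagate a directional derivative (x1_t, x2_t)
// forward alongside the values, yielding y_t = dy/dx1*x1_t + dy/dx2*x2_t.
void f_tangent(double x1, double x1_t, double x2, double x2_t,
               double &y, double &y_t) {
    double v   = std::sin(x1);
    double v_t = std::cos(x1) * x1_t;
    y   = v * x2;
    y_t = v_t * x2 + v * x2_t;
}

// Adjoint mode: run the original computation forward, then propagate the
// output adjoint y_a backward to obtain both gradient components at once.
void f_adjoint(double x1, double x2, double y_a,
               double &x1_a, double &x2_a) {
    double v = std::sin(x1);       // forward sweep
    double v_a = y_a * x2;         // reverse sweep of y = v * x2
    x2_a = y_a * v;
    x1_a = v_a * std::cos(x1);     // reverse sweep of v = sin(x1)
}

int main() {
    double y, y_t, x1_a, x2_a;
    f_tangent(1.0, 1.0, 2.0, 0.0, y, y_t);   // dy/dx1 via one tangent-linear run
    f_adjoint(1.0, 2.0, 1.0, x1_a, x2_a);    // full gradient via one adjoint run
    std::printf("dy/dx1 = %g (tangent) = %g (adjoint), dy/dx2 = %g\n",
                y_t, x1_a, x2_a);
    return 0;
}
```

The tangent-linear version yields one directional derivative per run, while the adjoint version yields the whole gradient in a single reverse sweep, which is why adjoint mode is preferred when there are many inputs and few outputs.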
This volume presents the proceedings of the First International Workshop on Parallel Scientific Computing, PARA '94, held in Lyngby, Denmark, in June 1994. It reports interdisciplinary work done by mathematicians, scientists, and engineers working on large-scale computational problems, in discussion with computer science specialists in the field of parallel methods and the efficient exploitation of modern high-performance computing resources. The 53 full refereed papers provide a wealth of new results: they give an up-to-date overview of high-speed computing facilities, including various parallel and vector computers as well as workstation clusters, and investigate the most important numerical algorithms, with particular emphasis on computational linear algebra.
‘Network’ is a heavily overloaded term, so that ‘network analysis’ means different things to different people. Specific forms of network analysis are used in the study of diverse structures such as the Internet, interlocking directorates, transportation systems, epidemic spreading, metabolic pathways, the Web graph, electrical circuits, project plans, and so on. There is, however, a broad methodological foundation which is quickly becoming a prerequisite for researchers and practitioners working with network models. From a computer science perspective, network analysis is applied graph theory. Unlike standard graph theory texts, this book organizes its content according to methods for specific levels of analysis (element, group, network) rather than around abstract concepts like paths, matchings, or spanning subgraphs. Its topics therefore range from vertex centrality to graph clustering and the evolution of scale-free networks. In 15 coherent chapters, this monograph-like tutorial book introduces and surveys the concepts and methods that drive network analysis, and is thus the first book to do so from a methodological perspective independent of specific application areas.
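As a small illustration of the vertex-centrality topic mentioned above (a made-up example, not taken from the book), the following C++ sketch computes closeness centrality for each vertex of a tiny unweighted graph via breadth-first search:

```cpp
// Minimal sketch (not from the book): closeness centrality, a common
// vertex-centrality index, for a small unweighted example graph.
#include <cstdio>
#include <queue>
#include <vector>

// Closeness of vertex s: (n - 1) divided by the sum of shortest-path
// distances from s to all reachable vertices.
double closeness(const std::vector<std::vector<int>> &adj, int s) {
    const int n = static_cast<int>(adj.size());
    std::vector<int> dist(n, -1);
    std::queue<int> q;
    dist[s] = 0;
    q.push(s);
    long long sum = 0;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        sum += dist[u];
        for (int v : adj[u])
            if (dist[v] < 0) { dist[v] = dist[u] + 1; q.push(v); }
    }
    return sum > 0 ? static_cast<double>(n - 1) / sum : 0.0;
}

int main() {
    // Made-up 5-vertex path graph 0-1-2-3-4; the middle vertex is most central.
    std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1, 3}, {2, 4}, {3}};
    for (int v = 0; v < 5; ++v)
        std::printf("closeness(%d) = %.3f\n", v, closeness(adj, v));
    return 0;
}
```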
Combinatorial (or discrete) optimization is one of the most active fields at the interface of operations research, computer science, and applied mathematics. Combinatorial optimization problems arise in various applications, including communications network design, VLSI design, machine vision, airline crew scheduling, corporate planning, computer-aided design and manufacturing, database query design, cellular telephone frequency assignment, constraint-directed reasoning, and computational biology. Furthermore, combinatorial optimization problems occur in many diverse areas such as linear and integer programming, graph theory, artificial intelligence, and number theory. All these problems,...
The current exponential growth in graph data has forced a shift to parallel computing for executing graph algorithms. Implementing parallel graph algorithms and achieving good parallel performance have proven difficult. This book addresses these challenges by exploiting the well-known duality between a canonical representation of graphs as abstract collections of vertices and edges and a sparse adjacency matrix representation. This linear algebraic approach is widely accessible to scientists and engineers who may not be formally trained in computer science. The authors show how to leverage existing parallel matrix computation techniques and the large amount of software infrastructure that exists for these computations to implement efficient and scalable parallel graph algorithms. The benefits of this approach are reduced algorithmic complexity, ease of implementation, and improved performance.
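As a minimal illustration of this duality (a made-up example, not taken from the book or any particular library), the following C++ sketch runs a breadth-first search by repeated sparse matrix-vector products over the boolean (OR, AND) semiring, with the adjacency matrix stored in compressed sparse row form:

```cpp
// Minimal sketch (not from the book): BFS expressed as repeated sparse
// matrix-vector products over the boolean (OR, AND) semiring.
#include <cstdio>
#include <vector>

// Adjacency matrix in compressed sparse row (CSR) form.
struct CsrGraph {
    std::vector<int> rowptr;  // size n + 1
    std::vector<int> colidx;  // column indices of nonzeros
};

// One frontier expansion, next = A^T x over the boolean semiring:
// every out-neighbor of a frontier vertex is OR-ed into next.
std::vector<char> spmv_bool(const CsrGraph &g, const std::vector<char> &frontier) {
    std::vector<char> next(frontier.size(), 0);
    for (size_t j = 0; j < frontier.size(); ++j)
        if (frontier[j])
            for (int k = g.rowptr[j]; k < g.rowptr[j + 1]; ++k)
                next[g.colidx[k]] = 1;
    return next;
}

int main() {
    // Made-up directed graph on 5 vertices: 0->1, 0->2, 1->3, 2->3, 3->4.
    CsrGraph g{{0, 2, 3, 4, 5, 5}, {1, 2, 3, 3, 4}};
    const int n = 5, source = 0;
    std::vector<char> visited(n, 0), frontier(n, 0);
    frontier[source] = visited[source] = 1;
    for (int level = 1; ; ++level) {
        std::vector<char> next = spmv_bool(g, frontier);
        bool any = false;
        for (int v = 0; v < n; ++v) {
            if (next[v] && !visited[v]) {
                visited[v] = 1;
                std::printf("vertex %d reached at level %d\n", v, level);
                any = true;
            } else {
                next[v] = 0;  // mask out vertices already visited
            }
        }
        if (!any) break;
        frontier = next;
    }
    return 0;
}
```

Each BFS level is one matrix-vector product plus a mask against the visited set, which is exactly the pattern that existing parallel sparse-matrix infrastructure is designed to execute efficiently.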
This book constitutes the proceedings of the 5th International Conference on Web Information Systems Engineering, WISE 2004, held in Brisbane, Australia in November 2004. The 45 revised full papers and 29 revised short papers presented together with 3 invited contributions were carefully reviewed and selected from 198 submissions. The papers are organized in topical sections on Web information modeling; payment and security; information extraction; advanced applications; performance issues; linkage analysis and document clustering; Web caching and content analysis; XML query processing; Web search and personalization; workflow management and enterprise information systems; business processes; deep Web and dynamic content; Web information systems design; ontologies and applications; multimedia, user interfaces, and languages; and peer-to-peer and grid systems.
Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world’s leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the book provides a comprehensive overview of 18 HPC ecosystems from around the world. Each chapter i...