This book has been written for practitioners, researchers and students in the fields of parallel and distributed computing. Its objective is to provide detailed coverage of the applications of graph theoretic techniques to the problems of matching resources and requirements in multiple computer systems. There has been considerable research in this area over the last decade and intense work continues even as this is being written. For the practitioner, this book serves as a rich source of solution techniques for problems that are routinely encountered in the real world. Algorithms are presented in sufficient detail to permit easy implementation; background material and fundamental concepts are covered in full. The researcher will find a clear exposition of graph theoretic techniques applied to parallel and distributed computing. Research results spanning the last decade are covered, and many hitherto unpublished results by the author are included. There are many unsolved problems in this field; it is hoped that this book will stimulate further research.
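Matching resources to requirements, as described above, is a classic graph problem. As a minimal sketch of one such technique (not taken from the book; the task and processor names and compatibility lists are hypothetical), the following Python code assigns tasks to compatible processors via maximum bipartite matching with augmenting paths:

```python
# Minimal sketch: assigning tasks to processors as maximum bipartite
# matching via augmenting paths (Kuhn's algorithm). All names and
# compatibility lists below are hypothetical illustrations.

def max_bipartite_matching(adj):
    """adj maps each task to the processors that can run it."""
    match = {}  # processor -> task

    def try_assign(task, seen):
        for proc in adj[task]:
            if proc in seen:
                continue
            seen.add(proc)
            # Processor is free, or its current task can move elsewhere.
            if proc not in match or try_assign(match[proc], seen):
                match[proc] = task
                return True
        return False

    for task in adj:
        try_assign(task, set())
    return match

if __name__ == "__main__":
    compat = {
        "t1": ["p1", "p2"],
        "t2": ["p1"],
        "t3": ["p2", "p3"],
    }
    print(max_bipartite_matching(compat))
    # e.g. {'p1': 't2', 'p2': 't1', 'p3': 't3'}
```

Each pass either places a task on a free processor or re-routes an earlier assignment along an augmenting path, which is why the final matching is maximum.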
This book constitutes the carefully refereed post-proceedings of the 22nd International Workshop on Graph-Theoretic Concepts in Computer Science, WG '96, held in Cadenabbia, Italy, in June 1996. The 30 revised full papers presented in the volume were selected from a total of 65 submissions. This collection documents the state of the art in the area. Among the topics addressed are graph algorithms, graph rewriting, hypergraphs, graph drawing, networking, approximation and optimization, trees, graph computation, and others.
Discover how to streamline complex bioinformatics applications with parallel computing. This publication enables readers to handle more complex bioinformatics applications and larger and richer data sets. As the editor clearly shows, using powerful parallel computing tools can lead to significant breakthroughs in deciphering genomes, understanding genetic disease, designing customized drug therapies, and understanding evolution. A broad range of bioinformatics applications is covered with demonstrations on how each one can be parallelized to improve performance and gain faster rates of computation. Current parallel computing techniques and technologies are examined, including distributed comp...
This volume presents revised versions of the 32 papers accepted for the Seventh Annual Workshop on Languages and Compilers for Parallel Computing, held in Ithaca, NY in August 1994. The 32 papers presented report on the leading research activities in languages and compilers for parallel computing and thus reflect the state of the art in the field. The volume is organized in sections on fine-grain parallelism, alignment and distribution, postlinear loop transformation, parallel structures, program analysis, computer communication, automatic parallelization, languages for parallelism, scheduling and program optimization, and program evaluation.
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network ...
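As a toy illustration of how the communication architecture shapes performance (the model is the widely used latency-bandwidth cost model, not something drawn from this book, and the parameter values are invented), the sketch below shows why algorithms restructured for distributed memory machines favor fewer, larger messages:

```python
# Toy illustration: the latency-bandwidth model for point-to-point
# communication, t(n) = alpha + beta * n, where alpha is the per-message
# startup latency, beta the per-byte transfer time, and n the message
# size in bytes. The parameter values are made up for demonstration.

alpha = 5e-6   # 5 microseconds of startup latency (assumed)
beta = 1e-9    # 1 ns per byte, i.e. roughly 1 GB/s link (assumed)

def transfer_time(n_bytes):
    return alpha + beta * n_bytes

for n in (8, 1_024, 1_048_576):
    t = transfer_time(n)
    print(f"{n:>9} bytes: {t * 1e6:8.2f} us, "
          f"{n / t / 1e9:.3f} GB/s effective")
```

Small messages are dominated by the startup latency and achieve only a tiny fraction of the link bandwidth, which is why message aggregation is a standard restructuring step.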
As we continue to build faster and faster computers, their performance is becoming increasingly dependent on the memory hierarchy. Both the clock speed of the machine and its throughput per clock depend heavily on the memory hierarchy. The time to complete a cache access is often the factor that determines the cycle time. The effectiveness of the hierarchy in keeping the average cost of a reference down has a major impact on how close the sustained performance is to the peak performance. Small changes in the performance of the memory hierarchy cause large changes in overall system performance. The strong growth of RISC machines, whose performance is more tightly coupled to the mem...
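The claim that small hierarchy changes cause large system-level changes is easy to see with the standard average-memory-access-time formula; the sketch below uses made-up hit and miss figures purely for illustration:

```python
# Rough arithmetic sketch (illustrative numbers only): average memory
# access time AMAT = hit_time + miss_rate * miss_penalty, showing how
# a small change in the hierarchy shifts overall performance.

def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Hypothetical machine: 1-cycle cache hit, 50-cycle miss penalty.
for miss_rate in (0.02, 0.03):
    print(f"miss rate {miss_rate:.0%}: AMAT = "
          f"{amat(1, miss_rate, 50):.1f} cycles")
# A one-point rise in miss rate (2% to 3%) adds half a cycle to every
# reference on average, a large shift in sustained performance.
```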
The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1...
The second half of the 1970s was marked with impressive advances in array/vector architectures and vectorization techniques and compilers. This progress continued with a particular focus on vector machines until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can skip over performance barriers due to technology limitations. In principle, parallelism offers unlimited performance p...
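As a back-of-the-envelope check on that promise of replicated processors (not from the book; the fractions and processor counts are invented), Amdahl's law shows how the serial fraction of a program bounds the speedup gained from replication:

```python
# Small arithmetic aside (illustrative only): speedup from p processors
# is bounded by the serial fraction, per Amdahl's law:
# S(p) = 1 / ((1 - f) + f / p), where f is the parallelizable fraction.

def speedup(f, p):
    return 1.0 / ((1.0 - f) + f / p)

for p in (4, 64, 1024):
    print(f"p={p:5d}: f=0.95 -> {speedup(0.95, p):6.2f}x, "
          f"f=0.99 -> {speedup(0.99, p):6.2f}x")
# Even with 1024 processors, a 5% serial fraction caps speedup near 20x.
```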
It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies, the availability of processors that could address a large name space directly eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchron...