This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MUSEPAT 2013, held in Saint Petersburg, Russia, in August 2013. The 9 revised papers were carefully reviewed and selected from 25 submissions. The accepted papers are organized into three main sessions and cover topics such as software engineering for multicore systems; specification, modeling and design; programming models, languages, compiler techniques and development tools; verification, testing, analysis, debugging and performance tuning, security testing; software maintenance and evolution; multicore software issues in scientific computing, embedded and mobile systems; energy-efficient computing; as well as experience reports.
Visualization and analysis tools, techniques, and algorithms have undergone a rapid evolution in recent decades to accommodate explosive growth in data size and complexity and to exploit emerging multi- and many-core computational platforms. High Performance Visualization: Enabling Extreme-Scale Scientific Insight focuses on the subset of scientific visualization concerned with algorithm design, implementation, and optimization for use on today’s largest computational platforms. The book collects some of the most seminal work in the field, including algorithms and implementations running at the highest levels of concurrency and used by scientific researchers worldwide. After introducing th...
The two-volume set LNCS 6852/6853 constitutes the refereed proceedings of the 17th International Euro-Par Conference held in Bordeaux, France, in August/September 2011. The 81 revised full papers presented were carefully reviewed and selected from 271 submissions. The papers are organized in topical sections on support tools and environments; performance prediction and evaluation; scheduling and load-balancing; high-performance architectures and compilers; parallel and distributed data management; grid, cluster and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; high performance networks and mobile ubiquitous computing.
This book constitutes the refereed proceedings of the 15th International Conference on Parallel Computing, Euro-Par 2009, held in Delft, The Netherlands, in August 2009. The 85 revised papers presented were carefully reviewed and selected from 256 submissions. The papers are organized in topical sections on support tools and environments; performance prediction and evaluation; scheduling and load balancing; high performance architectures and compilers; parallel and distributed databases; grid, cluster, and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; high performance networks; and mobile and ubiquitous computing.
Collecting scattered knowledge into one coherent account, this book provides a compendium of both classical and recently developed results on reversible computing. It offers an expanded view of the field that includes the traditional energy-motivated hardware viewpoint as well as the emerging application-motivated software approach. It explores up-and-coming theories, techniques, and tools for the application of reversible computing. The topics covered span several areas of computer science, including high-performance computing, parallel/distributed systems, computational theory, compilers, power-aware computing, and supercomputing.
The most powerful computers work by harnessing the combined computational power of millions of processors, and exploiting the full potential of such large-scale systems is something which becomes more difficult with each succeeding generation of parallel computers. Alternative architectures and computer paradigms are increasingly being investigated in an attempt to address these difficulties. Added to this, the pervasive presence of heterogeneous and parallel devices in consumer products such as mobile phones, tablets, personal computers and servers also demands efficient programming environments and applications aimed at small-scale parallel systems as opposed to large-scale supercomputers....
Written by one of the foremost experts in high-performance computing and the inventor of Gustafson’s law, Every Bit Counts: Posit Computing explains the foundations of a new way for computers to calculate that saves time, storage, energy, and power by packing more information into every bit than do legacy approaches. Both the AI and HPC communities are increasingly using the posit approach that Gustafson introduced in 2017, which may be the future of technical computing. What may seem like a dry subject is made engaging by including the human and historical side of the struggle to represent numbers on machines. The book is richly illustrated in full color throughout, with every effort made to make the material as clear and accessible as possible, and even humorous. Starting with the simplest form of the idea, the chapters gradually add concepts according to stated mathematical and engineering design principles, building a robust tool kit for creating application-specific number systems. There is also a thorough explanation of the Posit™ Standard (2022), with motivations and examples that expand on that terse 12-page document.
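As a rough illustration of the idea, the original 2017 posit papers decode an n-bit posit from a sign s, a run-length-encoded regime contributing an integer k, an exponent e of es bits, and a fraction f; the formula below follows that literature and is not quoted from the book or from the 2022 standard:

    x = (-1)^s \cdot \left(2^{2^{es}}\right)^{k} \cdot 2^{e} \cdot (1 + f)

With es = 2, the regime scales values by powers of 16, so numbers near 1 spend few bits on the regime and many on the fraction, while very large or very small numbers do the opposite; this tapered allocation of bits is the sense in which posits aim to pack more information into each bit than fixed-field formats.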
This two-volume set (LNCS 7203 and 7204) constitutes the refereed proceedings of the 9th International Conference on Parallel Processing and Applied Mathematics, PPAM 2011, held in Torun, Poland, in September 2011. The 130 revised full papers presented in both volumes were carefully reviewed and selected from numerous submissions. The papers address issues such as parallel/distributed architectures and mobile computing; numerical algorithms and parallel numerics; parallel non-numerical algorithms; tools and environments for parallel/distributed/grid computing; applications of parallel/distributed computing; applied mathematics, neural networks and evolutionary computing; history of computing.
The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and commits its result in its entirety, or it aborts. In addition, isolation ensures the transaction ...
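The commit-in-entirety-or-abort flavor of that atomicity can be approximated with ordinary C11 atomics; the sketch below is only an optimistic compare-and-swap retry loop on a single shared counter (the names deposit and shared_balance are invented for the example), not a transactional memory system and not code from any of the books listed here.

    /* Minimal sketch of optimistic commit-or-retry on one shared location.
     * A real transactional memory tracks read/write sets over many locations;
     * this only mirrors the "commit entirely or redo the attempt" pattern. */
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic long shared_balance = 100;

    /* Read, compute speculatively, then try to publish the result atomically.
     * If another thread changed the value in the meantime, the compare-and-swap
     * fails, observed is refreshed, and the whole attempt is redone. */
    void deposit(long delta) {
        long observed = atomic_load(&shared_balance);
        long proposed;
        do {
            proposed = observed + delta;      /* speculative computation */
        } while (!atomic_compare_exchange_weak(&shared_balance,
                                               &observed, proposed));
    }                                         /* success: the update committed */

    int main(void) {
        deposit(42);
        printf("balance = %ld\n", atomic_load(&shared_balance));
        return 0;
    }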
Introduction to Computational Modeling Using C and Open-Source Tools presents the fundamental principles of computational models from a computer science perspective. It explains how to implement these models using the C programming language. The software tools used in the book include the GNU Scientific Library (GSL), which is a free software library of C functions, and the versatile, open-source GnuPlot for visualizing the data. All source files, shell scripts, and additional notes are located at science.kennesaw.edu/~jgarrido/comp_models. The book first presents an overview of problem solving and the introductory concepts, principles, and development of computational models before covering ...
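For a flavor of the C-plus-GSL workflow the book builds on, the short program below (modeled on the introductory example in the GSL manual; the chosen value of x and the build command are assumptions, not material from the book) evaluates a special function and prints the result, which could then be written to a data file and plotted with GnuPlot.

    /* Evaluate the Bessel function J0 at x = 5 using the GNU Scientific Library.
     * Build with: gcc example.c -lgsl -lgslcblas -lm */
    #include <stdio.h>
    #include <gsl/gsl_sf_bessel.h>

    int main(void) {
        double x = 5.0;
        double y = gsl_sf_bessel_J0(x);   /* GSL special-function call */
        printf("J0(%g) = %.18e\n", x, y);
        return 0;
    }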