As conventional memory technologies such as DRAM and Flash run into scaling challenges, architects and system designers are forced to look at alternative technologies for building future computer systems. This synthesis lecture begins by listing the requirements for a next-generation memory technology and briefly surveys the landscape of novel non-volatile memories. Among these, Phase Change Memory (PCM) is emerging as a leading contender, and the authors discuss the material, device, and circuit advances underlying this exciting technology. The lecture then describes architectural solutions to enable PCM for main memories. Finally, the authors explore the impact of such byte-addressable non-volatile memories on future storage and system designs. Table of Contents: Next Generation Memory Technologies / Architecting PCM for Main Memories / Tolerating Slow Writes in PCM / Wear Leveling for Durability / Wear Leveling Under Adversarial Settings / Error Resilience in Phase Change Memories / Storage and System Design With Emerging Non-Volatile Memories
Multithreaded architectures now appear across the entire range of computing devices, from the highest-performing general-purpose devices to low-end embedded processors. Multithreading enables a processor core to more effectively utilize its computational resources, as a stall in one thread need not cause execution resources to be idle. This enables the computer architect to maximize performance within area constraints, power constraints, or energy constraints. However, the architectural options for the processor designer or architect looking to implement multithreading are quite extensive and varied, as evidenced not only by the research literature but also by the variety of commercial imple...
This historical survey of parallel processing from 1980 to 2020 is a follow-up to the authors’ 1981 Tutorial on Parallel Processing, which covered the state of the art in hardware, programming languages, and applications. Here, we cover the evolution of the field since 1980 in: parallel computers, ranging from the Cyber 205 to clusters now approaching an exaflop, to multicore microprocessors, and Graphics Processing Units (GPUs) in commodity personal devices; parallel programming notations such as OpenMP, MPI message passing, and CUDA streaming notation; and seven parallel applications, such as finite element analysis and computer vision. Some things that looked like they would be major tre...
Artificial intelligence has already enabled pivotal advances in diverse fields, yet its impact on computer architecture has only just begun. In particular, recent work has explored its broader application to the design, optimization, and simulation of computer architecture. Notably, machine-learning-based strategies often surpass prior state-of-the-art analytical, heuristic, and human-expert approaches. This book reviews the application of machine learning in system-wide simulation and run-time optimization, and in many individual components such as caches/memories, branch predictors, networks-on-chip, and GPUs. The book further analyzes current practice to highlight useful design strategies and identify areas for future work, based on optimized implementation strategies, opportune extensions to existing work, and ambitious long-term possibilities. Taken together, these strategies and techniques present a promising future for increasingly automated computer architecture designs.
Edited by one of the founders and lead investigator of the Green500 list, this book presents state-of-the-art approaches to advance the large-scale green computing movement. It begins with low-level, hardware-based approaches and then traverses up the software stack with increasingly higher-level, software-based approaches. The book explains how to control power across the hardware, firmware, operating system, and application levels and explores trends in server costs, energy use, and performance at high-density computing facilities. It also discusses energy management and virtualization in cloud computing.
Most emerging applications in imaging and machine learning must perform immense amounts of computation while holding to strict limits on energy and power. To meet these goals, architects are building increasingly specialized compute engines tailored for these specific tasks. The resulting computer systems are heterogeneous, containing multiple processing cores with wildly different execution models. Unfortunately, the cost of producing this specialized hardware—and the software to control it—is astronomical. Moreover, the task of porting algorithms to these heterogeneous machines typically requires that the algorithm be partitioned across the machine and rewritten for each specific archi...
As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We descri...
With growing interest in computer security and the protection of the code and data that execute on commodity computers, the number of hardware security features in today's processors has increased significantly in recent years. No longer just of academic interest, security features inside processors have been embraced by industry as well, with a number of commercial secure processor architectures available today. This book aims to give readers insights into the principles behind the design of academic and commercial secure processor architectures. Secure processor architecture research is concerned with exploring and designing hardware features inside computer processors, features whi...
To date, the most common simulators of computer systems are software-based and run on standard computers. One promising approach to improving simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field-programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems, along with selected simulators that incorporate those techniques. More precisely, we present a simulation architecture taxonomy that incorporates a simulation architecture specifically designed for FPGA-accelerated simulation, survey the state of the art in FPGA-accelerated simulation, and describe in detail selected instances of the described techniques. Table of Contents: Preface / Acknowledgments / Introduction / Simulator Background / Accelerating Computer System Simulators with FPGAs / Simulation Virtualization / Categorizing FPGA-based Simulators / Conclusion / Bibliography / Authors' Biographies
Many modern computer systems, including homogeneous and heterogeneous architectures, support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached copies of data are kept up-to-date. The goal of this primer is to provide readers with a ba...
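To illustrate the kind of load/store ordering question a memory consistency model answers, here is a minimal C++ sketch, not taken from the primer itself, of the classic store-buffering litmus test; the thread function names are illustrative only. Under sequential consistency the outcome r1 == 0 && r2 == 0 is forbidden, while a weaker (relaxed) ordering may allow it on real hardware.

    // Store-buffering litmus test: each thread stores to one shared
    // variable and then loads the other.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0};
    std::atomic<int> y{0};
    int r1 = 0;
    int r2 = 0;

    void store_x_load_y() {
        x.store(1, std::memory_order_relaxed);   // write x
        r1 = y.load(std::memory_order_relaxed);  // then read y
    }

    void store_y_load_x() {
        y.store(1, std::memory_order_relaxed);   // write y
        r2 = x.load(std::memory_order_relaxed);  // then read x
    }

    int main() {
        std::thread t1(store_x_load_y);
        std::thread t2(store_y_load_x);
        t1.join();
        t2.join();
        // With relaxed ordering, r1 == 0 && r2 == 0 is a permitted outcome;
        // replacing memory_order_relaxed with memory_order_seq_cst rules it out.
        std::printf("r1=%d r2=%d\n", r1, r2);
        return 0;
    }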