This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics such as energy efficiency, throughput, and latency, without sacrificing accuracy or increasing hardware costs, are critical to enabling the wide deployment of DNNs in AI systems. The book i...
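To give a rough sense of what "high computational complexity" means in practice, the short sketch below counts the multiply-accumulate (MAC) operations in a single convolutional layer. The layer dimensions are hypothetical, chosen only for illustration, and are not taken from the book.

```c
/* Rough illustration: MAC count of one convolutional layer.
 * Layer shape here is hypothetical, not taken from the book. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Output feature map: H x W spatial positions, M output channels. */
    uint64_t H = 56, W = 56, M = 256;
    /* Each output element needs C * R * S multiply-accumulates
     * (C input channels, R x S filter window). */
    uint64_t C = 128, R = 3, S = 3;

    uint64_t macs = H * W * M * C * R * S;
    printf("MACs for this single layer: %llu (~%.2f GMACs)\n",
           (unsigned long long)macs, macs / 1e9);
    return 0;
}
```

Even this single mid-sized layer costs close to a billion MACs, and a full network stacks tens of such layers, which is why the efficiency techniques the book surveys matter.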
Heterogeneous systems on chip (HeSoCs) combine general-purpose, feature-rich multi-core host processors with domain-specific programmable many-core accelerators (PMCAs) to unite versatility with energy efficiency and peak performance. By virtue of their heterogeneity, HeSoCs hold the promise of higher performance and energy efficiency than homogeneous multiprocessors, because applications can be executed on hardware that is designed for them. However, this heterogeneity also increases system complexity substantially. This thesis presents the first research platform for HeSoCs where all components, from the accelerator cores to the application programming interface, are available under per...
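To make the host/accelerator split concrete, here is a minimal offload sketch using OpenMP target directives, one common programming model for this class of platform. It is an illustration only, under the assumption of an OpenMP-capable toolchain, and is not a claim about the actual API the thesis provides.

```c
/* Minimal sketch of host-to-accelerator offloading. OpenMP target offload
 * is used purely as an illustration of the programming model; it is not
 * presented as the thesis's own API. */
#include <stdio.h>

#define N 1024

int main(void) {
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* The host maps the arrays into accelerator memory, the many-core
     * accelerator executes the parallel loop, and the result is copied back. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```

Without an offload-capable compiler the pragma is simply ignored and the loop runs on the host, which mirrors the fallback behavior heterogeneous programming models typically provide.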
Artificial intelligence has already enabled pivotal advances in diverse fields, yet its impact on computer architecture has only just begun. In particular, recent work has explored the broader application of machine learning to the design, optimization, and simulation of computer architecture. Notably, machine-learning-based strategies often surpass prior state-of-the-art analytical, heuristic, and human-expert approaches. This book reviews the application of machine learning in system-wide simulation and run-time optimization, and in many individual components such as caches/memories, branch predictors, networks-on-chip, and GPUs. The book further analyzes current practice to highlight useful design strategies and identify areas for future work, based on optimized implementation strategies, opportune extensions to existing work, and ambitious long-term possibilities. Taken together, these strategies and techniques point to a promising future of increasingly automated computer architecture design.
This book introduces readers to emerging persistent memory (PM) technologies that promise the performance of dynamic random-access memory (DRAM) with the durability of traditional storage media, such as hard disks and solid-state drives (SSDs). Persistent memories, such as Intel's Optane DC Persistent Memory, are commercially available today. Unlike traditional storage devices, PMs can be accessed over a byte-addressable load-store interface with access latency comparable to that of DRAM. Unfortunately, existing hardware and software systems are ill-equipped to fully exploit the potential of these byte-addressable memory technologies, as they have been designed to access traditional sto...
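As a minimal sketch of what byte-addressable, load-store access looks like, the example below memory-maps a file and updates it with an ordinary store. It assumes a hypothetical path /mnt/pmem/counter on a DAX-capable, PM-aware filesystem; production PM code would typically make stores durable with cache-line flush instructions or a library call such as libpmem's pmem_persist() rather than msync().

```c
/* Minimal sketch: byte-addressable access to a persistent memory region via
 * a memory-mapped file. The path below is hypothetical; msync() stands in
 * for the cache-line flush sequence real PM code would use. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/mnt/pmem/counter", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, sizeof(uint64_t)) < 0) { perror("ftruncate"); return 1; }

    /* Map the file so ordinary loads and stores reach the persistent media. */
    uint64_t *counter = mmap(NULL, sizeof(uint64_t), PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (counter == MAP_FAILED) { perror("mmap"); return 1; }

    *counter += 1;                               /* plain store, no read()/write() calls */
    msync(counter, sizeof(uint64_t), MS_SYNC);   /* force the update to be durable */

    printf("counter = %llu\n", (unsigned long long)*counter);
    munmap(counter, sizeof(uint64_t));
    close(fd);
    return 0;
}
```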
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom deep neural network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
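To illustrate the kind of data reuse such orchestration targets, the sketch below applies loop tiling to a plain matrix multiply so that each operand tile is reused many times while it stays in fast local storage, cutting traffic to external memory. The sizes are illustrative and the example is not drawn from the book.

```c
/* Minimal sketch of loop tiling, the data-reuse transformation that DNN
 * dataflows rely on: each TILE x TILE block of A and B is reused many times
 * while resident in on-chip (here, cache) storage, reducing external-memory
 * traffic. Sizes are illustrative only. */
#include <stdio.h>

#define N    256
#define TILE 32

static float A[N][N], B[N][N], C[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = 1.0f; B[i][j] = 2.0f; }

    for (int ii = 0; ii < N; ii += TILE)
        for (int kk = 0; kk < N; kk += TILE)
            for (int jj = 0; jj < N; jj += TILE)
                /* Work on one TILE x TILE block at a time. */
                for (int i = ii; i < ii + TILE; i++)
                    for (int k = kk; k < kk + TILE; k++) {
                        float a = A[i][k];          /* reused across the whole j loop */
                        for (int j = jj; j < jj + TILE; j++)
                            C[i][j] += a * B[k][j];
                    }

    printf("C[0][0] = %f\n", C[0][0]);  /* expect N * 1.0 * 2.0 = 512 */
    return 0;
}
```

Hardware dataflows generalize the same idea: the tile sizes become buffer capacities, and the loop order determines which operand (weights, inputs, or partial sums) stays stationary on chip.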
Energy efficiency is critical for running computer vision on battery-powered systems, such as mobile phones or UAVs (unmanned aerial vehicles, or drones). This book collects the methods that have won the annual IEEE Low-Power Computer Vision Challenges since 2015. The winners share their solutions and provide insight on how to improve the efficiency of machine learning systems.
In recent years, there has been significant progress in computational intelligence and image processing, with machine learning and deep learning as important components of modern artificial intelligence. All of these advances face challenges when applied to the detection and treatment of Covid-19. This comprehensive compendium provides not only updated advances in computational intelligence and image processing for the detection and treatment of Covid-19, but also other medical applications such as cancer detection and cardiovascular disease. More traditional approaches, such as 2D segmentation and 3D reconstruction, are also included. This useful reference text is an updated version of the edited title Computer Vision in Medical Imaging (World Scientific, 2014) and its companion volume, Frontiers of Medical Imaging (World Scientific, 2015). The book is written for engineers, scientists, and the medical community to meet the increased challenges in medical applications.
AI Computing Systems: An Application Driven Perspective adopts the principle of "application-driven, full-stack penetration" and uses the specific intelligent application of image style transfer to give students a sound starting place for learning. This approach enables readers to obtain a full view of the AI computing system. A complete intelligent computing system involves many aspects, such as the processing chip, system architecture, programming environment, and software, making it a difficult topic to master in a short time. The book:
- Provides an in-depth analysis of the underlying principles behind the use of knowledge in intelligent computing systems
- Centers on application-driven, full-stack penetration, focusing on the knowledge required to complete this application at all levels of the software and hardware stack
- Includes supporting experimental tutorials covering the key knowledge points of each chapter, providing practical guidance and formalization tools for developing a simple AI computing system
This book describes deep learning systems: the algorithms, compilers, and processor components to efficiently train and deploy deep learning models for commercial applications. The exponential growth in computational power is slowing at a time when the amount of compute consumed by state-of-the-art deep learning (DL) workloads is rapidly growing. Model size, serving latency, and power constraints are a significant challenge in the deployment of DL models for many applications. Therefore, it is imperative to codesign algorithms, compilers, and hardware to accelerate advances in this field with holistic system-level and algorithm solutions that improve performance, power, and efficiency. Advan...
This book provides a thorough overview of state-of-the-art field-programmable gate array (FPGA)-based robotic computing accelerator designs and summarizes the optimization techniques they adopt. The book consists of ten chapters, delving into the details of how FPGAs have been utilized in robotic perception, localization, planning, and multi-robot collaboration tasks. In addition to individual robotic tasks, the book provides detailed descriptions of how FPGAs have been used in robotic products, including commercial autonomous vehicles and space exploration robots.