[The book] provides a balanced survey of the fundamentals of artificial intelligence, emphasizing the relationship between symbolic and numeric processing. The text is structured around an innovative, interactive combination of LISP programming and AI; it uses the constructs of the programming language to help readers understand the array of artificial intelligence concepts presented. After an overview of the field of artificial intelligence, the text presents the fundamentals of LISP, explaining the language's features in more detail than any other AI text. Common Lisp is then used consistently, in both programming exercises and plentiful examples of actual AI code. - Back cover. This text is intended to provide an introduction to both AI and LISP for those having a background in computer science and mathematics. - Preface
The second half of the 1970s was marked by impressive advances in array/vector architectures and in vectorization techniques and compilers. This progress continued, with a particular focus on vector machines, until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can skip over performance barriers due to technology limitations. In principle, parallelism offers unlimited performance p...
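The scalar-versus-vector distinction this blurb describes is easy to see in code. The sketch below is illustrative only (it is not drawn from the book) and uses NumPy as a stand-in for a vector instruction set, contrasting an element-at-a-time loop with a single whole-array operation of the kind a pipelined vector machine streams through its functional units:

```python
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar style: one element per iteration, as a non-vector CPU
# would execute it.
c_scalar = np.empty(n)
for i in range(n):
    c_scalar[i] = 2.0 * a[i] + b[i]

# Vector style: a single operation over whole arrays, the kind of
# AXPY-like operation a vector machine executes in one pipelined pass.
c_vector = 2.0 * a + b

assert np.allclose(c_scalar, c_vector)
```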
It has been widely recognized that artificial intelligence computations offer great potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms or efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contri...
Modern computing systems are built from components and the communication between those components. Communication systems imply concurrency, which is a theme of the WoTUG series. Traditionally, concurrency has been taught, considered, and experienced as an advanced and difficult topic. The thesis underlying this conference is that this idea is wrong. The natural world operates through the continuous interaction of massive numbers of autonomous agents at all levels (sub-atomic, human, astronomic). It seems it is time to mature concurrency into a core engineering discipline that can be used on an everyday basis to simplify problem solutions, as well as to enable them. The goal of Communicating Process Architectures 2000 was to stimulate discussion and ideas as to the role concurrency should play in future generations of scalable computer infrastructure and applications - where scaling means the ability to ramp up functionality (staying in control as complexity increases) as well as physical metrics (such as performance).
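To make the "components communicating" idea concrete, here is a minimal, hypothetical sketch in the communicating-process style: two autonomous "processes" that share no state and interact only through a channel. Python's threading and queue modules stand in for a CSP channel; all names are illustrative, not from the proceedings:

```python
import queue
import threading

channel = queue.Queue()  # the only shared object: a message channel

def producer():
    """An autonomous 'process' that sends messages down the channel."""
    for i in range(5):
        channel.put(i)
    channel.put(None)  # sentinel marking the end of the stream

def consumer():
    """A second 'process' that reacts only to messages it receives."""
    while True:
        msg = channel.get()
        if msg is None:
            break
        print(f"received {msg}")

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```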
High Performance Computing is an integrated computing environment for solving large-scale, computationally demanding problems in science, engineering, and business. Newly emerging areas of HPC applications include the medical sciences, transportation, financial operations, and advanced human-computer interfaces such as virtual reality. High performance computing encompasses computer hardware, software, algorithms, programming tools and environments, plus visualization. The book addresses several of these key components of high performance technology and contains descriptions of state-of-the-art computer architectures, programming and software tools, and innovative applications of parallel computers. In addition, the book includes papers on heterogeneous network-based computing systems and the scalability of parallel systems. The reader will find information and data relevant to the two main thrusts of high performance computing: absolute computational performance, and providing the most cost-effective and affordable computing for science, industry, and business. The book is recommended for technical as well as management-oriented individuals.
Supercomputers are the largest and fastest computers available at any point in time. The term was used for the first time in the New York World, March 1920, to describe "new statistical machines with the mental power of 100 skilled mathematicians in solving even highly complex algebraic problems." Invented by Mendenhall and Warren, these machines were used at Columbia University's Statistical Bureau. Recently, supercomputers have been used primarily to solve large-scale problems in science and engineering. Solutions of systems of partial differential equations, such as those found in nuclear physics, meteorology, and computational fluid dynamics, account for the majority of supercomputer ...
This book presents a unified treatment of recently developed techniques and of the current understanding of solving systems of linear equations and large-scale eigenvalue problems on high-performance computers. It provides a rapid introduction to the world of vector and parallel processing for these linear algebra applications. Topics include the major elements of advanced-architecture computers and their performance, recent algorithmic developments, and software for the direct solution of dense matrix problems, the direct solution of sparse systems of equations, the iterative solution of sparse systems of equations, and the solution of large sparse eigenvalue problems.
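To give a flavor of the iterative sparse solvers this book covers, here is a minimal sketch of classical Jacobi iteration; the function, test matrix, and tolerance below are illustrative choices, not taken from the book:

```python
import numpy as np
import scipy.sparse as sp

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b by Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k),
    where D is the diagonal of A and R = A - D."""
    D = A.diagonal()            # diagonal entries of A
    R = A - sp.diags(D)         # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# A strictly diagonally dominant tridiagonal test system, for which
# Jacobi iteration is guaranteed to converge.
n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = jacobi(A, b)
print(np.linalg.norm(A @ x - b))  # residual: close to zero
```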
Designed for undergraduates, An Introduction to High-Performance Scientific Computing assumes a basic knowledge of numerical computation and proficiency in Fortran or C programming and can be used in any science, computer science, applied mathematics, or engineering department or by practicing scientists and engineers, especially those associated with one of the national laboratories or supercomputer centers. This text evolved from a new curriculum in scientific computing that was developed to teach undergraduate science and engineering majors how to use high-performance computing systems (supercomputers) in scientific and engineering applications.
Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to "traditional" approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on ...