What makes people smarter than computers? These volumes by a pioneering neurocomputing group suggest that the answer lies in the massively parallel architecture of the human mind. They describe a new theory of cognition called connectionism that is challenging the idea of symbolic computation that has traditionally been at the center of debate in theoretical discussions about the mind. The authors' theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit each other in parallel rather than in sequential operations. In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units that are distributed throughout the network. Volume 1 lays the foundations of this exciting theory of parallel distributed processing, while Volume 2 applies it to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought.
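As a rough illustration of the processing style this blurb describes, here is a minimal sketch of a network of units that excite and inhibit one another in parallel, with all "knowledge" residing in the connection weights rather than in any single location. The unit count, weight values, and update rule are illustrative assumptions, not taken from the volumes themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 8

# "Knowledge" lives in the connection weights between pairs of units,
# distributed across the whole network (assumed random here for the demo).
weights = rng.normal(scale=0.5, size=(n_units, n_units))
np.fill_diagonal(weights, 0.0)           # no self-connections

activations = rng.uniform(size=n_units)  # initial unit activations

def step(a, w):
    """One parallel update: every unit sums its weighted inputs at once."""
    net_input = w @ a                        # excitation (+) and inhibition (-)
    return 1.0 / (1.0 + np.exp(-net_input))  # squash into (0, 1)

for _ in range(10):  # let the network settle toward a stable pattern
    activations = step(activations, weights)

print(activations)
```

Note that no unit "stores" anything by itself; the pattern the network settles into is a joint product of all the connections, which is the sense in which representation is distributed.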
Composed of three sections, this book presents the most popular training algorithm for neural networks: backpropagation. The first section presents the theory and principles behind backpropagation as seen from different perspectives such as statistics, machine learning, and dynamical systems. The second presents a number of network architectures that may be designed to match the general concepts of Parallel Distributed Processing with backpropagation learning. Finally, the third section shows how these principles can be applied to a number of different fields related to the cognitive sciences, including control, speech recognition, robotics, image processing, and cognitive psychology. The volume is designed to provide both a solid theoretical foundation and a set of examples that show the versatility of the concepts. Useful to experts in the field, it should also be most helpful to students seeking to understand the basic principles of connectionist learning and to engineers wanting to add neural networks in general -- and backpropagation in particular -- to their set of problem-solving methods.
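To make the algorithm at the heart of the book concrete, the following is a minimal backpropagation sketch: a single hidden layer of sigmoid units trained by gradient descent on XOR. The architecture, learning rate, and iteration count are illustrative assumptions, not examples drawn from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR training data: the classic test case requiring a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, plus biases.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the squared-error gradient through the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates for weights and biases.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```

Each pass nudges every connection weight in proportion to its contribution to the output error, the credit-assignment idea that the book's chapters examine from statistical, machine-learning, and dynamical-systems perspectives.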
The interdisciplinary field of cognitive science brings together elements of cognitive psychology, mathematics, perception, and linguistics. Focusing on the main areas of exploration in this field today, Cognitive Science presents comprehensive overviews of research findings and discusses new cross-over areas of interest. Contributors represent the most senior and well-established names in the field. This volume serves as a high-level introduction, with sufficient breadth to serve as a graduate-level text and enough depth to be a valued reference source for researchers.
Theory; Linguistic analysis; The computer model; Studies of language; Studies of visual perception and problem solving; Extensions.
Surprising tales from the scientists who first learned how to use computers to understand the workings of the human brain. Since World War II, a group of scientists has been attempting to understand the human nervous system and to build computer systems that emulate the brain's abilities. Many of the early workers in this field of neural networks came from cybernetics; others came from neuroscience, physics, electrical engineering, mathematics, psychology, even economics. In this collection of interviews, those who helped to shape the field share their childhood memories, their influences, how they became interested in neural networks, and what they see as the field's future. The subjects tell stories...
The philosophy of cognitive science has recently become one of the most exciting and fastest growing domains of philosophical inquiry and analysis. Until the early 1980s, nearly all of the models developed treated cognitive processes -- like problem solving, language comprehension, memory, and higher visual processing -- as rule-governed symbol manipulation. However, this situation has changed dramatically over the last half dozen years. In that period there has been an enormous shift of attention toward connectionist models of cognition that are inspired by the network-like architecture of the brain. Because of their unique architecture and style of processing, connectionist systems are gen...
Mind Readings is a collection of accessible readings on some of the most important topics in cognitive science. Although anyone interested in the interdisciplinary study of mind will find the selections well worth reading, they work particularly well with Paul Thagard's textbook Mind: An Introduction to Cognitive Science, and provide further discussion of the major topics covered in that book. The first eight chapters present approaches to cognitive science from the perspective that thinking consists of computational procedures on mental representations. The remaining five chapters discuss challenges to the computational-representational understanding of mind. Contributors: John R. Anderson, Ruth M.J. Byrne, E.H. Durfee, Chris Eliasmith, Owen Flanagan, Dedre Gentner, Janice Glasgow, Philip N. Johnson-Laird, Alan Mackworth, Arthur B. Markman, Douglas L. Medin, Keith Oatley, Dimitri Papadias, Steven Pinker, David E. Rumelhart, Herbert A. Simon.
Written for cognitive scientists, psychologists, computer scientists, engineers, and neuroscientists, this book provides an accessible overview of how computational network models are being used to model neurobiological phenomena. Each chapter presents a representative example of how biological data and network models interact in the authors' research. The biological phenomena covered are network- or circuit-level phenomena in humans and other higher-order vertebrates.
Few developments in the intellectual life of the past quarter-century have provoked more controversy than the attempt to engineer human-like intelligence by artificial means. Born of computer science, this effort has sparked a continuing debate among the psychologists, neuroscientists, philosophers, and linguists who have pioneered--and criticized--artificial intelligence. Are there general principles, as some computer scientists had originally hoped, that would fully describe the activity of both animal and machine minds, just as aerodynamics accounts for the flight of birds and airplanes? In the twenty substantial interviews published here, leading researchers address this and other vexing ...
Attention and Performance XIV provides a broad, historical, and timely synthesis of the empirical and theoretical ideas on which performance theory now rests.