This is the first comprehensive book on information geometry, written by the founder of the field. It begins with an elementary introduction to dualistic geometry and proceeds to a wide range of applications, covering information science, engineering, and neuroscience. It consists of four parts, which on the whole can be read independently. A manifold with a divergence function is first introduced, leading directly to the dualistic structure that is the heart of information geometry. This part (Part I) can be understood without any knowledge of differential geometry. An intuitive explanation of modern differential geometry then follows in Part II, although the book is for the most part understandable ...
Information geometry provides the mathematical sciences with a fresh framework of analysis. This book presents a comprehensive introduction to the mathematical foundation of information geometry. It provides an overview of many areas of applications, such as statistics, linear systems, information theory, quantum mechanics, and convex analysis.
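To make concrete the divergence functions and dualistic structure these two books build on, here is a standard worked example: the Kullback-Leibler divergence and the Fisher metric it induces. This is textbook material, not quoted from either description.

    % KL divergence between members of a parametric family p(x; \theta)
    D[\theta : \theta'] = \int p(x;\theta) \log \frac{p(x;\theta)}{p(x;\theta')} \, dx
    % A second-order expansion in d\theta recovers the Fisher information
    % metric, the Riemannian structure at the core of information geometry:
    D[\theta : \theta + d\theta] \approx \tfrac{1}{2} \sum_{i,j} g_{ij}(\theta)\, d\theta^i\, d\theta^j,
    \quad g_{ij}(\theta) = \mathrm{E}\!\left[ \partial_i \log p(x;\theta)\; \partial_j \log p(x;\theta) \right]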
From the reviews: "In this Lecture Notes volume the author describes his differential-geometric approach to parametric statistical problems, summarizing the results he had published in a series of papers over the previous five years. The author provides a geometric framework for a special class of test and estimation procedures for curved exponential families. ... The material and ideas presented in this volume are important, and it is recommended to everybody interested in the connection between statistics and geometry ..." (Metrika) "More than a hundred references are given, showing the growing interest in differential geometry with respect to statistics. The book can only be strongly recommended to a geodesist, since it offers many new insights into statistics on familiar ground." (Manuscripta Geodaetica)
At the center of this modern, specialized volume are adaptive structures and unsupervised learning algorithms, with particular emphasis on effective computer simulation programs. Clear illustrations and numerous examples, together with an interactive CD-ROM, supplement the text.
Neural field theory has a long-standing tradition in the mathematical and computational neurosciences. Beginning almost 50 years ago with seminal work by Griffiths and culminating in the 1970s with the models of Wilson and Cowan, Nunez and Amari, this important research area experienced a renaissance during the 1990s through the groups of Ermentrout, Robinson, Bressloff, Wright and Haken. Since then, much progress has been made both in the development of mathematical and numerical techniques and in physiological refinement and understanding. In contrast to large-scale neural network models described by huge connectivity matrices that are computationally expensive in numerical simulations, ...
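The contrast drawn above can be made concrete with the classic Amari neural field equation, in which a continuous synaptic kernel replaces the connectivity matrix. This is the standard form from the literature, not a quotation from the book.

    % u(x,t): mean membrane potential at cortical position x
    % w(x-y): distance-dependent synaptic weight kernel
    % f: firing-rate nonlinearity (e.g., a sigmoid); s(x,t): external input
    \tau \frac{\partial u(x,t)}{\partial t}
      = -u(x,t) + \int_{\Omega} w(x-y)\, f(u(y,t))\, dy + s(x,t)

Simulated on a spatial grid, each time step then costs one kernel convolution rather than a multiplication by a full connectivity matrix.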
This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF). This includes NMF’s various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF/NTF and their extensions are increasingly used as tools in signal and image processing, and data analysis, having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets. It is suggested that NMF can provide meaningful components with physical interpretations; for example, in bioinformatics, NMF and its extensions have been s...
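As a minimal illustration of the factorization the book surveys, here is a sketch of NMF using the multiplicative updates of Lee and Seung, a standard algorithm for the Frobenius-norm objective; the function and variable names below are illustrative, not the book's own.

    import numpy as np

    def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
        """Factor a nonnegative matrix V (m x n) into W (m x rank) @ H (rank x n).

        Lee-Seung multiplicative updates monotonically decrease the
        reconstruction error ||V - W @ H||_F^2.
        """
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, rank))   # nonnegative random initialization
        H = rng.random((rank, n))
        for _ in range(n_iter):
            # Elementwise updates keep W and H nonnegative by construction.
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Usage: factor a small random nonnegative matrix into rank-2 parts.
    V = np.abs(np.random.default_rng(1).random((6, 5)))
    W, H = nmf(V, rank=2)
    print(np.linalg.norm(V - W @ H))  # reconstruction error

Because the factors stay elementwise nonnegative, they can be read as additive parts, which is the source of the interpretability the blurb describes.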
Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years. This volume of Foundations of Neural Computation, on unsupervised learning algorithms, focuses on neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data.
This volume of research papers comprises the proceedings of the first International Conference on Mathematics of Neural Networks and Applications (MANNA), which was held at Lady Margaret Hall, Oxford from July 3rd to 7th, 1995 and attended by 116 people. The meeting was strongly supported and, in addition to a stimulating academic programme, it featured a delightful venue, excellent food and accommodation, a full social programme and fine weather - all of which made for a very enjoyable week. This was the first meeting with this title and it was run under the auspices of the Universities of Huddersfield and Brighton, with sponsorship from the US Air Force (European Office of Aerospace Resea...
The human brain, with its hundred billion or more neurons, is both one of the most complex systems known to man and one of the most important. The last decade has seen an explosion of experimental research on the brain, but little theory of neural networks beyond the study of electrical properties of membranes and small neural circuits. Nonetheless, a number of workers in Japan, the United States and elsewhere have begun to contribute to a theory which provides techniques of mathematical analysis and computer simulation to explore properties of neural systems containing immense numbers of neurons. Recently, it has been gradually recognized that rather independent studies of the dynamics of ...
The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes neural networks and genetic algorithms, cognitive science, neuroscience and biology, computer science, AI, applied mathematics, physics, and many branches of engineering. Only about 30% of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. All of the papers presented appear in these proceedings.