Algorithmic learning theory is the mathematics of computer programs that learn from experience. This involves considerable interaction between various mathematical disciplines, including the theory of computation, statistics, and combinatorics. There is also considerable interaction with the practical, empirical fields of machine and statistical learning, in which a principal aim is to predict, from past data about phenomena, useful features of future data from the same phenomena. The papers in this volume cover a broad range of topics of current research in the field of algorithmic learning theory. We have divided the 29 technical, contributed papers in this volume into eight categories (correspond...
Solutions for learning from large-scale datasets, including kernel learning algorithms that scale linearly with the volume of the data and experiments carried out on realistically large datasets. Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large-scale ...
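To make the "scales linearly with the volume of the data" idea concrete, here is a minimal sketch (not code from the book) of one-pass stochastic gradient descent for least-squares regression: each example is visited exactly once, so runtime grows linearly with the number of examples. The data, learning rate, and function name are all hypothetical choices for illustration.

```python
# Illustrative sketch (not from the book): one-pass stochastic gradient
# descent for least-squares regression. Cost is O(n * d): each of the n
# examples is visited once, so runtime scales linearly with data volume.
import numpy as np

def sgd_linear_regression(X, y, lr=0.01):
    """Single pass of SGD over (X, y); returns learned weights."""
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):                  # one sweep: linear in n
        err = X[i] @ w - y[i]           # prediction error on example i
        w -= lr * err * X[i]            # gradient step on the squared loss
    return w

# Synthetic demo data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=100_000)

w_hat = sgd_linear_regression(X, y)
print("recovery error:", np.linalg.norm(w_hat - w_true))
```

Contrast this with batch methods whose per-iteration cost revisits all n examples many times, or with subsampling, which is linear only because it discards most of the data along with its statistical value.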
The first unified treatment of time series modelling techniques spanning machine learning, statistics, engineering and computer science.
This book constitutes the refereed proceedings of the 14th Annual and 5th European Conferences on Computational Learning Theory, COLT/EuroCOLT 2001, held in Amsterdam, The Netherlands, in July 2001. The 40 revised full papers presented together with one invited paper were carefully reviewed and selected from a total of 69 submissions. All current aspects of computational learning and its applications in a variety of fields are addressed.
The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes computer science, neuroscience, statistics, physics, cognitive science, and many branches of engineering, including signal processing and control theory. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented.
In the last decade, there has been an increasing convergence of interests and methods between theoretical physics and fields as diverse as probability, machine learning, optimization, and compressed sensing. In particular, many theoretical and applied works in statistical physics and computer science have relied on message passing algorithms and their connection to the statistical physics of spin glasses. The aim of this book, written especially for PhD students, postdocs, and young researchers, is to present the background necessary for entering this fast-developing field.
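To give a concrete feel for the message passing algorithms mentioned above, the following is a generic sum-product sketch (not code from the book) that computes exact single-spin marginals of a small Ising chain by sweeping messages left to right and right to left. The chain length, couplings J, and fields h are made-up illustrative parameters.

```python
# Minimal sum-product (belief propagation) sketch on an Ising chain.
# Illustrative only; the couplings J and fields h below are made up.
import numpy as np

def ising_chain_marginals(h, J, beta=1.0):
    """Exact single-spin marginals of a +/-1 Ising chain via message passing."""
    n = len(h)
    spins = np.array([-1.0, 1.0])
    right = [np.ones(2) for _ in range(n)]   # right[i]: message into site i from the left
    left = [np.ones(2) for _ in range(n)]    # left[i]: message into site i from the right
    for i in range(1, n):                    # left-to-right sweep
        psi = np.exp(beta * (h[i-1] * spins[:, None] + J[i-1] * np.outer(spins, spins)))
        right[i] = right[i-1] @ psi
        right[i] /= right[i].sum()           # normalize for numerical stability
    for i in range(n - 2, -1, -1):           # right-to-left sweep
        psi = np.exp(beta * (h[i+1] * spins[None, :] + J[i] * np.outer(spins, spins)))
        left[i] = psi @ left[i+1]
        left[i] /= left[i].sum()
    marg = np.array([right[i] * np.exp(beta * h[i] * spins) * left[i] for i in range(n)])
    return marg / marg.sum(axis=1, keepdims=True)

# Tiny hypothetical chain: 4 spins, uniform ferromagnetic coupling
print(ising_chain_marginals(h=np.array([0.5, 0.0, 0.0, -0.5]), J=np.array([1.0, 1.0, 1.0])))
```

On a tree (here, a chain) these messages are exact; the spin-glass literature studies what happens when the same update rules are iterated on loopy graphs.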
Papers presented at the 2003 Neural Information Processing Systems conference by leading physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees -- physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only thirty percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains all the papers presented at the 2003 conference.
This book contains the proceedings of the workshop Uncertainty in Geometric Computations that was held in Sheffield, England, July 5-6, 2001. A total of 59 delegates from 5 countries in Europe, North America and Asia attended the workshop. The workshop provided a forum for the discussion of computational methods for quantifying, representing and assessing the effects of uncertainty in geometric computations. It was organised around lectures by invited speakers, and presentations in poster form from participants. Computer simulations and modelling are used frequently in science and engineering, in applications ranging from the understanding of natural and artificial phenomena, to the design...
Presents a collection of articles by leading researchers in neural networks. This work focuses on data storage and retrieval, and the recognition of handwriting.
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
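As a toy illustration of the margin idea (a hedged sketch, not drawn from the book), the functional margin of a linear classifier on an example (x, y) is y(w·x + b). The code below trains by subgradient descent on the regularized hinge loss, which keeps penalizing small margins even after the raw training error has reached zero; the dataset, learning rate, and regularization constant are all hypothetical.

```python
# Toy sketch of margin-based training (illustrative, not from the book).
# Hinge loss max(0, 1 - y*(w.x + b)) keeps pushing until every example is
# classified with margin >= 1, not merely with zero training error.
import numpy as np

def train_hinge(X, y, lr=0.1, lam=0.01, epochs=200):
    """Subgradient descent on regularized hinge loss (linear SVM flavor)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)   # functional margin of example i
            if margin < 1:                   # inside the margin band: push out
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w            # correct with room to spare: only regularize
    return w, b

# Hypothetical linearly separable data: two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, size=(50, 2)), rng.normal(2, 1, size=(50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

w, b = train_hinge(X, y)
margins = y * (X @ w + b)
print("min margin:", margins.min(), "training errors:", int((margins <= 0).sum()))
```

A plain perceptron would stop updating once the error count hits zero; the hinge objective instead trades training error for a larger margin, which is exactly the scale parameter the blurb says drives the theoretical analysis.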