The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter i...
Eugene A. Feinberg, Adam Shwartz. This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible to graduate or advanced undergraduate students in the fields of operations research, electrical engineering, and computer science. 1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES The theory of Markov Decision Processes, also known under several other names including seq...
This book presents classical Markov Decision Processes (MDP) for real-life applications and optimization. MDP allows users to develop and formally support approximate and simple decision rules, and this book showcases state-of-the-art applications in which MDP was key to the solution approach. The book is divided into six parts. Part 1 is devoted to the state-of-the-art theoretical foundation of MDP, including approximate methods such as policy improvement, successive approximation and infinite state spaces, as well as an instructive chapter on Approximate Dynamic Programming. It then continues with five parts devoted to specific, non-exhaustive application areas. Part 2 covers MDP healthcare appl...
This book provides a unified approach for the study of constrained Markov decision processes with a finite state space and unbounded costs. Unlike the single-objective case considered in many other books, the author considers a single controller with several objectives, such as minimizing delays and loss probabilities and maximizing throughput. It is desirable to design a controller that minimizes one cost objective, subject to inequality constraints on the other cost objectives. This framework describes dynamic decision problems that arise frequently in many engineering fields. A thorough overview of these applications is presented in the introduction. The book is then divided into three sections that build upon each other.
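In symbols, the constrained problem sketched above is usually written as follows (a standard textbook formulation; the labels C_0, C_k and V_k are generic notation chosen here for illustration, not the book's own):

\[
\min_{\pi} \; C_0(\pi) \quad \text{subject to} \quad C_k(\pi) \le V_k, \quad k = 1, \dots, K,
\]

where \(C_0\) is the cost to be minimized (for example, expected delay), \(C_1, \dots, C_K\) are the remaining cost criteria (for example, loss probabilities or negative throughput), \(V_k\) are prescribed bounds, and the minimization runs over admissible policies \(\pi\).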
The Encyclopedia received the 2011 RUSA Award for Outstanding Business Reference Source. AN UNPARALLELED UNDERTAKING: The Wiley Encyclopedia of Operations Research and Management Science is the first multi-volume encyclopedia devoted to advancing the areas of operations research and management science. The Encyclopedia is available online and in print. The Encyclopedia was honored with the distinction of an "Outstanding Business Reference Source" by the Reference and User Services Association. DETAILED AND AUTHORITATIVE: Designed to be a mainstay for students and professionals alike, the Encyclopedia features four types of articles at varying levels, written by diverse, international contributors...
From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the sy...
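To make the "model-free" point concrete, here is a minimal tabular Q-learning sketch on a small, hypothetical chain problem; the environment, reward values, and learning parameters below are illustrative assumptions, not anything drawn from the book itself.

import random

# Hypothetical 5-state chain: action 0 moves left, action 1 moves right.
# Reaching the rightmost state pays +1; every other transition pays 0.
N_STATES, ACTIONS = 5, (0, 1)
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.2

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

# "Model-free": the update below uses only sampled (s, a, r, s') tuples,
# never the transition probabilities hidden inside step().
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(2000):
    s = random.randrange(N_STATES - 1)      # random non-terminal start state
    for _ in range(100):                    # cap the episode length
        if random.random() < EPS:           # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s, act])
        s_next, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s_next, b] for b in ACTIONS)
        Q[s, a] += ALPHA * (target - Q[s, a])
        if done:
            break
        s = s_next

print("greedy policy:", {s: max(ACTIONS, key=lambda act: Q[s, act]) for s in range(N_STATES - 1)})

Classical dynamic programming would instead require the transition probabilities inside step() to be known explicitly; the scaling issues mentioned above arise when the table Q is replaced by a function approximator over large state spaces.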
A BBC TWO BETWEEN THE COVERS BOOK CLUB PICK AN INTERNATIONAL BESTSELLER SHORTLISTED FOR THE MAN BOOKER PRIZE WINNER OF THE MAN ASIAN LITERARY PRIZE AND THE WALTER SCOTT PRIZE Teoh Yun Ling was seventeen years old when she first heard about Aritomo and the garden. But a war would come to Malaya, and a decade pass before she would travel to see him. A man of extraordinary skill and reputation, Aritomo was once the gardener for the Emperor of Japan, and now Yun Ling needs him. She needs him to help her build a memorial to her beloved sister, killed at the hands of the Japanese. She wants to learn everything Aritomo can teach her, and do her sister proud, but to do so she must also begin a journey into her own past, a past inextricably linked with the secrets of her troubled country. A story of art, war, love and memory, The Garden of Evening Mists captures a dark moment in history with richness, power and incredible beauty.
Basic properties of log-linear models; Maximum-likelihood estimation; Numerical evaluation of maximum-likelihood estimates; Asymptotic properties; Complete factorial tables; Social-mobility tables; Incomplete contingency tables; Quantal response models; Some extensions.
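For orientation, the basic object behind these chapter headings, written in standard notation rather than the book's own, is the log-linear model for a two-way contingency table with expected cell counts \(m_{ij}\):

\[
\log m_{ij} \;=\; \mu + \lambda_i^{A} + \lambda_j^{B} + \lambda_{ij}^{AB},
\]

where maximum-likelihood estimation fits the \(\lambda\) parameters by maximizing the Poisson or multinomial likelihood of the observed counts, typically by an iterative numerical method.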
Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as reinforcement learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in artificial intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, reinforcement learning, partially observable MDPs, Markov games and the use of non-classical criteria). It then presents more advanced research trends in the field and gives some concrete examples using illustrative real-life applications.
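For readers meeting the framework for the first time, the discounted optimality equation underlying most of the material described above can be stated as follows (a standard formulation, not a quotation from the book):

\[
V^*(s) \;=\; \max_{a \in A(s)} \Big[ r(s,a) + \gamma \sum_{s' \in S} p(s' \mid s, a)\, V^*(s') \Big], \qquad 0 \le \gamma < 1,
\]

where \(S\) is the state space, \(A(s)\) the set of actions available in state \(s\), \(p\) the transition law, \(r\) the one-step reward, and \(\gamma\) the discount factor; planning methods compute \(V^*\) from a known model, while reinforcement learning estimates it from sampled transitions.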
The theory of Markov decision processes focuses on controlled Markov chains in discrete time. The authors establish the theory for general state and action spaces and at the same time show its application by means of numerous examples, mostly taken from the fields of finance and operations research. By using a structural approach, many technicalities (concerning measure theory) are avoided. They cover problems with finite and infinite horizons, as well as partially observable Markov decision processes, piecewise deterministic Markov decision processes and stopping problems. The book presents Markov decision processes in action and includes various state-of-the-art applications with a particular view towards finance. It is useful for upper-level undergraduates, Master's students and researchers in both applied probability and finance, and provides exercises (without solutions).
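As a small illustration of the finite-horizon case mentioned above, the following backward-induction sketch solves a made-up controlled Markov chain; the transition matrices, rewards, and horizon are hypothetical choices for illustration only and do not come from the book.

import numpy as np

# Hypothetical controlled Markov chain: 3 states, 2 actions.
# P[a][s, s'] is the probability of moving from s to s' under action a.
P = [np.array([[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.2, 0.8]]),
     np.array([[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0]])]
# r[a][s] is the one-step reward for choosing action a in state s.
r = [np.array([1.0, 0.5, 0.0]),
     np.array([0.0, 0.8, 2.0])]
T = 5  # planning horizon

n_states, n_actions = 3, 2
V = np.zeros(n_states)                      # terminal value V_T = 0
policy = np.zeros((T, n_states), dtype=int)

# Backward induction: V_t(s) = max_a [ r(s, a) + sum_{s'} P(s'|s, a) V_{t+1}(s') ]
for t in reversed(range(T)):
    Q = np.stack([r[a] + P[a] @ V for a in range(n_actions)])  # shape (actions, states)
    policy[t] = Q.argmax(axis=0)
    V = Q.max(axis=0)

print("optimal value at t=0:", V)
print("optimal decision rule per stage:", policy)

In the infinite-horizon discounted case, the same maximization is iterated to a fixed point instead of being stepped backwards over a finite number of stages.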