Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, giving rise to the curse of dimensionality and making practical solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the diff...
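As a minimal sketch of the simulation-based setting this blurb describes, the code below estimates the discounted cost of a fixed policy in a tiny hypothetical two-state MDP using only sampled transitions, never an explicit transition matrix. The environment, costs, and parameters are invented purely for illustration and do not come from the book.

```python
import random

# Hypothetical two-state MDP, available only through a simulator,
# mirroring the setting where explicit model parameters are unknown.
def simulate_step(state, action, rng):
    """Sample a (next_state, cost) pair for the given state and action."""
    if state == 0:
        p = 0.7 if action == 0 else 0.3   # action shifts the transition odds
        next_state = 1 if rng.random() < p else 0
        cost = 1.0
    else:
        next_state = 0 if rng.random() < 0.4 else 1
        cost = 2.0
    return next_state, cost

def mc_policy_value(policy, start, gamma=0.9, episodes=2000, horizon=50, seed=0):
    """Monte Carlo estimate of the expected discounted cost of a fixed policy."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        state, discount, ret = start, 1.0, 0.0
        for _ in range(horizon):
            state, cost = simulate_step(state, policy[state], rng)
            ret += discount * cost
            discount *= gamma
        total += ret
    return total / episodes

value = mc_policy_value(policy={0: 0, 1: 0}, start=0)
print(round(value, 2))
```

The same sampling access suffices for the more sophisticated population-based and adaptive-sampling algorithms the book surveys; this plain Monte Carlo average is only the simplest instance.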
Data-Based Controller Design presents a comprehensive analysis of data-based control design. It brings together the different data-based design methods that have been presented in the literature since the late 1990s. To the best of the author's knowledge, these data-based design methods have never before been collected in a single text, analyzed in depth, or compared to each other, and this has severely limited their widespread application. In this book these methods are presented under a common theoretical framework, which also fits a large family of adaptive control methods: the MRAC (Model Reference Adaptive Control) methods. This common theoretical framework has been developed and presented very...
The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a guide for those new to the field in understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science, operations management and stochastic control, as well as in economics/finance and computer science.
There are plenty of challenging and interesting problems open for investigation in the field of switched systems. Stability issues alone give rise to many complex nonlinear dynamic behaviors within switched systems. The authors present a thorough investigation of stability effects on three broad classes of switching mechanism: arbitrary switching, where stability represents robustness to unpredictable and undesirable perturbation; constrained switching, including random (within a known stochastic distribution), dwell-time (with a known minimum duration for each subsystem) and autonomously-generated (with a pre-assigned mechanism) switching; and designed switching, in which a measurable and freely-assigned switching mechanism contributes to stability by acting as a control input. For each of these classes the book presents detailed stability analysis and/or design, related robustness and performance issues, connections to other control problems, and many motivating and illustrative examples.
Although the problem of nonlinear controller design is as old as that of linear controller design, the systematic design methods developed in response are far sparser. Given the range and complexity of nonlinear systems, effective new methods of control design are of significant importance. Dynamic Surface Control of Uncertain Nonlinear Systems provides a theoretically rigorous and practical introduction to nonlinear control design. The convex optimization approach applied to good effect in linear systems is extended to the nonlinear case using the new dynamic surface control (DSC) algorithm developed by the authors. A variety of problems – DSC design, output feedback, input saturat...
This volume provides a general overview of discrete- and continuous-time Markov control processes and stochastic games, along with a look at the range of applications of stochastic control and some of its recent theoretical developments. These topics include various aspects of dynamic programming, approximation algorithms, and infinite-dimensional linear programming. In all, the work comprises 18 carefully selected papers written by experts in their respective fields. Optimization, Control, and Applications of Stochastic Systems will be a valuable resource for all practitioners, researchers, and professionals in applied mathematics and operations research who work in the areas of stochastic control, mathematical finance, queueing theory, and inventory systems. It may also serve as a supplemental text for graduate courses in optimal control and dynamic games.
Discontinuous Systems develops nonsmooth stability analysis and discontinuous control synthesis based on novel modeling of discontinuous dynamic systems operating under uncertain conditions. While primarily a research monograph devoted to the theory of discontinuous dynamic systems, no background in discontinuous systems is required; such systems are introduced in the book at the appropriate conceptual level. Although developed for discontinuous systems, the theory is successfully applied to their subclasses – variable-structure and impulsive systems – as well as to finite- and infinite-dimensional systems such as distributed-parameter and time-delay systems. The presentation concentrates on algorithms rather than on technical implementation, although theoretical results are illustrated by electromechanical applications. These specific applications complete the book and, together with the introductory theoretical material, bring elements of a tutorial to the text.
Foundations of Reinforcement Learning with Applications in Finance aims to demystify Reinforcement Learning, and to make it a practically useful tool for those studying and working in applied areas — especially finance. Reinforcement Learning is emerging as a powerful technique for solving a variety of complex problems across industries that involve Sequential Optimal Decisioning under Uncertainty. Its penetration into high-profile problems like self-driving cars, robotics, and strategy games points to a future where Reinforcement Learning algorithms will have decisioning abilities far superior to humans. But when it comes to getting educated in this area, there seems to be a reluctance to jump...
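As a toy illustration of the sequential decisioning under uncertainty described above, the sketch below runs tabular Q-learning on a hypothetical five-state "chain" environment. The environment, rewards, and hyperparameters are all invented for illustration and are not taken from the book.

```python
import random

# Hypothetical 5-state chain: action 1 moves right and pays reward 1.0
# on reaching (or staying at) the last state; action 0 resets to state 0.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    """Deterministic transition and reward for the toy chain."""
    if action == 1:
        nxt = min(state + 1, N_STATES - 1)
        return nxt, 1.0 if nxt == N_STATES - 1 else 0.0
    return 0, 0.0

def q_learning(episodes=400, steps=25, alpha=0.2, gamma=0.9, seed=0):
    """Off-policy Q-learning driven by a uniformly random behavior policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(steps):
            action = rng.choice(ACTIONS)            # explore uniformly
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # One-step Q-learning update toward the TD target.
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = q_learning()
greedy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(greedy)
```

Since the only reward sits at the end of the chain, the greedy policy extracted from the learned Q-table should favor moving right; the book develops the underlying theory and its financial applications far more carefully than this sketch.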
This is a subject that is as hot as a snake in a wagon rut, offering as it does huge potential in the field of computer programming. That’s why this book, which constitutes the refereed proceedings of the 7th International Symposium on Abstraction, Reformulation, and Approximation, held in Whistler, Canada, in July 2007, will undoubtedly prove popular among researchers and professionals in relevant fields. Twenty-six revised full papers are presented, together with the abstracts of 3 invited papers and 13 research summaries.
Cooperative Control Design: A Systematic, Passivity-Based Approach discusses multi-agent coordination problems, including formation control, attitude coordination, and synchronization. The goal of the book is to introduce passivity as a design tool for multi-agent systems, to provide exemplary work using this tool, and to illustrate its advantages in designing robust cooperative control algorithms. The discussion begins with an introduction to passivity and demonstrates how passivity can be used as a design tool for motion coordination. This is followed by adaptive redesigns for reference velocity recovery, covering a basic design, a modified design, and the parameter convergence problem. Formation control is then presented as it relates to relative distance control and relative position control. The coverage concludes with a comprehensive discussion of agreement and the synchronization problem, with an example using attitude coordination.