
Markov Chains and Invariant Probabilities
  • Language: en
  • Pages: 213

  • Type: Book
  • Published: 2012-12-06
  • Publisher: Birkhäuser

This book is about discrete-time, time-homogeneous Markov chains (MCs) and their ergodic behavior. To this end, most of the material is in fact about stable MCs, by which we mean MCs that admit an invariant probability measure. To state this more precisely and give an overview of the questions we shall be dealing with, we will first introduce some notation and terminology. Let (X, B) be a measurable space, and consider an X-valued Markov chain ξ. = {ξ_k, k = 0, 1, ...} with transition probability function (t.p.f.) P(x, B), i.e., P(x, B) := Prob(ξ_{k+1} ∈ B | ξ_k = x) for each x ∈ X, B ∈ B, and k = 0, 1, .... The MC ξ. is said to be stable if there exists a probability measure (p.m.) μ on B such that (*) μ(B) = ∫_X μ(dx) P(x, B) for all B ∈ B. If (*) holds, then μ is called an invariant p.m. for the MC ξ. (or the t.p.f. P).
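For a chain on a finite state space, the invariance equation (*) reduces to a row vector μ with μP = μ, which can be approximated by iterating the distribution forward. The sketch below uses a hypothetical 2×2 transition matrix (not taken from the book) purely to illustrate the definition:

```python
# Finite-state illustration of the invariance equation (*):
# mu(B) = integral over X of mu(dx) P(x, B).  On a two-point state
# space this becomes mu P = mu for a row vector mu and stochastic
# matrix P.  The matrix P below is a made-up example.

def invariant_pm(P, iters=1000):
    """Approximate an invariant p.m. by iterating mu_{k+1} = mu_k P."""
    n = len(P)
    mu = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

P = [[0.9, 0.1],
     [0.4, 0.6]]  # each row sums to 1, so P(x, .) is a p.m. for each x

mu = invariant_pm(P)
# Check invariance numerically: mu P should equal mu componentwise.
residual = max(abs(sum(mu[i] * P[i][j] for i in range(2)) - mu[j])
               for j in range(2))
```

For this particular P the iteration converges to μ = (0.8, 0.2), since the second eigenvalue of P is 0.5 and the error contracts geometrically; stability in the book's sense is exactly the existence of such a fixed point.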

Hidden Markov Models
  • Language: en
  • Pages: 374

As more applications are found, interest in Hidden Markov Models continues to grow. Following comments and feedback from colleagues, students, and others working with Hidden Markov Models, the corrected 3rd printing of this volume contains clarifications, improvements and some new material, including results on smoothing for linear Gaussian dynamics. In Chapter 2 the derivation of the basic filters related to the Markov chain are each presented explicitly, rather than as special cases of one general filter. Furthermore, equations for smoothed estimates are given. The dynamics for the Kalman filter are derived as special cases of the authors' general results and new expressions for a Kalman smoother are given. The chapters on the control of Hidden Markov Chains are expanded and clarified. The revised Chapter 4 includes state estimation for discrete time Markov processes and Chapter 12 has a new section on robust control.

Markov Processes, Brownian Motion, and Time Symmetry
  • Language: en
  • Pages: 444

From the reviews of the First Edition: "This excellent book is based on several sets of lecture notes written over a decade and has its origin in a one-semester course given by the author at the ETH, Zürich, in the spring of 1970. The author's aim was to present some of the best features of Markov processes and, in particular, of Brownian motion with a minimum of prerequisites and technicalities. The reader who becomes acquainted with the volume cannot but agree with the reviewer that the author was very successful in accomplishing this goal...The volume is very useful for people who wish to learn Markov processes but it seems to the reviewer that it is also of great interest to specialists in this area who could derive much stimulus from it. One can be convinced that it will receive wide circulation." (Mathematical Reviews) This new edition contains 9 new chapters which include new exercises, references, and multiple corrections throughout the original text.

Discrete-Time Markov Control Processes
  • Language: en
  • Pages: 223

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example,...

Markov Chains
  • Language: en
  • Pages: 456

Primarily an introduction to the theory of stochastic processes at the undergraduate or beginning graduate level, this book aims to initiate students in the art of stochastic modelling. However, it is motivated by significant applications and progressively brings the student to the borders of contemporary research. Examples are drawn from a wide range of domains, including operations research and electrical engineering. Researchers and students in these areas, as well as in physics, biology and the social sciences, will find this book of interest.

Brownian Motion and Stochastic Calculus
  • Language: en
  • Pages: 490

  • Type: Book
  • Published: 2014-03-27
  • Publisher: Springer

A graduate-course text, written for readers familiar with measure-theoretic probability and discrete-time processes who wish to explore stochastic processes in continuous time. The vehicle chosen for this exposition is Brownian motion, which is presented as the canonical example of both a martingale and a Markov process with continuous paths. In this context, the theory of stochastic integration and stochastic calculus is developed, illustrated by results concerning representations of martingales and change of measure on Wiener space, which in turn permit a presentation of recent advances in financial economics. The book contains a detailed discussion of weak and strong solutions of stochastic differential equations and a study of local time for semimartingales, with special emphasis on the theory of Brownian local time. The whole is backed by a large number of problems and exercises.

Markov Processes, Gaussian Processes, and Local Times
  • Language: en
  • Pages: 640

A readable 2006 synthesis of three main areas in the modern theory of stochastic processes.

Further Topics on Discrete-Time Markov Control Processes
  • Language: en
  • Pages: 286

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the author's earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Markov Decision Processes with Applications to Finance
  • Language: en
  • Pages: 393

The theory of Markov decision processes focuses on controlled Markov chains in discrete time. The authors establish the theory for general state and action spaces and at the same time show its application by means of numerous examples, mostly taken from the fields of finance and operations research. By using a structural approach many technicalities (concerning measure theory) are avoided. They cover problems with finite and infinite horizons, as well as partially observable Markov decision processes, piecewise deterministic Markov decision processes and stopping problems. The book presents Markov decision processes in action and includes various state-of-the-art applications with a particular view towards finance. It is useful for upper-level undergraduates, Master's students and researchers in both applied probability and finance, and provides exercises (without solutions).

Markov Chains with Stationary Transition Probabilities
  • Language: en
  • Pages: 287

  • Type: Book
  • Published: 2013-03-08
  • Publisher: Springer

The theory of Markov chains, although a special case of Markov processes, is here developed for its own sake and presented on its own merits. In general, the hypothesis of a denumerable state space, which is the defining hypothesis of what we call a "chain" here, generates more clear-cut questions and demands more precise and definitive answers. For example, the principal limit theorem (§§ I.6, II.10), still the object of research for general Markov processes, is here in its neat final form; and the strong Markov property (§ II.9) is here always applicable. While probability theory has advanced far enough that a degree of sophistication is needed even in the limited context of this bo...