Contributed in honour of Lucien Le Cam on the occasion of his 70th birthday, the papers reflect the immense influence that his work has had on modern statistics. They include discussions of his seminal ideas, historical perspectives, and contributions to current research, and they span two centuries with a new translation of a paper by Daniel Bernoulli. The volume begins with a paper by Aalen, which describes Le Cam's role in the founding of the martingale analysis of point processes, and ends with one by Yu, exploring the place of just one of Le Cam's ideas in modern semiparametric theory. The other 27 papers touch on areas such as local asymptotic normality, contiguity, efficiency, admissibility, minimaxity, empirical process theory, and biological, medical, and meteorological applications, where Le Cam's insights have laid the foundations for new theories.
This book grew out of lectures delivered at the University of California, Berkeley, over many years. The subject is a part of asymptotics in statistics, organized around a few central ideas. The presentation proceeds from the general to the particular, since this seemed the best way to emphasize the basic concepts. The reader is expected to have been exposed to statistical thinking and methodology, as expounded for instance in the book by H. Cramér [1946] or the more recent text by P. Bickel and K. Doksum [1977]. Another possibility, closer to the present book in spirit, is Ferguson [1967]. Otherwise the reader is expected to possess some mathematical maturity, but not really a great deal of deta...
This is the second edition of a coherent introduction to the subject of asymptotic statistics as it has developed over the past 50 years. It differs from the first edition in that it is now more 'reader friendly' and also includes a new chapter on Gaussian and Poisson experiments, reflecting their growing role in the field. Most of the subsequent chapters have been entirely rewritten, and the nonparametrics of Chapter 7 have been amplified. The volume is not intended to replace monographs on specialized subjects, but to help place them in a coherent perspective. It thus represents a link between traditional material, such as maximum likelihood and Wald's theory of statistical decision functions, and the comparison of, and distances between, experiments. Much of the material has been taught in a second-year graduate course at Berkeley for 30 years.
This book is an introduction to the field of asymptotic statistics. The treatment is both practical and mathematically rigorous. In addition to most of the standard topics of an asymptotics course, including likelihood inference, M-estimation, the theory of asymptotic efficiency, U-statistics, and rank procedures, the book also presents recent research topics such as semiparametric models, the bootstrap, and empirical processes and their applications. The topics are organized around the central idea of approximation by limit experiments, which gives the book one of its unifying themes. This entails mainly the local approximation of the classical i.i.d. setup with smooth parameters by location experiments involving a single, normally distributed observation. Thus, even the standard subjects of asymptotic statistics are presented in a novel way. Suitable as a graduate or Master's-level statistics text, this book will also give researchers an overview of the latest research in asymptotic statistics.
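The "local approximation by a normal location experiment" mentioned in the description is usually made precise via local asymptotic normality (LAN). A standard formulation, with notation supplied here for illustration rather than taken from the book's description, is:

```latex
% LAN for a smooth i.i.d. model \{P_\theta : \theta \in \Theta \subset \mathbb{R}^k\}
% with nonsingular Fisher information I_\theta:
\log \frac{dP^{n}_{\theta + h/\sqrt{n}}}{dP^{n}_{\theta}}
  = h^{\top} \Delta_{n,\theta}
  - \tfrac{1}{2}\, h^{\top} I_\theta\, h
  + o_{P_\theta}(1),
\qquad
\Delta_{n,\theta} \rightsquigarrow N(0, I_\theta).
```

Under this expansion the local experiments converge to the Gaussian shift experiment of observing a single $X \sim N(h, I_\theta^{-1})$ with unknown shift $h$, which is the sense in which classical smooth i.i.d. problems reduce to a single normally distributed observation.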
There are a number of important questions associated with statistical experiments: when does one given experiment yield more information than another; how can we measure the difference in information; how fast does information accumulate by repeating the experiment? The means of answering such questions has emerged from the work of Wald, Blackwell, Le Cam and others and is based on the ideas of risk and deficiency. The present work, which is devoted to the various methods of comparing statistical experiments, is essentially self-contained, requiring only some background in measure theory and functional analysis. Chapters introducing statistical experiments and the necessary convex analysis begin the book and are followed by others on game theory, decision theory and vector lattices. The notion of deficiency, which measures the difference in information between two experiments, is then introduced. The relations between it and other concepts, such as sufficiency, randomisation, distance, ordering, equivalence, completeness and convergence, are explored. This is a comprehensive treatment of the subject and will be an essential reference for mathematical statisticians.
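The deficiency referred to here is usually written as follows; this is a standard formulation supplied for illustration, not a quotation from the book:

```latex
% Le Cam's deficiency of E = (P_\theta)_{\theta\in\Theta} relative to
% F = (Q_\theta)_{\theta\in\Theta}: the smallest worst-case total-variation
% error incurred when reconstructing F from E by a randomisation
% (Markov kernel) K.
\delta(E, F) = \inf_{K} \sup_{\theta \in \Theta}
  \bigl\| K P_\theta - Q_\theta \bigr\|_{\mathrm{TV}},
\qquad
\Delta(E, F) = \max\bigl\{ \delta(E, F),\, \delta(F, E) \bigr\}.
```

Here $\delta(E, F) = 0$ says that $E$ is at least as informative as $F$ (every risk achievable in $F$ is achievable in $E$), and $\Delta$ is a pseudometric under which $\Delta(E, F) = 0$ expresses equivalence of the two experiments.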
A Course in Large Sample Theory is presented in four parts. The first treats basic probabilistic notions, the second features the basic statistical tools for expanding the theory, the third contains special topics as applications of the general theory, and the fourth covers more standard statistical topics. Nearly all topics are covered in their multivariate setting. The book is intended as a first-year graduate course in large sample theory for statisticians. It has been used by graduate students in statistics, biostatistics, mathematics, and related fields. Throughout the book there are many examples and exercises with solutions. It is an ideal text for self-study.
Written by pioneers in this exciting new field, Algebraic Statistics introduces the application of polynomial algebra to experimental design, discrete probability, and statistics. It begins with an introduction to Gröbner bases and a thorough description of their applications to experimental design. A special chapter covers the binary case