Here, the authors explain the basic ideas so as to generate interest in modern problems of experimental design. The topics discussed include designs for inference based on nonlinear models, designs for models with random parameters and stochastic processes, designs for model discrimination and incorrectly specified (contaminated) models, as well as examples of designs in functional spaces. Since the authors avoid technical details, the book assumes only a moderate background in calculus, matrix algebra, and statistics. However, at many places, hints are given as to how readers may enhance and adapt the basic ideas for advanced problems or applications. This allows the book to be used for courses at different levels, as well as serving as a useful reference for graduate students and researchers in statistics and engineering.
In 1984, the University of Bonn (FRG) and the International Institute for Applied Systems Analysis (IIASA) in Laxenburg (Austria) created a joint research group to analyze the relationship between economic growth and structural change. The research team was to examine the commodity composition as well as the size and direction of commodity and credit flows among countries and regions. Krelle (1988) reports on the results of this "Bonn-IIASA" research project. At the same time, an informal IIASA Working Group was initiated to deal with problems of the statistical analysis of economic data in the context of structural change: What tools do we have to identify nonconstancy of model parameters?...
Statistical disclosure control is the discipline that deals with producing statistical data that are safe enough to be released to external researchers. This book concentrates on the methodology of the area. It deals with both microdata (individual data) and tabular (aggregated) data. The book attempts to develop the theory from what can be called the paradigm of statistical confidentiality: to modify unsafe data in such a way that safe (enough) data emerge, with minimum information loss. This book discusses what safe data are, how information loss can be measured, and how to modify the data in a (near) optimal way. Once it has been decided how to measure safety and information loss, the pr...
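As a rough, hypothetical illustration of the trade-off the paradigm describes (not the book's own method), the following sketch perturbs an "unsafe" microdata variable with multiplicative noise and reports a crude information-loss measure; the noise levels, data, and loss measure are all invented for the example.

```python
import numpy as np

# Toy sketch of the disclosure-control trade-off: perturb unsafe
# microdata, then measure the resulting information loss.
# Noise levels, data, and the loss measure are illustrative only.
rng = np.random.default_rng(2)
income = rng.lognormal(mean=10, sigma=0.5, size=1000)   # "unsafe" variable

def add_noise(x, rel_sd):
    """Multiplicative noise: larger rel_sd is safer but more distorted."""
    return x * rng.normal(loc=1.0, scale=rel_sd, size=x.size)

def info_loss(orig, masked):
    """Crude loss measure: relative change in mean and standard deviation."""
    return (abs(masked.mean() - orig.mean()) / orig.mean()
            + abs(masked.std() - orig.std()) / orig.std())

for rel_sd in (0.01, 0.05, 0.20):
    masked = add_noise(income, rel_sd)
    print(f"noise {rel_sd:.2f}: information loss {info_loss(income, masked):.4f}")
```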
Researchers in many disciplines face the formidable task of analyzing massive amounts of high-dimensional and highly structured data. This is due in part to recent advances in data collection and computing technologies. As a result, fundamental statistical research is being undertaken in a variety of different fields. Driven by the complexity of these new problems, and fueled by the explosion of available computer power, highly adaptive, non-linear procedures are now essential components of modern "data analysis," a term that we liberally interpret to include speech and pattern recognition, classification, data compression and signal processing. The development of new, flexible methods combines advances from many sources, including approximation theory, numerical analysis, machine learning, signal processing and statistics. The proposed workshop intends to bring together eminent experts from these fields in order to exchange ideas and forge directions for the future.
The book covers the basic theory of linear regression models and presents a comprehensive survey of different estimation techniques as alternatives and complements to least squares estimation. Proofs are given for the most relevant results, and the presented methods are illustrated with the help of numerical examples and graphics. Special emphasis is placed on practicability and possible applications. The book is rounded off by an introduction to the basics of decision theory and an appendix on matrix algebra.
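As a minimal, hypothetical sketch of the kind of estimation alternatives such a survey compares, the snippet below contrasts ordinary least squares with ridge regression (a shrinkage estimator); the simulated data and penalty value are made up for illustration.

```python
import numpy as np

# Toy comparison: ordinary least squares vs. ridge regression.
# Data and the ridge penalty are illustrative only.
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.5, -2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# OLS: solve (X'X) beta = X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: solve (X'X + lambda I) beta = X'y, shrinking the coefficients
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("OLS:  ", beta_ols)
print("Ridge:", beta_ridge)
```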
Government policy questions and media planning tasks may be answered by means of such matched data sets. The book covers a wide range of aspects of statistical matching, which in Europe is typically called data fusion. A book about statistical matching will be of interest to researchers and practitioners, starting with data collection and the production of public use micro files, data banks, and databases. People in the areas of database marketing, public health analysis, socioeconomic modeling, and official statistics will find it useful.
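For readers unfamiliar with the idea, here is a hypothetical sketch of one simple matching scheme (nearest-neighbour donor imputation on common variables), not necessarily one the book recommends; the files, variables, and sizes are invented.

```python
import numpy as np

# Toy statistical matching (data fusion): recipient records receive a
# value for a variable observed only in a donor file, via a
# nearest-neighbour match on the common variables. Data are made up.
rng = np.random.default_rng(3)

# Donor file: common variables plus a fusion variable observed only here
donor_common = rng.normal(size=(200, 2))
donor_tv_hours = 2 + donor_common @ np.array([0.3, -0.5]) \
    + rng.normal(scale=0.2, size=200)

# Recipient file: only the common variables are observed
recip_common = rng.normal(size=(100, 2))

# For each recipient, copy the fusion variable from the closest donor
dists = np.linalg.norm(recip_common[:, None, :] - donor_common[None, :, :], axis=2)
nearest = dists.argmin(axis=1)
recip_tv_hours = donor_tv_hours[nearest]
print("fused variable for first 5 recipients:", recip_tv_hours[:5])
```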
The book is composed of two volumes, each consisting of five chapters. In Volume I, following some statistical motivation based on a randomization model, a general theory of the analysis of experiments in block designs has been developed. In the present Volume II, the primary aim is to present methods of constructing block designs that satisfy the statistical requirements described in Volume I, particularly those considered in Chapters 3 and 4, and also to give some catalogues of plans of the designs. Thus, the constructional aspects are of predominant interest in Volume II, with a general consideration given in Chapter 6. The main design investigations are systematized by separating the m...
Artificial "neural networks" are widely used as flexible models for classification and regression applications, but questions remain about how the power of these models can be safely exploited when training data is limited. This book demonstrates how Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional training methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. A practical implementation of Bayesian neural network learning using Markov chain Monte Carlo methods is also described, and software for it is freely available over the Internet. Presupposing only basic knowledge of probability and statistics, this book should be of interest to researchers in statistics, engineering, and artificial intelligence.
Bayesian and likelihood approaches to inference have a number of points of close contact, especially from an asymptotic point of view. Both emphasize the construction of interval estimates of unknown parameters. In this volume, researchers present recent work on several aspects of Bayesian, likelihood and empirical Bayes methods, presented at a workshop held in Montreal, Canada. The goal of the workshop was to explore the linkages among the methods, and to suggest new directions for research in the theory of inference.
The main subject of this book is the estimation and forecasting of continuous time processes. It leads to a development of the theory of linear processes in function spaces. The necessary mathematical tools are presented, followed by autoregressive processes in Hilbert and Banach spaces, general linear processes, and statistical prediction. Implementation and numerical applications are also covered. The book assumes knowledge of classical probability theory and statistics.