This volume highlights Prof. Hira Koul’s achievements in many areas of statistics, including the asymptotic theory of statistical inference, robustness, weighted empirical processes and their applications, survival analysis, nonlinear time series, and econometrics. The chapters are all original papers that explore the frontiers of these areas and will assist researchers and graduate students working in statistics, econometrics, and related fields. Prof. Hira Koul was the first Ph.D. student of Prof. Peter Bickel. His distinguished career in statistics includes many prestigious awards, among them the Senior Humboldt Award (1995), and dedicated service to the profession through editorial work for journals and through leadership roles in professional societies, notably as past president of the International Indian Statistical Association. Prof. Koul has graduated close to 30 Ph.D. students and has made seminal contributions in about 125 innovative research papers. The long list of his distinguished collaborators is represented by the contributors to this volume.
This is a graduate-level textbook on measure theory and probability theory. It presents the main concepts and results of both subjects in a simple and easy-to-understand way, and it provides heuristic explanations behind the theory to help students see the big picture. The book can be used as a text for a two-semester sequence of courses in measure theory and probability theory, with an option to include supplemental material on stochastic processes and special topics. Prerequisites are kept to a minimal level, and the book is intended primarily for first-year Ph.D. students in mathematics and statistics.
This book presents a unified approach for obtaining the limiting distributions of minimum distance estimators. It discusses classes of goodness-of-fit tests for fitting an error distribution in some of these models and/or fitting a regression-autoregressive function without assuming knowledge of the error distribution. The main tool is the asymptotic equicontinuity of certain basic weighted residual empirical processes in the uniform and L2 metrics.
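To make the minimum distance idea concrete, here is a minimal sketch, not drawn from the book, of a minimum distance estimator in a toy location model with assumed standard normal errors: the estimate minimizes a Cramér–von Mises type L2 distance between the empirical distribution of the residuals and the hypothesized error distribution. The function names `cvm_distance` and `md_estimate` are hypothetical.

```python
# Hypothetical sketch: minimum distance estimation of a location parameter
# in the model y_i = theta + e_i, with a hypothesized N(0, 1) error law.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def cvm_distance(theta, y):
    """Cramer-von Mises type L2 distance between the residual ECDF and the N(0,1) CDF."""
    resid = np.sort(y - theta)
    n = resid.size
    ecdf = (np.arange(1, n + 1) - 0.5) / n          # plotting positions
    return np.mean((ecdf - norm.cdf(resid)) ** 2)   # L2 discrepancy

def md_estimate(y):
    """Minimum distance estimate of the location parameter theta."""
    return minimize_scalar(cvm_distance, args=(y,), method="brent").x

rng = np.random.default_rng(0)
y = 2.0 + rng.normal(size=200)      # true theta = 2, standard normal errors
print(md_estimate(y))               # should be close to 2
```

In the regression and autoregression settings the book treats, the same minimization is carried out over weighted residual empirical processes rather than the plain ECDF used in this toy example.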
During the last two decades, many areas of statistical inference have experienced phenomenal growth. This book presents a timely analysis and overview of some of these new developments and a contemporary outlook on the various frontiers of statistics. Eminent leaders in the field have contributed 16 review articles and 6 research articles covering areas including semi-parametric models, data analytical nonparametric methods, statistical learning, network tomography, longitudinal data analysis, financial econometrics, time series, bootstrap and other re-sampling methodologies, statistical computing, generalized nonlinear regression and mixed effects models, martingale transform tests for model diagnostics, robust multivariate analysis, single index models, and wavelets. This volume is dedicated to Prof. Peter J. Bickel in honor of his 65th birthday. The first article of this volume summarizes some of Prof. Bickel's distinguished contributions.
Toxicogenomics was established as a merger of toxicology with genomics approaches and methodologies more than 15 years ago, and it was considered of major value for studying toxic mechanisms of action in greater depth and for classifying toxic agents in order to predict adverse human health risks. While the original focus was on technological validation, in particular of microarray-based whole-genome expression analysis (transcriptomics), mainly through cross-comparing different platforms for data generation (MAQC-I), it was soon appreciated that the wide variety of data analysis approaches represents the major source of inter-study variation. This led to early attempts towards harmonizing...
The advent of high-speed, affordable computers in the last two decades has given a new boost to the nonparametric way of thinking. Classical nonparametric procedures, such as function smoothing, suddenly lost their abstract flavour as they became practically implementable. In addition, many previously unthinkable possibilities became mainstream; prime examples include the bootstrap and resampling methods, wavelets and nonlinear smoothers, graphical methods, data mining, bioinformatics, as well as the more recent algorithmic approaches such as bagging and boosting. This volume is a collection of short articles, most of which have a review component, describing the state of the art of nonparametric statistics at the beginning of a new millennium. Key features:
• algorithmic approaches
• wavelets and nonlinear smoothers
• graphical methods and data mining
• biostatistics and bioinformatics
• bagging and boosting
• support vector machines
• resampling methods
This textbook provides an easy-to-understand introduction to the mathematical concepts and algorithms at the foundation of data science. It covers essential parts of data organization, descriptive and inferential statistics, probability theory, and machine learning. These topics are presented in a clear and mathematically sound way to help readers gain a deep and fundamental understanding. Numerous application examples based on real data are included. The book is well suited for lecturers and students at technical universities, and it offers a good introduction and overview for people who are new to the subject. Basic mathematical knowledge of calculus and linear algebra is required.
The Second European Conference on Geostatistics for Environmental Applications took place in Valencia, November 18-20, 1998. Two years have passed since the first meeting in Lisbon, and the geostatistical community has remained active in the environmental field. In these days of congress inflation, we feel that continuity can only be achieved by ensuring quality in the papers. For this reason, all papers in the book have been reviewed by at least two referees, and care has been taken to ensure that the reviewers' comments have been incorporated in the final version of each manuscript. We are thankful to the members of the scientific committee for their timely review of the manuscripts. All in all, there are three keynote papers from experts in soil science, climatology, and ecology, and 43 contributed papers, providing a good indication of the status of geostatistics as applied in the environmental field all over the world. We now feel confident that the geoENV conference series, seeded around a coffee table almost six years ago, will march firmly into the next century.
The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, analysis of variance, contingency table analysis, and measures of association and agreement. A non-mathematical approach makes the text accessible to readers of all levels.
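As a concrete illustration of the resampling-approximation idea mentioned in the description, here is a minimal sketch, not taken from the book, of a permutation test for a two-sample difference in means in which the p-value is estimated from a random subset of re-labelings rather than the full permutation distribution. The helper `permutation_pvalue` and its parameters are hypothetical.

```python
# Hypothetical sketch: resampling-approximated permutation test for a
# difference in means between two samples. An exact permutation p-value would
# enumerate all re-labelings; here it is estimated from n_perm random ones.
import numpy as np

def permutation_pvalue(x, y, n_perm=10_000, rng=None):
    """Two-sided permutation p-value for the difference in sample means."""
    rng = rng or np.random.default_rng()
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)              # random re-labeling of the pooled data
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        count += stat >= observed
    return (count + 1) / (n_perm + 1)               # add-one correction keeps the p-value positive

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=15)
b = rng.normal(0.8, 1.0, size=12)
print(permutation_pvalue(a, b, rng=rng))
```

Because the only assumption is exchangeability of the observations under the null hypothesis, nothing in the sketch relies on a distributional form, which is the distribution-free property the description emphasizes.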