The explosive development of information science and technology poses new problems in statistical data analysis. These problems result from higher requirements concerning the reliability of statistical decisions, the accuracy of mathematical models and the quality of control in complex systems. A new aspect of statistical analysis has emerged, closely connected with one of the basic questions of synergetics: how to "compress" large volumes of experimental data in order to extract the most valuable information from the data observed. Detection of large "homogeneous" segments of data enables one to identify "hidden" regularities in an object's behavior, to create mathematical models for ...
Recently there has been keen interest in the statistical analysis of change point detection and estimation. Mainly, this is because change point problems are encountered in many disciplines, such as economics, finance, medicine, psychology, geology and literature, and even in our daily lives. From the statistical point of view, a change point is a place or time point such that the observations follow one distribution up to that point and follow another distribution after that point. The multiple change point problem can be defined similarly. The change point problem is thus twofold: one part is to decide whether there is any change (often viewed as a hypothesis testing problem), another is ...
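To make the twofold formulation concrete, here is a minimal sketch (in Python, not taken from the book) of a CUSUM-type scan for a single mean shift: the maximal standardized statistic addresses the testing question, and the index at which it is attained gives the change point estimate. The Gaussian mean-shift setting, the function name, and the simulated data are illustrative assumptions.

```python
import numpy as np

def cusum_change_point(x: np.ndarray):
    """Scan for a single mean shift: return the maximal standardized
    CUSUM-type statistic and the index at which it is attained."""
    n = len(x)
    sd = x.std(ddof=1)  # crude overall scale estimate; refinements exist
    stats = np.empty(n - 1)
    for k in range(1, n):
        # Standardized difference between the means before and after k;
        # large values suggest the two segments follow different distributions.
        stats[k - 1] = np.sqrt(k * (n - k) / n) * abs(x[:k].mean() - x[k:].mean()) / sd
    k_hat = int(np.argmax(stats)) + 1
    return float(stats[k_hat - 1]), k_hat

# Hypothetical data: mean 0 for 100 points, then mean 1.5 for 100 points.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
T, k_hat = cusum_change_point(x)
print(f"max statistic = {T:.2f}, estimated change point = {k_hat}")
# Testing: compare T with a critical value (e.g. simulated under the
# no-change null); estimation: k_hat locates where the shift occurred.
```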
This research monograph on circular data analysis covers some recent advances in the field, besides providing a brief introduction to, and a review of, existing methods and models. The primary focus is on recent research into topics such as change-point problems, predictive distributions, and circular correlation and regression. An important feature of this work is the set of S-plus subroutines provided for analyzing actual data sets. Coupled with the discussion of new theoretical research, the book should benefit both the researcher and the practitioner.
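As a rough illustration of the circular quantities mentioned above (in Python rather than the book's S-plus), here is a minimal sketch of the mean direction of a sample of angles and the Jammalamadaka-Sarma circular correlation coefficient; the von Mises simulated data and the function names are assumptions made for the example.

```python
import numpy as np

def mean_direction(theta: np.ndarray) -> float:
    """Mean direction of angles (radians): the arctangent of the
    averaged sine and cosine components."""
    return float(np.arctan2(np.sin(theta).mean(), np.cos(theta).mean()))

def circular_correlation(alpha: np.ndarray, beta: np.ndarray) -> float:
    """Jammalamadaka-Sarma circular correlation between two samples of
    angles, an analogue of Pearson's r for the circle."""
    a = np.sin(alpha - mean_direction(alpha))
    b = np.sin(beta - mean_direction(beta))
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

# Hypothetical example: beta is a noisy rotation of alpha.
rng = np.random.default_rng(1)
alpha = rng.vonmises(mu=0.0, kappa=2.0, size=200)
beta = (alpha + rng.vonmises(0.0, 8.0, 200)) % (2 * np.pi)
print(f"circular correlation = {circular_correlation(alpha, beta):.2f}")
```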
This volume is a compressed survey containing recent results on statistics of stochastic processes and on identification with incomplete observations. It comprises a collection of papers presented at the Shoresh Conference 2000 on the Foundation of Statistical Inference. The papers cover the following areas with high research activity:
- Identification with Incomplete Observations, Data Mining,
- Bayesian Methods and Modelling,
- Testing, Goodness of Fit and Randomness,
- Statistics of Stationary Processes.
The series Contemporary Perspectives on Data Mining is composed of blind-refereed scholarly research on the methods and applications of data mining. The series is targeted at both the academic community and the business practitioner. Data mining seeks to discover knowledge from vast amounts of data with the use of statistical and mathematical techniques. The knowledge is extracted from the data by examining its patterns, whether they be associations of groups or things, predictions, sequential relationships between time-ordered events, or natural groupings. Data mining applications are seen in finance (banking, brokerage, insurance), marketing (customer relationships, retailing, logistics, travel), as well as in manufacturing, health care, fraud detection, homeland security, and law enforcement.
Stationarity is a very general, qualitative assumption that can be assessed on the basis of application specifics. It is thus a rather attractive assumption on which to base statistical analysis, especially for problems where less general qualitative assumptions, such as independence or finite memory, clearly fail. However, it has long been considered too general to support statistical inference. One of the reasons for this is that rates of convergence, even of frequencies to the mean, are not available under this assumption alone. Recently, it has been shown that, while some natural and simple problems, such as homogeneity, are indeed provably impossible to solve if one only assumes ...