Modern statistics deals with large and complex data sets, and consequently with models containing a large number of parameters. This book presents a detailed account of recently developed approaches, including the Lasso and versions of it for various models, boosting methods, undirected graphical modeling, and procedures controlling false positive selections. A special characteristic of the book is that it contains comprehensive mathematical theory on high-dimensional statistics combined with methodology, algorithms and illustrations with real data examples. This in-depth approach highlights the methods’ great potential and practical applicability in a variety of settings. As such, it is a valuable resource for researchers, graduate students and experts in statistics, applied mathematics and computer science.
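The Lasso mentioned above handles models with more parameters than observations by penalising the sum of absolute coefficient values, which forces most estimates to exactly zero. A minimal sketch with scikit-learn (the simulated data, dimensions, and penalty level are illustrative assumptions, not taken from the book):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200  # high-dimensional setting: more parameters than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]  # only three coefficients are truly active
y = X @ beta + 0.1 * rng.standard_normal(n)

# The L1 penalty (alpha) shrinks coefficients and sets most of them to zero,
# performing variable selection and estimation in one step.
model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```

The resulting coefficient vector is sparse: only a small subset of the 200 candidate predictors receives a nonzero estimate, which is the false-positive-control behaviour the blurb alludes to.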
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on future research directions, the contributions will benefit graduate students and researchers in computational biology, statistics and the machine learning community.
This volume presents selections of Peter J. Bickel’s major papers, along with comments on their novelty and impact on the subsequent development of statistics as a discipline. Each of the eight parts concerns a particular area of research and provides new commentary by experts in the area. The parts range from Rank-Based Nonparametrics to Function Estimation and Bootstrap Resampling. Peter’s amazing career encompasses the majority of statistical developments in the last half-century, or about half of the entire history of the systematic development of statistics. This volume shares insights on these exciting statistical developments with future generations of statisticians. The compilation of supporting material about Peter’s life and work helps readers understand the environment in which his research was conducted. The material will also inspire readers in their own research-based pursuits. This volume includes new photos of Peter Bickel, his biography, publication list, and a list of his students. These give the reader a more complete picture of Peter Bickel as a teacher, a friend, a colleague, and a family man.
The Handbook of Computational Statistics: Concepts and Methodology is divided into four parts. It begins with an overview of the field of Computational Statistics. The second part presents several topics in the supporting field of statistical computing. Emphasis is placed on the need for fast and accurate numerical algorithms, and some of the basic methodologies for transformation, database handling and graphics treatment are discussed. The third part focuses on statistical methodology. Special attention is given to smoothing, iterative procedures, simulation and visualization of multivariate data. Finally, a set of selected applications, such as Bioinformatics, Medical Imaging, Finance and Network Intrusion Detection, highlights the usefulness of computational statistics.
The advent of high-speed, affordable computers in the last two decades has given a new boost to the nonparametric way of thinking. Classical nonparametric procedures, such as function smoothing, suddenly lost their abstract flavour as they became practically implementable. In addition, many previously unthinkable possibilities became mainstream; prime examples include the bootstrap and resampling methods, wavelets and nonlinear smoothers, graphical methods, data mining, bioinformatics, as well as the more recent algorithmic approaches such as bagging and boosting. This volume is a collection of short articles - most of which have a review component - describing the state of the art of Nonparametric Statistics at the beginning of a new millennium. Key features:
- algorithmic approaches
- wavelets and nonlinear smoothers
- graphical methods and data mining
- biostatistics and bioinformatics
- bagging and boosting
- support vector machines
- resampling methods
This open access book discusses the statistical modeling of insurance problems, a process which comprises data collection, data analysis and statistical model building to forecast insured events that may happen in the future. It presents the mathematical foundations behind these fundamental statistical concepts and how they can be applied in daily actuarial practice. Statistical modeling has a wide range of applications, and, depending on the application, the theoretical aspects may be weighted differently: here the main focus is on prediction rather than explanation. Starting with a presentation of state-of-the-art actuarial models, such as generalized linear models, the book then dives into modern machine learning tools such as neural networks and text recognition to improve predictive modeling with complex features. Providing practitioners with detailed guidance on how to apply machine learning methods to real-world data sets, and how to interpret the results without losing sight of the mathematical assumptions on which these methods are based, the book can serve as a modern basis for an actuarial education syllabus.
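The generalized linear models named above are the standard actuarial tool for claim-frequency prediction: claim counts are modeled as Poisson with a rate that depends on policy features through a log link. A minimal sketch with scikit-learn (the portfolio features and coefficient values are illustrative assumptions, not from the book):

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
n = 1000
# hypothetical rating factors, e.g. standardised driver age and vehicle power
X = rng.uniform(0.0, 1.0, size=(n, 2))
# true expected claim frequency under a log link: lambda = exp(b0 + b1*x1 + b2*x2)
lam = np.exp(0.5 + 1.0 * X[:, 0] - 0.8 * X[:, 1])
y = rng.poisson(lam)  # observed claim counts

# Poisson GLM with log link; alpha=0.0 disables regularisation so the fit
# corresponds to ordinary maximum likelihood estimation.
glm = PoissonRegressor(alpha=0.0).fit(X, y)
print("estimated coefficients:", glm.coef_)
```

The fitted coefficients recover the direction of each effect (a positive loading for the first factor, a negative one for the second), and the linear predictor stays interpretable, which is why the book stresses keeping the underlying statistical assumptions in view even when moving on to neural-network extensions.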
The Handbook of Computational Statistics - Concepts and Methods (second edition) is a revision of the first edition, published in 2004, and contains additional comments and updated information on the existing chapters, as well as three new chapters addressing recent work in the field of computational statistics. This new edition is divided into four parts, as was the first. It begins with "How Computational Statistics became the backbone of modern data science" (Ch. 1): an overview of the field of Computational Statistics, how it emerged as a separate discipline, and how its own development mirrored that of hardware and software, including a discussion of current active researc...
This volume seeks to infer large phylogenetic networks from phonetically encoded lexical data and contribute in this way to the historical study of language varieties. The technical step that enables progress in this case is the use of causal inference algorithms. Sample sets of words from language varieties are preprocessed into automatically inferred cognate sets, and then modeled as information-theoretic variables based on an intuitive measure of cognate overlap. Causal inference is then applied to these variables in order to determine the existence and direction of influence among the varieties. The directed arcs in the resulting graph structures can be interpreted as reflecting the exis...
In most languages, words contain vowels, elements of high intensity with rich harmonic structure, enabling the perceptual retrieval of pitch. By contrast, in Tashlhiyt, a Berber language, words can be composed entirely of voiceless segments. When an utterance consists of such words, the phonetic opportunity for the execution of intonational pitch movements is exceptionally limited. This book explores in a series of production and perception experiments how these typologically rare phonotactic patterns interact with intonational aspects of linguistic structure. It turns out that Tashlhiyt allows for a tremendously flexible placement of tonal events. Observed intonational structures can be conceived of as different solutions to a functional dilemma: The requirement to realise meaningful pitch movements in certain positions and the extent to which segments lend themselves to a clear manifestation of these pitch movements.
Digital health and medical informatics have grown in importance in recent years and have now become central to the provision of effective healthcare around the world. This book presents the proceedings of the 30th Medical Informatics Europe conference (MIE). The MIE conferences have been hosted by the European Federation for Medical Informatics (EFMI) since the 1970s; this edition was due to be held in Geneva, Switzerland in April 2020, but as a result of measures to prevent the spread of the Covid-19 pandemic, the conference itself had to be cancelled. Nevertheless, because this collection of papers offers a wealth of knowledge and experience across the full spectrum of digital health and medicine, it...