This lively collection of essays examines statistical ideas with an ironic eye for their essence and for what their history can tell us about current disputes. The topics range from 17th-century medicine and the circulation of blood, to the cause of the Great Depression, to the determinations of the shape of the Earth and the speed of light.
This magnificent book is the first comprehensive history of statistics from its beginnings around 1700 to its emergence as a distinct and mature discipline around 1900. Stephen M. Stigler shows how statistics arose from the interplay of mathematical concepts and the needs of several applied sciences including astronomy, geodesy, experimental psychology, genetics, and sociology. He addresses many intriguing questions: How did scientists learn to combine measurements made under different conditions? And how were they led to use probability theory to measure the accuracy of the result? Why were statistical methods used successfully in astronomy long before they began to play a significant role ...
What gives statistics its unity as a science? Stephen Stigler sets forth the seven foundational ideas of statistics—a scientific discipline related to but distinct from mathematics and computer science. Even the most basic idea—aggregation, exemplified by averaging—is counterintuitive. It allows one to gain information by discarding information, namely, the individuality of the observations. Stigler’s second pillar, information measurement, challenges the importance of “big data” by noting that observations are not all equally important: the amount of information in a data set is often proportional to only the square root of the number of observations, not the absolute number. Th...
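The square-root point in that blurb can be illustrated numerically. Below is a minimal Python sketch (not from the book; the sample sizes, distribution, and simulation setup are arbitrary choices for illustration) showing that the standard error of a sample mean shrinks like 1/sqrt(n), so quadrupling the amount of data only halves the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustration: the precision of an average grows only like sqrt(n).
# For each sample size, simulate many samples, average each one, and
# measure how much those averages scatter (the empirical standard error).
for n in [100, 400, 1600]:
    means = rng.normal(loc=0.0, scale=1.0, size=(10_000, n)).mean(axis=1)
    se = means.std()
    print(f"n = {n:5d}  empirical SE = {se:.4f}  theory 1/sqrt(n) = {1 / np.sqrt(n):.4f}")
```

Going from n = 100 to n = 400 roughly halves the standard error, consistent with the square-root relation the blurb alludes to.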
"In the Winter Quarter of the academic year 1984-1985, Raj Bahadur gave a series of lectures on estimation theory at the University of Chicago"--Page i.
"In 1994, historian Stephen Stigler placed a mail-order purchase for a rare bit of ephemera from a French bookstore: a lottery Almanac from 1834. It contained the winning numbers for the entire span of the French Loterie from 1758 onward, including details on prizes actually awarded-difficult data to come by-as well as hand-written notes by an early owner. Stigler was fascinated with what he saw about how the Loterie was carried out, who bought tickets, and what size bets they placed, and so in the decades that followed he amassed booklets, legal documents, advertising bills, notices, contracts, and tickets. His own collection and extensive additional research helped him piece together the L...
A daily glass of wine prolongs life—yet alcohol can cause life-threatening cancer. Some say raising the minimum wage will decrease inequality while others say it increases unemployment. Scientists once confidently claimed that hormone replacement therapy reduced the risk of heart disease but now they equally confidently claim it raises that risk. What should we make of this endless barrage of conflicting claims? Observation and Experiment is an introduction to causal inference by one of the field’s leading scholars. An award-winning professor at Wharton, Paul Rosenbaum explains key concepts and methods through lively examples that make abstract principles accessible. He draws his example...
A traditional approach to developing multivariate statistical theory is algebraic. Sets of observations are represented by matrices, linear combinations are formed from these matrices by multiplying them by coefficient matrices, and useful statistics are found by imposing various criteria of optimization on these combinations. Matrix algebra is the vehicle for these calculations. A second approach is computational. Since many users find that they do not need to know the mathematical basis of the techniques as long as they have a way to transform data into results, the computation can be done by a package of computer programs that somebody else has written. An approach from this perspective e...
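As a concrete, hypothetical illustration of the algebraic approach sketched above, the short NumPy example below represents observations as an n-by-p matrix, forms a linear combination by post-multiplying by a coefficient vector, and chooses those coefficients by an optimization criterion (here, maximizing the variance of the combination, the classic principal-component criterion). The data and variable names are invented for illustration and are not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observations as an n-by-p matrix (rows = cases, columns = variables).
X = rng.normal(size=(200, 4))
Xc = X - X.mean(axis=0)            # center each variable

# Optimization criterion: choose unit-length coefficients w maximizing
# the variance of the linear combination Xc @ w.  That maximizer is the
# leading eigenvector of the sample covariance matrix.
S = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)
w = eigvecs[:, -1]                 # coefficients for the best combination

scores = Xc @ w                    # the linear combination itself
print("coefficient vector:     ", np.round(w, 3))
print("variance of combination:", round(scores.var(ddof=1), 3))
print("largest eigenvalue:     ", round(eigvals[-1], 3))
```

The variance of the resulting combination matches the largest eigenvalue of the covariance matrix, which is exactly the kind of "useful statistic found by imposing a criterion of optimization" that the algebraic approach described above produces.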
Begins with a study of the history of statistics and shows how the evolution of modern statistics has been inextricably bound up with the knowledge and power of governments.
Classical statistical theory—hypothesis testing, estimation, and the design of experiments and sample surveys—is mainly the creation of two men: Ronald A. Fisher (1890-1962) and Jerzy Neyman (1894-1981). Their contributions sometimes complemented each other, sometimes occurred in parallel, and, particularly at later stages, often were in strong opposition. The two men would not be pleased to see their names linked in this way, since throughout most of their working lives they detested each other. Nevertheless, they worked on the same problems, and through their combined efforts created a new discipline. This new book by E.L. Lehmann, himself a student of Neyman’s, explores the relationship between Neyman and Fisher, as well as their interactions with other influential statisticians, and the statistical history they helped create together. Lehmann uses direct correspondence and original papers to recreate an historical account of the creation of the Neyman-Pearson Theory as well as Fisher’s dissent, and other important statistical theories.
In 1865, Gregor Mendel presented "Experiments in Plant-Hybridization," the results of his eight-year study of the principles of inheritance through experimentation with pea plants. Overlooked in its day, Mendel's work would later become the foundation of modern genetics. Did his pioneering research follow the rigors of real scientific inquiry, or was Mendel's data too good to be true—the product of doctored statistics? In Ending the Mendel-Fisher Controversy, leading experts present their conclusions on the legendary controversy surrounding the challenge to Mendel's findings by British statistician and biologist R. A. Fisher. In his 1936 paper "Has Mendel's Work Been Rediscovered?" Fisher ...