During the past two or three decades, research in cognitive science and psychology has yielded an improved understanding of the fundamental psychological nature of the knowledge and cognitive skills that psychological testing attempts to measure. These theories have matured enough that it is reasonable to look to them to provide a sound theoretical foundation for assessment, particularly for the content of assessments. This fact, combined with widespread discontent over current testing practices, has inspired efforts to bring testing and cognitive theory together to create a new theoretical framework for psychological testing -- a framework developed for diagnosing learners' differ...
The techniques of analytic mapping and of geographic information systems (GIS) have become increasingly important tools for analysing census, crime, environmental and consumer data. The authors discuss data access, transformation and preparation issues, and how to select the appropriate analytic graphics techniques.
The number of longitudinal data archives is growing almost daily, yet no resource exists to help understand the relationship between research questions and archival data--until now. Drawing on a single project, the Lewis Terman Study at Stanford University, the authors illustrate how to use the model-fitting process to select and fit the right data set to a particular research problem. Employing a step-by-step approach, this handy volume covers the measurement of historical influences, the adaptation of existing coding schemes to temporal patterns that are characteristic of life records, and the recasting of archival materials to illuminate contemporary questions that the data were not designed to answer.
Through the use of actual research investigations that have appeared in recent social science journals, Gibbons shows the reader the specific methodology and logical rationale for many of the best-known and most frequently used nonparametric methods, applicable to both small and large samples. The methods are organized according to the type of sample structure that produced the data to be analyzed, and the inferences covered are limited to location tests, such as the sign test, the Mann-Whitney-Wilcoxon test, the Kruskal-Wallis test, and Friedman's test. The formal introduction of each test is followed by a data example, calculated first by hand and then by computer.
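As the blurb notes, these location tests can be calculated by hand before turning to a computer. As a minimal sketch (not taken from Gibbons's book; the paired scores are invented for illustration), the exact two-sided sign test can be computed in a few lines of pure Python:

```python
from math import comb

def sign_test(x, y):
    """Exact two-sided sign test for paired samples (H0: median difference = 0)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # ties are dropped
    n = len(diffs)
    k = sum(d > 0 for d in diffs)            # count of positive differences
    # exact binomial tail probabilities under H0: P(positive) = 0.5
    cdf = lambda m: sum(comb(n, i) for i in range(m + 1)) / 2 ** n
    p = 2 * min(cdf(k), 1 - cdf(k - 1))      # two-sided exact p-value
    return k, n, min(p, 1.0)

# hypothetical paired scores, made up for this example
before = [5, 6, 7, 8, 9, 10, 11, 12]
after  = [1, 2, 3, 4, 5,  6,  7, 13]
print(sign_test(before, after))              # 7 of 8 differences are positive
```

The same hand-then-computer pattern carries over to the rank-based tests the book covers, which replace the simple sign count with rank sums.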
"Since ... writing the first edition of this monograph in 1990, ... the 1990s have seen an increasing focus on more sophisticated approaches to dealing with missing data in both cross-sectional and longitudinal research. Software applicable to longitudinal research has also improved, and more evidence for the rapid pace of change in longitudinal analysis can be found in the dozen or so books written and edited about longitudinal research design and data analysis published in the 1990s and early in the present millennium. The organization of this monograph remains the same as in the first edition. ... There is much less said about the application of traditional methods of analysis to longitudinal data, and more focus on analytical methods specifically designed for longitudinal data, including time series analysis, linear panel analysis, multilevel and latent growth curve modeling, and event history analysis."--Preface.
The International Conference on Cognitive Modeling brings together researchers who develop computational models to explain and predict cognitive data. The core theme of the 2004 conference was "Integrating Computational Models," encompassing an integration of diverse data through models of coherent phenomena; integration across modeling approaches; and integration of teaching and modeling. This text presents the proceedings of that conference. The International Conference on Cognitive Modeling 2004 sought to grow the discipline of computational cognitive modeling by providing a sophisticated modeling audience for cutting-edge researchers, in addition to offering a forum for integrating insights across alternative modeling approaches in both basic research and applied settings, and a venue for planning the future growth of the discipline. The meeting included a careful peer-review process of 6-page paper submissions; poster-abstracts to include late-breaking work in the area; prizes for best papers; a doctoral consortium; and competitive modeling symposia that compare and contrast different approaches to the same phenomena.
Explaining the techniques needed for exploring problems that compromise a regression analysis, and for determining whether certain assumptions appear reasonable, this book covers such topics as the problem of collinearity in multiple regression, non-normality of errors, and discrete data.
Creativity, and design creativity in particular, is being recognized as playing an increasing role in the social and economic wellbeing of a society. As a consequence, creativity is becoming a focus of research. However, much of this burgeoning research is distributed across multiple disciplines that normally do not intersect, and researchers in one discipline are often unaware of related research in another. This volume brings together contributions from design science, computer science, cognitive science and neuroscience on studying visual and spatial reasoning applicable to design creativity. The book is the result of a unique NSF-funded workshop held in Aix-en-Provence, France. The aim of the workshop and the resulting volume was to expose researchers in disparate disciplines to one another's research, research methods and research results within the context of design creativity. Fifteen of the papers presented and discussed at the workshop are contained in this volume. The contributors come from Germany, Israel, the Netherlands, Poland, Singapore, the UK and the USA, indicating the international spread of the research presented in this volume.
A unique, practical manual for identifying and analyzing item bias in standardized tests. Osterlind discusses five strategies for detecting bias: analysis of variance, transformed item difficulties, chi square, item characteristic curve, and distractor response. He covers specific hypotheses under test for each technique, as well as the capabilities and limitations of each strategy.
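Of the five strategies Osterlind discusses, the chi-square approach compares response patterns across examinee groups. As a minimal, hypothetical illustration (not the book's exact procedure, and the counts are invented), a Pearson chi-square on a 2x2 group-by-correctness table for a single item flags items where pass rates differ sharply by group:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    rows = examinee group, columns = item correct/incorrect."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # expected counts under independence of group and response
    exp = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
           [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    obs = [[a, b], [c, d]]
    return sum((obs[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(2))

# hypothetical item: 30/40 correct in group 1 vs. 10/40 in group 2
print(chi_square_2x2([[30, 10], [10, 30]]))
```

A raw group difference is not by itself evidence of bias, which is why the book's techniques condition on ability, for example by matching examinees on total score before comparing groups.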