This book summarizes the methods and concepts of Statistical Implicative Analysis (SIA), created by Régis Gras in the 1980s to study, in a new way, the behavioural responses of French pupils to mathematics tests. As a multidimensional, non-symmetrical data analysis method, SIA crosses a set of subjects or objects with a set of variables, and it effectively complements traditional correlational and psychometric methods. Through its various extensions, SIA is today presented as a broad Artificial Intelligence method aimed at extracting trends and possible causalities, in the form of rules, from a set of variables. It is based on the unlikeliness of the existence of these relationships, i.e. on...
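The "unlikeliness" criterion mentioned here is usually made concrete through an intensity of implication. The short Python sketch below is only an illustration of that idea, not the book's exact formulation: it compares the observed number of counterexamples to a candidate rule a => b with a Poisson model of what chance alone would produce. The function name and the pupil counts in the example are invented for the illustration.

# Illustrative sketch (not the book's exact measure): an SIA-style
# "intensity of implication" for a rule a -> b over n subjects.
# The number of counterexamples (subjects satisfying a but not b) is
# compared with a Poisson law of mean n_a * (n - n_b) / n; the rule is
# implicative when the observed count is unlikely to be that small.
import math

def implication_intensity(n: int, n_a: int, n_b: int, n_a_not_b: int) -> float:
    """Return 1 - P(N <= n_a_not_b) for N ~ Poisson(n_a * (n - n_b) / n)."""
    lam = n_a * (n - n_b) / n
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k)
              for k in range(n_a_not_b + 1))
    return 1.0 - cdf

# Example: 200 pupils, 80 succeed on item a, 120 on item b, and only 10
# succeed on a while failing b -> strong tendency a => b (value close to 1).
print(round(implication_intensity(200, 80, 120, 10), 3))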
This book presents a fascinating and self-contained account of "recruitment learning", a model and theory of fast learning in the neocortex. In contrast to the more common attractor network paradigm for long- and short-term memory, recruitment learning focuses on one-shot learning or "chunking" of arbitrary feature conjunctions that co-occur in single presentations. The book starts with a comprehensive review of the historic background of recruitment learning, putting special emphasis on the ground-breaking work of D. O. Hebb, W. A. Wickelgren, J. A. Feldman, L. G. Valiant, and L. Shastri. Afterwards, a thorough mathematical analysis of the model is presented which shows that recruitment is indeed a...
Mechatronic design processes have become shorter and more parallelized, driven by growing time-to-market pressure. If dependability analyses are to influence the design, methods that enable quantitative analysis in early design stages are required. Because only limited data are available in this phase, the level of uncertainty is high and explicit modeling of these uncertainties becomes necessary. This work introduces new uncertainty-preserving dependability methods for early design stages. These include the propagation of uncertainty through dependability models, the reuse of data from similar components in the analyses, and the integration of uncertain dependability predictions into an optimization framework. It is shown that Dempster-Shafer theory can be an alternative to probability theory for dependability predictions in early design stages: expert estimates can be represented, input uncertainty can be propagated through the system, and prediction uncertainty can be measured and interpreted. The resulting coherent methodology can be applied to represent the uncertainty in dependability models.
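For readers unfamiliar with Dempster-Shafer theory, the minimal Python sketch below shows its core operation, Dempster's rule of combination, and how an expert estimate can leave part of the belief mass uncommitted. The toy frame of discernment and the two expert mass functions are invented for illustration; they are not the dependability models developed in this work.

# Minimal sketch of Dempster's rule of combination.  Focal sets are
# frozensets over a frame of discernment; masses are plain floats.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions (dict: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb              # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two expert estimates about a component's state {ok, failed};
# the mass on the whole frame represents "don't know".
frame = frozenset({"ok", "failed"})
m_expert1 = {frozenset({"ok"}): 0.6, frame: 0.4}
m_expert2 = {frozenset({"ok"}): 0.5, frozenset({"failed"}): 0.2, frame: 0.3}
print(combine(m_expert1, m_expert2))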
Artists and creators in interactive art and interaction design have long been conducting research on human-machine interaction. Through artistic, conceptual, social and critical projects, they have shown how interactive digital processes are essential elements for their artistic creations. Resulting prototypes have often reached beyond the art arena into areas such as mobile computing, intelligent ambiences, intelligent architecture, fashionable technologies, ubiquitous computing and pervasive gaming. Many of the early artist-developed interactive technologies have influenced new design practices, products and services of today's media society. This book brings together key theoreticians and practitioners of this field. It shows how historically relevant the issues of interaction and interface design are, as they can be analyzed not only from an engineering point of view but also from a social, artistic and conceptual, and even commercial angle.
This book provides a comprehensive introduction to both conceptual and rigorous brain and cognition modelling. It is devoted to understanding, predicting and controlling the fundamental mechanisms of brain functioning. The reader is provided with a scientific tool enabling him or her to perform competitive research in brain and cognition modelling. This is a graduate-level monographic textbook.
One of the main difficulties of applying an evolutionary algorithm (or, for that matter, any heuristic method) to a given problem is deciding on an appropriate set of parameter values. Typically these are specified before the algorithm is run and include population size, selection rate and operator probabilities, not to mention the representation and the operators themselves. This book gives the reader a solid perspective on the different approaches that have been proposed to automate the control of these parameters, as well as an understanding of their interactions. The book covers a broad area of evolutionary computation, including genetic algorithms, evolution strategies, genetic programming and estimation of distribution algorithms, and also discusses the issues of specific parameters used in parallel implementations, multi-objective evolutionary algorithms, and practical considerations for real-world applications. It is a recommended read for researchers and practitioners of evolutionary computation and heuristic methods.
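As a concrete, if simplified, taste of on-the-fly parameter control, the sketch below implements Rechenberg's classic 1/5 success rule, which adapts the mutation step size of a (1+1) evolution strategy during the run. The test function, constants and adaptation schedule are illustrative choices only; the book surveys far richer control and tuning mechanisms.

# (1+1) evolution strategy on the sphere function with the 1/5 success
# rule: widen the search if more than 1/5 of recent mutations succeed,
# narrow it if fewer do.
import random

def sphere(x):
    return sum(v * v for v in x)

def one_plus_one_es(dim=10, sigma=1.0, generations=2000, c=0.85):
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx, successes = sphere(x), 0
    for g in range(1, generations + 1):
        y = [v + random.gauss(0, sigma) for v in x]
        fy = sphere(y)
        if fy <= fx:                      # offspring replaces parent
            x, fx = y, fy
            successes += 1
        if g % 20 == 0:                   # adapt sigma every 20 generations
            rate = successes / 20
            if rate > 0.2:
                sigma /= c                # many successes: widen the search
            elif rate < 0.2:
                sigma *= c                # few successes: narrow the search
            successes = 0
    return fx, sigma

print(one_plus_one_es())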
Perspectives on Ontology Learning brings together researchers and practitioners from different communities (natural language processing, machine learning, and the semantic web) in order to give an interdisciplinary overview of recent advances in ontology learning. Starting with a comprehensive introduction to the theoretical foundations of ontology learning methods, the edited volume presents the state of the art in automated knowledge acquisition and maintenance. It outlines future challenges in this area with a special focus on technologies suitable for pushing the boundaries beyond the creation of simple taxonomical structures, as well as on problems specifically related to knowledge modeling and representation using the Web Ontology Language. Perspectives on Ontology Learning is designed for researchers in the field of semantic technologies and developers of knowledge-based applications. It covers various aspects of ontology learning, including ontology quality, user interaction, scalability, knowledge acquisition from heterogeneous sources, and the integration with ontology engineering methodologies.
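One classic ontology-learning step referred to above is the extraction of taxonomic (is-a) candidates from text, for instance with Hearst-style lexical patterns. The toy Python sketch below mines a single "such as" pattern; real systems combine many patterns with parsing and statistical filtering, so this is only an assumption-laden illustration, not a description of the methods covered in the volume.

# Mine simple hyponym/hypernym candidates from one Hearst-style pattern:
# "<hypernym> such as <hyponym>, <hyponym> and <hyponym>".
import re

PATTERN = re.compile(
    r"(?P<hyper>\w+(?: \w+)?) such as (?P<hypos>\w+(?:, \w+)*(?: and \w+)?)")

def hearst_candidates(sentence: str):
    m = PATTERN.search(sentence)
    if not m:
        return []
    hyper = m.group("hyper").split()[-1]            # keep the head noun
    hypos = re.split(r", | and ", m.group("hypos"))
    return [(h, hyper) for h in hypos if h]

print(hearst_candidates(
    "Semantic web languages such as RDF, RDFS and OWL describe ontologies."))
# -> [('RDF', 'languages'), ('RDFS', 'languages'), ('OWL', 'languages')]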
In this book, various aspects of the cognitive and emotional behaviour of virtual humans (VH) are described. Chapter 1 presents a state-of-the-art introduction to VH and the associated research. Chapter 2 describes cognitive and emotional processes. A comprehensive context model for multi-party interactions with VH is given in the next chapter. Modelling the socializing of groups of virtual humans is equally important and is discussed in Chapter 4. The automatic modelling of expressions for VH is described in Chapter 5. The last chapter gives a case study of an intelligent kiosk avatar and its usability. This book gives examples of some advances that enable VH to behave intelligently, and it provides an overview of these research problems as well as of some problems that remain unsolved.
This volume comprises papers dedicated to data science and the extraction of knowledge from many types of data: structural, quantitative, or statistical approaches for the analysis of data; advances in classification, clustering and pattern recognition methods; strategies for modeling complex data and mining large data sets; applications of advanced methods in specific domains of practice. The contributions offer interesting applications to various disciplines such as psychology, biology, medical and health sciences; economics, marketing, banking and finance; engineering; geography and geology; archeology, sociology, educational sciences, linguistics and musicology; and library science. The book contains the selected and peer-reviewed papers presented during the European Conference on Data Analysis (ECDA 2013), which was jointly held by the German Classification Society (GfKl) and the French-speaking Classification Society (SFC) in July 2013 at the University of Luxembourg.