Statistical relational AI (StaRAI) studies the integration of reasoning under uncertainty with reasoning about individuals and relations. The representations used are often called relational probabilistic models. Lifted inference exploits the structure inherent in these models, either in the way they are expressed or by extracting structure from observations. This book covers recent significant advances in the area of lifted inference, providing a unifying introduction to this very active field. After providing the necessary background on probabilistic graphical models, relational probabilistic models, and learning within them, the book turns to lifted inference, first covering exact inference and then approximate inference. In addition, the book considers the theory of liftability and acting in relational domains, which connects learning and reasoning in relational domains.
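The gain from lifting can be sketched with a toy example that is not taken from the book: suppose n exchangeable individuals each smoke independently with probability p, and we want the probability that at least one smokes. Grounded inference enumerates every possible world; lifted inference exploits the symmetry among individuals to get a closed form. The names and numbers below are purely illustrative:

```python
from itertools import product

def grounded(n, p):
    # Propositional (grounded) inference: enumerate all 2**n worlds and
    # sum the probability mass of those where at least one individual smokes.
    total = 0.0
    for world in product([True, False], repeat=n):
        weight = 1.0
        for smokes in world:
            weight *= p if smokes else 1 - p
        if any(world):
            total += weight
    return total

def lifted(n, p):
    # Lifted inference: the n individuals are interchangeable, so the
    # answer has a closed form that never touches individual worlds.
    return 1 - (1 - p) ** n

# Both agree, but `lifted` is O(1) while `grounded` is O(2**n).
assert abs(grounded(10, 0.2) - lifted(10, 0.2)) < 1e-9
```

Real lifted-inference algorithms generalize this idea, detecting and exploiting such symmetries automatically in far richer models.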
This book constitutes the refereed proceedings of the 15th Conference on Artificial Intelligence in Medicine, AIME 2015, held in Pavia, Italy, in June 2015. The 19 revised full and 24 short papers presented were carefully reviewed and selected from 99 submissions. The papers are organized in the following topical sections: process mining and phenotyping; data mining and machine learning; temporal data mining; uncertainty and Bayesian networks; text mining; prediction in clinical practice; and knowledge representation and guidelines.
An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them as well as cope with uncertainty. Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This book examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations. The book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models and weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
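The distribution semantics underlying ProbLog can be sketched in a deliberately naive way: enumerate every truth assignment to a set of independent probabilistic facts and sum the mass of the worlds in which a query succeeds. The facts, probabilities, and query below are hypothetical toy values, not an excerpt from the book or the ProbLog system:

```python
from itertools import product

# Hypothetical probabilistic facts, in ProbLog style:
#   0.6::edge(a,b).  0.3::edge(b,c).
facts = {"edge_ab": 0.6, "edge_bc": 0.3}

def query_prob(query, facts):
    # Naive distribution semantics: enumerate every truth assignment to
    # the probabilistic facts and sum the probability of the worlds in
    # which the query succeeds.
    names = list(facts)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        world = dict(zip(names, bits))
        weight = 1.0
        for name, holds in world.items():
            weight *= facts[name] if holds else 1 - facts[name]
        if query(world):
            total += weight
    return total

# path(a,c) holds only when both edges are present in this toy graph.
p_path = query_prob(lambda w: w["edge_ab"] and w["edge_bc"], facts)
# p_path is approximately 0.18 (= 0.6 * 0.3)
```

Actual ProbLog inference avoids this exponential enumeration via knowledge compilation, but the possible-worlds view above is the semantics it computes.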
This book constitutes the thoroughly refereed post-conference proceedings of the 26th International Conference on Inductive Logic Programming, ILP 2016, held in London, UK, in September 2016. The 10 full papers presented were carefully reviewed and selected from 29 submissions. The papers well represent the current breadth of ILP research topics, such as predicate invention; graph-based learning; spatial learning; logical foundations; statistical relational learning; probabilistic ILP; implementation and scalability; and applications in robotics, cyber security, and games.
This book constitutes the thoroughly refereed post-conference proceedings of the 17th International Conference on Inductive Logic Programming, ILP 2007, held in Corvallis, OR, USA, in June 2007 in conjunction with ICML 2007, the International Conference on Machine Learning. The 15 revised full papers and 11 revised short papers presented together with 2 invited lectures were carefully reviewed and selected from 38 initial submissions. The papers present original results on all aspects of learning in logic, as well as multi-relational learning and data mining, statistical relational learning, graph and tree mining, relational reinforcement learning, and learning in other non-propositional knowledge representation frameworks. Thus all current topics in inductive logic programming, ranging from theoretical and methodological issues to advanced applications in various areas, are covered.
Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide high-quality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, Correlated Agreement, and the settings where each of them would be suitable. As an alternative, we also consider reputation mechanisms. We complement the game-theoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.
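The proper scoring rules mentioned above can be illustrated with the classic quadratic (Brier) rule for a binary event; the belief values below are hypothetical and the sketch is not drawn from the book:

```python
def brier_reward(q, outcome):
    # Quadratic (Brier) scoring rule for a binary event: the reward is
    # highest when the reported probability q matches what happened
    # (outcome is 1 if the event occurred, 0 otherwise).
    return 1.0 - (q - outcome) ** 2

def expected_reward(p, q):
    # Expected reward of reporting q when the agent's true belief is p.
    return p * brier_reward(q, 1) + (1 - p) * brier_reward(q, 0)

# Properness: with a true belief of 0.7, truthful reporting weakly beats
# every misreport, so a rational agent has no incentive to lie.
p = 0.7
truthful = expected_reward(p, p)
assert all(truthful >= expected_reward(p, q) for q in (0.0, 0.3, 0.5, 0.9, 1.0))
```

This incentive property is exactly what "proper" means, and it is the building block that prediction markets and peer-prediction mechanisms extend to settings where the ground truth is never observed.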
This three-volume set LNAI 8724, 8725 and 8726 constitutes the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: ECML PKDD 2014, held in Nancy, France, in September 2014. The 115 revised research papers presented together with 13 demo track papers, 10 nectar track papers, 8 PhD track papers, and 9 invited talks were carefully reviewed and selected from 550 submissions. The papers cover the latest high-quality interdisciplinary research results in all areas related to machine learning and knowledge discovery in databases.
The ubiquitous challenge of learning and decision-making from rank data arises in situations where intelligent systems collect preference and behavior data from humans, learn from the data, and then use it to help humans make efficient, effective, and timely decisions. Often, such data are represented by rankings. This book surveys recent progress toward addressing the challenge from the perspectives of statistics, computation, and socio-economics. We will cover classical statistical models for rank data, including random utility models, distance-based models, and mixture models. We will discuss and compare classical and state-of-the-art algorithms, such as algorithms based on M...
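One of the random utility models mentioned above, the Plackett-Luce model, is simple enough to sketch: a ranking is built top-down, with each position filled by choosing among the remaining items in proportion to the exponential of a per-item score. The item names and scores below are hypothetical, and the sketch is illustrative rather than taken from the book:

```python
import math
from itertools import permutations

def plackett_luce_prob(ranking, theta):
    # Probability of a full ranking (best to worst) under Plackett-Luce:
    # at each step, the next item is chosen among those remaining with
    # probability proportional to exp(theta[item]).
    prob = 1.0
    remaining = list(ranking)
    for item in ranking:
        denom = sum(math.exp(theta[j]) for j in remaining)
        prob *= math.exp(theta[item]) / denom
        remaining.remove(item)
    return prob

# Hypothetical item scores; the probabilities over all 3! rankings sum to 1.
theta = {"a": 1.0, "b": 0.0, "c": -1.0}
total = sum(plackett_luce_prob(list(r), theta) for r in permutations(theta))
assert abs(total - 1.0) < 1e-9
```

Fitting theta to observed rankings (e.g., by maximum likelihood) is what the estimation algorithms surveyed in the book are concerned with.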
This comprehensive encyclopedia, in A-Z format, provides easy access to relevant information for those seeking entry into any aspect of the broad field of Machine Learning. Most of the entries in this preeminent work include useful literature references.
Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical sy...
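The neural message-passing idea mentioned above reduces, in its simplest form, to each node aggregating its neighbors' feature vectors and passing the result through a shared learned transformation. The following is a minimal NumPy sketch with made-up graph and weights, not an implementation from the book:

```python
import numpy as np

def message_passing_layer(adj, h, w):
    # One round of neural message passing: each node averages its
    # neighbors' feature vectors, adds its own features, and applies a
    # shared linear map followed by a ReLU nonlinearity.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    messages = (adj @ h) / deg           # mean over neighbors
    return np.maximum(0.0, (h + messages) @ w)

# Toy triangle graph with 2-d node features (illustrative values only).
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
h = np.eye(3, 2)
w = np.eye(2)
out = message_passing_layer(adj, h, w)
assert out.shape == (3, 2)
```

Stacking several such layers, with learned weight matrices in place of the identity `w`, lets information propagate across multi-hop neighborhoods, which is the mechanism behind graph convolutional networks and their relatives.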