The position taken in this collection of pedagogically written essays is that conjugate gradient algorithms and finite element methods complement each other extremely well. By combining them, practitioners have been able to solve complicated direct and inverse multidimensional problems modeled by ordinary or partial differential equations and inequalities, not necessarily linear, including optimal control and optimal design problems. The aim of this book is to present both methods in the context of complicated problems modeled by linear and nonlinear partial differential equations, and to provide an in-depth discussion of their implementation. The authors show that conjugate gradient methods and finite element methods apply to the solution of real-life problems. The book addresses graduate students as well as experts in scientific computing.
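Since this blurb centers on pairing finite elements with conjugate gradients, a minimal sketch may help make the pairing concrete. The example below assembles the P1 finite element system for -u'' = 1 on (0, 1) with homogeneous Dirichlet conditions and solves it with SciPy's conjugate gradient routine; the mesh size and variable names are our own illustrative choices, not anything prescribed by the book.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# P1 finite elements for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# on a uniform mesh with n interior nodes of width h.
n = 99
h = 1.0 / (n + 1)

# Assembled stiffness matrix (tridiagonal) and load vector.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h
b = np.full(n, h)            # integral of the constant source against each hat function

u, info = cg(A, b)           # conjugate gradient solve of the FEM system
assert info == 0             # info == 0 means CG converged

x = np.linspace(h, 1 - h, n)                  # interior node coordinates
print(np.max(np.abs(u - 0.5 * x * (1 - x))))  # compare with the exact solution x(1-x)/2
```

In 1D with a constant source, the P1 Galerkin solution is exact at the nodes, so the printed error reflects only the CG stopping tolerance.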
Proceedings of the AMS-IMS-SIAM Summer Research Conference held at the University of Washington, July 1995.
Preconditioning and the Conjugate Gradient Method in the Context of Solving PDEs is about the interplay between modeling, analysis, discretization, matrix computation, and model reduction. The authors link PDE analysis, functional analysis, and calculus of variations with matrix iterative computation using Krylov subspace methods and address the challenges that arise during formulation of the mathematical model through to efficient numerical solution of the algebraic problem. The book's central concept, preconditioning of the conjugate gradient method, is traditionally developed algebraically using the preconditioned finite-dimensional algebraic system. In this text, however, preconditioning is connected to the PDE analysis, and the infinite-dimensional formulation of the conjugate gradient method and its discretization and preconditioning are linked together. This text challenges commonly held views, addresses widespread misunderstandings, and formulates thought-provoking open questions for further research.
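For readers who have only seen the unpreconditioned algorithm, the following sketch shows where the preconditioner enters the classical iteration. It is a generic textbook formulation of PCG, not the infinite-dimensional treatment this book develops; the Jacobi preconditioner and the helper name M_inv are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive definite A.

    M_inv(r) applies the inverse of the preconditioner M to a residual.
    """
    x = np.zeros(b.shape[0]) if x0 is None else x0.copy()
    r = b - A @ x                  # residual
    z = M_inv(r)                   # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        beta = rz_new / rz         # conjugacy coefficient
        p = z + beta * p
        rz = rz_new
    return x

# Example: 1D Poisson matrix with Jacobi (diagonal) preconditioning.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, M_inv=lambda r: r / d)
print(np.linalg.norm(A @ x - b))
```

With M equal to the identity this reduces to plain CG; the whole point of the book is how the choice of M should be informed by the underlying PDE rather than by the algebraic system alone.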
An intuitive approach to machine learning covering key concepts, real-world applications, and practical Python coding exercises.
Optimization Theory and Methods can be used as a textbook for an optimization course for graduates and senior undergraduates. It is the result of the author's teaching and research over the past decade. It describes optimization theory and several powerful methods. For most methods, the book discusses the motivation behind the idea, works through the derivation, establishes global and local convergence, describes the algorithmic steps, and discusses numerical performance.
Two approaches are known for solving large-scale unconstrained optimization problems: the limited-memory quasi-Newton method (and the truncated Newton method) on the one hand, and the conjugate gradient method on the other. This is the first book to detail conjugate gradient methods, showing their properties and convergence characteristics as well as their performance in solving large-scale unconstrained optimization problems and applications. Comparisons to the limited-memory and truncated Newton methods are also discussed. Topics studied in detail include: linear conjugate gradient methods, standard conjugate gradient methods, acceleration of conjugate gradient methods, hybrid schemes, modifications of the standard scheme, memoryless...
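As a taste of the nonlinear methods the book studies, here is a minimal sketch of a Fletcher-Reeves conjugate gradient iteration with a backtracking line search. It is a generic textbook variant, not any of the specific algorithms the book compares; practical codes use stronger (Wolfe-type) line searches and restart strategies, and the Rosenbrock test function below is our own illustrative choice.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    """Fletcher-Reeves nonlinear CG with an Armijo backtracking line search."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:             # safeguard: restart with steepest descent
            d = -g
        t, fx = 1.0, f(x)          # backtracking (Armijo) line search along d
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))  # should approach (1, 1)
```

The only per-iteration storage is a handful of vectors, which is exactly why conjugate gradient methods compete with limited-memory quasi-Newton methods at large scale.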
This third edition of the classic textbook in optimization has been fully revised and updated to reflect recent developments in optimization methods. It comprehensively covers modern theoretical insights in this crucial area of computing and will be required reading for analysts and operations researchers in a variety of fields. The book connects the purely analytical character of an optimization problem with the behavior of the algorithms used to solve it. The third edition also has a new co-author, Yinyu Ye of Stanford University, who has contributed substantial new material, including coverage of interior point methods.
This book details algorithms for large-scale unconstrained and bound-constrained optimization. It presents optimization techniques from the perspective of conjugate gradient algorithms, along with the methods of shortest residuals developed by the author.
The primary goal of this book is to provide a self-contained, comprehensive study of the main first-order methods that are frequently used in solving large-scale problems. First-order methods exploit information on values and gradients/subgradients (but not Hessians) of the functions composing the model under consideration. With the increase in the number of applications that can be modeled as large or even huge-scale optimization problems, there has been a revived interest in simple methods that require low iteration cost as well as low memory storage. The author has gathered, reorganized, and synthesized in a unified manner many results that are currently scattered throughout the literature, many of which cannot typically be found in optimization books. First-Order Methods in Optimization offers a comprehensive study of first-order methods with theoretical foundations; provides plentiful examples and illustrations; emphasizes rates of convergence and complexity analysis of the main first-order methods used to solve large-scale problems; and covers both variable and functional decomposition methods.
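To illustrate the flavor of the methods the book analyzes, here is a minimal proximal gradient (ISTA) sketch for the l1-regularized least-squares problem, a standard example in this literature. The problem sizes, regularization weight, and function name are our own illustrative assumptions, and the step size uses the simplest global Lipschitz constant rather than any adaptive rule.

```python
import numpy as np

def ista(A, b, lam, num_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iter):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold prox
    return x

# Example: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
print(ista(A, b, lam=0.1)[:8].round(3))    # leading entries near 1, rest near 0
```

Each iteration costs only matrix-vector products plus a closed-form prox, which is exactly the low per-iteration cost and memory footprint the blurb refers to.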