For a long time, computer scientists have distinguished between fast and slow algorithms. Fast (or good) algorithms are the algorithms that run in polynomial time, which means that the number of steps required for the algorithm to solve a problem is bounded by some polynomial in the length of the input. All other algorithms are slow (or bad). The running time of slow algorithms is usually exponential. This book is about bad algorithms. There are several reasons why we are interested in exponential time algorithms. Most of us believe that there are many natural problems which cannot be solved by polynomial time algorithms. The most famous and oldest family of hard problems is the family of NP...
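To make the fast/slow divide concrete, here is a minimal, hypothetical sketch (not taken from the book) of a "bad" algorithm in this sense: a brute-force search for a maximum independent set that examines all 2^n vertex subsets, so its running time is exponential in the number of vertices n.

    from itertools import combinations

    def max_independent_set_bruteforce(n, edges):
        """Largest independent set on vertices 0..n-1, by exhaustive search.

        All 2^n vertex subsets are examined, so the running time is
        exponential in n -- a 'slow' algorithm in the sense above.
        """
        edge_set = {frozenset(e) for e in edges}
        for size in range(n, 0, -1):  # try the largest subsets first
            for subset in combinations(range(n), size):
                if all(frozenset(pair) not in edge_set
                       for pair in combinations(subset, 2)):
                    return set(subset)  # first hit is a maximum one
        return set()  # only reached for the empty graph

    # Example: on the path 0-1-2-3, {0, 2} is a maximum independent set.
    print(max_independent_set_bruteforce(4, [(0, 1), (1, 2), (2, 3)]))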
This comprehensive textbook presents a clean and coherent account of most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area. The book covers many of the recent developments of the field, including application of important separators, branching based on linear programming, Cut & Count to obtain faster algorithms on tree decompositions, algorithms based on representative families of matroids, and use of the Strong Exponential Time Hypothesis. A number of older results are revisited and explained in a modern and didactic way. The book provides a toolbox of algorithmic techniques. Part I is an overview of basic techniques, each chapter discuss...
A complete introduction to recent advances in preprocessing analysis, or kernelization, with extensive examples using a single data set.
Introduces exciting new methods for assessing algorithms for problems ranging from clustering to linear programming to neural networks.
This book studies exponential time algorithms for NP-hard problems. In this modern area, the aim is to design algorithms for combinatorially hard problems that run provably faster than a brute-force enumeration of all candidate solutions. After an introduction and survey of the field, the text focuses first on the design and especially the analysis of branching algorithms. The analysis of these algorithms relies heavily on measures of the instances, which aim to capture the structure of the instances, not merely their size. This makes them better suited to quantifying the progress an algorithm makes while solving a problem. To expand the methodology for designing exponential time algorithms, new techniques are then presented. Two of them combine treewidth-based algorithms with branching or enumeration algorithms. Another is the iterative compression technique, prominent in the design of parameterized algorithms and adapted here to the design of exponential time algorithms. This book assumes basic knowledge of algorithms and should serve anyone interested in exactly solving hard problems.
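As a rough illustration of the kind of branching algorithm whose analysis is described above (a standard textbook example, not the book's own code): branch on a maximum-degree vertex v, either discarding it or taking it and deleting its closed neighbourhood. The naive measure of an instance is its number of remaining vertices; a measure-based analysis would instead assign degree-dependent weights to vertices.

    def mis_branching(adj):
        """Size of a maximum independent set, by branching.

        adj maps each vertex to the set of its neighbours. The naive
        measure of an instance is its number of vertices; measure-based
        analyses refine this with degree-dependent vertex weights.
        """
        if not adj:
            return 0
        v = max(adj, key=lambda u: len(adj[u]))  # vertex of maximum degree
        if len(adj[v]) <= 1:
            # Max degree <= 1: components are single edges or isolated
            # vertices, so take all vertices minus one endpoint per edge.
            return len(adj) - sum(len(ns) for ns in adj.values()) // 2

        def without(graph, removed):
            return {u: ns - removed for u, ns in graph.items()
                    if u not in removed}

        # Branch 1: v is not in the solution (delete v).
        # Branch 2: v is in the solution (delete v and its neighbours).
        return max(mis_branching(without(adj, {v})),
                   1 + mis_branching(without(adj, {v} | adj[v])))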
This book constitutes the refereed proceedings of the 17th International Symposium on Algorithms and Computation, ISAAC 2006, held in Kolkata, India, in December 2006. The 73 revised full papers cover algorithms and data structures, online algorithms, approximation algorithms, computational geometry, computational complexity, optimization and biology, combinatorial optimization and quantum computing, as well as distributed computing and cryptography.
Here are the refereed proceedings of the Second International Workshop on Parameterized and Exact Computation, IWPEC 2006, held in the context of the combined conference ALGO 2006. The book presents 23 revised full papers together with 2 invited lectures. Coverage includes research in all aspects of parameterized and exact computation and complexity, including new techniques for the design and analysis of parameterized and exact algorithms, parameterized complexity theory, and more.
In this thesis we study the computational complexity of five NP-hard graph problems. It is widely accepted that, in general, NP-hard problems cannot be solved efficiently, that is, in polynomial time, owing to many unsuccessful attempts to prove the contrary. Hence, we aim to identify properties of the inputs, other than their length, that make a problem tractable or intractable. We measure these properties via parameters: mappings that assign to each input a nonnegative integer. For a given parameter k, we then attempt to design fixed-parameter algorithms, algorithms whose running time on input q is upper bounded by f(k(q)) * |q|^c, where f is a preferably slowly growing function, |q| is t...
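A standard example of a fixed-parameter algorithm of this shape (a sketch of the usual textbook treatment, not drawn from the thesis itself) is the bounded search tree for Vertex Cover: every edge must be covered by one of its endpoints, so branching on the two endpoints gives a search tree of depth at most k and a running time of the form f(k(q)) * |q|^c with f(k) = 2^k.

    def vertex_cover_fpt(edges, k):
        """Decide whether at most k vertices cover all edges.

        Classic bounded search tree: any cover contains an endpoint of
        each edge, so branch on the two endpoints of some uncovered edge.
        Recursion depth is at most k, giving O(2^k * |edges|) time,
        i.e. f(k) * |q|^c with f(k) = 2^k.
        """
        if not edges:
            return True   # every edge is covered
        if k == 0:
            return False  # edges remain, budget exhausted
        u, v = edges[0]
        # Branch 1: put u in the cover; Branch 2: put v in the cover.
        return (vertex_cover_fpt([e for e in edges if u not in e], k - 1)
                or vertex_cover_fpt([e for e in edges if v not in e], k - 1))

    # Example: a star with centre 0 has a vertex cover of size 1.
    print(vertex_cover_fpt([(0, 1), (0, 2), (0, 3)], 1))  # True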
This volume contains the papers presented at the 29th Symposium on Mathematical Foundations of Computer Science, MFCS 2004, held in Prague, Czech Republic, August 22–27, 2004. The conference was organized by the Institute for Theoretical Computer Science (ITI) and the Department of Theoretical Computer Science and Mathematical Logic (KTIML) of the Faculty of Mathematics and Physics of Charles University in Prague. It was supported in part by the European Association for Theoretical Computer Science (EATCS) and the European Research Consortium for Informatics and Mathematics (ERCIM). Traditionally, the MFCS symposia encourage high-quality research in all branches of theoretical computer science. Rangi...
This book constitutes the refereed proceedings of the 13th International Symposium on Fundamentals of Computation Theory, FCT 2001, as well as of the International Workshop on Efficient Algorithms, WEA 2001, held in Riga, Latvia, in August 2001. The 28 revised full FCT papers and 15 short papers, presented together with 6 invited contributions, as well as the 8 revised full WEA papers and 3 invited WEA contributions, have been carefully reviewed and selected. The papers address a broad variety of topics from theoretical computer science, algorithmics, and programming theory. The WEA papers deal with graph and network algorithms, flow and routing problems, scheduling and approximation algorithms, etc.