Computer vision is one of the most complex and computationally intensive problems. As with other computationally intensive problems, parallel processing has been suggested as an approach to solving problems in computer vision. Computer vision employs algorithms from a wide range of areas, such as image and signal processing, advanced mathematics, graph theory, databases, and artificial intelligence. Hence, not only are the computing requirements for solving vision problems tremendous, but they also demand computers that can efficiently solve problems exhibiting vastly different characteristics. With recent advances in VLSI design technology, Single Instruction Multiple Data (SIMD) mass...
This three-volume work presents a compendium of current and seminal papers on parallel/distributed processing offered at the 22nd International Conference on Parallel Processing, held August 16-20, 1993 in Chicago, Illinois. Topics include processor architectures; mapping algorithms to parallel systems; performance evaluations; fault diagnosis, recovery, and tolerance; cube networks; portable software; synchronization; compilers; hypercube computing; and image processing and graphics. Computer professionals in parallel processing, distributed systems, and software engineering will find this work essential to their complete computer reference library.
Includes bibliographical references and index.
This book constitutes the refereed proceedings of the 5th International Conference on Web-Age Information Management, WAIM 2004, held in Dalian, China in July 2004. The 57 revised full papers and 23 revised short and industrial papers presented together with 3 invited contributions were carefully reviewed and selected from 291 submissions. The papers are organized in topical sections on data stream processing, time series data processing, security, mobile computing, cache management, query evaluation, Web search engines, XML, Web services, classification, and data mining.
This book constitutes the refereed proceedings of the 11th Asia-Pacific Computer Systems Architecture Conference, ACSAC 2006. The book presents 60 revised full papers together with 3 invited lectures, addressing such issues as processor and network design, reconfigurable computing and operating systems, and low-level design issues in both hardware and systems. Coverage includes large and significant computer-based infrastructure projects, the challenges of stricter budgets in power dissipation, and more.
Many federal funding requests for more advanced computer resources assume implicitly that greater computing power creates opportunities for advancement in science and engineering. This has often been a good assumption. Given stringent pressures on the federal budget, the White House Office of Management and Budget (OMB) and Office of Science and Technology Policy (OSTP) are seeking an improved approach to the formulation and review of requests from the agencies for new computing funds. This book examines, for four illustrative fields of science and engineering, how one can start with an understanding of their major challenges and discern how progress against those challenges depends on high-end capability computing (HECC). The four fields covered are: atmospheric science, astrophysics, chemical separations, and evolutionary biology. This book finds that all four of these fields are critically dependent on HECC, but in different ways. The book characterizes the components that combine to enable new advances in computational science and engineering and identifies aspects that apply to multiple fields.
Workflows may be defined as abstractions used to model the coherent flow of activities in the context of an in silico scientific experiment. They are employed in many domains of science such as bioinformatics, astronomy, and engineering. Such workflows usually present a considerable number of activities and activations (i.e., tasks associated with activities) and may need a long time for execution. Due to the continuous need to store and process data efficiently (making them data-intensive workflows), high-performance computing environments allied to parallelization techniques are used to run these workflows. At the beginning of the 2010s, cloud technologies emerged as a promising environmen...
Details a real-world product that applies a cutting-edge multi-core architecture. Increasingly demanding modern applications—such as those used in telecommunications networking and real-time processing of audio, video, and multimedia streams—require multiple processors to achieve computational performance at the rate of a few giga-operations per second. This necessity for speed and manageable power consumption makes it likely that the next generation of embedded processing systems will include hundreds of cores, while being increasingly programmable, blending processors and configurable hardware in a power-efficient manner. Multi-Core Embedded Systems presents a variety of perspectives th...
Mathematics of Computing -- Parallelism.