Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-based deployment models. The book’s second section examines the use of advanced Big Data processing...
In practice, the design and architecture of a cloud vary among cloud providers. We present a generic evaluation framework for the performance, availability, and reliability characteristics of various cloud platforms. We describe a generic benchmark architecture for cloud databases, specifically NoSQL database-as-a-service offerings, which measures performance in terms of replication delay and monetary cost. Service Level Agreements (SLAs) represent the contracts that capture the agreed-upon guarantees between a service provider and its customers. The specifications of existing SLAs for cloud services are not designed to flexibly handle even relatively straightforward performance and...
The chapter “An Efficient Index for Reachability Queries in Public Transport Networks” is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This edited book collects state-of-the-art research related to large-scale data analytics that has been accomplished over the last few years. It is among the first books devoted to this important area, drawing on contributions from diverse scientific areas such as databases, data mining, supercomputing, hardware architecture, data visualization, statistics, and privacy. There is an increasing need for new approaches and technologies that can analyze and synthesize very large amounts of data, on the order of petabytes, that are generated by massively distributed data sources. This requires new distributed architectures for data analysis. Additionally, the heterogeneity of such sources imposes significant...
This book constitutes the refereed post-conference proceedings of the 6th TPC Technology Conference, TPCTC 2014, held in Hangzhou, China, in September 2014. It contains 12 selected peer-reviewed papers and a report from the TPC Public Relations Committee. Many buyers use TPC benchmark results as points of comparison when purchasing new computing systems. The information technology landscape is evolving at a rapid pace, challenging industry experts and researchers to develop innovative techniques for the evaluation, measurement, and characterization of complex systems. The TPC remains committed to developing new benchmark standards to keep pace, and one vehicle for achieving this objective is the sponsorship of the Technology Conference on Performance Evaluation and Benchmarking (TPCTC). Over the last five years, TPCTC has been held successfully in conjunction with VLDB.
This book constitutes the workshop proceedings of the 15th International Conference on Database Systems for Advanced Applications, DASFAA 2010, held in Tsukuba, Japan, in April 2010. The volume contains six workshops, each focusing on specific research issues that contribute to the main themes of the DASFAA conference: the First International Workshop on Graph Data Management: Techniques and Applications (GDM 2010); the Second International Workshop on Benchmarking of Database Management Systems and Data-Oriented Web Technologies (BenchmarkX'10); the Third International Workshop on Managing Data Quality in Collaborative Information Systems (MCIS2010); the Workshop on Social Networks and Social Media Mining on the Web (SNSMW2010); the Data Intensive eScience Workshop (DIEW 2010); and the Second International Workshop on Ubiquitous Data Management (UDM2010).
This book presents an end-to-end architecture for demand-based data stream gathering, processing, and transmission. The Internet of Things (IoT) consists of billions of devices that form a cloud of network-connected sensor nodes. These sensor nodes supply a vast number of data streams with massive amounts of sensor data. Real-time sensor data enables diverse applications, including traffic-aware navigation, machine monitoring, and home automation. Current stream processing pipelines are demand-oblivious, which means that they gather, transmit, and process as much data as possible. In contrast, a demand-based processing pipeline uses requirement specifications of data consumers, such as failure...
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems that enable large-scale distributed applications with high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility...
The realization that the use of components off the shelf (COTS) could reduce costs sparked the evolution of the massively parallel computing systems available today. The main problem with such systems is the development of suitable operating systems, algorithms, and application software that can utilise the potential processing power of large numbers of processors. As a result, systems comprising millions of processors are still limited in the applications they can efficiently solve. Two alternative paradigms that may offer a solution to this problem are Quantum Computers (QC) and Brain Inspired Computers (BIC). This book presents papers from the 14th edition of the biennial international conference...
Big Data has been much in the news in recent years, and the advantages conferred by the collection and analysis of large datasets in fields such as marketing, medicine, and finance have led to claims that almost any real-world problem could be solved if sufficient data were available. This is, of course, a very simplistic view, and the usefulness of collecting, processing, and storing large datasets must always be seen in terms of the communication, processing, and storage capabilities of the computing platforms available. This book presents papers from the International Research Workshop, Advanced High Performance Computing Systems, held in Cetraro, Italy, in July 2014. The papers selected for p...