Scientific workflows have emerged as a key technology that assists scientists with the design, management, execution, sharing and reuse of in silico experiments. Workflow management systems simplify the management of scientific workflows by providing graphical interfaces for their development, monitoring and analysis. Nowadays, e-Science combines such workflow management systems with large-scale data and computing resources into complex research infrastructures. For instance, e-Science supports best-practice research in collaborations by providing workflow repositories, which facilitate the sharing and reuse of scientific workflows. However, scientists are still faced with di...
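To make the workflow idea concrete, here is a minimal Python sketch that models a scientific workflow as a directed acyclic graph of tasks executed in dependency order. The task names and steps are hypothetical illustrations, not taken from any particular workflow management system described above.

    # A workflow as a DAG of tasks, run in dependency order (hypothetical example).
    from graphlib import TopologicalSorter

    def fetch_data():
        print("fetching raw experiment data")

    def clean_data():
        print("cleaning and normalizing the data")

    def analyze():
        print("running the analysis step")

    def report():
        print("writing the final report")

    # Map each task to the set of tasks it depends on.
    workflow = {
        fetch_data: set(),
        clean_data: {fetch_data},
        analyze: {clean_data},
        report: {analyze},
    }

    # static_order() yields every task only after all of its dependencies.
    for task in TopologicalSorter(workflow).static_order():
        task()

A real workflow management system adds what this sketch omits: graphical editing, monitoring, provenance capture, and execution on large-scale data and computing resources.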
The 7 revised full papers, 11 revised medium-length papers, 6 revised short papers, and 7 demo papers, presented together with 10 poster/abstract papers describing late-breaking work, were carefully reviewed and selected from numerous submissions. Provenance has been recognized as important in a wide range of areas including databases, workflows, knowledge representation and reasoning, and digital libraries. Thus, many disciplines have proposed a wide range of provenance models, techniques, and infrastructure for encoding and using provenance. The papers investigate many facets of data provenance, process documentation, data derivation, and data annotation.
This book constitutes the refereed proceedings of the 4th International Workshop on Data Integration in the Life Sciences, DILS 2007, held in Philadelphia, PA, USA in July 2007. It covers new architectures and experience in using systems, managing and designing scientific workflows, mapping and matching techniques, modeling of life science data, and annotation in data integration.
This volume comprises papers from the following three workshops that were part of the complete program for the International Conference on Extending Database Technology (EDBT) held in Prague, Czech Republic, in March 2002: XML-Based Data Management (XMLDM), the Second International Workshop on Multimedia Data and Document Engineering (MDDE), and the Young Researchers Workshop (YRWS). Together, the three workshops featured 48 high-quality papers selected from approximately 130 submissions. It was, therefore, difficult to decide on the papers that were to be accepted for presentation. We believe that the accepted papers substantially contribute to their particular fields of research. The workshops were an ex...
This book constitutes the refereed proceedings of the First International Workshop on Data Integration in the Life Sciences, DILS 2004, held in Leipzig, Germany, in March 2004. The 13 revised full papers and 2 revised short papers presented were carefully reviewed and selected from many submissions. The papers are organized in topical sections on scientific and clinical workflows, ontologies and taxonomies, indexing and clustering, integration tools and systems, and integration techniques.
Conceptual modeling is fundamental to any domain where one must cope with complex real-world situations and systems because it fosters communication between technology experts and those who would benefit from the application of those technologies. Conceptual modeling is the key mechanism for understanding and representing the domains of information system and database engineering but also increasingly for other domains including the new "virtual" e-environments and the information systems that support them. The importance of conceptual modeling in software engineering is evidenced by recent interest in "model-driven architecture" and "extreme non-programming". Conceptual modeling also plays a pr...
This book reports on the results of an interdisciplinary and multidisciplinary workshop on provenance that brought together researchers and practitioners from different areas such as archival science, law, information science, computing, forensics and visual analytics who work at the frontiers of new knowledge on provenance. Each of these fields understands the meaning and purpose of representing provenance in subtly different ways. The aim of this book is to create cross-disciplinary bridges of understanding with a view to arriving at a deeper and clearer perspective on the different facets of provenance and how traditional definitions and applications may be enriched and expanded via an interdisciplinary and multidisciplinary synthesis. This volume brings together all of these developments, setting out an encompassing vision of provenance to establish a robust framework for expanded provenance theory, standards and technologies that can be used to build trust in financial and other types of information.
This is a timely book presenting an overview of the current state of the art within established projects, covering many different aspects of workflow from users to tool builders. It surveys active research from a number of different perspectives, includes theoretical aspects of workflow, and deals with workflow for e-Science as opposed to e-Commerce. The topics covered will be of interest to a wide range of practitioners.
LNBIP 99 and LNBIP 100 together constitute the thoroughly refereed proceedings of 12 international workshops held in Clermont-Ferrand, France, in conjunction with the 9th International Conference on Business Process Management, BPM 2011, in August 2011. The 12 workshops focused on Business Process Design (BPD 2011), Business Process Intelligence (BPI 2011), Business Process Management and Social Software (BPMS2 2011), Cross-Enterprise Collaboration (CEC 2011), Empirical Research in Business Process Management (ER-BPM 2011), Event-Driven Business Process Management (edBPM 2011), Process Model Collections (PMC 2011), Process-Aware Logistics Systems (PALS 2011), Process-Oriented Systems in Heal...
Data quality is one of the most important problems in data management. A database system typically aims to support the creation, maintenance, and use of large amounts of data, focusing on the quantity of data. However, real-life data are often dirty: inconsistent, duplicated, inaccurate, incomplete, or stale. Dirty data in a database routinely generate misleading or biased analytical results and decisions, and lead to loss of revenues, credibility and customers. With this comes the need for data quality management. In contrast to traditional data management tasks, data quality management enables the detection and correction of errors in the data, syntactic or semantic, in order to improve the...
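As an illustration of the kind of error detection described above, the following short Python sketch flags duplicate, incomplete, and semantically inconsistent records. The records and the rules are invented for the example and do not come from the book.

    # Rule-based dirty-data detection over hypothetical records.
    records = [
        {"id": 1, "name": "Alice", "age": 34, "country": "NL"},
        {"id": 2, "name": "Bob", "age": None, "country": "DE"},
        {"id": 2, "name": "Bob", "age": None, "country": "DE"},  # exact duplicate
        {"id": 3, "name": "Carol", "age": -5, "country": "FR"},  # violates age >= 0
    ]

    def find_quality_issues(records):
        seen, issues = set(), []
        for r in records:
            key = tuple(sorted(r.items()))  # canonical form for duplicate detection
            if key in seen:
                issues.append((r["id"], "duplicate record"))
            seen.add(key)
            if any(v is None for v in r.values()):  # incompleteness check
                issues.append((r["id"], "missing value"))
            if r["age"] is not None and r["age"] < 0:  # semantic rule
                issues.append((r["id"], "age must be non-negative"))
        return issues

    for rec_id, problem in find_quality_issues(records):
        print(f"record {rec_id}: {problem}")

Real data quality management goes further, correcting the detected errors rather than merely reporting them, but the detection step follows the same pattern of syntactic and semantic rules.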