Many databases today capture both structured and unstructured data. Making use of such hybrid data has become an important topic in research and industry. The efficient evaluation of hybrid data queries is the main topic of this thesis. Novel techniques are proposed that improve the whole processing pipeline, from indexes and query optimization to run-time processing. The contributions are evaluated in extensive experiments showing that the proposed techniques improve upon the state of the art.
This book constitutes the thoroughly refereed joint post-proceedings of five workshops held as part of the 9th International Conference on Extending Database Technology, EDBT 2004, held in Heraklion, Crete, Greece, in March 2004. The 55 revised full papers presented together with 2 invited papers and the summaries of 2 panels were selected from numerous submissions during two rounds of reviewing and revision. In accordance with the topical focus of the respective workshops, the papers are organized in sections on database technology in general (PhD Workshop), database technologies for handling XML information on the Web, pervasive information management, peer-to-peer computing and databases, and clustering information over the Web.
First in the Field: Breaking Ground in Computer Science at Purdue University chronicles the history and development of the first computer science department established at a university in the United States. The backdrop for this groundbreaking academic achievement is Purdue in the 1950s when mathematicians, statisticians, engineers, and scientists from various departments were searching for faster and more efficient ways to conduct their research. These were fertile times, as recognized by Purdue’s President Frederick L. Hovde, whose support of what was to become the first “university-centered” computer center in America laid the foundation for the nation’s first department of comput...
Text data that is associated with location data has become ubiquitous. A tweet is an example of this type of data, where the text in a tweet is associated with the location where the tweet has been issued. We use the term spatial-keyword data to refer to this type of data. Spatial-keyword data is being generated at massive scale. Almost all online transactions have an associated spatial trace. The spatial trace is derived from GPS coordinates, IP addresses, or cell-phone-tower locations. Hundreds of millions or even billions of spatial-keyword objects are being generated daily. Spatial-keyword data has numerous applications that require efficient processing and management of massive amounts ...
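To make the notion of a spatial-keyword object and a query over it concrete, here is a minimal sketch in Python. It assumes a point location given as latitude/longitude and a keyword set per object, together with a simple Boolean range query (the object must lie inside a bounding box and contain all required keywords); the class and function names are illustrative only, and a linear scan stands in for the spatial-keyword index structures a real system would use.

```python
from dataclasses import dataclass

# Illustrative spatial-keyword object: a point location plus a set of keywords.
# Field names are assumptions for this sketch, not taken from any particular system.
@dataclass
class SpatialKeywordObject:
    obj_id: int
    lat: float
    lon: float
    keywords: set[str]

def boolean_range_query(objects, bbox, required_keywords):
    """Return objects inside bbox that contain all required keywords.

    bbox is (min_lat, min_lon, max_lat, max_lon). The linear scan only
    illustrates the query semantics; efficient processing would rely on
    a spatial-keyword index rather than scanning every object.
    """
    min_lat, min_lon, max_lat, max_lon = bbox
    return [
        o for o in objects
        if min_lat <= o.lat <= max_lat
        and min_lon <= o.lon <= max_lon
        and required_keywords <= o.keywords
    ]

# Example: geotagged tweets modeled as spatial-keyword objects.
tweets = [
    SpatialKeywordObject(1, 40.7484, -73.9857, {"coffee", "midtown"}),
    SpatialKeywordObject(2, 40.6892, -74.0445, {"ferry", "statue"}),
]
print(boolean_range_query(tweets, (40.70, -74.02, 40.80, -73.93), {"coffee"}))
```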
The two-volume set LNAI 10751 and 10752 constitutes the refereed proceedings of the 10th Asian Conference on Intelligent Information and Database Systems, ACIIDS 2018, held in Dong Hoi City, Vietnam, in March 2018. The total of 133 full papers accepted for publication in these proceedings was carefully reviewed and selected from 423 submissions. They were organized in topical sections named: Knowledge Engineering and Semantic Web; Social Networks and Recommender Systems; Text Processing and Information Retrieval; Machine Learning and Data Mining; Decision Support and Control Systems; Computer Vision Techniques; Advanced Data Mining Techniques and Applications; Multiple Model Approach to Mach...
This book constitutes the thoroughly refereed joint post-proceedings of nine workshops held as part of the 10th International Conference on Extending Database Technology, EDBT 2006, held in Munich, Germany in March 2006. The 70 revised full papers presented were selected from numerous submissions during two rounds of reviewing and revision.
This book constitutes the refereed proceedings of the 14th International Conference on Database Systems for Advanced Applications, DASFAA 2009, held in Brisbane, Australia, in April 2009. The 39 revised full papers and 22 revised short papers presented together with 3 invited keynote papers, 9 demonstration papers, 3 tutorial abstracts, and one panel abstract were carefully reviewed and selected from 186 submissions. The papers are organized in topical sections on uncertain data and ranking, sensor networks, graphs, RFID and data streams, skyline and rising stars, parallel and distributed processing, mining and analysis, XML query, privacy, XML keyword search and ranking, Web and Web services, XML data processing, and multimedia.
Conceptual modeling is fundamental to any domain where one must cope with complex real-world situations and systems because it fosters communication between technology experts and those who would benefit from the application of those technologies. Conceptual modeling is the key mechanism for understanding and representing the domains of information system and database engineering, but also increasingly for other domains, including the new “virtual” e-environments and the information systems that support them. The importance of conceptual modeling in software engineering is evidenced by recent interest in “model-driven architecture” and “extreme non-programming”. Conceptual modeling also plays a pr...
"My absolute favorite for this kind of interview preparation is Steven Skiena’s The Algorithm Design Manual. More than any other book it helped me understand just how astonishingly commonplace ... graph problems are -- they should be part of every working programmer’s toolkit. The book also covers basic data structures and sorting algorithms, which is a nice bonus. ... every 1 – pager has a simple picture, making it easy to remember. This is a great way to learn how to identify hundreds of problem types." (Steve Yegge, Get that Job at Google) "Steven Skiena’s Algorithm Design Manual retains its title as the best and most comprehensive practical algorithm guide to help identify and so...