The fifth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2004. Participation in the CLEF campaigns has increased each year and CLEF 2004 was no exception: 55 groups submitted results for one or more of the different tracks, compared with 42 groups in the previous year. CLEF 2004 also marked a breaking point with respect to previous campaigns. The focus was no longer mainly concentrated on multilingual document retrieval as in previous years but was diversified to include different kinds of text retrieval across languages (e.g., exact answers in the question-answering track) and retrieval on different kinds of media (i.e., not ju...
The ninth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2008. There were seven main evaluation tracks in CLEF 2008 plus two pilot tasks. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, 100 groups, mainly but not only from academia, participated in the campaign. Most of the groups were from Europe but there was also a good contingent from North America and Asia plus a few participants from South America and Africa. Full details regarding the design of the tracks, the methodologies used for evaluation, and the results obtained by t...
This book constitutes the thoroughly refereed postproceedings of the 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, held in Alicante, Spain, September 2006. The revised papers presented together with an introduction were carefully reviewed and selected for inclusion in the book. The papers are organized in topical sections on Multilingual Textual Document Retrieval, Domain-Specific Information Retrieval, i-CLEF, QA@CLEF, ImageCLEF, CLSR, WebCLEF and GeoCLEF.
This book constitutes the thoroughly refereed proceedings of the 10th Workshop of the Cross-Language Evaluation Forum, CLEF 2009, held in Corfu, Greece, in September/October 2009. The volume reports experiments on various types of textual document collections. It is divided into six main sections presenting the results of the following tracks: Multilingual Document Retrieval (Ad-Hoc), Multiple Language Question Answering (QA@CLEF), Multilingual Information Filtering (INFILE@CLEF), Intellectual Property (CLEF-IP) and Log File Analysis (LogCLEF), plus the activities of the MorphoChallenge Program.
This book constitutes the refereed proceedings of the 11th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2007, held in Budapest, Hungary. The papers are organized in topical sections on ontologies, digital libraries and the web, models, multimedia and multilingual DLs, grid and peer-to-peer, preservation, user interfaces, document linking, information retrieval, personal information management, new DL applications, and user studies.
The fourth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to August 2003. Participation in this campaign showed a slight rise in the number of participants from the previous year, with 42 groups submitting results for one or more of the different tracks (compared with 37 in 2002), but a steep rise in the number of experiments attempted. A distinctive feature of CLEF 2003 was the number of new tracks and tasks that were offered as pilot experiments. The aim was to try out new ideas and to encourage the development of new evaluation methodologies, suited to the emerging requirements of both system developers and users with respect to today's ...
"This book provides pertinent and vital information that researchers, postgraduate, doctoral students, and practitioners are seeking for learning about the latest discoveries and advances in NLP methodologies and applications of NLP"--Provided by publisher.
The second evaluation campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2001. This campaign proved a great success, and showed an increase in participation of around 70% compared with CLEF 2000. It culminated in a two-day workshop in Darmstadt, Germany, September 3–4, in conjunction with the 5th European Conference on Digital Libraries (ECDL 2001). On the first day of the workshop, the results of the CLEF 2001 evaluation campaign were reported and discussed in paper and poster sessions. The second day focused on the current needs of cross-language systems and how evaluation campaigns in the future can best be designed to sti...
This book constitutes the thoroughly refereed postproceedings of the 6th Workshop of the Cross-Language Evaluation Forum, CLEF 2005. The book presents 111 revised papers together with an introduction. Topical sections include multilingual textual document retrieval, cross-language and more, monolingual experiments, domain-specific information retrieval, interactive cross-language information retrieval, multiple language question answering, cross-language retrieval in image collections, cross-language speech retrieval, multilingual Web track, cross-language geographical retrieval, and evaluation issues.
The tenth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2009. There were eight main evaluation tracks in CLEF 2009 plus a pilot task. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, about 150 groups, mainly but not only from academia, registered to participate in the campaign. Most of the groups were from Europe but there was also a good contingent from North America and Asia. The results were presented at a two-and-a-half day workshop held in Corfu, Greece, September 30 to October 2, 2009, in conjunction with the European Conference on Digital Libraries. The workshop, attended by 160 researchers and system developers, provided the opportunity for all the groups that had participated in the evaluation campaign to get together, compare approaches and exchange ideas.