  • Publication
    Open access
    Influence of language morphological complexity on information retrieval
    (2010)
    Dolamic, Ljiljana
    In this dissertation two aspects of information retrieval are examined. The first involves the creation and evaluation of various linguistic tools for languages less studied than English; in our case we have chosen to work with two Slavic languages, Czech and Russian, and three languages widely spoken on the Indian subcontinent: Hindi, Marathi and Bengali. To do so we compare the various indexing strategies and IR models most likely to obtain the best possible performance. The second part involves an evaluation of the effectiveness of queries written in different languages when searching collections written in either English or French. To cross the language barrier we apply publicly available machine translation services, analyze the results, and explain the poor performance obtained with the translated queries.
  • Publication
    Open access
    UniNE at CLEF 2008: TEL, and Persian IR
    (2009)
    Dolamic, Ljiljana
    ;
    Abdou, Samir
    In our participation in this evaluation campaign, our first objective was to analyze retrieval effectiveness when using The European Library (TEL) corpora, composed of very short descriptions (library catalog records), and to evaluate the retrieval effectiveness of several IR models. As a second objective we wanted to design and evaluate a stopword list and a light stemming strategy for Persian (Farsi), a member of the Indo-European language family whose morphology is more complex than that of English.
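    A minimal sketch of the kind of stopword filtering and light stemming described above, in Python; the suffix and stopword samples are illustrative assumptions, not the actual UniNE lists:

      # Illustrative light stemmer for Persian: strip a few common plural
      # markers. The published suffix list is longer; these rules are
      # assumptions used only for demonstration.
      PERSIAN_SUFFIXES = ["ها", "ان"]            # plural markers (assumed subset)
      STOPWORDS = {"از", "به", "در", "که", "و"}   # tiny sample of function words

      def light_stem(token: str) -> str:
          for suffix in PERSIAN_SUFFIXES:
              if token.endswith(suffix) and len(token) > len(suffix) + 2:
                  return token[:-len(suffix)]
          return token

      def index_terms(text: str) -> list[str]:
          # tokenize, drop stopwords, then apply the light stemmer
          return [light_stem(t) for t in text.split() if t not in STOPWORDS]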
  • Publication
    Open access
    Indexing and searching strategies for the Russian language
    (2009)
    Dolamic, Ljiljana
    This paper describes and evaluates various stemming and indexing strategies for the Russian language. We design and evaluate two stemming approaches, a light one and a more aggressive one, and compare these stemmers to the Snowball stemmer, to no stemming, and to a language-independent approach (n-grams). To evaluate the suggested stemming strategies we apply various probabilistic information retrieval (IR) models, including Okapi, the Divergence from Randomness (DFR) paradigm and a statistical language model (LM), as well as two vector-space approaches, namely the classical tf idf scheme and the dtu-dtn model. We find that the vector-space dtu-dtn and the DFR models tend to produce better retrieval effectiveness than the Okapi, LM, or tf idf models, although only for the latter two IR approaches are the performance differences statistically significant. Ignoring stemming generally reduces the MAP by more than 50%, and these differences are always significant. With an n-gram approach, performance is usually lower than with an approach involving stemming. Finally, our light stemmer tends to perform best, although the performance differences between the light, aggressive, and Snowball stemmers are not statistically significant.
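    To make the light/aggressive distinction concrete, here is a minimal Python sketch; the suffix lists are illustrative assumptions and much shorter than the published stemmer rules:

      # Assumed, abbreviated suffix lists (not the published stemmer).
      INFLECTIONAL = ["ами", "ями", "ов", "ев", "ах", "ях", "ой", "ей",
                      "а", "я", "ы", "и", "е", "у", "ю"]   # case/number endings
      DERIVATIONAL = ["ость", "ство", "ник", "тель"]        # word-formation suffixes

      def stem(word: str, aggressive: bool = False) -> str:
          # The light stemmer removes only inflectional endings; the aggressive
          # one additionally strips some derivational suffixes.
          suffixes = (DERIVATIONAL + INFLECTIONAL) if aggressive else INFLECTIONAL
          for suffix in sorted(suffixes, key=len, reverse=True):
              if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                  return word[:-len(suffix)]
          return word

      print(stem("книгами"))            # light: "книг" (case ending removed)
      print(stem("возможность", True))  # aggressive: "возможн" (-ость removed)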
  • Publication
    Open access
    How effective is Google's translation service in search?
    (2009)
    Dolamic, Ljiljana
    In multilingual countries (Canada, Hong Kong and India, among others), in large international organizations or companies (such as the WTO or the European Parliament), and among Web users in general, accessing information written in other languages has become a real need (news, hotel or airline reservations, government information, statistics). While some users are bilingual, others can read documents written in another language but cannot formulate a query to search them, or at least cannot provide reliable search terms in a form comparable to those found in the documents being searched. There are also many monolingual users who may want to retrieve documents in another language and then have them translated into their own language, either manually or automatically.
    Translation services may however be too expensive, not readily accessible or not available within a short timeframe. On the other hand, many documents contain non-textual information such as images, videos and statistics that do not need translation and can be understood regardless of the language involved. In response to these needs, and in order to make the Web universally available regardless of any language barriers, in May 2007 Google launched a translation service that now provides two-way online translation mainly between English and 41 other languages, for example Arabic, simplified and traditional Chinese, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish (http://translate.google.com/). Over the last few years other free Internet translation services have been made available, for example by BabelFish (http://babel.altavista.com/) and Yahoo! (http://babelfish.yahoo.com/). These two systems are similar to that used by Google, given that they are based on technology developed by Systran, one of the earliest companies to develop machine translation. Also worth mentioning here is the Promt system (also known as Reverso, http://translation2.paralink.com/), developed in Russia to provide translation mainly between Russian and other languages.
    The question we would like to address here is to what extent a translation service such as Google's can produce adequate results in a language other than the one used to write the query. Although we will not evaluate the translations per se, we will test and analyze various systems in terms of their ability to retrieve items automatically based on a translated query. To be adequate, these tests must be done on a collection of documents written in one given language, together with a series of topics (expressing user information needs) written in other languages and a series of relevance assessments (the relevant documents for each topic).
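    A sketch of the evaluation setting described above: each topic written in another language is first translated into the collection language and then submitted to the search system, and the run is scored with the mean average precision. The translate() call below is a hypothetical placeholder standing in for whichever online service (Google, BabelFish/Yahoo!, Promt) is under test:

      def translate(text: str, source: str, target: str) -> str:
          # Hypothetical placeholder: in the experiments this is an online
          # machine-translation service whose API is not reproduced here.
          raise NotImplementedError

      def bilingual_map(topics, collection_lang, search, average_precision):
          # topics: list of (query_text, query_lang, relevant_doc_ids)
          # search(query) -> ranked doc ids; average_precision -> AP of one run
          ap_values = []
          for query, lang, relevant in topics:
              if lang != collection_lang:
                  query = translate(query, source=lang, target=collection_lang)
              ap_values.append(average_precision(search(query), relevant))
          return sum(ap_values) / len(ap_values)   # mean average precision (MAP)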
  • Publication
    Open access
    When stopword lists make the difference
    (2009)
    Dolamic, Ljiljana
    In this brief communication, we evaluate the use of two stopword lists for the English language (one comprising 571 words, the other only 9) and compare them with a search approach accounting for all word forms. We show that with the original Okapi model, or with certain models derived from the Divergence from Randomness (DFR) paradigm, significantly lower performance levels may result when using a short stopword list or none at all. For other DFR models and a revised Okapi implementation, performance differences between approaches using short or long stopword lists, or no list at all, are usually not statistically significant. Similar conclusions can be drawn for other natural languages such as French, Hindi, or Persian.
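    A minimal sketch of the comparison: the same text indexed with a long stopword list, a very short one, or none at all (the word samples below merely stand in for the actual 571-word and 9-word lists):

      LONG_LIST  = {"the", "of", "and", "to", "in", "a", "is", "that", "it", "was"}  # stands in for the 571-word list
      SHORT_LIST = {"the", "a", "an", "of"}                                          # stands in for the 9-word list
      NO_LIST    = set()

      def index(tokens, stopwords):
          # keep every surface form except the listed stopwords
          return [t for t in tokens if t.lower() not in stopwords]

      tokens = "the fall of the Berlin wall".split()
      for name, sw in [("long", LONG_LIST), ("short", SHORT_LIST), ("none", NO_LIST)]:
          print(name, index(tokens, sw))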
  • Publication
    Open access
    Variations autour de tf idf et du moteur Lucene
    (2008)
    Dolamic, Ljiljana
    This paper evaluates and compares the retrieval effectiveness of various models derived from the classical tf idf paradigm when searching a test collection written in French (CLEF, 299 queries). We show that the simple formulation "tf idf" remains ambiguous and may hide various variants providing different levels of retrieval effectiveness, measured either by the mean average precision (MAP) or by the mean reciprocal rank (MRR) of the first correct answer. Our analysis confirms that the best retrieval effectiveness is obtained with the Okapi probabilistic model. However, in particular contexts (e.g. distributed IR) where the idf value is not known when documents are indexed, we demonstrate that simple indexing schemes based only on the frequency of occurrence (tf) can produce significantly better performance than the classical tf idf model. Using the Lucene search engine (open-source software), we also evaluate two of its features, namely the boost given to words appearing in titles and the coordinate-level match, which accounts for the number of terms shared by the retrieved document and the query.
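    A minimal sketch contrasting a classical tf idf weight with a tf-only weight of the kind usable when the idf is unknown at indexing time (e.g. distributed IR); the exact weighting variants studied in the paper differ, so the formulas below are only illustrative:

      import math

      def tfidf(tf: int, df: int, n_docs: int) -> float:
          # classical "tf idf": raw term frequency times inverse document frequency
          return tf * math.log(n_docs / df) if tf and df else 0.0

      def tf_only(tf: int) -> float:
          # damped tf only, usable when collection-wide statistics are unavailable
          return 1.0 + math.log(tf) if tf else 0.0

      def score(query_terms, doc_tf, doc_df, n_docs):
          # simple inner-product ranking; swap tfidf for tf_only to compare variants
          return sum(tfidf(doc_tf.get(t, 0), doc_df.get(t, 1), n_docs) for t in query_terms)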
  • Publication
    Open access
    Stemming Approaches for East European Languages
    (2008)
    Dolamic, Ljiljana
    During this CLEF evaluation campaign, our first objective is to propose and evaluate various indexing and search strategies for the Czech language that will hopefully result in more effective retrieval than language-independent approaches (n-grams). Based on the stemming strategies we developed for other languages, we propose for this Slavic language a light stemmer (inflectional only) and a second, more aggressive one based on a suffix-stripping scheme that also removes some derivational suffixes. Our second objective is to undertake further study of the relative merit of various search engines when exploring Hungarian and Bulgarian documents. To evaluate these solutions we use various effective IR models. Our experiments generally show that for the Bulgarian language, removing certain frequently used derivational suffixes may improve the mean average precision. For the Hungarian corpus, applying an automatic decompounding procedure improves the MAP. For the Czech language, a comparison of the light stemmer with the more aggressive one, removing both inflectional and some derivational suffixes, reveals only small performance differences. For this language only, the performance differences between a word-based and a 4-gram indexing strategy are also rather small.
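    The language-independent alternative mentioned above replaces each word by its overlapping character 4-grams, so no language-specific stemmer is required; a minimal sketch (not the exact CLEF indexing chain):

      def char_ngrams(word: str, n: int = 4) -> list[str]:
          # overlapping character n-grams; short words are kept whole
          if len(word) <= n:
              return [word]
          return [word[i:i + n] for i in range(len(word) - n + 1)]

      print(char_ngrams("knihovna"))   # Czech word -> ['knih', 'niho', 'ihov', 'hovn', 'ovna']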
  • Publication
    Open access
    Domain-Specific IR for German, English and Russian Languages
    (2008)
    Fautsch, Claire
    ;
    Dolamic, Ljiljana
    ;
    Abdou, Samir
    In participating in this domain-specific track, our first objective is to propose and evaluate a light stemmer for the Russian language. Our second objective is to measure the relative merit of various search engines used for the German and, to a lesser extent, the English language. To do so we evaluated the tf idf and Okapi models, IR models derived from the Divergence from Randomness (DFR) paradigm, and also a language model (LM). For the Russian language, we find that word-based indexing using our light stemming procedure results in better retrieval effectiveness than the 4-gram indexing strategy (a relative difference of around 30%). Using the German corpus, we examine variations in retrieval effectiveness after applying the specialized thesaurus to automatically enlarge topic descriptions. In this case the performance variations were relatively small and usually not significant.
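    For reference, the Okapi model evaluated here ranks documents with the BM25 weighting; a minimal sketch with the usual k1 and b constants (the exact parameter values used in the track are not reproduced here):

      import math

      def bm25(query_terms, doc_tf, doc_len, avg_doc_len, df, n_docs,
               k1: float = 1.2, b: float = 0.75) -> float:
          # Okapi BM25: idf weighting with tf saturation (k1) and
          # document-length normalization (b)
          score = 0.0
          for t in query_terms:
              tf = doc_tf.get(t, 0)
              if tf == 0 or df.get(t, 0) == 0:
                  continue
              idf = math.log(1.0 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
              norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
              score += idf * norm
          return score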
  • Publication
    Open access
    Information retrieval with Hindi, Bengali, and Marathi languages: evaluation and analysis
    Akasereh, Mitra
    ;
    Dolamic, Ljiljana
    Our first objective in participating in the FIRE evaluation campaigns is to analyze the retrieval effectiveness of various indexing and search strategies when dealing with corpora written in the Hindi, Bengali and Marathi languages. As a second goal, during this second campaign we have developed new and more aggressive stemming strategies for both the Marathi and Hindi languages. We have compared their retrieval effectiveness with both a light stemming strategy and a language-independent n-gram approach. As another language-independent indexing strategy, we have evaluated the trunc-n method, in which the indexing term is formed by taking only the first n letters of each word. To evaluate these solutions we have used various IR models, including models derived from the Divergence from Randomness (DFR) paradigm, a language model (LM), Okapi, and the classical tf idf vector-space approach.
    For the three languages studied, our experiments tend to show that IR models derived from the Divergence from Randomness (DFR) paradigm produce the best overall results. Our various experiments also demonstrate that, for these languages, either an aggressive stemming procedure or the trunc-n indexing approach produces better retrieval effectiveness than the other word-based or n-gram language-independent approaches. Applying the Z-score data fusion operator after a blind query expansion also tends to improve the MAP of the merged run over that of the best single IR system.
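    The Z-score data fusion mentioned above can be sketched as follows: the scores of each run are standardized (zero mean, unit standard deviation) and then summed per document, so runs produced by systems with different score scales can be merged; a minimal version, without the refinements of the published operator:

      from statistics import mean, stdev

      def zscore_fusion(runs):
          # runs: list of {doc_id: retrieval score}, one dictionary per IR system
          fused = {}
          for run in runs:
              scores = list(run.values())
              mu = mean(scores)
              sigma = stdev(scores) if len(scores) > 1 else 1.0
              sigma = sigma or 1.0          # guard against a zero spread
              for doc, s in run.items():
                  fused[doc] = fused.get(doc, 0.0) + (s - mu) / sigma
          return sorted(fused, key=fused.get, reverse=True)   # merged ranking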
  • Publication
    Open access
    Recherche d’information dans un corpus bruité (OCR)
    Dolamic, Ljiljana
    Cet article désire mesurer la perte de performance lors de la recherche d'information dans une collection de documents scannés. Disposant d'un corpus sans erreur et de deux versions renfermant 5 % et 20 % d'erreurs en reconnaissance, nous avons évalué six modèles de recherche d'information basés sur trois représentations des documents (sac de mots, n-grammes, ou trunc-n) et trois enracineurs. Basé sur l'inverse du rang du premier document pertinent dépisté, nous démontrons que la perte de performance se situe aux environs de - 17 % avec un taux d'erreur en reconnaissance de 5 % et s'élève à – 46 % si ce taux grimpe à 20 %. La représentation par 4-grammes semble apporter une meilleure qualité de réponse avec un corpus bruité. Concernant l'emploi ou non d'un enracineur léger ou la pseudo-rétroaction positive, aucune conclusion définitive ne peut être tirée., This paper evaluates the retrieval effectiveness degradation when facing with noisy text corpus. With the use of a test-collection having the clean text, another version with around 5% error rate in recognition and a third with 20% error rate, we have evaluated six IR models based on three text representations (bag-of-words, n-grams, trunc-n) as well as three stemmers. Using the mean reciprocal rank as performance measure, we show that the average retrieval effectiveness degradation is around -17% when dealing with an error rate of 5%. This average decrease is around -46% when facing with an error rate of 20%. The representation by 4-grams tends to offer the best retrieval when searching with noisy text. Finally, we are not able to obtain clear conclusion about the impact of different stemming strategies or the use of blind-query expansion.