Showing items 1 - 2 of 2
  • Publication
    Open access
    A Comparative Study of Information Retrieval Effectiveness on Medieval Manuscripts
    This paper presents, evaluates, and compares the effectiveness of information retrieval (IR) for medieval manuscripts when faced with noisy texts. The corpus used in our experiments is based on a well-known medieval epic poem written in Middle High German and dating to the thirteenth century (Parzival). An error-free transcription of the poem was created manually and made available by experts; this transcription serves as the baseline against which we assess performance levels. In practice, document noise can arise from different sources (e.g., spelling variations due to non-normalized medieval text, or recognition mistakes). To overcome these difficulties, we suggest several query expansion strategies, allowing some degree of spelling variation between the queries and the searchable items. To analyze performance under several conditions, we evaluated five IR models, three forms of stemming, and three text representations. We show that incorporating the maximum number of spelling variations in the query expansion process does not produce the best results, while a more conservative approach to selecting expansion terms yields better performance levels.
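The query expansion strategy the abstract describes, admitting spelling variants of each query term, can be sketched as follows. This is a minimal illustration, not the authors' actual method: the function names, the edit-distance-1 threshold, and the sample vocabulary are all assumptions.

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cur[j] = min(prev[j] + 1,            # deletion
                         cur[j - 1] + 1,          # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[len(b)]


def expand_query(terms, vocabulary, max_dist=1):
    """Expand each query term with every vocabulary word within
    max_dist edit operations, to tolerate non-normalized spellings."""
    expanded = set()
    for term in terms:
        expanded.add(term)
        for word in vocabulary:
            if levenshtein(term, word) <= max_dist:
                expanded.add(word)
    return sorted(expanded)


# Hypothetical example: "parcival" is one substitution away from "parzival",
# so a query on one spelling also retrieves the other.
print(expand_query(["parzival"], ["parzival", "parcival", "gral"]))
```

Raising `max_dist` admits more variants; as the abstract notes, maximizing the number of admitted variants is not necessarily optimal, since distant expansion terms add noise of their own.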
  • Publication
    Open access
    Information Retrieval Strategies for Digitized Handwritten Medieval Documents
    This paper describes and evaluates different IR models and search strategies for digitized manuscripts. Written during the thirteenth century, these manuscripts were digitized using an imperfect recognition system with a word error rate of around 6%. Having access to the internal representation produced during the recognition stage, we were able to generate four automatic transcriptions, each introducing some form of spelling correction in an attempt to improve retrieval effectiveness. We evaluated the retrieval effectiveness of each version using three text representations combined with five IR models, three stemming strategies, and two query formulations. We employed a manually transcribed, error-free version to define the ground truth. Based on our experiments, we conclude that taking into account only the single best recognition word, or all of the top-k recognition alternatives, does not provide the best performance. Selecting all words whose log-likelihood is close to that of the best alternative yields the best text surrogate. Under this representation, different retrieval strategies tend to produce similar performance levels.
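The selection rule described above, keeping every recognition alternative whose log-likelihood falls within some margin of the best-scoring one, can be sketched as follows. The `margin` value and the candidate scores are illustrative assumptions, not figures from the paper.

```python
def select_alternatives(candidates, margin=1.0):
    """Given (word, log_likelihood) pairs from the recognizer for one
    token position, keep every word whose score lies within `margin`
    of the best alternative, rather than the single best word or a
    fixed top-k list."""
    best = max(ll for _, ll in candidates)
    return [word for word, ll in candidates if best - ll <= margin]


# Hypothetical recognizer output for one manuscript token:
# "gral" and "grat" score closely, "graf" is a distant third.
candidates = [("gral", -0.2), ("grat", -0.9), ("graf", -2.5)]
print(select_alternatives(candidates, margin=1.0))  # keeps the two close words
```

The appeal of this rule over a fixed top-k cutoff is that it adapts to the recognizer's confidence: a confidently recognized token contributes one word to the text surrogate, while an ambiguous one contributes several.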