Showing items 1 - 10 of 246
  • Publication
    Open access
    A consistent research design for value relevance studies
    We argue that Ohlson’s linear solution to the residual earnings (RE) equation, a crucial component of a widely used value relevance research design, is generally not a linear regression. Moreover, its coefficients are firm-dependent. As such, its empirical specifications, the price-levels and returns-earnings regressions, are structurally ill-suited for consistent inference in cross-sections. To address this issue, we first prove the existence of a regression solution to the RE equation and then introduce a valuation-based research design that builds on this solution and warrants consistent estimation of the empirical specification (which takes the form of a non-linear regression). Its estimation turns out to be an optimal implementation of price-to-book (P/B) multiple valuation, a technique that is easy to implement and familiar to the accounting community. The regression view of multiple valuation identifies the P/B value with a price that incorporates earnings expectations formed only on the basis of the current levels of the RE drivers. Using a large sample of US non-financial firms over an almost 40-year period, we document the usefulness of the alternative research design by comparatively testing four economically motivated and intuitively appealing predictions: that size, earnings predictability, earnings volatility, and accrual quality are value-relevant. While the current research design does not validate these predictions, the approach based on the regression solution shows a significant association between prices and the four attributes for most of the years in the sample.
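    The paper's actual functional form is not reproduced in this abstract, so the sketch below is only a generic illustration of the idea under stated assumptions: a cross-sectional non-linear price-levels regression in which the P/B multiple depends on a current RE driver (here ROE). The model pb_model, its parameters, and the simulated data are all hypothetical, not the authors' specification.

```python
# A minimal sketch, NOT the paper's specification: a non-linear price-levels
# regression in which the P/B multiple depends on a current RE driver (ROE).
# pb_model and the simulated data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n = 500
book = rng.lognormal(mean=4.0, sigma=1.0, size=n)   # book value of equity
roe = rng.normal(loc=0.10, scale=0.08, size=n)      # current return on equity

def pb_model(X, a, b):
    """Price = Book * exp(a + b*ROE): the P/B multiple varies with the driver."""
    book_value, roe_level = X
    return book_value * np.exp(a + b * roe_level)

# simulate prices from the model plus multiplicative noise
price = pb_model((book, roe), 0.2, 2.5) * np.exp(rng.normal(0.0, 0.1, n))

# non-linear least squares; in practice one such cross-sectional fit per year
(a_hat, b_hat), _ = curve_fit(pb_model, (book, roe), price, p0=(0.0, 1.0))
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")
```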
  • Publication
    Temporary restriction
    A different approach for the analysis of web access logs
    (Berlin: Springer, 2004); Schoier, Gabriella; Vichi, Maurizio; Monari, Paola; Mignani, Stefania; Montanari, Angela
    The development of Internet-based business has highlighted the importance of personalising and optimising Web sites. For this purpose, the study of user behaviour is of great importance. In this paper we present a solution to the problem of identifying dense clusters in the analysis of Web access logs. We consider a modification of an algorithm recently proposed in social network analysis, and illustrate the approach by analysing the log file of a web portal.
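    The modified social-network algorithm itself is not detailed in this abstract; as a rough sketch of the underlying task, the code below builds a page co-visit graph from a toy access log and applies a generic greedy densest-subgraph peeling heuristic. The log format, page names, and the heuristic are assumptions, not the authors' method.

```python
# Illustrative only: a page co-visit graph from a toy access log, plus a
# generic greedy densest-subgraph (peeling) heuristic for dense clusters.
from collections import defaultdict
from itertools import combinations

# toy log: (session_id, page) pairs -- the format is an assumption
log = [("s1", "/home"), ("s1", "/news"), ("s2", "/home"), ("s2", "/news"),
       ("s2", "/shop"), ("s3", "/home"), ("s3", "/news"), ("s4", "/about")]

# group pages by session, then link pages visited within the same session
sessions = defaultdict(set)
for sid, page in log:
    sessions[sid].add(page)
edges = defaultdict(set)
for pages in sessions.values():
    for u, v in combinations(sorted(pages), 2):
        edges[u].add(v)
        edges[v].add(u)

def densest_subgraph(adj):
    """Peel off the min-degree node repeatedly; keep the densest snapshot."""
    nodes = set(adj)
    best, best_density = set(nodes), 0.0
    while nodes:
        m = sum(len(adj[u] & nodes) for u in nodes) / 2   # edges inside
        density = m / len(nodes)
        if density >= best_density:
            best, best_density = set(nodes), density
        nodes.remove(min(nodes, key=lambda u: len(adj[u] & nodes)))
    return best, best_density

cluster, density = densest_subgraph(edges)
print(f"dense cluster: {cluster}, density: {density:.2f}")
```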
  • Publication
    Metadata only
  • Publication
    Open access
    A Hybrid Algorithm for Generating Formal Concepts and Building Concept Lattice Using NextClosure and Nourine Algorithms
    (2016-07-18)
    A concept lattice produced from a set of formal concepts represents a concept hierarchy and has many applications in knowledge representation and data mining. Various algorithms have been proposed for efficiently generating formal concepts and building concept lattices. In this paper we introduce the idea of combining existing FCA algorithms with the aim of benefiting from their specific advantages. As an example, we propose a hybrid model that uses the NextClosure (NC) algorithm to generate formal concepts and parts of the Nourine algorithm to build the concept lattice. We compare the proposed hybrid model with two of its counterparts: pure NC and pure Nourine. Our experiments show that the hybrid model always outperforms pure NC and, for very large datasets, can surpass pure Nourine as well.
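    For readers unfamiliar with NC, the following is a compact, textbook-style NextClosure that enumerates all concept intents of a tiny formal context in lectic order. It illustrates only the concept-generation half of the hybrid model; the Nourine lattice-building part is not sketched, and the toy context is an assumption.

```python
# A compact textbook-style NextClosure (NC); illustrative only, not the
# paper's hybrid NC/Nourine model.
def closure(attrs, context, attributes):
    """Intent closure of `attrs`: attributes shared by every object whose
    row contains all of `attrs`."""
    extent = [g for g, row in context.items() if attrs <= row]
    if not extent:
        return set(attributes)          # empty extent -> full attribute set
    out = set(attributes)
    for g in extent:
        out &= context[g]
    return out

def next_closure(A, context, attributes):
    """Return the lectically next closed attribute set after A, or None."""
    for i in range(len(attributes) - 1, -1, -1):
        m = attributes[i]
        if m in A:
            A = A - {m}
        else:
            B = closure(A | {m}, context, attributes)
            # accept B only if it adds no attribute smaller than m
            if all(attributes.index(b) >= i for b in B - A):
                return B
    return None

context = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"a", "c"}}
attributes = ["a", "b", "c"]            # fixed lectic order

A = closure(set(), context, attributes)
intents = [A]
while (A := next_closure(A, context, attributes)) is not None:
    intents.append(A)
print(intents)                          # all concept intents in lectic order
```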
  • Publication
    Open access
    A knowledge extraction framework for crime analysis: unsupervised methods in uncertain environments
    This doctoral thesis investigates the role of knowledge extraction methods in the analysis of crime as an interdisciplinary project, with a focus on unsupervised methods dealing with the uncertain aspects that are intrinsic to the crime environment.
    In a context where data generated by criminal activities are increasingly available due to the evolution of technology, the use of automated methods to create value from these data becomes a necessity. These analytic methods require a design specific to the nature of the data they deal with, mostly gathered from crime scenes. Crime analysts desperately need such methods to be better informed and more efficient in the perpetual struggle against crime. However, their choices in terms of range and availability are very limited.
    A framework delineating and explaining the role of knowledge extraction methods for crime analysis is provided. This framework addresses a particular challenge: developing unsupervised data mining methods dealing with the uncertainty of crime data.
    Three approaches are developed to confront this challenge. (1) How can crime data be structured and represented to fully exploit their potential for revealing knowledge in further analyses? (2) What method for analysing links between crimes can handle both qualitative and quantitative crime data? And (3) what method can help crime analysts detect changes in crime trends in a flexible and understandable way?
    The significance of this interdisciplinary research can be summarized in two points: it clarifies and delineates the role of data mining in crime analysis, giving some insight into its applicability in this particular environment; and it facilitates knowledge extraction through the proposed domain-driven methods.
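    The thesis's actual linkage method is not specified in this abstract; as a hint of what question (2) above involves, the sketch below scores the similarity of two crime cases with mixed qualitative and quantitative features using a Gower-style measure. All feature names, values, and ranges are hypothetical.

```python
# Illustrative only: a Gower-style similarity over mixed-type crime-case
# features; not the thesis's method. Feature names and ranges are invented.
def gower_similarity(case_a, case_b, numeric_ranges):
    """Average per-feature similarity over mixed qualitative/quantitative features."""
    scores = []
    for key, a in case_a.items():
        b = case_b[key]
        if key in numeric_ranges:                    # quantitative feature
            rng = numeric_ranges[key]
            scores.append(1.0 - abs(a - b) / rng if rng else 1.0)
        else:                                        # qualitative feature
            scores.append(1.0 if a == b else 0.0)
    return sum(scores) / len(scores)

c1 = {"modus": "window", "weapon": "none", "hour": 23, "value": 1200}
c2 = {"modus": "window", "weapon": "none", "hour": 2,  "value": 300}
ranges = {"hour": 24, "value": 5000}   # observed ranges of numeric features
print(f"link score: {gower_similarity(c1, c2, ranges):.2f}")
```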
  • Publication
    Metadata only
  • Publication
    Metadata only
  • Publication
    Open access
    A Methodology for Extracting Knowledge about Controlled Vocabularies from Textual Data using FCA-Based Ontology Engineering
    (IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2018-12-03)
    We introduce an end-to-end methodology (from text processing to querying a knowledge graph) for knowledge extraction from text corpora, with a focus on a list of vocabularies of interest. We propose a pipeline that combines Natural Language Processing (NLP), Formal Concept Analysis (FCA), and ontology engineering techniques to build an ontology from textual data. We then extract knowledge about the controlled vocabularies by querying that knowledge graph, i.e., the engineered ontology. We demonstrate the significance of the proposed methodology by applying it to a corpus of 800 news articles and reports about companies and products in the IT and pharmaceutical domains, with a focus on a given list of 250 controlled vocabularies.
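    As a toy illustration of one step of such a pipeline, the sketch below builds a formal context (documents x controlled-vocabulary terms) from raw text and answers a simple derivation query. The paper's NLP, FCA, and ontology-engineering layers are far richer than this; every name and document here is invented.

```python
# Illustrative only: a formal context mapping documents to the controlled-
# vocabulary terms they mention, plus a simple FCA derivation query.
# Naive substring matching stands in for the paper's full NLP pipeline.
vocab = {"aspirin", "ibuprofen", "cloud", "security"}
docs = {
    "d1": "New study on aspirin and ibuprofen dosage.",
    "d2": "Cloud security issues for pharma companies using aspirin data.",
    "d3": "Cloud migration guide.",
}

# formal context: which document mentions which vocabulary term
context = {d: {t for t in vocab if t in text.lower()} for d, text in docs.items()}

def concept_of(term):
    """Derive both directions: (extent, intent) of the concept generated by `term`."""
    extent = {d for d, terms in context.items() if term in terms}
    intent = set.intersection(*(context[d] for d in extent)) if extent else set(vocab)
    return extent, intent

extent, intent = concept_of("aspirin")
print(f"documents mentioning 'aspirin': {extent}, shared terms: {intent}")
```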