Showing items 1 - 3 of 3
  • Publication
    Open access
    Robust inference with censored survival data
    (2022-01-09) ;
    Ronchetti, Elvezio
    Randomly censored survival data appear in a wide variety of applications in which the time until the occurrence of a certain event is not completely observable. In this paper, we assume that the statistician observes a possibly censored survival time along with a censoring indicator. In this setting, we study a class of M-estimators with a bounded influence function, in the spirit of the infinitesimal approach to robustness. We outline the main asymptotic properties of the robust M-estimators and characterize the optimal B-robust estimator according to two possible measures of sensitivity. Building on these results, we define robust testing procedures which are natural counterparts to the classical Wald, score, and likelihood ratio tests. The empirical performance of our robust estimators and tests is assessed in two extensive simulation studies. An application to data from a well-known medical study on head and neck cancer is also presented.
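The data structure the abstract assumes (a possibly censored survival time plus a censoring indicator) can be made concrete with a short simulation. This is a minimal sketch of the classical, non-robust baseline only: an exponential hazard model with independent exponential censoring, and the standard censored-data MLE (events over total time at risk). It is not the paper's optimal B-robust M-estimator; the distributional choices and parameter values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Censored-survival setup: true survival times T, independent censoring
# times C; we observe Y = min(T, C) and the indicator delta = 1{T <= C}.
n, lam_true = 20_000, 0.5              # exponential hazard (assumed model)
T = rng.exponential(1 / lam_true, n)
C = rng.exponential(1 / 0.2, n)        # gives roughly 30% censoring
Y = np.minimum(T, C)
delta = (T <= C).astype(float)

# Classical (non-robust) MLE for the exponential hazard under right
# censoring: number of observed events divided by total time at risk.
lam_hat = delta.sum() / Y.sum()
```

A bounded-influence M-estimator of the kind the paper studies would replace the unbounded score of this likelihood with a clipped version; the closed-form ratio above is the benchmark such robust procedures are compared against.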
  • Publication
    Open access
    Semiparametric segment M-estimation for locally stationary diffusions
    (2019) ;
    La Vecchia, Davide
    We develop and implement a novel M-estimation method for locally stationary diffusions observed at discrete time-points. We give sufficient conditions for the local stationarity of general time-inhomogeneous diffusions. Then we focus on locally stationary diffusions with time-varying parameters, for which we define our M-estimators and derive their limit theory.
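The segment idea in the abstract, estimating a slowly time-varying parameter block by block, can be illustrated on a toy time-inhomogeneous Ornstein-Uhlenbeck diffusion simulated by an Euler scheme. This is an assumed example, not the authors' semiparametric M-estimator: the drift form, the least-squares segment estimator, and all parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler scheme for dX_t = -kappa(t) X_t dt + sigma dW_t with a slowly
# varying mean-reversion kappa(t): a toy locally stationary diffusion.
n, dt, sigma = 40_000, 0.01, 0.3
t = np.linspace(0.0, 1.0, n)
kappa_t = 1.0 + 3.0 * t                # kappa drifts from 1 up to 4
X = np.zeros(n)
for i in range(n - 1):
    X[i + 1] = X[i] - kappa_t[i] * X[i] * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

# Segment estimation: on each block, treat the process as stationary
# and estimate kappa by least squares of the increments on -X dt.
def kappa_ls(x):
    dx = np.diff(x)
    z = -x[:-1] * dt
    return (z @ dx) / (z @ z)

segments = np.array_split(np.arange(n), 4)
kappa_hat = [kappa_ls(X[idx]) for idx in segments]
```

Each per-segment estimate tracks the local value of kappa(t), so the sequence of block estimates recovers the upward trend of the time-varying parameter.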
  • Publication
    Open access
    Error bounds for the convex loss Lasso in linear models
    (2017) ;
    Hannay, Mark
    In this paper, we investigate error bounds for convex loss functions for the Lasso in linear models, by first establishing a gap in the theory with respect to the existing error bounds. Then, under the compatibility condition, we recover bounds for the absolute value estimation error and the squared prediction error under mild conditions, which appear to be far more appropriate than the existing bounds for the convex loss Lasso. Interestingly, asymptotically the only difference between the new bounds of the convex loss Lasso and the classical Lasso is a term solely depending on a well-known expression in the robust statistics literature, appearing multiplicatively in the bounds. We show that this result holds whether or not the scale parameter needs to be estimated jointly with the regression coefficients. Finally, we use this ratio to optimize our bounds in terms of minimaxity.
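The estimator class the abstract studies, an l1-penalized fit under a convex loss other than the squared loss, can be sketched with a standard proximal-gradient (ISTA) solver for the Huber-loss Lasso. The clipped residual below is the bounded score familiar from the robust statistics literature; none of this is the paper's code or its bounds, and the loss, penalty level, and data-generating choices are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse linear model with a few gross outliers in the response.
n, p = 400, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [2.0, -3.0]
y = X @ beta_true + rng.standard_normal(n)
y[:10] += 15.0                         # gross outliers

def huber_psi(r, c=1.345):
    return np.clip(r, -c, c)           # bounded influence of each residual

# ISTA for the Huber-loss Lasso: gradient step on the smooth convex loss,
# then soft-thresholding for the l1 penalty. Step 1/L is valid since the
# Huber score has derivative at most 1.
lam = 0.1
L = np.linalg.norm(X, 2) ** 2 / n
beta = np.zeros(p)
for _ in range(2000):
    grad = X.T @ huber_psi(X @ beta - y) / n
    z = beta - grad / L
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

Despite the outliers, the bounded score keeps the fit close to the true sparse coefficients, which is the practical motivation for studying error bounds for this convex-loss variant rather than the squared-loss Lasso alone.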