Showing items 1 - 10 of 12
  • Publication
    Open access
    Modeling latent variables in economics and finance
    (2019) ;
    Boudt, Kris
    Abstract: Latent (i.e., unobservable) variables are the central subject of this thesis. They must be inferred using statistical models or observable proxies. The objectives of my doctoral thesis are to develop and test new statistical models to infer these variables and link them to the analysis and improvement of economic and financial decisions.
    In my first essay, I tackle the evaluation of volatility models which allow for (latent) structural breaks. It is of utmost importance to capture these breaks in a timely manner, as a precise measure of volatility is crucial for optimal decision-making that requires a trade-off between expected return and risk, as well as for applications in asset pricing and risk management. However, no empirical study has been done to evaluate the overall performance of volatility models that account for structural breaks. To that end, I perform a large-scale empirical study to compare the forecasting performance of single-regime and Markov-switching GARCH (MSGARCH) models from a risk management perspective. I find that, for daily, weekly, and ten-day equity log-returns, MSGARCH models yield more accurate Value-at-Risk, Expected Shortfall, and left-tail distribution forecasts than their single-regime counterparts. Also, my results indicate that accounting for parameter uncertainty improves left-tail predictions, independently of the inclusion of the Markov-switching mechanism.
    While my first essay tackles the modeling of latent variables from a statistical point of view, my second and third essays capture a more novel variable, namely the sentiment expressed in written communications.
    My second essay addresses the development and testing of new text-based proxies for economic sentiment. More specifically, I introduce a general sentiment engineering framework that optimizes the design of sentiment indices for forecasting purposes in a high-dimensional context. I apply the new methodology to the forecasting of US industrial production, which is usually predicted using quantitative variables from a large panel of indicators. I find that, compared to high-dimensional forecasting techniques based solely on economic and financial indicators, the additional use of optimized news-based sentiment values yields significant forecasting accuracy gains for the nine-month and annual growth rates of US industrial production.
    My third essay focuses on the analysis of the dynamics of abnormal tone or sentiment around the time of events. To do so, I introduce the Cumulative Abnormal Tone (CAT) event study and Generalized Word Power methodologies. I apply these methodologies to media reports in newswires, newspapers, and web publications about firms’ future performance published around the quarterly earnings announcements of non-financial S&P 500 firms over the period 2000–2016. I find that the abnormal tone is more sensitive to negative earnings surprises than positive ones. Additionally, I report that investors overreact to the abnormal tone contribution of web publications at earnings announcement dates, which generates a stock price reversal in the following month. This result is consistent with an overreaction pattern in the abnormal tone and psychological biases such as the representativeness heuristic. Moreover, it highlights that there is heterogeneity in the informational value of different types of media.
  • Publication
    Open access
    nse: Computation of numerical standard errors in R
    nse is an R package (R Core Team (2016)) for computing the numerical standard error (NSE), an estimate of the standard deviation of a simulation result if the simulation experiment were to be repeated many times. The package provides a set of wrappers around several R packages, which give access to more than thirty estimators, including batch means estimators (Geyer (1992, Section 3.2)), initial sequence estimators (Geyer (1992, Equation 3.3)), spectrum at zero estimators (Heidelberger and Welch (1981), Flegal and Jones (2010)), heteroskedasticity and autocorrelation consistent (HAC) kernel estimators (Newey and West (1987), Andrews (1991), Andrews and Monahan (1992), Newey and West (1994), Hirukawa (2010)), and bootstrap estimators (Politis and Romano (1992), Politis and Romano (1994), Politis and White (2004)).
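    A minimal batch-means sketch in Python illustrates the idea behind the simplest of these estimators (standard library only; this is an illustration of the concept, not the nse package API):

```python
import random
import statistics

def nse_batch_means(x, n_batches=30):
    """Batch-means estimate of the numerical standard error of mean(x).

    Consecutive batches of autocorrelated simulation output have nearly
    independent means, so the sample variance of the batch means gives
    an estimate of Var(mean(x)).
    """
    m = len(x) // n_batches  # batch size (any remainder is dropped)
    means = [statistics.fmean(x[i * m:(i + 1) * m]) for i in range(n_batches)]
    return (statistics.variance(means) / n_batches) ** 0.5

# AR(1) simulation output: positive autocorrelation makes the NSE much
# larger than the naive i.i.d. standard error of the mean.
random.seed(42)
y, prev = [], 0.0
for _ in range(100_000):
    prev = 0.9 * prev + random.gauss(0.0, 1.0)
    y.append(prev)

nse = nse_batch_means(y)
naive = statistics.stdev(y) / len(y) ** 0.5
```

    For this persistent AR(1) process the batch-means NSE comes out several times larger than the naive standard error, which is exactly the autocorrelation effect the estimators in the package are designed to account for.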
  • Publication
    Open access
    Stress-testing with parametric models and Fully Flexible Probabilities
    We propose a simple methodology to simulate scenarios from a parametric risk model while accounting for stress-test views via fully flexible probabilities (Meucci, 2010, 2013).
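    The mechanics can be sketched in a few lines of Python (standard library only; the Gaussian scenario model, the numbers, and the crash-region view are illustrative placeholders, not the paper's setup):

```python
import math
import random

# Draw scenarios once from a parametric model, then encode a stress-test
# view by reweighting the scenario probabilities instead of re-simulating.
random.seed(7)
n = 50_000
scenarios = [random.gauss(0.0005, 0.01) for _ in range(n)]  # daily P&L model

# Baseline: every scenario carries probability 1/n.
base_mean = sum(scenarios) / n

# Stressed view: overweight scenarios near and below the 5th percentile
# with a smooth exponential kernel, then renormalize to probabilities.
threshold = sorted(scenarios)[int(0.05 * n)]
weights = [math.exp(-max(x - threshold, 0.0) / 0.01) for x in scenarios]
total = sum(weights)
probs = [w / total for w in weights]

# Any statistic is now a probability-weighted moment of the same scenarios.
stressed_mean = sum(p * x for p, x in zip(probs, scenarios))
```

    Because only the probabilities change, the same scenario set supports the baseline and any number of stressed views at essentially no extra simulation cost.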
  • Publication
    Open access
    A Note on Jointly Backtesting Models for Multiple Assets and Horizons
    (2016-5) ;
    Guerrouaz, Anas
    ;
    Hoogerheide, Lennart
    We propose a simulation-based methodology, which allows us to test the performance of multi-level and/or multi-horizon value-at-risk forecasts.
  • Publication
    Open access
    Worldwide equity risk prediction
    (2014) ;
    Hoogerheide, Lennart
    Various GARCH models are applied to daily returns of more than 1200 constituents of major stock indices worldwide. The value-at-risk forecast performance is investigated for different markets and industries, considering the test for correct conditional coverage using the false discovery rate (FDR) methodology. For most of the markets and industries we find the same two conclusions. First, an asymmetric GARCH specification is essential when forecasting the 95% value-at-risk. Second, for both the 95% and 99% value-at-risk it is crucial that the innovations’ distribution is fat-tailed (e.g., Student-t or – even better – a non-parametric kernel density estimate).
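    The practical effect of the innovation distribution on a VaR forecast can be sketched in Python (standard library only; the volatility forecast and the degrees of freedom are made-up illustration values, not estimates from the paper):

```python
import random
import statistics

random.seed(1)
nu = 5         # degrees of freedom of the Student-t innovations
sigma = 0.015  # hypothetical one-day-ahead volatility forecast (1.5%)

def draw_std_t(nu):
    """Standardized Student-t draw (unit variance) via Gaussian over chi."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    t = z / (chi2 / nu) ** 0.5
    return t * ((nu - 2) / nu) ** 0.5  # rescale to unit variance

# Empirical 99% quantile of the standardized Student-t versus the
# exact 99% quantile of the standard normal.
draws = sorted(draw_std_t(nu) for _ in range(200_000))
q99_t = draws[int(0.99 * len(draws))]
q99_norm = statistics.NormalDist().inv_cdf(0.99)

var_t = sigma * q99_t        # 99% VaR with fat-tailed innovations
var_norm = sigma * q99_norm  # 99% VaR with normal innovations
```

    The fat-tailed 99% quantile exceeds the normal one, so a normal model understates the 99% VaR, in line with the abstract's finding that a fat-tailed innovation distribution is crucial in the far tail.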
  • Publication
    Metadata only
    Estimation frequency of GARCH-type models: Impact on Value-at-Risk and Expected Shortfall forecasts?
    (2014) ;
    Hoogerheide, Lennart
    We analyze the impact of the estimation frequency - updating parameter estimates on a daily, weekly, monthly or quarterly basis - for commonly used GARCH models in a large-scale study, using more than twelve years (2000-2012) of daily returns for constituents of the S&P 500 index. We assess the implication for one-day ahead 95% and 99% Value-at-Risk (VaR) forecasts with the test for correct conditional coverage of Christoffersen (1998) and for Expected Shortfall (ES) forecasts with the block-bootstrap test of ES violations of Jalal and Rockinger (2008). Using the false discovery rate methodology of Storey (2002) to estimate the percentage of stocks for which the model yields correct VaR and ES forecasts, we conclude that there is no difference in performance between updating the parameter estimates of the GARCH equation at a daily or weekly frequency, whereas monthly or even quarterly updates are only marginally outperformed.
  • Publication
    Open access
    Cross-sectional distribution of GARCH coefficients across S&P 500 constituents
    (2013) ;
    Hoogerheide, Lennart
    We investigate the time-variation of the cross-sectional distribution of asymmetric GARCH model parameters over the S&P 500 constituents for the period 2000-2012. We find the following results. First, the unconditional variances in the GARCH model obviously show major time-variation, with a high level after the dot-com bubble and the highest peak in the latest financial crisis. Second, in these more volatile periods it is especially the persistence of deviations of volatility from its unconditional mean that increases. Particularly in the latest financial crisis, the estimated models tend to Integrated GARCH models, which can cope with an abrupt regime-shift from low to high volatility levels. Third, the leverage effect tends to be somewhat higher in periods with higher volatility. Our findings are mostly robust across sectors, except for the technology sector, which exhibits a substantially higher volatility after the dot-com bubble. Further, the financial sector shows the highest volatility during the latest financial crisis. Finally, in an analysis of different market capitalizations, we find that small cap stocks have a higher volatility than large cap stocks where the discrepancy between small and large cap stocks increased during the latest financial crisis. Small cap stocks also have a larger conditional kurtosis and a higher leverage effect than mid cap and large cap stocks.
  • Publication
    Open access
    Density prediction of stock index returns using GARCH models: Frequentist or Bayesian estimation?
    (2012)
    Hoogerheide, Lennart
    ;
    Corré, Nienke
    Using GARCH models for density prediction of stock index returns, a comparison is provided between frequentist and Bayesian estimation. No significant difference is found between qualities of whole density forecasts, whereas the Bayesian approach exhibits significantly better left-tail forecast accuracy.
  • Publication
    Open access
    Efficient Bayesian estimation and combination of GARCH-type models
    (London: Klaus Bocker, 2010) ;
    Hoogerheide, Lennart
    This chapter proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation to the posterior density of the model parameters. This density is then used in importance sampling for model estimation, model selection, and model combination. The procedure is fully automatic, which avoids the difficult and time-consuming tuning of MCMC strategies. The AdMitIS methodology is illustrated with an empirical application to S&P index log-returns, where non-nested GARCH-type models are estimated and combined to predict the distribution of one-day-ahead log-returns.
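    The importance-sampling step can be illustrated with a one-dimensional Python sketch (standard library only; the toy target density and the single Student-t proposal are stand-ins for the posterior and the fitted mixture in AdMitIS):

```python
import math
import random

random.seed(3)
nu = 3  # heavy-tailed proposal, so its tails dominate the target's

def log_target(x):
    """Unnormalized toy 'posterior' density: a damped, shifted Laplace."""
    return -abs(x - 1.0) - 0.1 * x * x

def draw_t(nu):
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    return z / (chi2 / nu) ** 0.5

def log_t_pdf(x, nu):
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi))
    return c - (nu + 1) / 2 * math.log1p(x * x / nu)

# Self-normalized importance sampling: weight each proposal draw by
# target / proposal, then average with the normalized weights.
xs = [draw_t(nu) for _ in range(100_000)]
logw = [log_target(x) - log_t_pdf(x, nu) for x in xs]
shift = max(logw)
w = [math.exp(lw - shift) for lw in logw]  # stabilized in log space
total = sum(w)
post_mean = sum(wi * xi for wi, xi in zip(w, xs)) / total
```

    AdMitIS replaces this single Student-t proposal with an automatically fitted mixture of Student-t densities, which keeps the importance weights well-behaved even for skewed or multimodal posteriors.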
  • Publication
    Open access
    Bayesian estimation of the GARCH(1,1) model with Student-t innovations in R
    (2010) ;
    Hoogerheide, Lennart
    This paper presents the R package bayesGARCH which provides functions for the Bayesian estimation of the parsimonious but effective GARCH(1,1) model with Student-t innovations. The estimation procedure is fully automatic and thus avoids the time-consuming and difficult task of tuning a sampling algorithm. The usage of the package is shown in an empirical application to exchange rate log-returns.
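    The variance recursion at the heart of the model can be sketched in Python (standard library only; parameter values are illustrative, and normal innovations stand in for the package's Student-t innovations for brevity):

```python
import random

random.seed(0)
omega, alpha, beta = 0.05, 0.10, 0.85  # hypothetical GARCH(1,1) parameters

def garch_filter(returns, omega, alpha, beta):
    """Conditional-variance path of GARCH(1,1):
    sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at the unconditional variance
    for eps in returns[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2

# Simulate returns from the model, then recover the variance path by filtering.
n, eps, sig2 = 2000, [], omega / (1.0 - alpha - beta)
for _ in range(n):
    e = random.gauss(0.0, 1.0) * sig2 ** 0.5
    eps.append(e)
    sig2 = omega + alpha * e ** 2 + beta * sig2

path = garch_filter(eps, omega, alpha, beta)
```

    The Bayesian layer that bayesGARCH adds on top of this recursion samples the parameters and the Student-t degrees of freedom from their posterior instead of plugging in point estimates.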