# Econometrics and Statistics Seminar

Thursdays, 11:00-12:00, in the Faculty Lounge, Juridicum, Adenauerallee 24-42, 53113 Bonn

**02.02.2018, 14:00-15:00**

**Wolfgang Karl Härdle** (Humboldt-Universität zu Berlin)

**Title:** What are cryptocurrencies? A visit to Pythia

**Abstract:** Blockchains are one of the latest examples of cryptography at work. Mathematical scrambling keeps information saved on a blockchain secret, yet paradoxically this very mechanism makes it a perfect tool for open dealing. How can this paradox generate so much trust with so much ba…

**25.01.2018**

**Christoph Breunig** (HU Berlin)

**Title:** Varying Random Coefficient Models

**Abstract:** This paper is about inference in a varying random coefficients (VRC) model. While the VRC model is linear, its coefficients are determined by nonlinear functions of observed regressors and an additively separable error term. This model generalizes the ordinary random coefficient model by allowing for nonlinearities in the observed regressors. We propose an estimator of the VRC density based on weighted sieve minimum distance. Its L2 rate of convergence is derived and shown to be sensitive to the support of the covariates. Pointwise limit theory for linear functionals is derived. In addition, we provide a bootstrap procedure for uniform confidence bands. The proposed estimator is easy to implement and performs well in finite samples. The procedure is applied to analyze heterogeneity in a hedonic model of house prices.
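A model of this type is easy to simulate. The sketch below uses illustrative functional forms and variable names of our own choosing (not from the paper): the coefficients are nonlinear functions of an observed regressor `w` plus additively separable errors, and plain OLS recovers only the average coefficients, i.e. exactly the heterogeneity a density estimator for the random coefficients is designed to capture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Observed regressors: x enters with a random slope, w shifts the coefficients.
x = rng.normal(size=n)
w = rng.uniform(-1.0, 1.0, size=n)

# Random coefficients: nonlinear functions of w plus additively separable errors.
# (Illustrative functional forms; the model leaves them unspecified.)
eps0 = rng.normal(scale=0.3, size=n)
eps1 = rng.normal(scale=0.3, size=n)
b0 = np.sin(np.pi * w) + eps0   # varying random intercept
b1 = 1.0 + w**2 + eps1          # varying random slope

y = b0 + b1 * x

# A naive OLS fit ignores the heterogeneity and only recovers the averages
# E[b0] = 0 and E[b1] = 1 + E[w^2] = 4/3.
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_ols)  # roughly [0, 1.33]
```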

**18.01.2018**

**Fabian Krüger** (Uni Heidelberg)

**Title:** Probabilistic Forecasting and Comparative Model Assessment Based on Markov Chain Monte Carlo Output

**Abstract:** In Bayesian inference, predictive distributions are typically available only through a sample generated via Markov chain Monte Carlo (MCMC) or related algorithms. In this paper, we conduct a systematic analysis of how to make and evaluate probabilistic forecasts from such simulation output. Based on proper scoring rules, we develop a notion of consistency that makes it possible to assess the adequacy of methods for estimating the stationary distribution underlying the simulation output. We then provide asymptotic results that account for the salient features of Bayesian posterior simulators, and derive conditions under which choices from the literature satisfy our notion of consistency. Importantly, these conditions depend on the scoring rule being used, such that the choices of approximation method and scoring rule are intertwined. While the logarithmic rule requires fairly stringent conditions, the continuous ranked probability score (CRPS) yields consistent approximations under minimal assumptions. These results are illustrated in a simulation study and an economic data example. Overall, we find that mixture-of-parameters approximations which exploit the parametric structure of Bayesian models perform particularly well.
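The CRPS of a forecast represented only by simulation draws can be computed directly from its kernel representation CRPS(F, y) = E|X - y| - ½ E|X - X'|, replacing both expectations by sample averages. A minimal sketch (the function name is our own; the paper's mixture-of-parameters approximation is not implemented here):

```python
import numpy as np

def crps_from_sample(draws, y):
    """Empirical CRPS of the forecast given by the draws, evaluated at outcome y.

    Uses the kernel representation CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|,
    with both expectations replaced by sample averages over the draws.
    """
    draws = np.asarray(draws, dtype=float)
    term1 = np.mean(np.abs(draws - y))
    term2 = 0.5 * np.mean(np.abs(draws[:, None] - draws[None, :]))
    return term1 - term2

# Toy example: "MCMC output" drawn from a standard normal posterior predictive.
rng = np.random.default_rng(1)
draws = rng.normal(size=2000)
crps_val = crps_from_sample(draws, 0.0)
print(crps_val)  # close to CRPS(N(0,1), 0) = (sqrt(2) - 1) / sqrt(pi), about 0.234
```

Note that this estimator is O(m²) in the number of draws; for long MCMC runs one would thin the sample or use a sorted-sample formula instead.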

**11.01.2018**

**Claudia Kirch** (Uni Magdeburg)

**Title:** Frequency domain likelihood approximations for time series bootstrapping and Bayesian nonparametrics

**Abstract:** A large class of time series methods is based on Fourier analysis, which can be considered a whitening of the data and gives rise, for example, to the famous Whittle likelihood. In particular, frequency domain bootstrap methods have been successfully applied in a wide range of situations. In this talk, we first review existing frequency domain bootstrap methodology for stationary time series before generalizing it to locally stationary time series. To this end, we introduce a moving Fourier transformation that captures the time-varying spectral density in a similar manner as the classical Fourier transform does for stationary time series. We obtain consistent estimators for the local spectral densities and show that the corresponding bootstrap time series correctly mimics the covariance behavior of the original time series. The approach is illustrated by means of some simulations and an application to a wind data set.

All time series bootstrap methods implicitly use a likelihood approximation, which could be used explicitly in a Bayesian nonparametric framework for time series. So far, only the Whittle likelihood has been used in this context to obtain a nonparametric Bayesian estimate of the spectral density of a stationary time series. In the second part of this talk, we generalize this approach based on the implicit likelihood from the autoregressive-aided periodogram bootstrap introduced by Kreiss and Paparoditis (2003). This likelihood combines a parametric approximation with a nonparametric correction, making it particularly attractive for Bayesian applications. Some theoretical results about this likelihood approximation, including posterior consistency in the Gaussian case, are given. The performance is illustrated in simulations and an application to LIGO gravitational wave data.
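For a stationary series, the Whittle likelihood approximates the Gaussian log-likelihood in the frequency domain by l_W = -sum_j [ log f(lambda_j) + I(lambda_j) / f(lambda_j) ] over the Fourier frequencies, where I is the periodogram and f the candidate spectral density. A minimal sketch fitting an AR(1) coefficient this way (illustrative only; the talk's moving Fourier transform and Bayesian extensions are not implemented):

```python
import numpy as np

def periodogram(x):
    """Periodogram I(lambda_j) at the positive Fourier frequencies 2*pi*j/n."""
    n = len(x)
    dft = np.fft.rfft(x - x.mean())
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    freqs = 2 * np.pi * np.arange(len(dft)) / n
    return freqs[1:], I[1:]  # drop the zero frequency

def ar1_spec(lam, phi, sigma2=1.0):
    """Spectral density of the AR(1) process x_t = phi * x_{t-1} + e_t."""
    return sigma2 / (2 * np.pi * (1.0 - 2.0 * phi * np.cos(lam) + phi ** 2))

def whittle_loglik(phi, freqs, I):
    f = ar1_spec(freqs, phi)
    return -np.sum(np.log(f) + I / f)

# Simulate an AR(1) with phi = 0.6 and estimate phi by maximizing
# the Whittle likelihood over a grid.
rng = np.random.default_rng(2)
n = 4000
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + e[t]

freqs, I = periodogram(x)
grid = np.linspace(-0.95, 0.95, 381)
phi_hat = grid[np.argmax([whittle_loglik(p, freqs, I) for p in grid])]
print(phi_hat)  # close to the true value 0.6
```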

**14.12.2017**

**Daniel Gutknecht** (Uni Mannheim)

**Title:** Intercept Estimation in (Non-)Additive Semiparametric Sample Selection Models

**Abstract:** This paper develops new estimators of the intercept for semiparametric sample selection models, which allow one to recover treatment effects for these models in non-experimental settings. In the linear additive case, we introduce a local polynomial estimator which achieves the `optimal' univariate nonparametric rate and improves over existing estimators, as it does not require independence of the selection error and the regressor(s) and may be implemented using a data-driven bandwidth procedure. In the additive nonlinear and in the multiplicative model, estimation of the intercept has, to the best of our knowledge, not yet been addressed. We aim to fill this gap, deriving a bias-corrected nonlinear least squares estimator. Depending on the properties of the (conditional) distribution of the index-valued instrument vector, this estimator converges at an `optimal' univariate nonparametric rate. Revisiting a famous training data example, we find that applying our method to the non-experimental control group recovers the experimental average treatment effect.

**07.12.2017**

**Axel Bücher** (Uni Bochum)

**Title:** Weak convergence of a pseudo maximum likelihood estimator for the extremal index (joint work with Betina Berghaus)

**Abstract:** The extremes of a stationary time series typically occur in clusters. A primary measure for this phenomenon is the extremal index, representing the reciprocal of the expected cluster size. Both a disjoint and a sliding blocks estimator for the extremal index are analyzed in detail. In contrast to many competitors, the estimators only depend on the choice of one parameter sequence. We derive an asymptotic expansion, prove asymptotic normality and show consistency of an estimator for the asymptotic variance. Explicit calculations in certain models and a finite-sample Monte Carlo simulation study reveal that the sliding blocks estimator outperforms other blocks estimators, and that it is competitive to runs- and inter-exceedance estimators in various models. The methods are applied to a variety of financial time series.
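Since the extremal index is the reciprocal of the expected cluster size, a textbook disjoint-blocks estimator divides the number of blocks containing an exceedance by the total number of exceedances. The sketch below is a deliberately simplified version of that idea (not the exact estimators analyzed in the talk), applied to a moving-maximum process whose extremal index is known to be 1/2:

```python
import numpy as np

def blocks_extremal_index(x, u, block_len):
    """Simple disjoint-blocks estimator of the extremal index at threshold u.

    theta_hat = (# blocks containing an exceedance) / (# exceedances),
    i.e. the reciprocal of the average cluster size, identifying each
    block with at most one cluster.
    """
    n = (len(x) // block_len) * block_len  # trim to a whole number of blocks
    exc = x[:n] > u
    blocks = exc.reshape(-1, block_len)
    return np.sum(blocks.any(axis=1)) / np.sum(exc)

# Moving-maximum process x_t = max(z_t, z_{t-1}): every large innovation is
# felt for two consecutive periods, so clusters have size 2 and theta = 1/2.
rng = np.random.default_rng(3)
z = 1.0 / rng.uniform(size=2_000_001)  # heavy-tailed (unit Pareto) innovations
x = np.maximum(z[1:], z[:-1])
u = np.quantile(x, 0.999)
theta_hat = blocks_extremal_index(x, u, block_len=50)
print(theta_hat)  # roughly 0.5
```

The choice of threshold and block length is the one tuning issue the abstract alludes to: the estimators discussed in the talk depend on a single such parameter sequence.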

**30.11.2017**

**Olivier Linton** (Cambridge)

**Title:** Estimation and Inference in Semiparametric Quantile Factor Models (joint work with Shujie Ma and Jiti Gao)

**Abstract:** We propose an estimation methodology for a semiparametric quantile factor panel model. We provide tools for inference that are robust to the existence of moments and to the form of weak cross-sectional dependence in the idiosyncratic error term. We apply our method to daily stock return data.