Econometrics and Statistics Seminar
Time and place: Thursdays, 14:00-15:00 h in the Faculty Lounge (Room 0.036), Juridicum, Adenauerallee 24-42, 53113 Bonn
October 25, 2018 Andreas Dzemski, University of Gothenburg
Title: "Confidence set for group membership" (together with Ryo Okui)
Abstract: We develop new procedures to quantify the statistical uncertainty from sorting units in panel data into groups using data-driven clustering algorithms. In our setting, each unit belongs to one of a finite number of latent groups and its regression curve is determined by which group it belongs to. Our main contribution is a new joint confidence set for group membership. Each element of the joint confidence set is a vector of possible group assignments for all units. The vector of true group memberships is contained in the confidence set with a pre-specified probability. The confidence set inverts a test for group membership. This test exploits a characterization of the true group memberships by a system of moment inequalities. Our procedure solves a high-dimensional one-sided testing problem and tests group membership simultaneously for all units. We also propose a procedure for identifying units for which group membership is obviously determined. These units can be ignored when computing critical values. We justify the joint confidence set under N,T→∞ asymptotics where we allow T to be much smaller than N. Our arguments rely on the theory of self-normalized sums and high-dimensional central limit theorems. We contribute new theoretical results for testing problems with a large number of moment inequalities, including an anti-concentration inequality for the quasi-likelihood ratio (QLR) statistic. Monte Carlo results indicate that our confidence set has adequate coverage and is informative. We illustrate the practical relevance of our confidence set in two applications.
November 22, 2018 Anna Simoni, CREST, Paris
Title: "Bayesian Estimation and Comparison of Conditional Moment Models"
Abstract: In this paper we consider models characterized by conditional moment conditions and develop a semiparametric Bayesian inference procedure for them. Our procedure utilizes and completes the Bayesian exponentially tilted empirical likelihood (BETEL) framework developed in Chib, Shin and Simoni (2018) for unconditional moment condition models. The starting point is a conversion of the conditional moments into a sequence of unconditional moments by using a vector of approximating functions (such as tensor splines based on the splines of each conditioning variable) whose dimension increases with the sample size. We establish that the BETEL posterior distribution satisfies the Bernstein-von Mises theorem, subject to a rate condition on the number of approximating functions. We also develop an approach based on marginal likelihoods and posterior odds ratios for comparing different conditional moment restricted models and establish the model selection consistency of this procedure. Unlike the setup in Chib, Shin and Simoni (2018), the model selection theory here is different because the extra parameter needed to validly compare such models has a dimension that grows with the sample size, and, therefore, the rate of contraction of the posterior distribution is nonparametric. We treat both the case where the models to be compared are nested and the case where they are non-nested. We establish that when we compare correctly specified models, the marginal likelihood criterion selects the model that is estimable at the faster rate. In the nested case, the model selected by the posterior odds criterion is the model that is estimable at the parametric rate. When we compare misspecified models, we select the model that is less misspecified, that is, the model that contains the smaller number of misspecified moment restrictions. This theory breaks substantial new ground in the area of Bayesian model comparisons. Several examples illustrate the framework and results.
November 29, 2018 Johan Vikström, Uppsala University (cancelled)
Title: "Empirical Monte Carlo Evaluation of the Timing-of-Events Approach"
Abstract: This paper uses an Empirical Monte Carlo simulation approach to study the estimation of Timing-of-Events (ToE) models. We exploit rich Swedish data on job seekers with information on participation in a training program. The real data are used to simulate placebo treatment durations based on a large set of covariates. We then generate correlated unobserved heterogeneity by omitting some of these covariates when estimating the ToE models. We estimate ToE models with a discrete support-point distribution for the unobserved heterogeneity and compare different specifications of the model. One result is that the ToE model performs well, in particular if time-varying covariates in the form of calendar-time variation are exploited. For the discrete support distribution, we find that both over-correcting for unobserved heterogeneity with too many support points and under-correcting with too few points lead to large biases. Another result is that information criteria that penalize parameter abundance are a very useful way to select the number of support points, whereas information criteria with little penalty should be avoided because they lead to over-correction.
January 10, 2019 Christophe Ley, Ghent University
January 17, 2019 Koen Jochmans, Cambridge University
January 24, 2019 Johannes Lederer, University of Bochum
January 31, 2019 David Kraus, University of Brno