Bayes Lectures Programme

All lectures will be held on the fourth floor of the Informatics Forum, in room 4.31/33; breaks will be in Mini-Forum 2, room 4.40.

The topics for the discussion sessions will be decided at the meeting. Bring your ideas.

WEDNESDAY 29 AUGUST
13:30 – 13:55 Welcome, Tea + Coffee
13:55 – 14:00 Opening remarks
14:00 – 15:15 New Challenges and Bayes: The world of computer models
M J Bayarri, Universitat de València, Spain.
15:15 – 15:45 Tea break
15:45 – 17:00 Aspects of spatial point process modelling and Bayesian inference
Jesper Møller, Aalborg University, Denmark.
17:00 – 17:15 Discussion session planning
17:15 – 19:00 Poster reception

THURSDAY 30 AUGUST
9:45 – 11:00 Confidence in Nonparametric Bayes?
Aad van der Vaart, Leiden University, Netherlands.
11:00 – 11:30 Tea break
11:30 – 13:00 General Discussion session 1
13:00 – 14:00 Lunch
14:00 – 15:15 Safe Learning: How to adjust the learning rate in Bayesian inference when all models are wrong
Peter Grünwald, Centrum voor Wiskunde en Informatica, Netherlands.
15:15 – 15:45 Tea break
15:45 – 17:00 General Discussion session 2

New Challenges and Bayes: The world of computer models
M J Bayarri, Universitat de València, Spain.

[slides]

Computer models (or “simulators”), like rock and roll, are here to stay. They are numerical solutions of complex mathematical/physical models that try to ‘mimic’ reality. Analyses using data from both computer models and the field are very tricky and abound in uncertainties that need to be quantified, transmitted, and combined; Bayes methods are ideally suited for these tasks.

In this talk, we concentrate on an area in which computer models fit naturally: the quantification of risks. Indeed, because catastrophic events are fortunately rare, it is generally not appropriate to use purely statistical models for meaningful quantification of the risk of hazards. A promising approach uses computer models to simulate the phenomena under extreme conditions, thus allowing extrapolation beyond the range of the data. Our proposed approach combines a state-of-the-art computer model, models for extreme events, statistical models to account for the numerous uncertainties present, and spatial models to interpolate the output of the complex computer model at untried inputs. The uncertainties are combined through a Bayesian analysis. The methodology is illustrated with catastrophic pyroclastic flows of the Soufrière Hills Volcano on the island of Montserrat.
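
A minimal sketch, in Python, of the interpolation step mentioned above: a Gaussian-process emulator predicting the output of an expensive computer model at untried inputs from a handful of runs. The toy simulator, kernel, and settings are illustrative choices of the editor, not those used in the talk.

# Sketch only: Gaussian-process emulation of a toy "simulator".
# The simulator, length-scale, and jitter are illustrative, not from the talk.
import numpy as np

def simulator(x):
    # stand-in for an expensive computer model run
    return np.sin(3 * x) + 0.5 * x

def sq_exp_kernel(a, b, length=0.3, var=1.0):
    # squared-exponential covariance between 1-D input vectors a and b
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# a few "expensive" model runs at design points
x_train = np.linspace(0.0, 2.0, 6)
y_train = simulator(x_train)

# GP posterior mean and standard deviation at untried inputs
x_new = np.linspace(0.0, 2.0, 101)
K = sq_exp_kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))
K_s = sq_exp_kernel(x_new, x_train)
mean = K_s @ np.linalg.solve(K, y_train)
cov = sq_exp_kernel(x_new, x_new) - K_s @ np.linalg.solve(K, K_s.T)
sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))

print(mean[:5], sd[:5])  # emulator prediction and its uncertainty

In the setting of the talk, the emulator's predictive uncertainty is one of the uncertainties that the overall Bayesian analysis combines.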

Aspects of spatial point process modelling and Bayesian inference
Jesper Møller, Aalborg University, Denmark.

[slides]

We start with a brief introduction to fundamental spatial point process characteristics and models, including Poisson processes, Poisson cluster processes, Cox processes, and Gibbs point processes. We then review Bayesian modelling strategies and discuss the computational aspects of estimating intensity functions and spatial interactions of spatial point processes. If time allows, we will also consider determinantal point processes, which constitute a promising alternative to repulsive Gibbs point processes. Background material can be found in the literature below and the references therein.
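
As a toy illustration of two of the model classes mentioned above, the sketch below (the editor's own minimal example, not material from the talk) simulates a homogeneous Poisson process and a Thomas-type Poisson cluster process on the unit square; intensities and the cluster spread are illustrative.

# Sketch only: homogeneous Poisson process and Thomas (Poisson cluster)
# process on the unit square.  Edge effects are ignored in this toy example.
import numpy as np

rng = np.random.default_rng(1)

def poisson_process(intensity):
    # number of points is Poisson(intensity * area); the unit square has area 1
    n = rng.poisson(intensity)
    return rng.uniform(size=(n, 2))

def thomas_process(parent_intensity, mean_offspring, sigma):
    # parents form a Poisson process; each parent produces a Poisson number
    # of offspring displaced by isotropic Gaussian noise
    parents = poisson_process(parent_intensity)
    points = [p + sigma * rng.standard_normal((rng.poisson(mean_offspring), 2))
              for p in parents]
    return np.vstack(points) if points else np.empty((0, 2))

print(poisson_process(100).shape)        # roughly 100 points
print(thomas_process(10, 10, 0.03).shape)  # clustered pattern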

Confidence in Nonparametric Bayes?
Aad van der Vaart, Leiden University, Netherlands.

[slides]

The purpose of nonparametric inference is to avoid restrictive a priori assumptions. Nonparametric Bayesian inference starts with a prior on a function space, which should not restrict the shape of the unknown function too much; we illustrate this with the example of Gaussian process priors. The usual Bayesian machinery produces a posterior distribution, also on the function space, which one would like to use both for estimating the unknown function and for quantifying the remaining uncertainty of the inference. We study the success of these procedures from a frequentist perspective. For the latter task this involves the frequentist coverage of a posterior credible set, a central set of prescribed posterior probability. We show that there is a danger of prior oversmoothing, and we ask some questions about preventing this by a hierarchical or empirical Bayes method.
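
A small numerical illustration of the coverage issue, in a toy conjugate normal model rather than the function-space setting of the talk (all numbers are the editor's illustrative choices): when the prior is far too concentrated, the 95% credible interval covers the truth much less often than 95% of the time.

# Sketch only: frequentist coverage of a 95% posterior credible interval
# in a toy conjugate normal model with a N(0, prior_sd^2) prior.
import numpy as np

rng = np.random.default_rng(0)

def coverage(prior_sd, true_theta=1.0, sigma=1.0, n=20, reps=5000):
    # data: x_i ~ N(theta, sigma^2); posterior is normal by conjugacy
    post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
    hits = 0
    for _ in range(reps):
        x = rng.normal(true_theta, sigma, size=n)
        post_mean = post_var * (x.sum() / sigma**2)   # prior mean is 0
        half = 1.96 * np.sqrt(post_var)
        hits += (post_mean - half <= true_theta <= post_mean + half)
    return hits / reps

print("diffuse prior:      ", coverage(prior_sd=10.0))  # close to 0.95
print("oversmoothing prior:", coverage(prior_sd=0.1))   # far below 0.95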

Safe Learning: How to adjust the learning rate in Bayesian inference when all models are wrong
Peter Grünwald, Centrum voor Wiskunde en Informatica, Netherlands.

[slides]

Standard Bayesian inference can behave badly if the model under consideration is wrong: in some simple settings, the posterior fails to concentrate even in the limit of infinite sample size. We introduce a test that can tell from the data whether we are heading for such a situation. If we are, we can adjust the learning rate (equivalently: make the prior lighter-tailed, or penalize the likelihood more) in a data-dependent way. The resulting “safe” estimator continues to achieve good rates with wrong models. For example, when applied to a union of classification models, the safe estimator achieves the optimal rate for the Tsybakov exponent of the underlying distribution. This establishes a connection between Bayesian inference (based on probability models) and statistical learning theory (based on predictor models such as classifiers).
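
For concreteness, the learning-rate adjustment can be read as replacing the usual posterior by a tempered (generalized) posterior, proportional to prior(theta) times likelihood(theta)^eta with learning rate eta. The sketch below is the editor's toy grid computation of such a tempered posterior for a normal location model; it only shows the mechanics and is not the data-dependent procedure from the talk.

# Sketch only: tempered ("generalized") posterior on a parameter grid,
#   posterior(theta) proportional to prior(theta) * likelihood(theta)**eta,
# where eta is the learning rate.  Model, prior, and eta values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=50)    # toy data
theta = np.linspace(-3, 3, 601)      # parameter grid

def tempered_posterior(eta):
    log_prior = -0.5 * theta**2      # N(0, 1) prior, up to a constant
    # log-likelihood of a N(theta, 1) model, summed over the data
    log_lik = -0.5 * ((x[:, None] - theta[None, :]) ** 2).sum(axis=0)
    log_post = log_prior + eta * log_lik
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

for eta in (1.0, 0.5, 0.1):
    post = tempered_posterior(eta)
    m = np.dot(theta, post)
    sd = np.sqrt(np.dot(theta**2, post) - m**2)
    print(f"eta={eta}: posterior mean {m:.3f}, sd {sd:.3f}")

Smaller eta flattens the posterior: the data are weighted less relative to the prior, which is the kind of adjustment the "safe" estimator makes in a data-dependent way.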