This colloquium takes place every other Wednesday afternoon, 16:00-17:00. For more information, please contact one of the organizers, Joost Hulshof and Dennis Dobler.

A database of earlier years' talks can be found here.

## Upcoming talks in 2019:

Wed 20 March: **Bernard Geurts** (UTwente), Room WN-P647, 16:00-17:00

*Title:* Mathematics for Turbulence

*Abstract:* Turbulent flow arises in a wide variety of natural and technological situations. While the full richness of turbulence is appreciated qualitatively, a quantitatively accurate prediction is often outside the scope of numerical computations. As an alternative, filtered flow descriptions, such as large-eddy simulation (LES), have been proposed and studied intensively, promising a combination of accuracy and computational feasibility. A brief review of mathematical cornerstones for LES is given. Many heuristic closure models for small-scale turbulence have been put forward to represent dynamic small-scale effects on the large-scale characteristics of a flow. While these models are often effective in reducing the dynamic complexity of the LES approach, accuracy limitations of LES are a matter of ongoing discussion. In this presentation, mathematical regularization for turbulence, pioneered by Leray in the 1930s, is explored. Following the regularization approach for the nonlinear convective terms, the closure model is uniquely connected to the underlying regularization principle, thereby bypassing the heuristic closure modeling that is characteristic of the filtering approach to LES. A number of regularization models will be reviewed and their performance in turbulence will be discussed. It will be shown that regularization methods can be accurate at strongly reduced computational cost.

Wed 01 May: **Magdalena Kedziorek** (UU), Room WN-S623, 16:00-17:00

*Title:* TBA

## Previous talks:

Wed 06 March: **Ronald Meester** (VU Amsterdam), room WN-P647, 16:00-17:00

*Title:* The DNA Database Controversy 2.0

*Abstract:* What is the evidential value of a unique match of a DNA profile in a database? Although the probabilistic analysis of this problem is in principle not difficult, it was the subject of a heated debate in the literature around 15 years ago, to which today's speaker also contributed. Very recently, to my surprise, the debate was re-opened by the publication of a paper by Wixted, Christenfeld and Rouder, in which a new element was introduced into the discussion. In this lecture I will first review the problem, together with the principal solution. Then I will explain what has recently been proposed as a new element in the analysis, and also explain why this new ingredient does not add anything and only obscures the picture. The fact that not everybody agrees with us will be illustrated by some interesting quotes from the recent literature, which might be a nice subject for discussion during the drinks in the Basket afterwards. If you thought that mathematics could not be polemic, you should certainly come and listen. (Joint work with Klaas Slooten.)

Wed 20 February: **Nick Lindemulder** (TU Delft), Room WN-S607, 16:00-17:00

*Title:* A randomized difference norm for vector-valued fractional Sobolev spaces

*Abstract:* Sobolev spaces of Banach space-valued distributions and variants with fractional smoothness play an important role in the $L_{p}$-approach to evolution equations. In this talk we discuss several (equivalent) ways to define a suitable scale of fractional Sobolev spaces. In particular, we discuss the well-known Fourier analytic definition by means of the Bessel potential operator and the less well-known classical characterization of the latter by means of differences, due to Strichartz, from the scalar-valued setting. The main aim is to discuss extensions of the classical scalar-valued setting to the Banach space-valued setting, where the concept of randomization comes into play.

Wed 06 February: **Sophia B. Coban** (CWI), 16:00-17:00

*Title:* Things your radiologist would not tell you about

*Abstract:* Computed tomography is the perfect example of a large-scale, mildly ill-conditioned inverse problem, and one that is highly important to accurately solve in many real world applications. In today's talk, I will be introducing the basics of computed tomography, in particular X-ray CT; discuss some of the building blocks and novel trends of image reconstruction, and finish with the state-of-the-art methods developed within the Computational Imaging group at Centrum Wiskunde & Informatica.

Wed 12 December: **Floske Spieksma** (LU), Room WN-P663, 16:00-17:00

*Title:* Alternative formula for the Deviation Matrix

*Abstract:* In Markov process theory, the deviation matrix measures the total deviation over time of the marginal distributions from stationarity. As such, it plays a central role in the determination of average optimal policies in Markov decision processes. In finite state space, the deviation matrix is minus the generalised inverse of the generator, or rate matrix, of a continuous-time Markov process. However, the generalised inverse is not easy to compute, and in countable space, if it exists at all, it may not even be unique. The importance of the deviation matrix is not restricted to Markov process theory; it also plays an important role in, for example, the network robustness of undirected graphs. This motivates the study of alternative computation methods. In my talk I will discuss this, as well as some applications.
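
In the finite-state case the abstract refers to, the deviation matrix can be computed directly from the generator via the identity $D = (\Pi - Q)^{-1} - \Pi$, where $\Pi$ is the matrix whose rows all equal the stationary distribution. A minimal numerical sketch (the 3-state rate matrix below is an arbitrary illustration, not from the talk):

```python
import numpy as np

# Generator (rate matrix) of a 3-state continuous-time Markov chain;
# rows sum to zero. The specific rates are an arbitrary illustration.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# Stationary distribution: left null vector of Q, normalised to sum to 1.
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()
Pi = np.outer(np.ones(3), pi)          # each row equals pi

# Deviation matrix: D = (Pi - Q)^{-1} - Pi.
D = np.linalg.inv(Pi - Q) - Pi

# Characteristic properties: Q D = D Q = Pi - I, pi D = 0, D 1 = 0.
# All four checks should hold to machine precision.
I = np.eye(3)
print(np.allclose(Q @ D, Pi - I), np.allclose(D @ Q, Pi - I),
      np.allclose(pi @ D, 0), np.allclose(D @ np.ones(3), 0))
```

The checks in the last line encode the sense in which $D$ is minus a generalised inverse of $Q$: it inverts $Q$ on the complement of the stationary direction.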

Wed 28 November: **Max Welling** (UvA), Room WN-M639, 16:00-17:00

*Title:* Combining Deep Learning with External and Expert Knowledge

*Abstract:* Deep learning is a typical 'black box' technique: create lots of labeled examples between input and output and train a general-purpose map between the two. This works great when you can create a very large annotated dataset, but has its limitations when the dataset is not so large, or poorly annotated. In the latter case we should try to inject our inductive biases or expert knowledge into the model. We will discuss two new directions to achieve this: 1) extend the translational equivariance of traditional convolutional neural networks to larger groups of symmetries, such as rotations and reflections, and 2) incorporate bits and pieces of the (known) generative process of the data into the NN. We will illustrate both examples in the medical imaging domain: using group convolutions to improve performance in pathology slide analysis and using generative knowledge to speed up MRI reconstruction. Conversely, or more generally, one could ask how deep learning can integrate with rich data sources such as knowledge graphs. A successful integration could lead to improved high-level reasoning and systems that have a deeper understanding of the world they operate in. We will discuss a method called graph convolutions, which allows us to embed relational data into a semantic space from which reasoning becomes easier.
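
One common form of a graph-convolution layer propagates node features through the (self-loop-augmented, symmetrically normalised) adjacency matrix before a pointwise nonlinearity; the graph, features and weights below are toy values, and the untrained random weights stand in for learned parameters:

```python
import numpy as np

# One graph-convolution layer: H' = relu(A_hat @ H @ W), where
# A_hat = D^{-1/2} (A + I) D^{-1/2} is the normalised adjacency matrix.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # undirected toy graph
A_self = A + np.eye(4)                       # add self-loops
d = A_self.sum(axis=1)                       # degrees incl. self-loops
A_hat = A_self / np.sqrt(np.outer(d, d))     # symmetric normalisation

H = rng.normal(size=(4, 3))                  # 4 nodes, 3 input features
W = rng.normal(size=(3, 2))                  # weights, 2 output features
H_next = np.maximum(A_hat @ H @ W, 0.0)      # relu(A_hat H W)
print(H_next.shape)  # (4, 2)
```

Each output row mixes a node's own features with those of its neighbours, which is how relational structure enters the embedding.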

Wed 14 November: **Stéphanie van der Pas** (LU), Room WN-S623, 16:00-17:00

*Title:* Posterior concentration for Bayesian regression trees and their ensembles

*Abstract:* Since their inception in the 1980s, regression trees have been one of the more widely used nonparametric prediction methods. Tree-structured methods yield a histogram reconstruction of the regression surface, where the bins correspond to terminal nodes of recursive partitioning. Trees are powerful, yet susceptible to overfitting. Strategies against overfitting have traditionally relied on pruning greedily grown trees. The Bayesian framework offers an alternative remedy against overfitting through priors. Roughly speaking, a good prior charges smaller trees where overfitting does not occur. In this work, we take a step towards understanding why/when Bayesian trees and their ensembles do not overfit. We study the speed at which the posterior concentrates around the true smooth regression function. We propose a spike-and-tree variant of the popular Bayesian CART prior and establish new theoretical results showing that regression trees (and their ensembles) (a) are capable of recovering smooth regression surfaces, achieving optimal rates up to a log factor, (b) can adapt to the unknown level of smoothness and (c) can perform effective dimension reduction. These results provide a piece of missing theoretical evidence explaining why Bayesian trees (and additive variants thereof) have worked so well in practice.

Wed 31 October: **Magnus Botnan** (VU), Room WN-P633, 16:00-17:00

*Title:* From Clustering to Quiver Representations

*Abstract:* Clustering analysis is a statistical method for uncovering structure in large and complicated data. In this talk I will show how the desire for a parameter-free, stable, and density sensitive hierarchical clustering method inspired research in the field of representation theory of quivers.

Wed 17 October: **Viresh Patel** (UvA), Room WN-F123, 16:00-17:00

*Title:* Quasi Ramsey problems

*Abstract:* Ramsey theory is currently one of the most active areas of research in combinatorics. The seminal question in the area, raised by Ramsey in 1930, can be formulated as follows: how large does n have to be to guarantee that in any room with n people we can find a set S of k people such that either every pair in S is acquainted or every pair in S is not acquainted? It is not immediately clear that such an n exists, although this is not hard to show. On the other hand, the known bounds for n as a function of k are quite poor. I will discuss the Ramsey problem as well as variants of it. In particular I will discuss a relaxation of the problem above for which we are able to give quite precise bounds. This is based on joint work with Janos Pach, Ross Kang, Eoin Long and Guus Regts.
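
For k = 3 the threshold can be checked exhaustively: among 6 people there are always 3 mutual acquaintances or 3 mutual strangers, while 5 people do not suffice (the Ramsey number R(3,3) = 6). A brute-force sketch over all 2-colourings of the pairs:

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each pair (i, j) with i < j to 0 or 1."""
    return any(colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def forced(n):
    """True iff every 2-colouring of the pairs of n people contains a
    monochromatic triangle (3 mutual acquaintances or strangers)."""
    pairs = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(pairs, colours)))
               for colours in product((0, 1), repeat=len(pairs)))

print(forced(5), forced(6))  # False True, i.e. R(3,3) = 6
```

This exhaustive check (2^15 colourings for n = 6) is only feasible for tiny k; the poor general bounds the abstract mentions reflect how quickly the search space explodes.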

Wed 03 October: **Ben Moonen** (RU Nijmegen), Room WN-S655, 16:00-17:00

*Title:* Curves, Jacobians and CM points

*Abstract:* I will tell a story of two moduli spaces: the moduli space M_g of curves of genus g, and the moduli space A_g of abelian varieties of dimension g. To a curve C of genus g (or if you prefer: a compact Riemann surface of genus g) we can associate its Jacobian J(C), an abelian variety of dimension g. This gives a map t: M_g --> A_g that is known to be injective. As I will explain in my talk, though M_g and A_g have been studied extensively and much is known about them, there are many basic questions to which we don't know the answer. I will illustrate this by discussing a conjecture of Coleman about curves whose Jacobian is of CM-type (which, informally, means that it is 'maximally symmetrical'). Along the way I will review some important developments of the last three decades, notably the André-Oort conjecture, which has now been proved using a rather spectacular variety of techniques.

*On October 3 we cannot go to the Basket. Instead, we will meet in De Tegenstelling (D103, WN building) after the talk!*

Wed 19 September: **Jan Bouwe van den Berg** (VU), Room WN-S623, 16:00-17:00

*Title:* Computer-assisted theorems in dynamics

*Abstract:* In nonlinear analysis we often simulate dynamics on a computer, or calculate a numerical solution to a partial differential equation. This gives very detailed, stimulating information. However, it would be even better if we can be sure that what we see on the screen genuinely represents a solution of the problem. In particular, rigorous validation of the computations would allow such objects to be used as ingredients of theorems. In this talk we explore an approach based on a Newton-Kantorovich type argument in a suitable neighborhood of a numerically computed candidate. This method has been applied successfully for various problems in ordinary differential equations, delay differential equations and partial differential equations. We will illustrate the general setup using an example stemming from the Navier-Stokes equations in two dimensions. The latter is joint work in progress with Maxime Breden, Jean-Philippe Lessard and Lennaert van Veen.
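
A toy one-dimensional version of such a Newton-Kantorovich validation step (the talk's setting is infinite-dimensional; the function f(x) = x² - 2 and candidate x₀ = 1.5 below are illustrative assumptions): given a numerical candidate, bounds on the residual, the inverse derivative, and a Lipschitz constant of f′ yield a radius in which a true zero is guaranteed to exist.

```python
import math

def kantorovich_radius(f, df, K, x0):
    """If the Kantorovich condition holds, return a radius r such that
    f is guaranteed to have a zero within distance r of the candidate
    x0; K is a Lipschitz constant for df. Returns None on failure."""
    a = abs(f(x0) / df(x0))        # length of the first Newton step
    b = K / abs(df(x0))
    h = 2 * a * b
    if h > 1:
        return None                # check fails: no conclusion
    return (1 - math.sqrt(1 - h)) / b

# Candidate x0 = 1.5 for f(x) = x^2 - 2; f' has Lipschitz constant 2.
r = kantorovich_radius(lambda x: x * x - 2, lambda x: 2 * x, K=2.0, x0=1.5)
print(r is not None and abs(1.5 - math.sqrt(2)) <= r + 1e-12)  # True
```

The same logic, with interval arithmetic replacing floating-point bounds and a function-space norm replacing the absolute value, is what turns a numerical candidate into an ingredient of a theorem.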

Wed 13 June: **Paola Gori-Giorgi** (VU), Room WN-M143, 16:00-17:00

*Title:* Multi-marginal Optimal Transport and Density Functional Theory: A mathematical setting for physical ideas

*Abstract:* Electronic structure calculations are at the very heart of predictive computational materials science, chemistry and biochemistry. Their goal is to solve, in a reliable and computationally affordable way, the many-electron problem, a complex combination of quantum-mechanical and many-body effects. The most widely used approach, which achieves a reasonable compromise between accuracy and computational cost, is Kohn-Sham (KS) density-functional theory (DFT). Although exact in principle, practical implementations of KS-DFT must heavily rely on approximations for the so-called exchange-correlation (XC) functional. Empirical approximations (e.g., fitted on several data sets) are successful in normal cases, but typically lack predictive power for systems outside the training set. For this reason, exact mathematical conditions and rigorous guiding principles to build the XC functional have always played a key role in the field. In the recent years, it has been shown that there is a special semiclassical limit of the XC functional, relevant for the most challenging cases in KS DFT, which can be reformulated as a multi-marginal optimal transport problem, linking two rather distant research fields. In this talk I will review this reformulation, providing an overview of the key results from the optimal transport community, and discussing some of the open questions and conjectures that still need a rigorous proof.

Wed 30 May: **Michiel Bertsch** (University of Rome Tor Vergata), Room WN-M143, 16:00-17:00

*Title:* Mathematical modelling of Alzheimer's disease

*Abstract:* Up to now there is no effective cure for Alzheimer's disease (AD). One of the major reasons is its complexity. Although the biomedical knowledge about AD is rapidly increasing, there is not yet a clear picture available about the major causes and the evolution of the disease. In such circumstances, can mathematical modelling be useful at all? In the colloquium I propose a modelling approach which, in a certain sense, is characterized by flexibility. I present a "toy model", which deliberately takes into account only a very limited amount of aspects of the disease (in this case the role of beta-amyloid and the existence of different time scales). The toy model seems to be flexible enough to include other aspects (such as the role of the tau protein) or novel biomedical insight. Surprisingly, the toy model itself suggests the possible importance of a very specific biomedical process, which is also discussed in the biomedical literature.

**Viresh Patel** (UvA), Room WN-M143, 16:00-17:00

*Title:* Quasi Ramsey problems

*Abstract:* Ramsey theory is currently one of the most active areas of research in combinatorics. The seminal question in the area, raised by Ramsey in 1930, can be formulated as follows: how large does n have to be to guarantee that in any room with n people we can find a set S of k people such that either every pair in S is acquainted or every pair in S is not acquainted? It is not immediately clear that such an n exists, although this is not hard to show. On the other hand, the known bounds for n as a function of k are quite poor. I will discuss the Ramsey problem as well as variants of it. In particular I will discuss a relaxation of the problem above for which we are able to give quite precise bounds. This is based on joint work with Janos Pach, Ross Kang, Eoin Long and Guus Regts.

Wed 02 May: **Joris Mooij** (UvA), Room WN-M143, 16:00-17:00

*Title:* Joint Causal Inference from Observational and Experimental Data

*Abstract:* The standard method to discover causal relations is by experimentation. Over the last decades, alternative methods have been proposed: constraint-based causal discovery methods can sometimes infer causal relations from certain statistical patterns in purely observational data. We introduce Joint Causal Inference (JCI), a novel constraint-based approach to causal discovery from multiple data sets that elegantly unifies both approaches. Compared with existing constraint-based approaches for causal discovery from multiple data sets, JCI offers several advantages: it deals with several different types of interventions in a unified fashion, it can learn intervention targets, it systematically pools data across different data sets, which improves the statistical power of independence tests, and most importantly, it improves on the accuracy and identifiability of the predicted causal relations.

Wed 18 April: **Sjoerd Verduyn Lunel** (UU), Room WN-M143, 16:00-17:00

*Title:* Transfer operators, Hausdorff dimension and the spectral theory of positive operators

*Abstract:* In this talk we present a new approach to compute the Hausdorff dimension of conformally self-similar invariant sets using an elementary direct spectral analysis of a transfer operator associated with the problem. We start from scratch, introduce the notion of transfer operator and combine ideas from the theory of positive operators and from the theory of trace class operators and their determinants. Our approach is illustrated with examples from dynamical systems and number theory via Diophantine approximations.
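
For the simplest instance of the sets the abstract mentions, a strictly self-similar set with contraction ratios r_i satisfying a separation condition, the transfer-operator criterion reduces to Moran's equation: the Hausdorff dimension is the exponent s at which Σ r_i^s = 1 (the leading eigenvalue of the associated operator equals 1). A bisection sketch for this special case, not the general conformal machinery of the talk:

```python
import math

def moran_dimension(ratios, tol=1e-12):
    """Root s in [0, 1] of sum(r**s for r in ratios) = 1, found by
    bisection; the sum is strictly decreasing in s. Assumes the ratios
    sum to less than 1 so that the root lies below 1."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(r ** mid for r in ratios) > 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Middle-thirds Cantor set: two contractions of ratio 1/3 each,
# so the dimension is log 2 / log 3.
d = moran_dimension([1/3, 1/3])
print(abs(d - math.log(2) / math.log(3)) < 1e-9)  # True
```

The transfer-operator approach of the talk extends this idea to maps whose contraction ratios vary from point to point, where no closed-form equation is available.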

Wed 21 March: **Peter Grünwald** (CWI, Leiden), Room WN-M143, 16:00-17:00

*Title:* Safe Testing

*Abstract:* A large fraction (some claim > 1/2) of published research in top journals in applied sciences such as medicine and psychology is irreproducible. In light of this 'replicability crisis', standard p-value based hypothesis testing has come under intense scrutiny. One of its many problems is the following: if our test result is promising but inconclusive (say, p = 0.07) we cannot simply decide to gather a few more data points. While this practice is ubiquitous in science, it invalidates p-values and error guarantees. Here we propose an alternative hypothesis testing methodology based on supermartingales - it has both a gambling and a data compression interpretation. This method allows us to consider additional data and freely combine results from different tests by multiplication (which would be a mortal sin for p-values!), and avoids many other pitfalls of traditional testing as well. If the null hypothesis is simple (a singleton), it also has a Bayesian interpretation, and essentially coincides with a proposal by Vovk (1993). We work out the case of composite null hypotheses, which allows us to formulate safe, nonasymptotic versions of the most popular tests, such as the t-test and the chi-square tests. Safe tests for composite H0 are not always Bayesian, but rather based on the 'reverse information projection', an elegant concept with roots in information theory rather than statistics.
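
The multiplication rule can be illustrated with likelihood ratios for a simple null: under the null a likelihood ratio has expectation 1, products of independent such quantities again have expectation 1, and Markov's inequality turns the threshold 1/α into a valid level-α test. A simulation sketch under assumed toy Gaussian alternatives (the choice N(0.5, 1) vs N(0, 1) is an arbitrary illustration):

```python
import random, math

random.seed(7)

def e_value(xs, mu=0.5):
    """Likelihood ratio of N(mu, 1) against N(0, 1) on data xs.
    Under the null N(0, 1) its expectation is exactly 1."""
    return math.exp(sum(mu * x - 0.5 * mu * mu for x in xs))

# Two independent studies, each with data generated under the null:
n = 50000
e1 = [e_value([random.gauss(0, 1)]) for _ in range(n)]
e2 = [e_value([random.gauss(0, 1)]) for _ in range(n)]
combined = [a * b for a, b in zip(e1, e2)]   # combining by multiplication

mean = sum(combined) / n                     # should be close to 1
alpha = 0.05
# Markov's inequality: P(combined >= 1/alpha) <= alpha under the null.
reject = sum(e >= 1 / alpha for e in combined) / n
print(abs(mean - 1.0) < 0.1, reject <= alpha)
```

Multiplying p-values this way would inflate the error rate; for expectation-1 quantities the guarantee survives arbitrary combination, which is the freedom the abstract advertises.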

Wed 07 March: **Nelly Litvak**, Room WN-M143, 16:00-17:00

*Title:* Power-law hypothesis for PageRank

*Abstract:* PageRank is a well-known algorithm, which has been proposed by Google for ranking pages in the World-Wide Web. PageRank can be interpreted as a stationary distribution of a random walk of a user that hops from one web page to another. Beyond web search, PageRank has many applications in networks of different kinds, for example, discovering communities in social networks, or finding endangered species in ecological networks. Most of these real-life networks have a so-called power-law degree distribution: if a network is represented as a graph, then the fraction of vertices with degree k is approximately proportional to a negative power of k. Moreover, many empirical studies confirm that PageRank also has a power-law distribution, with the same negative power as the in-degree. In this talk I will discuss to what extent we can formalize these empirical observations analytically. Formally, we will model networks as random graphs and investigate the limiting behavior of PageRank as the graph size goes to infinity. I will present results for some specific random graph models, and very recent general limiting results for a large class of random graphs. This talk is based on joint works with Remco van der Hofstad and Alessandro Garavaglia (Eindhoven University of Technology) and Mariana Olvera-Cravioto (University of California at Berkeley).
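
The random-surfer interpretation makes PageRank directly computable by power iteration: with probability 0.85 the surfer follows a random outgoing link, otherwise jumps to a uniformly random page, and PageRank is the stationary distribution of this walk. A minimal sketch (the four-page link structure below is an arbitrary illustration):

```python
import numpy as np

# Tiny directed web graph: page -> list of pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, damping = 4, 0.85

# Column-stochastic transition matrix of the link-following walk.
P = np.zeros((n, n))
for src, dsts in links.items():
    for dst in dsts:
        P[dst, src] = 1.0 / len(dsts)

# With prob. 0.85 follow a link, with prob. 0.15 teleport uniformly.
G = damping * P + (1 - damping) / n * np.ones((n, n))

# Power iteration converges to the stationary distribution.
r = np.ones(n) / n
for _ in range(100):
    r = G @ r

print(np.allclose(G @ r, r), np.isclose(r.sum(), 1.0))
```

The teleportation term makes the chain irreducible, so the stationary distribution exists and is unique; the talk's question is how the entries of r behave as n grows on power-law random graphs.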

Wed 21 February: **Gijs Heuts** (UU), Room WN-M143, 16:00-17:00

*Title:* Lie algebras and periodicity in homotopy theory

*Abstract:* Homotopy theory is the study of continuous deformations of spaces. The general problem of classifying such deformations is notoriously hard. However, if one is only interested in rational invariants of spaces then there are good algebraic tools available: Quillen constructed for every space a Lie algebra from which such invariants can be calculated, whereas Sullivan built a commutative algebra (much like the algebra of differential forms on a manifold) that retains essentially the same information. I will discuss a modern viewpoint of homotopy theory called the "chromatic perspective": much like a ray of white light is broken into different colours through a prism, a space can be decomposed into pieces corresponding to various "frequencies". The rational invariants correspond to one of these pieces. It turns out that Lie algebras may also be used to give models for the others.

Wed 07 February: **Damaris Schindler** (UU), Room WN-M143, 16:00-17:00

*Title:* Systems of quadratic forms

*Abstract:* In this talk we discuss some aspects concerning the arithmetic of systems of quadratic forms. Our focus will be on the local-global principle for the existence of rational or integral solutions and we will discuss some failures of this principle.