Time
Welcome talk
-----------------------------------------------------------------------------------------------
H. G. Matthies: Probability, Random Variables, and Algebra
Abstract: Mathematical or computational models with uncertain elements often occur in scientific practice. One way to deal with these uncertain elements (parameters, processes, fields) is to model them probabilistically. The usual way of dealing with such probabilistically modelled uncertainties is to perform Monte Carlo simulations, i.e. to compute the results of the mathematical model for very many draws or samples of the uncertain input parameters. Here a different tack will be explained: the results of the computation (e.g. the state of the system) are assumed (an ansatz just as in Galerkin methods) to be functions of known random variables, with as yet unknown coefficients. Computing these coefficients --- there are different ways --- is called solving the forward problem or ``uncertainty quantification''. Approximating the solution in the manner just sketched is called a spectral or functional approximation. It puts random variables, rather than the associated probability measures, in the basic theoretical and computational position. This allows the numerical use of powerful linear algebra computational kernels. Theoretically, such random variables form an algebra, which explains the theoretical background of these spectral or functional approximation methods from a different angle. In addition, this functional-analysis view of probability algebras allows one to draw close connections to other fields of analysis, such as spectral theory and the factorisation of operators, which play an important role in the approximation methods used in this domain, such as the Karhunen-Loève expansion.
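As a minimal illustration of the functional approximation idea, the following sketch represents a random variable Y = g(X), with X standard normal, by its coefficients in a Hermite polynomial chaos basis; the choice g = exp and the truncation order are illustrative assumptions, not part of the lecture:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

def pce_coefficients(g, order, n_quad=18):
    """Project g(X), X ~ N(0,1), onto probabilists' Hermite polynomials He_k."""
    x, w = hermegauss(n_quad)              # Gauss nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)           # normalise to the standard normal density
    coeffs = []
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                        # selects He_k in hermeval
        # c_k = E[g(X) He_k(X)] / E[He_k(X)^2], with E[He_k^2] = k!
        coeffs.append(np.sum(w * g(x) * hermeval(x, ek)) / math.factorial(k))
    return np.array(coeffs)

c = pce_coefficients(np.exp, order=8)      # Y = exp(X), a lognormal variable
mean = c[0]                                # the mean is the zeroth coefficient
var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, 9))
```

Once the coefficients are known, moments follow from simple (linear) algebra on the coefficient vector, which is the computational advantage alluded to above.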
B. Rosic: Stochastic modelling and learning based on functional approximation
Abstract: The mathematical description of heterogeneous materials in general includes different sources of uncertainty, which can broadly be classified into epistemic and aleatoric ones. In this lecture a probabilistic view of the material description is taken, such that the positive definite constitutive material tensors are modelled with the help of the maximum entropy approach and a family of minimally parameterised random fields discretised via Karhunen-Loève expansions/proper orthogonal decompositions. The material models are then introduced into the variational formulations describing the material behaviour, giving their stochastic counterparts. To obtain the statistics of the material response, the propagation of uncertainties is achieved in a purely functional approximation setting by using polynomial chaos ansatz functions and a Galerkin formulation of the problem.
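A discretised Karhunen-Loève expansion of the kind mentioned here amounts to an eigendecomposition of the covariance matrix on a grid; the following sketch uses an exponential covariance with an assumed correlation length of 0.3 purely for illustration:

```python
import numpy as np

def kl_modes(x, cov, n_terms):
    # discrete analogue of the Karhunen-Loève eigenproblem: eigendecompose
    # the covariance matrix on the grid and keep the n_terms largest modes
    C = cov(x[:, None], x[None, :])
    vals, vecs = np.linalg.eigh(C)                 # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_terms]
    return vals[idx], vecs[:, idx]

# exponential covariance with correlation length 0.3 (illustrative choice)
cov = lambda s, t: np.exp(-np.abs(s - t) / 0.3)
x = np.linspace(0.0, 1.0, 200)
vals, vecs = kl_modes(x, cov, 10)

# one realisation of the truncated field: a(x) = sum_k sqrt(lambda_k) xi_k phi_k(x)
rng = np.random.default_rng(0)
field = vecs @ (np.sqrt(vals) * rng.standard_normal(10))
```

The rapid eigenvalue decay is what makes a low-dimensional ("minimally parameterised") representation of the random field possible.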
U. Romer: Uncertainty Quantification with Monte Carlo and stochastic collocation methods
Abstract: This lecture deals with Uncertainty Quantification (UQ) based on sampling of the underlying numerical model or quantity of interest. Both stochastic and deterministic approaches will be covered. The classical Monte Carlo method and more efficient variants, e.g., the multilevel Monte Carlo method, will be recalled. In multilevel Monte Carlo, a hierarchy of grids is used to reduce the computational cost for a given accuracy. For problems with sufficient regularity (with respect to the random parameters) the stochastic collocation method will be introduced. We will show how this method can be used either to approximate statistical moments or to obtain an accurate surrogate model. The relationship to the previously introduced stochastic Galerkin method will also be discussed, as well as convergence results and numerical examples.
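The contrast between the two sampling philosophies in this lecture can be seen in one dimension: Monte Carlo uses random draws, stochastic collocation uses deterministic quadrature nodes. The model f below is a toy stand-in (an assumption for illustration), not an example from the lecture:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# stand-in for an expensive model output depending on one standard normal input
f = lambda x: np.sin(x) ** 2

# Monte Carlo: robust and dimension-independent, but only O(N^{-1/2}) accurate
rng = np.random.default_rng(1)
mc_mean = f(rng.standard_normal(100_000)).mean()

# stochastic collocation: deterministic Gauss-Hermite nodes; for smooth f,
# a handful of model evaluations already give near machine-precision moments
nodes, weights = hermegauss(20)
sc_mean = np.sum(weights * f(nodes)) / np.sqrt(2.0 * np.pi)

exact = 0.5 * (1.0 - np.exp(-2.0))      # E[sin^2(X)] for X ~ N(0,1)
```

With 20 model evaluations the collocation estimate is far more accurate than 100,000 Monte Carlo samples, which is the regularity-dependent trade-off the lecture discusses.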
C. Schillings: Bayesian inference for high- or infinite-dimensional models
Abstract: In this lecture we will focus on the identification of parameters through observations of the response of the system: the inverse problem. The uncertainty in the solution of the inverse problem will be described via the Bayesian approach. We will derive Bayes' theorem in the setting of finite-dimensional parameter spaces, and discuss properties such as well-posedness, statistical estimates, and connections to classical regularization methods. We will briefly examine the extension of the Bayesian approach to the infinite-dimensional setting.
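The finite-dimensional linear Gaussian case admits a closed-form posterior and is the standard entry point to this material. A minimal sketch (the scalar example at the end is an illustrative choice):

```python
import numpy as np

def gaussian_posterior(G, Gamma, m0, C0, y):
    """Posterior mean/covariance for y = G u + eta, eta ~ N(0, Gamma), prior u ~ N(m0, C0)."""
    S = G @ C0 @ G.T + Gamma                 # covariance of the predicted data
    K = C0 @ G.T @ np.linalg.inv(S)          # gain matrix
    m = m0 + K @ (y - G @ m0)                # posterior mean
    C = C0 - K @ G @ C0                      # posterior covariance
    return m, C

# scalar example: identity forward map, unit prior and noise variance, datum y = 2
m, C = gaussian_posterior(np.eye(1), np.eye(1), np.zeros(1), np.eye(1), np.array([2.0]))
```

The gain formula is also where the connection to classical (Tikhonov-type) regularization mentioned in the abstract shows up: the posterior mean minimises a regularised least-squares functional.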
H. G. Matthies: Bayesian Updating by Conditional Expectation
Abstract: Introducing new information into a probabilistic description of knowledge is typically performed via some application of Bayes' by now classical theorem. To avoid ambiguities (which did arise historically), the mathematically precise description of conditional probabilities in Bayes' theorem, especially when conditioning on events of vanishing probability, is formulated via conditional expectations, and is due to Kolmogorov. Nevertheless, most sampling approaches to Bayesian updating start from the classical formulation involving conditional measures and densities. These are usually the distributions of some random variables describing the prior knowledge. Here an alternative track is taken, in that the notion of conditional expectation is also taken computationally as the prime object. Being able to numerically approximate conditional expectations, one has a complete description of the posterior probability. A further task is to construct a new --- transformed, or filtered --- random variable which has a distribution as required by the (posterior) conditional expectations. In the talk, the abstract task and its solution will be presented first, and then different computational approximations will be sketched, as well as different ways of stochastic discretisation, adding another level of approximation.
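In the simplest computational approximation of this idea, the conditional expectation is restricted to linear functions of the measurement and found by least squares over a prior ensemble; the updated random variable is then the prior shifted by the resulting correction. The scalar toy model and datum below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# prior ensemble of the parameter and the corresponding predicted measurement
u = rng.normal(0.0, 1.0, N)                # prior: u ~ N(0, 1)
y = u + 0.5 * rng.standard_normal(N)       # toy forward model plus noise (assumption)

# best *linear* approximation phi(y) = a + b*y of the conditional expectation
# E[u | y], obtained by least squares over the prior ensemble
b = np.cov(u, y)[0, 1] / np.var(y)
a = u.mean() - b * y.mean()

y_obs = 1.2                                # the actual measurement (illustrative)
cond_mean = a + b * y_obs                  # approximate E[u | y = y_obs]

# updated random variable: shift the prior by the filter correction term
u_post = u + b * (y_obs - y)
```

Note that the update acts on the random variable itself, not on a density, which is exactly the "conditional expectation as prime object" viewpoint of the talk.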
C. Schillings: Sparse quadrature algorithms for Bayesian inverse problems
Abstract: This lecture will be devoted to algorithms for the efficient approximation of the solution of the inverse problem. We will give an overview of methods in this context and then focus on sparse quadrature algorithms. For forward problems belonging to a certain sparsity class, we quantify analytic regularity of the Bayesian posterior and prove that the parametric, deterministic density of the Bayesian posterior belongs to the same sparsity class. These results suggest in particular dimension-independent convergence rates for data-adaptive Smolyak integration algorithms.
B. Rosic: Polynomial chaos based Bayesian updating
Abstract: The estimation of the unknown parameters of a nonlinear system is often difficult due to the complexity of the model behaviour. In a probabilistic setting, the incorporation of new information through Bayes' theorem has two constituents: the measurable function or random variable, and the probability measure. One group of methods updates the measure, the other changes the function. To make Bayesian identification feasible for partially observed nonlinear systems, the second group of methods will be presented here, in the form of approximate Bayesian filters designed from the variational formulation associated with the conditional expectation. The procedure updates the measurable function in a functional approximation form by employing powerful uncertainty quantification tools.
J. Denzler: Life-Long and Incremental Learning
Abstract: Life-long learning systems are systems equipped with the capability to improve their performance continuously over time by observing data, detecting unexpected objects or events, adapting to shifts in the underlying distribution of the data, and incorporating feedback from humans. This tutorial introduces some of the basic elements of such systems from a machine learning perspective. I will introduce methods for novelty and anomaly detection, incrementally updating classifiers, as well as active learning, i.e. the selection of the most promising unlabelled samples with respect to the expected gain from having them annotated by a human expert. A first version of such a life-long learning system, called WALI, is introduced; it already implements key elements and shows its performance in the visual monitoring of biodiversity.
J. Denzler: Deep Learning
Abstract: This tutorial gives a basic introduction to key elements of deep learning, current methods and algorithms, as well as good practices in specific applications. The tutorial will stay in shallow waters and focus on an overview of Deep Learning and its applications, including a general overview of machine learning and Deep Learning, showcases, applications, and some live demos, as well as practical issues to consider. In essence, it aims at demystifying the deep learning hype, showing what deep learning can achieve, especially for visual recognition.
Y. Marzouk: Markov chain Monte Carlo methods for large-scale Bayesian inverse problems
Abstract: Inverse problems formalize the process of learning about a system through indirect, noisy, and often incomplete observations. Casting inverse problems in the Bayesian statistical framework provides a natural way to quantify uncertainty in parameter values and model predictions, to fuse heterogeneous sources of information, and even to optimally select experiments or observations. Markov chain Monte Carlo (MCMC) is an enormously flexible workhorse approach for posterior simulation in the Bayesian setting, but the associated computational expense is a major bottleneck for complex posteriors and large-scale models. This lecture will discuss modern MCMC algorithms for inverse problems. We will discuss methods that expose and exploit low-dimensional structure in inverse problems, that attempt to mitigate the computational cost of repeated forward model evaluations, and that exhibit discretization-invariant performance.
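The baseline against which the modern algorithms in this lecture are measured is random-walk Metropolis. A minimal sketch, with a toy Gaussian posterior standing in for the expensive forward-model-based posterior (an assumption for illustration):

```python
import numpy as np

def rw_metropolis(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis: the basic MCMC workhorse for posterior sampling."""
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# toy posterior N(1, 0.5^2); a real inverse problem would put a PDE solve
# inside log_post, which is exactly the cost bottleneck discussed above
rng = np.random.default_rng(3)
chain = rw_metropolis(lambda x: -0.5 * ((x - 1.0) / 0.5) ** 2, 0.0, 50_000, 0.8, rng)
samples = chain[1_000:]                            # discard burn-in
```

Every chain step requires a fresh posterior evaluation, which makes the forward-model cost the dominant expense in large-scale settings.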
Y. Marzouk: Transport maps for Bayesian computation
Abstract: This lecture will discuss how deterministic couplings of probability measures, induced by transport maps, can enable useful new approaches to Bayesian computation. In particular, we will describe variational inference methods that construct a deterministic transport map from a reference distribution to the posterior, without resorting to MCMC. Independent and unweighted samples can then be obtained by pushing forward reference samples through the map. Making this approach efficient in high dimensions, however, requires identifying and exploiting low-dimensional structure. We will present new results relating the sparsity and decomposability of transports to the conditional independence structure of the target distribution. We will also describe conditions, common in inverse problems, under which transport maps have a particular low-rank structure. In general, these properties of transports can yield more efficient algorithms. As a particular example, we propose new variational algorithms for online inference in nonlinear and non-Gaussian state-space models with static parameters. These algorithms implicitly characterize---via a transport map---the full posterior distribution of the sequential inference problem using only local operations, while avoiding importance sampling or resampling. Other illustrative applications in the lecture will involve spatial statistics and partial differential equations.
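The core mechanism of the lecture, pushing reference samples through a deterministic monotone map to obtain independent, unweighted target samples, can be shown in one dimension, where the map has a closed form. The exponential target here is an illustrative assumption:

```python
import numpy as np
from math import erf, sqrt

# standard normal CDF, vectorised over arrays
Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))

# monotone (Knothe-Rosenblatt) map pushing N(0,1) forward to Exp(1):
# T = F_target^{-1} o Phi, with F^{-1}(p) = -log(1 - p) for the exponential target
T = lambda x: -np.log1p(-Phi(x))

rng = np.random.default_rng(4)
reference = rng.standard_normal(200_000)   # cheap i.i.d. reference samples
samples = T(reference)                     # i.i.d. target samples, no MCMC needed
```

In higher dimensions the map is no longer explicit and must be constructed variationally, which is where the sparsity and low-rank structure discussed in the abstract become essential.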
M. Eigel: Explicit Bayesian inversion in hierarchical tensor representations
Abstract: Bayesian inversion can be understood as a high-dimensional integration with respect to some prior measure, resulting in the posterior measure of the sought parameters. Most often, an approximation is obtained by sampling from the posterior. Such a numerical approach (MCMC) usually exhibits rather slow convergence. In contrast, given sufficient smoothness, functional approximations (e.g. stochastic Galerkin and collocation methods) are known to potentially deliver much better convergence rates for uncertainty propagation, albeit at the expense of a somewhat higher (implementation) complexity. Based on an adaptive stochastic Galerkin FEM, we discuss a function space discretization, in a hierarchical tensor format, of the forward and the Bayesian inverse problem. The described approach allows for a sampling-free approximation of the posterior density, given as an explicit multivariate polynomial.
A. Feras: Solving Non-Uniqueness Issues in Parameter Identification Problems for Pre-stressed Concrete Poles by Multiple Bayesian Updating
Abstract: A proposed approach is used to couple experimental and numerical models with the purpose of inversely identifying the material properties of the considered structure. To do so, Bayesian inference with Markov chain Monte Carlo (MCMC) methods is used. The material properties are obtained in the form of probability distributions that describe the uncertainty in each parameter. One of the main problems facing this approach is the ill-posedness of the problem, since in many cases the solution is not unique. In the current approach, multiple experimental models are simultaneously coupled with the selected numerical model to solve this problem. The selected case study is pre-stressed concrete poles that carry the catenary cables along high-speed train tracks in Germany.
A. Kodakkal: Uncertainty Quantification Using Gradient Enhanced Stochastic Collocation for Geometric Uncertainties
Abstract: Uncertainty quantification (UQ) for fluid-structure interaction (FSI) problems is challenging in terms of computational cost, time, and efficiency. Sampling algorithms like Monte Carlo (MC) are not feasible for these problems due to constraints on computational time and cost. Random geometry variations arise due to manufacturing tolerances, icing, wear and tear during operation, etc. Quantification of these geometric uncertainties is challenging because the large number of geometric parameters results in a high-dimensional stochastic problem. A two-step UQ approach using gradient information obtained by solving the adjoint equation is employed in the current study. A gradient-enhanced stochastic collocation (GESC) approach based on polynomial chaos (PC) is used. The uncertainty in the geometry is represented by a Karhunen-Loève expansion with a predefined covariance function, and the quantity of interest (QoI) by a PC representation. The gradient of the QoI with respect to the input parameters is obtained by an adjoint approach. A set of collocation points is selected in the stochastic domain, and the FEM model along with the adjoint equation is solved at each of these points to obtain the QoI and its gradient with respect to each uncertain input parameter. The stochastic collocation (SC) strategy is modified to incorporate the additional gradient information: the deterministic coefficients in the PC expansion of the QoI are determined by a least-squares regression on the system of equations containing the QoI values and their gradients. The method is tested for a cylinder in a flow with uncertain geometric perturbations, and the uncertainty in the drag (the QoI) is evaluated. The accuracy is compared with SC and MC; the method is computationally more efficient than both.
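The gradient-enhanced regression step can be sketched in one dimension: both function values and derivatives enter the least-squares system that determines the surrogate coefficients. The QoI f and its "adjoint" gradient below are toy stand-ins (assumptions), not the FSI model of the talk:

```python
import numpy as np

# toy 1-D stand-in for the QoI and its adjoint-computed gradient (assumptions)
f  = lambda x: np.exp(0.3 * x)
df = lambda x: 0.3 * np.exp(0.3 * x)

# fit a quadratic surrogate q(x) = c0 + c1*x + c2*x^2 by least squares on BOTH
# the values and the gradients at the collocation points
pts = np.array([-1.0, 0.0, 1.0])
rows_val  = np.vander(pts, 3, increasing=True)                   # [1, x, x^2]
rows_grad = np.column_stack([np.zeros(3), np.ones(3), 2 * pts])  # d/dx of the basis
A = np.vstack([rows_val, rows_grad])
b = np.concatenate([f(pts), df(pts)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

q = lambda x: coef[0] + coef[1] * x + coef[2] * x ** 2
```

Each adjoint solve contributes one gradient row per uncertain parameter at essentially the cost of a single extra solve, which is why the approach scales to many geometric parameters.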
B. Peng: Angle-of-arrival estimation using Bayesian inference considering space-time correlations for future communications
Abstract: Angle-of-arrival (AoA) estimation is required by adaptive directive antennas in order to realize a high antenna gain, which is necessary for future high-frequency communications due to their high path loss. In a dynamic scenario, where the user equipment (UE) is moving during the data transmission, the temporal correlation of the AoA change can be utilized to improve the AoA estimation precision using Bayesian inference. Furthermore, if we have distributed antennas or a hybrid massive MIMO array, the AoA changes of the different antennas exhibit spatial correlation, because all the AoA changes are caused by the same spatial displacement of the UE. This spatial correlation can be used to further improve the estimation accuracy by passing messages between antennas and combining each antenna's own estimate with extrinsic information from the other antennas. To demonstrate the algorithm's performance, we use a ray-launching simulator in a lecture-room scenario to generate realistic channel models. A distributed antenna system and a hybrid massive MIMO array (an array of multiple directive antenna arrays) are considered as use cases. The simulation results show that the cooperative estimation brings a significant advantage in both use cases with respect to the mean effective antenna gain and the level crossing rate (LCR).
L. Sun: State Estimation for Nonlinear Systems with Set-membership Uncertainty
Abstract: In this presentation, I will show a state estimation model which includes both traditional stochastic uncertainty and a new set-membership uncertainty. An ellipsoid will be introduced to define the unknown but bounded error. Simulations will then compare this method with other widely used methods such as the extended Kalman filter. The new method can be regarded as an option for nonlinear system state estimation.
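The basic object of the set-membership description is the bounding ellipsoid itself; a minimal sketch of the membership test, with an assumed 2-D shape matrix chosen for illustration:

```python
import numpy as np

# An ellipsoid E = {x : (x - c)^T P^{-1} (x - c) <= 1} encodes the
# "unknown but bounded" error around a state estimate c
def in_ellipsoid(x, c, P):
    d = np.asarray(x, dtype=float) - c
    return float(d @ np.linalg.solve(P, d)) <= 1.0

c = np.array([0.0, 0.0])       # state estimate (centre of the ellipsoid)
P = np.diag([4.0, 1.0])        # shape matrix: semi-axes of length 2 and 1 (assumed)
```

Unlike a covariance in the stochastic description, P here makes a hard guarantee: the true state is asserted to lie inside E, not merely near c with high probability.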