7th Workshop on High-Dimensional Approximation
February 13 – 17, 2017 — The University of New South Wales, Sydney, Australia

Program

A pdf version of the program book is available.

The workshop takes place in:

A map (pdf) is provided for your convenience.

                 Monday        Tuesday       Wednesday     Thursday      Friday
08:20 - 08:55    Registration
08:55 - 09:00    Opening
Session chair:   Sloan         Wozniakowski  Griewank      Hegland       Griebel
09:00 - 09:25    Wozniakowski  Ritter        Hackbusch     Garcke        Schwab
09:25 - 09:50    Goda          Wnuk          Howse         Pflüger       Robbe
09:50 - 10:15    Ganesh        Kritzer       Naraparaju    Zhou          Griewank
10:15 - 10:45    Break
Session chair:   Ritter        Turner        Schwab        Roberts       Dick
10:45 - 11:10    Zhuang        Mengersen     Vybiral       Oates         K-Suzuki
11:10 - 11:35    Wolfers       Botev         Harbrecht     Burrage       Gnewuch
11:35 - 12:00    Kazashi       Ebert         Migliorati    Le Gia
12:00 - 02:00    Lunch and collaboration (Friday: farewell lunch)
Session chair:   Lamichhane
02:00 - 02:25    Yserentant    De Sterck     Burrage       Nuyens
02:25 - 02:50    Y-Suzuki      Cui           Turner        Kryven
02:50 - 03:15    Tan           Dinh          Hegland       Laing
03:15 - 03:45    Break
Session chair:   Hackbusch     Yserentant                  Garcke
03:45 - 04:10    Beck          Gilbert                     Jbilou
04:10 - 04:35    Brauchart     Wang                        Valentin
04:35 - 05:00    Feischl       Litvinenko                  Lamichhane
05:30 - ...      Welcome BBQ                               Dinner

Monday

Multivariate approximation for analytic functions with Gaussian kernels
Monday, 09:00 - 09:25
Henryk Wozniakowski (Columbia University, University of Warsaw)
Joint work with Ian H. Sloan (UNSW Australia)
Keywords: multivariate approximation

We study $d$-variate approximation of analytic functions defined on $\mathbb{R}^d$ from a tensor product reproducing kernel Hilbert space whose kernel is Gaussian with positive shape parameters $\gamma_j^2$. We consider the worst case setting and the class of arbitrary linear functionals. We find necessary and sufficient conditions, in terms of the $\gamma_j^2$, for various notions of tractability.

    Optimal order quadrature error bounds for higher order digital nets and sequences
    Monday, 09:25 - 09:50
    Takashi Goda (The University of Tokyo)
    Joint work with Kosuke Suzuki (Hiroshima University, Japan); Takehito Yoshiki (Osaka City University, Japan)
    Keywords: Quasi-Monte Carlo; Higher order digital nets and sequences; Sobolev spaces; Walsh functions

    Quasi-Monte Carlo (QMC) rules using order $\alpha$ digital nets and sequences have been shown to achieve the almost optimal order of convergence $N^{-\alpha}(\log N)^{s\alpha}$ for numerical integration in a reproducing kernel Sobolev space of arbitrary fixed smoothness $\alpha\in \mathbb{N}$, $\alpha \geq 2$. In this talk, building upon the previous existence result on optimal order QMC rules by the authors, we prove that order $2\alpha +1$ digital nets and sequences can achieve the best possible order of convergence $N^{-\alpha}(\log N)^{(s-1)/2}$ in the same function space. Our approach for the proof is to exploit both the decay and the sparsity of the Walsh coefficients of the reproducing kernel at the same time.
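
As a quick illustration of the QMC convergence behaviour discussed above, the following Python sketch estimates the integration error of a scrambled Sobol' sequence (a plain, first-order digital sequence; the interlaced higher-order constructions of the talk are not available in SciPy) on a smooth test integrand. The integrand and all parameters are illustrative choices, not taken from the talk.

    # Error decay of a first-order digital (Sobol') sequence on [0,1]^s;
    # smooth test integrand with exact integral 1.
    import numpy as np
    from scipy.stats import qmc

    def integrand(x):
        return np.prod(1.0 + (x - 0.5), axis=1)

    s = 4
    sampler = qmc.Sobol(d=s, scramble=True, seed=7)
    for m in range(8, 14):                      # N = 2^m points
        pts = sampler.random_base2(m=m)
        err = abs(integrand(pts).mean() - 1.0)
        print(f"N = 2^{m}: error = {err:.3e}")
        sampler.reset()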

      High-order approximations of high-dimensional Gaussian integrals
      Monday, 09:50 - 10:15
      Mahadevan Ganesh (Colorado School of Mines)
      Joint work with Victor Dominguez; Stuart C. Hawkins; Hrushikesh N. Mhaskar
      Keywords: High-dimensional approximations; Interpolatory hyperspherical approximations

Large-scale simulation of a partial differential equation (PDE) governed by an $N$-term approximation of a lognormal random field is dictated by the efficient choice of realization points in $\mathbb{R}^N$. Such points are typically obtained through quadrature rules to approximate a statistical moment of the quantity of interest (QoI), which is a functional of the solution of the PDE. The statistical moment of the QoI is an $N$-dimensional Gaussian integral. High-order approximations of the Gaussian integral with fewer points, compared to Monte Carlo (MC) sampling, facilitate a reduction in the number of realizations of the PDE. High-order Quasi-MC (QMC) approximations of the Gaussian integral require transformation of the integral to an $N$-dimensional cube through the inverse of a cumulative distribution function. We avoid such inverse transformations and develop a new class of high-order approximations to the Gaussian integral through orthogonal and interpolatory $(N-1)$-dimensional hyper-spherical functions and one-dimensional generalized Laguerre polynomials. For hyper-spherical approximations and FFT-based dimension-by-dimension (DBD) construction, we generalize the ideas developed for the case $N = 3$ in [1,2,3].

1. V. Domínguez and M. Ganesh. Interpolation and cubature approximations and analysis for a class of wideband integrals on the sphere. Adv. Comp. Math., 39:547–584, 2013.
2. V. Domínguez and M. Ganesh. Sobolev estimates for constructive uniform-grid FFT interpolatory approximations of spherical functions. Adv. Comp. Math., 41:843–887, 2016.
      3. M. Ganesh and H. N. Mhaskar. Matrix-free interpolation on the sphere. SIAM J. Numer. Anal., 44:1314–1331, 2006.
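
The polar splitting behind this approach can be sketched in a few lines: writing the Gaussian integral in polar coordinates and substituting $t = r^2/2$ turns the radial part into a generalized Gauss-Laguerre quadrature with parameter $N/2 - 1$, so that $\mathbb{E}[f(X)] = \Gamma(N/2)^{-1} \sum_j w_j \, \mathrm{avg}_{\xi \in S^{N-1}} f(\sqrt{2 t_j}\, \xi)$. In the hedged Python sketch below, the angular average uses plain Monte Carlo points on the sphere as a stand-in for the interpolatory hyperspherical rules of the talk; all parameters are illustrative.

    # Polar splitting of an N-dimensional Gaussian integral E[f(X)], X ~ N(0, I_N):
    # generalized Gauss-Laguerre in the radius, Monte Carlo on the sphere.
    import numpy as np
    from scipy.special import roots_genlaguerre, gamma

    def gauss_expectation(f, N, n_rad=30, n_sph=4000, seed=0):
        rng = np.random.default_rng(seed)
        t, w = roots_genlaguerre(n_rad, N / 2.0 - 1.0)   # nodes for t^a e^{-t}
        xi = rng.standard_normal((n_sph, N))
        xi /= np.linalg.norm(xi, axis=1, keepdims=True)  # uniform on S^{N-1}
        total = sum(wj * f(np.sqrt(2.0 * tj) * xi).mean() for tj, wj in zip(t, w))
        return total / gamma(N / 2.0)

    # sanity check: E[|X|^2] = N
    print(gauss_expectation(lambda x: (x**2).sum(axis=1), N=6))  # ~6.0
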
      Multiscale data analysis: affine systems, framelets, manifolds and beyond
      Monday, 10:45 - 11:10
      Xiaosheng Zhuang (City University of Hong Kong)
      Keywords: Framelets; Manifolds; Graphs

While Big Data are high-volume, high-dimensional, and highly complex, they are typically concentrated on low-dimensional manifolds or represented by graphs, digraphs, etc. Sparsity is the key to the analysis of data in various forms. Multiscale representation systems provide efficient and sparse representations of various data sets. In this talk, we discuss the construction and applications of multiscale representations based on affine systems. We show that affine systems provide a characterization and construction of various representation systems including wavelets, framelets, and shearlets. We demonstrate that tight framelets can be constructed on compact Riemannian manifolds, and that fast algorithmic realizations exist for framelet transforms on manifolds. Explicit constructions of tight framelets on the sphere as well as numerical examples will be shown. Extensions to graphs will be discussed.

        Generalized Smolyak algorithm with applications to discrete least squares polynomial approximation
        Monday, 11:10 - 11:35
        Sören Wolfers (King Abdullah University of Science and Technology)
        Joint work with Fabio Nobile (École Polytechnique Fédérale de Lausanne); Raúl Tempone (King Abdullah University of Science and Technology)
        Keywords: Smolyak algorithm; uncertainty quantification; parametric PDEs; multilevel methods; stochastic collocation methods; polynomial approximation

We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. Originally described in the context of high-dimensional integration and interpolation, Smolyak's idea has since found applications in various other disciplines of scientific computation. In recent years, uncertainty quantification and the study of parametric partial differential equations (PDEs) have renewed interest in sparse numerical approximation methods and motivated the study of multilevel and multi-index algorithms. We show how these algorithms can be interpreted and analyzed as general versions of Smolyak's algorithm. This allows us to provide theoretical analysis and practical implementations in a concise, application-independent manner. In the second part, we apply the general theory to discrete least squares (DLS) polynomial approximation of parametric PDEs. After presenting novel results on a weighted version of DLS approximation, we introduce and analyze a corresponding multilevel algorithm using the results from the first part. Finally, we discuss an adaptive algorithm for the choice of optimal polynomial approximation spaces in a multilevel setting.
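
As a point of reference for the DLS part of the talk, the following Python sketch performs plain (unweighted) discrete least squares polynomial approximation of a one-parameter quantity of interest; the function u and all sizes are hypothetical, and the weighted and multilevel variants discussed in the talk are not reproduced.

    # Discrete least squares (DLS) fit in a Legendre basis on [-1,1].
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(1)
    u = lambda y: np.exp(y) / (2.0 + y)        # hypothetical parametric QoI

    m, n = 200, 10                             # many samples, moderate degree
    y = rng.uniform(-1.0, 1.0, m)
    V = legendre.legvander(y, n)               # design matrix of Legendre values
    coef, *_ = np.linalg.lstsq(V, u(y), rcond=None)

    ytest = np.linspace(-1.0, 1.0, 5)
    print(np.abs(legendre.legval(ytest, coef) - u(ytest)))  # small errors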

          Quasi-Monte Carlo integration with product weights for elliptic PDEs with log-normal coefficients
          Monday, 11:35 - 12:00
          Yoshihito Kazashi (UNSW Australia)
          Keywords: Quasi-Monte Carlo methods, Partial differential equations with random coefficients, Lognormal

Quasi-Monte Carlo (QMC) integration of output functionals of solutions of the diffusion problem with a log-normal random coefficient is considered. The random coefficient is assumed to be given by an exponential of a Gaussian random field that is represented by a series expansion in some system of functions. Graham et al. [1] developed a lattice-based QMC theory for this problem and established a quadrature error decay rate $\approx 1$ with respect to the number of quadrature points. The key assumption there was a suitable summability condition on the aforementioned system of functions, and as a consequence product-order-dependent (POD) weights were used to construct the lattice rule. In this talk, a different assumption on the system is considered. This assumption, originally introduced by Bachmayr et al. [2] to exploit the locality of support of basis functions in the context of polynomial approximations for the same type of diffusion problem, is shown to work well in the lattice-based QMC method considered by Graham et al.: it leads to product weights, which enables the construction of the QMC method with a smaller computational cost than in Graham et al. A quadrature error decay rate $\approx 1$ is established, and the theory developed here is applied to a wavelet stochastic model. By a characterisation of Besov smoothness, it is shown that a wide class of path smoothness can be treated within this framework.

1. I. G. Graham, F. Y. Kuo, J. A. Nichols, R. Scheichl, Ch. Schwab, and I. H. Sloan. Quasi-Monte Carlo finite element methods for elliptic PDEs with lognormal random coefficients. Numerische Mathematik, 131(2):329–368, 2015.
2. M. Bachmayr, A. Cohen, R. DeVore, and G. Migliorati. Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients. ESAIM: Mathematical Modelling and Numerical Analysis, 2016. DOI: http://dx.doi.org/10.1051/m2an/2016051
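
For readers unfamiliar with lattice rules, the point set of a rank-1 lattice rule is trivial to generate once a generating vector is known; the Python sketch below uses a made-up generating vector for illustration, whereas in the talk the vector would be obtained by a component-by-component construction with the product weights discussed above.

    # Rank-1 lattice rule: x_n = frac(n z / N), n = 0, ..., N-1.
    import numpy as np

    N = 1021                                   # number of points (prime)
    z = np.array([1, 306, 388, 512])           # hypothetical generating vector
    points = (np.arange(N)[:, None] * z % N) / N   # N x 4 points in [0,1)^4

    # QMC estimate of a smooth integrand with exact integral 1
    f = lambda x: np.prod(1.0 + np.pi / 4.0 * np.cos(np.pi * x), axis=1)
    print(f(points).mean())
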
          On the approximation of electronic wavefunctions by anisotropic Gauss and Gauss-Hermite functions
          Monday, 02:00 - 02:25
          Harry Yserentant (Technische Universität Berlin)
          Joint work with Stephan Scholz (Technische Universität Berlin)
          Keywords: high-dimensional approximation, electronic wavefunctions, Gauss functions

The electronic Schrödinger equation describes the motion of $N$ electrons under Coulomb interaction forces in a field of clamped nuclei. The solutions of this equation, the electronic wavefunctions, depend on $3N$ variables, three spatial dimensions for each electron. Approximating them is therefore inordinately challenging. We discuss the approximability of these wavefunctions by linear combinations of anisotropic Gauss functions, or more precisely Gauss-Hermite functions, products of polynomials and anisotropic Gauss functions in the narrow sense. Our main result is that the original, singular wavefunctions can, up to a given accuracy and a negligibly small residual error, be approximated with only insignificantly more such terms than their convolution with a Gaussian kernel of sufficiently small width, and that basically arbitrary orders of convergence can be reached. This is a fairly surprising result, since it essentially means that for this type of approximation the intricate hierarchies of non-smooth cusps in electronic wavefunctions have almost no impact on the convergence once the global structure is resolved.

1. S. Scholz and H. Yserentant. On the approximation of electronic wavefunctions by anisotropic Gauss and Gauss-Hermite functions. Numer. Math. (2017), doi:10.1007/s00211-016-0856-4.

          Rank-1 lattices and Strang splitting for the time-dependent Schrödinger equation
          Monday, 02:25 - 02:50
          Yuya Suzuki (KU Leuven)
          Joint work with Gowri Suryanarayana (EnergyVille Genk and Vito Mol); Dirk Nuyens (KU Leuven)
          Keywords: Quasi-Monte Carlo; Time-dependent Schrödinger equation; Strang splitting

          We solve the time-dependent Schrödinger equation:

          \begin{align*} \mathrm{i} \epsilon \frac{\partial u}{\partial t}\, &= -\frac{1}{2} \epsilon^2\, \nabla^2 u +vu,\\ u(\boldsymbol{x},0) &= g(\boldsymbol{x}), \end{align*}

          where $\boldsymbol{x} \in [0,1]^d$, $t, \epsilon > 0$ and $v(\boldsymbol{x})$ is the potential function.

          Our proposed method consists of two steps: first, by using rank-1 lattices, we discretize the physical domain and then we apply Strang splitting for the time step discretization.

Several previous works have addressed the same problem. Theoretical results on Strang splitting using regular grids were obtained in [1]. The use of sparse grids instead of regular grids was studied in [2], with numerical results up to a 7-dimensional case. By using lattice rules we can solve higher-dimensional cases.

          Our results consist of both theoretical and numerical parts. In the theoretical part we show that an anti-aliasing set generated by a lattice rule has good commutator bounds for Strang splitting. We also show several numerical results for different parameter values.

1. T. Jahnke and C. Lubich. Error bounds for exponential operator splittings. BIT Numerical Mathematics, 40(4):735–744, 2000.
2. V. Gradinaru. Strang splitting for the time-dependent Schrödinger equation on sparse grids. SIAM Journal on Numerical Analysis, 46(1):103–123, 2007.
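
On a regular grid the two splitting steps are simple pointwise multiplications in physical and Fourier space; the following Python sketch implements Strang splitting for the equation above in one dimension with the split-step Fourier method (all parameter values are illustrative, and the rank-1 lattice discretization of the talk is replaced by a regular grid).

    # Strang splitting for i*eps*u_t = -(eps^2/2) u_xx + v u on [0,1), periodic.
    import numpy as np

    M, eps, dt, steps = 256, 1.0, 1e-3, 1000
    x = np.arange(M) / M
    k2 = (2.0 * np.pi * np.fft.fftfreq(M, d=1.0 / M)) ** 2   # symbol of -d_xx
    v = np.cos(2.0 * np.pi * x)                              # potential
    u = np.exp(-100.0 * (x - 0.5) ** 2).astype(complex)      # initial data g

    half_pot = np.exp(-0.5j * v * dt / eps)                  # exp(-i v dt / (2 eps))
    kinetic = np.exp(-0.5j * eps * k2 * dt)                  # exact kinetic step
    for _ in range(steps):
        u *= half_pot
        u = np.fft.ifft(kinetic * np.fft.fft(u))
        u *= half_pot
    print(np.linalg.norm(u))                                 # discrete L2 norm is conserved
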
          High dimensional finite elements for parabolic equations with multiple temporal-spatial scales
          Monday, 02:50 - 03:15
          Wee Chin Tan (Nanyang Technological University)
          Joint work with Viet Ha Hoang (Nanyang Technological University)
          Keywords: parabolic equations; finite element methods; sparse tensor; multiple temporal-spatial scales

We consider parabolic equations which depend on microscopic scales in both the time and spatial variables. A direct numerical scheme needs a very small step size to resolve the microscopic time scale and a very small mesh width to resolve the microscopic spatial scales, leading to prohibitive complexity. We employ the multiscale convergence method in both time and space to deduce the multiscale homogenized equation, which contains the solution to the homogenized equation, which approximates the multiscale solution macroscopically, and the scale interacting terms, which encode the microscopic scale information. The equation is posed in a high dimensional temporal-spatial space. We employ the sparse tensor finite element method to solve this equation with essentially optimal complexity. We then employ the finite element solutions of the multiscale homogenized equation to construct a numerical corrector. In the case of one microscopic time scale and one macroscopic spatial scale, we derive a homogenization error estimate; this implies an error estimate for the numerical corrector which is of the order of the sum of the homogenization error and the finite element error. If time permits, we will also present multiscale nonlinear parabolic problems.

            Multi-level / multi-index UQ methods with a sparse-grid version of IGA solvers
            Monday, 03:45 - 04:10
            Joakim Beck (KAUST)
            Joint work with Lorenzo Tamellini (CNR-IMATI); Raúl Tempone (KAUST)
            Keywords: Isogeometric analysis, uncertainty quantification, sparse grids, combination technique, multilevel methods

Isogeometric Analysis (IGA) has been introduced to link PDE-based engineering analysis with Computer Aided Design (CAD) by considering the basis functions used within CAD to describe geometries, typically cubic B-splines or Non-Uniform Rational B-Splines (NURBS), as a basis for the approximation of the solution of the PDE. In this way, one does not need to generate a Finite-Elements-suitable triangulation of the computational domain, which can be a significantly expensive task in practical applications. The PDE is then solved with well-established methods such as hp-Galerkin and collocation. Two- and three-dimensional splines/NURBS are built by tensorization of univariate bases, which implies that the computational cost of IGA solvers typically grows exponentially with respect to the dimension of the problem. We therefore introduce a sparse-grids construction in its 'combination technique' form for IGA to reduce the computational cost of the solution of the PDE. In this talk, we numerically test to which extent IGA solvers could benefit, in connection with parallel computing, from such a construction. As expected, the sparse-grid approach can be useful if the solution of the PDE is sufficiently smooth. Moreover, the combination technique may also provide a simple way to parallelize existing serial IGA solvers. We also demonstrate the use of sparse IGA solvers in a multi-level / multi-index Uncertainty Quantification (UQ) framework for the solution of PDEs with high-dimensional random coefficients.

              Uniformly distributed points: optimize the sum-of-distance in the presence of an external field
              Monday, 04:10 - 04:35
              Johann Brauchart (TU Graz)
              Joint work with Douglas P. Hardin (Vanderbilt University); Edward B. Saff (Vanderbilt University)
              Keywords: Cube; External field problem; Quasi-Monte Carlo methods; Sphere; Uniform distribution

              Configurations on the unit sphere that maximize the sum of all mutual Euclidean distances among $N$ points form a sequence with the following properties: (i) the sequence is asymptotically uniformly distributed (i.e., reasonable test sets get a fair share of points as $N \to \infty$); (ii) maximizing configurations have minimal spherical cap $L_2$-discrepancy; and (iii) maximizing configurations define Quasi-Monte Carlo methods with minimal worst-case error for integration of functions from the Sobolev space over the sphere provided with the distance kernel as reproducing kernel. All three properties follow from Stolarsky's invariance principle, which gives a direct relation between spherical cap $L_2$-discrepancy, worst-case error in the given setting, and the sum-of-distances of a given $N$-point set.

              Points that maximize the sum of distances on the unit square or $d$-dimensional cube will stay on the boundary. This phenomenon is a consequence of potential-theoretic results. We show that a suitable external field forces points into the interior of the set so that a sequence of weighted maximal sum-of-distance points will be asymptotically uniformly distributed over the whole square or $d$-cube.

In our talk we provide preliminary theoretical results, numerics, and comparisons with point sets that arise from other types of "energy functionals", such as Warnock's formula, which allows one to compute the $L_2$-star discrepancy of an $N$-point set on the $d$-cube directly.
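
Warnock's formula mentioned above is short enough to state and implement directly: $D_2^*(P)^2 = (1/3)^d - \frac{2}{N}\sum_{n}\prod_{k}\frac{1-x_{nk}^2}{2} + \frac{1}{N^2}\sum_{m,n}\prod_{k}(1-\max(x_{mk},x_{nk}))$. A small Python sketch (the test points are arbitrary):

    # L2-star discrepancy of an N-point set in [0,1]^d via Warnock's formula.
    import numpy as np

    def l2_star_discrepancy(x):
        n, d = x.shape
        term1 = (1.0 / 3.0) ** d
        term2 = np.prod((1.0 - x ** 2) / 2.0, axis=1).sum() * 2.0 / n
        pairmax = np.maximum(x[:, None, :], x[None, :, :])   # shape (n, n, d)
        term3 = np.prod(1.0 - pairmax, axis=2).sum() / n ** 2
        return np.sqrt(term1 - term2 + term3)

    rng = np.random.default_rng(0)
    print(l2_star_discrepancy(rng.random((128, 3))))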

                Fast random field generation with $H$-matrices
                Monday, 04:35 - 05:00
                Michael Feischl (UNSW Australia)
                Joint work with Frances Kuo (UNSW Australia); Ian Sloan (UNSW Australia)
                Keywords: random fields; h-matrix; PDE with random coefficients

We use the $H$-matrix technology to compute the approximate square root of a covariance matrix in linear complexity. This allows us to generate normal and log-normal random fields on general point sets with optimal complexity. We derive rigorous error estimates which show convergence of the method. Our approach requires only mild assumptions on the covariance function and on the point set. Therefore, it may also be an attractive alternative to the circulant embedding approach, which is well-defined only for regular grids and stationary covariance functions.
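
The dense baseline that the $H$-matrix method accelerates is easy to state; the following Python sketch generates one log-normal field sample via a dense Cholesky factorization (an $O(n^3)$ stand-in for the near-linear $H$-matrix square root; the exponential covariance and all sizes are illustrative).

    # Log-normal random field on a general point set via dense Cholesky.
    import numpy as np

    rng = np.random.default_rng(3)
    pts = rng.random((500, 2))                               # general point set
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    C = np.exp(-dist / 0.25)                                 # exponential covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(pts)))     # matrix "square root"
    field = np.exp(L @ rng.standard_normal(len(pts)))        # one log-normal sample
    print(field.min(), field.max())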

                  Tuesday

                  Integration and approximation on tensor products of Korobov spaces of increasing smoothness
                  Tuesday, 09:00 - 09:25
                  Klaus Ritter (TU Kaiserslautern)
                  Joint work with Michael Gnewuch (Kiel); Mario Hefter (Kaiserslautern); Aicke Hinrichs (Linz); Greg Wasilkowski (Lexington)
                  Keywords: Korobov space; integration; approximation; high-dimensional; infinite-dimensional

                  We consider functions of $d \in \mathbb{N} \cup \{\infty\}$ variables that belong to the unit ball in the corresponding tensor product of Korobov spaces of increasing smoothness. For the $L_2$-approximation problem these spaces have been considered, e.g., by Papageorgiou, Woźniakowski (2010), who have studied tractability in the case $d<\infty$, and by Dung, Griebel (2016), who have studied the case $d=\infty$. We present new results for the integration and the $L_2$-approximation problem in both cases $d< \infty$ and $d= \infty$, which are established by means of embedding theorems between scales of spaces of increasing smoothness and weighted function spaces.

                    General cost bounds for randomized Smolyak algorithm
                    Tuesday, 09:25 - 09:50
                    Marcin Wnuk (University of Kiel)
                    Joint work with Michael Gnewuch (Kiel University)
                    Keywords: Sparse grids, multidimensional integration

We consider randomized sparse grid (Smolyak) algorithms $A(q,d)$ on $d$-dimensional tensor product spaces $F_d = \bigotimes_{j = 1}^{d} F^{(j)}$, where $F^{(j)}$, $j = 1, \ldots, d$, are separable Hilbert spaces of functions. We measure the quality of an algorithm meant to approximate a given linear tensor product operator $S_d \colon F_d \rightarrow G_d$ (where $G_d = \bigotimes_{j = 1}^{d} G^{(j)}$ and $G^{(j)}$, $j = 1, \ldots, d$, are separable Hilbert spaces) via the randomized error $$e^{ran}(A(q,d)) = \Big( \sup_{f \in F_d,\, \lVert f \rVert_{F_d} \leq 1} \mathbb{E}\lVert (S_d-A(q,d))f \rVert^2_{G_d}\Big)^{\frac{1}{2}}$$ and the worst case error $$e^{wce}(A(q,d)) = \Big( \mathbb{E} \sup_{f \in F_d,\, \lVert f \rVert_{F_d} \leq 1}\lVert (S_d-A(q,d))f \rVert^2_{G_d}\Big)^{\frac{1}{2}}.$$ We prove upper bounds on the approximation error of $A(q,d)$ with explicit dependence on the number of variables and the number $N$ of information evaluations used.

As an application we rigorously prove a claim from [1] that the results obtained there hold also for randomized algorithms. Moreover, our findings constitute a substantial generalization of the main results of that paper.

Under additional conditions on $F_d$ our upper bounds can be improved to some extent. We demonstrate this for a multivariate integration problem on a family of Haar wavelet spaces, on which we perform a rigorous comparison of pure QMC quadratures based on scrambled $(t,m,s)$-nets and the Smolyak algorithm.

1. L. Plaskota, G. W. Wasilkowski. Tractability of infinite-dimensional integration in the worst case and randomized settings. Journal of Complexity, 27:505-518, 2011.

                    Approximation in Korobov spaces with exponential weights
                    Tuesday, 09:50 - 10:15
                    Peter Kritzer (Austrian Academy of Sciences)
                    Joint work with Friedrich Pillichshammer (JKU Linz); Henryk Woźniakowski (University of Warsaw/Columbia University)
                    Keywords: Function approximation; L_\infty approximation; Korobov space; Periodic functions; Exponential convergence; Exponential weights

                    We study multivariate $\mathbb{L}_{\infty}$-approximation for a weighted Korobov space of periodic functions for which the Fourier coefficients decay exponentially fast. The weights are defined, in particular, in terms of two sequences $\boldsymbol{a}=\{a_j\}$ and $\boldsymbol{b}=\{b_j\}$ of positive real numbers bounded away from zero. We study the minimal worst-case error $e^{\mathbb{L}_{\infty}\mathrm{-app},\Lambda}(n,s)$ of all algorithms that use $n$ information evaluations from a class $\Lambda$ in the $s$-variate case. We consider two classes $\Lambda$: the class $\Lambda^{\mathrm{all}}$ of all linear functionals and the class $\Lambda^{\mathrm{std}}$ of only function evaluations.

                    We derive necessary and sufficient conditions on the sequences $\boldsymbol{a}$ and $\boldsymbol{b}$ for obtaining exponential error convergence, and also for obtaining various notions of tractability. The results are the same for both classes $\Lambda$.

                    Surprisingly, most results for $\mathbb{L}_{\infty}$-approximation coincide with their counterparts for $\mathbb{L}_2$-approximation. This allows us to deduce also results for $\mathbb{L}_p$-approximation for $p \in [2,\infty]$.

                      Merging worlds: experimental design and big data
                      Tuesday, 10:45 - 11:10
                      Kerrie Mengersen (Queensland University of Technology)
                      Joint work with Christopher C. Drovandi (QUT); Christopher Holmes (Oxford University); James McGree (QUT); Sylvia Richardson (MRC Biostatistics Unit; University of Cambridge); Elizabeth G Ryan (Kings College London)
                      Keywords: Big Data; Bayesian

                      Experimental design is a fundamental tenet of traditional statistics, but its relevance has diminished with the advent of big data. Or has it? We have been exploring ways in which experimental design, particularly adaptive and Bayesian approaches, can be employed to analyse big data, and the consequent benefits that this can bring. These ideas are described in the following article under review: http://eprints.qut.edu.au/87946/

                        Efficient computation of the distribution of the sum of correlated log-normal variables via QMC
                        Tuesday, 11:10 - 11:35
                        Zdravko Botev (UNSW Australia)
                        Joint work with Dan MacKinlay (UNSW); Robert Salomone (UQ)
                        Keywords: multivariate Log-Normal distribution; Rare-event probability; asymptotic efficiency; integrand smoothing; Quasi Monte Carlo

The distribution of the sum of dependent log-normal variables has numerous applications in, for example, the assessment of insurance risk, the valuation of an asset portfolio driven by the Black-Scholes model, and the performance analysis of wireless communication systems. In this talk we propose the first (quasi) Monte Carlo estimator for the efficient computation of the distribution of the sum of dependent log-normals. We show that the estimator is asymptotically efficient, or robust, in both tails of the distribution. The estimator enjoys the additional advantage that it is infinitely smooth, which yields not only additional error reduction via Quasi Monte Carlo, but also a reliable estimator of the corresponding probability density function – an object of significant interest in communication systems.
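
To see why a specialized estimator is needed, consider the naive Monte Carlo baseline below (a Python sketch with illustrative parameters; this is emphatically not the proposed estimator): its relative error blows up in the tails, which is exactly what the asymptotically efficient, smooth estimator of the talk avoids.

    # Naive MC estimate of P(S > gamma) for S = sum_i exp(X_i), X correlated Gaussian.
    import numpy as np

    rng = np.random.default_rng(42)
    d, rho, gamma_lvl = 10, 0.5, 40.0
    C = rho * np.ones((d, d)) + (1.0 - rho) * np.eye(d)      # exchangeable correlation
    X = rng.standard_normal((200_000, d)) @ np.linalg.cholesky(C).T
    S = np.exp(X).sum(axis=1)
    p = (S > gamma_lvl).mean()
    print(p, np.sqrt(p * (1.0 - p) / len(S)))                # estimate, std. error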

                          Parameter identification using QMC in an insulin-glucose model with uncertain data
                          Tuesday, 11:35 - 12:00
                          Adrian Ebert (KU Leuven)
                          Joint work with Dirk Nuyens (KU Leuven); Pieter Gillard (University Hospital Leuven)
                          Keywords: Bayesian; QMC; UQ

We investigate the problem of parameter identification for a mathematical model using uncertain data. In particular, we consider the human insulin-glucose system, which can be modelled by a system of parameter-dependent differential equations, and aim to estimate parameters such as insulin sensitivity. To identify the desired parameters, we apply the concept of Bayesian inversion in combination with quasi-Monte Carlo point sets in the sense of Stuart (2010), Stuart and Schwab (2012), and Dick, Gantner, Le Gia and Schwab (2016). In contrast to the often used Markov chain Monte Carlo (MCMC) methods, we aim for QMC methods that can achieve much faster convergence rates of $O(1/N^{\alpha})$, with $\alpha > 1/2$, for estimating the occurring high-dimensional integrals. The data are uncertain in several ways, as we deal with noisy measurement data, uncertain inputs to the ODE, and modelling uncertainties and assumptions.

                            Joint model and parameter reduction for large-scale Bayesian inversion
                            Tuesday, 02:25 - 02:50
                            Tiangang Cui (Monash University)
                            Joint work with Youssef Marzouk (MIT, USA); Karen Willcox (MIT, USA)
Keywords: Bayesian inversion, model reduction, parameter reduction

Algorithmic scalability to high-dimensional parameters and computational efficiency of numerical solvers are two significant challenges in large-scale, PDE-constrained inverse problems. Here we explore the intrinsic dimensionality in both the state space and the parameter space of inverse problems by analysing the interplay between the noisy data, the ill-posed forward model, and the smoothing prior. The resulting reduced subspaces naturally lead to a scalable and fast model reduction framework for solving large-scale inverse problems with high-dimensional parameters.

                              Sampling representation and recovery in high dimensions on sparse grids
                              Tuesday, 02:50 - 03:15
                              Dung Dinh (Vietnam National University)
                              Keywords: Sparse grids; Sampling recovery; B-spline sampling representation; Mixed smoothness

Sparse grids are an efficient tool in some high-dimensional approximation and recovery problems involving a large number of variables. We constructed linear algorithms of sampling recovery and cubature formulas on sparse Smolyak grids, thickened by a large parameter $m$, of $d$-variate periodic functions having Lipschitz-Hölder mixed smoothness $\alpha > 0$, and studied their optimality. Our construction is based on a sampling representation by the Faber series and, more generally, high-order B-spline series generated from a quasi-interpolation. We established lower bounds (for $\alpha \le 2$) and upper bounds of the error of optimal sampling recovery and optimal integration on Smolyak grids, explicitly in $d$, $m$ and the number $\nu$ of active variables of functions, when the dimension (the number of variables) $d$ may be very large.

                                Applying quasi-Monte Carlo integration to a parametrised elliptic eigenproblem
                                Tuesday, 03:45 - 04:10
                                Alexander Gilbert (UNSW Australia)
                                Joint work with Ivan G. Graham (University of Bath); Frances Y. Kuo (UNSW Australia); Robert Scheichl (University of Bath); Ian H. Sloan (UNSW Australia)
                                Keywords: QMC; eigenproblem; stochastic PDE

                                In this talk we study an elliptic eigenproblem, with a random coefficient which can be parametrised by infinitely many stochastic parameters. The randomness in the coefficient also results in randomness in the eigenvalues and corresponding eigenfunctions. As such, our quantity of interest will be the expected value, with respect to the stochastic parameters, of the smallest eigenvalue which we formulate as an integral over the infinite-dimensional parameter domain. Our approximation involves three steps: truncating the stochastic dimension, discretising the spatial domain using finite elements and approximating the now finite but still high-dimensional integral.

To approximate the high-dimensional integral we use quasi-Monte Carlo (QMC) methods. These are deterministic or quasi-random quadrature rules that can be proven to be very efficient for the numerical integration of certain classes of high-dimensional functions. QMC methods have previously been applied to similar elliptic source problems; however, the existing framework for a rigorous analysis of the integration error does not cover the eigenvalue problem. We show that the minimal eigenvalue belongs to the spaces required for QMC theory, outline the approximation algorithm and provide numerical results.

                                  Tight framelets and fast framelet transforms on manifolds
                                  Tuesday, 04:10 - 04:35
                                  Yu Guang Wang (La Trobe University)
                                  Joint work with Xiaosheng Zhuang (City University of Hong Kong)
                                  Keywords: tight framelets; affine systems; compact Riemannian manifold; quadrature rule; filter bank; FFT; fast spherical harmonic transform; Laplace-Beltrami operator; unitary extension principle

Data in practical applications, for instance data on graphs or in astrophysics, often have structure and can be viewed as sampled from a manifold. Smooth and compact Riemannian manifolds $\mathcal{M}$, with spheres, tori, cubes and graphs as important examples, form the setting of this work. In this work with Xiaosheng Zhuang, we construct a type of tight framelets using quadrature rules on $\mathcal{M}$ to represent the data (or a function) and exploit the derived framelets to process the data (for instance, inpainting an image on $\mathcal{M}$).

One critical computation for framelets is to compute, from the framelet coefficients of the input data (which are assumed given at the highest level), the framelet coefficients at lower levels, and also to evaluate the function values at new nodes using the framelet representation. We design an efficient computational strategy, called the fast framelet transform (FMT), to compute the framelet coefficients and to recover the function. Assuming the fast Fourier transform (FFT) and using polynomial-exact quadrature rules on the manifold $\mathcal{M}$, the FMT has the same computational complexity as the FFT. Numerical examples illustrate the efficiency and accuracy of the algorithm.

                                    Likelihood approximation with hierarchical matrices for large spatial datasets
                                    Tuesday, 04:35 - 05:00
                                    Alexander Litvinenko (KAUST)
                                    Joint work with Ying Sun (KAUST, Saudi Arabia); Marc G. Genton (KAUST, Saudi Arabia); David E. Keyes (KAUST, Saudi Arabia)
                                    Keywords: Likelihood approximation; parameter identification; hierarchical matrices; MMLE; uncertain covariance length; HLIB, HLIBpro; H-matrix approximation of covariance, determinant; H-Cholesky factorisation

In this work, we use available measurements to estimate unknown/uncertain hyper-parameters (variance, smoothness parameter, and covariance lengths) of covariance functions [4]. We do this by searching for the maximum of the joint log-likelihood function, which depends non-linearly on a covariance matrix. The computational issue is that this covariance matrix can be very large and depends on 1-5 unknown hyper-parameters. The hierarchical ($\mathcal{H}$)-matrix technique reduces the computational cost of evaluating the log-likelihood function dramatically, from cubic to almost linear [2,1]. We observed that employing an $\mathcal{H}$-matrix approximation (even with a small rank) does not change the log-likelihood much. We considered one synthetic example and one example with real data coming from spatial modeling of moisture. For both numerical examples, we were able to calculate the maximum of the log-likelihood and the maximum likelihood estimator (MLE), in 1D, 2D, and 3D (when all hyper-parameters are unknown). We studied the contributions of the $\mathcal{H}$-matrix approximation error and the statistical error, and how the log-likelihood function and the MLE depend on the number of locations. We demonstrated that the $\mathcal{H}$-matrix approximation errors are stable with respect to the Matérn hyper-parameters. Since an $\mathcal{H}$-matrix approximation is much cheaper to store than the original matrix, a much larger number of locations can be considered. With the parallel $\mathcal{H}$-matrix library HLIBPro we can compute the log-likelihood function for 512,000 locations in a few minutes on a desktop machine. Further possible applications include spatial and environmental statistics [3] and data analysis. Additionally, we discuss a possible extension to low-rank tensors and show that covariance matrices can be very well approximated in different low-rank tensor formats.

                                    1. W. Hackbusch. Hierarchical matrices: Algorithms and Analysis, volume 49 of Springer Series in Comp. Math. Springer, 2015.
                                    2. B. N. Khoromskij, A. Litvinenko, and H. G. Matthies. Application of hierarchical matrices for computing the Karhunen–Loève expansion. Computing, 84(1-2):49–67, 2009.
                                    3. W. Nowak and A. Litvinenko. Kriging and spatial design accelerated by orders of magnitude: combining low-rank covariance approximations with FFT-techniques. Mathematical Geosciences, 45(4):411–435, 2013.
                                    4. Y. Sun and M. L. Stein. Statistically and computationally efficient estimating equations for large spatial datasets. Journal of Computational and Graphical Statistics, 25(1):187–208, 2016.
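
For orientation, the quantity being accelerated is the standard Gaussian log-likelihood; the Python sketch below evaluates it with a dense Cholesky factorization, the $O(n^3)$ computation that the $\mathcal{H}$-matrix approximation replaces (the exponential covariance and the two hyper-parameters are illustrative).

    # Negative Gaussian log-likelihood 0.5*(log det C + z' C^{-1} z + n log 2*pi).
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def neg_log_likelihood(theta, pts, z):
        ell, sigma2 = theta                                   # length, variance
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        C = sigma2 * np.exp(-d / ell)                         # exponential covariance
        cf = cho_factor(C + 1e-8 * np.eye(len(z)))
        logdet = 2.0 * np.log(np.diag(cf[0])).sum()
        return 0.5 * (logdet + z @ cho_solve(cf, z) + len(z) * np.log(2.0 * np.pi))

The MLE is then obtained by minimizing this function over the hyper-parameters with a standard optimizer.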

                                    Wednesday

                                    On the representation of symmetric and antisymmetric tensors
                                    Wednesday, 09:00 - 09:25
                                    Wolfgang Hackbusch (Max-Planck-Institut Mathematik in den Naturwissenschaften)
                                    Keywords: tensor representation, symmetric tensors

Various tensor formats are used for the data-sparse representation of large-scale tensors. Here we investigate how symmetric and antisymmetric tensors can be represented.

                                      Preconditioned optimization algorithms on manifolds for Tucker tensor approximation
                                      Wednesday, 09:25 - 09:50
                                      Alexander Howse (University of Waterloo)
                                      Joint work with Hans De Sterck (Monash University)
                                      Keywords: tensor decomposition; matrix manifold optimization; nonlinear preconditioning

                                      We discuss how nonlinear preconditioning and matrix manifold optimization can be combined to create efficient algorithms for computing approximate Tucker decompositions (ATDs) of tensors. Tensors in Tucker format are represented as a multilinear product of a core tensor and a set of factor matrices, one for each dimension. We compute ATDs by minimizing error in the Frobenius norm, subject to orthonormality constraints on low-rank factor matrices.

                                      Nonlinear preconditioning is a strategy by which nonlinear algebraic solvers can be combined to improve convergence [2]. We implement nonlinear preconditioning for popular algorithms, such as the nonlinear conjugate gradient [1], nonlinear GMRES (a.k.a. Anderson acceleration) [1], and L-BFGS quasi-Newton methods. For the ATD problem we use the higher order orthogonal iteration, the standard workhorse algorithm for computing ATDs, as a nonlinear preconditioner.

                                      Matrix manifold optimization is an optimization framework where the set of feasible solutions is a set of matrices with specific properties [3]. We discuss the benefits of matrix manifold algorithms, and how to implement nonlinear preconditioning for these algorithms.

                                      Numerical results comparing matrix manifold algorithms with and without nonlinear preconditioning show that nonlinear preconditioning results in algorithms which converge faster and more robustly than the existing algorithms.

                                      1. H. De Sterck and A.J.M. Howse. Nonlinearly preconditioned optimization on Grassmann manifolds for computing approximate Tucker tensor decompositions. SIAM Journal on Scientific Computing, 38(2):A997–A1018, 2016.
                                      2. P.R. Brune, M.G. Knepley, B.F. Smith, and X. Tu. Composing scalable nonlinear algebraic solvers. SIAM Review, 57(4):535–565, 2015.
3. P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization algorithms on matrix manifolds. Princeton University Press, 2008.
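
A compact Python sketch of the higher order orthogonal iteration (HOOI) mentioned above, for a rank-$(r,r,r)$ Tucker approximation of a 3-way array, may help fix ideas; this is the plain workhorse, without the nonlinear preconditioning or manifold machinery of the talk.

    # HOOI: alternately update each factor from the SVD of the contracted tensor.
    import numpy as np

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hooi(T, r, iters=50):
        U = [np.linalg.svd(unfold(T, k))[0][:, :r] for k in range(3)]  # HOSVD init
        for _ in range(iters):
            for k in range(3):
                Y = T
                for j in range(3):
                    if j != k:                     # contract all modes except k
                        Y = np.moveaxis(np.tensordot(U[j].T, Y, axes=(1, j)), 0, j)
                U[k] = np.linalg.svd(unfold(Y, k))[0][:, :r]
        core = T
        for j in range(3):
            core = np.moveaxis(np.tensordot(U[j].T, core, axes=(1, j)), 0, j)
        return core, U
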
Canonical rank of function-generated data on a uniform mesh in an interval
                                      Wednesday, 09:50 - 10:15
                                      Kishore Kumar Naraparaju (Bits Pilani Hyderabad)
                                      Joint work with Boris N. Khoromskij (Max-Planck-Institute for Mathematics in the Sciences); Jan Schneider (Friedrich Schiller University Jena)
                                      Keywords: Quantics, discretized function, uniform grid, long vector, $L^{th}$ order tensor, Canonical approximation, Alternating least squares methods, Canonical rank

In this talk we present an algorithm to obtain the canonical rank of data generated by a function discretized on a large uniform grid in an interval on the real line. The discretized function generates a long vector of length $2^{L}$. This long vector is reshaped into an $L^{th}$-order tensor (quantics). We obtain the canonical rank of the reshaped tensor using the Alternating Least Squares (ALS) method for higher-order tensors. We describe the algorithm and obtain the canonical rank of some special functions which appear in many applications (the functions are discretized on a uniform grid of size $2^{15}$, so the reshaped tensor is of order 15).

1. J. D. Carroll and J. J. Chang. Analysis of individual differences in multidimensional scaling via an N-way generalization of 'Eckart-Young' decomposition. Psychometrika, 35:283-319, 1970.
2. P. Comon, X. Luciani and A. L. F. de Almeida. Tensor decompositions, alternating least squares and other tales. Journal of Chemometrics, 23:393-405, 2009.
3. I. Domanov. Study of canonical polyadic decomposition of higher order tensors. Doctoral thesis, KU Leuven.
4. G. H. Golub and C. F. Van Loan. Matrix computations, 4th edition. Johns Hopkins Studies in the Mathematical Sciences.
5. R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis. UCLA Working Papers in Phonetics, 16:1-84, 1970. http://publish.uwo.ca/~harshman/wpppfac0.pdf
6. T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
7. B. N. Khoromskij. $O(d \log N)$-quantics approximation of $N$-$d$ tensors in high-dimensional numerical modelling. J. Constr. Appr., 34(2):257-289, 2011.
8. B. N. Khoromskij and I. Oseledets. Quantics-TT collocation approximation of parameter-dependent and stochastic elliptic PDEs. Comp. Meth. in Applied Math., 10(4):34-365, 2010.
9. B. N. Khoromskij and I. Oseledets. Quantics-TT approximation of elliptic solution operators in higher dimensions. Russ. J. Numer. Anal. Math. Modelling, 26(3):303-322, 2011.
10. Na Li, S. Kindermann and C. Navasca. Some convergence results on the regularized alternating least-squares method for tensor decomposition. arXiv:1109.3831v1.
11. C. Navasca, L. De Lathauwer and S. Kindermann. Swamp reducing technique for tensor decompositions. EUSIPCO 2008.
12. A. Uschmajew. Local convergence of the alternating least squares algorithm for canonical tensor approximation. SIAM J. Mat. Anal. App., 33(2):639-652, 2012.
13. C. F. Van Loan. Lectures, http://issnla2010.ba.cnr.it/Course_Van_Loan.htm
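
The quantics reshaping step itself is a one-liner; the Python sketch below reshapes a function sampled on a grid of size $2^{15}$ into a 15th-order tensor and prints the numerical ranks of its first sequential matricizations (the ALS computation of the canonical rank is not reproduced here, and the test function is arbitrary).

    # Quantics reshaping: a length-2^L vector becomes an L-th order 2x2x...x2 tensor.
    import numpy as np

    L = 15
    t = np.linspace(0.0, 1.0, 2 ** L, endpoint=False)
    v = np.exp(-t) * np.sin(10.0 * np.pi * t)                # discretized function
    T = v.reshape([2] * L)                                   # L-th order tensor

    for k in range(1, 6):                                    # matricization ranks
        s = np.linalg.svd(T.reshape(2 ** k, -1), compute_uv=False)
        print(k, int((s > 1e-10 * s[0]).sum()))
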
                                      Carl's inequality, Schatten classes, and optimality of algorithms
                                      Wednesday, 10:45 - 11:10
                                      Jan Vybiral (Charles University)
                                      Joint work with Aicke Hinrichs (JKU Linz, Austria); Anton Kolleck (TU Berlin, Germany); Joscha Prochno (University of Hull, UK)
                                      Keywords: Carl's inequality; entropy numbers; Gelfand numbers; Schatten classes

The classical inequality of Carl states that for any two Banach spaces $X$ and $Y$ and any $\alpha>0$ there exists a constant $\gamma_\alpha>0$ such that $$ \sup_{1\le k\le n}k^{\alpha}e_k(T)\le \gamma_\alpha \sup_{1\le k\le n} k^\alpha c_k(T) $$ holds for all linear and bounded operators $T:X\to Y$. Here $e_k(T)$ is the $k$-th entropy number of $T$ and $c_k(T)$ is the $k$-th Gelfand number of $T$. This inequality has been used to obtain lower bounds on Gelfand numbers, which in turn are a useful tool in the study of optimality of algorithms. We extend Carl's inequality to quasi-Banach spaces, which allows us to extend this approach to the area of sparse recovery, where $\ell_p^N$ spaces play an important role also for $p<1$. Further, we give estimates of entropy numbers of embeddings of Schatten classes of matrices. We close with an application of these results to the optimality of algorithms for low-rank matrix recovery and matrix completion.

                                        Novel results for the anisotropic sparse grid quadrature
                                        Wednesday, 11:10 - 11:35
                                        Helmut Harbrecht (University of Basel)
                                        Joint work with Abdul Lateef Haji-Ali (University of Oxford); Michael Peters (University of Basel); Markus Siebenmorgen (University of Bonn)
                                        Keywords: sparse grid quadrature; anisotropic quadrature; partial differential equation; random input parameter

                                        This talk is dedicated to the anisotropic sparse Gaussian quadrature for functions which are analytically extendable into an anisotropic tensor product domain. Based on a novel estimate for the cardinality of the anisotropic index set, we are able to substantially improve the error versus cost estimates of the anisotropic sparse grid quadrature. To validate the theoretical findings, we use the anisotropic sparse Gaussian quadrature to compute the moments of elliptic partial differential equations with random input parameters.

1. A. L. Haji-Ali, H. Harbrecht, M. Peters, M. Siebenmorgen. Novel results for the anisotropic sparse quadrature and their impact on random diffusion problems. arXiv:1509.05521, 2015.

                                        A meshfree finite volume method with application to multi-phase porous media models
                                        Wednesday, 02:25 - 02:50
                                        Ian Turner (Queensland University of Technology)
                                        Joint work with Brody Foy (Department of Computer Science, University of Oxford), Patrick Perré (LGPM, CentraleSupélec, Université Paris-Saclay)
Keywords: meshless methods; curvature-constrained least squares; exponential Euler method; parameter estimation

                                        We outline a new Meshfree Finite Volume Method (MFVM) that uses a curvature-constrained, least squares interpolant based on Clough-Tocher elements to approximate fluxes in a disjoint finite volume setting. The effectiveness of this new method for solving complex mathematical models based on strong-form PDEs is demonstrated. The complete derivation of the MFVM is presented and solutions of benchmark problems are exhibited to assess the accuracy and computational performance of the underlying algorithms.

We then close with an application of the MFVM to a nonlinear multi-phase drying model that is used for simulating the transport phenomena evident at the microscale of wood. Volume-averaged, macroscopic conservation laws typically contain effective parameters, such as diffusivity, that can be estimated from the micro-scale porous structure via homogenization techniques. As an alternative to this approach, we construct a virtual representation of the micro-structure from a pore morphology image so that a realistic two-phase (gas and solid) domain can be created to model the separate flow behaviours through each phase, thus allowing the estimation of the averaged macro-scale behaviour. We show that the diffusivity estimates provided by the MFVM for such models match previously published results using the Lattice Boltzmann method very closely.

                                          Low rank approximations for multiscale modelling
                                          Wednesday, 02:50 - 03:15
                                          Markus Hegland (Australian National University)
                                          Keywords: Low rank tensor approximation

Low-rank tensor approximations have been considered for the solution of high-dimensional problems in quantum chemistry, molecular biology, machine learning and other fields. These methods can also be used for certain multiscale problems. The underlying techniques are based on matrix factorisations like the LU and QR factorisations and the SVD. Many traditional approximations can also be cast in this framework, in particular the piecewise polynomial approximations which are commonly used in finite element methods. In my talk I will review some of the underlying ideas and present computational experiments done for problems motivated by the wood-drying modelling of Ian Turner and his group.
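
The basic building block referred to above is the truncated SVD, which yields the best rank-$k$ approximation of a matrix in the spectral and Frobenius norms (Eckart-Young). A minimal Python sketch with an arbitrary test matrix:

    # Best rank-k approximation via truncated SVD; spectral error equals sigma_{k+1}.
    import numpy as np

    A = np.fromfunction(lambda i, j: 1.0 / (1.0 + i + j), (200, 200))
    U, s, Vt = np.linalg.svd(A)
    for k in (2, 5, 10):
        Ak = (U[:, :k] * s[:k]) @ Vt[:k]                     # rank-k truncation
        print(k, np.linalg.norm(A - Ak, 2) / np.linalg.norm(A, 2))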

                                            Thursday

                                            Operator based multi-scale analysis of simulation bundles
                                            Thursday, 09:00 - 09:25
                                            Jochen Garcke (University of Bonn)
                                            Joint work with Rodrigo Iza-Teran (Fraunhofer SCAI)
                                            Keywords: high-dimensional data analysis; reduced basis method; numerical simulation bundles

We propose a new mathematical data analysis approach, based on the mathematical principle of symmetry, for the post-processing of bundles of finite element data from computer-aided engineering. We consider these data as points in a high-dimensional space, whose dimension is the number of grid points. Since all the numerical simulation data stem from the numerical solution of the same partial differential equation, there exists a set of transformations, albeit unknown, which map simulation to simulation. The transformations can be obtained indirectly by constructing a transformation-invariant positive definite operator valid for all simulations.

The eigenbasis of such an operator turns out to be a convenient basis for the simulation set at hand, for two reasons. First, the spectral coefficients decay very fast, depending on the smoothness of the function being represented, and therefore a reduced multi-scale representation of all simulations can be obtained, which depends on the employed operator. Second, at each level of the eigendecomposition the eigenvectors can be seen to recover different independent variation modes like rotation, translation or local deformation. Furthermore, this representation enables the definition of a new distance measure between simulations using the spectral coefficients. From a theoretical point of view, the space of simulations modulo a transformation group can be expressed conveniently in the operator eigenbasis as orbits in the quotient space with respect to the specific transformation group.

Based on this mathematical framework we study several examples. We show that for time-dependent datasets from engineering simulations only a few spectral coefficients are necessary to describe the data variability, while the coarse variations get separated from the finer ones. Low-dimensional structures are obtained in this way, which are able to capture information about the underlying high-dimensional simulation space. Due to the achieved dimensionality reduction, we obtain an effective mechanism for the analysis of many numerical simulations. Furthermore, we investigate whether the derived representation of the simulation space can also be used in the context of reduced basis methods.

                                              From data to uncertainty: efficient data-driven adaptive sparse grids for UQ
                                              Thursday, 09:25 - 09:50
                                              Dirk Pflüger (University of Stuttgart)
                                              Joint work with Fabian Franzelin (University of Stuttgart); David Pfander (University of Stuttgart)
                                              Keywords: adaptive sparse grids, UQ, Bayesian inversion, density estimation

                                              We consider non-intrusive stochastic collocation for uncertainty quantification, as our applications require us to treat the underlying simulation code as a black box. We propose spatially adaptive sparse grids for both the estimation of the stochastic densities and the stochastic collocation.

                                              With sparse grids, the numerical discretization is still possible in higher-dimensional settings, and the integrated sparse grid approach leads to fast and efficient algorithms and implementations. This allows us to start with data that is provided by measurements and to combine the estimated densities with the model function's surrogate without introducing additional sampling or approximation errors. Bayesian inference and Bayesian updating allow us to incorporate observations and to adaptively refine the surrogate based on the posterior. Efficient and scalable algorithms for the evaluation of the surrogate function are available, which can achieve close-to-peak performance even on hybrid hardware.

1. F. Franzelin and D. Pflüger. From data to uncertainty: An efficient integrated data-driven sparse grid approach to propagate uncertainty. In Sparse Grids and Applications – Stuttgart 2014, LNCSE, p. 29–49. Springer, 2016.
2. F. Franzelin, P. Diehl, D. Pflüger, and A. Schweitzer. Non-intrusive uncertainty quantification with sparse grids for multivariate peridynamic simulations. In Meshfree Methods for Partial Differential Equations VII, p. 115–143. Springer, 2015.
3. D. Pflüger. Spatially Adaptive Sparse Grids for High-Dimensional Problems. Verlag Dr. Hut, 2010.

                                              The combination technique applied to the computation of quantities of interest
                                              Thursday, 09:50 - 10:15
                                              Yuancheng Zhou (Australian National University)
                                              Joint work with Markus Hegland (ANU); Brendan Harding (ANU)
                                              Keywords: variant combination technique, quantity of interest

In this paper, we discuss the computation of quantities of interest of the form $$Q(u)=\int_{\Omega}\phi(x,u(x))\,dx,$$ where $u$ is the solution of a partial differential equation and $\phi(x,y)$ is a continuous real function. We consider in more detail the case where $u$ is a solution of the Vlasov equation.

In earlier works, this has been done by computing a numerical approximation $u_{\underline{i}}$ of $u$ and then evaluating the integral $$Q(u_{\underline{i}})=\int_\Omega\phi(x,u_{\underline{i}}(x))\,dx$$ numerically by a quadrature rule $Q_h$ such that $$Q_h(u_{\underline{i}})=\sum_{k=1}^{N}w_k\phi(\xi_k,u_{\underline{i}}(\xi_k))$$ for some weights $w_k$ and nodes $\xi_k$ given by the quadrature rule.

Here we will assume that algorithms and software (the legacy code) are available which compute both $u_{\underline{i}}$ and the quantity of interest $Q_h(u_{\underline{i}})$. The forms of $\phi(x,y)$ we consider most frequently are moments, where $\phi(x,y)=x^ky$, and $l_p$-type norms, where $\phi(x,y)=y^k$, of the solution $u(x)$ of the Vlasov equations. In this paper, we will focus on the $l_2$ norm. For our computational experiments we will use the GENE code. GENE (Gyrokinetic Electromagnetic Numerical Experiment) is an open-source plasma microturbulence code which can be used to efficiently compute gyroradius-scale fluctuations and the resulting transport coefficients in magnetized fusion/astrophysical plasmas (from genecode.org). In particular, we can use GENE to compute the solution of the Vlasov equation and some of the quantities of interest described in the GENE documentation.

                                              The aim of this paper is to improve the performance of the legacy code without recoding major parts of it. In particular, we investigate variants of the sparse grid combination technique applied to the quantity of interest. Thus the legacy code is run multiple times for different discretizations but for the same physical problem and quantity of interest. The approximation based on these computations is a linear combination of the computed quantities.

Earlier attempts use one of the following approaches: either the PDE is solved using a sparse grid basis, or the legacy code is used to solve the PDE (with the usual basis and regular grids) and the solution of the PDE is then approximated using the combination technique. In both cases the quantity of interest is then computed from the approximated solution using a quadrature method. In contrast, we here compute the quantities of interest with the legacy code and then apply the variants of the combination technique to these quantities of interest directly.
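The approach can be sketched as follows, with the hypothetical `solve_qoi(i, j)` standing in for a legacy-code run on a regular $2^i \times 2^j$ grid, and the classical two-dimensional combination coefficients applied directly to the returned QoI values:

```python
# Sketch of the classical 2D combination technique applied directly to the
# quantities of interest Q_h(u_i) returned by separate solver runs.
import numpy as np

def solve_qoi(i, j):
    # Toy stand-in for a legacy run: "solve" u(x,y) = sin(pi x) sin(pi y) on
    # a 2^i x 2^j grid and return Q(u) = \int u^2 (an l_2-type QoI) by the
    # trapezoidal rule on that grid.
    x = np.linspace(0.0, 1.0, 2**i + 1)
    y = np.linspace(0.0, 1.0, 2**j + 1)
    u = np.sin(np.pi * x)[:, None] * np.sin(np.pi * y)[None, :]
    wx = np.full(x.size, x[1] - x[0]); wx[[0, -1]] *= 0.5
    wy = np.full(y.size, y[1] - y[0]); wy[[0, -1]] *= 0.5
    return float(wx @ u**2 @ wy)

def combination_qoi(n):
    # Classical coefficients: +1 on the diagonal |i| = n, -1 on |i| = n - 1,
    # applied to the QoI values rather than to the solutions themselves.
    q = sum(solve_qoi(i, n - i) for i in range(1, n))
    q -= sum(solve_qoi(i, n - 1 - i) for i in range(1, n - 1))
    return q

print(combination_qoi(6))   # approximates Q(u) = 1/4
```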

                                                Bayesian probabilistic numerical computation
                                                Thursday, 10:45 - 11:10
                                                Chris Oates (University of Technology Sydney)
                                                Joint work with Jon Cockayne (University of Warwick); Tim Sullivan (Free University of Berlin); Mark Girolami (Imperial College London and the Alan Turing Institute)
                                                Keywords: Inverse Problem; Regular Conditional Probability; Sequential Monte Carlo; Computational Pipeline; Nonlinear Partial Differential Equation

In this work, numerical computation - such as the numerical solution of a PDE - is treated as an inverse problem in its own right. The popular Bayesian approach to inversion is considered, wherein a posterior distribution is induced over the object of interest by conditioning a prior distribution on the same finite information that would be used in a classical numerical method. The main technical consideration is that the data in this context are non-random, so the standard Bayes' theorem does not hold in general. Sufficient conditions will be presented under which such Bayesian probabilistic numerical methods are well-posed, and a novel sequential Monte Carlo method will be shown to provide consistent estimation of the posterior distribution. The paradigm is extended to computational "pipelines", through which a distributional quantification of numerical error can be propagated. Additional sufficient conditions are presented for when such propagation can be endowed with a globally coherent Bayesian interpretation, based on a novel class of probabilistic graphical models designed to represent a computational workflow. The concepts are illustrated through explicit numerical experiments involving both linear and nonlinear PDE models. Full details are available in [1].

1. J. Cockayne, C.J. Oates, T. Sullivan, M. Girolami, Bayesian Probabilistic Numerical Methods. Technical report, 2017.
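For the simplest numerical task, integration, the probabilistic viewpoint can be made concrete in a few lines: a Gaussian process prior is conditioned on the same finite evaluations a classical quadrature rule would use, yielding a posterior distribution over the value of the integral. The sketch below is a generic illustration of this viewpoint, not the PDE setting of [1], and all parameters are illustrative:

```python
# Condition a Gaussian process prior on finitely many function evaluations
# and read off the posterior mean and variance of \int_0^1 f(x) dx.
import numpy as np

def k(a, b, ell=0.2):
    # Squared-exponential covariance kernel.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

f = lambda x: np.sin(3 * x) + x           # integrand on [0, 1]
X = np.linspace(0.0, 1.0, 8)              # the finite information
y = f(X)

# Kernel integrals z_i = \int k(x, X_i) dx and \int\int k, by fine quadrature.
s = np.linspace(0.0, 1.0, 2001)
w = np.full(s.size, s[1] - s[0]); w[[0, -1]] *= 0.5
z = k(s, X).T @ w
c0 = w @ k(s, s) @ w                      # prior variance of the integral

K = k(X, X) + 1e-10 * np.eye(X.size)      # jitter for numerical stability
post_mean = z @ np.linalg.solve(K, y)     # posterior mean of the integral
post_var = c0 - z @ np.linalg.solve(K, z)
print(post_mean, post_var)                # true integral is about 1.16333
```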
                                                A novel approach to calibrate populations of models to data distributions
                                                Thursday, 11:10 - 11:35
                                                Kevin Burrage (Queensland University of Technology)
                                                Joint work with Brodie Lawson (QUT); Pamela Burrage (QUT); Christopher Drovandi (QUT); Blanca Rodriguez (Oxford); Nicole Cusimano (BCAM)
                                                Keywords: populations of models; atrial cardiac electrophysiology models

The populations-of-models approach is becoming a powerful approach for quantifying and capturing uncertainty in experimental and observational data. Fundamental to this approach are two issues: how to sample parameter space effectively and how to calibrate the models. We first give an overview of these issues and then present a new approach, based on sequential Monte Carlo and simulated annealing, to calibrate a model against the underlying distributions in the biomarker data, as opposed to the standard approach of calibrating against ranges. We then illustrate these new ideas using the Courtemanche atrial action potential model and a data set of two cohorts of people: those in sinus rhythm and those with chronic atrial fibrillation. We attempt to describe the new insights that this approach reveals.

                                                  Optimal weighted least-squares approximation
                                                  Thursday, 11:35 - 12:00
                                                  Giovanni Migliorati (Université Pierre et Marie Curie)
                                                  Joint work with Albert Cohen (Université Pierre et Marie Curie)
                                                  Keywords: multivariate approximation; least squares; error estimates; high dimension

                                                  We present some results on the stability and accuracy of weighted least-squares approximations in polynomial spaces with evaluations at random points, and discuss some numerical algorithms for sampling the optimal probability density in high dimension.
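A one-dimensional sketch of the optimal weighted least-squares recipe, sampling from the density $\sigma = \frac{1}{m}\sum_{j} \varphi_j^2$ and weighting the samples by $1/\sigma$; the Legendre setting, the target function and all parameters below are illustrative:

```python
# Weighted least squares with the optimal sampling density for an
# m-dimensional polynomial space with orthonormal basis phi_0,...,phi_{m-1}.
import numpy as np
from numpy.polynomial import legendre

m = 10

def phi(x):
    # Legendre polynomials, orthonormal w.r.t. the uniform probability
    # measure dx/2 on [-1, 1]:  phi_j = sqrt(2j + 1) * P_j.
    V = legendre.legvander(np.asarray(x), m - 1)
    return V * np.sqrt(2.0 * np.arange(m) + 1.0)

def sigma(x):
    return np.mean(phi(x)**2, axis=1)     # optimal density, bounded by m

rng = np.random.default_rng(1)
def sample(n):
    out = []
    while len(out) < n:                   # rejection sampling (sigma <= m)
        x = rng.uniform(-1.0, 1.0, 4 * n * m)
        u = rng.uniform(0.0, m, x.size)
        out.extend(x[u < sigma(x)][: n - len(out)])
    return np.array(out)

f = lambda x: np.exp(x) * np.cos(4 * x)
n = 4 * m * int(np.log(m) + 1)            # of the order m log m samples
X = sample(n)
W = np.sqrt(1.0 / sigma(X))               # sqrt of weights w = 1 / sigma
c, *_ = np.linalg.lstsq(phi(X) * W[:, None], W * f(X), rcond=None)

xs = np.linspace(-1.0, 1.0, 5)
print(phi(xs) @ c - f(xs))                # pointwise errors of the fit
```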

                                                    Approximation to master equations by radial basis functions with interface tracking
                                                    Thursday, 02:25 - 02:50
                                                    Ivan Kryven (University of Amsterdam)
                                                    Keywords: master equations; radial basis functions; Level-set method; adaptive basis

For master equations that govern population dynamics, the term 'high-dimensional' is the rule rather than the exception. I mean populations here in the broadest sense: whether these are molecules, bacteria, colloids, people, firms, or connected components in a random network, it is natural to represent the system state as a multidimensional probability (or mass) density function. This allows addressing uncertainty in the originally vector-valued system state, or it can reflect a statistical view of the system as a population of samples with deviating vector-valued properties. I focus on the case where, even though the distribution that solves the master equation is multivariate, it is supported only on a 'small' manifold compared to the whole state space. Here, the support is defined as the region where the probabilities are larger than a pre-defined threshold. Such a manifold may have a non-trivial shape, and may even change its topology as the distribution progresses in time. Radial basis functions are employed to approximate the distribution in the interior of the manifold. At the same time, the shape of the manifold is tracked by the level set method, so that the approximation basis can be adapted to changes of the distribution support. The talk is illustrated with examples inspired by problems from systems biology [1], polymer chemistry, and colloidal physics [2].

1. I. Kryven, S. Röblitz, and Ch. Schütte. Solution of the chemical master equation by radial basis functions approximation with interface tracking, BMC Systems Biology, 2015.
2. I. Kryven, S. Lazzari, G. Storti. Population balance modeling of aggregation and coalescence in colloidal systems, Macromolecular Theory and Simulations 23(3), 170–181, 2014.
                                                    Equation-free analysis of spike timing dependent plasticity
                                                    Thursday, 02:50 - 03:15
                                                    Carlo Laing (Massey University)
                                                    Joint work with Yannis Kevrekidis (Princeton University)
                                                    Keywords: Spike timing dependent plasticity; Equation-free; Model reduction; Neuronal network

                                                    Spike-timing dependent plasticity is the process by which the strengths of connections between neurons are modified as a result of the precise timing of the action potentials fired by the neurons. We consider a model consisting of one integrate-and-fire neuron receiving excitatory inputs from a large number of Poisson neurons whose synapses are plastic. When correlations are introduced between the firing times of these input neurons, the distribution of synaptic strengths shows interesting, and apparently low-dimensional, dynamical behaviour. This behaviour is analysed in two different parameter regimes using equation-free techniques, which bypass the explicit derivation of the relevant low-dimensional dynamical system. We demonstrate both coarse projective integration (which speeds up the time integration of a dynamical system) and the use of recently-developed data-mining techniques to identify the appropriate low-dimensional description of the complex dynamical systems in our model.
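Coarse projective integration itself can be sketched generically: run the microscopic simulator for a short healing burst, estimate the time derivative of the coarse variable from the burst, and take a large extrapolation step, without ever deriving the coarse equation. The fast-slow toy model below is illustrative and unrelated to the plasticity model of the talk:

```python
# Coarse projective integration on a toy fast-slow system: the fast variable
# y relaxes on a time scale eps onto the slow manifold y ~ x^2, so the
# (never written down) coarse dynamics are x' = x^2 - x.
import numpy as np

eps = 1e-3

def micro_burst(x, y, dt, nsteps):
    # Black-box fine-scale simulator (explicit Euler), returning the coarse
    # variable along the burst and the final fast state.
    xs = [x]
    for _ in range(nsteps):
        x, y = x + dt * (y - x), y - dt * (y - x**2) / eps
        xs.append(x)
    return np.array(xs), y

x, y = 0.2, 1.0                     # "lifting": y is initialized arbitrarily
t, T, Dt = 0.0, 3.0, 0.2
dt, nsteps = 2e-4, 50
while t < T:
    xs, y = micro_burst(x, y, dt, nsteps)   # short healing burst
    slope = (xs[-1] - xs[-2]) / dt          # coarse derivative estimate
    x = xs[-1] + Dt * slope                 # projective (extrapolation) step
    t += nsteps * dt + Dt
print(x)   # tracks the coarse dynamics x' = x^2 - x without deriving them
```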

                                                      Krylov subspace methods for model reduction in large-scale dynamical systems
                                                      Thursday, 03:45 - 04:10
                                                      Khalide Jbilou (University Littoral Cote d'Opale)
                                                      Keywords: Arnoldi, Krylov subspaces, model reduction, dynamical systems

In this talk, we will consider the problem of model reduction in large-scale dynamical systems. We treat only the case where the system is linear, continuous-time and time-invariant, which means that the matrix coefficients involved in these systems are constant. For small problems, many computational methods are effective, but for large problems the question has been investigated by many authors in recent years. The proposed approach is a projection approach, in which the original problem is projected onto a small subspace and the new problem is solved by classical methods. Here, we consider Krylov subspaces and use the block Arnoldi process, or its extended or rational variants, to construct the reduced order model. The quality of the reduction is measured using the transfer functions of the original system and of the lower-dimensional one obtained. We give some theoretical results, such as upper bounds for the error norms of the transfer functions, and present numerical experiments to illustrate our results.

                                                      1. O. Abidi and K. Jbilou, Balanced Truncation-Rational Krylov methods for model reduction in large scale dynamical systems, Comp. Appl. Math., (2016) doi:10.1007/s40314-016-0359-z.
                                                      2. A. C. Antoulas, Approximation of Large-Scale Dynamical Systems, Adv. Des. Contr., SIAM (2005).
                                                      3. K. Jbilou, A survey of Krylov-based methods for model reduction in large-scale MIMO dynamical systems, Appl. Comput. Math., 15 (2016) 24-42.
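A minimal sketch of the projection approach for a single-input single-output system, using the standard Arnoldi process (the block, extended and rational variants of the talk follow the same pattern); the random test matrices below are illustrative:

```python
# Krylov projection for model reduction of an LTI system x' = Ax + bu,
# y = c x: build an orthonormal basis V of K_m(A, b), project, and compare
# the transfer functions H(s) = c (sI - A)^{-1} b and its reduced analogue.
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 20
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)  # stable
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Arnoldi on K_m(A, b) with modified Gram-Schmidt.
V = np.zeros((n, m))
V[:, 0] = b / np.linalg.norm(b)
for j in range(m - 1):
    w = A @ V[:, j]
    for i in range(j + 1):
        w -= (V[:, i] @ w) * V[:, i]
    V[:, j + 1] = w / np.linalg.norm(w)

Am, bm, cm = V.T @ A @ V, V.T @ b, V.T @ c   # reduced-order model

def H(s):    # full transfer function
    return c @ np.linalg.solve(s * np.eye(n) - A, b)
def Hm(s):   # reduced transfer function
    return cm @ np.linalg.solve(s * np.eye(m) - Am, bm)

for s in [2j, 10j, 100j]:
    print(s, abs(H(s) - Hm(s)))   # transfer function error on the imag. axis
```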
                                                      Boundary conditions and hierarchization for B-splines on sparse grids
                                                      Thursday, 04:10 - 04:35
                                                      Julian Valentin (University of Stuttgart)
                                                      Joint work with Stefan Zimmer (University of Stuttgart); Dirk Pflüger (University of Stuttgart)
                                                      Keywords: sparse grids; B-splines; not-a-knot conditions; hierarchization; Hermite interpolation; unidirectional principle

B-splines on sparse grids have proved useful in a number of high-dimensional applications in which one seeks a smooth interpolant or the approximation of derivatives (continuous up to order $p - 1$ for B-splines of degree $p$). However, the conventional translation-invariant basis only replicates splines and polynomials on a proper subset of the domain $[0, 1]^d$. We show how to incorporate not-a-knot boundary conditions into the hierarchical basis to remedy the underlying cause, a mismatch of dimensions.

Another issue with B-splines on sparse grids is the hierarchization process (interpolation at the grid points), which is significantly harder than for the standard hat function basis. This is due to the overlapping supports of basis functions on the same hierarchical level. We present transformations into other spline bases that leave the spanned spline space invariant, but enable 1D hierarchization by Hermite interpolation, requiring an effort that is only linear in the number of grid points. For $d$ dimensions, we will elaborate on the duality between the unidirectional principle for hierarchization and for evaluation, and explain which points have to be inserted to ensure the correctness of the principle.

                                                      Thanks to translation invariance (up to some special cases at the boundary), the hierarchical bases we suggest can be implemented easily and be evaluated efficiently. Some of the statements made in this talk will be transferable to other sparse grid bases (such as pre-wavelets) that have a larger support than the usual hat functions.

                                                        Finite element methods in data smoothing
                                                        Thursday, 04:35 - 05:00
                                                        Bishnu Lamichhane (University of Newcastle)
                                                        Joint work with Markus Hegland (ANU); Linda Stals (ANU); Stephen Roberts (ANU)
                                                        Keywords: Thin plate splines; scattered data smoothing; finite element method; biorthogonal system

We consider different data smoothing techniques and their finite element approximations. Using finite element methods to approximate various smoothing splines, we study the convergence of the resulting finite element schemes. Numerical results will be presented to demonstrate the performance of the proposed schemes.
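A one-dimensional sketch of finite element data smoothing, with a simple gradient penalty standing in for the thin-plate energy; the mesh size, penalty parameter and data below are illustrative:

```python
# Minimal FE data smoothing in 1D: minimize
#   sum_i (u(x_i) - y_i)^2 + alpha * \int_0^1 (u')^2 dx
# over piecewise linear finite element functions u on a uniform mesh.
import numpy as np

rng = np.random.default_rng(0)
xd = rng.uniform(0.0, 1.0, 200)                 # scattered data sites
yd = np.sin(2 * np.pi * xd) + 0.3 * rng.standard_normal(xd.size)

nel = 40
nodes = np.linspace(0.0, 1.0, nel + 1)
h = nodes[1] - nodes[0]

def hat_eval(x):
    # Rows: values of all hat basis functions at the points x.
    E = np.zeros((x.size, nel + 1))
    k = np.minimum((x / h).astype(int), nel - 1)
    t = (x - nodes[k]) / h
    E[np.arange(x.size), k] = 1.0 - t
    E[np.arange(x.size), k + 1] = t
    return E

B = hat_eval(xd)                                # data-fit matrix
# Tridiagonal stiffness matrix of \int u'v' dx for the hat basis.
K = (np.diag(np.full(nel + 1, 2.0)) - np.diag(np.ones(nel), 1)
     - np.diag(np.ones(nel), -1)) / h
K[0, 0] = K[-1, -1] = 1.0 / h

alpha = 1e-3
u = np.linalg.solve(B.T @ B + alpha * K, B.T @ yd)   # smoothed coefficients
print(hat_eval(np.array([0.25])) @ u)                # roughly sin(pi/2) = 1
```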

                                                          Friday

                                                          Deterministic Bayesian inversion of high dimensional parametric PDEs
                                                          Friday, 09:00 - 09:25
                                                          Christoph Schwab (ETH Zürich)
                                                          Joint work with Jakob Zech
                                                          Keywords: Bayesian Inverse Problems; High order QMC; High dimensional integration

We analyze deterministic quadrature methods for the numerical solution of Bayesian Inverse Problems (BIPs) for high-dimensional, parametric operator equations. Particular attention is placed on sparse grid Smolyak algorithms as proposed in [1], for ratio and splitting estimators, and on their dimension-independent convergence rates. Novel convergence rate bounds for single- and multi-level algorithms will be presented. Stability in the presence of low-dimensional posterior concentration will be discussed based on [2].

                                                          1. Cl. Schillings and Ch. Schwab. Sparse, adaptive Smolyak quadratures for Bayesian inverse problems, Inverse Problems, 29/6, pp. 1–28, 2013.
                                                          2. Cl. Schillings and Ch. Schwab. Scaling Limits in Computational Bayesian Inversion, ESAIM: M2AN, 50/6, pp. 1825–1856, 2016.
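The ratio estimator underlying such methods can be sketched in a few lines: the posterior expectation is a quotient $Z'/Z$ of two parametric integrals sharing a single deterministic quadrature rule. The toy forward map below, and the tensor Gauss rule used in place of a Smolyak grid, are illustrative only:

```python
# Ratio estimator for Bayesian inversion:
#   E[QoI | data] = Z' / Z,
#   Z' = \int QoI(y) exp(-Phi(y)) dy,  Z = \int exp(-Phi(y)) dy,
# with both integrals over [-1, 1]^2 approximated by one quadrature rule.
import numpy as np

def forward(y):                 # toy parametric "PDE" solution functional
    return np.sin(y[0]) + 0.5 * y[1]**2

data, noise = 0.8, 0.2
def Phi(y):                     # negative log-likelihood (misfit)
    return (forward(y) - data)**2 / (2 * noise**2)

# Tensor Gauss-Legendre rule on the parameter domain.
x, w = np.polynomial.legendre.leggauss(40)
Y1, Y2 = np.meshgrid(x, x)
W = np.outer(w, w)

Y = np.array([Y1, Y2])
dens = np.exp(-Phi(Y))
qoi = forward(Y)                # QoI: here the forward functional itself
Z = np.sum(W * dens)            # numerator and denominator share the
Zp = np.sum(W * qoi * dens)     # same quadrature points
print(Zp / Z)                   # posterior mean of the QoI
```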
                                                          A dimension-adaptive multi-index Monte Carlo method applied to a model of a heat exchanger
                                                          Friday, 09:25 - 09:50
                                                          Pieterjan Robbe (KU Leuven)
                                                          Joint work with Dirk Nuyens (KU Leuven); Stefan Vandewalle (KU Leuven)
                                                          Keywords: Multi-index Monte Carlo; PDEs with random coefficients

We present an adaptive Multi-Index Monte Carlo method, as introduced in Haji-Ali, Nobile and Tempone (2016), for simulating PDEs with coefficients that are random fields. A classical technique to sample from these random fields is the Karhunen–Loève (KL) expansion. Our adaptive algorithm is similar to adaptive sparse grids, see Bungartz and Griebel (2004), and automatically chooses the number of terms needed in this expansion, as well as the required spatial discretizations of the PDE model. We apply the method to a simplified model of a heat exchanger with insulator material, where the stochastic characteristics are modelled as a lognormal random field.
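The multi-index idea can be sketched as follows: each index pair controls the number of KL terms and the spatial resolution, and the estimator sums Monte Carlo averages of first-order mixed differences over a downward-closed index set. The toy one-dimensional "field" below is illustrative, not the heat-exchanger model of the talk:

```python
# Multi-index Monte Carlo sketch: index (l1, l2) selects 2^l1 KL terms and a
# mesh with 2^l2 cells; the estimator telescopes mixed differences of the QoI.
import numpy as np

rng = np.random.default_rng(0)

def qoi(l1, l2, xi):
    # Toy lognormal "field" in 1D: a(x) = exp(sum_k (xi_k / k) sin(k pi x)),
    # truncated to 2^l1 terms, sampled on 2^l2 cell midpoints; the QoI is the
    # mesh average of 1/a (a crude effective-conductivity surrogate).
    n_kl, n_x = 2**l1, 2**l2
    x = (np.arange(n_x) + 0.5) / n_x
    k = np.arange(1, n_kl + 1)
    field = (xi[:n_kl] / k) @ np.sin(np.pi * np.outer(k, x))
    return np.mean(np.exp(-field))

def mixed_diff(l1, l2, xi):
    # First-order mixed difference in both index directions (same sample xi).
    d = qoi(l1, l2, xi)
    if l1 > 0: d -= qoi(l1 - 1, l2, xi)
    if l2 > 0: d -= qoi(l1, l2 - 1, xi)
    if l1 > 0 and l2 > 0: d += qoi(l1 - 1, l2 - 1, xi)
    return d

L, n_samp = 4, 200
est = sum(np.mean([mixed_diff(l1, l2, rng.standard_normal(2**L))
                   for _ in range(n_samp)])
          for l1 in range(L + 1) for l2 in range(L + 1) if l1 + l2 <= L)
print(est)   # multi-index estimate of E[QoI]
```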

                                                            Integrals of piecewise smooth integrands and their sensitivities w.r.t. parameters
                                                            Friday, 09:50 - 10:15
                                                            Andreas Griewank (Yachay Tech, Urcuquí Imbabura)
                                                            Joint work with Hernan Leovey (Axpo); Frances Kuo (UNSW); Ian Sloan (UNSW)
                                                            Keywords: conditional integration, presmoothing, Quasi Monte Carlo, Greeks

Nonsmooth functions are typically piecewise smooth, with kinks or jumps between their smooth subdomains. We consider their numerical treatment as integrands by presmoothing and suitably adapted quadratures of the Monte Carlo or quasi-Monte Carlo type. It is shown that under certain assumptions the normal 'smooth' convergence rates can be achieved not only for the original integral but also for its partial derivatives with respect to parameters, such as (co-)variances.
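As a worked example of presmoothing by conditional integration: for the kinked integrand $f(x,y)=\max(x+y-K,0)$ on $[0,1]^2$, integrating out $y$ in closed form leaves a $C^1$ integrand whose derivative with respect to the parameter $K$ is also continuous. The point set below is a simple Kronecker sequence used for illustration:

```python
# Presmoothing by conditional integration: replace the kinked 2D integrand
# by its closed-form conditional expectation over y, then apply (Q)MC in x.
import numpy as np

K = 1.2   # kink parameter ("strike")

def g(x):
    # g(x) = \int_0^1 max(x + y - K, 0) dy, piecewise closed form in
    # a = K - x; g is C^1 in both x and K.
    a = np.clip(K - x, 0.0, 1.0)
    return (1.0 - a)**2 / 2.0 + np.maximum(x - K, 0.0)

def dg_dK(x):
    # Sensitivity w.r.t. K, obtained by differentiating under the integral
    # sign; it is continuous, so the "Greek" keeps the smooth rate.
    return -(1.0 - np.clip(K - x, 0.0, 1.0))

n = 2**12
x = ((np.arange(n) + 0.5) * (np.sqrt(5.0) - 1.0) / 2.0) % 1.0  # Kronecker pts
print(g(x).mean())       # ~ 0.08533 = E[max(X1 + X2 - K, 0)]
print(dg_dK(x).mean())   # ~ -0.32   = -P(X1 + X2 > K)
```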

                                                              Polynomial analog of Frolov's cubature formula
                                                              Friday, 10:45 - 11:10
                                                              Kosuke Suzuki (Hiroshima University)
                                                              Joint work with Josef Dick (UNSW); Friedrich Pillichshammer (JKU Linz); Mario Ullrich (JKU Linz); Takehito Yoshiki (Osaka City University)
                                                              Keywords: Quasi-Monte Carlo integration; Frolov's cubature formula; polynomial analog; digital net; t-value

We investigate a polynomial analog of Frolov's cubature formula on the unit cube. Let $b$ be a prime and $\mathbb{F}_b$ the field of $b$ elements. The integration nodes of our cubature formula are given as follows: choose a suitable full-rank $\mathbb{F}_b[x]$-lattice in $\mathbb{F}_b((x^{-1}))^d$, which is identified with points in $\mathbb{R}^d$, polynomially shrink it and restrict it to the unit cube. We show that our cubature rule turns out to be a quasi-Monte Carlo rule after sufficient shrinking, whereas Frolov's is not. Moreover, we investigate the $(t,m,d)$-net property of the integration nodes. We give a condition on a lattice under which the $t$-values of its shrunk and restricted versions are uniformly bounded. Further, we construct lattices satisfying this condition when the dimension $d$ is a power of $b$, with uniformly bounded $t$-values of order $O(d^2 \log d)$.

                                                                High-dimensional integration: getting rid of the anchor
                                                                Friday, 11:10 - 11:35
                                                                Michael Gnewuch (Kiel University)
                                                                Joint work with Mario Hefter (TU Kaiserslautern); Aicke Hinrichs (JKU Linz); Klaus Ritter (TU Kaiserslautern)
                                                                Keywords: High-dimensional Integration; Function space embeddings; Randomized algorithms; Multivariate decomposition method; Tractability

Promising methods for high- and infinite-dimensional integration, such as dimension-wise quadrature methods or multivariate decomposition methods (aka changing dimension algorithms), rely on the anchored decomposition of the integrand at hand. Recently, there have been investigations into tailoring these methods to spaces of integrands with underlying function space decompositions different from the anchored decomposition.

                                                                In this talk we present and discuss the following result: If the space of integrands is a weighted reproducing kernel Hilbert space, then it can be related in a canonical way to a pair of weighted anchored reproducing kernel Hilbert spaces and the convergence rates of $N$-th minimal errors are the same in all three spaces. Moreover, algorithms that work well for integrands from one space work well for integrands from all three spaces. This holds for product weights in the randomized setting (where we allow randomized algorithms) as well as in the deterministic setting (where we only admit deterministic algorithms). As an example, we present a generalization of sharp error bounds for infinite-dimensional integration in weighted anchored reproducing kernel Hilbert spaces to arbitrary weighted reproducing kernel Hilbert spaces.

                                                                1. M. Gnewuch, M. Hefter, A. Hinrichs, K. Ritter, Embeddings of weighted Hilbert spaces and applications to multivariate and infinite-dimensional integration. Preprint 2016 ( arXiv:1608.00906).
                                                                Multilevel higher order QMC Bayesian estimations in PDEs with random coefficients
                                                                Friday, 11:35 - 12:00
                                                                Quoc Thong Le Gia (UNSW Australia)
                                                                Joint work with Josef Dick (UNSW); Robert Gantner (ETH); Christoph Schwab (ETH)
                                                                Keywords: Bayesian Inverse Problems; Higher order QMC

                                                                Bayesian estimations of solutions to parametric operator equations arise in numerical uncertainty quantification of operator equations with distributed uncertain inputs, such as uncertain coefficients, uncertain domains or uncertain source terms and boundary data.

                                                                We propose and analyze deterministic multilevel approximations for Bayesian estimations of operator equations with uncertain distributed parameters, subject to additive Gaussian measurement data. The algorithms use a multilevel approach based on deterministic, higher order quasi-Monte Carlo quadrature for approximating the high-dimensional expectations, which arise in the Bayesian estimators, and a Petrov-Galerkin method for approximating the solution to the underlying partial differential equation.

                                                                1. J. Dick, R. N. Gantner, Q. T. Le Gia, Ch. Schwab, Multilevel higher order quasi Monte Carlo Bayesian estimation, Preprint, arXiv:1611.08324.
                                                                2. J. Dick, R. N. Gantner, Q. T. Le Gia, Ch. Schwab, Higher order quasi Monte Carlo integration for Bayesian Estimation, Preprint, arXiv:1602.07363.