Clements Scientific Computing Seminar Series

Spring 2020
Department of Mathematics, SMU
Room: 126 Clements Hall (unless otherwise specified). Refreshments start 15 minutes before each talk.


[1] Speaker: Prof. Yuehaw Khoo, Stat. U. of Chicago, Thursday 3:45-4:45p, January 30

Title: Multimarginal Optimal Transport and Density Functional Theory 

Abstract: Density functional theory has been a popular tool in solid state physics and quantum chemistry for electronic structure calculation. However, current functionals used in density functional theory face difficulties when dealing with strongly correlated systems. In this talk, we examine the regime where the electrons are strictly correlated. This gives rise to a multimarginal optimal transport problem, a direct extension of the optimal transport problem that has applications in other fields such as economics and machine learning. In particular, we introduce methods from convex optimization to provide a lower bound to the cost of the multimarginal transport problem with a practical running time. We further propose projection schemes based on tensor decomposition to obtain upper bounds on the energy. Numerical experiments demonstrate a gap of order $10^{-3}$ to $10^{-2}$ between the upper and lower bounds. Its application to second-quantized fermionic systems is also discussed.
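
The multimarginal transport problem itself is a (very large) linear program. As a point of reference only, and not the speaker's relaxation or tensor method, the toy Python/SciPy sketch below solves a three-marginal instance on a 5-point grid exactly; even at this size the coupling tensor has n^3 unknowns, which is why convex lower bounds and low-rank projections become necessary for larger problems.

    import numpy as np
    from scipy.optimize import linprog

    n = 5                                    # grid points per marginal (toy size)
    x = np.linspace(0.0, 1.0, n)
    # three hypothetical marginal densities on the grid, each summing to 1
    mu = [np.ones(n) / n,
          np.exp(-x) / np.exp(-x).sum(),
          np.exp(-(x - 0.5) ** 2) / np.exp(-(x - 0.5) ** 2).sum()]

    # toy pairwise Coulomb-like cost summed over the three pairs of marginals
    eps = 1e-2
    c = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i, j, k] = (1.0 / (abs(x[i] - x[j]) + eps)
                              + 1.0 / (abs(x[i] - x[k]) + eps)
                              + 1.0 / (abs(x[j] - x[k]) + eps))

    # equality constraints: each 1-D marginal of the coupling tensor equals mu[m]
    A_eq, b_eq = [], []
    for m in range(3):
        for idx in range(n):
            mask = np.zeros((n, n, n))
            sl = [slice(None)] * 3
            sl[m] = idx
            mask[tuple(sl)] = 1.0
            A_eq.append(mask.ravel())
            b_eq.append(mu[m][idx])

    res = linprog(c.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    print("optimal cost of the toy 3-marginal transport problem:", res.fun)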

Biosketch: Yuehaw Khoo is an assistant professor in the Department of Statistics at the University of Chicago. Prior to this, he was a postdoc at Stanford and a graduate student at Princeton. He is interested in scientific computing problems in protein structure determination and quantum many-body physics. In these problems, he focuses on non-convex, discrete, or large-scale optimization and on representing high-dimensional functions using neural networks and tensor networks.
 

[2] CANCELLED - Speaker: Prof. Weinan E, Math, Princeton University, Monday 2:00p, February 10, DLSB 110

Title: A Mathematical Perspective of Machine Learning

Abstract: The heart of modern machine learning is the approximation of high dimensional functions. This problem will be examined from a mathematical perspective. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality. We will discuss representations and approximations that overcome the curse of dimensionality, and gradient flows that can be used to find the optimal approximation. We will see that at the continuous level, machine learning consists of a series of reasonably nice calculus of variations and PDE-like problems. Modern machine learning algorithms, such as the ones for neural networks, are special discretizations of these continuous problems. But new models and new algorithms can also be constructed based on the same philosophy. Finally, we will discuss the fundamental reasons that make modern machine learning successful, as well as the subtleties that still remain to be understood.
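
As a toy illustration of the last point (a sketch added here, not material from the talk), the Python snippet below trains a two-layer ReLU network on 1-D data by plain gradient descent, i.e. an explicit-Euler discretization of the gradient flow on the parameters.

    import numpy as np

    # Two-layer network f(x) = sum_k a_k * relu(w_k * x + b_k), trained by
    # gradient descent (an explicit-Euler discretization of the gradient flow).
    rng = np.random.default_rng(0)
    N, m, lr, steps = 200, 50, 0.05, 2000
    x = rng.uniform(-1.0, 1.0, N)
    y = np.sin(3.0 * x)                       # 1-D target function

    a = rng.normal(0.0, 0.1, m)               # output weights
    w = rng.normal(0.0, 1.0, m)               # input weights
    b = rng.normal(0.0, 1.0, m)               # biases

    for _ in range(steps):
        z = np.outer(x, w) + b                # (N, m) pre-activations
        h = np.maximum(z, 0.0)                # ReLU features
        r = h @ a - y                         # residuals
        ga = (2.0 / N) * (h.T @ r)            # gradient w.r.t. a
        gz = (2.0 / N) * np.outer(r, a) * (z > 0)   # backprop through the ReLU
        gw = x @ gz                           # gradient w.r.t. w
        gb = gz.sum(axis=0)                   # gradient w.r.t. b
        a, w, b = a - lr * ga, w - lr * gw, b - lr * gb

    mse = np.mean((np.maximum(np.outer(x, w) + b, 0.0) @ a - y) ** 2)
    print("training mean-squared error:", mse)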

Biosketch:


[3] Speaker: Prof. George Karniadakis, Applied Math, Brown University, Tuesday 3:45-4:45p, February 11

Title: Physics-Informed Neural Networks (PINNs) for Physical and Biological Problems

Abstract: We will present a new approach to developing a data-driven, learning-based framework for predicting outcomes of physical and biological systems and for discovering hidden physics from noisy data. We will introduce a deep learning approach based on neural networks (NNs) and generative adversarial networks (GANs). We also introduce new NNs that learn functionals and nonlinear operators from functions and corresponding responses for system identification. Unlike other approaches that rely on big data, here we “learn” from small data by exploiting the information provided by the physical conservation laws, which are used to obtain informative priors or to regularize the neural networks. We will also make connections between Gaussian Process Regression and NNs and discuss the new powerful concept of meta-learning. We will demonstrate the power of PINNs for several inverse problems in fluid mechanics, solid mechanics, and biomedicine, including wake flows, shock tube problems, material characterization, brain aneurysms, etc., where traditional methods fail due to lack of boundary and initial conditions or material properties.
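
A minimal sketch of the "learning from small data with physics as a regularizer" idea, with a polynomial basis standing in for the neural network (a toy analogue, not the speaker's PINN code): recover u(x) from only four noisy samples by penalizing the residual of a known equation u''(x) = f(x) at collocation points.

    import numpy as np

    rng = np.random.default_rng(1)
    deg = 8                                              # polynomial degree

    def basis(t):                                        # 1, t, t^2, ..., t^deg
        return np.vander(t, deg + 1, increasing=True)

    def d2_basis(t):                                     # second derivatives of the basis
        return np.array([[k * (k - 1) * ti ** (k - 2) if k >= 2 else 0.0
                          for k in range(deg + 1)] for ti in t])

    x_data = np.array([0.1, 0.35, 0.7, 0.95])            # only four noisy observations
    u_data = np.sin(np.pi * x_data) + 0.01 * rng.normal(size=4)
    x_col = np.linspace(0.0, 1.0, 30)                    # collocation points
    f_col = -np.pi ** 2 * np.sin(np.pi * x_col)          # known right-hand side of u'' = f

    lam = 1.0                                            # weight of the physics residual
    A = np.vstack([basis(x_data), np.sqrt(lam) * d2_basis(x_col)])
    b = np.concatenate([u_data, np.sqrt(lam) * f_col])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)         # data misfit + physics residual

    x_test = np.linspace(0.0, 1.0, 11)
    print("max error vs. exact sin(pi x):",
          np.max(np.abs(basis(x_test) @ coef - np.sin(np.pi * x_test))))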

Biosketch: Karniadakis received his S.M. and Ph.D. from the Massachusetts Institute of Technology. He was appointed Lecturer in the Department of Mechanical Engineering at MIT in 1987 and subsequently joined the Center for Turbulence Research at Stanford/NASA Ames. He then joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program in Applied and Computational Mathematics. He was a Visiting Professor in the Aeronautics Department at Caltech in 1993 and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. He became a full professor in 1996 and continues to serve as a Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is an AAAS Fellow (2018-), Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010-), Fellow of the American Physical Society (APS, 2004-), Fellow of the American Society of Mechanical Engineers (ASME, 2003-), and Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the Alexander von Humboldt Award in 2017, the Ralph E. Kleinman Prize from SIAM (2015), the J. Tinsley Oden Medal (2013), and the Computational Fluid Dynamics Award (2007) from the US Association for Computational Mechanics. His h-index is 96 and he has been cited over 46,500 times.


[4] Speaker: Prof. Shi Jin, INS, Shanghai Jiaotong Univ, Thursday 3:45-4:45p, February 20

Title: Random Batch Methods for Classical and Quantum N-body Problems

Abstract: We first develop random batch methods for classical interacting particle systems with a large number of particles. These methods use small but random batches for particle interactions, reducing the computational cost per time step from O(N^2) to O(N) for a system of N particles with binary interactions. For one of the methods, we give a particle-number-independent error estimate under some special interactions.
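
A minimal Python sketch of the random batch idea (an illustration with an assumed toy interaction, not the speaker's code): at each time step the N particles are shuffled into batches of size p, and each particle interacts only with the others in its batch, so one step costs O(Np) instead of O(N^2).

    import numpy as np

    rng = np.random.default_rng(0)
    N, p, dt, steps = 1000, 2, 1e-3, 200      # particles, batch size, time step, steps
    x = rng.normal(size=N)                    # 1-D particle positions

    def force(xi, others):
        return np.tanh(others - xi)           # toy bounded pairwise interaction

    for _ in range(steps):
        perm = rng.permutation(N)
        for batch in perm.reshape(-1, p):     # random batches of size p
            xb = x[batch]
            # each particle interacts only with the (p-1) others in its batch;
            # the batch average estimates the full mean-field interaction
            drift = np.array([force(xb[i], np.delete(xb, i)).mean() for i in range(p)])
            x[batch] = xb + dt * drift

    print("sample mean and std after random-batch evolution:", x.mean(), x.std())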

For the quantum N-body Schrödinger equation, we obtain, for pair-wise random interactions, a convergence estimate for the Wigner transform of the single-particle reduced density matrix of the particle system at time t that is uniform in N > 1 and independent of the Planck constant \hbar. To this end we introduce a new metric specially tailored to handle at the same time the difficulties pertaining to the small \hbar regime (classical limit) and those pertaining to the large N regime (mean-field limit).

The classical part is joint work with Lei Li and Jian-Guo Liu, while the quantum part is joint work with François Golse and Thierry Paul.

Biosketch: Shi Jin received his bachelor's degree from Peking University and his Ph.D. from the University of Arizona. He was a postdoc at the Courant Institute, then assistant and associate professor at Georgia Tech, and later Professor, department chair, and Vilas Distinguished Achievement Professor at the University of Wisconsin-Madison. He is currently director and chair professor at the Institute of Natural Sciences, Shanghai Jiao Tong University, China. Shi Jin is an (inaugural) Fellow of the AMS, a Fellow of SIAM, a winner of the Feng Kang Prize in Scientific Computing and a Morningside Silver Medal of Mathematics at the International Congress of Chinese Mathematicians, and an invited speaker at the International Congress of Mathematicians in Rio de Janeiro in 2018. His research interests include kinetic theory, hyperbolic conservation laws, quantum dynamics, multiscale computation, and uncertainty quantification.


[5] Speaker: Prof. Hongkai Zhao, Math, UC Irvine, Thursday 3:45-4:45p, February 27

Title: Intrinsic Complexity: From Approximation of Random Vectors and Random Fields to Solutions of PDEs

Abstract: We characterize the intrinsic complexity of a set in a metric space by the least dimension of a linear space that can approximate the set to a given tolerance. This is dual to the characterization of the set using the Kolmogorov n-width, the distance from the set to the best n-dimensional linear space. In this talk, I will start with the intrinsic complexity of a set of random vectors (via principal component analysis) and random fields (via the Karhunen–Loève expansion) and then characterize solutions to partial differential equations of different types. Our study provides a mathematical understanding of the complexity of the underlying problem and its mechanism, independent of the representation basis. In practice, our study is directly related to the question of whether there is a low-dimensional structure or a low-rank approximation one can exploit for the underlying problem, which is essential for dimension reduction and for developing fast algorithms.
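
As a toy illustration of the random-vector case (not taken from the talk), the Python sketch below draws samples of a random field with rapidly decaying Karhunen–Loève eigenvalues and reads off, from the singular values, the smallest dimension for which a linear space captures the samples to a prescribed mean-square tolerance.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d, eps = 2000, 100, 1e-2               # samples, grid size, tolerance

    # samples of u(t; omega) = sum_k xi_k(omega) sqrt(lam_k) phi_k(t), decaying lam_k
    t = np.linspace(0.0, 1.0, d)
    lam = 2.0 ** -np.arange(1, 21)                                    # KL eigenvalues
    phi = np.array([np.sin((k + 1) * np.pi * t) for k in range(20)])  # KL modes
    X = (rng.normal(size=(N, 20)) * np.sqrt(lam)) @ phi               # N samples in R^d

    # smallest n such that the best n-dimensional linear space (from PCA)
    # captures the samples to relative mean-square tolerance eps^2
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    n_eps = int(np.searchsorted(energy, 1.0 - eps ** 2)) + 1
    print("dimension needed for tolerance", eps, ":", n_eps)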

Biosketch: Hongkai Zhao is a Chancellor’s Professor in the Department of Mathematics at UC Irvine. He received his Ph.D. in mathematics from UCLA in 1996 and was a Szego Assistant Professor at Stanford before joining UCI in 1999. He received a Sloan Fellowship in 2002 and the Feng Kang Prize in Scientific Computing in 2007.


[6] Speaker: Prof. Ren-Cang Li, Dept of Math, UT Arlington, Thursday 3:45-4:45p, March 12

Title: Eigenvector-Dependent Nonlinear Eigenvalue Problems with Applications

Abstract: A long-standing type of eigenvector-dependent nonlinear eigenvalue problem (NEPv) comes from discretizing the Kohn-Sham equation in electronic structure calculations. Recent interest in data science has yielded NEPv of similar types. In this talk, we first establish existence and uniqueness conditions for the solvability of a type of NEPv that includes the ones from the Kohn-Sham equation and linear discriminant analysis (LDA) for dimension reduction, and present a local and global convergence analysis of a self-consistent field (SCF) iteration for solving the problem. We then look into another type of NEPv arising from orthogonal Canonical Correlation Analysis (CCA), a standard statistical technique and widely used feature extraction paradigm for two sets of multidimensional variables. Applications of the results will be discussed.
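
A minimal Python sketch of an SCF iteration on a toy NEPv (an assumed model problem, not the speaker's setting): H(V) = A + alpha*diag(rho(V)) with "density" rho_i = sum_j V_ij^2, loosely mimicking a Kohn-Sham-type nonlinearity; at each step the k lowest eigenvectors of H(V) define the next V.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, alpha = 50, 4, 0.5
    A = rng.normal(size=(n, n))
    A = (A + A.T) / 2.0                        # symmetric "linear part"

    def H(V):
        rho = np.sum(V ** 2, axis=1)           # diagonal "density" built from V
        return A + alpha * np.diag(rho)

    _, V = np.linalg.eigh(A)                   # initial guess: k lowest eigenvectors of A
    V = V[:, :k]
    for it in range(100):
        _, U = np.linalg.eigh(H(V))            # diagonalize the current H(V)
        V_new = U[:, :k]                       # keep the k lowest eigenvectors
        err = np.linalg.norm(V_new @ V_new.T - V @ V.T)   # change of the subspace
        V = V_new
        if err < 1e-10:
            break
    print("SCF stopped after", it + 1, "iterations; subspace change:", err)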

Biosketch: Ren-Cang Li is currently a professor at UT Arlington. He received his BS from Xiamen University in 1985, his MS from the Chinese Academy of Sciences in 1988, and his PhD from UC Berkeley in 1995. He was awarded the 1995 Householder Fellowship in Scientific Computing by Oak Ridge National Laboratory, the Friedman Memorial Prize in Applied Mathematics from UC Berkeley in 1996, and a CAREER award from NSF in 1999. His research interests include floating-point support for scientific computing, large and sparse linear systems, eigenvalue problems, machine learning, and unconventional schemes for differential equations. He serves on the editorial boards of several international journals. Previously, he served as an associate editor of SIMAX.

CANCELLED [7] Speaker: Prof. Josef Sifuentes, Math., UTRGV, Thursday 3:45-4:45p, April 9

Title: GMRES Convergence and Spectral Properties of Approximate Preconditioners for KKT Matrices

Abstract: Several important preconditioners for saddle point problems yield linear systems for which the GMRES iterative method converges exactly in just a few iterations. However, these preconditioners all involve inverses of large submatrices. In practical computations such inverses are only approximated, and more iterations are required to solve the preconditioned linear system. How many more iterations? In this talk, we present perturbation analysis results for GMRES that lead to rigorous upper bounds on the number of iterations as a function of the accuracy of the approximate preconditioner relative to the ideal one and the spectral properties of the constituent matrices. We also give a thorough analysis of the spectral properties of these common saddle point preconditioners. We will present numerical computations that verify these results for problems from optimization and fluid dynamics.
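
A small numerical illustration of the phenomenon (an assumed setup, not the talk's examples): for a saddle point matrix K = [[A, B^T], [B, 0]], a block-triangular preconditioner built from the exact Schur complement S = B A^{-1} B^T makes GMRES converge in a couple of iterations, while replacing A^{-1} inside S by a diagonal approximation increases the count.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    rng = np.random.default_rng(0)
    n, m = 80, 20
    A = rng.normal(size=(n, n))
    A = A @ A.T + n * np.eye(n)                          # SPD (1,1) block
    B = rng.normal(size=(m, n))
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])      # saddle point matrix
    b = rng.normal(size=n + m)

    def block_triangular_prec(A_inv):
        S = B @ A_inv @ B.T                              # (approximate) Schur complement
        def apply(r):
            r1, r2 = r[:n], r[n:]
            y2 = np.linalg.solve(-S, r2)                 # solve the Schur block
            y1 = np.linalg.solve(A, r1 - B.T @ y2)       # back-substitute the (1,1) block
            return np.concatenate([y1, y2])
        return LinearOperator((n + m, n + m), matvec=apply)

    for label, A_inv in [("exact A^{-1}", np.linalg.inv(A)),
                         ("diag(A)^{-1} approximation", np.diag(1.0 / np.diag(A)))]:
        residuals = []
        _, info = gmres(K, b, M=block_triangular_prec(A_inv),
                        callback=lambda rk: residuals.append(rk))
        print(label, "-> GMRES iterations:", len(residuals))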

Biosketch: Josef Sifuentes is an Assistant Professor in the School of Mathematical and Statistical Sciences at the University of Texas Rio Grande Valley. His research focuses on Krylov subspace methods in numerical linear algebra.


CANCELLED [8] Speaker: Alexandre Tartakovsky, Computational Math., PNNL, Thursday 3:45-4:45p, April 23

Title: 

Abstract:  

Biosketch: