Quantitative stability asks how a given functional grows near its minima or critical points. We will see how Łojasiewicz inequalities, which quantify the order to which a real analytic function on Euclidean space can vanish, can lead to quantitative stability results in regimes where explicit linearization around the minima is not possible. If time allows, we will also talk about the implications for the convergence of geometric flows and the accuracy of algorithms from data science.
This talk covers joint work with O. Chodosh (Stanford), D. Mendelson (Radix Trading), R. Neumayer (CMU) and L. Spolaor (UCSD), in addition to work in progress with A. De Clercq (U Chicago).
One of the most fundamental unsolved problems in representation theory is to classify the set of irreducible unitary representations of a semisimple Lie group. In this talk, I will define a class of such representations coming from filtered quantizations of certain graded Poisson varieties. The representations I construct are expected to form the "building blocks" of all unitary representations.
Since the middle of the last century, a substantial part of stochastic analysis has been devoted to the relationship between (parabolic) linear partial differential equations (PDEs), more precisely linear Fokker-Planck-Kolmogorov equations (FPKEs), and stochastic differential equations (SDEs), or more generally Markov processes. Its most prominent example is the classical heat equation on one side and the Markov process given by Brownian motion on the other. This talk is about the nonlinear analogue, i.e., the relationship between nonlinear FPKEs on the analytic side and McKean-Vlasov SDEs (of Nemytskii type), or more generally nonlinear Markov processes in the sense of McKean, on the probabilistic side. This program was already initiated by McKean in his seminal 1966 PNAS paper, and this talk is about recent developments in the field. Topics will include existence and uniqueness results for distributional solutions of the nonlinear FPKEs on the analytic side and, equivalently, existence and uniqueness results for weak solutions of the McKean-Vlasov SDEs on the probabilistic side. Furthermore, criteria for the corresponding path laws to form a nonlinear Markov process will be presented. Applications include porous media equations (including versions with nonlocal operators replacing the Laplacian, possibly perturbed by a transport term) and their associated nonlinear Markov processes. The 2D Navier-Stokes equation in vorticity form and its associated nonlinear Markov process will also be discussed.
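To fix ideas, the porous medium equation is the prototypical Nemytskii-type example; the following sketch of the correspondence (standard material, not taken verbatim from the abstract) shows how the SDE's diffusion coefficient depends on the solution's own density:

```latex
% Nonlinear FPKE: the porous medium equation for a probability density u(t,x),
\partial_t u = \Delta\!\left(u^m\right), \qquad m > 1,
% is associated with the McKean--Vlasov SDE of Nemytskii type
dX_t = \sqrt{2\, u^{m-1}(t, X_t)}\; dW_t,
% where u(t,\cdot) is the density of the law of X_t: substituting
% \sigma^2 = 2u^{m-1} into the Fokker--Planck equation
% \partial_t u = \tfrac{1}{2}\,\Delta\!\left(\sigma^2 u\right)
% recovers \partial_t u = \Delta(u^m).
```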
Joint work with: Viorel Barbu, Al.I. Cuza University and Octav Mayer Institute of Mathematics of Romanian Academy, Iaşi, Romania; Marco Rehmeier, Bielefeld University and SNS Pisa
Free resolutions, or syzygies, with a graded structure are algebraic objects that encode many geometric properties. This correspondence lies at the heart of classical projective algebraic geometry. By analogy, multigraded resolutions should also provide powerful geometric tools. I will discuss some foundational results from the classical story and give an overview of recent work to extend these tools to the multigraded setting of toric geometry.
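As a minimal illustration of a graded free resolution (my example, standard in the classical story): the Koszul resolution of the residue field k over S = k[x,y], with the twists S(-j) recording the grading:

```latex
0 \longrightarrow S(-2)
  \xrightarrow{\;\begin{pmatrix} -y \\ \phantom{-}x \end{pmatrix}\;}
  S(-1)^{2}
  \xrightarrow{\;\begin{pmatrix} x & y \end{pmatrix}\;}
  S \longrightarrow k \longrightarrow 0
```

The graded Betti numbers read off from such resolutions encode geometric invariants such as Castelnuovo-Mumford regularity; in the multigraded toric setting, the single twist is replaced by one twist per generator of the grading group.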
Cross-diffusion systems appear for instance in the context of directed motion of cells and species competition in ecology.
From the mathematical point of view this results in systems of strongly coupled parabolic partial differential equations, where the diffusion matrix may neither be symmetric nor positive semidefinite.
For applications one is interested in pattern formation in these systems, as well as in understanding possible underlying microscopic dynamics. Among the many mathematical challenges in tackling these systems is the possible formation of singularities in solutions.
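A standard example of such a system (stated here for concreteness; it is not named in the abstract) is the Shigesada-Kawasaki-Teramoto cross-diffusion model for two competing populations u and v:

```latex
\partial_t u = \Delta\big[(d_1 + a_{11}u + a_{12}v)\,u\big] + u\,(b_1 - c_{11}u - c_{12}v),
\partial_t v = \Delta\big[(d_2 + a_{21}u + a_{22}v)\,v\big] + v\,(b_2 - c_{21}u - c_{22}v).
```

Expanding the divergence-form terms produces a diffusion matrix acting on (∇u, ∇v) that is, in general, neither symmetric nor positive semidefinite, which is exactly the difficulty described above.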
I will review a general framework that can be used to develop numerical methods for various problems involving non-parametrically defined surfaces.
The main idea is to formulate appropriate extensions of a given problem defined on a surface to ones in the narrow band of the surface in the embedding space. The extensions are arranged so that the solutions to the extended problems are equivalent, in a strong sense, to the surface problems that we set out to solve.
Such extension approaches allow us to analyze the well-posedness of the resulting system, develop systematically and in a unified fashion numerical schemes for treating a wide range of problems involving differential and integral operators, and deal with similar problems in which only point clouds sampling the surfaces are given.
We will demonstrate a few computations of some applications involving integral equations, partial differential equations, and optimal control problems on hypersurfaces.
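A toy numerical illustration of the extension idea (my sketch; the talk's specific extensions may differ): for the closest-point extension of a function on the unit circle, i.e. the extension that is constant along normals, the ambient Laplacian evaluated on the surface agrees with the Laplace-Beltrami operator of the original surface function.

```python
import math

def f_ext(x, y):
    """Closest-point extension of f(theta) = cos(theta) on the unit circle:
    the value at (x, y) is f at the nearest circle point, i.e. x / r."""
    r = math.hypot(x, y)
    return x / r

def ambient_laplacian(g, x, y, h=1e-2):
    """Standard 5-point finite-difference Laplacian in the embedding plane."""
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
            - 4.0 * g(x, y)) / h**2

# On the circle, Delta_S cos(theta) = -cos(theta); at the point (1, 0) this is -1.
lap = ambient_laplacian(f_ext, 1.0, 0.0)
print(lap)  # close to -1
```

The same principle underlies narrow-band methods: one solves the extended problem on a thin band of grid points around the surface, so that standard Cartesian discretizations apply.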
This talk will describe recent work on mathematical methods for signal recovery in high noise. The first part of the talk will explain the connection between the Wiener filter, singular value shrinkage, and Stein's method for covariance estimation, and review optimal shrinkage in the spiked covariance model. We will then present extensions to heteroscedastic noise and linearly-corrupted observations. Time permitting, we will also give an overview of the related class of orbit recovery problems.
A (convex) polytope is the convex hull of a finite set. There are many ways to measure a polytope: volume, number of vertices, number of lattice points inside the polytope, etc. A wide variety of problems in pure and applied mathematics involve the problem of measuring a polytope. However, these problems are very complex: no known procedure can efficiently measure an arbitrary polytope. In this talk we will discuss these problems for families of polytopes which, due to their symmetries, are of special interest. In the case of the volume of a polytope, we will discuss how the interplay between algebraic geometry and combinatorics sheds light on this problem.
Pretalk: Oct 18, 12:20pm, Vincent Hall 16: Volumes of polytopes and Ehrhart theory
A (convex) polytope is the convex hull of a finite set. There are many ways to measure a polytope: volume, number of vertices, number of lattice points inside the polytope, etc. In this talk we will focus on the volume of a polytope and its discrete version, the Ehrhart polynomial. We will also discuss how algebraic geometry provides tools to compute volumes of polytopes. The talk will have many examples and pictures.
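A tiny brute-force illustration of Ehrhart counting (my example): for the standard d-simplex, the number of lattice points in the t-th dilate is the binomial coefficient C(t+d, d), a polynomial in t whose leading coefficient 1/d! is the volume.

```python
from itertools import product
from math import comb

def ehrhart_count_simplex(d, t):
    """Count lattice points in the t-th dilate of the standard simplex
    {x in R^d : x_i >= 0, x_1 + ... + x_d <= 1} by brute force."""
    return sum(1 for p in product(range(t + 1), repeat=d) if sum(p) <= t)

# The Ehrhart polynomial of the standard d-simplex is binomial(t + d, d).
print(ehrhart_count_simplex(2, 3), comb(3 + 2, 2))  # 10 10
```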
Braverman and Kazhdan formulated a new approach to establishing conjectures on automorphic L-functions that directly generalizes the Tate thesis and bypasses Langlands' functoriality conjecture. There have been slow but steady advances in this program, with geometric flavors and harmonic analytic flavors. I will report on those advances, including a work in progress in collaboration with Zhilin Luo.
As the name suggests, numerical analysis -- the study of computational algorithms to solve mathematical problems, such as differential equations -- has traditionally been viewed mostly as a branch of analysis. Geometry, topology, and algebra played little role. Indeed, often departments created special degree requirements so that computational and applied math students could avoid studying these subjects altogether. However, in the last decade or so, things have changed. The recent numerical analysis literature is replete with papers using concepts that are new to the subject, say, symplectic differential forms or de Rham cohomology or Hodge theory. In this talk we will discuss some examples of this phenomenon, especially the Finite Element Exterior Calculus. We shall see why these new ideas arise naturally in numerical analysis and how they contribute.
A complex variety with a positive first Chern class is called a Fano variety. The question of whether a Fano variety has a Kähler-Einstein metric has been a major topic in complex geometry since the 1980s. In the last decade, algebraic geometry, or more specifically higher dimensional geometry, has played a surprising role in advancing our understanding of this problem. In fact, the algebraic part of this question is one step of a larger project, namely constructing projective moduli spaces that parametrize Fano varieties satisfying the K-stability condition. The latter is exactly the algebraic characterization of the existence of a Kähler-Einstein metric. In the lecture, I will explain the main ideas behind the recent progress of the field.
A sequence of positive real numbers a_1, a_2, ..., a_n is log-concave if a_i^2 \geq a_{i-1} a_{i+1} for all i ranging from 2 to n-1. Log-concavity naturally arises in various aspects of mathematics, each characterized by different underlying mechanisms. Examples range from inequalities that are readily provable, such as for the binomial coefficients a_i = \binom{n}{i}, to intricate inequalities that have taken decades to resolve, such as for the number a_i of forests with i edges in a graph G. It is then natural to ask whether it can be shown that the latter type of inequality is intrinsically more challenging than the former. In this talk, we provide a rigorous framework to answer this type of question, employing a combination of combinatorics, complexity theory, and geometry. This is joint work with Igor Pak.
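The "easy" end of the spectrum can be checked directly (a quick sanity check, my code, not from the talk):

```python
from math import comb

def is_log_concave(seq):
    """Check a_i^2 >= a_{i-1} * a_{i+1} for all interior indices."""
    return all(seq[i] ** 2 >= seq[i - 1] * seq[i + 1]
               for i in range(1, len(seq) - 1))

n = 10
binomials = [comb(n, i) for i in range(n + 1)]
print(is_log_concave(binomials))  # True: C(n,i)^2 >= C(n,i-1) * C(n,i+1)
```

For binomial coefficients the inequality follows from a one-line ratio computation; for forest counts it is a deep theorem, and the talk's framework makes that contrast precise.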
Time stepping methods are critical to the stability, accuracy, and efficiency of the numerical solution of partial differential equations. In many legacy codes, well-tested low-order time-stepping modules are difficult to change; however, their accuracy and efficiency properties may form a bottleneck. Time filtering has been used to enhance the order of accuracy (as well as other properties) of time-stepping methods in legacy codes. In this talk I will describe our recent work on time filtering methods for the Navier-Stokes equations as well as other applications. A rigorous development of such methods requires an understanding of the effect of the modification of inputs and outputs on the accuracy, efficiency, and stability of the time-evolution method. In this talk, we show that time-filtering a given method can be seen as equivalent to generating a new general linear method (GLM). We use this GLM approach to develop an optimization routine that enabled us to find new time-filtering methods with high order and efficient linear stability properties. In addition, understanding the dynamics of the errors allows us to combine the time-filtering GLM methods with the error inhibiting approach to produce a third order A-stable method based on alternating time filtering of the implicit Euler method. I will present our new methods and show their performance when tested on sample problems.
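A minimal scalar sketch of the time-filtering idea (my simplification, using the curvature filter studied in the time-filtering literature; the talk's methods and analysis are more general): post-processing backward Euler with a cheap three-point filter raises the observed convergence order from one to two.

```python
import math

def backward_euler(lam, y0, dt, n_steps):
    """Plain implicit (backward) Euler for y' = lam * y."""
    y = y0
    for _ in range(n_steps):
        y = y / (1.0 - lam * dt)
    return y

def filtered_backward_euler(lam, y0, dt, n_steps):
    """Backward Euler followed by the curvature time filter
    y <- y* - (1/3)(y* - 2 y_n + y_{n-1}), applied each step."""
    y_prev, y = y0, y0 * math.exp(lam * dt)   # exact value used for startup
    for _ in range(n_steps - 1):
        y_star = y / (1.0 - lam * dt)          # tentative backward Euler step
        y_prev, y = y, y_star - (1.0 / 3.0) * (y_star - 2.0 * y + y_prev)
    return y

# Halving dt should cut the filtered method's error by ~4 (second order).
lam, T = -1.0, 1.0
exact = math.exp(lam * T)
e1 = abs(filtered_backward_euler(lam, 1.0, 0.01, 100) - exact)
e2 = abs(filtered_backward_euler(lam, 1.0, 0.005, 200) - exact)
print(e1 / e2)  # roughly 4
```

Note that the filter touches only the output of the existing step, which is why this strategy suits legacy codes whose time-stepping module cannot be rewritten.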
We show that various classical theorems of real/complex linear incidence geometry, such as the theorems of Pappus, Desargues, Möbius, and so on, can be interpreted as special cases of a single "master theorem" that involves an arbitrary tiling of a closed oriented surface by quadrilateral tiles. This yields a general mechanism for producing new incidence theorems and generalizing the known ones.
This is joint work with Pavlo Pylyavskyy.
Schubert calculus involves studying intersection problems among linear subspaces of C^n. A classical example of a Schubert problem is to find all 2-dimensional subspaces of C^4 which intersect 4 given 2-dimensional subspaces nontrivially (it turns out there are 2 of them). In the 1990s, B. and M. Shapiro conjectured that a certain family of Schubert problems has the remarkable property that all of its complex solutions are real. This conjecture inspired a lot of work in the area, including its proof by Mukhin-Tarasov-Varchenko in 2009. I will present a strengthening of this result which resolves some conjectures of Sottile, Eremenko, Mukhin-Tarasov, and myself, based on surprising connections with total positivity, the representation theory of symmetric groups, symmetric functions, and the KP hierarchy. This is joint work with Kevin Purbhoo.
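The count of 2 in the example above matches a classical fact: the number of solutions to this generic Schubert problem equals the number of standard Young tableaux of 2x2 shape, computable by the hook length formula (a quick check, my code):

```python
from math import factorial

def num_standard_young_tableaux(shape):
    """Hook length formula: f^lambda = n! / prod(hook lengths)."""
    n = sum(shape)
    prod_hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1                                # cells to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)     # cells below
            prod_hooks *= arm + leg + 1
    return factorial(n) // prod_hooks

print(num_standard_young_tableaux((2, 2)))  # 2, matching the Schubert count
```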
In the 1970s, Thurston generalized the classification of self-maps of the torus to surfaces of higher genus, thereby completing the work initiated by Nielsen. This is known as the Nielsen-Thurston Classification Theorem, a cornerstone in low-dimensional topology. Over time, numerous alternative proofs have been developed, leveraging different aspects of surface theory. In this talk, I will provide an overview of the classical theory and then outline the ideas of a new proof, one that offers some new insights into the hyperbolic geometry of surfaces. This is joint work with Camille Horbez.
I will present two elementary structures that provide generalizations of differential calculus beyond Cartesian spaces (R^n): diffeology (a-la-Souriau), and differential structures (a-la-Sikorski).
A diffeology on a set declares which maps from Cartesian spaces to the set are smooth; a differential structure declares which real-valued functions on the set are smooth.
In spite of their simplicity, these structures often capture surprisingly rich information. They also lead to intriguing open questions. For example, for a linear flow of irrational slope on the 2-torus, the topology of its quotient is trivial, but the diffeology of its quotient determines the slope (up to an automorphism of the torus). On the other hand, we do not know if the differential structure (or even the topology) of a trajectory of this flow also determines the slope.
Mean field game (MFG) problems study how a large number of similar rational agents make strategic movements to minimize their costs. They have recently gained great attention due to their connection to various problems, including optimal transport, gradient flow, deep generative models, and reinforcement learning. In this talk, I will elaborate on our recent computational efforts on MFGs and their inverse problems. I will start with a low-dimensional setting, employing conventional discretization and optimization methods, and delve into the convergence results of our proposed approach. Afterwards, I will extend the discussion to high-dimensional problems by bridging the trajectory representation of MFGs with a special type of deep generative model---normalizing flows. This connection not only helps solve high-dimensional MFGs but also provides a way to improve the robustness of normalizing flows. I will further address the extension to associated inverse problems for learning dynamics, where the cost function of the MFG may not be available. To tackle this, we propose a bilevel optimization formulation for learning dynamics guided by MFGs with unknown obstacles and metrics. Additionally, we establish local unique identifiability results and design an alternating gradient algorithm with convergence analysis. Furthermore, we extend our proposed bilevel method to a deep learning-based algorithm using normalizing flows. Our numerical experiments demonstrate the efficacy of the proposed methods.
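To make the normalizing-flow ingredient concrete (a minimal one-dimensional sketch, not the architectures used in this work): a flow pushes a simple base density through an invertible map and tracks the density via the change-of-variables formula.

```python
import math

def base_logpdf(z):
    """Standard normal log-density (the base distribution of the flow)."""
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def flow_logpdf(x, mu, sigma):
    """Log-density of x = mu + sigma * z under an affine flow, via
    log p_X(x) = log p_Z(f^{-1}(x)) + log |d f^{-1} / dx|."""
    z = (x - mu) / sigma
    return base_logpdf(z) - math.log(sigma)

# Sanity check against the analytic N(mu, sigma^2) log-density.
mu, sigma, x = 1.5, 2.0, 0.7
analytic = (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2.0 * math.pi)))
print(abs(flow_logpdf(x, mu, sigma) - analytic) < 1e-12)  # True
```

In the MFG setting, the invertible map plays the role of the agents' trajectory map, and the tracked density plays the role of the evolving population distribution.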
The compressible Euler equation can lead to the formation of shocks, i.e. discontinuities, in finite time, as notably observed behind supersonic planes. A very natural way to justify these singularities involves studying solutions as inviscid limits of Navier-Stokes solutions with evanescent viscosities. The mathematical study of this problem is however very difficult because of the destabilization effect of the viscosities.
Bianchini and Bressan proved the inviscid limit to small BV solutions using the so-called artificial viscosities in 2004. However, until very recently, achieving this limit with physical viscosities remained an open question.
In this talk, we will present the basic ideas of classical mathematical theories of compressible fluid mechanics and introduce the recent a-contraction with shifts method. This method is employed to describe the physical inviscid limit in the context of the barotropic Euler equation.
Contact 3-manifolds occupy a central role in low-dimensional topology and are reasonably well-understood, partly due to their interactions with Floer-theoretic invariants such as Heegaard Floer homology. Convex surface theory and bypasses are extremely powerful tools for cutting up and analyzing contact 3-manifolds and in particular have been successfully applied to many classification problems. Higher-dimensional contact topology, on the other hand, is still the "wild west". After reviewing convex surface theory in dimension three, we explain how to generalize some of its fundamental properties to higher dimensions. This is made possible by a key technical advance which allows us to apply a carefully constructed C^0-small perturbation to a given hypersurface in a contact manifold so that it becomes "convex", i.e., has nice dynamical properties that make it suitable for cutting. This is joint work with Yang Huang.
Quantum optics is the quantum theory of the interaction of light and matter. In this talk, I will describe a real-space formulation of quantum electrodynamics. The goal is to understand the propagation of nonclassical states of light in systems consisting of many atoms. In this setting, there is a close relation to kinetic equations for nonlocal PDEs with random coefficients.
We propose a general method to identify nonlinear Fokker-Planck-Kolmogorov equations (FPK equations) as gradient flows on the space of Borel probability measures on R^d with a natural differential geometry. Our notion of gradient flow does not depend on any underlying metric structure such as the Wasserstein distance, but is derived from purely differential geometric principles. Moreover, we explicitly identify the associated energy functions and show that these are Lyapunov functions for the FPK solutions. Our main result covers classical and generalized porous media equations, where the latter have a generalized diffusivity function and a nonlinear transport-type first-order perturbation.
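For the classical porous medium equation, the Lyapunov property can be seen in one line (my sketch of the standard computation; the energy functions and differential geometry in the talk are more general):

```latex
\partial_t u = \Delta u^m, \qquad
E[u] = \frac{1}{m-1}\int_{\mathbb{R}^d} u^m \, dx,
\frac{d}{dt}\,E[u(t)]
  = \frac{m}{m-1}\int u^{m-1}\,\Delta u^m \, dx
  = -\,m^2 \int u^{2m-3}\,|\nabla u|^2 \, dx \;\le\; 0,
% after integrating by parts and using
% \nabla u^{m-1}\cdot\nabla u^m = m(m-1)\,u^{2m-3}\,|\nabla u|^2 .
```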
The goal of this talk is to explain how basic representation theory of compact Lie groups can be very useful for giving insights into mathematical problems at the foundations of cryo-electron microscopy and X-ray crystallography - two important techniques in structural biology. Although they involve quite different experimental setups, mathematically they can be viewed as group orbit recovery problems where the experimental data determines invariant tensors (moments) of the unknown molecular structure. For computational reasons there is particular interest in determining prior conditions, such as sparsity, on the structure which ensure that the structure can be resolved from moments of low degree. We will discuss recent results in this direction.
An exotic sphere is a manifold homeomorphic, but not diffeomorphic, to a standard sphere of some dimension. I will explain the telescope conjecture in stable homotopy theory, recently settled in joint work with Burklund, Levy, and Schlank, through its implications for the diffeomorphism classification of exotic spheres. I will end by explaining how a key ingredient in our understanding of the telescope conjecture generalizes a classic conjecture in algebraic geometry.
The Neumann-Poincaré operator (NPO) is an integral operator that allows the representation of solutions to elliptic PDEs with piecewise constant coefficients using layer potentials. We discuss examples of asymptotic problems in composite media in which the spectral properties of the NPO prove quite useful in the analysis.
There are many situations in geometry and group theory where it is natural, convenient or necessary to explore infinite groups via their actions on finite objects --- i.e., via their finite quotients. A natural question is therefore: to what extent do they determine the infinite group? In this talk I will discuss the rich history of this problem and describe some of the major progress of recent years.
The classical Schur duality is a simple yet powerful concept which relates the representations of the symmetric group and general linear Lie algebra, as well as combinatorics of symmetric functions. We will explain several settings where "dual" algebraic structures arise in parallel:
(1) Quantum deformation: duality between a quantum group and Hecke algebra of type A;
(2) Duality between an i-quantum group and Hecke algebra of type B;
(3) Geometric realizations and canonical bases for (1)-(2);
(4) (Affine) Hecke and Schur categories.
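In the classical setting that all of the above deform, Schur-Weyl duality decomposes the tensor space (C^n)^{\otimes k} into pairs of irreducibles for the symmetric group S_k and for gl_n; a quick numerical check of the resulting dimension identity n^k = \sum_\lambda f^\lambda \cdot \dim V_\lambda (my code, using the hook length and hook content formulas):

```python
from math import factorial

def partitions(k, max_part=None):
    """All partitions of k as weakly decreasing tuples."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

def hooks(shape):
    """Hook lengths of all cells of a Young diagram."""
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in shape[i + 1:] if r > j)
            yield arm + leg + 1

def f_lambda(shape):
    """Dimension of the S_k-irreducible (hook length formula)."""
    p = 1
    for h in hooks(shape):
        p *= h
    return factorial(sum(shape)) // p

def dim_gl(shape, n):
    """Dimension of the gl_n-irreducible: prod over cells (n + j - i) / hook."""
    num, den = 1, 1
    for i, row in enumerate(shape):
        for j in range(row):
            num *= n + j - i
    for h in hooks(shape):
        den *= h
    return num // den

def schur_weyl_dimension(n, k):
    """Sum of f^lambda * dim V_lambda over partitions of k with <= n rows."""
    return sum(f_lambda(lam) * dim_gl(lam, n)
               for lam in partitions(k) if len(lam) <= n)

print(schur_weyl_dimension(2, 3), 2 ** 3)  # 8 8
```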
Hydrodynamic stability has been a main theme in fluid mechanics since Reynolds's famous experiment in 1883. This field is mainly concerned with the transition of fluid motion from laminar to turbulent flow. On one hand, eigenvalue analysis shows that plane Couette flow and pipe Poiseuille flow are linearly stable at any Reynolds number. On the other hand, experiments show that these flows can be unstable and transition to turbulence under small but finite perturbations at high Reynolds number. This is the so-called Sommerfeld paradox, and its resolution is a long-standing problem in fluid mechanics. In this talk, we introduce some recent progress toward understanding the transition mechanism by two means: pseudospectra and the transition threshold problem.
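To illustrate why pseudospectra, rather than eigenvalues alone, enter the story (my toy example, not from the talk): for a nonnormal matrix the resolvent norm can be enormous far from the spectrum, so tiny perturbations move eigenvalues a long way.

```python
import numpy as np

# A 30x30 Jordan block: every eigenvalue is exactly 0, but the matrix
# is highly nonnormal.
n = 30
J = np.diag(np.ones(n - 1), k=1)

# z lies in the epsilon-pseudospectrum iff sigma_min(z*I - J) < epsilon.
z = 0.5
sigma_min = np.linalg.svd(z * np.eye(n) - J, compute_uv=False)[-1]
print(sigma_min)  # astronomically small, although |z - 0| = 0.5

# Consistently, a perturbation of size 1e-9 moves the eigenvalues onto a
# circle of radius (1e-9)**(1/30), i.e. about 0.5.
Jp = J.copy()
Jp[-1, 0] = 1e-9
radii = np.abs(np.linalg.eigvals(Jp))
print(radii.max())  # ~ 0.5
```

Linearly stable shear flows exhibit the same phenomenon: their linearized operators are nonnormal, so finite perturbations can trigger transition despite stable eigenvalues.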
In the talk, regularity results on the smoothness of axisymmetric suitable weak solutions to the classical Navier-Stokes equations, describing the flow of viscous incompressible fluids, will be discussed. The main outcome is that axisymmetric suitable weak solutions have no Type I blowups. In addition, slightly supercritical sufficient conditions for regularity will be given and their proof will be outlined.