Inverse problems arise from the need to extract knowledge from incomplete measurements. Lying at the heart of modern scientific inquiry, they find extensive applications in various fields, including geophysics, medical imaging, biology, and many others. In this talk, we will first introduce the mathematical background and several interesting applications. We will also demonstrate the higher-order linearization approach to solving several inverse problems for nonlinear PDEs, including nonlinear magnetic Schrödinger equations and transport equations. Unique determination of unknown coefficients from measurements will be discussed.
A key structural result in Lie theory is Cartan's correspondence between real Lie groups and symmetric spaces. I will explain a geometric refinement of Cartan's correspondence and discuss its application to Langlands duality.
Joint work with Mark Macerato, David Nadler, and John O'Brien.
A famous result in number theory is Dirichlet's theorem that there exist infinitely many prime numbers in any given arithmetic progression a, a + N, a + 2N, ... where a, N are coprime. In fact, a stronger statement holds: the primes are equidistributed among the coprime residue classes modulo N. In order to prove his theorem, Dirichlet introduced Dirichlet L-functions, analogues of the Riemann zeta function that depend on a choice of character of the group of units modulo N.
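As a quick numerical illustration of this equidistribution (a minimal sketch, assuming the sympy library for prime generation; N and the bound are arbitrary choices):

```python
from collections import Counter
from math import gcd
from sympy import primerange

N, bound = 12, 10**6

# Tally primes below `bound` by residue class modulo N.
counts = Counter(p % N for p in primerange(2, bound))

# Dirichlet: only classes coprime to N contain infinitely many primes,
# and they receive asymptotically equal shares of them.
for a in range(1, N):
    if gcd(a, N) == 1:
        print(f"a = {a}: {counts[a]} primes below {bound}")
```

The four coprime classes modulo 12 each come out close to one quarter of the primes counted.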
More general L-functions appear throughout number theory and are closely connected with equidistribution questions, such as the Sato--Tate conjecture (concerning the number of solutions to y^2 = x^3 + a x + b in the finite field with p elements, as the prime p varies). L-functions also play a central role in both the motivation for and the formulation of the Langlands conjectures in the theory of automorphic forms.
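To make the Sato--Tate setting concrete, here is a minimal sketch (with a hypothetical choice of curve, y^2 = x^3 + x + 1) that counts solutions modulo each small prime of good reduction and prints the normalized quantity a_p / (2 sqrt(p)) whose distribution the conjecture describes:

```python
from collections import Counter
from math import sqrt
from sympy import primerange

a, b = 1, 1  # hypothetical curve y^2 = x^3 + x + 1

for p in primerange(5, 60):
    if (4 * a**3 + 27 * b**2) % p == 0:
        continue  # skip primes of bad reduction (here p = 31)
    # For each value t mod p, count the y with y^2 = t (mod p).
    sq_counts = Counter(pow(y, 2, p) for y in range(p))
    n_p = sum(sq_counts[(x**3 + a * x + b) % p] for x in range(p))
    a_p = p - n_p  # Hasse bound: |a_p| <= 2 sqrt(p)
    print(p, n_p, round(a_p / (2 * sqrt(p)), 3))
```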
I will give a gentle introduction to some of these ideas and discuss some recent theorems in the area.
The isoperimetric inequality in Euclidean space has a long history in geometry, going back to the legend of Queen Dido. Over the past century, mathematicians have sought to generalize the isoperimetric inequality to various curved spaces. In this lecture, I will discuss how the isoperimetric inequality can be generalized to minimal surfaces. The proof is inspired by, but does not actually use, optimal transport.
We show that counting holomorphic curves by the values of their boundaries in the HOMFLY skein module gives rise to invariant counts of holomorphic curves with boundary in a Maslov zero Lagrangian in a 3-dimensional Calabi-Yau manifold. This leads to simple and powerful recursion relations for curves that we use to prove the Ooguri-Vafa relation between HOMFLY polynomials of knots in the 3-sphere and Gromov-Witten theory in the resolved conifold. The talk reports on joint work with Vivek Shende.
The pioneering work of Langlands has established the theory of reductive algebraic groups and their representations as a key part of modern number theory. I will survey classical and modern results in the representation theory of reductive groups over local fields (the fields of real, complex, or p-adic numbers, or of Laurent series over finite fields) and discuss how they relate to Langlands' ideas, as well as to the various reflections of the basic mathematical idea of symmetry in arithmetic and geometry.
Let G be a reductive group and H be a closed subgroup of G. We say H is a spherical subgroup of G if there exists a Borel subgroup B of G such that BH is Zariski open in G. (I will explain what the above terminologies mean in my talk.) One of the fundamental problems in the relative Langlands program is to study the multiplicity problem for the pair (G,H), i.e. to study the restriction of a representation of G to H. In this talk, I will first recall the multiplicity problem in the finite group case and in the Lie group case. Then I will go over the general conjecture and all the known results for the multiplicity problem of spherical varieties. Lastly, I will explain how to use the trace formula method to study this problem.
In this talk we will give an overview of some recent results on the unique continuation property at infinity for solutions of elliptic equations. Our first result is an unexpected uniqueness property for discrete harmonic functions. This property is connected to Anderson localization for the Anderson-Bernoulli model in dimensions two and three; we will explain this connection. Another result is the solution of the Landis conjecture on the decay of real-valued solutions of the Schrödinger equation with bounded potential. The talk is based on joint works with Buhovsky, Logunov, Sodin, Nadirashvili, and Nazarov.
Stochastic fluctuations drive biological processes from particle diffusion to neuronal spike times. Today, we will use a variety of mathematical frameworks to understand such fluctuations and derive insight into the corresponding applications. We start by considering a novel stochastic process motivated by astrocytes, glial cells that ensheath neuronal synapses and can rapidly remove diffusing signaling molecules from the synaptic cleft. We generalize this setup to consider n diffusing particles that may leave a bounded domain by either 'escaping' through an absorbing boundary (i.e., astrocyte) or being 'captured' by traps (i.e., neurotransmitter receptors) that must recharge between captures. We prove that the number of captured particles grows on the order of log n because of this recharge time, which is drastically different from the linear growth observed for instantaneous recharging. We then generalize this framework further to investigate the celebrated formula of Berg and Purcell, which models the rate at which cell surface receptors capture extracellular molecules, in the context of such recharging receptors. We end by exploring how the brain leverages interneuron diversity and noisy recurrent connections to assist with cortical computations. Specifically, we analyze a spatial model of the visual cortex with linear response theory and show how interneurons modulate the level of synchrony in visually induced gamma rhythms.
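As a drastically simplified caricature of the capture-versus-escape competition (a one-dimensional random-walk toy with a single recharging trap, not the model analyzed in the talk; all parameters are arbitrary):

```python
import random

# n simultaneous random walkers on {0, ..., L}: site 0 is an absorbing
# "escape" boundary, site L is a trap that captures a walker only when
# recharged, then needs `recharge` time steps before its next capture.
def run(n, L=10, recharge=50, seed=0):
    rng = random.Random(seed)
    walkers = [L // 2] * n
    captured, trap_ready_at, t = 0, 0, 0
    while walkers:
        t += 1
        survivors = []
        for x in walkers:
            x += rng.choice((-1, 1))
            if x == 0:
                continue                  # escaped through the boundary
            if x == L:
                if t >= trap_ready_at:    # trap recharged: capture
                    captured += 1
                    trap_ready_at = t + recharge
                    continue
                x = L - 1                 # trap busy: bounce back
            survivors.append(x)
        walkers = survivors
    return captured

for n in (10**2, 10**3, 10**4):
    print(n, run(n))  # capture count grows much more slowly than n
```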
Recent years have witnessed tremendous progress in developing and analyzing quantum computing algorithms for quantum dynamics simulation of bounded operators (Hamiltonian simulation). However, many scientific and engineering problems require the efficient treatment of unbounded operators, which frequently arise due to the discretization of differential operators. Such applications include molecular dynamics, electronic structure theory, quantum control, and quantum machine learning. We will introduce some recent advances in quantum algorithms for efficient unbounded Hamiltonian simulation, including Trotter-type splitting and the quantum highly oscillatory protocol (qHOP) in the interaction picture. The latter yields a surprising superconvergence result for regular potentials. In the end, I will discuss briefly how Hamiltonian simulation techniques can be applied to a quantum learning task achieving optimal scaling. (The talk does not assume a priori knowledge of quantum computing.)
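To see the splitting idea at toy scale, here is a minimal sketch (randomly generated Hermitian matrices stand in for the Hamiltonian terms; this illustrates plain first-order Trotter splitting, not the qHOP method itself):

```python
import numpy as np
from scipy.linalg import expm

# First-order Lie-Trotter: e^{-i(A+B)t} ~ (e^{-iA t/n} e^{-iB t/n})^n.
rng = np.random.default_rng(0)
d, t = 16, 1.0
A = rng.standard_normal((d, d)); A = (A + A.T) / 2   # Hermitian
B = rng.standard_normal((d, d)); B = (B + B.T) / 2   # Hermitian

exact = expm(-1j * (A + B) * t)
for n in (4, 16, 64, 256):
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    err = np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)
    print(n, err)  # spectral-norm error decays like O(1/n)
```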
2D materials are materials consisting of a single sheet of atoms. The first 2D material, graphene, a single sheet of carbon atoms, was isolated in 2004. In recent years, attention has shifted to materials created by stacking 2D materials with a relative twist. Such materials are known as moiré materials because of the approximate periodicity of their atomic structures over long distances, known as the moiré pattern. In 2018, experiments showed that, when twisted to the so-called "magic" angle (approximately 1 degree), twisted bilayer graphene exhibits exotic quantum phenomena such as superconductivity. I will present the first rigorous justification of the Bistritzer-MacDonald moiré-scale PDE model of twisted bilayer graphene, which played a critical role in identifying the magic angle, from a microscopic tight-binding model. I will then discuss generalizations of this work accounting for atomic relaxation and vibration (phonons), mathematical questions posed by moiré materials as fundamentally aperiodic/incommensurate systems, and other related work.
A pervading question in the study of stochastic PDE is how small-scale random forcing in an equation combines to create nontrivial statistical behavior on large spatial and temporal scales. I will discuss recent progress on this topic for several related stochastic PDEs - the stochastic heat, KPZ, and Burgers equations - and some of their generalizations. These equations are (conjecturally) universal models of physical processes such as a polymer in a random environment, the growth of a random interface, branching Brownian motion, and the voter model. The large-scale behavior of solutions is complex, and in particular depends qualitatively on the dimension of the space. I will describe the phenomenology, and then describe several results and challenging problems on invariant measures, growth exponents, and limiting distributions.
Energy methods have historically been a useful tool for studying waves on different background geometries. Under the right conditions, solutions to the wave equation satisfy energy estimates which state that the energy of the solution u at time t is controlled by the energy of the initial data. However, such techniques are not always available, as in the case of rotating cosmic string spacetimes. These geometries are solutions to the Einstein field equations which exhibit a singularity along a timelike curve corresponding to a one-dimensional source of angular momentum. They have a notably unusual feature: they admit closed timelike curves near the so-called "string" and are thus not globally hyperbolic. In joint work with Jared Wunsch, we show that *forward in time* solutions to the wave equation (in an appropriate microlocal sense) do exist on rotating cosmic string spacetimes, despite the causality issues present in the geometry. Our techniques involve proving a statement on propagation of singularities which provides a microlocal version of an energy estimate that allows us to establish existence of solutions.
In statistical physics, many-particle models are described by an interaction energy determined by the Coulomb potential, or more generally an inverse power law called a Riesz potential. To this energy, one can associate a dynamics, either conservative or dissipative, which takes the form of a coupled system of nonlinear differential equations. In principle, one could solve this system of differential equations directly and perfectly describe the behavior of every particle in the system. But in practice, the number of particles (e.g., 10^23 in a gas) is too large for this to be feasible. Instead, one can focus on the "average" behavior of a particle, which is encoded by the empirical measure of the system. Formally, this measure converges, as the number of particles tends to infinity, to a solution of a certain nonlinear PDE, called the mean-field limit; but proving this convergence is a highly nontrivial matter. We will review results from the past few years on mean-field limits for Riesz systems, including important questions such as how fast this limit occurs and how it deteriorates with time, and discuss open questions that still remain.
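As a toy instance of this picture (a minimal sketch under simplifying assumptions: one dimension, logarithmic interaction, quadratic confinement, dissipative dynamics), one can watch the empirical measure settle near its mean-field limit:

```python
import numpy as np

# Gradient flow of E = (1/N) sum_{i<j} -log|x_i - x_j| + sum_i x_i^2 / 2.
rng = np.random.default_rng(1)
N, dt, steps = 200, 1e-3, 5000    # small dt for stability
x = rng.standard_normal(N)

for _ in range(steps):
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)            # no self-interaction
    repulsion = np.sum(1.0 / diff, axis=1) / N
    x += dt * (repulsion - x)                 # repulsion vs. confinement

# The empirical measure (1/N) sum_i delta_{x_i} approximates the
# mean-field limit, here the semicircle law on [-sqrt(2), sqrt(2)].
hist, edges = np.histogram(x, bins=20, density=True)
print(np.round(hist, 2))
```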
We are interested in model reduction for certain classes of high-dimensional stochastic dynamical systems. Model reduction consists in estimating a low-dimensional stochastic dynamical system that approximates, in a suitable sense (e.g., at certain spatial or temporal scales), the original system, or at least some observables of the original system. Typically such reduced models may be faster to simulate (e.g., because of their lower dimensionality) and may offer important insights on the dynamical behavior of the original system. Motivated by examples and applications, including molecular dynamics, we consider a special, well-studied class of stochastic dynamical systems that exhibit two important properties. The first one is that the dynamics can be split into two timescales, a slow and a fast timescale, and the second one is that the slow dynamics takes place on, or near to, a low-dimensional manifold M, while the fast dynamics can be thought of as consisting of fast oscillations off that manifold. Given only access to a black-box simulator from which short bursts of simulations can be obtained, and a (possibly small) set of reasonable initial conditions, we design an algorithm that outputs an estimate of the manifold M, a process representing the effective stochastic dynamics on M, which has averaged out the fast modes, and a simulator of such process. The fast modes are not assumed to be small, nor orthogonal to M. This simulator is efficient in that it exploits the low dimension of the manifold M, and takes time steps whose size depends only on the regularity of the effective process, and which are therefore typically much larger than those of the original simulator, which had to resolve the fast modes in high dimensions. Furthermore, the algorithm and the estimation can be performed on-the-fly, leading to efficient exploration of the effective state space, without losing consistency with the underlying dynamics. This construction enables fast and efficient simulation of paths of the effective dynamics, together with estimation of crucial features and observables of such dynamics, including the stationary distribution, identification of metastable states, and residence times and transition rates between them. This is joint work with X.-F. Ye and S. Yang.
Schubert calculus has its origins in enumerative questions asked by the geometers of the 19th century, such as "how many lines meet four fixed lines in three-space?" These problems can be recast as questions about the structure of cohomology rings of geometric spaces such as flag varieties. Borel's isomorphism identifies the cohomology of the complete flag variety with a simple quotient of a polynomial ring. Lascoux and Schützenberger (1982) defined Schubert polynomials, which are coset representatives for the Schubert basis of this ring. However, it was not clear if this choice was geometrically natural. Knutson and Miller (2005) provided a justification for the naturality of Schubert polynomials via antidiagonal Gröbner degenerations of matrix Schubert varieties, which are generalized determinantal varieties. Furthermore, they showed that pre-existing combinatorial objects called pipe dreams govern this degeneration. In this talk, we study the dual setting of diagonal Gröbner degenerations of matrix Schubert varieties, interpreting these limits in terms of the "bumpless pipe dreams" of Lam, Lee, and Shimozono (2021). We then use the combinatorics of K-theory representatives for Schubert classes to compute the Castelnuovo-Mumford regularity of matrix Schubert varieties, which gives a bound on the complexity of their coordinate rings.
Machine learning has recently been used to design innovative and arguably revolutionary methods for solving many challenging problems from science and engineering that are modeled by partial differential equations (PDEs). Conversely, PDEs provide an important set of tools for understanding machine learning methods. This talk presents some recent progress at the interface between neural-network-based machine learning and PDEs.
In the first part of the talk, we will discuss theoretical analysis of neural-network methods for solving high-dimensional PDEs. We show that Deep Ritz solvers achieve dimension-free generalization rates in solving elliptic problems under the assumption that the solutions belong to Barron spaces. To justify this assumption, we develop a new complexity-based solution theory for several elliptic problems in Barron spaces.
In the second part of the talk, we will showcase the power of PDEs in minimax optimization, which underpins a variety of problems in adversarial machine learning. More precisely, we consider the problem of finding the mixed Nash equilibria (MNE) in two-player zero-sum games on the space of probability measures. We prove that the two-scale gradient descent-ascent (GDA) dynamics converges to the unique MNE of an entropy-regularized objective at an exponential rate. We also show that an annealed GDA with a logarithmically decaying cooling schedule converges to the MNE of the original unregularized objective.
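A minimal finite-dimensional sketch of the two-scale idea (a random payoff matrix stands in for the measure-valued setting of the talk; step sizes and the regularization strength are arbitrary):

```python
import numpy as np

# Entropy-regularized zero-sum game over probability simplices:
#   min_p max_q  p^T A q + tau * sum(p log p) - tau * sum(q log q),
# solved by entropic gradient descent-ascent with two timescales.
rng = np.random.default_rng(0)
A, tau = rng.standard_normal((5, 5)), 0.1
p = np.ones(5) / 5
q = np.ones(5) / 5
eta_p, eta_q = 0.01, 0.1              # q moves on the faster timescale

for _ in range(20000):
    gp = A @ q + tau * (np.log(p) + 1)       # gradient in p (descend)
    gq = A.T @ p - tau * (np.log(q) + 1)     # gradient in q (ascend)
    p = p * np.exp(-eta_p * gp); p /= p.sum()
    q = q * np.exp(eta_q * gq);  q /= q.sum()

print(np.round(p, 3), np.round(q, 3))  # approximate regularized MNE
```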
For a domain \Omega \subset R^d, a classical criterion of Wiener characterizes the domains for which one can solve the Dirichlet problem (originally, for Laplace's equation) with continuous boundary data. What happens if we allow singular data, say in L^p (with respect to surface measure on the boundary) for some finite exponent p?
It turns out that solvability in the latter setting is equivalent to a quantitative, scale-invariant version of absolute continuity of harmonic measure with respect to surface measure on \partial \Omega. In turn, determining what sort of boundaries are permitted in the presence of such absolute continuity involves a version of a classical 1-phase free boundary problem. In this talk, we shall discuss the question of characterizing L^p solvability, and we shall give an answer that is rather definitive (i.e., we find a characterization in the presence of some natural "best possible" background hypotheses) in the case of Laplace's equation. Time permitting, we shall also discuss recent partial progress in the caloric case.
In the past half-century, partial differential equation (PDE)-based computational models have emerged as indispensable for science and engineering. However, remarkable gaps still exist between state-of-the-art simulations and reality, meaning that many simulations are ineffective in supporting decision-making or design under uncertainty for complex systems (e.g., climate modeling). To bridge the gap and fulfill challenging real-world missions, I develop data-aware computational models and practical mathematical methods to combine the exponential growth of data with complex PDE-based models, and make improved predictions, which may come equipped with measures of uncertainty.
In this talk, I will mainly focus on two important applied mathematical problems: Bayesian inference and operator learning. 1) Bayesian inference uses data to calibrate/improve models and quantify uncertainties. For large-scale science and engineering problems, challenges arise from the need for repeated evaluations of an expensive forward model, which is often given as a black box or is impractical to differentiate. Our framework, Kalman inversion, built on Kalman methodology and Fisher-Rao gradient flow, is derivative-free; empirically it converges in O(10) iterations with O(10) ensemble evaluations per iteration, leading to effective Bayesian inference with very few model evaluations. 2) Operator learning uses data to build deep learning surrogate models for PDE solving to accelerate many-query problems (e.g., design optimization). Our approach, geometry-aware Fourier neural operator, is inspired by adaptive mesh motion and spectral methods, and learns operators between infinite-dimensional function spaces in a resolution/discretization invariant manner. Specifically, when we learn operators between the design geometry space and the simulation solution space, our approach enables efficient engineering design optimization.
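For orientation, here is a generic, minimal sketch of an ensemble Kalman inversion loop (the textbook perturbed-observation update; the forward model G, data, and covariances are all hypothetical, and the speaker's Fisher-Rao variant is not reproduced):

```python
import numpy as np

def G(m):  # hypothetical black-box forward model, never differentiated
    return np.array([m[0] ** 2 + m[1], np.sin(m[1])])

rng = np.random.default_rng(0)
y = np.array([2.0, 0.5])                 # observed data
Gamma = 0.01 * np.eye(2)                 # observation noise covariance
ens = rng.standard_normal((50, 2))       # initial parameter ensemble

for _ in range(10):                      # O(10) iterations in practice
    Gs = np.array([G(m) for m in ens])
    dm, dG = ens - ens.mean(0), Gs - Gs.mean(0)
    CmG = dm.T @ dG / len(ens)           # parameter-output covariance
    CGG = dG.T @ dG / len(ens)           # output covariance
    K = CmG @ np.linalg.inv(CGG + Gamma) # Kalman-style gain
    noise = rng.multivariate_normal(np.zeros(2), Gamma, len(ens))
    ens = ens + (y + noise - Gs) @ K.T   # derivative-free update

print(ens.mean(0))  # ensemble mean estimates the unknown parameters
```

Each iteration uses only ensemble evaluations of G, which is what makes this family of methods suitable for expensive black-box models.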
The methods we developed have been successfully applied in complex applications ranging from the Mars-landing supersonic parachute and bacteria-resistant catheter design to the digital twin for airfoil damage detection and the Earth system model for climate science.
Strongly correlated quantum systems give rise to some of the most challenging problems in science. I will present the first numerical analysis for the coupled cluster method tailored by matrix product states, which is a promising method for handling strongly correlated systems. I will then discuss recent applications of the coupled cluster method and matrix product states to solving the magic angle twisted bilayer graphene system at the level of interacting electrons.
We consider the dynamics of an area-preserving diffeomorphism of a surface, or (what turns out to be closely related) a Reeb vector field on a three-manifold. One of the basic questions in dynamics is to understand periodic orbits of a diffeomorphism or vector field. A "closing lemma" is a statement asserting that one can make a small perturbation of a diffeomorphism or vector field to arrange that there is a periodic orbit passing through a given nonempty open set. The goal of this talk is to describe a new approach to proving "quantitative" closing lemmas, which gives upper bounds on how much one needs to perturb in order to obtain a periodic orbit with a given upper bound on the period. This is based in part on joint work with Oliver Edtmair.
Kirigami, the traditional art of paper cutting, has recently emerged as a promising paradigm for mechanical metamaterials. While many prior works have studied the geometry and mechanics of certain periodic kirigami tessellations, the computational design of more complex structures is less understood. In this talk, I will present mathematical design frameworks for modulating the geometry of kirigami tessellations. In particular, by identifying the geometric constraints controlling the contractibility, compact reconfigurability, and rigid-deployability of kirigami structures, we can achieve a wide range of patterns that can be deployed into pre-specified shapes in two or three dimensions. Altogether, our approaches open a new route to the design of shape-shifting structures in science and engineering.
A representation of a group G is said to be rigid if it cannot be continuously deformed to a non-isomorphic representation. If G happens to be the fundamental group of a complex projective manifold, rigid representations are expected to behave fundamentally differently from generic representations. In this talk I will outline the basic properties of rigid local systems and explain the meaning of two conjectures of Simpson (motivicity and integrality). I will then report on joint work with Esnault where we provide some partial answers to Simpson's conjectures.
We know what it means to diagonalize an operator in linear algebra. What might it mean to diagonalize a functor?
Given a linear operator f whose characteristic polynomial is multiplicity-free, we can construct projection to each eigenspace as a polynomial in f, using a technique known as Lagrange interpolation. We think of the process of finding a complete family of orthogonal idempotents as the diagonalization of f. After reviewing this we provide a categorical analogue: given a functor F with some additional data (akin to the set of eigenvalues), we construct idempotent functors projecting to "eigencategories." Along the way we'll explain some of the basic concepts in categorification.
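The linear-algebra prototype is easy to make completely explicit; here is a minimal sketch for a 2x2 matrix with multiplicity-free spectrum:

```python
import numpy as np

# Lagrange interpolation: p_i(f) = prod_{j != i} (f - lam_j)/(lam_i - lam_j)
# gives the idempotent projecting onto the lam_i-eigenspace.
f = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 1 and 3, multiplicity-free
lams = np.linalg.eigvals(f)
I = np.eye(2)

projs = []
for i, li in enumerate(lams):
    P = I.copy()
    for j, lj in enumerate(lams):
        if j != i:
            P = P @ (f - lj * I) / (li - lj)
    projs.append(P)

# A complete family of orthogonal idempotents summing to the identity:
for P in projs:
    assert np.allclose(P @ P, P)            # idempotent
assert np.allclose(projs[0] @ projs[1], 0)  # orthogonal
assert np.allclose(projs[0] + projs[1], I)  # complete
```

The categorical story replaces the scalars lam_i by the additional data attached to the functor F, and these projections by the idempotent functors onto the "eigencategories."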
Diagonalization is incredibly important in every field of mathematics. I am a representation theorist, so I will briefly indicate some of the important applications of categorical diagonalization to representation theory. I'll also indicate applications to algebraic geometry.
In this talk we will follow a running example involving modules over the ring Z[x]/(x^2 - 1), in other words, the group algebra of the group of order 2. If you know what a complex of modules is (and what chain maps and homotopies are), then you have all the prerequisites needed for this talk.
This is all joint work with Matt Hogancamp.
Mean field spin glasses are high-dimensional random functions with special exchangeability properties. These models were originally motivated by the study of disordered magnetic materials. However, it soon became clear that a large number of random optimization problems of interest in computer science and statistics fit this framework. Parisi's replica symmetry breaking (RSB) theory makes it possible to determine the asymptotics of the optimal value of these problems. Can RSB shed light on relevant algorithmic questions as well? Here is a specific formalization of this question: is there a polynomial-time algorithm that outputs a feasible solution of these optimization problems whose value is, with high probability, within a factor \rho of the optimum? I will survey recent rigorous work that points toward a remarkably precise answer to this question. Time permitting, I will talk about the problem of sampling from the Sherrington-Kirkpatrick Boltzmann measure. [Based on joint work with Ahmed El Alaoui and Mark Sellke.]
In this talk, we will address several areas of recent work centered around the themes of transparency and fairness in machine learning, as well as practical efficiency for methods with high-dimensional data. We will discuss recent results involving linear algebraic tools for learning, such as methods in non-negative matrix factorization and CUR decompositions. We will showcase our derived theoretical guarantees as well as practical applications of those approaches. These methods allow for natural transparency and human interpretability while still offering strong performance. Then, we will discuss new directions in debiasing of word embeddings for natural language processing, as well as an example in large-scale optimization that allows population subgroups to have better predictors than when treated within the population as a whole. We will conclude with work on compression and reconstruction of large-scale tensorial data from practical measurement schemes. Throughout the talk, we will include example applications from collaborations with community partners. This talk will also include discussion of recent leadership experience, initiatives, and related work.
High-dimensional data can be organized on a similarity graph - an undirected graph with edge weights that measure the similarity between the data assigned to its nodes. We consider problems in semi-supervised and unsupervised machine learning that are formulated as penalized graph cut problems. These span a wide range, including Cheeger cuts, modularity optimization on networks, and semi-supervised learning. We show a parallel between these modern problems and classical minimal surface problems in Euclidean space.
This analogy allows us to develop a suite of new algorithms for machine learning that are both very fast and highly accurate. These are analogues of well-known pseudo-spectral methods for partial differential equations.
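A minimal sketch of this flavor of method on a toy similarity graph (two Gaussian clusters; the kernel bandwidth and the number of retained eigenvectors are arbitrary choices):

```python
import numpy as np

# Build a similarity graph, form its Laplacian, and run heat diffusion
# with a truncated eigenbasis: the graph analogue of a pseudo-spectral
# solver for the heat equation.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(2, 0.3, (30, 2))])          # two clusters
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0)                                # similarity weights
L = np.diag(W.sum(1)) - W                             # graph Laplacian

vals, vecs = np.linalg.eigh(L)
k = 10                                                # keep k low modes
u0 = np.zeros(len(X)); u0[0] = 1.0                    # heat source, node 0
c = vecs[:, :k].T @ u0
u_t = vecs[:, :k] @ (np.exp(-vals[:k] * 1.0) * c)     # u(t) = e^{-tL} u0
print(np.round(u_t[:5], 3), np.round(u_t[-5:], 3))    # stays in cluster 1
```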
Mathematics underpins fundamental theories in physics such as quantum mechanics, general relativity, and quantum field theory. Nonetheless, its success in modern biology, namely cellular biology, molecular biology, chemical biology, genomics, and genetics, has been quite limited. Artificial intelligence (AI) has fundamentally changed the landscape of science, engineering, and technology in the past decade and holds a great future for discovering the rules of life. However, AI-based biological discovery encounters challenges arising from the intricate complexity, high dimensionality, nonlinearity, and multiscale nature of biological systems. We tackle these challenges with a mathematical AI paradigm. We have introduced persistent cohomology, persistent spectral graphs, persistent path Laplacians, persistent sheaf Laplacians, and evolutionary de Rham-Hodge theory to significantly enhance AI's ability to tackle biological challenges. Using our mathematical AI approaches, my team has for years been the top winner in the D3R Grand Challenges, a worldwide annual competition series in computer-aided drug design and discovery. By further integrating mathematical AI with millions of genomes isolated from patients, we uncovered the mechanisms of SARS-CoV-2 evolution and accurately forecast emerging SARS-CoV-2 variants.
I will describe in a very informal way some techniques to deal with existence (and, more qualitatively, regularity versus singularity formation) in different geometric problems and their heat flows, motivated by (variations of) the harmonic map problem, the construction of Yang-Mills connections, and nematic liquid crystals. I will emphasize in particular recent results on the construction of very fine asymptotics of blow-up solutions via a new gluing method designed for parabolic flows. I'll describe several open problems and many possible generalizations, since the techniques are rather flexible.
In this talk, classical results on the distribution of values of quadratic forms with real coefficients on lattices, and related lattice point counting problems, are reviewed. For dimensions five and larger, we discuss some of our previous and recent results and outline their relation to corresponding error bounds for the multivariate central limit theorem in probability, as well as the importance of a gap principle in the Fourier analysis of approximation errors.
Neighborhood growth cellular automata were introduced over 40 years ago as easy-to-describe models that exhibit the complex phenomena of nucleation and metastability. The most well-known of these is the threshold-2 growth model on the two-dimensional integer lattice, wherein an initially occupied set of vertices is iteratively enlarged by occupying all vertices with at least two occupied neighbors. If the initially occupied set of vertices is chosen by randomly including each vertex independently with small probability p>0, then surprisingly all vertices eventually become occupied, but it typically takes a time exponentially long in 1/p for the origin to become occupied. I will discuss the history and intuition behind results like these. I will also mention more recent progress on neighborhood growth models where polynomial scaling laws appear and the growth mechanism is quite different.
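The threshold-2 model is simple to simulate; here is a minimal sketch on a finite box (box size and density are arbitrary choices, and on boxes much smaller than the exponential scale in 1/p the growth typically stalls, reflecting the metastability above):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
p, size = 0.06, 200
occ = rng.random((size, size)) < p               # Bernoulli(p) initial set
kernel = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])                   # nearest neighbors

while True:
    nbrs = convolve2d(occ.astype(int), kernel, mode="same")
    new = occ | (nbrs >= 2)                      # occupy with >= 2 neighbors
    if (new == occ).all():                       # fixed point reached
        break
    occ = new

print(occ.mean())  # final occupied fraction
```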
A group is just a set together with a multiplication with certain properties, and these are ubiquitous in modern mathematics. In homotopy theory, we often see less-rigid objects, with many of the properties like associativity or commutativity only holding "up to homotopy". This part of the story has been well-understood since the 1970s.
When we add in the action of a fixed finite group, the game changes wildly. Even what we mean by commutative becomes less clear. This is the heart of the evolving subject of equivariant algebra. I'll describe how to understand groups "up to homotopy", what we see when we put in a group action, and how a classification problem of what "commutative" means in this context connects to combinatorics.
The Teichmüller space of a surface S is the space of marked hyperbolic structures on S, up to equivalence. By considering the holonomy representation of such structures, the Teichmüller space can also be seen as a connected component of (conjugacy classes of) representations from the fundamental group of S into PSL(2,R), consisting entirely of discrete and faithful representations. Generalizing this point of view, Higher Teichmüller Theory studies connected components of (conjugacy classes of) representations from the fundamental group of S into more general semisimple Lie groups which consist entirely of discrete and faithful representations.
We will give a survey of some aspects of Higher Teichmüller Theory, and will make links with the recent powerful notion of Anosov representation. We will conclude by focusing on two separate questions:
1. Do these representations correspond to deformations of geometric structures?
2. Can we generalize the important notion of pleated surfaces to higher rank Lie groups like PSL(d, C)?
The answer to question 1 is joint work with Alessandrini, Tholozan and Wienhard, while the answer to question 2 is joint work with Martone, Mazzoli and Zhang.
The Atiyah-Floer conjecture concerns a relation between two different versions of Floer homology, one in gauge theory and the other in symplectic geometry. Based on joint works with A. Daemi and M. Lipyanskiy, I will explain certain partial results on and generalizations of this conjecture.
The Fourier transform and Poisson summation formula on a vector space have a venerable place in mathematics. It has recently become clear that they are but the first case of a general phenomenon. Namely, conjectures of Braverman, Kazhdan, L. Lafforgue, Ngo and Sakellaridis suggest that one can define Fourier transforms and prove Poisson summation formulae whenever the vector space is replaced by a so-called spherical variety satisfying certain desiderata. In this talk I will focus on what has been proven in this direction for a particular family of spherical varieties related to flag varieties. A simple (but nontrivial) example is the zero locus of a nondegenerate quadratic form. Kazhdan believes that these generalized Fourier transforms and Poisson summation formulae will eventually have many applications throughout mathematics. I agree, and to expedite these applications I will present them in a format that is as accessible as possible.
The problem of finding the smallest eigenvalue of a Hermitian matrix (also called the ground state energy) has wide applications in quantum physics. In this talk, I will first briefly introduce the mathematical setup of quantum algorithms, and discuss how to use textbook quantum algorithms to tackle this problem. I will then introduce a new quantum algorithm that can significantly and provably reduce the circuit depth for solving this problem (the reduction can be around two orders of magnitude). This algorithm reduces the requirement on the maximal coherence time of the quantum computer, and can therefore be suitable for early fault-tolerant quantum devices. No prior knowledge of quantum algorithms is necessary for understanding most parts of the talk. (Joint work with Zhiyan Ding.)
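For context, the classical baseline is direct diagonalization, which is exact but scales with the matrix dimension (a minimal sketch with a random Hermitian matrix; quantum algorithms target Hamiltonians far too large for this):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
H = (M + M.conj().T) / 2               # Hermitian "Hamiltonian"

vals, vecs = np.linalg.eigh(H)         # eigenvalues in ascending order
E0, psi0 = vals[0], vecs[:, 0]         # ground state energy and state
print(E0, np.allclose(H @ psi0, E0 * psi0))
```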
In the 60s and 70s, there was a flurry of activity concerning the question of whether or not various subgroups of homeomorphism groups of manifolds are simple, with beautiful contributions by Kirby, Mather, Fathi, Thurston, and many others. A curiously stubborn case that remained open was that of area-preserving homeomorphisms of surfaces. For example, for balls of dimension at least 3, the relevant group was shown to be simple by work of Fathi from the 1970s, but the answer in the two-dimensional case was not known. My talk will be about some recent joint work, solving many of the mysteries of the two-dimensional case by using ideas from symplectic geometry. In particular, we resolved the "Simplicity conjecture", which stated that the group of area-preserving homeomorphisms of the two-disc that are the identity near the boundary is not simple, in contrast to the situation in higher dimensions. I will also explain some mysteries that remain unresolved. A key role in our arguments is played by a kind of Weyl law, relating the asymptotics of some new "spectral invariants" to more classical invariants. No prior knowledge of symplectic geometry will be assumed.
Traditionally, the way we compare mathematical structures is by using the notion of equality, or even of isomorphism. However, there are many settings where this is no longer the natural notion of "sameness". A notable example occurs when our objects admit a homology construction: then, we want two objects to be "the same" if they have identical homology.
Homotopy theory provides the framework required to work in these settings. In this talk, we will describe an algebraic invariant of graphs called "path homology", and introduce a new homotopical framework that encodes "sameness up to path homology". We will also show how this new framework allows us to make homology-invariant constructions. No prior knowledge of homology or homotopy theory will be assumed.
Curvature is one of the fundamental ingredients in differential geometry. People are increasingly interested in whether it is possible to think of combinatorial graphs as manifolds, and a number of different notions of curvature have been proposed. I will introduce some of the existing ideas and then propose a new notion based on a simple and completely explicit linear system of equations. This notion satisfies a surprisingly large number of desirable properties: connections to game theory (especially the von Neumann Minimax Theorem) and potential theory will be sketched; simultaneously, there is a certain "magic" element to all of this that is poorly understood, and many open problems remain. I will also sketch some curious related open problems. No prior knowledge of differential geometry (or graphs) is required.