Seminars

Unless otherwise noted, all talks take place at Digital Futures Hub, Osquars Backe 5, floor 2, SE-100 44 Stockholm.

Jonas Adler

DeepMind

AlphaFold 3

Wednesday, 19 June, 10:30-11:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Ozan Öktem

AlphaFold 3 is a substantially updated model capable of joint structure prediction of complexes including proteins, nucleic acids, small molecules, ions, and modified residues. The new AlphaFold model demonstrates improved accuracy over many previous specialised tools: greater accuracy on protein-ligand interactions than state-of-the-art docking tools, higher accuracy on protein-nucleic acid interactions than nucleic-acid-specific predictors, and significantly higher antibody-antigen prediction accuracy than AlphaFold-Multimer. I will describe the new diffusion-based architecture in depth and discuss the progress that made it possible to model a much wider range of structural biology.

Simon Arridge

Department of Computer Science, University College London

Nonlinear Inverse Problems with Learning

Thursday, 13 June, 13:30-14:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Ozan Öktem

Several problems in imaging are based on recovering the coefficients of a PDE, resulting in a non-linear inverse problem that is typically solved by an iterative algorithm, with the gradient obtained by an adjoint state method. When the forward problem is time-varying, this corresponds to the method of time-reversal, which convolves a forward and a time-reversed field with the derivative of the spatial operator (sometimes called the “imaging condition”). Applications include full-waveform imaging (FWI) in Ultrasound Computed Tomography, PhotoAcoustic Tomography (PAT) and time-resolved Diffuse Optical Tomography (tDOT). Within Learned Physics approaches, time reversal corresponds to the Neural ODE method for learning the time-derivative of an ODE parameterised by a neural network. By combining the trained network with symbolic regression, an interpretable model can be discovered. In this talk I will discuss the application of these methods to solving some forward and inverse problems in imaging.
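As a concrete illustration of the adjoint state method mentioned in the abstract, the following toy example (my own sketch, not code from the talk) recovers a single decay-rate coefficient theta in the scalar ODE x' = -theta*x from an observed final state; the gradient of the data misfit is obtained by running the discrete adjoint backwards in time.

```python
import numpy as np

def forward(theta, x0=1.0, T=1.0, n=1000):
    """Explicit Euler solution of x' = -theta * x."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] - dt * theta * x[k]
    return x, dt

def loss_and_grad(theta, data):
    """Misfit J = 0.5*(x(T) - data)^2 and dJ/dtheta via the discrete adjoint."""
    x, dt = forward(theta)
    n = len(x) - 1
    J = 0.5 * (x[n] - data) ** 2
    lam = x[n] - data                  # adjoint "initial" condition at t = T
    g = 0.0
    for k in range(n - 1, -1, -1):     # integrate the adjoint backwards in time
        g += lam * (-dt * x[k])        # accumulate dJ/dtheta
        lam *= (1.0 - dt * theta)      # discrete adjoint recursion
    return J, g

true_theta = 1.5
data = forward(true_theta)[0][-1]      # synthetic "measurement" at t = T

theta = 0.5                            # initial guess
for _ in range(200):                   # plain gradient descent on the misfit
    J, g = loss_and_grad(theta, data)
    theta -= 5.0 * g

print(round(theta, 3))                 # close to the true value 1.5
```

The same structure (forward solve, backward adjoint solve, one gradient per iteration) carries over to the PDE-coefficient problems in the talk, where the state and adjoint are fields rather than scalars.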

Steve Brunton

Department of Mechanical Engineering, University of Washington

Machine Learning for Scientific Discovery, with Examples in Fluid Mechanics

Thursday, 23 May, 10:30-11:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Ricardo Vinuesa

Accurate and efficient nonlinear dynamical systems models are essential to understand, predict, estimate, and control complex natural and engineered systems. In this talk, I will explore how machine learning may be used to develop these models purely from measurement data. We explore the sparse identification of nonlinear dynamics (SINDy) algorithm, which identifies a minimal dynamical system model that balances model complexity with accuracy, avoiding overfitting. This approach tends to promote models that are interpretable and generalizable, capturing the essential “physics” of the system. We also discuss the importance of learning effective coordinate systems in which the dynamics may be expected to be sparse. This sparse modeling approach will be demonstrated on a range of challenging modeling problems, for example in fluid dynamics. Because fluid dynamics is central to transportation, health, and defense systems, we will emphasize the importance of machine learning solutions that are interpretable, explainable, generalizable, and that respect known physics.
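As a minimal illustration of the SINDy idea (a toy sketch, not the group's PySINDy implementation), the following recovers a known linear system from trajectory data by sequentially thresholded least squares over a polynomial library:

```python
import numpy as np

# Generate data from a known linear system: x' = -0.1x + 2y, y' = -2x - 0.1y.
dt, n = 0.01, 5000
A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
X = np.zeros((n, 2))
X[0] = [2.0, 0.0]
for k in range(n - 1):                       # simple Euler integration
    X[k + 1] = X[k] + dt * (A @ X[k])
dX = (X[1:] - X[:-1]) / dt                   # finite-difference derivatives

# Candidate library of terms: [1, x, y, x^2, x*y, y^2]
x, y = X[:-1, 0], X[:-1, 1]
Theta = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

# Sequentially thresholded least squares: zero out small coefficients, refit.
Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
for _ in range(10):
    Xi[np.abs(Xi) < 0.05] = 0.0
    for j in range(2):                       # refit each equation on active terms
        active = np.abs(Xi[:, j]) > 0
        Xi[active, j] = np.linalg.lstsq(Theta[:, active], dX[:, j], rcond=None)[0]

print(np.round(Xi.T, 2))   # rows ~ [0, -0.1, 2, 0, 0, 0] and [0, -2, -0.1, 0, 0, 0]
```

The thresholding step is what promotes sparsity: coefficients below the cutoff are pruned and the remaining terms are refit, yielding an interpretable model containing only the active terms.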
Dr. Steven L. Brunton is a Professor of Mechanical Engineering at the University of Washington. He is also Adjunct Professor of Applied Mathematics, Aeronautics and Astronautics, and Computer Science, and a Data Science Fellow at the eScience Institute. He is Director of the AI Center for Dynamics and Control (ACDC) at UW and Associate Director of the NSF AI Institute in Dynamic Systems. Steve received his B.S. in mathematics from Caltech in 2006 and his Ph.D. in mechanical and aerospace engineering from Princeton in 2012. His research combines machine learning with dynamical systems to model and control systems in fluid dynamics, biolocomotion, optics, energy systems, and manufacturing. He received the Army and Air Force Young Investigator Program (YIP) awards and the Presidential Early Career Award for Scientists and Engineers (PECASE). Steve is also passionate about teaching math to engineers, as co-author of four textbooks and through his popular YouTube channel under the moniker “eigensteve”.

Predrag Cvitanović

School of Physics, Georgia Institute of Technology

Turbulence in spacetime

Wednesday, 12 June, 10:30-11:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Shervin Bagheri

For two centuries we have had the equations that describe the motion of fluids, but we cannot solve them where we need them. For pipe, channel and plane flows over long time intervals and on large spatial domains, turbulent instabilities make any accurate numerical time integration difficult. However, recent progress in ‘compressing’ turbulence data by equation-assisted thinking, in terms of so-called ‘exact coherent structures’, suggests a radically different approach. The way we perceive turbulence – the mere fact that one can identify a cloud in a snapshot – suggests these terabytes should be zipped into small labelled files, a label for each pattern explored by turbulence, and a graph of transitions among them. This pattern recognition problem is exceptionally constrained by the exact differential equations that the data must respect. Here the Navier-Stokes equations are recast as a space-time theory, with both space and time taken to infinity, and the traditional Direct Numerical Simulation codes have to be abandoned. In this theory there is no time; there is only a repertoire of admissible spatiotemporal patterns. To determine these, radically different kinds of codes will have to be written, with space and time treated on an equal footing.

Javier Jiménez

School of Aeronautics, Universidad Politecnica de Madrid

Fake Turbulence

Thursday, 30 May, 10:30-11:30 at Lecture hall E3, Osquars Backe 2

Ricardo Vinuesa

Turbulence is a high-dimensional dynamical system with known equations of motion. It can be numerically integrated, but the simulation results are also high-dimensional and hard to interpret. Lower-dimensional models are not dynamical systems, because some dynamics is discarded in the projection, and a stochastic Perron-Frobenius operator substitutes for the equations of motion. Using turbulent flows at moderate but non-trivial Reynolds numbers as an example, we show that particularly deterministic projections can be identified by either Monte-Carlo or exhaustive testing, and can be interpreted as coherent structures. We also show that they can be used to construct data-driven ‘fake’ models that retain many of the statistical characteristics of the real flow.
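The stochastic Perron-Frobenius operator that substitutes for the equations of motion can be illustrated with Ulam's method on a toy one-dimensional map (my own example, unrelated to the turbulence data in the talk): partition the state space into bins and estimate bin-to-bin transition probabilities from samples.

```python
import numpy as np

# Ulam-type estimate of the Perron-Frobenius (transition) matrix for the
# doubling map x -> 2x mod 1, by binning [0,1) and counting transitions.
rng = np.random.default_rng(0)
n_bins, n_samples = 8, 200000

x = rng.random(n_samples)                   # independent uniform Monte Carlo points
y = (2.0 * x) % 1.0                         # one step of the dynamics

i = np.minimum((x * n_bins).astype(int), n_bins - 1)   # source bin of each sample
j = np.minimum((y * n_bins).astype(int), n_bins - 1)   # destination bin

P = np.zeros((n_bins, n_bins))
np.add.at(P, (i, j), 1.0)                   # count transitions i -> j
P /= P.sum(axis=1, keepdims=True)           # normalise rows to probabilities

# The uniform density is invariant for the doubling map, so pi @ P ~ pi.
pi = np.full(n_bins, 1.0 / n_bins)
print(np.max(np.abs(pi @ P - pi)))          # small Monte Carlo residual
```

The resulting row-stochastic matrix is the coarse-grained analogue of the operator in the abstract: it propagates densities over the labelled patterns rather than individual trajectories.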
Javier Jiménez received his degree in Aeronautical Engineering (1969) from the School of Aeronautical Engineering in Madrid, and a Master in Aeronautics (1970) and Ph.D. in Applied Mathematics (1973) from the California Institute of Technology, Pasadena, CA. He is currently Emeritus Research Professor of Fluid Mechanics at the School of Aeronautics of the Universidad Politécnica de Madrid. He has been professor of Mechanics at the École Polytechnique, Palaiseau, France, senior research fellow and visiting professor at the Centre for Turbulence Research of Stanford University and NASA Ames Research Center, CA, USA, and research scientist at the IBM Madrid Scientific Centre (1975-1990). His research interests include the physics of turbulence and hydrodynamic transition, numerical simulation and data-driven analysis of turbulence and combustion, vortex dynamics, computer graphics for the analysis of experimental results, flow at low Reynolds numbers, numerical simulation of transonic flows, turbulent mixing, digital image processing and its applications, and the theory of nonlinear waves and resonance. He has been principal investigator of numerous research contracts, both institutional and industrial, including three consecutive Advanced Grants of the European Research Council. He has coauthored over a hundred publications in international refereed journals, 10 books, 85 book chapters and invited conferences, 13 invited courses, 21 technical reports and numerous other publications, resulting in about 25,000 citations, and has directed 19 doctoral theses. He is a member of the Spanish Royal Academy of Sciences and of the Spanish Royal Academy of Engineering, and an elected fellow of the American Physical Society, the Institute of Physics of London and the European Mechanics Society (Euromech). He received the research prize of the Spanish Royal Academy of Sciences in 1998 and the Fluid Mechanics prize of Euromech in 2018.

Sebastian Kaltenbach

Harvard School of Engineering and Applied Sciences, Harvard University

Physics-aware reduced order modelling for forecasting the dynamics of high dimensional systems

Wednesday, 12 June, 13:30-14:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Ozan Öktem

Reliable predictions of critical phenomena, such as weather, turbulence, and epidemics, often rely on models described by Partial Differential Equations (PDEs). However, simulations of the full high-dimensional systems described by such PDEs are often prohibitively expensive due to the small spatio-temporal scales that need to be resolved. To address this, reduced-order simulations are usually deployed that adopt various heuristics and/or data-driven closure terms. In the first part of this talk, we will discuss our latest advances in accelerating simulations of high-dimensional systems through learning and evolving their effective dynamics. We introduce the Generative Learning of Effective Dynamics (G-LED) framework, which leverages a Bayesian diffusion model and integrates physical information through virtual observables. Additionally, we will present the interpretable iLED framework, which is based on Koopman Operator theory and the Mori-Zwanzig formalism. The second part of the talk will focus on a systematic approach for identifying closures in under-resolved PDEs using grid-based Reinforcement Learning. Our method incorporates inductive bias and exploits locality through a central policy efficiently represented by a Fully Convolutional Network.
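A minimal point of reference for the Koopman-based modelling mentioned above is dynamic mode decomposition (DMD): fitting a best-fit linear operator to snapshot pairs. The sketch below is an illustrative toy of that general idea, my own example and unrelated to the G-LED or iLED code:

```python
import numpy as np

# Toy DMD: approximate snapshot dynamics by a best-fit linear operator A with
# x_{k+1} ~ A x_k, the simplest finite-dimensional Koopman approximation.
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # known rotation dynamics

X = np.empty((2, 200))
X[:, 0] = [1.0, 0.0]
for k in range(199):
    X[:, k + 1] = A_true @ X[:, k]          # generate the trajectory

X0, X1 = X[:, :-1], X[:, 1:]                # snapshot pairs (x_k, x_{k+1})
A_fit = X1 @ np.linalg.pinv(X0)             # least-squares linear operator

eigs = np.linalg.eigvals(A_fit)             # expect e^{+/- i*theta}
print(np.round(np.abs(eigs), 6))            # magnitudes on the unit circle
```

Frameworks such as iLED augment this linear picture with memory terms (the Mori-Zwanzig formalism) precisely because a single finite linear operator cannot capture under-resolved dynamics exactly.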

Petros Koumoutsakos

Harvard School of Engineering and Applied Sciences, Harvard University

AI and Scientific computing for complex systems

Friday, 31 May, 13:30-14:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Ozan Öktem

Over the last thirty years we have experienced a more than billion-fold increase in hardware capabilities and a dizzying pace of acquiring and transmitting massive amounts of data. Scientific Computing and, more recently, Artificial Intelligence (AI) have been key beneficiaries of these advances. In this talk I will outline the need for bridging the decades-long advances in Scientific Computing with those of AI. I will use examples from fluid mechanics to argue for forming alloys of AI and simulations for their prediction and control. I will present novel algorithms for learning the Effective Dynamics (LED) of complex systems and a fusion of multi-agent reinforcement learning and scientific computing (SciMARL) for the modeling and control of turbulent flows. I will also show our recent work on Optimizing a Discrete Loss (ODIL) that outperforms popular techniques such as PINNs by several orders of magnitude. I will juxtapose successes and failures and argue that the proper fusion of scientific computing and AI expertise is essential to advance scientific frontiers.
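The idea behind ODIL, treating the residual of the discretised equations as a loss to be minimised over the unknown grid values instead of time-marching, can be sketched on a toy boundary-value problem. This is a minimal illustration of the general idea, not the authors' implementation:

```python
import numpy as np

# Solve u'' = -pi^2 sin(pi x) on [0,1] with u(0) = u(1) = 0 (exact solution
# sin(pi x)) by minimising the squared residual of its finite-difference
# discretisation over the unknown grid values.
n = 8                                       # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = -np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)                             # initial guess

def residual(u):
    up = np.concatenate(([0.0], u, [0.0]))  # enforce the boundary values
    return (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2 - f

lr = 5e-6
for _ in range(20000):                      # gradient descent on sum(r**2)
    r = residual(u)
    rp = np.concatenate(([0.0], r, [0.0]))
    grad = 2.0 * (rp[:-2] - 2.0 * rp[1:-1] + rp[2:]) / h**2   # 2 * A^T r
    u -= lr * grad

print(np.max(np.abs(u - np.sin(np.pi * x))))    # small discretisation error
```

Plain gradient descent is shown only for transparency; in practice much faster optimisers (e.g. Newton-type methods) would be applied to such discrete losses.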
Petros Koumoutsakos is a professor of Engineering and Applied Sciences, Faculty Director of the Institute for Applied Computational Science and Area Chair of Applied Mathematics at Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University in Boston.

Nathan Kutz

Department of Applied Mathematics, University of Washington

Deep Learning Architectures for Science and Engineering

Monday, 27 May, 10:30-11:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Anna-Karin Tornberg

Physics-based models and governing equations dominate science and engineering practice. The advent of scientific computing has transformed every discipline as complex, high-dimensional and nonlinear systems could be easily simulated using numerical integration schemes whose accuracy and stability could be controlled. With the advent of machine learning, a new paradigm has emerged in computing whereby we can build models directly from data. In this work, integration strategies for leveraging the advantages of both traditional scientific computing and emerging machine learning techniques are discussed. Using domain knowledge and physics-informed principles, new paradigms are available to aid in engineering understanding, design and control.
Nathan Kutz is the Yasuko Endo and Robert Bolles Professor of Applied Mathematics and Electrical and Computer Engineering at the University of Washington, having served as chair of applied mathematics from 2007-2015. He is also the Director of the AI Institute in Dynamic Systems (dynamicsAI.org). He received his B.S. degree in physics and mathematics from the University of Washington in 1990 and his Ph.D. in applied mathematics from Northwestern University in 1994. He was a postdoc in the applied and computational mathematics program at Princeton University before taking his faculty position. His interests range widely, from neuroscience to fluid dynamics, where he integrates machine learning with dynamical systems and control.

Beverley McKeon

Department of Mechanical Engineering, Stanford University

Data-driven descriptions of linear & nonlinear interactions in wall turbulence

Tuesday, 21 May, 10:30-11:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Ricardo Vinuesa

Significant recent progress has been made in flow modeling using both equation-driven and data-driven techniques. We focus here on the intersection of these two approaches, using data to complete the details of known flow dynamics. We utilize the classical approaches and tools of the modern day – theoretical analysis, data-driven methods and machine learning tools – to illuminate features responsible for the sustenance of turbulence associated with nonlinear interactions in the Navier-Stokes equations. Focusing on a spatio-temporal representation of turbulence near walls – an omnipresent phenomenon in large-scale transport and transportation – we identify and quantify key scale interactions. Methods to obtain data-driven representations of both linear and nonlinear dynamics will be discussed, along with some implications for the modeling of wall turbulence. The work has benefited from funding by the US ONR, ARO and AFOSR over a period of years, which is gratefully acknowledged.
Beverley J. McKeon is Professor of Mechanical Engineering at Stanford, effective January 1, 2023. Previously she was the Theodore von Karman Professor of Aeronautics at the Graduate Aerospace Laboratories at Caltech (GALCIT) and former Deputy Chair of the Division of Engineering & Applied Science. She received her B.A., M.A. and M.Eng. from the University of Cambridge in the United Kingdom, and an M.A. and Ph.D. in Mechanical and Aerospace Engineering from Princeton University under the supervision of Lex Smits. She completed postdoctoral research and a Royal Society Dorothy Hodgkin Fellowship at Imperial College London before starting at Caltech in 2006. Her research interests include interdisciplinary approaches to manipulation of boundary layer flows using morphing surfaces, fundamental investigations of wall turbulence and the influence of the wall at high Reynolds number, the development of resolvent analysis for modeling turbulent flows, and assimilation of experimental data for efficient low-order flow modeling. Prof. McKeon is a Fellow of the APS and the AIAA and the recipient of a Vannevar Bush Faculty Fellowship from the DoD in 2017, the Presidential Early Career Award (PECASE) in 2009 and an NSF CAREER Award in 2008 as well as Caltech’s Shair Program Diversity Award, Graduate Student Council Excellence in Mentoring Award and Northrop Grumman Prize for Excellence in Teaching. She currently serves as co-Lead Editor of Physical Review Fluids, as Physical Sciences co-captain on the National Academies Decadal Survey on Biological and Physical Sciences Research in Space 2023-32, and on the editorial board of the Annual Review of Fluid Mechanics. She is the Past Chair of the US National Committee on Theoretical and Applied Mechanics.

Bartosz Protas

Department of Mathematics and Statistics, McMaster University

Systematic Search For Singularities in 3D Navier-Stokes Flows

Friday, 31 May, 11:15-12:00 at Digital Futures Hub, Osquars Backe 5, floor 2
This investigation concerns a systematic computational search for potentially singular behavior in 3D Navier-Stokes flows. Enstrophy 𝓔(𝑡) serves as a convenient indicator of the regularity of solutions to the Navier-Stokes equations – as long as this quantity remains finite, the solutions are guaranteed to be smooth and satisfy the equations in the classical (pointwise) sense. Another well-known conditional regularity result is given by the Ladyzhenskaya-Prodi-Serrin conditions, which assert that the quantity

𝓛_{q,p} := ∫_0^T ‖𝐮(t)‖_{L^q(Ω)}^p dt,

where 2/p + 3/q ≤ 1 and q > 3, must remain bounded if the solution 𝐮(t) is smooth on the interval [0,T]. However, there are no finite a priori bounds available for these quantities and hence the regularity problem for the 3D Navier-Stokes system remains open. To quantify the maximum possible growth of 𝓔(𝑇) and 𝓛_{q,p}, we consider families of PDE optimization problems in which initial conditions are sought subject to certain constraints so that these quantities in the resulting Navier-Stokes flows are maximized. These problems are solved computationally using a large-scale adjoint-based gradient approach. By solving these problems for a broad range of parameter values we demonstrate that the maximum growth of 𝓔(𝑇) and 𝓛_{q,p} appears finite and follows well-defined power-law relations in terms of the size of the initial data. Thus, in the worst-case scenarios the two quantities remain bounded for all times and there is no evidence for singularity formation in finite time. We will also review earlier results where a similar approach allowed us to probe the sharpness of a priori bounds on the growth of enstrophy and palinstrophy in 1D Burgers and 2D Navier-Stokes flows. [Joint work with Dongfang Yun, Di Kang and Elkin Ramirez]

Carola-Bibiane Schönlieb

Department of Applied Mathematics and Theoretical Physics, University of Cambridge

Machine learned regularisation for inverse problems – the dos and don’ts

Thursday, 13 June, 10:30-11:30 at Digital Futures Hub, Osquars Backe 5, floor 2

Ozan Öktem

Inverse problems are about the reconstruction of an unknown physical quantity from indirect measurements. Most inverse problems of interest are ill-posed and require appropriate mathematical treatment for recovering meaningful solutions. Regularization is one of the main mechanisms to turn inverse problems into well-posed ones by adding prior information about the unknown quantity to the problem, often in the form of assumed regularity of solutions. Classically, such regularization approaches are handcrafted. Examples include Tikhonov regularization, the total variation and several sparsity-promoting regularizers such as the L1 norm of Wavelet coefficients of the solution. While such handcrafted approaches deliver mathematically and computationally robust solutions to inverse problems, providing a universal approach to their solution, they are also limited by our ability to model solution properties and to realise these regularization approaches computationally. Recently, a new paradigm has been introduced to the regularization of inverse problems, which derives regularization approaches for inverse problems in a data driven way. Here, regularization is not mathematically modelled in the classical sense, but modelled by highly over-parametrised models, typically deep neural networks, that are adapted to the inverse problems at hand by appropriately selected (and usually plenty of) training data. In this talk, I will review some machine learning based regularization techniques, present some work on unsupervised and deeply learned convex regularisers and their application to image reconstruction from tomographic and blurred measurements, and finish by discussing some open mathematical problems.
Belongs to: Scientific Machine Learning for Simulation and Inverse Modelling
Last changed: May 29, 2024