**Spring 2021 (Thursdays 3 pm via Zoom)**

**Date:** **Thursday February 11**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. John Holmes**

**Title: New results on weakly dispersive PDEs**

**Abstract:** We will discuss two models for water waves. The first models waves near the equator, where the Coriolis effect is important. The second is the FORQ equation, which is related to the celebrated KdV equation. In joint work with Feride Tiglay and Ryan Thompson, we study these equations in Besov spaces and show that the data-to-solution map is no better than continuous. In this talk we will motivate and introduce Besov spaces based on the Littlewood–Paley decomposition. We will also outline the proofs of nonuniform dependence and present some new estimates in Besov spaces.
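For reference, the Besov spaces in question are built from the Littlewood–Paley decomposition; the standard definition (not specific to this talk) reads, with $\Delta_j$ the frequency projections onto $|\xi| \sim 2^j$:

```latex
% Besov norm via Littlewood-Paley blocks \Delta_j (frequencies |\xi| ~ 2^j):
\|f\|_{B^{s}_{p,q}} = \Big\| \big( 2^{js} \|\Delta_j f\|_{L^p} \big)_{j \ge -1} \Big\|_{\ell^q},
\qquad \text{so that } B^{s}_{2,2} = H^{s}.
```

Taking $p = q = 2$ recovers the familiar Sobolev scale, which is why Besov spaces serve as a natural refinement for well-posedness questions.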

**Date:** **Thursday February 18**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. Jeremy Marzuola**

**Title: Some recent progress on nodal domains**

**Abstract:** We will first discuss some recent work on characterizing the number of nodal domains of an eigenfunction, initiated with Graham Cox and Chris Jones and further developed with Greg Berkolaiko and Graham Cox. The work of Berkolaiko, Cox, and Marzuola gives an especially nice means of quantifying nodal sets for separable problems. Then, time permitting, we will discuss in more detail properties of nodal sets for low-energy eigenfunctions on nearly rectangular domains, studied in joint work with Tom Beck and Yaiza Canzani.

**Date:** **Thursday February 25**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. John Gemmer**

**Title: Gamma-convergence for novices: Part 1**

**Abstract:** Gamma convergence provides an asymptotic description of the behavior of a family of minimization problems, and it has been used to provide a rigorous framework for studying singular perturbation problems that arise in homogenization theory, phase transitions, free discontinuity problems, and elasticity theory, to name a few. In 2002 Andrea Braides wrote a book entitled “Gamma-convergence for Beginners,” which requires a solid foundation in measure theory and functional analysis to read in depth; that is not my definition of a beginner. My goal in these three lectures is to introduce the topic of Gamma convergence to a broad audience at the novice level, i.e., a Wake Forest student taking MST 711. Part 1 will serve as an introduction to weak convergence and the direct method of the calculus of variations. Part 2 will focus on Gamma convergence with respect to the weak topology and contain several examples of computing Gamma limits. Finally, Part 3 will focus on a collaboration with Kaitlin Hill in which we apply Gamma-convergence techniques to understand noise-induced tipping in piecewise-smooth dynamical systems.
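For orientation, the standard definition (as in Braides' book, stated here for sequences in a metric space $X$): functionals $F_n : X \to [-\infty, +\infty]$ Gamma-converge to $F$ if

```latex
% Gamma-convergence of F_n to F on a metric space X:
\begin{aligned}
&\text{(liminf inequality)}  &\quad& F(x) \le \liminf_{n\to\infty} F_n(x_n) \quad \text{for every sequence } x_n \to x,\\
&\text{(recovery sequence)}  &\quad& \exists\, x_n \to x \ \text{ such that } \ \limsup_{n\to\infty} F_n(x_n) \le F(x).
\end{aligned}
```

Together with equicoercivity, these two conditions guarantee that minimizers of $F_n$ converge (along subsequences) to minimizers of the limit functional $F$, which is the property that makes Gamma convergence the right notion for the singular perturbation problems mentioned above.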

**Date:** **Thursday March 4**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. John Gemmer**

**Title: Gamma-convergence for novices: Part 2**

**Abstract:** Gamma convergence provides an asymptotic description of the behavior of a family of minimization problems, and it has been used to provide a rigorous framework for studying singular perturbation problems that arise in homogenization theory, phase transitions, free discontinuity problems, and elasticity theory, to name a few. In 2002 Andrea Braides wrote a book entitled “Gamma-convergence for Beginners,” which requires a solid foundation in measure theory and functional analysis to read in depth; that is not my definition of a beginner. My goal in these three lectures is to introduce the topic of Gamma convergence to a broad audience at the novice level, i.e., a Wake Forest student taking MST 711. Part 1 will serve as an introduction to weak convergence and the direct method of the calculus of variations. Part 2 will focus on Gamma convergence with respect to the weak topology and contain several examples of computing Gamma limits. Finally, Part 3 will focus on a collaboration with Kaitlin Hill in which we apply Gamma-convergence techniques to understand noise-induced tipping in piecewise-smooth dynamical systems.

**Date:** **Thursday March 11**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. John Gemmer**

**Title: Gamma-convergence for novices: Part 3**

**Abstract:** Gamma convergence provides an asymptotic description of the behavior of a family of minimization problems, and it has been used to provide a rigorous framework for studying singular perturbation problems that arise in homogenization theory, phase transitions, free discontinuity problems, and elasticity theory, to name a few. In 2002 Andrea Braides wrote a book entitled “Gamma-convergence for Beginners,” which requires a solid foundation in measure theory and functional analysis to read in depth; that is not my definition of a beginner. My goal in these three lectures is to introduce the topic of Gamma convergence to a broad audience at the novice level, i.e., a Wake Forest student taking MST 711. Part 1 will serve as an introduction to weak convergence and the direct method of the calculus of variations. Part 2 will focus on Gamma convergence with respect to the weak topology and contain several examples of computing Gamma limits. Finally, Part 3 will focus on a collaboration with Kaitlin Hill in which we apply Gamma-convergence techniques to understand noise-induced tipping in piecewise-smooth dynamical systems.

**Date:** **Thursday March 18**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. Irfan Glogic**

**Title: Self-similar blowup for nonlinear wave equations**

**Abstract:** One of the remarkable features of time-dependent nonlinear partial differential equations is the possibility of spontaneous onset of singularities (also called blowup). Namely, a smooth and localized initial profile evolves after a finite amount of time into a singular form, where either the profile itself or some derivative becomes infinite (hence the name blowup). This phenomenon has both physical and mathematical significance, and determining whether a given nonlinear model admits blowup is one of the central questions of the modern analysis of PDEs. In this talk we concentrate on nonlinear wave equations and we discuss a type of blowup that appears to be generic, namely the self-similar one. We furthermore outline a general framework for studying stability of self-similar blowup, we discuss the new mathematical problems this approach generates, and we mention some results we obtained in recent years.
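As a concrete illustration (a standard model problem, not necessarily the equation treated in the talk), for the focusing power nonlinearity $\partial_t^2 u - \Delta u = |u|^{p-1}u$ the self-similar ansatz takes the form

```latex
% Self-similar ansatz for \partial_t^2 u - \Delta u = |u|^{p-1} u:
u(t,x) = (T-t)^{-\frac{2}{p-1}}\, U\!\left(\frac{x}{T-t}\right),
\qquad
U \equiv c_p = \left(\frac{2(p+1)}{(p-1)^2}\right)^{\frac{1}{p-1}}
\ \text{(the ODE blowup profile).}
```

The constant profile $c_p$ comes from the space-independent ODE $u'' = u^p$ and gives an explicit example of finite-time blowup; the stability analysis alluded to in the abstract is typically carried out in similarity variables adapted to this ansatz.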

**Date:** **Thursday March 25**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. Marco Lopez**

**Title: Invariant measures of Markov transformations and their entropy**

**Abstract:** In this talk we will review metric and topological entropies for various classes of discrete dynamical systems, with special emphasis on Markov transformations.
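Two standard formulas frame the topic (general facts, not specific claims of the talk): for a subshift of finite type with 0–1 transition matrix $A$, and for a Markov measure with stochastic matrix $P$ and stationary vector $p$,

```latex
% Topological entropy of a subshift of finite type; entropy of a Markov measure:
h_{\mathrm{top}} = \log \lambda_{\max}(A),
\qquad
h_\mu = -\sum_{i,j} p_i \, P_{ij} \log P_{ij}.
```

The first is the log of the spectral radius of $A$; the variational principle relates the two, with $h_\mu \le h_{\mathrm{top}}$ over all invariant measures $\mu$.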

**Date:** **Thursday April 1**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. Nsoki Mavinga**

**Title: Delay Differential Equations with Applications to the Analysis of the Spread of Vector-Borne Diseases**

**Abstract:** Many problems in applied sciences give rise to delay differential equations. These are differential equations in which the current rate of change of the system depends not only on the current state but also on the history of the system; that is, the system has memory. In this talk, we will discuss the stability of equilibrium solutions for a two-lag delay differential equation which models the spread of infectious diseases, namely vector-borne diseases in which the lags are the incubation periods in humans and vectors. We show that there are values of the transmission and recovery rates for which either the disease dies out or it spreads into an endemic state. The approach is based on the linearization method and the analysis of the roots of transcendental equations.
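The delay-induced instability mechanism can be seen already in a toy one-lag example (this is a minimal sketch, not the two-lag disease model of the talk): for $x'(t) = -a\,x(t-\tau)$ the characteristic equation $\lambda = -a e^{-\lambda\tau}$ is transcendental, and the zero solution is stable exactly when $a\tau < \pi/2$.

```python
# Toy one-lag delay equation x'(t) = -a*x(t - tau), integrated by
# forward Euler with a history buffer.  Characteristic equation:
# lambda = -a*exp(-lambda*tau); stable iff a*tau < pi/2.

def simulate(a, tau, t_end=60.0, dt=0.01, x0=1.0):
    d = int(round(tau / dt))            # delay expressed in time steps
    n = int(round(t_end / dt))
    x = [x0] * (d + 1)                  # constant history x(t) = x0 for t <= 0
    for _ in range(n):
        x.append(x[-1] - dt * a * x[-1 - d])   # x[-1-d] is x(t - tau)
    return x

stable = simulate(a=1.0, tau=0.5)       # a*tau = 0.5 < pi/2: decays to 0
unstable = simulate(a=1.0, tau=2.0)     # a*tau = 2.0 > pi/2: growing oscillation

print(abs(stable[-1]))                  # near 0
print(max(abs(v) for v in unstable[-1000:]))  # large: instability has set in
```

The same linearize-and-study-the-transcendental-roots strategy, applied to a two-lag system, underlies the stability analysis described in the abstract.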

**Date:** **Thursday April 8**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. Mostafa Rezapour**

**Title: A new approach in derivative-free optimization (Part 1)**

**Abstract:** Optimization is one of the most important components of machine learning and deep learning. However, in modern computational science, machine learning is not simply a consumer of optimization; the interaction between optimization and machine learning is one of the most important recent developments in computational science. In the first part of the presentation, we review the classic optimization algorithms for solving unconstrained optimization problems. Moreover, we discuss how to apply deep neural networks to solve ODEs and PDEs. Finally, in the second part of the presentation, we discuss how to use deep neural networks and their well-known capability as universal function approximators in derivative-free unconstrained optimization problems, where the objective function is possibly nonsmooth. This approach may improve practical performance in cases where the objective function is extremely noisy or stochastic.
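One classic derivative-free method of the kind reviewed in Part 1 is compass (coordinate) search; a minimal sketch follows (the neural-surrogate approach of the talk is not reproduced here, and the test problem is illustrative):

```python
# Compass search: poll the 2n axis directions with step h; move on the
# first improvement, otherwise halve h.  No gradients are evaluated.

def compass_search(f, x0, h=1.0, h_min=1e-8, max_iter=10000):
    x, fx = list(x0), f(x0)
    n = len(x0)
    for _ in range(max_iter):
        if h < h_min:
            break
        improved = False
        for i in range(n):
            for s in (+h, -h):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            h *= 0.5                  # no poll direction helped: shrink stencil
    return x, fx

# Smooth test problem with minimum at (1, -2).
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
x_best, f_best = compass_search(f, [0.3, 0.7])
print(x_best, f_best)                 # near (1, -2), objective near 0
```

Because only function values are compared, the same loop runs unchanged on nonsmooth or noisy objectives, which is exactly the regime where the abstract's surrogate-based approach aims to improve on such classic polling schemes.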

**Date:** **Thursday April 15**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. Mostafa Rezapour**

**Title: A new approach in derivative-free optimization (Part 2)**

**Abstract:** Optimization is one of the most important components of machine learning and deep learning. However, in modern computational science, machine learning is not simply a consumer of optimization; the interaction between optimization and machine learning is one of the most important recent developments in computational science. In the first part of the presentation, we review the classic optimization algorithms for solving unconstrained optimization problems. Moreover, we discuss how to apply deep neural networks to solve ODEs and PDEs. Finally, in the second part of the presentation, we discuss how to use deep neural networks and their well-known capability as universal function approximators in derivative-free unconstrained optimization problems, where the objective function is possibly nonsmooth. This approach may improve practical performance in cases where the objective function is extremely noisy or stochastic.

**Date:** **Thursday April 22**

**Where: https://wakeforest-university.zoom.us/j/92169085500?pwd=WkdQZkt2ckVvcWVwSjhXUEFDVlIzZz09**

**Speaker:** **Dr. Kate Meyer**

**Title: Dynamics of flow-kick disturbance models**

**Abstract:** To incorporate repeated disturbances into a differential equation (DE) model of ecological processes, one might embed the disturbance continuously in the DE or resolve the disturbance discretely. For example, do harvests from a logistic population appear continuously as x’ = x(1 − x) − h(x) or do individual harvests periodically kick the state x as it flows according to x’ = x(1 − x)? Are fires always smoldering in a model savannah, or do they burn at discrete timepoints? In this talk we’ll explore the flow-kick approach to modeling repeated, discrete disturbances and examine the dynamic implications of this modeling choice. We’ll position continuous disturbances as limits of repeated, discrete ones, and discuss how flow-kick systems both mimic and depart from their continuous analogs.
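The logistic-harvest comparison in the abstract can be sketched numerically; a minimal Euler simulation (parameter values are illustrative, not from the talk):

```python
# Continuous harvest x' = x(1-x) - h versus flow-kick harvest:
# flow x' = x(1-x) for time tau, then kick x -> x - h*tau, so the same
# total amount is removed per unit time in both models.

def euler_logistic(x, t, dt=1e-3, h=0.0):
    """Integrate x' = x(1-x) - h by forward Euler for time t."""
    for _ in range(int(round(t / dt))):
        x += dt * (x * (1.0 - x) - h)
    return x

h, tau = 0.1875, 0.1
x_star = 0.5 * (1.0 + (1.0 - 4.0 * h) ** 0.5)   # continuous equilibrium: 0.75

# Continuous-disturbance model: settles at x_star.
x_cont = euler_logistic(0.9, 30.0, h=h)

# Flow-kick model: 300 cycles of (flow for tau, then discrete harvest kick).
x = 0.9
for _ in range(300):
    x = euler_logistic(x, tau)        # flow under the undisturbed logistic
    x -= h * tau                      # periodic harvest kick

print(x_cont, x, x_star)              # both hover near 0.75 for small tau
```

With a small kick period the flow-kick orbit stays within roughly one kick size of the continuous equilibrium, illustrating the limiting relationship described in the abstract; larger, rarer kicks are where the two models can diverge qualitatively.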