
A Mathematical Study of Learning Dynamics

Data-driven discovery of dynamics, in which unknown governing equations are learned from observed data, is witnessing a resurgence of interest as data and computational tools become widespread and increasingly accessible. Advances in machine learning, data science, and neural networks are fueling new data-driven studies and rapidly reshaping the landscape of almost every field. Meanwhile, classical numerical analysis remains a steady tool for analyzing these new problems.

This thesis situates emerging work coupling machine learning, neural networks, and data-driven discovery of dynamics within classical numerical theory. We begin by formulating a universal learning framework grounded in optimization theory. We discuss how the three paradigms of machine learning -- supervised, unsupervised, and reinforcement learning -- are encapsulated by this framework and together form a general learning problem for the discovery of dynamics.
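
As an illustrative sketch of such a formulation (the notation here is assumed for exposition and is not drawn from the thesis itself), each paradigm can be phrased as minimizing an expected loss over a parameterized hypothesis class:

```latex
% Schematic learning problem (illustrative notation, not the thesis's own):
% choose parameters theta to minimize an expected loss over data z ~ D.
\min_{\theta \in \Theta} \; \mathbb{E}_{z \sim \mathcal{D}}\left[ \ell\big(h_\theta, z\big) \right]
```

Under this reading, supervised learning takes z = (x, y) with the loss measuring prediction error, unsupervised learning scores the data alone, and reinforcement learning takes the negative expected return as the loss.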

Using this formulation, we distill data-driven discovery of dynamics via the classical technique of linear multistep methods with neural networks down to its most basic numerical-analysis roots. We establish, for the first time, a rigorous mathematical theory for using linear multistep methods in discovery of dynamics under the assumption of exact data. We present refined notions of consistency, stability, and convergence for discovery and prove convergence results for the popular Adams-Bashforth, Adams-Moulton, and backward differentiation formula schemes. Extending the study to noisy data, we propose and analyze the recovery of a smooth approximation to the state using splines and prove new results on discrete differentiation error estimates.
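
To make the exact-data setting concrete, here is a minimal numpy sketch of the basic idea (illustrative only, not the thesis's code): fit the two-step Adams-Bashforth relation to trajectory data, with a linear-in-parameters model standing in for the neural network and synthetic data from x' = lam * x standing in for the observations.

```python
# Minimal sketch of LMM-based discovery (illustrative assumptions throughout):
# recover f in x' = f(x) from exact trajectory data by fitting the
# two-step Adams-Bashforth relation in a least-squares sense.
import numpy as np

lam = -1.0                     # ground-truth dynamics: x' = lam * x
h = 0.01                       # step size
t = np.arange(0.0, 2.0, h)
x = np.exp(lam * t)            # exact data on the grid

# AB2: x_{n+2} = x_{n+1} + h*(3/2 f(x_{n+1}) - 1/2 f(x_n)).
# With the ansatz f(x) = theta * x, the relation is linear in theta.
a = h * (1.5 * x[1:-1] - 0.5 * x[:-2])   # coefficient multiplying theta
b = x[2:] - x[1:-1]                      # known differences of the state
theta = np.dot(a, b) / np.dot(a, a)      # one-dimensional least squares

print(f"recovered theta = {theta:.6f} (true value {lam})")
```

In the thesis's setting, f is parameterized by a neural network rather than a single coefficient, and the refined notions of consistency and stability govern whether such fits converge to the true dynamics as the step size shrinks.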
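
For the noisy-data setting, a short scipy sketch of the smoothing step (again illustrative: scipy's UnivariateSpline, the spline degree, and the smoothing parameter are stand-ins for the thesis's own spline construction): fit a smoothing spline to the noisy state and differentiate the fit rather than the raw samples.

```python
# Minimal sketch of spline smoothing before differentiation (illustrative):
# fit a smoothing spline to noisy samples of x(t), then evaluate its
# derivative as an approximation to x'(t).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 201)
x_clean = np.exp(-t)
x_noisy = x_clean + 0.01 * rng.standard_normal(t.size)

# Cubic smoothing spline; s sets the fidelity/smoothness trade-off and
# would be tuned to the noise level in practice.
spl = UnivariateSpline(t, x_noisy, k=3, s=t.size * 0.01**2)
dx = spl.derivative()(t)                 # smoothed estimate of x'(t)

err = np.max(np.abs(dx - (-np.exp(-t))))
print(f"max derivative error vs exact x'(t): {err:.4f}")
```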

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/d8-5b8z-yd84
Date: January 2021
Creators: Keller, Rachael Tara
Source Sets: Columbia University
Language: English
Type: Theses
