Monday, January 29, 2024, 01:00pm - 02:00pm
Neural operators are a class of data-driven deep learning architectures that learn maps between function spaces. Typically, such an operator learns to solve a family of instances of a particular PDE problem by approximating the operator that maps the known function(s) in the equation to the unknown solution. One of the challenges I would like to address in this talk is how to obtain data to train such models. Classical numerical methods (such as finite elements/differences) have been used to solve instances of PDEs for training and testing these models. However, if we wish to outperform classical solvers, can we really still depend on them to generate training data? This talk is based on a recent preprint with Prof. Rachel Ward (see https://arxiv.org/pdf/2401.02398.pdf).
Location: PMA 9.166