
AISA Colloquium

In the AISA Colloquium, international and local specialists present the latest advances in AI+SE+X.

AISA Colloquium Schedule

Upcoming Events - Overview

  • Nov 28, 2023, 17:30-18:30 | Prof. Ludovic Noels, Université de Liège, Belgium | UN34, R150
  • Dec 14, 2023, 17:30-18:30 (postponed to 2024) | Prof. Andreas Metzger, Universität Duisburg-Essen | UN34, R150
  • TBA, 2024, 17:30-18:30 | Prof. Heng Xiao, University of Stuttgart | UN32, R101

(list of previous invitees: LINK)

Prof. Ludovic Noels, Université de Liège

Data-driven-based History-Dependent Surrogate Models in the context of stochastic multi-scale simulations for elasto-plastic composites

When developing stochastic models or performing uncertainty quantification in the context of multi-scale models, running direct numerical simulations at the different scales is out of reach because of the overwhelming computational cost. Surrogate models of the micro-scale boundary value problems (BVPs), typically Stochastic Volume Elements (SVEs), are therefore developed and can be constructed or trained using off-line simulations. In such a data-driven approach, different kinds of surrogate models exist, including for non-linear behaviours, but difficulties arise when irreversible or history-dependent responses have to be accounted for, as in the case of elasto-plastic composites. In this talk we investigate three kinds of surrogate models that can handle elasto-plasticity.

Once trained using a synthetic database, neural networks (NNWs) can substitute for the micro-scale BVP resolution while reducing the computation time by more than 5 orders of magnitude. In the context of reversible behaviours or proportional loading, feed-forward NNWs can predict a homogenised response, possibly for different parametrised micro-structures. In order to introduce the history dependency, recurrent neural networks (RNNs) were shown to be efficient and accurate in approximating the history-dependent homogenised stress-strain relationships. The limitations of NNWs are mainly two-fold: on the one hand, they are unable to extrapolate responses (they can only interpolate), and on the other hand, they require a large synthetic database to be trained. A physics-informed alternative is the deep material network (DMN) approach, which consists of a network of mechanistic building blocks. During the training process, the DMN “learns” the weight ratio and interactions of the building blocks. Once trained, the DMN is able to predict nonlinear responses, including for unseen material responses and loading conditions, in a thermodynamically consistent way, although it is less computationally efficient than the NNWs in the online stage.
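
To make the RNN idea concrete, the following minimal PyTorch sketch maps a strain history to the homogenised stress history; the GRU architecture, layer sizes, and placeholder data are illustrative assumptions, not the models discussed in the talk.

    # Minimal sketch of an RNN surrogate for a history-dependent
    # homogenised stress-strain relationship (illustrative only).
    import torch
    import torch.nn as nn

    class StressRNN(nn.Module):
        """Maps a strain history (batch, steps, 6 Voigt components)
        to the homogenised stress history (batch, steps, 6)."""
        def __init__(self, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(input_size=6, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 6)

        def forward(self, strain_history):
            h, _ = self.rnn(strain_history)  # hidden state carries the loading history
            return self.head(h)              # stress at every time step

    # Training against a synthetic database of off-line SVE simulations
    # (random tensors stand in for the actual database):
    model = StressRNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    strains = torch.randn(32, 100, 6)        # placeholder strain paths
    stresses = torch.randn(32, 100, 6)       # placeholder homogenised stresses
    for _ in range(10):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(strains), stresses)
        loss.backward()
        opt.step()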

A last approach is to identify the parameters of a semi-analytical mean-field homogenisation (MFH) model from the resolutions of different micro-scale BVPs or SVEs: a set of MFH parameters is associated with each SVE. Since the surrogate is purely micro-mechanistic, it can handle damage-enhanced elasto-plasticity, including strain-softening, by considering objective quantities such as the critical energy release rate.
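
A rough illustration of such per-SVE identification (the actual MFH model is far richer): the SciPy sketch below fits a placeholder power-law hardening curve to one SVE's homogenised response; the hardening law, bounds, and data are assumptions.

    # Illustrative parameter identification for a semi-analytical surrogate;
    # a simple power-law hardening curve stands in for the actual MFH model.
    import numpy as np
    from scipy.optimize import least_squares

    def hardening(params, eps_p):
        sy, h, m = params                    # yield stress, modulus, exponent
        return sy + h * eps_p**m

    def fit_sve(eps_p, stress_sve):
        """Associate one parameter set with one SVE response."""
        res = least_squares(
            lambda p: hardening(p, eps_p) - stress_sve,
            x0=[100.0, 500.0, 0.5],          # initial guess (placeholder values)
            bounds=([0.0, 0.0, 0.01], [1e4, 1e4, 2.0]),
        )
        return res.x

    # One parameter set per SVE yields a stochastic, micro-mechanistic surrogate:
    eps_p = np.linspace(0.0, 0.05, 50)
    stress_sve = 120 + 450 * eps_p**0.4 + np.random.normal(0, 1.0, eps_p.size)
    print(fit_sve(eps_p, stress_sve))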

The different surrogates are applied in two different contexts: on the one hand, the Bayesian inference of multi-scale model parameters, and on the other hand, the stochastic multi-scale simulation of composite coupons.
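
For the first context, a toy Metropolis sampler conveys the flavour of Bayesian parameter inference; the single-parameter likelihood, prior, and synthetic observations are assumptions for illustration, not the talk's actual setup.

    # Toy Metropolis sampler for the posterior of one surrogate parameter
    # given noisy homogenised observations (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    observed = 120.0 + rng.normal(0.0, 2.0, size=20)  # synthetic observations

    def log_post(sy):
        if not 0.0 < sy < 1000.0:                     # flat prior on (0, 1000)
            return -np.inf
        return -0.5 * np.sum((observed - sy) ** 2) / 2.0**2

    samples, sy = [], 100.0
    for _ in range(5000):
        prop = sy + rng.normal(0.0, 1.0)              # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(sy):
            sy = prop                                 # accept
        samples.append(sy)
    print("posterior mean:", np.mean(samples[1000:]))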

Access information

November 28, 2023, 17:30-18:30, UN34, R150.

About the speaker

(webpage)

Prof. Andreas Metzger, Universität Duisburg-Essen

Explainable Reinforcement Learning for Self-Adaptive Software Systems

A self-adaptive software system can automatically maintain its quality requirements in the presence of dynamic environment changes. Developing a self-adaptive software system may be difficult due to design-time uncertainty; e.g., anticipating all potential environment changes at design time is in most cases infeasible. To address design-time uncertainty, self-adaptive software systems increasingly utilize Deep Reinforcement Learning (Deep RL). Deep RL learns from actual data at runtime, can generalize well over unseen inputs, and natively handles concept drift. However, the learned knowledge is hidden in the parametrization of a deep neural network and thus essentially appears as a black box. This severely limits the explainability of Deep RL, which, however, is essential to (1) enable software engineers to perform debugging, (2) support system providers in complying with the relevant legal frameworks, and (3) help system users build trust.
This talk presents our recent research on explainable reinforcement learning (XRL) for self-adaptive systems. We first introduce how Deep RL can be leveraged to build self-adaptive systems and motivate the problem of explainability in this context. We then present a basic XRL technique, which delivers insights into the decision-making of Deep RL. Based on this basic technique, we introduce and discuss two alternative methods for presenting explanations to humans: graphical user interfaces vs. natural language. To demonstrate the effectiveness and usefulness of these different forms of explanation, we present the results of an empirical user study involving 73 participants from academia and industry. We conclude the talk with an outlook on future research opportunities.
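
To give a feel for the setup, the sketch below has a small policy network choose an adaptation action from monitored features, with input-gradient saliency as a crude stand-in for an explanation; the features, action set, and network are hypothetical and do not reflect the XRL technique presented in the talk.

    # Minimal sketch: a policy network selects an adaptation action from
    # monitored features; input-gradient saliency acts as a simple stand-in
    # for an explanation of the decision (illustrative assumptions only).
    import torch
    import torch.nn as nn

    features = ["workload", "latency", "error_rate"]  # hypothetical monitoring data
    actions = ["scale_out", "scale_in", "no_op"]      # hypothetical adaptations

    policy = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3))

    state = torch.tensor([0.9, 0.7, 0.1], requires_grad=True)
    q_values = policy(state)
    action = q_values.argmax().item()
    print("adaptation:", actions[action])

    # "Explanation": which monitored feature most influenced the decision?
    q_values[action].backward()
    for name, s in zip(features, state.grad.abs()):
        print(f"{name}: {s:.3f}")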

Access information

December 14, 2023, 17:30-18:30 (postponed to 2024), TBA.

About the speaker

Prof. Dr. Andreas Metzger is an adjunct professor of software engineering at the University of Duisburg-Essen and heads the “adaptive systems” group at paluno, the Ruhr Institute for Software Technology. His current research interests include the use of (explainable) machine learning in software engineering and business process management. He received his Diploma and his Ph.D. in computer science from the Technical University of Kaiserslautern in 1998 and 2004, respectively.

Andreas is the steering committee vice chair of the European Technology Platform NESSI (The Networked European Software and Services Initiative) and was deputy general secretary of BDVA (The Big Data Value Association) from 2015 to 2021. Among other leadership roles in EU projects, he was the technical coordinator of the Big Data Value PPP lighthouse project TransformingTransport, and chief architect of the Future Internet projects FInest and FIspace.

(webpage)

Prof. Heng Xiao, University of Stuttgart

Learning Complex Mapping in Computational Physics: From Neural Networks to Neural Operators

In computational physics, researchers often need to develop complex constitutive relations, reduced-order models, and surrogate models. A common challenge in such models is representing complex mappings between non-local, unstructured data. Neural networks have emerged as a versatile tool in data-driven computational physics, primarily due to their flexibility in representing and learning these mappings. In this talk, we will show the limitations intrinsic to neural networks in this context and highlight the benefits of neural operators. We demonstrate the superiority of neural operators with two examples from fluid dynamics: (1) the Reynolds stress transport model for turbulent flows, and (2) the generation of initial conditions essential for accelerating Computational Fluid Dynamics (CFD) simulations.
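
To hint at the distinction, the sketch below implements a minimal 1-D spectral layer in the spirit of Fourier neural operators: unlike a plain feed-forward network, the same learned weights apply to inputs sampled at any grid resolution. The layer, mode count, and data are illustrative assumptions, not the speaker's models.

    # Minimal 1-D Fourier layer: learned weights act on the lowest Fourier
    # modes, so one set of weights handles any grid resolution.
    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        def __init__(self, channels=1, modes=16):
            super().__init__()
            self.modes = modes
            self.w = nn.Parameter(
                torch.randn(channels, channels, modes, dtype=torch.cfloat) * 0.02
            )

        def forward(self, u):                  # u: (batch, channels, grid)
            u_hat = torch.fft.rfft(u)
            out = torch.zeros_like(u_hat)
            k = min(self.modes, u_hat.shape[-1])
            out[..., :k] = torch.einsum("bix,iox->box", u_hat[..., :k], self.w[..., :k])
            return torch.fft.irfft(out, n=u.shape[-1])

    layer = SpectralConv1d()
    coarse, fine = torch.randn(4, 1, 64), torch.randn(4, 1, 256)
    print(layer(coarse).shape, layer(fine).shape)  # same operator, two resolutions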

Access information

TBA, 2024, 17:30-18:30, UN32, R101.

About the speaker

Prof. Heng Xiao holds a bachelor's degree from Zhejiang University, China, a master's degree from the Royal Institute of Technology (KTH), Sweden, and a Ph.D. degree from Princeton University, USA. From 2009 to 2012, he worked as a Postdoctoral Researcher at ETH Zurich, Switzerland. He joined Virginia Tech, USA, in 2013 as an Assistant Professor and was promoted to Associate Professor with tenure in 2020. In 2022, he moved to the University of Stuttgart to take up a professorship in Data-Driven Fluid Dynamics. His research focuses on turbulence modeling using data-driven methods, including data assimilation, machine learning, and uncertainty quantification.

(webpage)

Former invitees

Summer 2022
  • Prof. Emad Shihab, Concordia University, Canada
  • Prof. Frank Hutter, Albert-Ludwigs-Universität Freiburg
Winter 2022/2023
  • Prof. Mathias Niepert, University of Stuttgart
  • Prof. Steffen Freitag, KIT
Summer 2023
  • Prof. Ute Schmid, University of Bamberg
  • Dr. Benjamin Paaßen, Humboldt-Universität zu Berlin

Contact for AISA Research

Prof. Dr. Steffen Staab

Artificial Intelligence and Machine Learning | Spokesperson EXC 2075
