AISA Colloquium Schedule
Upcoming Events - Overview
Nov 28, 2023, 17:30-18:30 | Prof. Ludovic Noels | Université de Liège, Belgium | UN34, R150 |
Dec 14, 2023, 17:30-18:30 | Prof. Andreas Metzger | Universität Duisburg-Essen | UN34, R150 |
Jan 25, 2024, 17:30-18:30 | Prof. Steffen Staab | University of Stuttgart | UN32, R101 |
(list of previous invitees: LINK)
Prof. Ludovic Noels, Université de Liège
Data-driven-based History-Dependent Surrogate Models in the context of stochastic multi-scale simulations for elasto-plastic composites
When developing stochastic models or performing uncertainty quantification in the context of multi-scale models, running direct numerical simulations at the different scales is intractable because of the overwhelming computational cost. Surrogate models of the micro-scale boundary value problems (BVPs), typically Stochastic Volume Elements (SVEs), are therefore developed and can be constructed or trained using off-line simulations. In such a data-driven approach, different kinds of surrogate models exist, including for non-linear behaviours, but difficulties arise when irreversible or history-dependent responses have to be accounted for, as in the case of elasto-plastic composites. In this talk we investigate three kinds of surrogate models that can handle elasto-plasticity.
Once trained on a synthetic database, neural networks (NNWs) can substitute for the micro-scale BVP resolution while reducing the computation time by more than five orders of magnitude. In the context of reversible behaviours or proportional loading, feed-forward NNWs can predict a homogenised response, possibly for different parametrised micro-structures. To introduce history dependency, recurrent neural networks (RNNs) were shown to be efficient and accurate in approximating history-dependent homogenised stress-strain relationships. The limitations of NNWs are two-fold: on the one hand, they are unable to extrapolate responses (they can only interpolate); on the other hand, they require a large synthetic database for training. A physics-informed alternative is the deep material network (DMN) approach, which consists of a network of mechanistic building blocks. During the training process, the DMN “learns” the weight ratios and interactions of the building blocks. Once trained, the DMN is able to predict non-linear responses, including for unseen material responses and loading conditions, in a thermodynamically consistent way, although it is less computationally efficient than NNWs in its online stage.
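As a concrete illustration of why history dependence calls for a recurrent surrogate, here is a minimal, hypothetical Python sketch with untrained random weights (it is not the actual model from the talk): the hidden state carries the loading history, so two strain paths that end at the same strain can produce different stresses, just as elasto-plastic unloading does.

```python
import numpy as np

# Toy sketch of a recurrent surrogate for a history-dependent
# stress-strain law. All sizes, weights and strain magnitudes are
# hypothetical and untrained; the point is only the mechanism: the
# hidden state h accumulates the loading history, which a
# feed-forward map of the current strain alone cannot represent.

rng = np.random.default_rng(0)
H = 8                                        # hidden-state size
W_h = rng.normal(scale=0.3, size=(H, H))     # recurrent weights
W_x = rng.normal(scale=1.0, size=(H, 1))     # input (strain) weights
w_out = rng.normal(scale=0.3, size=(1, H))   # readout to stress

def surrogate_stress(strain_path):
    """Roll the RNN over a 1-D strain history; one stress per step."""
    h = np.zeros((H, 1))
    stresses = []
    for eps in strain_path:
        h = np.tanh(W_h @ h + W_x * eps)     # history-dependent update
        stresses.append((w_out @ h).item())
    return np.array(stresses)

# Two paths ending at the same strain: monotonic loading vs. loading
# beyond it and unloading back. The predicted final stresses differ,
# i.e. the response depends on the path taken, not only the end state.
path_a = np.linspace(0.0, 0.5, 50)
path_b = np.concatenate([np.linspace(0.0, 1.0, 25),
                         np.linspace(1.0, 0.5, 25)])
sa, sb = surrogate_stress(path_a), surrogate_stress(path_b)
assert abs(sa[-1] - sb[-1]) > 1e-6           # path dependence
```

In a trained surrogate, the same rolled-out structure replaces a full micro-scale BVP solve per macroscopic integration point, which is where the multi-order-of-magnitude speed-up comes from.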
A last approach is to identify the parameters of a semi-analytical mean-field homogenization (MFH) model from the resolution of different micro-scale BVPs or SVEs: a set of MFH parameters is associated with each SVE. Since the surrogate is purely micro-mechanistic, it can handle damage-enhanced elasto-plasticity, including strain softening, by considering objective quantities such as the critical energy release rate.
The different surrogates are applied in two contexts: on the one hand, the Bayesian inference of multi-scale model parameters; on the other hand, the stochastic multi-scale simulation of composite coupons.
Access information
November 28, 2023, 17:30-18:30, UN34, R150.
Prof. Andreas Metzger, Universität Duisburg-Essen
Explainable Reinforcement Learning for Self-Adaptive Software Systems
A self-adaptive software system can automatically maintain its quality requirements in the presence of dynamic environment changes. Developing a self-adaptive software system may be difficult due to design-time uncertainty; e.g., anticipating all potential environment changes at design time is in most cases infeasible. To address design-time uncertainty, self-adaptive software systems increasingly utilize Deep Reinforcement Learning (Deep RL). Deep RL learns from actual data at runtime, can generalize well over unseen inputs, and natively handles concept drift. However, the learned knowledge is hidden in the parametrization of a deep neural network and thus essentially appears as a black box. This severely limits the explainability of Deep RL, which, however, is essential to (1) enable software engineers to perform debugging, (2) support system providers in complying with the relevant legal frameworks, and (3) help system users build trust.
This talk presents our recent research on explainable reinforcement learning (XRL) for self-adaptive systems. We first introduce how Deep RL can be leveraged to build self-adaptive systems and motivate the problem of explainability in this context. We then present a basic XRL technique, which delivers insights into the decision-making of Deep RL. Based on this basic technique, we introduce and discuss two alternative methods for presenting explanations to humans: graphical user interfaces vs. natural language. To demonstrate the effectiveness and usefulness of these different forms of explanation, we present the results of an empirical user study, involving 73 participants from academia and industry. We conclude the talk with an outlook on future research opportunities.
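To see what kind of insight is at stake, consider a deliberately simple, hypothetical sketch (tabular Q-learning for a toy scaling decision; not the XRL technique presented in the talk). In the tabular case the learned action values can be read off directly, which is precisely the transparency that is lost once the policy is encoded in the weights of a deep network:

```python
import random

# Hypothetical toy example: a self-adaptive service learns whether to
# add or remove a replica depending on load. With a Q-table, "why was
# this adaptation chosen?" is answered by inspecting the learned
# values; Deep RL buries the same knowledge in network weights.

random.seed(1)
STATES = ["low_load", "high_load"]
ACTIONS = ["add_replica", "remove_replica"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Invented quality model: scale up under high load, down under low.
    good = ((state == "high_load" and action == "add_replica")
            or (state == "low_load" and action == "remove_replica"))
    return 1.0 if good else -1.0

alpha = 0.1
for _ in range(2000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)        # pure exploration, for brevity
    # Contextual-bandit style update (no next-state term in this toy)
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

def explain(state):
    """A rudimentary 'explanation': rank actions by learned value."""
    ranked = sorted(ACTIONS, key=lambda act: Q[(state, act)], reverse=True)
    return ranked[0], {act: round(Q[(state, act)], 2) for act in ACTIONS}

best, values = explain("high_load")
assert best == "add_replica"
```

The design choice of a value table is what makes the decision inspectable here; XRL methods aim to recover comparable insight from policies whose knowledge is distributed over millions of network parameters.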
Access information
December 14, 2023, 17:30-18:30, UN34, R150.
About the speaker
Prof. Dr. Andreas Metzger is an adjunct professor of software engineering at the University of Duisburg-Essen and heads the “adaptive systems” group at paluno, the Ruhr Institute for Software Technology. His current research interests include the use of (explainable) machine learning in software engineering and business process management. He received his Diploma and his Ph.D. in computer science from the Technical University of Kaiserslautern in 1998 and 2004, respectively.
Andreas is the steering committee vice chair of the European Technology Platform NESSI (The Networked European Software and Services Initiative) and was deputy general secretary of BDVA (The Big Data Value Association) from 2015 to 2021. Among other leadership roles in EU projects, he was the technical coordinator of the Big Data Value PPP lighthouse project TransformingTransport, and chief architect of the Future Internet projects FInest and FIspace.
Prof. Steffen Staab, University of Stuttgart
TBA
TBA
Access information
January 25, 2024, 17:30-18:30, UN32, R101.
About the speaker
Former invitees
Summer 2022
- Prof. Emad Shihab, Concordia University, Canada
- Prof. Frank Hutter, Albert-Ludwigs-Universität Freiburg
Winter 2022/2023
- Prof. Mathias Niepert, University of Stuttgart
- Prof. Steffen Freitag, KIT
Summer 2023
- Prof. Ute Schmid, University of Bamberg
- Dr. Benjamin Paaßen, Humboldt-Universität zu Berlin
Contact for AISA Research

Prof. Dr. Steffen Staab
Artificial Intelligence and Machine Learning | Spokesperson EXC 2075