Exploratory Multidisciplinary AI Research

EAISI EMDAIR Program

EAISI stimulates exploratory multidisciplinary AI research. One of the actions we undertake to reach this goal is to organize internal TU/e calls for research proposals, in which teams of TU/e AI researchers can apply for funding of PhD projects. The EAISI Exploratory Multidisciplinary AI Research Program (EMDAIR) is one of the building blocks of the execution of the TU/e Artificial Intelligence Scientific Roadmap.

The program focuses on funding high-impact/high-risk projects, i.e., projects that deliver high value if they succeed, for which we accept a correspondingly higher risk that they will not succeed. Such projects tend to be harder to fund through the other available channels (national, European, industrial).

In the first EMDAIR call (2021), five research projects were granted, with a total of 11 PhD positions. In the second EMDAIR call (2022), 8 PhD positions were granted.
The third call (2023) has been announced; submissions are possible as of January 8th, 2024. If you want to know more about the EAISI EMDAIR program, check out the intranet page or send an email to Wim Nuijten, Scientific Director of EAISI, via eaisi@tue.nl.

AICrowd

Project description

Whenever our safety and comfort in public areas are at risk because of dense crowds, crowd management has failed. Even quite recently, such failures have cascaded into disastrous accidents. How can this still be acceptable?
This project aims at quantitatively modeling the behavior of human crowds. This is key to surpassing our outdated crowd management practices, which are still based only on back-of-the-envelope size estimates and stewards’ experience.

We will establish a holistic AI framework for crowd analytics. This hinges on two recent technological achievements: the capability to perform real-life experimental campaigns and the existence of large crowd dynamics datasets covering both normal and rare conditions. AICrowd will tackle three outstanding challenges: quantitative stochastic modeling of crowds, maximization of data informativity, and optimal actuation for experimental design and control.
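As a rough illustration of what quantitative stochastic modeling of a crowd can look like, here is a minimal Langevin-type sketch of a single pedestrian relaxing towards a desired velocity under random fluctuations. This is a generic textbook form, not necessarily the model AICrowd will adopt, and all parameter values are made up.

# Minimal Langevin-type stochastic pedestrian model (illustrative only;
# parameters and force terms are hypothetical, not AICrowd's model).
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 0.05, 400           # time step [s], number of steps
tau = 0.5                         # relaxation time towards desired velocity [s]
v_des = np.array([1.3, 0.0])      # desired walking velocity [m/s]
sigma = 0.3                       # noise intensity

x = np.zeros(2)                   # position [m]
v = np.zeros(2)                   # velocity [m/s]
trajectory = []

for _ in range(n_steps):
    # Relaxation towards the desired velocity plus Gaussian fluctuations.
    noise = sigma * np.sqrt(dt) * rng.standard_normal(2)
    v = v + dt * (v_des - v) / tau + noise
    x = x + dt * v
    trajectory.append(x.copy())

print("final position:", trajectory[-1])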

CTRL-P

Project description

CTRL-P: Compute-TRain-Learn 3D Printing. Additive Manufacturing, or 3D printing, is one of the cornerstones of a sustainable production chain. Complex structures can be manufactured directly from digital drawings in an additive fashion, thereby reducing the amount of waste material and eliminating the need for dedicated tooling.

We have the ambition to make the control of a 3D printer as simple as controlling a conventional paper printer: by giving a single command, such as pressing 'Ctrl-P' on a keyboard. We want to develop a procedure that automatically converts a digital drawing of a product into a set of printer instructions that results in a product with optimal mechanical and geometrical properties. It should deliver optimal product quality without human intervention or repetitive fine-tuning of the process settings.

The proposal aims to develop a Digital Twin of Additive Manufacturing processes to study and control the dynamics of print process parameters, and to understand the relation between print process settings and the final output.

DNmAt

Project description

DNmAt: Towards in silico DNA for Materials 
With the need for efficient, energy-saving and environmentally friendly processes, porous materials with tailored structures, tunable surface properties and high functionality must be found. However, despite their potential (e.g., as catalysts, membranes or molecular tanks), their practical application is limited mainly by the difficulty of finding unambiguous structure-functionality relationships and by a lack of stability.

The main objective of this project is to develop a unique framework, inspired by the concept of DNA, to encode the relevant information about materials. We will use a combination of simulation and ML techniques. Just as each species has a distinct DNA, we plan to create an in silico DNA for materials. In analogy to biological DNA base pairs, the in silico DNA will consist of a series of descriptors containing the properties of the material.
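A toy example of what such a descriptor-based "DNA" could look like as a data structure; the descriptor names and values below are purely hypothetical and only serve to illustrate the idea.

# Hypothetical "in silico DNA" record for a porous material: an ordered set of
# descriptors. The actual descriptors used in DNmAt may differ.
from dataclasses import dataclass

@dataclass
class MaterialDNA:
    pore_diameter_nm: float       # geometric descriptor
    surface_area_m2_per_g: float  # textural descriptor
    density_g_per_cm3: float      # bulk descriptor
    metal_site: str               # chemical descriptor

    def as_vector(self):
        # Numeric part of the "DNA", usable as input to ML models.
        return [self.pore_diameter_nm, self.surface_area_m2_per_g, self.density_g_per_cm3]

example = MaterialDNA(1.1, 1800.0, 0.35, "Zn")
print(example.as_vector())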

NGO-PDE

Project description

Neural Green’s Operators (NGO) as Surrogates for Parametric Solutions of Partial Differential Equations (PDEs): application to first-principles models for magnetic confinement fusion energy systems.
Simulation of complex physical phenomena has been the primary focus of computational sciences, where methods are developed to solve PDEs. NGO-PDE targets a breakthrough in surrogate modeling of PDEs by developing a unified modeling framework that combines physics-based variational approaches with the data-driven and non-intrusive approach of neural networks.
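For intuition, a textbook-level sketch of the underlying idea (not necessarily the exact formulation used in NGO-PDE): for a linear PDE with source term f, the solution can be written through a Green's function, and a neural Green's operator replaces that kernel by a learned one,

\mathcal{L}u = f \ \text{in } \Omega
\quad\Longrightarrow\quad
u(x) = \int_{\Omega} G(x, y)\, f(y)\, \mathrm{d}y,
\qquad
u_\theta(x) = \int_{\Omega} G_\theta(x, y)\, f(y)\, \mathrm{d}y,

where G_\theta is a neural-network parameterization of the Green's function, trained so that u_\theta approximates the PDE solution across a family of parameters and source terms.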

The novelty and universal applicability of the proposed AI framework for PDEs imply that the proposed research will have a significant impact on many research fields.

Physics-informed AI for improved cancer prognosis

Project description

In order to metastasize, cancer cells need to move. Estimating the ability of cells to move, i.e. their dynamics, is a promising new biomarker for predicting patient prognosis (overall survival) and response to therapy.
We aim to improve cancer prognostication by introducing the cell dynamics as a novel marker for metastatic risk and therapeutic resistance. We will explicitly model and predict the tumor cell dynamics from microscopic images through the innovative use of physics-informed AI.

To achieve this, we will combine state-of-the-art technologies from different fields (statistical physics of jamming/unjamming transitions, biophysics, image analysis, and deep learning/AI) to accurately and predictively link structure to dynamics in disordered cell tumors.
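In generic terms, physics-informed learning of this kind combines a data-fit term with a penalty on the residual of a physical model. A minimal sketch with placeholder terms follows; it is not the project's actual model of cell dynamics.

# Generic physics-informed loss: data fit plus a penalty on the violation of a
# chosen physical model. Both terms below are illustrative placeholders.
import numpy as np

def physics_informed_loss(pred, obs, residual, weight=1.0):
    """pred/obs: predicted vs. observed cell dynamics; residual: violation of
    the chosen physical model evaluated at the prediction."""
    return np.mean((pred - obs) ** 2) + weight * np.mean(residual ** 2)

# Toy usage with made-up numbers.
pred = np.array([0.8, 1.1, 0.9])
obs = np.array([1.0, 1.0, 1.0])
residual = pred - 1.0           # stand-in for a physics constraint "dynamics ~ 1"
print(physics_informed_loss(pred, obs, residual, weight=0.5))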

VACE

Project description

VACE: Value alignment for counterfactual explanations in AI. Explaining AI systems used for decision support is necessary. However, most explanation methods fail to integrate stakeholder values and fail to provide users with actionable interventions. VACE pushes the state of the art in computer science by developing methods that generate feasible, actionable, and fair counterfactual explanations (CEs). VACE takes a domain-concrete approach, focusing on the health domain.
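As background, a common formulation of counterfactual explanations in the literature (VACE's own formulation may differ) searches for the smallest feasible change to an input that alters the model's decision:

x_{\mathrm{cf}} = \arg\min_{x'} \; \ell\big(f(x'), y_{\mathrm{target}}\big) + \lambda\, d(x, x')
\quad \text{subject to } x' \in \mathcal{A},

where f is the decision model, d penalizes distance from the original instance x, and \mathcal{A} encodes feasibility and actionability constraints (e.g., immutable attributes stay fixed); stakeholder values can enter through the choice of d and \mathcal{A}.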

VACE’s impact lies in establishing an emerging research methodology (PiCS) that transforms how we do fundamental CS research, and in creating AI tools that can give actionable interventions for model decisions and patient outcomes that align with important stakeholder values.

BayesBrain

Project description

Computation in biological brain tissue consumes several orders of magnitude less power than silicon-based systems. Motivated by this fact, this project aims to develop the world’s first hybrid neuro-in-silico Artificial Intelligence (AI) computer, introducing a fundamentally new paradigm of AI computing. In this high-risk/high-gain project, we will combine an in-silico Bayesian control agent (BCA) with neural tissue hosted by a microfluidic Brain-on-Chip (BoC) that together form a hybrid learning system capable of solving real-world AI problems.

Toward this paradigm, all computation and communication inside and between the BCA and BoC will be governed by the Free Energy Principle, which is the leading neuroscientific theory for describing biological neuronal processes and also supports a variational Bayesian machine learning interpretation. We will start by developing a purely silicon-based BCA that learns to balance an inverted pendulum, implemented by free energy minimization on a factor graph.
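For reference, the central quantity in this framework is the variational free energy, given here in its standard form from the variational Bayes and active inference literature (the project's concrete factor-graph implementation is not reproduced here):

F[q] = \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(y, z)\big]
     = D_{\mathrm{KL}}\big(q(z)\,\|\,p(z \mid y)\big) - \ln p(y),

so minimizing F over the approximate posterior q drives q towards the true posterior p(z \mid y) while bounding the negative log evidence; in the control setting, actions are selected so as to keep (expected) free energy low.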

Creative AI Machines

Project description

Creativity forms the basis for the design of all human artefacts. The Creative AI Machines project aims to research creativity and develop creative AI methods to (a) generate creative design solutions, and (b) elicit and promote creativity in (human) design processes through close interaction between AI and humans. The research will be consolidated in a framework on human-machine embodied interaction design. Most current "creative" AI simply provides designers with a vast array of ideas. This project develops methods for creative AI interpretation and tweaking using new technologies with respect to neural and Bayesian networks. Furthermore, nudging and friction (a 'troublemaker') are developed, used, and tested in a multidisciplinary approach across the departments BE, IE&IS, and ID.

DAMOCLES

Project description

The modeling of complex engineering systems is highly challenging. Physics-based models require a cautious application of constitutive assumptions, whereas data-based models require vast amounts of data. DAMOCLES targets a breakthrough in the constitutive modeling of such systems in different physical domains by developing a unified multi-tool framework that combines the favorable characteristics of physics-based and data-based approaches, thereby fitting squarely in the EAISI program line “Merging Models and Data in AI”. The core step of DAMOCLES is to merge the state of the art in port-Hamiltonian Neural Networks, Robust Bayesian Uncertainty Quantification, and Sparsity-Promoting and Physics-Informed Machine Learning to cover the entire spectrum from purely data-driven to completely physics-based modeling. The challenges are tied to data availability.
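For context, the generic port-Hamiltonian form that such networks build on (standard notation from the literature, not DAMOCLES-specific):

\dot{x} = \big[J(x) - R(x)\big]\,\nabla H(x) + G(x)\,u,
\qquad
y = G(x)^{\top}\nabla H(x),

with skew-symmetric interconnection J(x) = -J(x)^{\top}, positive semi-definite dissipation R(x) \succeq 0, and Hamiltonian (energy) H; in a port-Hamiltonian neural network, H (and possibly J, R, G) are parameterized by neural networks so that the learned model respects energy balance and passivity by construction.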

Hybrid Machine Learning

Project description

This project develops Artificial Intelligence (AI) based solutions for the design and optimal control of engineering-scale devices manufactured from advanced/smart materials. We focus on soft robotic manipulators (e.g., microsurgery robots or devices autonomously navigating in complex, unstructured, and dynamic environments) as characteristic examples of systems with many degrees of freedom, made of active mechanical metamaterials. Optimal control of such manipulators necessitates real-time yet high-accuracy multiscale simulations to predict relevant mechanical behavior at the engineering scale, hierarchically emerging from the underlying microstructure.
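A minimal sketch of the surrogate idea behind such real-time multiscale simulation: an expensive microscale response is sampled offline and replaced online by a learned model. The "microscale" function and all numbers below are synthetic stand-ins, not the project's actual simulator.

# Surrogate-accelerated multiscale sketch (illustrative, hypothetical values).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def microscale_response(strain):
    # Placeholder for a costly microstructure simulation (assumption):
    # a mildly nonlinear stress-strain relation.
    return 1.0e3 * strain + 5.0e4 * strain ** 3

# Offline: sample the expensive model and fit a neural-network surrogate.
strains = rng.uniform(-0.05, 0.05, size=(2000, 1))
stresses = microscale_response(strains).ravel()
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(strains, stresses)

# Online: the engineering-scale model queries the cheap surrogate instead.
test = np.array([[0.02]])
print("surrogate:", surrogate.predict(test), " reference:", microscale_response(test).ravel())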

I.Touch2See

Project description

Visual robot perception has grown tremendously in the last ten years. Artificial neural networks make it possible to reliably classify objects and segment structures from RGB images. Moreover, they allow for 3D pose estimation for robot navigation and manipulation in partially structured or unstructured environments. However, vision alone is not sufficient. Versatile interaction with unstructured environments requires a new generation of robots that also fully exploit touch and proprioception (perception of one's own movement and the associated effort). Combining complementary touch and vision information leads to a better interpretation of the world. Touch allows for 3D modeling of unseen parts of the environment as well as estimation of object mass, inertia, and friction. Moreover, Artificial Intelligence (AI) combined with visuo-tactile sensing will improve robot decision making, bringing it closer to human capabilities.
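A minimal illustration of why combining the two modalities helps: fusing a visual and a tactile estimate of the same quantity by inverse-variance weighting yields an estimate with lower uncertainty than either sensor alone. All numbers below are made up.

# Toy fusion of a visual and a tactile estimate of the same quantity (e.g., an
# object surface position). Values are illustrative only.
vision_estimate, vision_var = 0.52, 0.04   # vision: global but occlusion-prone
touch_estimate, touch_var = 0.47, 0.01     # touch: local but precise

w_vision = 1.0 / vision_var
w_touch = 1.0 / touch_var
fused_estimate = (w_vision * vision_estimate + w_touch * touch_estimate) / (w_vision + w_touch)
fused_var = 1.0 / (w_vision + w_touch)

print(fused_estimate, fused_var)   # fused_var < min(vision_var, touch_var)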