Three researchers from the Department of Mathematics become research leaders with Sapere Aude grants
Henrik Garde, Alexandra-Iulia Otiman, and Rune Nyrup each receive a Sapere Aude research leader grant from Independent Research Fund Denmark (DFF).
They are now set to lead their own research groups. Henrik Garde is an associate professor at the Department of Mathematics, Rune Nyrup is an associate professor at the Center for Science Studies, and Alexandra-Iulia Otiman is an assistant professor at AIAS and a tenure-track assistant professor at the Department of Mathematics.
"Calderón’s problem is a famous and challenging mathematical problem. Based on boundary electrical measurements of an object (or a person), the aim is to reconstruct a three-dimensional image of the interior spatial structure. If measurements only pertain to a small subset of the object’s surface, and if a complex function must be determined relating to both the conductivity and the permittivity, then no mathematically proven reconstruction method yet exists.
The project takes a completely new approach by transforming the measurement data using so-called functional calculus. This will be used to improve the modulus of continuity and reduce the non-linearity, which are the barriers to proving that optimisation-based methods are applicable. In addition, the project investigates to what extent the geometric shape and location of complex perturbations can be determined from the surface measurements."
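For context, here is a minimal sketch of the standard mathematical formulation of Calderón’s problem. It is added for illustration and is not part of the article; the complex-valued coefficient gamma combining conductivity and permittivity, and the partial-data subset Gamma, are the usual textbook conventions rather than notation taken from the project itself.

```latex
% Background sketch (not from the article): a standard formulation of
% Calderón's problem, where the coefficient gamma combines the conductivity
% and the permittivity of the object occupying the domain Omega.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For prescribed boundary data $f$, the electric potential $u$ solves
\[
  \nabla \cdot \bigl( \gamma \, \nabla u \bigr) = 0 \ \text{in } \Omega,
  \qquad u = f \ \text{on } \partial\Omega .
\]
The boundary measurements are encoded in the Dirichlet-to-Neumann map
\[
  \Lambda_\gamma : f \longmapsto \gamma \, \partial_\nu u \big|_{\partial\Omega},
\]
and Calder\'on's problem asks whether, and how stably, $\gamma$ can be
recovered from $\Lambda_\gamma$. The partial-data situation mentioned in the
quote corresponds to knowing $\Lambda_\gamma$ only on a subset
$\Gamma \subset \partial\Omega$ of the surface.
\end{document}
```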
"My project is meant to make theoretical advances in the field of differential geometry and more precisely, it aims at understanding the shape of some special spaces in arbitrary high dimensions by using a particular way of measuring distances. Describing the shape of a space in dimension more than four is not a straightforward task. In order to achieve this, my team and I will use abstract theories coming from various sides and branches of mathematics, that will produce the appropriate tools and framework."
"My project concerns the role of explanations in ensuring that artificial intelligence is applied in an ethically responsible way. Specifically, I will examine the advantages and disadvantages of what is called ‘Explainable Artificial Intelligence’, that is, different technologies for generating explanations of decisions made using artificial intelligence. Such explanations are important in order to enable people to understand and thereby think critically about the artificially intelligent systems that are increasingly impacting our everyday lives. But they also risk creating a false sense of understanding, which can be exploited to mislead or even manipulate. To resolve this dilemma, the project will therefore develop both philosophical theories and practical guidelines for how we can distinguish between ethically beneficial and ethically pernicious explanations."