MANTIS

Reference: 01621-01493911 - MANTIS
Duration: 2023 - 2025
BCAM budget: 289102.88
BCAM budget number: 289103.00
Funding agency: MEC
Type: National Project
Status: Ongoing Project

Objective:

As AI capabilities progress rapidly, open problems and questions in the field increasingly pivot on the notion of autonomy, both for its transformative potential and for its significant risks. Addressing these questions requires understanding the degree of intrinsic intentional behaviour a system can achieve as it develops its own norms and goals and potentially diverges from its initial programming or evolutionary intent. For example, current efforts to use Large Language Models to create autonomous agents raise challenges in assessing their degree of self-determination, given their vast, complex, and emergent nature (Z. Liu et al., 2023; Weng, 2023; Yao et al., 2023). In parallel, autonomous agency is central (perhaps the central property) to living and cognitive systems (Varela, 1979; Moreno & Mossio, 2015; Di Paolo et al., 2017), and significant progress has recently been made in the scientific understanding of natural autonomy in the bio-cognitive sciences (Ruiz-Mirazo & Moreno, 2004a; Barandiaran & Egbert, 2014; Aguilera & Di Paolo, 2021). To date, however, there has been little interaction between these advances and research on autonomous AI systems. We believe that new trends and open problems in AI development overlap with progress in the understanding of cognitive autonomy in bio-inspired intelligence, biological physics, and philosophy. In our view, successfully studying cognitive and AI autonomy requires developing unifying theoretical and mathematical frameworks that foster an interdisciplinary study of the bio-cognitive, technological, ethical, and social aspects of autonomy.

This project aims to bridge the gap between technological advancements and our understanding of biological and artificial cognition by addressing two questions: 1) Can we understand the degrees and dimensions of autonomy and its emergence in complex networks, whether artificial or biological? And 2) Based on this new understanding, can we develop novel control mechanisms that constrain the autonomy of AI systems so that they remain aligned with human and ethical values?