S+T+ARTS AIR - Breathing Architecture by Filippo Nassetti

Six use cases of generative AI: Making the Invisible Visible in Health, Nature, and Human Connection

In today’s technology-driven world, where innovation often takes a purely technical approach, art-driven innovation explores a different path: bringing artists into the heart of scientific and technological development. Through the S+T+ARTS AIR project, a consortium of eight partners and ten artists demonstrated how artistic experiments with generative artificial intelligence can lead to groundbreaking innovations. This essay explores the fascinating outcomes of these experiments, particularly in AI x health, AI x biodiversity, and AI x human values. As In4Art, an organization dedicated to supporting responsible innovation through artistic experimentation, we highlight those outcomes which, in our view, show the greatest potential for practical application and innovation spillover, especially for small and medium-sized enterprises (SMEs). From inner body architectures to urban biodiversity monitoring, these projects reveal not just their current achievements but point toward an exciting future where art-driven experiments reshape how we develop and use AI.

S+T+ARTS AIR brought together eight partners from five EU countries, including technology powerhouses like the Barcelona Supercomputing Center and spatial sound expert Heka-Lab (Slovenia). As In4Art, we took on the role of innovation mentor, supporting and guiding the project in its mission to foster art-driven innovation. Our task was to ensure that residency projects were selected and executed with tangible innovation outcomes in mind, while building bridges to different ecosystems to maximize the possibility of innovation spillovers.

Just as nature has evolved different species to tackle different challenges, today’s artificial intelligence landscape comprises distinct ‘species’ adapted for specific tasks. Understanding these AI species is important for grasping the innovations that emerged from this program. Computer vision models excel at understanding and interpreting visual information, as demonstrated in Breathing Architecture, which deals with complex medical visualizations. Synthetic data generation models (GANs) can create new, realistic data – the foundation of Impossible Larynx’s voice modification capabilities. Auto-encoder models, used in both SYMBODY and Metawalks, learn to represent complex patterns in movement and sound. Reinforcement learning models, employed in Ki/S, understand and adapt to human behaviour, while transformer-based multimodal models, which can process multiple types of information simultaneously – including biometric data such as voice – are the focus of investigation in the Vocal Values Principles project.

AI Species in 2024 - 2025

In this essay, we focus on the six use cases of generative AI mentioned above. They were all developed as outcomes of collaborative processes between the artists and the AIR partners, often through cycles of designing, developing, testing and iterating. Delivering digital innovation across the three domains of art, science and technology is a challenging and complex ambition. As we have learned, it is important to prioritize delivering an innovative prototype in one of these domains. This is why we used the mentoring process to identify the biggest potential of each project for the duration of the residencies and to support the collaborators in going as far as possible in that direction, while developing an understanding of its spillovers and practical follow-ups.

Finally, to explore and understand the use of these AI models, we selected six projects from our S+T+ARTS AIR program that demonstrated exceptional potential for further development. These projects were identified through our PESETABS diffusion model as ‘new-end outcomes’ – meaning they reached conclusive results that opened clear pathways for future development. Unlike projects that reach dead ends (no clear development path) or open ends (inconclusive experimental stage), new-end outcomes demonstrate surprising insights or unexpected applications of technology that can be further nurtured into practical innovations. Each of these six projects not only succeeded in their initial experimental goals but also revealed promising directions for continued development and real-world implementation.

The Convergence of Art and AI in Healthcare

The Health & Wellbeing innovations offer a comprehensive suite of tools addressing different aspects of physical and mental wellbeing, from internal body visualization to urban stress management. The innovation potential of integrating AI with physical and biological systems becomes apparent. Real-time analysis and visualization capabilities are developing on par with computational capabilities, as these innovations, developed in collaboration with high performance computing centers, demonstrate. AI enables personalized healthcare applications, as we see in several art-driven innovations coming out of AIR. Finally, advanced modelling and simulation tools allow for high-quality, non-invasive monitoring and assessment, often in real time and at an individual level. Although the AIR project was not deliberately designed with this focus, most of the innovative applications coming out of the program turned out to respond to challenges and needs in the health and wellbeing domains.

In the realm of health and wellbeing, our experiments revealed how artistic perspectives can transform complex medical data into accessible, practical tools. Take Breathing Architecture, for instance – a true artist-scientist collaboration that developed a novel modelling process for complex inner body geometries. The innovation lies in its ability to simulate fluid flow and transported particles within the respiratory zones of the lungs, specifically the alveolar structures. This advancement could revolutionize how we approach drug delivery to human lungs, which is particularly crucial for treating diseases like COPD.
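The core mechanism – advecting particles through a velocity field while adding diffusive jitter – can be illustrated with a deliberately simplified 2-D sketch. Everything below (the duct-like velocity field, the time step, the diffusion coefficient) is an illustrative assumption, not the project’s actual model, which operates on far more complex alveolar geometries:

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity(p):
    # Toy "airway" flow: constant transport along x, gentle pull toward
    # the duct axis y = 0. A stand-in for a real simulated flow field.
    return np.stack([np.full(len(p), 1.0), -0.5 * p[:, 1]], axis=1)

def advect(particles, dt=0.01, steps=200, diffusion=0.02):
    """Move particles with the flow, plus Brownian jitter for diffusion."""
    p = particles.copy()
    for _ in range(steps):
        p += velocity(p) * dt                                     # transport
        p += diffusion * np.sqrt(dt) * rng.normal(size=p.shape)   # diffusion
    return p

# 100 particles released across the duct inlet (x = 0, y in [-1, 1])
start = np.stack([np.zeros(100), rng.uniform(-1, 1, 100)], axis=1)
end = advect(start)
print("mean travel along duct:", float(end[:, 0].mean()))   # ~ 2.0
print("spread across duct narrows:", float(end[:, 1].std()))
```

Even this toy version shows the quantity of interest: where particles end up after a given breathing interval, which is what drug-delivery modelling ultimately needs to predict.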

Similarly, the Impossible Larynx project pushes the boundaries of voice technology. By combining physical modelling with AI, this tool enables real-time voice modification based on physical characteristics. Think of it as a bridge between lost voices and renewed communication – whether for individuals facing voice loss due to illness or those undergoing voice transitions as part of gender affirmation. 
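The “physical modelling” half of such a system can be hinted at with the classical source-filter model of speech: a glottal pulse train (the source) shaped by resonant filters standing in for the vocal tract. This is a textbook sketch under assumed parameters (roughly 120 Hz pitch, a single 700 Hz formant), not the project’s actual synthesizer:

```python
import numpy as np

sr = 16000                  # sample rate in Hz (assumed)
n = sr // 2                 # half a second of audio

# Source: glottal impulse train at roughly 120 Hz
src = np.zeros(n)
src[:: sr // 120] = 1.0

def resonator(x, freq, bw, sr):
    """Two-pole IIR filter approximating one vocal-tract formant."""
    r = np.exp(-np.pi * bw / sr)        # pole radius set by the bandwidth
    theta = 2 * np.pi * freq / sr       # pole angle set by the formant frequency
    b1, b2 = 2 * r * np.cos(theta), -r * r
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for i in range(len(x)):
        yi = x[i] + b1 * y1 + b2 * y2
        y[i] = yi
        y2, y1 = y1, yi
    return y

# One formant near 700 Hz; changing freq or bw changes the perceived
# voice, which is exactly the kind of knob a physical model exposes.
voiced = resonator(src, freq=700.0, bw=100.0, sr=sr)

# The strongest spectral component should sit near the formant
spectrum = np.abs(np.fft.rfft(voiced))
peak_hz = np.fft.rfftfreq(n, d=1 / sr)[spectrum.argmax()]
print("strongest component near:", round(peak_hz), "Hz")
```

In a full system, an AI model would estimate such physical parameters from a target voice and drive the synthesizer in real time; here the parameters are simply hand-set.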

The SYMBODY dashboard exemplifies how artistic vision can make complex AI systems more intuitive and accessible. This prototype interface visualizes and analyses latent space auto-encoded representations of human motion data, making sophisticated movement analysis available to non-technical users. Its applications range from sports training to early detection of movement disorders, showcasing how art-driven innovation can bridge the gap between cutting-edge technology and practical applications. 
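To make the idea of a latent-space representation concrete, here is a minimal sketch: a linear auto-encoder, trained with plain gradient descent, compresses synthetic 12-dimensional “motion frames” (which secretly vary along only two degrees of freedom) into 2-D latent codes that a dashboard could plot. The data and architecture are invented for illustration and bear no relation to SYMBODY’s actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for motion-capture data: 300 frames of 12 joint
# angles driven by two hidden factors, e.g. the phase of a cyclic gait.
t = np.linspace(0, 4 * np.pi, 300)
factors = np.stack([np.sin(t), np.cos(2 * t)], axis=1)          # (300, 2)
mixing = rng.normal(size=(2, 12))
frames = factors @ mixing + 0.05 * rng.normal(size=(300, 12))   # (300, 12)

# Minimal linear auto-encoder: encoder (12 -> 2), decoder (2 -> 12),
# trained by gradient descent on the mean squared reconstruction error.
W_enc = rng.normal(scale=0.1, size=(12, 2))
W_dec = rng.normal(scale=0.1, size=(2, 12))
lr = 0.01
mse_init = float(np.mean(frames ** 2))
for _ in range(2000):
    z = frames @ W_enc                  # latent codes, shape (300, 2)
    err = z @ W_dec - frames            # reconstruction error
    W_dec -= lr * z.T @ err / len(frames)
    W_enc -= lr * frames.T @ (err @ W_dec.T) / len(frames)

z = frames @ W_enc
mse_final = float(np.mean((z @ W_dec - frames) ** 2))
print("latent shape:", z.shape)         # every frame becomes a 2-D point
print("reconstruction error drops:", mse_init, "->", mse_final)
```

The 2-D codes in `z` are what a visual interface would scatter-plot: similar movements land close together, which is what makes the latent space explorable by non-technical users.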

Perhaps most intriguing is the Ki/S (Kinesphere infringements per second) in the Body and the City project by Michail Rybakov, which introduces a novel way to measure personal space dynamics in urban environments. This innovation could transform how we design public spaces, manage crowd flow, and enhance urban wellbeing. When the measurement indicates between 0.3 and 0.4 infringements per second, people’s perception of space shifts from personal to crowd space – a crucial insight for urban planning and public health. 
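The metric itself is straightforward to operationalize once infringement events have been detected. The sketch below – with invented event data and a hypothetical sliding-window helper, not the project’s actual pipeline – counts kinesphere entries around a time point and applies the 0.3–0.4 infringements-per-second band reported above:

```python
from bisect import bisect_left, bisect_right

def infringements_per_second(event_times, t, window=10.0):
    """Rate of kinesphere infringements around time t over a sliding window.

    event_times: sorted timestamps (seconds) at which another person
    entered the subject's kinesphere. All names here are illustrative.
    """
    lo = bisect_left(event_times, t - window / 2)
    hi = bisect_right(event_times, t + window / 2)
    return (hi - lo) / window

def perceived_space(rate):
    # Thresholds from the essay: 0.3-0.4 infringements per second marks
    # the shift from personal space to crowd space.
    if rate < 0.3:
        return "personal"
    if rate <= 0.4:
        return "transitional"
    return "crowd"

# A quiet street: 2 infringements in a 10 s window -> 0.2/s
quiet = [1.0, 7.5]
# A busy square: 6 infringements in the same window -> 0.6/s
busy = [0.5, 2.0, 3.1, 5.4, 7.7, 9.0]

print(perceived_space(infringements_per_second(quiet, 5.0)))  # personal
print(perceived_space(infringements_per_second(busy, 5.0)))   # crowd
```

Mapped over time and location, such rates could feed directly into the crowd-flow and public-space analyses the essay envisions.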

Biodiversity Through the Lens of AI

The biodiversity innovation developed during S+T+ARTS AIR provides a powerful tool for understanding and protecting urban ecosystems, addressing multiple conservation challenges simultaneously. The project explores the crossover between AI and bioacoustics at a moment when only a handful of organizations have set out to do the same. This domain, called unsupervised discovery in urban bioacoustics, could represent a powerful approach for uncovering hidden patterns and phenomena in our cities’ ecosystems.

In the realm of environmental monitoring, the Metawalks project stands as a testament to how artificial intelligence can help us understand and protect urban ecosystems. This machine learning-based data experience instrument allows us to segment and explore urban vocalizations and their relations. Imagine being able to identify acoustic corridors between green spaces, monitor species presence through vocalizations, or track the spread of invasive species – all through an AI system trained on other-than-human acoustic datasets. 
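Unsupervised discovery of this kind can be sketched in a few lines: embed each audio clip as a vector (here, synthetic stand-ins for the output of a bioacoustic encoder such as AVES), then group the vectors without any labels. The three “vocalization types”, the dimensionality and the clustering choice are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 8-D "embedding vectors" for 120 short urban audio clips,
# drawn from three well-separated groups standing in for three
# vocalization types. Real embeddings would come from an audio encoder.
centers = rng.normal(scale=8.0, size=(3, 8))
clips = np.concatenate([c + rng.normal(size=(40, 8)) for c in centers])

def kmeans(X, k, iters=20):
    """Plain k-means with farthest-point seeding: no labels required,
    only a guess at how many sound groups to look for."""
    cents = [X[0]]
    for _ in range(k - 1):
        # Seed the next centroid at the point farthest from all chosen ones
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in cents], axis=0)
        cents.append(X[dists.argmax()])
    cents = np.array(cents)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # nearest-centroid assignment
        for j in range(k):
            if np.any(labels == j):            # leave empty clusters in place
                cents[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(clips, k=3)
print("clips per discovered group:", np.bincount(labels))
```

No species labels enter the process: the groups emerge from the structure of the sound itself, which is what makes the approach suitable for discovering unexpected acoustic phenomena in cities.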

Human Rights and AI: Establishing Ethical Frameworks

One would expect that ethics, human rights, privacy, questions of power or the ecological effects of AI development would have been at the heart of many of the projects in AIR, but this was not the case. Only one of the projects dedicated a line of exploration to this topic, in order to fill a critical gap in the ethical frameworks for emerging voice technologies. Neither the EU AI Act nor the Data Act mentions guidelines, responsibilities or protections for data givers or receivers when it comes to voice data. Therefore, through Vocal Values Principles, a potentially impactful start – due to its distributive and collaborative nature – was made to address this EU policy omission.

The Vocal Values Principles (VVP) represent a crucial step toward ensuring the ethical development of voice technologies and were developed as part of an investigation into the ecosystem of voice data. This living document, hosted at vocalvalues.org, was co-created by over 20 experts and establishes principles for transparent and traceable voice technology development. In an era where voice is increasingly digitized and framed as “data” for AI systems, these principles lay the groundwork for protecting individual rights while fostering innovation that takes into account social, ethical and interpersonal responsibilities.

The Path Forward: Creating new spaces for innovation

The innovations emerging from S+T+ARTS AIR demonstrate how new spaces for innovation arise when collaborators step outside their comfort zones, meeting halfway between the worlds of art and science. These spaces emerge when artists and technologists find common ground for exploration that correlates with, but distinctly differs from, their usual domains. For instance, when artist Natan Sinigaglia collaborated with TMC on SYMBODY, they created an entirely new space for innovation around the visual exploration of latent space AI models – something neither party would have conceived working in isolation. 

What makes these innovations particularly significant is their potential for disruption. By “disruption,” we mean the capacity to fundamentally change current practices or add revolutionary insights that could alter the course of research and innovation in their respective fields. The Breathing Architecture project, for example, shows immense disruptive potential in medical treatment. If its model proves suitable for lung dynamics simulation, it could revolutionize how we approach drug delivery and respiratory disease treatment. Similarly, the Ki/S project could disrupt urban planning by providing, for the first time, a way to quantify how people experience city spaces on a personal level at scale.

Looking at the outcomes of our experiments, we see several clear paths for transformative impact:

In healthcare, both Breathing Architecture and Impossible Larynx represent potentially disruptive innovations. While Breathing Architecture could transform our understanding and treatment of respiratory conditions, Impossible Larynx may revolutionize voice rehabilitation and transition processes with its hyper-realistic voice regeneration capabilities.

In urban development, Ki/S opens new possibilities for human-centered city design, potentially disrupting how we approach everything from public space planning to crowd management. This innovation could fundamentally change how cities understand and respond to their inhabitants’ experiences.

In digital rights and ethics, Vocalvalues.org stands as a potentially disruptive force in AI regulation, particularly regarding voice data. By establishing principles for human-centered AI development, it could shape the future of voice technology governance.

The success of these projects demonstrates the power of art-driven innovation. For SMEs, these developments represent more than just technological advancement; they offer new ways of thinking about and approaching innovation. Whether it’s a family-owned manufacturing business looking to optimize its processes or a healthcare provider seeking to improve patient care, these artistic experiments with AI show how stepping outside conventional boundaries can lead to breakthrough solutions.

Beyond that, these innovations demonstrate how artistic thinking can humanize AI systems while making them more practical and accessible, opening new possibilities in healthcare, environmental monitoring, and human-computer interaction. A family-owned business with decades of legacy can now access sophisticated AI tools that are both powerful and intuitive, thanks to the artist’s touch in their development.

As we look ahead, it’s clear that art-driven innovation has an important role to play in shaping and questioning how we develop and implement AI technologies. By bringing together artistic vision with technical expertise, we can create solutions that are not only powerful but also inherently human-centered. For SMEs looking to stay competitive in an AI-driven future, these innovations offer a glimpse of how technology can enhance rather than replace the personal touch that makes their businesses special.

Through S+T+ARTS AIR, we’ve experienced that when artists and technologists collaborate, the results can be both revolutionary and practical. The six innovations highlighted here represent just the beginning of what’s possible when we approach AI development through the lens of artistic exploration and human-centered design.

The artists involved in these projects were Filippo Nassetti (Breathing Architecture), Maria Arnal Dimas (Impossible Larynx), Natan Sinigaglia (SYMBODY), Michail Rybakov (The Body and the City), Antoine Bertin (Making all Voices of the City Heard) and Jonathan Reus (Dadasets).

Technical Terms Glossary

AI Species Terms

● Auto-encoder Models
AI systems that learn to compress data into a compact representation and then reconstruct it, useful for finding patterns and reducing noise in complex data

● Computer Vision Models
AI systems specialized in understanding and processing visual information from images or video

● GANs (Generative Adversarial Networks)
AI systems that can create new, synthetic data by learning from real examples

● Reinforcement Learning Models
AI systems that learn through trial and error by interacting with their environment

● Transformer-based Multimodal Models
AI systems capable of processing multiple types of data (text, audio, images) simultaneously

Project-Specific Terms

● Alveolar Structures
Tiny air sacs in the lungs where gas exchange occurs

● Articulatory Vocal Synthesizer
A system that simulates the physical process of human speech production

● Kinesphere
The sphere of physical space around a person’s body that they consider their personal space

● Latent Space
A compressed representation of data where similar items are positioned closer together

● PESETABS
In4Art’s diffusion model for assessing innovation outcomes (Potential, Experiment, Social, Economic, Technical, Artistic, Business, Sustainability)

Technical Development Terms

● TRL (Technology Readiness Level)
A method for estimating the maturity of technologies, ranging from 1 (basic principles) to 9 (proven system)

● New-end Outcome
In PESETABS, an experimental outcome that reaches conclusive results with clear development pathways

● Innovation Spillover
When innovations in one area lead to unexpected benefits or applications in other areas

Methodology Terms

● AVES Model
Unsupervised learning algorithm designed for animal vocalization encoding

● Bioacoustics
The study of sound production and reception in animals

● Linear Predictive Coding
A method used in audio signal processing to represent the spectral envelope of a digital signal

● Real-time Inference
The ability of an AI system to process and respond to input immediately

Data & Analysis Terms

● Feature Learning
The automatic discovery of relevant characteristics in raw data

● Parametric Conversion
Transformation of data based on adjustable parameters

● Spectral Synthesis
Creation of sound by combining different frequency components

● Unsupervised Discovery
Finding patterns in data without predefined categories or labels

PROJECT: S+T+ARTS AIR