Working Group 4 · Short Contribution · 15 min
COST Action CA23158 — WG4

Translating
Situated
AI Literacy

Participatory Artistic Research as
Policy-Relevant Insight

1 — The Problem

Why AI Literacy Is Hard
to Capture in Policy

AI literacy is often defined through skills, competencies, and ethical principles.

WG4 is tasked with mapping AI-related policies in CCIs, assessing impact, drafting recommendations, and contributing to a Toolbox. Across national and European contexts, AI literacy is increasingly framed through competency checklists, ethical guidelines, and governance principles.

These models struggle to describe how understanding and agency actually form in practice.

The core problem this talk addresses

What if AI literacy does not first emerge as a competency, but as a situated condition?

2 — A Recurring Observation

What I Kept Seeing
Across Contexts

Across different settings, the same questions kept reappearing.

i
What does AI mean here?
The same system carries different meanings, stakes, and affordances in each local world.
ii
Who feels able to act with it?
Agency is not uniformly distributed — it is produced through context, trust, and position.
iii
What carries over beyond the moment?
Insight formed in one setting does not automatically become portable knowledge.

3 — AT-LABs

AT-LABs as
Epistemic Environments

In my artistic research, I work with AT-LABs — temporary, context-specific artistic research environments. Polylocality here is not a theory; it is a methodological condition. The same AI system behaves differently in different local worlds. And so does literacy.

Workshops in higher education settings
Health-related field labs
Civic participation formats
Immersive audiovisual installations

Each setting operates as a coherent local world — with its own temporalities, participants, affordances, power structures, and forms of meaning-making. Each is designed to surface how sense-making actually happens.

4 — The Core Tension

A Key Tension for
AI Literacy Policy

Policy side
  • Universal frameworks
  • Skills taxonomies
  • Evaluation grids
  • Abstract transferability
Practice side
  • Local, contingent understanding
  • Context-dependent agency
  • Relational trust formation
  • Embodied sense-making

"This tension is not a problem to solve — it is a condition to work with."

Policy often needs abstraction. Practice often resists it. The question for WG4: how can we reference situated insights without flattening them?

5 — Translational Indicators

Boundary Concepts for
Referencing Situated Practice

Across AT-LAB iterations, three recurring dimensions became visible. These are not metrics — they are diagnostic lenses: translational tools that allow us to speak across contexts without collapsing them into sameness.

I
Concept Clarity
Can participants articulate what they believe is happening with AI in this context?
→ What does AI mean here?
II
Perceived Agency
Do participants feel able to intervene, respond, or reposition themselves in relation to AI?
→ Who can act with it?
III
Transferability
Do insights reappear elsewhere — in language, practice, or imagination — beyond the immediate moment?
→ What carries over?

These do not describe outcomes — they describe what can travel across contexts.

6 — Why This Matters for WG4

A Contribution
to WG4's Work

AT-LABs are not policy instruments — they are epistemic environments. They reveal where trust breaks down, where agency emerges, and where transfer succeeds.

Reveals blind spots in current AI literacy framings — particularly around how understanding and agency form in embodied, locally contingent practice.
Supports policy briefs and toolkits grounded in practice. Grounding WG4 documents in situated observation strengthens their legitimacy.
Preserves local specificity while enabling comparison — a referencing layer, not a standardization layer.
Shows how artistic research already performs referencing work — identifying what aspects of situated practice can be made legible to policy.

7 — Closing

A Dynamic Ecology
of Situated Intelligences

"Collective intelligence should not mean convergence toward a single framework — it might instead mean a dynamic ecology of situated intelligences, capable of translation and mutual influence, while retaining local coherence."

A practice-based lens
On how AI literacy actually forms — from the inside of situated experience, not the outside of competency frameworks.
A translational layer
That preserves complexity — minimal, useful for referencing layers, toolkits, and impact discussions.
An open question
How might this intersect with WG4's work on referencing, impact assessment, and policy recommendations?

Thank you. I look forward to the discussion.

Q&A — Discussion

Questions & Responses

What has been your experience so far working with this framework? What has changed during the process?
My approach has been to see what emerges rather than starting with a fixed framework. The work has been in a constant state of reflection and reanalysis. Only recently have I been able to identify the boundary concepts I presented; they were not predefined but appeared gradually as patterns across the different labs. The Professional Doctorate program is also very collaborative, with artists and researchers working on similar trajectories at different universities of applied sciences in the Netherlands. The key thing emerging now is that the framework is starting to become more transferable across contexts.
How would this AT-LAB methodology look in practice? What kinds of activities take place inside one of these labs?
I usually try to design an artistic or creative activity that allows participants to interact with the system in an open way. For example, I sometimes use card-based exercises inspired by Brian Eno's Oblique Strategies: cards that introduce unexpected prompts or constraints. In prompting workshops, participants are not only asked to write prompts for AI systems but also to rethink the prompts themselves. A card might suggest thinking upside down, thinking through water, or imagining the opposite of what seems obvious. The goal is to create an imaginative environment that helps participants engage with AI systems even if they initially feel unsure where to begin.
Participatory practices can sometimes be politically instrumentalized. How do you see AI interacting with those dynamics?
In some ways AI certainly amplifies existing issues. Many of the questions we are facing are not entirely new. However, artistic interventions can sometimes open new perspectives. In the rural health project, designers noticed that people would use wearable devices for a week and then stop, and the project stalled. When we created immersive environments that translated sensor data into light and sound, participants began to interact differently, discussing the technologies more openly and feeling more ownership over the data. In that sense the artistic environment helped people reclaim a sense of agency in relation to the technology.
Could you describe a concrete example of how participatory AI tools were used in a civic project?
One example comes from a placemaking project in Reggio Emilia, Italy, in collaboration with a startup from Poland. The city wanted to collect ideas from residents about public spaces, normally done through surveys, which many people don't answer. Instead we designed an interactive walk through the city. Participants used an app to generate speculative images of what those spaces could become and received prompt cards offering unexpected perspectives. They could see what others had created, modify ideas, vote for the ones they liked, and leave comments. The result was a more collective and creative process in which participants could build on each other's ideas.
Could you talk more about the immersive installation work — the Atmosfera project?
In Atmosfera I worked with a scientific tool developed by NASA called the Planetary Spectrum Generator, a system that simulates atmospheric conditions on different planets, calculating how light interacts with particles depending on viewing angle and environmental factors. Instead of presenting this as a scientific visualization, I created an immersive audiovisual installation using layered projections, fog, and sound. We used a special screen called hologauze that allows light to pass through while still reflecting part of the image. By placing fog both in front of and behind the screen, the projected light became visible in the air. The audience moved through planetary environments including Venus, Neptune, and Jupiter. At the end I introduced an AI-generated layer to remind the audience that what they experienced was always mediated through computational systems.
Are you also developing the technical systems yourself?
Yes. My background is in art, design, and technology. Much of my work involves creative coding and building custom systems, and I am active in communities around new media art and creative coding. Recently I have also been experimenting with collaborative coding practices in which language models generate or assist with code; some people refer to this informally as "vibe coding".