Who Supports the Builders?
Tyson Walters
Jan 12
An invitation from the professional supervision field to those developing high‑stakes AI systems
Artificial intelligence is increasingly embedded in systems that shape people’s lives: justice decisions, welfare access, health triage, employment screening, risk prediction, and public safety. Considerable effort is rightly being invested in technical robustness, bias mitigation, governance frameworks, and regulatory compliance.
What is discussed far less often is a simpler, human question: Who supports the people building these systems?

High‑stakes work creates human load
In other sectors where decisions carry significant consequences for others — such as health, justice, social services, and corrections — it is widely accepted that technical competence alone is not sufficient. Practitioners are routinely exposed to:
Ethical ambiguity and moral tension
Power over others' outcomes
Uncertainty and imperfect information
Pressure to act under time and organisational constraints
Responsibility for unintended or downstream harm
Over decades, these sectors have learned (often the hard way) that without structured reflective support, risk increases — to service users, organisations, and practitioners themselves.
The response has been professional supervision: a structured, confidential space where practitioners are supported to think critically, ethically, and sustainably about their work.
A quiet parallel with AI development
Those building and deploying AI systems increasingly operate under similar conditions:
Design decisions made upstream shape outcomes far downstream
Responsibility is often distributed across teams, tools, and time
Harms may be indirect, delayed, or difficult to trace
Engineers and product leaders may carry moral unease without an appropriate place to speak it
Organisational incentives can unintentionally narrow the space for dissent or doubt
Yet most support structures in technology environments focus on outputs rather than inner load:
Line management
Code review
Retrospectives
Ethics boards and compliance processes
These are essential — but they are not designed to hold ethical uncertainty, moral distress, or the human impact of working inside powerful systems.
What professional supervision actually is (and is not)
Professional supervision is often misunderstood. It is not therapy. It is not performance management. It is not about telling people what decisions to make.
At its core, supervision is a structured reflective practice with three inter‑related functions:
1. Normative: ethics, responsibility, and standards
A space to reflect on questions such as:
What are we responsible for — and what are we not?
Where might harm be occurring, even if unintentionally?
How do organisational values translate into everyday design decisions?
2. Formative: learning and development
A space to deepen professional judgement:
Surfacing assumptions embedded in data or design
Learning from near‑misses and dilemmas, not just failures
Strengthening cross‑disciplinary thinking and ethical literacy
3. Restorative: sustainability and wellbeing
A space to acknowledge human impact:
Moral distress or unease
Burnout driven by pace and pressure
Isolation in responsibility‑heavy roles
In many sectors, this function is recognised as a form of risk management, not a personal indulgence.

Why this matters now
As AI systems scale, so does the distance between decision‑makers and those affected by decisions. Without deliberate reflective spaces, several risks increase:
Ethical blind spots become normalised
Responsibility becomes diffuse and harder to hold
Practitioners disengage emotionally as a coping strategy
Organisations rely solely on technical or legal fixes for fundamentally human problems
Supervision does not replace governance or ethics review. It complements them by working at the level where decisions are actually made: inside people and teams.
An invitation, not a prescription
The supervision models used in health or justice cannot be lifted wholesale into AI development environments. They must be adapted — culturally, linguistically, and structurally.
But the underlying insight is transferable:
When people work inside systems that can profoundly affect others, they need structured spaces to think, reflect, and remain human in the work.
This piece is not a proposal, a critique, or a call‑out. It is an invitation to dialogue.
What forms of reflective support already exist in AI teams?
Where do ethical doubts or moral tensions go — if they go anywhere at all?
What might responsible AI development look like if reflective practice were treated as infrastructure, not an afterthought?
These are not technical questions alone. They are professional ones.
Tyson Walters is a professional supervisor and practice leader with over 15 years’ experience supporting practitioners working in high‑risk, high‑impact systems, including justice, corrections, and social services. His work focuses on reflective practice, ethical decision‑making, and sustaining people who work with power and responsibility.