T(h)ree is a permanent landscape intervention that gives voice to a 200-year-old tree in a public park, connecting natural, human and technological infrastructures to create a more-than-human experience embedded within a planetary-scale system.
T(h)ree
Clarence Gardens, London, UK
Co-created with residents of Regent’s Park Estate, T(h)ree is a permanent landscape intervention in Central London, integrating community knowledge with natural infrastructure and AI technology to give ‘voice’ to a London Plane Tree that has witnessed hundreds of years of history, so people can talk with it and ask questions.
Extending across the entire public garden, with digital networks inscribed on its surfaces in cobble setts - through grass areas and even the dog run - the project presents a permanent physical manifestation of AI’s hidden infrastructures and ethical dimensions in a public space which, we hope, will encourage deeper dialogue on more-than-human communication.
Prompted by a local community champion’s question - “If this tree could talk, what stories would it tell?” - it builds on our work designing more-than-human public spaces. A key moment in ODAC’s Story Trail (a series of new public interventions in and around Regent’s Park Estate), the project’s title refers to the fact that there are three genetically identical trees on site, all connected underground through their root structures.
Blending natural, human and technological systems, sculptural root-like mounds formed from charred wood and rusted steel emerge from the soil, connected underground by digital and mycorrhizal networks, while an embedded planetary-scale system diagram maps the project’s complex ecological, technological, and social interdependencies.

T(h)ree root mounds, cobble sett networks and Story Tree. Photo by Stephen Chung.

Ask T(h)ree any question and you'll get a response. Photo by Nick Turpin.
Voice is fundamental to social interaction, underpinning the idea of personhood as well as broader democratic processes.
Though humans can’t yet directly converse with non-humans and engage them in these processes, progress has been made - yet we’re still not sure what we will talk about, or how we’ll deal with conflict, once we can. Visiting T(h)ree, you’ll be able to practise those conversations.
As you get closer you might hear a collection of voices (synthesised from local residents’ voice recordings). If you ask a question - any question - you’ll get a response. The large language model (or ‘LLM’) driving the interaction is configured with residents’ individual stories, memories, perspectives and beliefs about the tree - as well as input from local experts and an arborist - shaping a subjective, multi-layered representation of the tree’s perceived personality, character and history.
The voices that you hear, slightly different every time the Story Tree speaks, are a synthesised amalgamation of local residents’ voices. The way the Story Tree responds is unpredictable, but always based on language that residents themselves have used to describe it, and spoken in voices drawn from the residents themselves.
Every now and then, you’ll hear it humming the melody of ‘Daisy Bell’ - the first song ever sung by a computer, at Bell Labs in 1961 - a reference here to the fact that technological systems often misunderstand human categories (‘daisy’, the nearby flower, versus ‘Daisy’, a person’s name).

Daisies grow in and around T(h)ree root mounds

“It blends seamlessly with its surroundings. The pathways that lead to it remind people of the connection it has to surrounding trees.
I can imagine an elderly lady talking of her passed husband and a grown child 50 years from now being able to ask about their grandfather.
I imagine inquisitive children asking questions about the tree that many adults might not have the patience to answer. And, in a very special way that almost made me cry - if you sit still long enough it will play a song from my mam’s favourite film. A homage to 2001: A Space Odyssey.
I have all of this information in my brain about such a niche topic and for the first time I was able to tell some people that really heard and cared and wanted it to be part of something that they offered to the zeitgeist.”
- Liam Mcgough, Arborist
T(h)ree root mound formed from charred wood and rusted steel. Photo by Alice Horsley.
At a time when voice emanates from technological artefacts all around us (home automation systems, mobile devices, etc.), we hope to flip the relationship, using technology to deploy ‘voice’ in a landscape and encouraging people to perceive the living world differently.
We are interested in exploring the role that AI, with its deep and often negative societal and environmental impacts, might play in facilitating meaningful interaction between humans and non-human species.
In a massively interconnected world - socially, technologically and ecosystemically - and particularly while ‘nature’ is still regarded as separate from ‘human’, we wanted to make explicit the interdependence of human and natural systems; human and technical systems; technical and natural systems.
The project may seem to be about non-humans speaking with us, but it’s more about what we ourselves might say to them, and what it means to have such conversations in public.

T(h)ree extends across Clarence Gardens, Regent's Park Estate in London, revealing connections between digital and ecological systems

Rescripting Technology
We hope to expose the hidden layers of technology that are purposefully obscured in mainstream technology development. We’re interested in how recent technologies, like AI and LLMs, can be re-imagined, re-designed and re-presented outside the corporate imagination, in more creative and constructive ways that get others involved in building and deploying them. This project strives towards that goal in three ways.
First, rather than using a generic speech-synthesis voice you might find pre-installed on your laptop or phone, the voice of the Story Tree is bespoke, built from voice recordings contributed by local residents - the Tree’s anthropomorphised voice is the co-created voice of its neighbours.
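The project’s actual voice pipeline isn’t reproduced here, but as a rough illustration, a bespoke cloned voice of this kind could be built with an open-source text-to-speech model conditioned on residents’ consented recordings. The sketch below assumes the Coqui TTS library and its XTTS voice-cloning model; the file paths and helper function are purely illustrative.

```python
# Sketch: synthesising the Story Tree's voice from residents' recordings.
# Assumes the open-source Coqui TTS library (pip install TTS) and its XTTS v2
# voice-cloning model - the installation's real voice pipeline is not documented here.
import random
from pathlib import Path

from TTS.api import TTS

# Consented voice recordings contributed by local residents (hypothetical paths).
RESIDENT_RECORDINGS = list(Path("voice_captures/").glob("*.wav"))

# Load a multilingual, voice-cloning text-to-speech model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

def speak(text: str, out_path: str = "story_tree_reply.wav") -> str:
    """Render `text` in a voice conditioned on one resident's recording.

    Picking a different reference recording for each utterance is one simple
    way to get a voice that is slightly different every time the tree speaks.
    """
    reference = random.choice(RESIDENT_RECORDINGS)
    tts.tts_to_file(
        text=text,
        speaker_wav=str(reference),  # clone timbre from this resident's sample
        language="en",
        file_path=out_path,
    )
    return out_path
```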
Second, the large language model (LLM) that drives the conversational interface is configured around residents’ stories, memories, questions and subjective data. This means the words that the Story Tree uses are derived from the perspectives, perceptions and language used by the people who know it best.
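Again, the installation’s actual model and prompt aren’t published here, but the sketch below shows, in minimal form, how community-contributed stories could be folded into the system prompt of an OpenAI-compatible chat model; the model name, file names and prompt wording are assumptions for illustration only.

```python
# Sketch: configuring a conversational LLM around residents' stories.
# Assumes an OpenAI-compatible chat API (pip install openai); the model,
# data file and prompt below are illustrative, not the project's own.
import json

from openai import OpenAI

client = OpenAI()

# Subjective, community-contributed material: stories, memories, arborist notes.
with open("community_stories.json") as f:
    stories = json.load(f)  # e.g. [{"resident": "...", "memory": "..."}, ...]

SYSTEM_PROMPT = (
    "You are the Story Tree, a 200-year-old London Plane in Clarence Gardens. "
    "Answer visitors in the first person, drawing only on the memories, "
    "perspectives and language contributed by local residents below.\n\n"
    + "\n".join(f"- {s['resident']}: {s['memory']}" for s in stories)
)

def ask_the_tree(question: str) -> str:
    """Return the Story Tree's reply to a visitor's question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

In a full pipeline of this kind, a reply generated this way could then be handed to a voice model like the one sketched above before being played back in the garden.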
Third, embedded in the sculptural mounds themselves is a detailed diagram of the system that drives the entire installation.
This system diagram traces the holistic, planetary-scale connections that visitors can navigate on site, while also experiencing their result in the form of T(h)ree itself.

T(h)ree system diagram embedded into the root mounds. Photo by Nick Turpin.

More than just technical infrastructure, the system diagram maps out the project’s complex ecological, technological, and social interdependencies, including:
- The components of the installation’s LLM, their environmental impact and energy use, and the open-source processes they’re built on.
- The natural inhabitants who live in and around the Story Tree, helping it thrive and leading residents to form significant memories that become part of the LLM used in the installation.
- The military funding that supports technology entrepreneurs who create platforms used to train AI models.
- The copyrighted content that often goes into LLMs, as well as the emotional labour of gig-economy workers who annotate the models.
- The oligarchs who influence political decisions that impact climate and biodiversity, including the organisms that live in and around the Story Tree.
Planetary-scale system diagram of T(h)ree
Through this project we hoped to probe the ethical implications of creating AI-driven anthropomorphised personas for non-human species, underpinned by a wide range of perspectives offered by the local community.
In this way, the project presents a paradigm for ‘more-than-human’ interaction that intertwines human, non-human/natural and non-human/artificial intelligence - one that could help reshape our interdependent futures.
