On The Edge of Something Else
What the Seismic report on AI signals about public disillusionment, planetary readiness, and the deeper role of digital tools like AI in our collective transformation as human Earthians.
My partner often reminds me to stop assuming what I have to share isn’t worth sharing. I tend to move quickly through insights that feel obvious to me, even when they might carry value for others navigating similar terrain, or who simply don’t have the time to engage with the complex entanglements of our current predicament as a species. So when a dear friend and Earthian brother, Nathan Kinch, recently shared with me the On the Razor’s Edge report released by Seismic, my initial response after going through it was somewhat... meh.
This response was not because the work lacked rigour and clarity. Quite the opposite. The Seismic team produced a fairly comprehensive, globally grounded account of how people are feeling about artificial intelligence. But as I read through the findings (public distrust in AI labs, concern for democracy and human agency, fears for children’s mental and emotional wellbeing, etc.), it didn’t feel new. It felt like confirmation of a mood I’ve been tracking in the field for many years now. It reflected what I, and many others working on the frontlines of sociotechnical change, already sense: the current paradigm is losing legitimacy. And it’s not just the AI/tech paradigm, driven by the manboy broligarchs of Silicon Valley, that has lost its legitimacy. It runs deeper. It’s civilisational. It’s how we feel about the world, how we relate to each other, how we make sense of what the fuck is happening and derive meaning from it so we can craft a future more attuned to life on this planet.
But a few hours after first going through the report, I returned to my draft chapter for the forthcoming Springer Philosophy of Science series (a contribution titled Mirrors, Machines and Entangled Minds), and the significance of the report landed differently as I sat with scribbles, words, theories and a long reference list in my Zotero. It was like an “aha” gamma-wave state.
What Seismic has captured isn’t just a map of disillusionment. It’s a glimpse of the techno-moral zeitgeist of our Anthropocentric epoch and the dominance of the Dark Mirror view in our collective moral imagination. And I sensed in it a signal that the cultural superstructure is primed for transformation. The report does not advocate for a new paradigm, but it does, in many ways, confirm the readiness for one. It does not name regenerative futures or articulate civilisation and its discontents. But what it does reveal is a public that is increasingly open to life-affirming, relational, and pluralistic ways of thinking about AI.
This is exactly the context in which my book chapter is situated. So somewhat selfishly, I thank Seismic for another wonderful reference I can use.
Diagnosing the rupture
If you haven’t already, I suggest going through the report in full, but I’ll outline some of the findings here to show a readiness for thrutopian paths to regenerative AI and, most importantly, for a civilisation more attuned to life on this planet we all call home.
On the Razor’s Edge is clear in its findings:
Over half of the public feel like AI labs are playing god.
Only a little over a third believe AI labs have our best interests at heart.
~40% of people agree that we should not pursue artificial general intelligence and should stop all technical development in this area.
Across all markets the public were very clear: on a macro level, 70% globally either agreed or strongly agreed that AI should never make decisions without oversight.
And perhaps most revealing: many parents worry more about AI’s impact on their children’s emotional and relational wellbeing than on their economic futures. I am certainly one of those parents. Not to mention the ecological impact of the current approaches to AI development and use.
And really, this isn’t just a regulatory concern or some policy signal in the narrow sense. It’s a cultural rupture. It marks a withdrawal of consent from the dominant paradigm of AI development: one that is extractive, hyper-centralised, speed-driven, and that positions intelligence as a competitive, commodified force rather than as a capacity of relational networks for coherence.
And the trajectory we’ve been on is also one of the colonial mindset itself, a venture of epistemic capture.
This was something I felt viscerally when I tuned in earlier this year to watch Jensen Huang, CEO of NVIDIA and the high priest of compute in a black leather jacket, deliver the 2025 NVIDIA GTC Keynote.
CUDA-X offers tools for everything: tooling for industries and modern scientific advancement that silently reshapes what counts as knowledge, and what is rendered obsolete in its wake.
This is the slow violence of epistemic capture, where intellectual diversity and lived experience are replaced with technical homogeneity and reductive orthodoxy. And the infinite plurality of ways of knowing and being in the world? Well, those must be flattened into vectorised efficiency.
From what I can sense, the mood music this Seismic report expresses is more a signal of a deeper grief over what we are trying to remember: that we are part of the whole of life on this planet. And perhaps more importantly, the insights point to a longing for something else that we are not yet collectively able to imagine and craft into being. Yet.
A frame for something else
In my upcoming book chapter Mirrors, Machines and Entangled Minds, I describe tech as a mirror: a system that reflects, distorts, and amplifies the values, trauma patterns and imaginaries we encode within it. You can read more about this lens as it applies to tech in general in an IEEE Technology and Society Magazine article I co-authored with the brilliant Alja Isakovic, “Rainbow Mirrors: Technology and Our Collective Moral Imagination”. The gist is that if we continue to design and build AI through extractive logics, it mirrors those logics back at us. If we build it through relational, regenerative worldviews, it has the potential to serve an important sociotechnical function in collective flourishing.
The sociotechnical architecture I am proposing is more a field of possibility to explore together, a thrutopian pathway using what I’ve come to call Rainbow Mirrors: more diverse visions that reflect the plurality of social norms, collective aspirations, and the potential of technology to support human and planetary flourishing.
This frame isn’t about offering singular solutions. It offers conditions for re-relationship. For reimagining. For participatory design patterns and decentralised governance structures that can support the emergence of more life-aligned, morally attuned forms of artificial intelligence. Just imagine if OpenAI was actually open, was designed and incorporated as a platform cooperative from the outset, and took seriously the insights that came from the Democratic Inputs to AI grant program it ran in 2023. Yeah, we’d be in a different place. But there is a historical context to grapple with, and Moloch forces and all the rest, that meant this was an impossibility.
I won’t go deep on the privacy-preserving architecture, the governance, or the decentralised infrastructure for data storage and computation, though I will share a provocation in a moment. But before I do, I want to say that it’s not for me to decide future designs and architectures as some “expert”. The futures we yearn for require bio-psycho-socio-cultural acupuncture points to harmonise the energy flow from all the emotional, epistemic, and existential disorientation that people are feeling in this period of the great phase transition. You might also refer to this as what the recently passed Earthian Elder Joanna Macy termed The Great Turning, or what Glenn Albrecht frames as an epochal transition from the Anthropocene to the Symbiocene.
Alternatively, you might use the frame of the brilliant Australian Indigenous scholar, critic and misfit of metamodernity, Tyson Yunkaporta, and say we have to learn how to be in right relation with life.
Whatever your frame or reference, you know we gotta unlearn a whole bunch of stuff as a species. And while we do, we also gotta learn to find a way through.
Cultural attunement and the Diverse Dreamers and Globalist Guardians
The Seismic report identifies “five publics [that] are politically active, emotionally primed, and culturally connected. If they move, the conversation moves with them”.
Putting aside the challenges with segmentation of this nature, two of these segments stand out as particularly aligned with the thrutopian path I allude to:
The Diverse Dreamers are often younger, more religious or spiritual, and deeply concerned with the emotional, social, and ethical impact of AI—especially on children, identity, and collective trust. They are already primed for frameworks rooted in pluralism, care, and narrative healing.
The Globalist Guardians skew older and more civically oriented. They are concerned with climate, war, and institutional legitimacy. They support slow governance, planetary stewardship, and global checks on power. This alludes to the kinds of infrastructure required for an approach to AI development that is life-affirming.
Together, these groups do not represent a fringe. They represent a latent coalition of cultural readiness. They may not yet use terms like “regenerative AI” or “moral imagineering,” but they are reaching toward the same underlying values: coherence, equity, relationality, integrity, dignity and care as Earthian beings.
Importantly, all other “segments” must be included in the journey, though coordinating all of this is a challenge in and of itself.
From markets to phygital mycelium
For centuries, we’ve relied on markets, monetary systems, and incentive structures to coordinate human behaviour at meso and macro scales. These systems have brought undeniable efficiencies of scale during the Anthropocene epoch. But they’ve also abstracted us from land, eroded relational trust, and externalised costs to future generations and the interconnected ecologies all human Earthians rely upon.
Now, in the face of complex and entangled challenges, we need to reimagine the substrate of coordination itself. Not through centralised bureaucratic command or algorithmic governance, which is what I sense is likely to be the response to the accumulation of public sentiment evidenced in the Seismic report. This is the default we tend to rely upon: the current Hobbesian Leviathan (the state and institutions of governmentality) that we somehow expect to address issues of the commons, because the nature of humans is assumed to be feral and parochial rather than effectively political and civic.
This is Government as saviour. And a Faustian bargain. I say that because we cannot reform that which is dominated by regulatory capture, perverse incentives, entrenched corruption, and a lack of sensitivity and agility to cosmo-local complexity. Regardless, we will have to put some trust in these institutions in the short term along this path.
But what is sorely needed is a mycelial web. A relational infrastructure that helps us sense, signal, and synchronise across scales.
This is where life-aligned AI comes in: as a way for human discernment, ritual, and embodiment to be more connected and supported through a phygital (physical + digital) substrate for the kind of planetary coordination that regenerative futures require. Hi-tech and low-tech ways of coordinating kincentric care, meaning, and life-force.
In our current phygital world where the digital and physical are irreversibly entwined, life-aligned AI can help weave together the threads of planetary cognition if we embed it within the right relational architecture:
Phygital coordination infrastructure:
Privacy-preserving federated learning to enable collective learning without surveillance or extraction.
Commons-based data stewardship where data is held in trust with communities and ecosystems.
Platform cooperatives and transitions to DAOs to help ensure benefit flows toward contributors, stewards and all Earthian ecologies.
Guardian Councils as intergenerational, cross-cultural assemblies that can pause, guide, and steward AI development in alignment with a plurality of planetary ethics.
Kincentric sensing networks as bioregionally aware systems that help us attune to the signals of the living Earth and support right relation with Gaia.
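To make the federated learning point above a little more concrete: the core idea is that each community trains on data it keeps in its own custody and shares only model parameters, never the raw data. Here is a minimal illustrative sketch of federated averaging over toy one-parameter models; the community names and data are made-up placeholders, not anything from the report.

```python
# Minimal sketch of federated averaging: each "community" computes a model
# update on data that never leaves its custody; only parameters are shared.
# Community names and datasets below are illustrative placeholders.

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a local dataset, fitting y ≈ w * x."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_average(updates):
    """Aggregate local models by simple averaging; raw data is never pooled."""
    return sum(updates) / len(updates)

# Each community holds its own observations locally (all sampled from y = 2x).
communities = {
    "river_catchment": [(1.0, 2.0), (2.0, 4.0)],
    "urban_commons": [(3.0, 6.0), (4.0, 8.0)],
}

weights = 0.0
for _ in range(50):
    # Only the updated weights travel between rounds; the (x, y) pairs stay put.
    updates = [local_update(weights, data) for data in communities.values()]
    weights = federated_average(updates)

print(round(weights, 2))  # → 2.0, the shared pattern learned without pooling data
```

Real deployments add secure aggregation, differential privacy, and far richer models, but the coordination pattern is the same: learning flows between communities while the underlying data stays held in trust where it was gathered.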
This must be a pragmatic scaffolding for the kind of coordination that neither markets nor bureaucracies have been able to provide. The technical pieces of this puzzle are there, and this report from Seismic indicates that the Overton window is open. The movements are ready to be mobilised, but we still lack what I call the collective imagination systems to bring it into being.
The soil of collective imagination
Our imagination is extraordinary in its power, and what cannot be imagined cannot be created. So if we are to coordinate regenerative futures, we must cultivate collective imagination systems.
These are the often-invisible roots beneath political possibility, ethical discernment, and cultural transformation.
In my work with Collective Futurecrafting, I’ve been exploring how imagination functions as a plural ecology of capacities. This model, represented in the diagram above, draws inspiration from thinkers like Mark Johnson, Geoff Mulgan, and Henry Jenkins, each of whom has helped articulate a distinct, vital form of imagination.
Moral imagination
Rooted in embodied ethics and cognitive science, Mark Johnson essentially frames moral imagination as the art of discerning the contours of care and meaning in the lived, often ambiguous contexts we find ourselves in. It allows us to sense what matters, and why, across tangled scales of responsibility.
It asks: What does care require in this moment, and at what scale?
In a time of polycrisis, this imagination helps us hold tensions without rushing to closure.
Social imagination
Geoff Mulgan has posited that our crisis is not just economic or environmental, but imaginal, and that modern societies have lost the capacity to imagine alternatives. Social imagination, in his framing, is the collective capacity to envision and prototype better futures, even in the face of entrenched systems.
It asks: What if things could be otherwise?
This is the creative engine of transformation: the capacity to rehearse the possible despite the inertia of the probable.
Civic imagination
Henry Jenkins introduces civic imagination as the ability to imagine ourselves as civic actors, essentially part of a democratic “we” that can spark, narrate and enact shared futures. It connects personal narrative with collective action, and cultivates public agency through storytelling, media, and participation.
It asks: What might we do, together, to shape the world?
This form of imagination underpins political agency and rehearses democratic belonging.
These three forms of imagination don’t operate in isolation.
They entangle, root, and regenerate within what I call the Soil of Collective Imaginal Praxis. A dynamic field where moral, social, and civic imaginations co-arise and mutate through collective play, grief, refusal, and improvisation.
This is the shared Mundus Imaginalis as infrastructure for transformation and the substrate from which thrutopian pathways emerge.
It’s undeniable that AI tech is rapidly reshaping how we make sense of ourselves and each other, for better or worse. In this world, cultivating these imaginal biomes becomes a strategic necessity. Without them, we risk building technical capacity without ethical orientation, and scaling coordination without shared meaning.
A thrutopian threshold moment
This is not a time for minor adjustments, technical safeguards, or the plethora of AI governance and ethics frameworks, standards and guidelines alone. This is a time for reimagining what AI is for, how it aligns with what matters for the flourishing of life on this planet, and how we might best be in relationship with it.
If the dominant system no longer holds, we must ask: what wants to take root in the compost of collapse?
And if we move beyond critique and diagnosis, and are finally able to imagine, together, what becomes possible that wasn’t before?
The Seismic report didn’t give me new information. But it did give me affirmation. A little confirmation bias, yes, but sometimes that is the best signal to mindfully attune to. That the work I’m doing, and that of many others in the field, is not just a thought experiment. It is part of a quiet cultural threshold that is becoming more visible by the day.
The edge here is an opening and we are not alone in this discontent. This isn’t just about AI. Because the current context of AI tech is just a mirror to where we are at in our developmental phases as Earthian children of Gaia.
But I’m also curious: what do you sense is reflected in this report? What new ways of being, relating, imagining and collectively crafting the future resonate with you?
I used a combination of cognitive tools for drafting and writing this. Good old pen and paper. Whiteboard and marker. Obsidian to draft, then ChatGPT to refine, while also conversing with the custom GPT from Seismic (which was doing some mushrooms and hallucinating).
And I’m now using the icons for human-machine collaboration that were recently published by the Dubai Future Foundation, which you can find here.