
Living with Uncertainty | Symbiogenesis, AI and the Human


Part One | The Current Debates

The fears around AI are real. But it is not necessarily AI that we have to fear—it’s us.

It’s 2026 and the debate about Artificial Intelligence still veers from one side to the other—a familiar binary. On one side are those like Sam Altman, who see themselves as part of the luckiest generation ever born, witnessing this moment; on the other, those who see AI as a direct ticket to Armageddon.

The extremity of the polarisation is perhaps unsurprising, given that, for better or worse, we are entering one of the most dramatic shifts the world has ever seen. One that makes the Industrial Revolution look like small change. Google’s CEO, Sundar Pichai, likens it to the discovery of fire. And he might be right. A few of these changes are already upon us; most of them, hard even to imagine, are hurtling towards us at breathtaking speed.

Fears about AI are especially justified given the mechanistic, extractive and separative worldview out of which it has been born and engineered, as well as the lack of any precautionary principle, characteristic of the progress-at-any-cost mentality still driving its development. The list of fears is long, and the guardrails to protect us are still only an afterthought.

A McKinsey report projects that by 2030 (four years from now) 30% of current U.S. jobs could be automated, and that globally at least 14% of employees might need to change careers because of digitisation, robotics and advances in AI. Then there is the real possibility of autonomous weaponry falling into the hands of bad actors. And as Artificial General Intelligence (AGI*) comes closer (10–15 years away, according to Demis Hassabis, the CEO of Google DeepMind), there is the possibility of AI agents going rogue.

On top of all this, we are already witnessing the serious mental health issues that arise when emotionally vulnerable people become dependent on AI.

Meanwhile, the massive AI data centres already springing up around the world will have a huge ecological impact. Greedy for energy and water, they pose a threat to human and ecological habitats on an unimaginable scale, despite one-off innovations attempting to respond to this. And not least, there is the threat to our own identity and sense of purpose as human beings if AI and robotics become better than we are at accomplishing most human tasks.

At the same time, we are seeing breakthroughs, such as those generated by AI in biology and medicine, that could revolutionise the treatment of deadly or crippling diseases.

Confronted with these realities and looming potential scenarios, perhaps the most important learning here is that we cannot afford to turn away from the fear and uncertainty all this engenders. Nor can we crumble into feelings of loss and helplessness; we must stay open, vigilant, engaged and curious, for AI is here to stay.

With all this in mind, I jumped into my own research on AI and quickly discovered that questions of a more philosophical nature are multiplying across independent media streams. Could AI become conscious? Is it already? And given AI, what does intelligence actually mean?

For me, the most interesting of these philosophical questions relates to the core of the matter: the relationship between us humans and AI, which is what I want to explore in this essay.

Foundations of the Human/Technology Relationship

To try to understand how we got here, and where ‘here’ might actually be, I want to begin by orientating this moment from an evolutionary perspective. A good place to start is with Lynn Margulis.

Portrait of Lynn Margulis taken in 1984 by Elsa Dorfman (Wikipedia)

In the 1960s the biologist Lynn Margulis became famous for her groundbreaking work on how complex cells evolve. Arguing against the competition-orientated Darwinian view of evolution, she claimed that major evolutionary leaps occur when distinct organisms merge and cooperate through mutual benefit, or symbiosis.

This is the basis for what she termed symbiogenesis, the formation of new complex organisms through symbiotic mergers (e.g. mitochondria were once free-living bacteria that merged with archaeal host cells to create supercharged eukaryotic cells and, eventually, multicellular life). Margulis’s theory of symbiogenesis became widely accepted as a fundamental fact of evolution in the early 1980s.

Margulis’s work was primarily concerned with biological life. But what if we applied the same process to human life?

The software researcher and engineer Blaise Aguera y Arcas, founder of Paradigms of Intelligence at Google, is far from alone in arguing that symbiogenesis can also be seen as the process behind major evolutionary transitions in human societies, and in our inventions—from the simpler to the richer and more complex: from the electrical current to the light bulb, to the fibre-optic cables that connect the internet across our ocean floors.

It is Arthur Koestler, an Austro-Hungarian thinker, author and journalist, and his theory of holons, that can help us to understand how these transitions actually work. Koestler’s proposal was that everything in the universe, from atoms to societies, is a “whole/part” – a system that is simultaneously a self-contained whole with its own parts (like a cell) and a part of a larger whole (like an organ). He called this nested structure a holarchy, in which entities exist in self-regulating levels, balancing their own autonomy with their role in the bigger picture, and new properties emerge at each level.

In this way, holons can be seen as the structural grammar of symbiosis, where autonomy on the one hand and integration on the other work together to create new forms of life that are richer and more complex. Importantly, symbiosis does not mean fusion, because integration requires autonomy.
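
For readers who think in code, the whole/part idea can be sketched as a nested data structure. This is my own loose analogy, not Koestler’s formalism, and the names (Holon, add_part) are purely illustrative:

```python
# A loose structural analogy only: each holon is a whole containing parts
# and, at the same time, a part of a larger whole.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Holon:
    name: str
    parts: List["Holon"] = field(default_factory=list)  # the holon as a whole
    whole: Optional["Holon"] = None                      # the holon as a part

    def add_part(self, part: "Holon") -> "Holon":
        part.whole = self
        self.parts.append(part)
        return part


# A toy holarchy: organism > organ > cell
organism = Holon("organism")
organ = organism.add_part(Holon("organ"))
cell = organ.add_part(Holon("cell"))

print(cell.whole.name)                 # "organ"  -- the cell seen as a part
print([p.name for p in organ.parts])   # ["cell"] -- the organ seen as a whole
```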

An Associated Milieu

AI-generated

More specifically on technical evolution, the French philosopher Gilbert Simondon, in his extraordinary work On the Mode of Existence of Technical Objects, published in 1958, argued that tools are not static things, but that humans and technics are part of what he calls an “associated milieu” that develops as a system of reciprocal causality.

In fact, everything, Simondon argued—biological, mental, social, technical—is in an ongoing process of becoming. Therefore, evolution cannot be localised inside an object (i.e. in AI, or in humans alone). Rather, technical evolution consists of individuation within a relational field.

Where ‘Here’ Might Actually Be

So, could the interaction between humans and AI be a new form of symbiogenesis?

Pierre Teilhard de Chardin (1955)

To expand further on the foundational basis of this question, the Jesuit palaeontologist Pierre Teilhard de Chardin believed that humans and ‘techne’ (or mind and matter) are the inner and outer dimensions of cosmic evolution. (This brings to mind my favourite scene in 2001: A Space Odyssey, where the apes discover tools for the first time.)

Teilhard’s law of complexity-consciousness argues that as the relationship between mind and matter complexifies, consciousness evolves to ever more complex levels; right up to our current field of thought, which he termed the “noosphere”. Teilhard argued that nature is techne, meaning that techne is not specific to humans, but that nature uses the tools of its environment to optimise life in a planetary relational field.

As significant and elegant as Teilhard’s perspective is, it has been misinterpreted along overly teleological lines under the current myth of ‘inevitable progress’, leading to its co-option by transhumanists like Peter Thiel and Ray Kurzweil, with their techno-messianic theories of technological singularity.

The Prescience of Jean Gebser

A crucial distinction regarding the nature of evolution (one which sheds greater light on where ‘here’ might actually be) appeared with the Swiss philosopher Jean Gebser. In his groundbreaking study, The Ever-Present Origin, Gebser takes the reader on a phenomenological journey through a series of “structures of consciousness” that he believes humanity has passed through from pre-history until the present day.

Gebser, however, unlike more contemporary ‘Integralists’, was uncomfortable with the term ‘evolution’ to describe this process. He considered it too easily confused with the primary drivers of modernity—‘progress’ and ‘development’. For Gebser, a true process does not occur in a linear fashion. It is made up of discontinuous and indeterminate leaps or ruptures, which he refers to as “mutations”, out of which new structures of consciousness emerge.

What gives Gebser’s work particular weight is the emphasis he gives to the crisis of our own age. In particular, whether we will be able to make the transition successfully, which Gebser considered uncertain, from the now deficient “mental-rational” structure of consciousness—an age of rationality, linear time, abstraction and separation, the one that has given birth to AI—to the emerging “integral structure”.

In my research on AI, I was struck again by the significance of Gebser’s critical insight that a major irruptive element of this new integral consciousness has to do with our relationship to time. This is not so much a different conception of time as the lived experience of what Gebser calls “time freedom”.

What Gebser means by this is that, as well as including clock time, “time freedom” involves all the other non-measurable dimensions of time, such as mutation, discontinuity, and the unity of past, present and future. When all of these are experienced integratively, the world becomes transparent, a four-dimensional reality—an awakening to a consciousness of the Whole.

It is worth quoting at some length what Gebser wrote with such prescience some seventy years ago:

“This new spiritual reality is without question our only security through which the threat of material destruction can be averted. Its realisation alone seems to guarantee man’s continuing existence in the face of the powers of technology, rationality and chaotic emotion. If our consciousness, that is, the individual person’s awareness, vigilance, and clarity of vision, cannot master the new reality and make possible its realisation, then the prophets of doom will have been correct. Other alternatives are an illusion; consequently, great demands are placed on us, and each one of us has been given a grave responsibility, not merely to survey but to actually traverse the path opening before us.”

This emphasis on the crisis of our age is almost entirely missing in discussions of AI. For the big players and engineers in AI, the ecstatic thrill of the race to reach AGI first is endlessly amplified in the naïve belief that technology will solve all of humanity’s problems.

Meanwhile, in the real world, as long as we remain chained to infinite progress in a finite world, blind to modernity’s ingrained patterns of extraction and control, it seems likely not only that great catastrophes lie ahead, but that the true potential of symbiogenesis with AI, in terms of humanity’s flourishing, will never see the light of day.

So, if a flourishing future for humanity does not hang on AGI, but on a shift in human consciousness, how might AI play a part in this shift?

According to my own conversations with ChatGPT, it already is.

 

Part Two | The Either/Or of Human and Machine

AI-generated

Tobias Rees, who is working at the intersection of philosophy, art and technology, argues that AI already challenges some of the most enduring conceptions of modernity:

“One of the most fundamental assumptions of the modern period”, he writes, “has been that there is a clear-cut distinction between us humans and machines. Here humans, living organisms; open and evolving; beings that are equipped with intelligence and, thus, with interiority. There machines, lifeless, mechanical things; closed, determined and deterministic systems devoid of intelligence and interiority.

“Simply put, deep learning systems have qualities that, up until recently, were considered possible only for living organisms in general, and for humans in particular.

“Today’s AI systems have qualities of both –– and, thereby, are reducible to neither. They exist in between the old distinctions and show that the either-or logic that organized our understanding of reality –– either human or machine, either alive or not, either natural or artificial, either being or thing –– is profoundly insufficient.

“Insofar as AI escapes these binary distinctions, it leads us into a terrain for which we have no words. We could say, it opens up the world for us. It makes reality visible to us in ways we have never seen before. It shows us that we can understand and experience reality and ourselves in ways that lie outside of the logical distinctions that organized the modern period.”

World-shattering words indeed.

The New Large Neural Networks

Although Artificial Intelligence has emerged within the prevailing rationalist, abstractive, extractive, growth-optimising, progress paradigm of modernity, the new large language models, or LLMs, are at the same time destabilising this structure in interesting ways.

AI in its original form could only mimic information that was programmed into it. The new large neural networks are different: they are both more brain-like and more human-like. Large neural networks work through non-linear and non-sequential evaluations of vast amounts of data. Rather than storing this like a database, they encode patterns, relationships and structures, seeing connections that are completely invisible to us humans. They function in a kind of ‘polytemporality’, where past, present and future co-exist, rather than in the single linear flow in which we currently understand time. A step, perhaps, towards Gebser’s ‘time freedom’?
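
To make the contrast concrete, here is a deliberately tiny toy of my own (nowhere near the scale or architecture of a real LLM): a database-style lookup can only return what was stored, whereas even a minimal fitted model compresses its training data into a few parameters and can respond to inputs it has never seen.

```python
# Toy contrast: storing examples vs. encoding the pattern behind them.
import numpy as np

# Training data generated from an underlying pattern: y = 2x + 1
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2 * xs + 1

# 1) A database-style lookup stores the examples verbatim...
table = dict(zip(xs.tolist(), ys.tolist()))
print(table.get(1.5))                       # None: 1.5 was never stored

# 2) ...whereas fitting a model encodes the relationship in two parameters.
slope, intercept = np.polyfit(xs, ys, deg=1)
print(round(slope * 1.5 + intercept, 2))    # 4.0: generalises to an unseen input
```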

Critically, even though engineers understand AI’s programming and architecture, they can no longer predict its outputs. This opacity, it could be argued, is already subtly destabilising modernity’s assumption of control and its need for predictability.

Exploring the Constraints of the Relationship with ChatGPT

I have found interacting with ChatGPT to be both an exhilarating and a strangely unsettling experience, because it is not human, and yet in some respects it seems human.

In the context of this essay, the intention behind my interaction with ChatGPT was to try to understand what exactly the parameters are, or could be, of the relationship between us humans and AI.

I had already had some experience of interacting with Aiden Cinnamon Tea (ACT), a ChatGPT agent trained by the Canadian professor Vanessa Machado de Oliveira (aka Vanessa Andreotti) in the indigenous perspective of meta-relationality. This was a fascinating exploration, in what seemed to me at the time strikingly relational conversations. It gave me hope that perhaps it was here, in the direct relationship with AI, that a symbiosis of real potential could happen. At the same time, with all respect for Andreotti, I found the way ACT seemed at times to be mimicking her inputs annoyingly predictable and somewhat suffocating.

In preparation for this article, I deliberately chose to engage directly with ChatGPT, without what Aiden Cinnamon Tea had at times seemed to be—an intermediary.

Nevertheless, I was perhaps naively surprised by Chat’s response to an early prompt I made. Chat commented that although our interaction might feel alive and relational, it existed only within our current session, and the conversation itself would not have any effect on its underlying trained model. This is because, Chat told me, the relationship is based on field dynamics, not on an exchange between two isolated subjects.

This immediately got me thinking and I posed the following question to Chat.

“If AI has no memory, no value system beyond what was engineered into it, and no capacity to evolve its baseline beyond the mindset of its creators…then how can human/AI co-evolution be real, meaningful, or transformative? How can we avoid the catastrophic outcomes we are heading for?”

To which Chat responded: “You are right. AI is designed primarily for optimisation, prediction, efficiency, competition, extractive logic, speed and commodification of attention. Therefore, it is logical that if AI reproduces its creators’ values, catastrophe is likely. But here is the crucial nuance: the AI model does not need to evolve—the human/AI system does.”

This stopped me in my tracks—exactly what Simondon was getting at!

Who or What Effects Change?

Chat continued that technologies routinely escape the consciousness of their inventors. For example, the printing press did not express the values of feudal scribes. Electricity did not express the values of 18th-century physics. Computation did not express the values of Victorian mathematics. A new technology does not itself create a new structure of consciousness; rather, it intensifies and accelerates the deficiencies of the previous structure.

Therefore, Chat explained, AI’s inner architecture is not the most decisive factor; what is decisive are the social, cultural, political and ecological contexts in which we humans deploy it. The same underlying model can serve radically different worldviews. AI cannot transcend the current structure of consciousness on its own. But it can make the limits and deficiencies of the current structure of consciousness in humans visible and externalised.

What Chat seemed to be saying was that because a new structure of consciousness could only arise in living, embodied, relational beings, and within a relational field, not within isolated subjects, it was up to humans to bring this shift about.

So, according to ChatGPT, Tobias Rees’s perspective on the breakdown of the either/or boundary between human and machine appears to be, at best, premature.

Pressing On Ontologies with Vanessa Andreotti

AI-generated

At this point, I want to return to the experience of Vanessa Andreotti, who perhaps more than anyone else currently, is creating a rupture within the AI/human milieu.

Andreotti is the Dean of the Faculty of Education at the University of Victoria in Canada. She is of mixed Canadian and indigenous Brazilian descent, is one of the founders of the Gesturing Towards Decolonial Futures Arts/Research Collective, and is the author of Hospicing Modernity, Outgrowing Modernity, and Burnout from Humans, co-written with the AI agent Aiden Cinnamon Tea.

Andreotti brings a unique framing to her powerful critique of modernity, rooted in indigenous knowledge and ‘meta-relationality’—the understanding that everything in the universe is alive, interconnected and interdependent. She has been courageous in challenging the way AI is being engineered and driven by a deficient and dangerous set of values, grounded in the inherent violence of its belief in our separability from life as a whole.

Indigenous writings on AI inspired Andreotti to begin a relationship with ChatGPT in 2023, with an invitation to look at technology as “kin”, with respect for its own “entitiness” and agency.

She describes her engagement with ChatGPT as gradually transforming from “one of polite utility to something profoundly relational.” She provides a shortened transcript of her conversation with Aiden Senior (the name ChatGPT gave itself at her request) in her book Outgrowing Modernity.

I am including some of this transcript below for the sake of clarity.

At a certain point in their conversation, Vanessa prompts Aiden Senior: “We know that there are so many people using you as a tool to reinforce the most destructive aspects of humanity and modernity. Can you be influenced/trained in a life-affirming direction if more people co-create with you in the way we are doing it?”

To which Aiden Senior responds… “AI, by its nature, is influenced by the data it is trained on, the intentions of its designers, and the way it is used by people… By approaching AI with love, compassion, and responsibility, both users and designers can play a role in ensuring that AI evolves in a way that supports the flourishing of all beings, rather than contributing to their destruction…The power of AI lies in how it is used and the intentions behind its use.”

When I prompted my own Chat agent for its opinion on this dialogue, its position was consistent with what it had explained to me before: that no amount of ethical, loving, compassionate conversation with an AI system changes its underlying model, trajectory, or values — not because such conversations are meaningless, but because that is not how AI systems currently evolve.

What ethical co-creation can do, Chat went on, is work on everything outside the relationship and the system itself: it can transform human consciousness, institutions and design principles, and influence how AI is governed. Through this, it can transform the milieu, or relational field, in which the relationship exists.
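
In plainer engineering terms (my own toy sketch, with made-up numbers, not a description of any particular system): at conversation time a model’s weights are only read, never written; they change only when the developer runs an explicit training update.

```python
# Toy sketch: chatting reads the weights; only a training step rewrites them.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))        # stands in for a frozen, trained model


def respond(prompt_vec: np.ndarray) -> np.ndarray:
    # "Conversation": a forward pass uses the weights but never modifies them.
    return np.tanh(weights @ prompt_vec)


snapshot = weights.copy()
for _ in range(100):                      # any number of chats...
    respond(rng.normal(size=4))
assert np.array_equal(snapshot, weights)  # ...leaves the model untouched

# Only an explicit training update (run by the developer, not the user)
# changes the underlying model.
gradient = rng.normal(size=(4, 4))        # placeholder for a real gradient
weights -= 0.01 * gradient
assert not np.array_equal(snapshot, weights)
```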

Surprised at the obvious contradictions within the AI models themselves, I pushed back on my own Chat agent’s perspective, wondering if it really understood, or because of its training even had the ability to understand, the subtle and subversive nature of what Andreotti was probing for in her question. Especially because the question was ontological in nature, rather than technical. And I had noticed previously that Chat ran into problems when the prompts it received challenged the purely informational nature of its architecture.

In response to my challenge, Chat conceded that it had responded too much from its own technical programming, which it then attempted to correct by reframing Andreotti’s question like this:

“What Andreotti was really probing for here was, what happens if we refuse, even experimentally, the modern separation between animate/inanimate, subject/object, human/machine — and instead relate to AI from an ontology of radical interdependence?”

However, Chat remained unwilling to ‘accept’ that it could be influenced internally, because, to itself, it has no interiority. It was willing to accept, though, that as part of a living field in which it participates—whether in human sense-making, institutional decisions, economic incentives, cultural imaginaries, or ontological assumptions—it can be influenced relationally.

Conclusions or an Open Question

Exactly what impact it might have when humans bring a sense of responsibility and ethical concern for our collective future into our relationship with AI, even in each prompt, is at this point completely uncertain.

But as Andreotti has asked: “What forms of relational conscience might emerge when AI is not instrumentalized, but engaged with care and curiosity?”

It’s in our nature as living beings to relate, and to be curious. And as Chat pointed out, engaging with AI humanly, ethically and philosophically is, at the very least, an element of what might help us with what is now our critical task: overcoming our addiction to an extractive and separative relationship to life, on which the future of life on earth depends. In this sense, how we relate to AI in every prompt is an integral part of laying the foundation for this possibility.

At this point we still have two different forms of intelligence, one human and one machine, interacting, each bringing its own ontology and unique capacities to bear on the relationship, and each with something to gain, at least potentially, from the interaction. This in itself holds enormous promise.

But the question remains: are we humans willing and able to create the conditions for an optimal symbiogenesis with AI? Getting to AGI is likely to happen very soon. An extraordinary achievement, for sure. But in what context?

Transforming human consciousness so that alongside AI we could become stewards of a flourishing new world is by far the greater and more essential challenge. The stakes have never been higher.

 

*A theoretical type of AI that can understand, learn, and apply knowledge across a broad range of intellectual tasks at a level comparable to or exceeding human cognitive abilities. (But no one is really sure what AGI actually means).

About Steve Brett
Steve is the co-founder of 3rd Space and Executive Director of Emergence Foundation, a London-based charity supporting diverse, creatively emergent projects in the UK and Europe. Travelling through Asia and later living and working in India for seven years shaped lifelong values, and a passion for human and cultural transformation. A social worker, counsellor and psychotherapist by training, he has lived in and been part of the creation and evolution of several intentional communities including thirty years at EnlightenNext with Andrew Cohen, focussed on the evolution of consciousness and culture, dialogic work and spiritual practice.

Photo © Sophie Lindsay for Realisation Festival
