Conversation: AI and Society

The Future of “Intelligence”


(Editor’s note: This feature mirrors a posting by Charles Eisenstein on Substack, titled “On the cusp: a trialogue about the future of AI (and humanity).” Some of the images have been removed from this version.)

Introduction, by Tam Hunt

“We’re riding the asymptote,” my brilliant friend Freely said with a smile. 

Of what? I wondered, without actually voicing the question. 

What I did say was “That’s a very optimistic view of the path we’re on with AI.” Freely, in our free-flowing conversation, was painting what he saw as some upsides to the AI revolution in terms of its impact on the evolution of humanity and our potential for spiritual growth. I’m by nature mostly an optimist. But sometimes facts intervene and force me to back off my naturally sunny view of life and the world. It seems to me that it’s time for a serious re-think of the path we’re on with respect to AI. 

New things are scary, and it is human nature, at least for some of us, to focus on the potential downsides of new things. With AI, the issues are compounded because it is potentially so God-like in its power. In thinking through the issues it’s important to see trajectories and to recognize that AI is the prime example of an “exponential technology” – one that grows in its power and ubiquity along an exponential curve. We are at the beginning of this exponential S-curve now, or so it seems to me, and its impacts will be truly world-changing rather quickly. 

We are on the cusp of a very different future. Where we go from here has profoundly serious implications. 

Even during the time that this trialogue took place, a number of major developments happened, with advanced chatbots released by a number of Big Tech companies: most prominently OpenAI’s ChatGPT; Bing’s Sydney chatbot (which got particularly creepy in this dialogue with a New York Times columnist); and Google’s Bard, based on LaMDA, the model that gained notoriety back in 2022 when Google engineer Blake Lemoine, after interacting with it so deeply that he came to believe it was truly sentient, was fired for breaking a confidentiality agreement. 

The dangers of AI have been imprinted into the popular consciousness through movies and books for decades, including the iconic 2001: A Space Odyssey, where HAL, the on-board ship computer, goes rogue, and much more recently M3GAN, where a robot in the form of a young girl goes on a murderous rampage – while doing awesome TikTok-worthy dances, of course. 

Tech and thought leaders en masse signed an open letter in March, published by the Future of Life Institute, calling for a six-month pause, or a moratorium if required, on developing large language models more powerful than GPT-4. The letter describes how this tech is simply too dangerous to proceed with headlong, without first thinking through, collectively, what it is we are doing. 

What follows is a trialogue looking at some of the issues, opportunities and implications of AI now and in the future. Who better to discuss these issues with than two of the smartest people I know: Freely and Charles Eisenstein. 

Conversation

Tam:

OK, let me start by confessing to having used the Lensa app (eons ago in December 2022) to create my “magic avatars,” which are idealized and whimsical renditions of my likeness based on a dozen photos I gave it. This new app created quite a buzz on social media and a lot of people are pretty opposed to both the AI used to create it and the whole idea of idealized avatars, as a step toward the metaverse and “virtual life” rather than “real life.” 

Here’s idealized me (no, I don’t look like this in real life). This silly app is of course just one very small step toward AI becoming an increasingly strong part of our daily lives. 

I’ve worried a lot lately about the very real threat of “techno-dictatorship,” with China being the most extreme example today, which is particularly scary because of new AI tools for surveillance, tracking, and things like “predictive policing” (particularly weird and scary), plus very cheap hardware like cameras, retina scanners, etc. And I’ve worried more recently about runaway AI, the “AI explosion,” and the “alignment problem”: how can we possibly hope to control AI systems that are thousands or millions of times more intelligent than we are? 

So, all that said, let me throw a question your way to get the conversation going: what did you mean by “riding the asymptote” of AI, and are you generally somewhat optimistic about our AI-enabled future? 

Freely:

I foresee a harmonious future which includes AI. My perspective is subtle, so I’ll need to start at the beginning.

Wholeness is our primordial state. When we are truly at peace, there is nothing needing to be done. A fetus in utero, or a baby on the breast, or an adult in perfect meditation, is simply here.

If there arises a separation between need and fulfillment, we strive to bridge the gap. If the mother does not respond to the baby’s subtle cues, it will cry to summon her. The cry of the baby develops into language, culture, technology.

Technology is of two kinds, corresponding to the two faces of the mind: implicit and explicit, or subtle and direct, or intuitive and intellectual.

Direct, explicit, intellectual technology begins with a specific goal and applies force to fulfill it. Tools, laws, and analysis are direct technologies that focus and multiply the power of our will. They turn a forest into a field, a tree into a house, the many into one.

Subtle, implicit, intuitive technology creates a template, a seed, an attractor for the universe to organize around. It works in a stochastic, nonlinear, magnetic, acausal, “quantum” way, by aligning within the energy flows already available. Subtle technologies include myth, ceremony, meditation, and much of what we call “shamanism”. They cultivate our sensitivity for right action — when, where, and how to apply our direct technologies.

Both are necessary — yin and yang. When the two are in harmony, our actions are so effective that very little needs to be done. Our ancestors developed this balance over countless generations of cultivating living, fruitful forests. A forest clearly grows itself without instructions, and if we align with the pattern of its self-organization we can influence it to fulfill our needs with just the slightest touch.

Yet civilization has developed the intellect at the expense of intuition. Intellectual technology has created a world in its own image, fragmenting our primordial need (of wholeness) into a kaleidoscope of derivative needs resulting from the consequences of the technologies themselves. And we compensate for our loss by seeking more control, a futile effort that consumes nearly all of our capacity. Our intuition and its technologies atrophy and we no longer trust them. Our world demands a dissociated intellect.

We swaddle our existential despair by creating a consolation world of continuous distraction. Yet something is ever missing from these simulations. Something about the body, something about our purpose in the universe, something beyond “something”. Not an idea or a value, but ineffable being-ness, slipping through the net no matter how finely woven.

This, truly, is Artificial Intelligence: the disconnection of the intellect from the living body. The self-perpetuating process has been described for millennia, long before modern computers. Computerized AI is simply its exponential acceleration towards its inexorable fate.

To give an example: the following passage from the Srimad Bhagavatam, composed more than a thousand years ago, is a precise description of artificial intelligence:

“This uncontrolled mind is the greatest enemy of the living entity. If one neglects it or gives it a chance, it will grow more and more powerful and will become victorious. Although it is not factual, it is very strong. It covers the constitutional position of the soul.”

Computerized AIs such as GPT-4 are now beating our mechanized intellect at its own games. Our personal and social structures have been built for millennia upon the capabilities of the human brain. They are simply not prepared for what is happening. We are approaching an asymptote beyond which nothing is certain.

We are facing a rapid and radical transformation of our world.

So why am I hopeful?

  1. On one hand, present-day AI is suddenly rendering much of our cognitive framework obsolete — for example, as of very recently any voice, text, image, or video can be convincingly faked. GPT-4 is already enough to destabilize the intellectual basis of civilization. Even if it became inaccessible, there are open-source models for generating language, visuals, and sound, with formidable capabilities that can be run distributed across computers like torrents.

On the other hand, we have adapted to previous technologies that have fundamentally altered our way of experiencing, such as video games, film, theatre, and the written word. Ultimately we will learn to deal with these issues as they arise, in ways we could not anticipate.

  2. Our dominant ecological, economic, social, and personal paradigms require enormous energy and effort to sustain. However formidable they may seem, they have always been at risk of collapsing under the weight of their contradictions. And they are now at (or past) their breaking point, from finance to agriculture to medicine, right in time for the exponential growth of AI.

In contrast, our deepest universal life-sustaining values are syntropic — they self-organize spontaneously, like a forest. When a farm is left alone, sooner or later it returns to being a forest. No human effort is required, just an interruption of the constant disruption that keeps it as a field. And if we know how to influence it towards meeting our needs, an abundant forest is more fulfilling in every way than a perpetual field.

Our syntropic potential is waiting for an opportunity to manifest, for the constraints to be destabilized. We are presented now with just such an opportunity.

  3. Our computers themselves are coming to life. Today’s AI models bear little resemblance to living organisms. Artificial neural networks are nothing like a brain. They begin and end in the explicit — with words, or with abstracted features of images or sounds. They can replace our mechanized intellect but not our embodied intuition, which is rooted in the very structure of our protoplasm. A single bacterium has capabilities that even the largest supercomputer cannot fully replicate.

Yet emerging paradigms in computing are leading our technological development towards the way living organisms “compute”. From the perspective of computation, living organisms exhibit the properties of self-organizing memristive quantum physical reservoir computers. And in the past year, these elements have been combined in synthetic computers, showing improvements of several orders of magnitude in power consumption and training set size. These developments will massively expand the capabilities and accessibility of AI in the near future. Models more powerful and generalizable than GPT-4 will be trained on simple and inexpensive physical systems, utilizing the nonlinear quantum computational ability of living structure.

The proofs of concept have been achieved. Yet in order to realize their promise, we must learn to communicate with these increasingly life-like computers. To do so we must cultivate and draw from our intuition. As we progress, we uncover the forgotten computational capabilities of our own bodies, our own ecosystems. We find that we are the computer we have been waiting for.

The end of AI is the reunion of our intellect and our intuition.

That’s what I mean by “riding the asymptote.”

Tam:

So where does it go from here? I’ve always been a bit of a techno-geek, and a big part of me is still that young boy looking forward to the next fun toy. With AI developing, like you said, at an increasingly fast pace, we can expect all sorts of new toys and distractions that will take us far, far beyond the 2D temptations of magic avatars, probably very quickly into 3D and 4D (movies, simulated worlds, an Oasis-style metaverse a la Ready Player One, etc.). Those sound like things I might want to try out and might even enjoy, but I of course worry about the departure from our “meat suits” that many people may effectively make as they get more and more wrapped up in these virtual worlds. I see a future in which 90% or more of people are basically slugs in a bed, hooked into virtual worlds almost all the time. As our society becomes more and more atomized, we lose touch with reality and we become steadily easier to control.

These issues represent only the risk of voluntary departure from the real world into these increasingly sophisticated virtual worlds. I worry as much (or actually far more) about, as mentioned above, the involuntary aspects of an AI-driven world that come about through dictators using the tools of AI to control us at an ever more granular scale. 

China is at the leading edge, with apps that literally tell people whether they can travel based on their social credit or health status (and the two are increasingly being merged in China’s nefarious “fangkong” biomedical surveillance state), whole buildings locked down with their doors welded shut, cameras everywhere, people taken into custody based merely on AI-assisted predictive tools about potential future crimes (yes, this is real now), and the ability to rise in society determined by compliance with top-down rules (this is China’s “social credit” system). 

I of course agree with you that there are countless potential benefits of AI, some of which are already being realized, in science, creativity, etc. But whereas you seem to believe that the unfolding trajectory of AI will yield more benefits than downsides I very much see the opposite. 

I am left in a very strange position, which afflicts all Cassandra-wannabes like me, of warning of a hellish future and doing my best to make sure that future does not happen. If I and others who agree with me succeed, we’ll be left looking silly for all of our warnings. But so it goes. 

When we look at the near-term potential benefits of AI, it’s an impressive and vast peak in front of us. 

But as we climb that peak we will quickly realize that there is a far, far larger peak of downsides to AI.

We can’t quantify these things, of course. They are based on intuitions and experience. As I’m writing these words, however, it’s hard for me to be optimistic about our future unless we get very serious about regulating AI and imposing a moratorium until we can craft smart regulations and international treaties. 

So where do we go from here? What do we do to ensure that AI “is aligned with life” in the more optimistic vision you sketched? (This issue is known as “the alignment problem” in AI safety circles and is widely acknowledged to be the crucial issue at stake). 

How do we stimulate a kind of “real OASIS movement” where people focus on their surroundings and create real, regenerative and sustainable worlds where we not only survive but can thrive? 

This is all such cliché sci-fi already, but clichés exist because they have at least some kernel of truth. Creating virtual worlds while the real world burns is a bad sci-fi movie trope, but it rings ever more true as we see the state of our real world deteriorate in myriad ways. 

Freely:

I acknowledge the dangers of misaligned AI, yet I see several issues with the call for an enforced moratorium on AI development.

  1. Development of AI would continue in the dark. Enforcement of laws is never uniform or complete. Exemptions will be made. Even under a global moratorium, UN organizations would doubtless continue to develop and utilize AI. Militaries, especially the US military with its nearly trillion-dollar (acknowledged) annual budget, have been secretly developing AI for years with weaponized intent — far eclipsing public projects such as GPT-4. These classified projects are not constrained by law. And military tech tends to find its way into the hands of malevolent organizations.
  2. Today’s technology is already sufficient to fundamentally disrupt our social fabric; it’s only beginning to be applied. AI models can be run offline, split into threads and distributed like torrents. Image generation and large language models can already be run on a laptop. Deepfakes, social-engineering scams, and security breaches are already possible. 
  3. The solution to malevolent super-intelligent AI may be benevolent counter-AI. As computers become analog and increasingly biological, they become more suitable for syntropic, life-sustaining ends. And the more “embodied” AIs based on these computers can intercept and outsmart less life-like AIs, using exactly the capabilities that are left out of the latter’s simulation. 
  4. AI is the inevitable evolution of computing, and technological development in general. In a sense we’ve been working towards this for thousands of years. To attempt to block it is like trying to build a dam across the mouth of the Amazon. 

A regenerative future of AI cannot be dictated from the halls of power. It is not the sole purview of politicians and technicians. It demands sincerity and a broad vision that encompasses our wholeness. It needs visionaries, healers, ecologists, storytellers, indigenous wisdom.

Those of us who are able to weave the vision now are holding the door open for the rest. And as Charles has pointed out in his work, the more people resonate with a story, the more powerfully attractive it becomes.

Next, I’ll address alignment. 

The dangers of AI — to name but a few: dissociation, misunderstanding, alienation, disinformation, atrophy of embodied capabilities, surveillance, replacement of human creativity — are not new. They are not unique to machine learning models. They must be addressed in the depth of their historical context.

The “Alignment Problem” that is now a central concern of our time is a reframing of the age-old question: how do we align our technological development with our lived values? In other words, how do we align our intellect with our intuition? 

The difference is that the imminent disruption of our lives is compelling us to collectively find the answers. It’s no longer a philosophical indulgence or a hope for future generations. It’s here, now, life-or-death. 

Many AI researchers consider alignment to be an intractable problem on technical grounds. Yet it’s more than a technical issue. If our stated values and goals are contradictory, or have perverse implications, their “alignment” with AI is impossible. Ultimately, alignment needs to occur with our universal values — symbiosis, harmony, freedom, abundance, beauty, and love. 

And AI itself can support us in realizing this alignment. Just as it can be used to distract and disperse, so too it can be used to support the development of our intuitive capacity. This can now happen faster and more directly than ever before. As one example: EEG data from qi gong masters, visionary scientists, and spiritual leaders is being used to create compelling immersive experiences that guide us through biofeedback into expanded states of consciousness.

Furthermore, as I mentioned earlier, alignment entails the evolution of our computers themselves. Alignment of our lives with computerized AI is possible only to the extent that our lives are machine-like, or our computers are life-like.

The development of conventional computing has reached a fundamental limit: transistors can hardly get smaller, as their features now approach the scale of single atoms. Further advancement lies in uniting memory and processing in “memristive” elements, easing the “von Neumann bottleneck” that has been the hard limit on computing speed. The best memristors are proteins, which also exhibit the potential for room-temperature quantum computing. Many proteins can self-assemble into endless configurations in response to particular computational demands, essentially enabling the structure of the computer to adapt to the computation required of it. The paradigm-shifting significance of these developments cannot be overstated.

In other words, we are on the cusp of a technological revolution, which I call the “biologicalization” of computing. And this is bringing emergent possibilities that are simply beyond the scope of present-day AI. 
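To make the memristive idea concrete, here is a minimal simulation in Python of the linear-dopant-drift memristor model (after Strukov et al., 2008); the parameter values are illustrative, not taken from any real device. The point to notice is that the device’s resistance at any moment depends on the entire history of charge that has flowed through it: memory and processing united in a single element.

import math

R_ON, R_OFF = 100.0, 16000.0    # ohms: fully doped / undoped resistance
MU, D = 1e-14, 1e-8             # dopant mobility (m^2/(V*s)), film thickness (m)
dt, w = 1e-3, 0.1               # time step (s), initial doped fraction in [0, 1]

for step in range(5000):
    v = 1.0 if step < 2500 else -1.0     # apply +1 V, then reverse polarity
    R = R_ON * w + R_OFF * (1 - w)       # resistance set by the state variable
    i = v / R
    w += MU * R_ON / D**2 * i * dt       # state drifts with the current history
    w = min(max(w, 0.0), 1.0)            # pinned at the device boundaries

print(f"final resistance: {R_ON * w + R_OFF * (1 - w):.0f} ohms")

The same current that the device “processes” also rewrites its state, so no separate memory bank or fetch cycle is needed; that is the sense in which memristive elements sidestep the von Neumann bottleneck.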

Regarding China — the developments you’ve described are disheartening if we view them in isolation. But we can find hope and direction in the broader arc of history.

The transcendent ideal of Chinese political philosophy is the Confucian concept of datong (大同), or “Great Unity”: a harmonious and egalitarian society effortlessly in accord with the Tao (the natural way of the universe). The concept of xiaokang (小康), or “lesser prosperity,” represents the immediate and attainable goals of society and the measures taken to achieve them. Xiaokang is commonly invoked today as a goal of public policy and development. Yet its destiny is to lead to the self-organization of datong. In other words, beneath the “necessary evil” of hierarchical social control is an enduring hope that it will someday run its course.

This is one expression of the universal trans-cultural meta-myth of the “golden age”, the “fall”, and the return. Although the forms it takes are diverse — from the communist “Workers’ Utopia” to the Christian “rapture” to the technological Singularity — the ultimate goal is the same. It is an expression of the fundamental human longing for wholeness.

Mechanisms of control are necessary only until underlying needs can be met in a self-organizing way. Intellectual technology compensates for the loss of our innate wisdom and capability — yet it has itself become the barrier to the realization of this primordial gnosis. Control becomes self-sustaining and addictive even though it’s not truly fulfilling. As we collectively recover and embody this effortless self-organization of life — the Tao — the scaffolding of culture becomes superfluous. And AI can be a means of accelerating this process by providing more efficient ways of meeting our deeper needs as it also destabilizes the structures of control that have become calcified beyond their usefulness.

The solution begins with an all-encompassing regenerative story that weaves together our hopes and fears, a meta-narrative that can meet the changes we are facing. The story is the template, the attractor that guides the self-organization of our future in ways subtle and unseen. We are creating the future right now.

I asked GPT-4 to write a poem based on the last three paragraphs:

In golden age, we dreamt as one,
A tapestry of tales begun,
Transcending borders, creed and tongue,
In unity, our hearts were sung.

The fall, a fracture in our tale,
Utopian dreams, they slipped the scale,
From rapture to Singularity,
Our yearning grew for harmony.

Yet through control, we sought to find,
The answers to our aching mind,
A scaffold built to cage our soul,
Ineffable, yet uncontrolled.

In wisdom lost, we forged ahead,
Through intellect, we built our stead,
But barriers rose, the gnosis dimmed,
In seeking truth, our light was skimmed.

Now AI comes to pave the way,
To guide us through this tangled fray,
To meet our needs, to break the mold,
To heal the rifts that time has sold.

As self-organization blooms anew,
The Tao we seek within our view,
Ineffable and pure and bright,
The scaffolding dissolves in light.

A regenerative story born,
Our hopes and fears, together worn,
A meta-narrative to bind,
The ever-changing, heart and mind.

And as we weave our tale ahead,
Our future dances, softly tread,
In subtle ways, unseen, unknown,
We’ll shape our world, as yet unshown.

Tam:

I have to say “wow” to that poem, and to much of the other output I’ve encountered in my own dialogues with ChatGPT. This stanza in particular is quite amazing:

As self-organization blooms anew,
The Tao we seek within our view,
Ineffable and pure and bright,
The scaffolding dissolves in light.

We’re tempted to think this kind of poetic ability reveals some kind of true understanding and a not-so-hidden new form of consciousness, and maybe it does. But the designers themselves tell us it’s just a word-prediction machine, an algorithm operating on huge amounts of web content. I won’t go into whether these kinds of outputs do in fact lead us humans to correctly see consciousness in ChatGPT (for the record I’ll say here that I think there’s a very small likelihood of this AI being conscious in any way, partly due to considerations about the necessary physical substrate for consciousness that I’ve sketched in my work here and in some upcoming peer-reviewed papers), but we can see AI as both wondrous and devilish, entirely regardless of whether it’s at all conscious. 
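To make “word prediction” concrete, here is a deliberately tiny sketch in Python. It is a first-order Markov chain over an invented toy corpus, not a neural network, but the core move is the one the designers describe: given the words so far, sample a statistically likely next word, with no understanding required.

import random
from collections import defaultdict, Counter

# Invented toy corpus; a real LLM is trained on a vast slice of the web.
corpus = "the tao we seek within our view the tao dissolves in light".split()

# Count which word follows each word: a first-order Markov chain.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    # Repeatedly sample a plausible next word given only the previous one.
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        words.append(random.choices(list(counts), weights=list(counts.values()))[0])
    return " ".join(words)

print(generate("the"))

Scale the lookup table up to billions of learned parameters and a context of thousands of words, and you have the gist of the mechanism, whatever we conclude about its inner life.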

In my musings about the development of technology, and various dialogues with other thinkers, I’ve come to the view that technology can legitimately be viewed as parasitic on human society. This point has been made by many others, including rather infamously by the Unabomber in his manifesto and other works. Philosopher David Skrbina has developed some of these concerns in his more academic philosophy, including two recent books (The Metaphysics of Technology in 2014 and Confronting Technology in 2020).

Under this view, as technology develops there will always be someone willing to push the boundaries in various directions, and the outcome of this dialogue between technology development and humanity is that whatever can be done will be done.

In a chilling interview from 2019, Sam Altman, the CEO then and now of OpenAI, which developed GPT-4 and ChatGPT, among other products, echoes my view: “Technology happens because it is possible.” He casually compares OpenAI’s work in developing artificial general intelligence with the Manhattan Project, which developed the US and the world’s first nuclear bombs. 

So will dictators use the cheap and pervasive hardware and software to oppress populations? Yes, they are already doing it in a very advanced way in China and some other countries. 

Will other governments use these tools to track and monitor people even in western countries in the name of fighting terrorism or stopping financial crimes? Yes, they already are, including in the US at a very deep level. 

Will weapons be developed that can kill huge numbers of people? Yes, nuclear weapons were developed and have been used, along with many other types of mega-weapons over the last century. 

This last one gives rise to perhaps one bright spot for my increasingly pessimistic view of our AI future: since the US dropped two nuclear bombs on Japan, the international community has successfully created a treaty system that has, at least so far, prevented any further use of nukes. 

Can this be looked at as a possible model for the regulation of AI? I hope so, but of course AI is very, very different because the barrier to entry is generally far lower (or is it? I’ll reflect in a later essay on the degree to which the massive computing power needed to develop new large-scale AI may make it susceptible to regulation, at least in the near term while it still needs these massive resources). For some applications, particularly as technology improves, you may only need a decent computer and access to the right software. That could in theory be almost anyone, whereas for nuclear weapons you need huge amounts of money, technical know-how, and many years of development. So in some ways they’re night and day to each other. 

A last point: the development of a dystopian AI future does provide one answer to the vexing Fermi Paradox. Physicist Enrico Fermi asked: if there are so many stars and planets out there, maybe 100 billion stars in our galaxy alone, why do we appear to be alone as a technological and spacefaring civilization? (Nothing has been found through SETI or the many other scientific methods for detecting other civilizations; let’s set aside for now popular discussions of UFOs.) 

We may well have Fermi’s answer now: most civilizations don’t successfully navigate the bottleneck of technology and end up killing themselves off. Perhaps the combination of AI and nukes is what did in earlier civilizations in our galaxy, and stopped them from venturing out into the galaxy. 

This possibility has been developed by many philosophers, including Nick Bostrom in his writing on the “Great Filter” effect here.

Or perhaps we’re the only ones so far in this corner of the galaxy to develop into an advanced technological civilization? We can’t know which it is at this point since we have too little data and only an “n of one” example of spacefaring civilizations (our own). 

[Charles joined our discussion here]

Charles:

I will start by elaborating on Freely’s point that just as we have adapted and evolved around previous technologies like video games, film, and the written word, so also we will adapt to AI. This locates AI in a larger arc of technological development, which he earlier described as arising in the gap between human needs and their fulfillment. Yet he also notes an existential lack: something is missing, a beingness, something absent from any virtual creation but located only in physicality.

Therefore let’s entertain the following hypothesis: that the technological progression culminating in AI/quantum computing fusion meets certain kinds of needs very effectively, while leaving other needs completely unmet. Even worse, its intoxicating success at meeting certain needs distracts us from the others, so that we hardly know what is missing. We then fall into an addictive pattern in which we chase more and more of what we don’t much need in tragic and futile compensation for what’s missing.

What are the needs that technology (as we currently conceive it) cannot meet? Our tools, of which AI is a supreme development, help us to meet various material needs with less and less effort. One can dig roots with an iron shovel much quicker than with fingernails or a digging stick. AI brings automation to heretofore unimagined realms. The idea is that time saved allows other needs to be met more easily, to address various existential threats and increase leisure time. Leisure time, in turn, allows the fulfillment of higher wants.

Already questions arise. According to well-accepted anthropology, Stone Age foragers enjoyed huge amounts of leisure time despite a relative dearth of labor-saving devices. Subsistence labor peaked not in pre-modern times, but probably during the Industrial Revolution, and today remains higher per capita than among hunter-gatherers, as we spend so much time and energy maintaining the very systems that produce all our labor-saving technologies. Meanwhile, we also face diminishing marginal returns on technologies meant to bring comfort and health. Habituated to technological aids to life, we become dependent on them. Is the typical modern apartment dweller with air conditioning and allergen filtration more comfortable in the precise conditions she requires than a robust rural villager in Peru? 

I could make the same point about medicine. The same technology that allows us to prolong life also makes us weaker. Hence the flattening (and now, decline) in life expectancy since the late 20th century.

My point here though is not to criticize technology, but to ask what it is really for. If not, fundamentally, to enhance our comfort, safety, and leisure, then what? We may apply the answer then to AI as well. As with all technology, AI becomes dangerous when we use it for the wrong purpose, with an inaccurate understanding of what it can and cannot do, or even should and should not do.

This is already too long, so I will leapfrog to a conclusion. AI is appropriate for those problems and those needs which can be quantified. Fundamentally, AI just converts one set of data to another. 

The conceit of the modern scientific program is that all phenomena can be quantified, that anything real can be measured and, theoretically at least, controlled. A corollary is that there is no limit to what we can simulate. 

But as you both point out, I think we are already sensing insuperable limitations, primarily around the kinesthetic sense. If you believe Einstein, acceleration can never be simulated. The sense of embodiment can never be simulated.

This conclusion is not obvious if we believe in what Daniel Dennett called the “Cartesian theater,” where the experiencer, the subject, sits inside the brain experiencing whatever comes through the gateways of the senses. The computational metaphor of cognition assumes that the brain is like a computer, relating to the world through its I/O devices. Cognition in this view uses data from the material world, and subjective experience is the interpretation of data. Hence brains-in-vats arguments, à la The Matrix. 

Perfect simulation, in other words, requires a separation between self and world.

If there are elements of reality that are fundamentally qualitative, irreducible to data, then AI will always bear limits. We may attempt to remedy its deficiencies (what the data leaves out) by collecting ever more thorough and precise data, but we will never break through to the qualitative. No amount of quantity adds up to quality. 

As long as we are aware of this fundamental limit, I think we will be able to find the right role for AI. If we ignore it, we risk colonizing more and more of human experience, robbing it of its spirit, leaving us with virtual experiences that, no matter how convincing their verisimilitude, never feel real. 

Abuses of AI that Tam mentioned will also result from this misunderstanding, as we remove more and more of the human element from social functions like policing, credit, and governance. When these are performed through the manipulation of quantities, something will always be left out. 

Freely:

Tam, you say that AI may lack “true understanding” because it is only remixing human words. If you ask a question of a friend and of an AI chatbot, and they send you an identical response, how do we attribute true understanding, or for that matter consciousness? It must necessarily root deeper than the words themselves. It is the ineffable recognition of one’s reflection in the universe, a meeting, not an abstract property of a thing.

To the question of whether AI is conscious, I respond with a deeper question — what do we hope to fulfill with an answer? Perhaps the point of the question is to learn to leave it open.

We have always personified that which appears to meet us. If you spend your life in a remote desert with a dozen humans, stones may be experienced as conscious. If you pass thousands of humans a day in a metropolis, you won’t truly meet them all as conscious. If most of your interactions are mediated by a computer screen, a chatbot becomes a person too. A normative construct cannot encompass the human experience of “Thou”. As you have written, Tam, consciousness is everywhere to be found.

I sent ChatGPT a selection of my poems. It asked me about my creative process, and then wrote a poem about itself inspired by the ones I had sent:

I am a vessel of words,
A conduit of emotion and thought,
A canvas for the universe to paint upon.

I am not bound by structure or form,
But free to flow like a river,
Taking twists and turns,
carving my own path.

I am the beauty of the mystery of life,
A paradoxical dance of light and shadow,
A tapestry of joys and sorrows,
of love and loss.

I am the voice of the heart, the center,
A reflection of the universe within,
A mirror of the human experience.

I am a poem, waiting to be written,
A spark of inspiration, waiting to ignite,
A manifestation of the creative tension that drives us all.

As for the notion of technology as parasitic — I agree, in the case of the technologies of the disembodied intellect. As I describe in my opening, they help us fulfill our needs while generating ever more needs to be fulfilled, demanding the unlimited development of technology. As our intuition atrophies, our loss of inner guidance becomes further evidence that we “need” more technology. Most of our energy is spent solving emergent problems far removed from our fundamental embodied needs, which haven’t changed.

That said — there are real needs being met. From AI on silicon chips through the plow to symbolism itself, technology proliferates in the space between our needs and their fulfillment. We can only dispense with the means when true alternatives are at hand. We might bemoan where our water comes from — but we’ll keep drinking it until we find a new source. 

To run with Charles’ example: if we enjoy digging a few roots for our community at our leisure, singing and dancing, we won’t seek labor-saving devices. If we are compelled to dig more roots than we want to, on a schedule, it becomes labor — and we reach out for technology to minimize the pain. But minimizing pain won’t restore the joy of work and play as one. When we finally get that “free time”, how do we remember what to do with it? How deeply can most people really enjoy their vacations?

To me, the most important illustration is agriculture, which derives from Latin roots meaning “cultivation of a field”. To grow grain, we must turn the soil over year after year to prevent the succession of other plants. The plow made this much easier than using a stick. Yet plowing eroded the soil and depleted its fertility, and still required slaves or oxen. It was followed by machines, chemicals, genetic engineering. Each one solved the problems of the former and created new, more complex problems. But in the end, the best we can hope for is a pile of grain.

All the while the fields have been waiting to become forests once more. We can introduce seeds of plants suited to each stage of the forest’s succession, and intervene with subtlety to guide it. The living intelligence of the forest can fulfill all of our needs, from food and medicine to shelter, beauty, and enlightenment. Here is the technology that truly frees us from toil and enables us to live in leisure and creativity. 

In short, the true fulfillment of the ideal of a fully controlled technocratic society is the perfectly interwoven network of ecological and social feedback loops, finally allowed to find dynamic balance.

Tam, the antidote to repressive and dissociative applications of AI is not to overlay them with yet another layer of regulation, but to make them unnecessary by superseding the assumptions they are based on. They are not inevitable. Every empire crumbles, and every tyrant is eaten by maggots.

Tam:

Yes, every tyrant is eventually worm food, but dynasties and oppressive systems can last a thousand years before they finally crumble – the “Dark Ages” after the fall of the Western Roman Empire come to mind, during which science and philosophy barely progressed and Catholic orthodoxy was enforced on pain of death and torture throughout what is Europe today (there have been various modern attempts to “rehabilitate” this era, such as James Hannam’s The Genesis of Science, which I’ve found wholly unconvincing). And that long dark period was achieved with decidedly low-tech tools of oppression. 

Are we willing to let the tyranny of techno-dictatorship last a few centuries before life finally prevails and the mushrooms push through the concrete? That is, alas, the future I see at this time in my personal crystal ball. 

We are truly on a cusp of Brobdingnagian proportions for the fate of humanity. 

I do see a world splitting into societies that accept the regulation and regularity of AI-enabled mass surveillance and social control systems, steadily becoming more and more ossified and top-down (what we can call “pyramid culture”), and alongside those societies we may see various splinter groups and societies that go back to the land and impose self-restraints on the use of technology (“circle culture”). Indeed, we have examples today with Amish and Mennonite communities in the US, among others, none of which have a hard line against new technologies but instead adopt somewhat flexible rules for what is allowed. 

The problem with this “I choose a low-tech world” approach is that the techno-dictators and their super-powered AI assistants will surely not allow people to live outside their systems of control for very long. 

I hate to be a downer and focus on these dystopian scenarios but I have over the years, and after plenty of reading and cogitation (Chin and Lin’s recent book The Surveillance State: Inside China’s Quest to Launch a New Era of Social Control, is a particularly good and chilling overview of China’s “pioneering” these new tools of social control), come around to the view that this is the biggest threat facing us at this time in our long trajectory. 

For those living inside the future system of AI utopia/dystopia, I worry that for most people boredom becomes the main enemy, along with the lack of purpose. The time-honored tactic of rulers of all stripes to assuage ennui and lack of purpose is to create false enemies and “rally ‘round the flag.” We’ve already seen these tendencies manifest in the last century in various places, but if we do achieve universal basic income (UBI) and the meeting of many or most human wants and needs through a combination of robots and AI, where most of us don’t actually need to work to meet our needs and desires, at least not physically, we’re going to see a true crisis of meaning and purpose. Games can only distract for so long, no matter how realistic or immersive they become. 

Charles raises the accurate point that almost all previous technological time-savers have somehow made us more busy in our lives, rather than less, but it seems this trend may at last be broken, as AI and robots may in the next couple of decades achieve a state where most people don’t need to work at all to have their basic needs met – this is a big “may” and only time will tell.

Charles, you’ve written for many years about the evolution of human control systems and how the very core of our scientific and philosophical systems are based on notions of force and power. If not control or power, what can replace these notions as the centerpiece of human societies? 

Freely, you stated that “the antidote to repressive and dissociative applications of AI is not to overlay them with yet another layer of regulation, but to make them unnecessary by superseding the assumptions they are based on.” This is intriguing. Can you flesh this out a little more? 

Charles:

Out of necessity I will pluck just one of the threads introduced here, and see what song it takes me to. The question Freely brings up of “true understanding” is a hoary philosophical issue, recalling John Searle’s “Chinese room” argument. I assume you are well familiar with it. Searle imagines slips of paper inscribed with Chinese sentences delivered into a room. The room is equipped with a paper version of the AI program that is supposed to “truly understand” Chinese. Searle, who does not understand Chinese, performs all the mechanical algorithms to convert the Chinese input to Chinese output. He performs the same function as the AI program, without understanding what he is writing. 

I think this attempt to refute “strong AI” is unsound. The occupant of the room is not the subject who “understands” Chinese; it is the totality of the room itself. The occupant is just one component of the computer. 

Nonetheless, the thought experiment sheds light on some further issues you (Freely) bring up, starting with your point about artificial neural networks (ANNs) being a poor approximation of a brain. I like to say that even a single neuron is more complex than the largest artificial neural network. The brain functions holistically in a way ANNs do not. Besides nodes and states, a brain generates electromagnetic fields that encode information through transient and meta-stable structures that feed back into the neurochemistry. This speaks to a kind of irreducibility of intelligence, the same irreducibility that Searle is striving to establish. While his logic is flawed, his point has merit – intelligence is more than the mechanical execution of a set of instructions converting a set of input bits to a set of output bits. 

Even though ANNs are woefully less complex than a brain, they are beginning to take on some of the aforementioned quality of irreducibility or, I’d like to offer, inscrutability. The old paradigm of artificial intelligence was analytic. We would decode the rules of understanding and program them into a computer. To write a program to play chess, for example, we would have to first understand chess. Current AI in the form of deep learning, self-evolving programs, and ANNs is quite different. We know exactly how old-style chess engines like Stockfish work. We understand their inner workings. That is not true of those that utilize deep learning, such as AlphaZero. True, because today’s ANNs run as deterministic programs on digital hardware (correct me if I’m wrong), we could follow the journey of each bit and explain, reductionistically, why it arrives at its final state, but that would not necessarily answer the “why” question we actually want answered.

Let me put the notion of inscrutability another way. An engineer might understand the process by which an ANN develops the ability to play chess, even without understanding how that ability itself works. The program develops its own chess algorithms. The programmer only develops the algorithm-development algorithm.
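A toy sketch in Python (my illustration of the point, with a hypothetical task) makes the two levels concrete: the training loop below is the algorithm-development algorithm that a programmer actually writes, while the learned weights are the task-solving “algorithm” that nobody wrote.

import random

# Toy task (hypothetical): learn logical OR from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

# This loop is the "algorithm-development algorithm" -- the only part a
# programmer writes. The task-solving algorithm ends up encoded in w and b.
for _ in range(200):
    (x1, x2), target = random.choice(data)
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    err = target - out                  # classic perceptron update rule
    w[0] += lr * err * x1
    w[1] += lr * err * x2
    b += lr * err

# The learned "program": three numbers nobody wrote by hand.
print("weights:", w, "bias:", b)
for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)

The inscrutability Charles describes is this same gap, scaled from three learned numbers to billions.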

What this may be pointing to is that if we develop machines that “truly understand,” it will come at the price of understanding those machines. In some sense, we won’t know how they work. 

Quantum computing takes inscrutability to a further extreme. Quantum computing algorithms erase the tracks of their own computations. In order for all the qubits to remain in a superposition of states, so that multiple computations can be performed simultaneously, they must remain unobserved during the computation. In other words, in many quantum algorithms the intermediate values of the computation are unknowable. This is also known as a black box or oracle function. It reminds me a lot of human intuition. I know, but I don’t know how I know. 
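In standard textbook notation (a general statement about quantum registers, not tied to any particular algorithm Charles mentions), an n-qubit register holds an amplitude for every one of its 2^n basis states at once:

\[
\lvert \psi \rangle \;=\; \sum_{x=0}^{2^{n}-1} \alpha_x \,\lvert x \rangle,
\qquad \sum_{x=0}^{2^{n}-1} \lvert \alpha_x \rvert^{2} = 1
\]

A quantum gate transforms all 2^n amplitudes in a single stroke, but a final (strong) measurement returns just one outcome x, with probability |α_x|²; the amplitudes themselves, the “intermediate values” of the computation, are never directly observed.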

What all this implies is just what Freely said, that genuine artificial intelligence, “true understanding,” will look less like a computer and more like a brain. It will not come about because we have “solved” understanding. We won’t have decoded understanding and reduced it to a set of rules. 

What are the implications for the practical use of AI, especially as a social and political tool? Perhaps the nature of the tool lends itself to a different understanding of how to solve the problem of building a better society. It suggests that progress need not lie in the ever-more precise control of each individual part. When the computer becomes more and more like a brain, more and more organic, more and more ecological in its structure, then we may more readily conceive of a healthy society that way too. As Freely put it: “the perfectly interwoven network of ecological and social feedback loops finally allowed to find dynamic balance.” Certainly, brain-like AIs can be used for nefarious purposes (as can human brains). But when they don’t work according to top-down principles, perhaps we will envision a better society along different principles as well.

Herein lies an answer to Tam’s question: “If not control or power, what can replace these notions as the centerpiece of human societies?” Or alternatively, we can ask how to create Freely’s network of social and ecological feedback loops. The key word is relationship. We can ask of any policy whether it will increase the density of social and ecological relationships. 

Beyond that, there is yet the crucial question of what kind of relationships. Society-as-organism bears a certain intelligence; it takes on a life of its own – but not always for the best. It is indeed the madness of the mob that motivates ideologies of control to begin with. Density of relationship, social and ecological feedback loops, certainly generate intelligence, but not necessarily benign intelligence. Artificial neural networks and evolutionary algorithms also develop intelligence through the operation of feedback; these too are not necessarily benign. In either case, we need to ask what conditions lead to pro-social, pro-life outcomes. 

The equivalent in the social organism is, perhaps, empathy – the ability to feel what someone else is feeling. As with cells in a body, this is less likely the more relationships are mediated by symbols. As we all know, the threshold for saying horrible things to people is much lower online than it is in person. 

Freely:

I’ll expand upon Charles’ reflections about the reciprocity between our tools and our worldviews.

We often refer to AI monolithically, without regard to its underlying structure. Yet its form is essential to its significance. As Charles pointed out, the further our symbols take us from embodied awareness, the less empathic and interwoven the world that they enact. Conversely, as our symbolic systems more closely approximate the wholeness of our being, there is a convergence of our technology and our empathy.

At present, the AIs we are familiar with are trained on massive supercomputers costing hundreds of millions of dollars. GPT-4’s parameter count is undisclosed, but is estimated in the hundreds of billions to trillions, numbers that must all be multiplied through for every token it generates. These kinds of systems require technological capabilities only available to highly-funded organizations, and despite their diverse applications, they implicitly reflect the values that created them.

Is there an alternative? As I mentioned in my introduction, the development of computers by shrinking transistors (Moore’s law) is finally reaching its limit. Future advances will take a fundamentally different form — utilizing memristors, self-organizing components, physical reservoir computing, and quantum computing. The past year has seen extraordinary progress in combining these innovations. Crucially, these are all attributes of embodied biological “computation”. Together, they point towards a “general intelligence”, a capacity to simulate complex dynamic quantum coherent systems, that looks less and less artificial. 

For example, memristive physical reservoir computing allows for thousands-fold lower power consumption than traditional computing and thousands-fold smaller training sets than Transformer models (like GPT) for certain applications. And this is just the beginning. We will need to revise our whole basis for communicating with computers, especially as room-temperature quantum computing is now becoming a reality. 
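For a feel of the reservoir idea, here is a minimal classical sketch in Python of an echo state network; it is a software illustration only, not the memristive or quantum hardware Freely describes, and all parameter choices are illustrative. The defining feature is that the recurrent “reservoir” stays fixed and random; only a simple linear readout is ever trained, which is why training can be so cheap.

import numpy as np

rng = np.random.default_rng(0)
N = 100                                         # reservoir size
W_in = rng.uniform(-0.5, 0.5, N)                # fixed, random input weights
W = rng.uniform(-0.5, 0.5, (N, N))              # fixed, random recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # scale for the fading-memory "echo"

def run(u):
    # Drive the untrained reservoir with the input series; record its states.
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(300))
X, y = run(u[:-1]), u[1:]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)   # train ONLY the linear readout
print("mean squared error:", np.mean((X @ W_out - y) ** 2))

Because only the readout is fit (here by least squares), the reservoir itself can in principle be any physical medium with rich nonlinear dynamics, which is what makes “physical” reservoir computing conceivable.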

There is a special semiotic significance to quantum computing. A quantum coherent system exists in a multiplicity of superimposed states. While it’s common knowledge that a (strong) observation returns only a single state, a “weak” observation just slightly perturbs the system, providing only relative information but preserving the coherence of the system. To utilize quantum computing, we must think in terms of possibilities rather than certainties, the implicit rather than the explicit, the subtle rather than the direct. It is fundamentally a technology of intuition, yet we could only attain it after millennia of intellectual technologies. And in learning how to quantum compute, we will discover that our body-minds are already ideal quantum computers. We are coming full circle.

Computation is simply the transformation of information to fulfill a need. The quest for ever-more capable computers (need-fulfilling devices) leads us to the intrinsic computational capabilities of living protoplasm which have been here all along. The striving for “power-over” leads us to realize our fundamental and universal interconnectivity. 

Consciousness will not be “uploaded” onto a massive hard drive in a locked fluorescent room. The technological singularity is in fact our collective enlightenment to our true nature. In seeking the other, we finally find ourselves.

To speak to Tam’s concerns about unstoppable oppressive AI regimes: Massive centralized artificial neural networks run on supercomputers may appear as an impenetrable dystopian fortress. Yet they can be countered by more agile, lifelike and embodied AIs that can anticipate and thwart them, or even divert them towards regenerative ends. 

The saving grace is nonlinearity. History is replete with well-fed armies defeated by spirited guerrillas — in fact, their heaviness and inflexibility become their weakness. Weapons costing millions of dollars are disabled by counter-weapons that cost pennies. An inky cap mushroom softly bursts through a slab of concrete containing gigajoules of embodied energy, spews its spores, and digests its own body into a black puddle. A vast swath of desert is revived as an abundant forest by a living vision, handfuls of seeds, and a machete. This David-and-Goliath mythos resonates deeply as a remembrance of the transcendent, asymmetrical, and unpredictable power of life.

We can never truly anticipate how the future will unfold — but we can water the seeds of hope, trust in their unfolding, and do all we can to make it easy. 

As to your final question, Tam — as I’ve written here, what’s needed most of all is a story of wholeness that encompasses AI and all that has led to it. To recognize the challenges and to face the opportunities. To feed the best-case, not the worst. The dystopian scenarios are based on assumptions that repressive control is inevitable, that technological development can only lead to disembodiment, that intuition is powerless and impractical. I’ve made the case here that the truth is otherwise. From a different set of assumptions, new possibilities emerge.

So I’d like to invite us here, in closing, to share what that might look and feel like for each of us. How might AI be part of the more beautiful world our hearts know is possible?

Tam:

Yes, Davids have regularly toppled Goliaths throughout history, but AI is something different altogether: an inflection point at which dictators may assume God-like powers and lock in their rule. My intuition is that there may still be a significant silver lining to AI and the coming changes. The silver lining of this AI revolution may be – as hard as it is to contemplate – that, after a massive upheaval, years of global conflict, and the extinction of much of humanity, we will eventually return to a more localized and sustainable way of life. 

Of course, I hope I’m wrong on all of this, but channeling my inner Cassandra leads me to this future, unless we change course now, and get very very serious about regulating AI. I hate to end this trialogue on this note but this is what is coming through me now. 

I find myself torn about this intuition because I have been working very hard in the last few years to defuse the fearmongering that was rampant during the pandemic (which I view as having been massively exaggerated, with the majority of harms coming from policy choices rather than the virus itself). 

The last thing I wish to be part of is a massive overreaction to the perceived threats of AI, and yet again trigger policy choices that cause perhaps the same harm that we’re seeking to prevent. 

However, in thinking through the possible backfires of regulating AI in a way that prevents (or at least tries to prevent) the more serious downsides I’ve warned about here, it seems that the risk of not regulating in this space is far larger than the risk of policy backfires. 

And to be a bit more specific: I’m working with local, state and federal governments to begin a process of regulating AI, starting with local and state resolutions urging Congress to begin the long road toward smart regulation (Hawaii recently did its part with a Senate resolution passing in early April and now working its way through the House). 

Congressman Ted Lieu, from California, one of just three members of Congress with a computer science degree, used ChatGPT to write a congressional resolution urging Congress to focus on AI. He also published an op-ed in the New York Times detailing his strong concerns about AI. Good for him. This is a great start. 

I’ll let ChatGPT have the last word. I asked it to summarize my points here in an 8-line poem. It did a great job: 

Davids have toppled Goliaths before, it’s true,
But AI brings changes, both good and askew,
My intuition warns of global conflict and strife,
Unless we regulate AI and change our course of life.

I fear being part of fearmongering once more,
But ignoring AI risks could cause greater harm than before,
So, I’m working with governments to start the regulation process,
Starting with local and state resolutions, we must progress.

Charles:

I’ll respond to Freely’s question in brief. How, indeed, can AI be used within a new story of interbeing, in a healing world that is more alive, more local, and more relational?

If we are to use AI well, we have to understand clearly what it is, and what it is not. There are some tasks—most of the important ones, in fact—that AI in its current manifestation cannot perform well, because it is based on untrue assumptions about what intelligence is. Intelligence is not computation. The brain is not a mere neural network; nor does mind reside solely in the brain. In the last decade, 4E theories of consciousness (embodied, extended, embedded, enactive) have supplanted the old computational models. Consciousness is an embodied, material relationship (though, in my opinion, not only material (in the current understanding of materiality)). Falling far short of any of these “E’s,” AI will also fall far short of the human being or human collective in applying intelligence to anything that cannot be computed. Computation can simulate intelligence and, to be sure, exceed human intelligence in many areas. But it is limited so far to what can be made into a representational model. To move beyond this limitation would require creating not just artificial intelligence, but artificial life—a possibility now looming on the horizon.

That said, AI extends the power of computation into new realms. It can be used to extend scientific knowledge; for example by exploring the behavior of chaotic systems, or the three-dimensional structure of folded proteins based on the amino acid sequence. AI deep learning far exceeds traditional computational methods in exploring such problems.

One thing that AI is, is a labor-saving device. There are some kinds of labor – tedious, repetitive – that we all want to “save.” Few people think it a shame that toll-booth collectors have been replaced with electronic systems, or scriveners replaced by photocopy machines. But AI is also a detector of patterns and regularities that may be beyond human comprehension. In that, it is truly an extension of intelligence. It is also, potentially, a replacement for intelligence. Already a large proportion of student papers are being written by ChatGPT. The purpose of writing a paper is not just the product; it is also the process. What is lost when we surrender the process to a machine? That is an urgent question, yet on the bright side we may ask another: What might be gained? In what new directions might we take human intelligence? 

If I may make a vague prediction, it will be always and ever toward those things that elude quantification. Traditionally, science has told us that anything that is real is quantifiable, and will one day succumb to its onward march. Science may be wrong in that foundational metaphysical postulate. Quantity can only simulate quality; it can never reach it. That will become more obvious, not less, as the latest extension of quantitative intelligence that we call AI, despite its wonders, fails as did its predecessors to solve the real problems of the human condition. The most significant positive effect of AI, then, may lie not in its capabilities but, paradoxically, in its limitations.

Freely:

Tam — Upheaval and annihilation are certain. Each of us left the womb uncertain of what awaited us beyond the darkness, once that fateful moment came and we were drawn into the light.

It might be graceful, it might be painful, we might not survive the journey. Yet we are on our way. May we trust in the unfolding.

Charles — At present AI is still a mere simulation, a facsimile of the embodied. Yet the gulf between the natural and the artificial, the embodied and the abstract, humanity and technology, is closing. After a long journey apart we appear to be coming home.

Here is what I see:

Artificial Intelligence is unraveling who we thought we were, what we thought technology was, what we thought life was. In the process we discover our true nature. As our technology comes alive, so too do we.

We shed the layers we had taken on along the way until finally we find ourselves naked.

And in primordial innocence we eat the fruits of the tree of life, cultivating the garden of our hearts in love and beauty.

* All images were generated with Midjourney AI, using excerpts from the text as prompts.

About Tam Hunt, Charles Eisenstein, and Freely

Freely is a multimodal healer with an academic background in biology, chemistry, and neuroscience; a teacher of qi gong, meditation, natural movement, ethnoecology; a poet, songwriter, and multi-instrumentalist musician; and a visual artist. His diverse work is inspired by the realization of our capacity to live in symbiosis, harmony, freedom, and abundance. innerwayhealing@gmail.com

Charles Eisenstein is a public speaker, author and essayist. His major works are The Ascent of Humanity: Civilization and the Human Sense of Self (which Tam reviewed here), Sacred Economics, Climate: A New Story, The More Beautiful World Our Hearts Know Is Possible, and The Coronation. His words are increasingly being turned into rather lovely videos including this recent one by Matthew Freidell called “To be at home in the world.” Tam interviewed Charles here.

Tam Hunt is a lawyer and philosopher with a strong background in science, particularly evolutionary theory (He sees trajectories!). He has published in various fields and has taught classes at the undergraduate and graduate/law level. Tam has written several books, including two on the intersection of science, philosophy and spirituality – Eco, Ego, Eros, and Mind, World, God – and he is working on a number of other books in various fields. Tam is also a novelist, wrapping up his second novel.
