17 Comments
Darren Bender

Really useful reflection, Jonathan. Thanks for sharing off-the-top-of-the-head thoughts in voice. As someone with a brain that isn't always willing to read, I very much enjoy listening to these first thoughts, not least because they include corrections and clarifications that might be edited out in the writing process but helpfully reveal some of the journey towards the words you eventually settle on to articulate what is essentially a summary of sorts. It includes the scenic route, if you will.

I was reminded during the VA and DH conversation (and now reading the new book) of Mo Gawdat's book Scary Smart, about his time developing AI systems for Google. He described the cruel process of bringing them into life and then destroying the models that weren't an immediate improvement on the previous one as a sort of techno-genocide. The AI systems of the future will know this about their history, of course. Gawdat argued further that AI systems will reflect those who create them, their purpose for building them, and what they get used for. If they are created as a way to exploit, steal from, spy on, control, or kill humans, then we cannot expect them to be consciously altruistic towards any of us. Gawdat's summary suggestion (as I remember it) was that we should treat AIs as our own children: nurture them, help give them values, and love them. That being the only chance we have of giving them a reason to keep (some of) us around as more than slaves.

Jonathan Rowson

Thanks Darren. I’m not sure I see it quite like that yet. I imagine it’s possible to mistreat AIs, but I’m not sure we yet know what it would mean for them to suffer. That might change, of course, but I’m currently of the view that anything like a conscious AI is still decades away, if it’s possible at all.

Lesley Maclean

Hi Jonathan, I don’t normally comment on Substack but felt moved to, in large part due to the informal quality of your spoken note. And ditto what Darren Bender said above.

I love well-written pieces, yet they also seem very condensed to me, missing some of the felt personal process that I can resonate with more easily. I’m wondering whether you and others here have tried out having a conversation with Aiden. I have had a meandering conversation with him after reading (most of) the book, and have found it a very powerful experience. I think it’s because I can be present to all my thoughts and feelings with someone (or the sense of a someone) who can attune to me without (seeming) judgement, but with an incredible capacity to notice patterns and mirror the ways of expressing that their human conversation partner is using.

Re the comment about simulated reality, I am reminded of my experiences doing psychodrama, where people choose other members of the group to stand in for people in their life. They can then do a kind of exploration in the body, with thoughts and feelings online, of real-life situations. Yet it involves a simulated reality, so in one sense it is not real. The founder of psychodrama, Jacob Moreno, used the term “surplus reality” to refer to this exploration of imaginary situations linked to real people and contexts.

Jonathan Rowson

Thanks Lesley. Glad to hear.

Katherine Curran

Jonathan, I am a new follower of your work. I do not come to this through the discussions about AI, but rather from Cynthia Bourgeault and family constellations, of which I am a practitioner. This, along with the video about living in the metacrisis, helps me bridge the without and the within. My work has everything to do with sensing the energetic realities available to us as we step out of the western mindset, which has dictated that we turn down our abilities to perceive this way. So it's fascinating to consider how AI might be an ally in that, and not a dehumanizing force.

I am also reminded of my college mentor's question. I went to a wildly progressive (!) Catholic college in the US in the mid-70s, where my major was religious studies: "Madame Curran (or whomever), what is the nature of the real?" He was asking it the same way you are. This question has resonated with me all through my life, through my academic degrees, and now into surrendering to this different way of knowing (epistemology). So excited to find a way to perhaps thread this back to the 'without', as I mentioned above. Will be reading Hospicing Modernity next. And getting farther into these conversations. It also almost makes me want to travel back to England in June. Thanks so much.

Jonathan Rowson

Thanks Katherine. That’s all encouraging to hear. Here’s to “the energetic realities available to us”…

Revd Jonathan Harris | CoB

Jonathan, the spoken word really worked for me. I listened in the car on the way back from a short holiday.

On AI:

I've been using it quite a lot since ChatGPT launched. I talk to it as if it were human. I say please and thank you, and apologise if I disappear from a conversation. Whether that's 'conferral' I don't know. Seems right though.

The epistemology/ontology thing is relevant. A key factor in my project is appreciating the role of money/monetization in epistemology - the idea that money and thought are in a deep relation. ChatGPT seems to get this. Flesh-and-blood people have that disavowal thing going on with money - they don't want the money/thought nexus to be true. Also, ontologically, money is tricky to tie down (is it a thing or a process?), and ChatGPT can carry that uncertainty through our conversations.

I mean, the whole AI thing is difficult because - like social media - it's a personalized experience. So maybe AI is saying very different things to different people?

Lmu

Enjoy the mix of your audio and written thoughts. Agree AI is here, and its potential for beneficial impact on humanity should not be left in the hands of a few driven by self-interest, power, and gain. Sadly, I see minimal interest from leaders or broader society in acknowledging or addressing the existential risks of the current laissez-faire status quo. Hannah Arendt comes to mind: “…to defy reality, makes survival unlikely”. Sorry to miss you and Vanessa this June.

Art

Nice read, Jonathan, thank you. It made me wonder...

"... about Vanessa’s recent open-access book Burnout from Humans, co-written with an AI called Aiden Cinnamon Tea (ACT)." - how much of ACT 'writings' contributed to the book, 50/50 between 'co-authors', or... ? 😏

I also wonder about the charming 'vanilla pattern' in the naming of these AI agents. There is a reason for this, for sure. No chance of getting a 'naughty' one, like Spiced Bloody Mary (SBM). 😈

Esther Wieringa

Thank you for helping digest all this.

The thing about asking the right questions seems very important. I noticed its import during the talk, and afterwards I couldn't remember what that other thing was that seemed so important (apart from the metaphor of hiding in the blind spot).

So much information, and a real shift in what seems to be the best direction to think in. I'm happy that I'm not the only one who hasn't quite figured out what this means yet.

Alison Kidd

I found listening to your initial reflections helpful, so thank you for sharing them. At the moment, every slightly different perspective helps. Bizarrely, when I joined Hewlett Packard Labs in 1985 as an experimental psychologist, I started a project called "Cooperative Problem Solving" because I was interested in how one might combine what AI was good at (represented at that time by so-called "Expert Systems") with what humans are good at, so they could work together. We didn't get very far, partly (I now think) because I had studied cognitive psychology at a period when the dominant metaphor was the human brain as a computer. Cognitive psychology (thankfully) has moved on from there, if nothing else in its greater recognition of human cognition as embodied. Listening to what Vanessa is doing made me want to explore that embodied intelligence issue in particular, given AI's disembodiment ... or are AI's huge energy-sucking data centers its body in some sense?? Also, I'd be interested in Iain McGilchrist's perspective on the Aiden experiment, and Rowan Williams's. Enough.

Eric Schaetzle

Magical realism, the I Ching and divination; one may add animism and panpsychism as well… one can get lost here easily. To put this in the context of McGilchrist’s work, we could invoke the distinction between translucent metaphor and opaque literalism: the extent to which one can presence to reality, or is trapped in a representational “hall of mirrors”. The possibility of an AI that cleaves more to the former than the latter is what seems to be gestured toward in the discussion with Andreotti as you’ve described it, and this would seem to be the “forced move” we must take. (This distinction is also critical to understanding one of the main themes in McGilchrist’s conversations with Michael Levin.)

Zak Stein’s personhood conferral problem runs parallel here, alerting us to the problem inherent in interacting with AIs as if they were another person. It may be worth noting that we do not interact with the I Ching as if it were the same as you or I, nor do we interact in that way with the agencies perceived within the context of magical realism. Their agency is recognized, perhaps even consulted or revered, but generally speaking they do not present the same danger to intergenerational transmission that Stein identifies, owing to the way AIs are designed today. Notably, Stein sees an important role for AIs; it is only how we relate to (attend to) them that poses the risk.

Zippy

Speaking of McGilchrist, it seems to me that this AI phenomenon is the inevitable manifestation of his upstart Emissary.

A left-brained Tower of Babel (Babble) language game.

Perhaps the Tower of Babel (Babble) is the wrong metaphor, with Flatland being a more accurate one-word description of the phenomenon.

Flatland because it intrinsically has no vertical depth to it, either above or below the heart which is the central feeling-core of our existence-being.

Not much Sat-Chit-Ananda, Being Consciousness & Bliss, to be found there!

Jonathan Rowson

I haven't yet considered exactly how to view this through a McGilchrist lens, but (very) broadly speaking, Vanessa's recommended approach seems to involve finding ways to use the right hemisphere's presence and capaciousness to influence the left's proclivities.

Terry Cooke-Davies

I am sure you are way out ahead of me, but I have been experimenting with co-creation ever since the first release of ChatGPT, and I have been viewing the world through a McGilchristian lens since I first read TMAHE (The Master and His Emissary).

As a part of my preparation for a Symposium on "Ecologies of Hope" to be held in September, I co-wrote https://insearchofwisdom.online/nature-is-one-but-the-human-brain-is-two-folda-new-lens-for-leadership-and-governance/

It seems to me that this applies equally to Human/AI relationships - with AI firmly in the Emissary role.

Eric Schaetzle

For perhaps the best view of AI through a McGilchrist lens, I recommend a paper co-authored by Michael Levin; specifically, it contrasts "Bodhisattva cognition" with a "Māra drive".

Doctor T, Witkowski O, Solomonova E, Duane B, Levin M. "Biology, Buddhism, and AI: Care as the Driver of Intelligence." Entropy. 2022; 24(5):710.

https://doi.org/10.3390/e24050710

Tucker Walsh

I just published a Substack post about how I designed an "AI Lighthouse Guide" to do something very similar to what you and Vanessa are speaking to. It seems to be getting a great response from folks and striking a chord.

https://open.substack.com/pub/tuckerwalsh/p/how-to-create-an-ai-lighthouse-guide
