What do I know about Artificial Intelligence?
Not much, but not nothing, and here's why I am keen to learn more.
There are the tech bros, the tech fans, the tech world, the real world fawning over the tech world, tech users, tech investors, and tech critics. And then there are people who observe technological developments with a mixture of bemusement, alienation, trepidation, resentment, half-baked understanding, half-hearted excitement, fake awe, and general unease.
For these people, their mixed feelings sometimes co-arise with a confected humanism that shields them from their incipient mid-life crisis, their lack of relevant knowledge, their unprocessed fear, and their reflex indignation at being excluded from what is tacitly presented as the new main arena. These reactions are no doubt prompted by a sense of FOMO (fear of missing out), a related fear of being an NPC (a non-player character), as well as a desire to protect one's own sense of being relevant. (There is also a lot of suppressed ANGER in there, though it's kept under a tight lid.)
Such people downplay the relevance of technology, while partly wishing they were back in the nineteen nineties watching Friends before any of this happened; when postcards were a thing, before the internet became our Axis Mundi, before social media turned our daily lives into a performance, before the smartphone distracted us within an inch of our lives, and before AI was something they were supposed to understand well enough to be terrified of.
Yet in their better moments, these people feel too young to capitulate to nostalgia. The game they feel excluded from is becoming too pervasive to ignore, and the stakes, so they are told, are just too high. So such people hope they are mature enough to take responsibility to learn what needs to be learned so that they can establish a meaningful relationship with this new putative lead character who has entered our stage unbidden. And while such people are not actually scared, they often wonder if they should be.
Needless to say, I am one of these people.
What I am describing is not technophobia, because it’s not about fear as such, but more like a sense of disorientation caused by the lack of intelligibility on a matter of apparently great concern.
What I feel is more like ‘tech incredulity’, in which incredulity refers to the state of not wanting or being able to believe something. So for instance, I don’t want to believe that AI will soon change the world beyond recognition, nor do I want to believe that Artificial General Intelligence or ‘Superintelligence’ is an existential threat to humankind - so you can stick your paperclip maximizer up your jacksie.
Moreover, I am not really able to believe these things, because I don't understand the basis of the claims well enough to see how it plays out. AI is of course only one kind of 'tech', and there are developments in quantum computing, synthetic biology, gene editing, virtual reality, and elsewhere that are also worthy of our intelligence. But just as climate change has been described as "a nexus of systemic risk", I think we can see AI this way too: as an ambiguous multiplier, in this case, we are told, of both threats and opportunities.
*
I am not a philosopher of technology, and I don’t have domain-specific expertise or even very informed thoughts to share on AI. However, since what we now conventionally call AI and our response to it (including what we call it!) is clearly part of the metacrisis, and since I had three conversations about AI this weekend, I feel a personal and professional responsibility to understand it better, and so this seems a good moment to lead from confusion.
As I’ve argued elsewhere, when a complex issue arises in the public sphere, I think the democratic responsibility is to try to make sense of it as well as you can within the constraints of your life. The response should be a kind of epistemic appetite and agility that leads you to look for good source material but not to entirely defer to experts (who often disagree with each other), nor to simply follow the views of your social tribe, but to clarify what you do and don’t understand about what is going on, what you feel is at stake, and what the turning points to look out for might be.
*
I confess I have been in a kind of denial about artificial intelligence. I don’t mean the denial of its very existence. Well actually maybe I do mean that, though not in the way you might imagine.
AI is clearly here, with us, growing in its influence, and perhaps a harbinger of existential risk. Just this weekend I was thinking about what the ‘behind-the-scenes’ story at OpenAI might have been, tweeted for advice, and followed the trail. I also had a voice note from a colleague saying there is something to understand here, and I had a conversation in the park with another parent from my son’s primary school who happens to be a professor of computer science. The next day I was in a discussion with an old university friend who said he thinks the threat to humanity from AI is several orders of magnitude bigger and more difficult to deal with than climate change. There is no hiding from the issue.
However, it's not clear AI is really 'intelligence', and I wonder how much might change if we stopped talking as if it is. It might make it less scary, partly because it makes it much less like us, and therefore, perhaps, because it is we who are threatening, less threatened by the thing that is not, after all, like us! It's not clear to me that intelligence, as such, can be artificial, because the heart of what makes intelligence intelligent, and not merely an amalgam of memory, prediction, and processing power, may be inextricably linked to our embodiment, situational understanding, social conventions, and perhaps even spiritual orientation.
Intelligence is used in everyday language as a buzzword for something like 'desirable feature conferring problem-solving powers and social status', but there is, as the academics say with insufficient irony, 'a literature' on intelligence which highlights just how complex and contested a notion it is. I studied the idea of intelligence as part of my Ph.D. on Wisdom, as a conceptual comparison (along with creativity) to tease out differences. I am also steeped in debates about the validity of multiple intelligences, which I examined under the guidance of Howard Gardner when I was at Harvard. I believe there is decent evidence that there might be something like 'G' or general processing power, but also that 'IQ' is ultimately just a matter of 'what the tests test' rather than a good measure of the broader educational and cultural notion of intelligence. I believe this much fuller notion of intelligence relates to adaptability to ecological niches and problem-solving informed by contextual understanding, cultural value, and social practices.
So much for intelligence, which is another conversation. What I mean by being in denial about AI is more about 'stealth denial' - the term I coined back in 2013 for a pervasive form of denial about climate change, namely ostensibly not denying that the problem is there, while living in such a way that it makes no significant difference. This might look bizarre to someone like Eliezer Yudkowsky, who has been warning for years about the lack of value alignment between humans and AI and now seems to believe we humans are too late, out of time, soon to be vanquished. I genuinely don't know if that's true; I doubt it, but I live as if I don't have a responsibility to find out, which is a kind of not knowing. I do know there is such a risk. And yet, as the old Chinese saying goes: to know, and not to act, is not to know.
*
I have been trying to learn how to know, but it's not easy. While I was helping World Champion Viswanathan Anand prepare for his World Chess Championship match with Vladimir Kramnik in 2008, it was already clear that the main role of a human Grandmaster like me was not to come up with new ideas but to 'drive' the analytical engines we were using (AIs) and occasionally steer them towards variations they would not necessarily consider by themselves. (Part of the reason I stopped playing chess seriously is that game preparation became primarily a matter of interfacing with AI and remembering what it disclosed.)
In 2014, I read Nick Bostrom's book Superintelligence in preparation for chairing an RSA event, but I was sick on the day and didn't get to meet him, which is a pity because I had questions… For those who don't know Bostrom's work, his open-access material on existential risk is worth perusing. One of his fears is that an AI works on the basis of optimisation, and if you ask it to optimise for, let's say, the end of cancer, there is a risk that it will choose a solution that is unimaginable to humans, like killing us all in order to get rid of cancer. If risk is a function of likelihood (how probable) and harm (how bad), then existential risk from AI should be coded as high, because even if the likelihood of such a wipeout is low, the harm is potentially devastating and unrecoverable. It is not clear what follows politically.
The following year I read Frank Pasquale’s The Black Box Society in preparation for chairing an RSA event on Big Data (which is an inextricable part of AI today). Frank’s concern is that decisions will be taken about us without our understanding. I think I had this book in mind when I refused to give my data as a condition of entry for buying my older son a sandwich in Tesco, and related fears are present with respect to data being used, for instance, as a form of social credit as they are in China.
More recently, in 2020, I interviewed my fellow OSF fellow William Isaac of DeepMind's ethics team on Can an Algorithmic Society be ethical? While preparing for that conversation I learned about intellectual debt, which is about the kinds of theory-less knowledge that AIs can generate. In many cases, even the engineers don't know why a given AI comes up with the answer it does. In addition to 'hallucinations', when the LLMs make things up, the growth of intellectual debt is a genuine cause for concern.
In the early days of Perspectiva I created a Digital Ego project and commissioned essays by Dan Nixon: What is this? Why we need to continually question our online experience and Tom Chatfield: Unflattening the Screen: Finding Virtue in the Virtual. These are great think pieces, but I have struggled to keep Perspectiva thinking about technological matters, and since it is not my natural language, I feel I need help here.
I listened to Stuart Russell’s Reith Lectures on AI in 2021 and they are highly recommended. The last lecture about autonomous weapons is particularly terrifying.
After ChatGPT-4 came out I watched several videos, including the warning from Tristan Harris and Aza Raskin in The AI Dilemma that we were at a crucial juncture where we had to take stock before LLMs became so thoroughly integrated into society that it was too late to change them. I also watched Zak Stein describe 'the alignment problem' and why he felt (as I understood him) it couldn't be solved. More optimistically, I enjoyed listening to Jaron Lanier on UnHerd arguing How Humanity could Defeat AI. I also feel reassured that my friend and Perspectiva Associate Mark Vernon feels remarkably untroubled by artificial intelligence, and suggests we should not believe the hype. He offers ten ways to understand AI, and suggests that the ultimate antidote to artificial intelligence might be spiritual intelligence, as argued in his Perspectiva essay and elsewhere.
So it's not as if I have been completely out of the picture. My instinct, and my hope (and therefore not a trustworthy hunch!), is that there are upper limits to what any AI can achieve, because it is not able to do the 'easy' things relating to situated embodiment, and that in many cases it might get roughly 90% of the way toward something and look extraordinary, but no further, because of unassailable limits. There is an essay by David Deutsch that makes this kind of claim.
But if I am honest, I do feel I have some work to do.
I can’t say I honestly understand the risks in play, particularly the existential risk.
I can’t say I really share the optimism of those who speak of how AI can help us. I can see how it can help medically, and perhaps educationally, but I am far less clear on how it could help economically, sociologically, ecologically, or politically, where our deepest problems are.
I can’t say I believe AGI is likely, but I have never had to articulate why.
I can’t say I truly understand the alignment problem.
I wonder if AI will ever approach human intelligence, never mind super intelligence.
I’m keen to understand each of these things better. So I decided to work a bit harder to clarify my feelings about AI, just as I did on climate change when I first started thinking about it in 2012 or so. I started by trying to make sense of recent news relating to Sam Altman being fired and rehired.
*
My impression is that the general concern with the ethics of AI at OpenAI may have reached a critical moment relating to the cusp of an important technological breakthrough. If that’s so, the firing and rehiring were probably about the balance of power relating to the balancing of safety fears and commercial interests (this video is recommended as an overview).
As I now understand it (and that's a note of caution!) the large language models that have so impressed everyone in recent months are basically (very) glorified Google searches. Though they offer impressive summaries and decent first drafts of many things, they cannot do basic reasoning, never mind match human intelligence or creativity. Since they work with algorithms that encode the structure of language and predict the next word of any given sentence from their memory/data, they depend on syntax - the arrangement of words and phrases in a particular order - rather than semantic meaning. That means they cannot do basic logical processes that rely on semantic meaning. For instance: All Scottish people speak English. Jonathan is Scottish. Therefore Jonathan speaks English. Despite their massive processing power, that kind of basic semantic reasoning is, apparently, beyond the LLMs. There's hope for us yet!
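To make the 'predict the next word from memory/data' idea concrete for myself, here is a deliberately crude toy in Python. It is my own illustration, not how real LLMs actually work (they use neural networks trained on vast corpora rather than simple counts), but it shows the basic move of continuing a sentence purely from surface statistics of word order, with no grasp of meaning:

```python
# A toy illustration of "predicting the next word" from observed word order.
# Real LLMs are vastly more sophisticated; this only shows the basic idea of
# continuing text from surface statistics rather than semantic understanding.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which word in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat', chosen from counts alone, not 'understanding'
```

The toy 'knows' nothing except which word followed which; scaled up enormously, that is roughly the worry about syntax without semantics described above.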
But if that competence were, so to speak, to come online - and it might have done, behind the scenes, over the last few weeks - then simple semantic reasoning for AIs would become possible, and combined with the existing power of LLMs this could be a massive breakthrough with consequences that are hard to predict. Please note that such a breakthrough might also not have huge implications, but intuitively I can see why it might. If you already have a way to approximate 'that' (any given phenomenon explained) and then you have a tool that allows you to say 'if that then this', suddenly you have a new dimension of computation that could make a series of new inferences, leading both to new insights and to highly consequential mistakes. This is my guess, it is hazy, and I could well be wrong. I share this example just to report that I feel better, and less afraid, now that I have my own (revisable!) sense of what might be going on.
*
While I have many interests other than AI, and there will definitely be breaks between posts on this subject matter, I will sometimes be thinking out loud about AI in this substack over the next few months. I will try to share my confusion, tracing my relationship to AI and trying to come to a more settled view of what is going on and what’s at stake. I hope it might be useful to others who are trying to find their way. I am trying to fathom exactly what the risk is, how I feel about it, whether the ‘alignment’ problem is a real thing, and what might follow for regulation, or any other kind of collective human action.
So if you have books, blogs, essays, social media accounts, podcasts, or videos you can recommend to help me on this particular journey, please let me know. I don't have infinite time (none of us do), so if you can say why you are recommending something, that is particularly appreciated.
Thank you!
Thank you for this piece. As a fan of both 'Chess for Zebras' and 'The Moves That Matter', I'm interested to hear you thinking out loud about topics for which you have slightly less confidence. While I find myself in a similar state of 'stealth denial' about AI, it's also exciting to feel part of a conversation that, until recently, excluded non-scientists like me. But that also puts more of the onus on us to explore the topic.
Whenever someone says, 'AI can't do [X]', though, it usually pops up doing it the following week. And it's easy to refute the claim that AI can't do basic reasoning. I typed into the ChatGPT4 window: "All Scottish people speak English. Jonathan is Scottish. What language does Jonathan speak?"
ChatGPT's answer: "Based on the information provided: All Scottish people speak English. Jonathan is Scottish. Therefore, Jonathan speaks English. This conclusion is derived using deductive reasoning, where a specific conclusion is reached based on a general statement."
I've also uploaded logic trees into ChatGPT which it can 'read', scrutinise your logic and suggest causal links you've missed. So unless I'm missing the point, critical reasoning is definitely on the way.
I've read Stuart Russell's Human Compatible, Marcus du Sautoy's The Creativity Code and many articles for a lay audience. I've got a bunch more books queued up. Nick Bostrom's Superintelligence is up next after Mustafa Suleyman's The Coming Wave. Of course, books come out slower than the pace of change, but they remind us that AI has been around for decades yet has only recently entered the general public consciousness. I'll follow the links you shared. I look forward to reading more.
Fellow chess player here who has browsed your Grunfeld book and appreciated your thoughts on Hans Niemann. I'm a computer scientist, ex-corporate strategist and now entrepreneur. I have been working with AI since 2014.
Thoughts on AGI, core concepts and people to follow on X
- The AI community is convinced that AGI will happen
- Expected time to AGI is now about 7 years, i.e. around 2030 (Metaculus)
- Robotics built on AGI is also relevant, as it impacts manufacturing, agriculture, transport, construction, defence, etc.
- AGI upside: science, creativity, imagination, organisational structures and productivity breakthroughs, new industries formed, plus new forms of human-AI companionship
- AGI downside: restructuring, some or many blue-collar and white-collar job losses
- Very long-term implications of AGI: universal basic income?
What about AI regulation?
- Very hard to coordinate globally
- Algorithms are freely available via GitHub and research papers
- If you're a state actor or large corporation, all you need is data and compute
What is required to achieve AGI?
- More data, more compute (GPUs), and new breakthroughs in algorithms
- Data and compute are not the core issue
- New breakthroughs in algorithms will require research and experimentation
Is there an existential threat?
- The evidence presented by "AI doomers" is sketchy
- The only credible near-term scenario I can envisage is as follows (a toy structural sketch follows after this list):
  - Imagine a future AI agent running on top of an AI model
  - This AI model is a next-generation large language model like ChatGPT, but one that can also generate, host and execute software code
  - A human assigns this AI agent a goal such as "destroy humanity"
  - The AI agent uses the AI model to break down "destroy humanity" into sub-tasks
  - Lots of trial and error and iteration happens
  - Now imagine a swarm of AI agents with the same goal, using different versions of AI models for some added variety
  - Eventually it might lead to bad scenarios
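To picture the structure being described, a minimal hypothetical sketch may help: an 'agent' loop sitting on top of a model, decomposing a goal into sub-tasks and iterating. The function names (`ask_model`, `run_agent`) and the harmless goal are placeholders of mine, not anything from a real system:

```python
# Hypothetical sketch of an "AI agent running on top of an AI model".
# ask_model() stands in for a call to some LLM API; here it returns canned
# text so the control flow can be read end to end with a harmless goal.
def ask_model(prompt: str) -> str:
    canned = {
        "Break the goal 'tidy my inbox' into three sub-tasks.":
            "1. Archive old newsletters\n2. Flag unanswered emails\n3. Unsubscribe from spam",
    }
    return canned.get(prompt, "done")

def run_agent(goal: str) -> None:
    # 1. Ask the model to decompose the goal into sub-tasks.
    plan = ask_model(f"Break the goal '{goal}' into three sub-tasks.")
    sub_tasks = [line.split(". ", 1)[1] for line in plan.splitlines()]

    # 2. Attempt each sub-task, retrying a few times on failure
    #    (the "lots of trial and error and iteration" step).
    for task in sub_tasks:
        for attempt in range(3):
            result = ask_model(f"Do this: {task} (attempt {attempt + 1})")
            if result == "done":
                break

run_agent("tidy my inbox")
```

The point is only the shape of the loop: goal in, model-generated plan, repeated attempts; a swarm would simply run many such loops in parallel against different models.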
Core concepts you might want to look at
- Large language models (LLMs) are the backbone of systems like ChatGPT
- Foundation models are LLMs trained on all books plus all internet text, images, videos, etc. They are expensive, costing hundreds of millions of dollars or more to build
- Fine-tuned models are also LLMs: smaller, and trained for specific tasks. You take a foundation model and fine-tune it (see the sketch below). They cost hundreds of thousands of dollars or less
- Costs are dropping fast while capabilities are getting better
- You can see foundation models as generalists and fine-tuned models as specialists
- We are heading towards a world where both will exist and will talk to each other, perhaps in some distributed manner
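As a concrete (and heavily simplified) illustration of 'take a foundation model and fine-tune it', here is a sketch using the Hugging Face transformers and datasets libraries. The base model `distilgpt2` and the file `my_domain_corpus.txt` are placeholders of mine; real fine-tuning involves far more data, compute and evaluation than this:

```python
# Minimal sketch: fine-tune a small, generalist language model on a
# domain-specific text file to get a "specialist" model.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "distilgpt2"  # a small, freely available GPT-style model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder corpus: in practice this would be your domain-specific text.
dataset = load_dataset("text", data_files={"train": "my_domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # nudges the generalist model towards the specialist corpus
```

The design point is the asymmetry described above: building the base model is the expensive part; adapting it to a niche is comparatively cheap.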
Industry structures worth considering
- Do we want a closed AI oligopoly or open-source-led AI?
- This is an important question, as it has implications for the AI industry, for all other industries, and for us humans who will use AI and AGI
- Issues to consider include economic control, managing information flow in free societies, and how quickly AI innovation happens
- A closed AI oligopoly means a few VC-backed AI companies that are GPU-rich and can hire the best AI researchers to build foundation models
- Open-source AI is built on a few companies, like Meta and Mistral, that create foundation models and hand them over for free to the AI community to fine-tune
- Closed AI is easier to regulate, which is important if you believe that the existential risks or downsides outweigh the upside
- Open-source AI will lead to better and faster innovation and more equitable scenarios
Best people to follow
I'm in the open-source camp, so that's the space I know better. The best people I follow are:
- @ylecun - AI researcher now leading efforts at Meta
- @AndrewYNg - AI researcher now building AI companies
- @bindureddy - AI CEO who understands tech and business
- @ClementDelangue - AI CEO for HuggingFace, a platform widely used by AI community
- @Dan_Jeffries1 - futurist who often tweets about AI