24 Comments
Nov 27, 2023 · Liked by Jonathan Rowson

Thank you for this piece. As a fan of both 'Chess for Zebras' and 'The Moves That Matter', I'm interested to hear you thinking out loud about topics in which you have slightly less confidence. While I find myself in a similar state of 'stealth denial' about AI, it's also exciting to feel part of a conversation that, until recently, excluded non-scientists like me. But that also puts more of the onus on us to explore the topic.

Whenever someone says, 'AI can't do [X]', though, it usually pops up doing it the following week. And it's easy to refute the claim that AI can't do basic reasoning. I typed into the ChatGPT-4 window: "All Scottish people speak English. Jonathan is Scottish. What language does Jonathan speak?"

ChatGPT's answer: "Based on the information provided: All Scottish people speak English. Jonathan is Scottish. Therefore, Jonathan speaks English. This conclusion is derived using deductive reasoning, where a specific conclusion is reached based on a general statement."
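For what it's worth, the deductive step in that exchange can be mimicked in a few lines of ordinary code. This is only an illustrative toy with made-up predicate names (not how an LLM actually works), showing the same forward chaining from a general rule to a specific conclusion:

```python
# Illustrative only: the syllogism's premises as explicit facts and rules,
# with the conclusion derived by simple forward chaining.
facts = {("scottish", "Jonathan")}
rules = [
    # "All Scottish people speak English": scottish(X) => speaks_english(X)
    ("scottish", "speaks_english"),
]

def forward_chain(facts, rules):
    """Repeatedly apply each rule to each matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# the derived set now includes ("speaks_english", "Jonathan")
```

The interesting question, of course, is that GPT-4 reaches the same conclusion without any such explicit rule engine.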

I've also uploaded logic trees into ChatGPT, which it can 'read', scrutinising the logic and suggesting causal links you've missed. So unless I'm missing the point, critical reasoning is definitely on the way.

I've read Stuart Russell's Human Compatible, Marcus du Sautoy's The Creativity Code and many articles for a lay audience. I've got a bunch more books queued up: Nick Bostrom's Superintelligence is next after Mustafa Suleyman's The Coming Wave. Of course, books come out more slowly than the pace of change, but they remind us that AI has been around for decades yet has only recently entered general public consciousness. I'll follow the links you shared. I look forward to reading more.

author

Thanks Angus. Much appreciated. It’s true that I could have checked whether that line of reasoning was possible before sharing, but I wanted to write in the spirit of not knowing, which you seem to have appreciated. I did have this knowledge on good authority though, so I wonder if I’ve misrepresented the point. I know the syntax/semantics distinction matters, but I need to look more closely into why. I wonder now what the breakthrough was, assuming there was one. Any ideas?


Thanks, Jonathan. I've no idea about LLMs under the hood beyond basic explainers such as this one, which does an elegant job: https://www.theguardian.com/technology/ng-interactive/2023/nov/01/how-ai-chatbots-like-chatgpt-or-bard-work-visual-explainer

Often, though, I do find myself reflecting on my own verbal associations (e.g. choice of response, wordplay) or even lines of reasoning, which makes me think my own thought process is not far off how I imagine a Large Language Model to work. I'm semi-consciously sifting through mental word clouds interlinked in various ways (some surely subconscious), and often being surprised by what turns up. Any copywriter or poet is familiar with the experience.
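The 'mental word clouds' picture can be caricatured in code. The sketch below (toy corpus invented for illustration) builds a table of which word ever followed which, then rambles through those links; it is a vastly cruder cousin of what an LLM does, but it has the same 'being surprised by what turns up' quality:

```python
import random
from collections import defaultdict

# Toy corpus, invented for illustration.
corpus = ("the queen took the knight and the knight took the pawn "
          "and the bishop pinned the knight").split()

# Build the "word cloud": for each word, every word that ever followed it.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def ramble(start, steps, seed=0):
    """Walk the bigram links, picking each next word at random."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(steps):
        options = following.get(words[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(ramble("the", 8))
```

Real models work over learned high-dimensional representations rather than literal word tables, but the associative, probabilistic flavour is similar.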

I know the 'brain as computer' mental model was derided a few years ago as just an easy analogy to reach for given our information age (much as the 'brain as machine' was a product of the earlier industrial age). But perhaps the 'brain as computer' (or computer as brain?) is closer than we thought...

I wonder if you have similar feelings about how you work through a chess move, now that chess is no longer about brute force calculation?


I would say the claim that LLMs can't do reasoning is demonstrably false, or at least missing the point.

These models are certainly doing more than just statistical correlation between words.

Ilya Sutskever explains it well: “On the surface, it may look like we are just learning statistical correlations in text. But, it turns out that to just learn it to compress them really well, what the neural network learns is some representation of the process that produced the text. This text is actually a projection of the world; there is a world out there and it’s as a projection on this text.” (https://analyticsindiamag.com/text-is-a-projection-of-the-world-says-openais-sutskever/)
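The compression framing in that quote can be made concrete with a standard observation: statistically structured text compresses far better than structureless bytes, and a model that predicts text well is, in the same sense, a good compressor. The toy strings below are invented for illustration:

```python
import os
import zlib

# Repetitive, structured text versus the same number of random bytes.
structured = b"there is a world out there and text is a projection of it. " * 50
random_bytes = os.urandom(len(structured))

# A general-purpose compressor exploits statistical regularities in its
# input; random bytes offer none, so they barely compress at all.
print(len(structured), len(zlib.compress(structured)))
print(len(random_bytes), len(zlib.compress(random_bytes)))
```

Sutskever's point is the converse: pushing compression (prediction) hard enough forces the model to internalise the regularities of whatever produced the text.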

Anyone who has tried using GPT-4 to aid with computer programming will know that these models are capable of reasoning and logic.

What they do NOT have is reliability (at least not yet). They still sometimes make very silly mistakes that a human would never make. But the relevant question for most applications is what they are capable of most of the time, not whether they achieve 100% reliability.
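One common mitigation for that unreliability is to ask the same question several times and keep the majority answer (often called self-consistency). The sketch below fakes the model with a made-up `flaky_model` function that is right 80% of the time; it is a toy, not a real API call:

```python
import random
from collections import Counter

def flaky_model(question, rng):
    """Stand-in for an LLM that is usually right but sometimes slips."""
    return "4" if rng.random() < 0.8 else "5"

def majority_vote(question, samples=25, seed=0):
    """Query the (fake) model several times and return the most common answer."""
    rng = random.Random(seed)
    votes = Counter(flaky_model(question, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(majority_vote("What is 2 + 2?"))
```

An 80%-reliable answerer, sampled 25 times, gives the wrong majority only rarely, which is the sense in which "capable most of the time" can be engineered towards "reliable enough".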

I think there's also more of a debate about whether they have creativity or not.

author

Thanks Rory. I clearly overstated the case (which was, as indicated, a kind of guess) and already admitted as much to Angus, though I did say 'semantic reasoning' rather than reasoning as such, and your message helps to make sense of why that makes a difference. Thanks for the Sutskever link. The quote you offer raises a lot of questions, but I'll get back to it when I revisit my AI self-education shortly.


Even though GPT-4 can do a lot of reasoning-type tasks, that is mainly because many such tasks were included in its (massive) training dataset. It has much less ability than humans to generalise to completely novel tasks; this, in my opinion, is the key difference. So humanity still has a clear advantage (for now!).

I work as a researcher in AI Safety, I'm always happy to chat about AI if you're interested! (We actually met in person once before when you gave a talk at my old company: Improbable)


I enjoyed Marcus du Sautoy's philosophical approach to the question of creativity, Rory. In 'The Creativity Code' he explicitly links creativity and consciousness, including our human knowledge of being mortal. He wonders: what if a machine does become conscious? Would we know? How? And how would it tell us?

Referencing Wittgenstein, he writes, 'If a lion could talk, we probably wouldn't understand it.' And then, ultimately, if machines become conscious we will only understand them 'through their paintings, their music, their novels, their creative output, even their mathematics' (p.306). Which neatly flips the 'are machines creative?' question on its head. They already do creative things, and give us sparks or springboards to become more creative ourselves. But creativity, in du Sautoy's eyes, is tied to consciousness.

Nov 28, 2023 · Liked by Jonathan Rowson

Fellow chess player here who has browsed your Grunfeld book and appreciated your thoughts on Hans Niemann. I am a computer scientist, ex-corporate strategist and now entrepreneur, and I have been working with AI since 2014.

Thoughts on AGI, core concepts and people to follow on X.

On AGI itself:

- The AI community is convinced that AGI will happen
- Expected time to AGI is now seven years, i.e. around 2030 (Metaculus)
- Robotics built on AGI is also relevant, as it impacts manufacturing, agriculture, transport, construction, defence, etc.
- AGI upside: science, creativity, imagination, organisational structures and productivity breakthroughs, new industries formed, plus new forms of human-AI companionship
- AGI downside: restructuring, some or many blue-collar and white-collar job losses
- Very long-term implication of AGI: universal basic income?

What about AI regulation?

- Very hard to coordinate globally
- Algorithms are freely available via GitHub and research papers
- If you’re a state actor or large corporation, all you need is data and compute

What is required to achieve AGI?

- More data, more compute (GPUs), and new breakthroughs in algorithms
- Data and compute are not the core issue
- New breakthroughs in algorithms will require research and experimentation

Is there an existential threat?

- The evidence presented by “AI doomers” is sketchy
- The only credible near-term scenario I can envisage is as follows:
  - Imagine a future AI agent running on top of an AI model
  - The model is a next-generation Large Language Model like ChatGPT, but can also generate, host and execute software code
  - A human assigns the agent a goal such as “destroy humanity”
  - The agent uses the model to break “destroy humanity” down into sub-tasks
  - Lots of trial and error and iteration happens
  - Now imagine a swarm of AI agents with the same goal, using different versions of AI models for added variety
  - Eventually this might lead to bad scenarios
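The agent pattern in that scenario (minus the apocalyptic goal) is mechanically simple. This is a toy sketch in which `plan` and `attempt` are invented stand-ins for calls to a model:

```python
def plan(goal):
    """Stand-in for asking a model to decompose a goal into sub-tasks."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def attempt(task):
    """Stand-in for executing one sub-task (a real agent might run code here)."""
    return f"done: {task}"

def run_agent(goal, retries=3):
    """Decompose the goal, then work through the sub-tasks with retries."""
    results = []
    for task in plan(goal):
        for _ in range(retries):  # the trial-and-error loop
            outcome = attempt(task)
            if outcome:  # a real agent would check success properly
                results.append(outcome)
                break
    return results

print(run_agent("summarise the AGI debate"))
```

A swarm is just many such loops running with different models; everything interesting (and worrying) lives inside the model calls the stand-ins replace.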

Core concepts you might want to look at:

- Large language models (LLMs) are the backbone of systems like ChatGPT
- Foundation models are LLMs trained on all books plus all internet text, images, videos, etc. They are expensive, costing hundreds of millions of dollars or more to build
- Fine-tuned models are also LLMs: smaller, and trained for specific tasks. You take a foundation model and fine-tune it, at a cost of hundreds of thousands of dollars or less
- Costs are dropping fast while capabilities are getting better
- You can see foundation models as generalists and fine-tuned models as specialists
- We are heading towards a world where both will exist and talk to each other, perhaps in some distributed manner

Industry structures worth considering:

- Do we want a closed AI oligopoly or open-source-led AI? This is an important question, as it has implications for the AI industry, for every other industry, and for the humans who will use AI and AGI
- Issues to consider include economic control, managing information flow in free societies, and how quickly AI innovation happens
- A closed AI oligopoly would be a few VC-backed AI companies that are GPU-rich and can hire the best AI researchers to build foundation models
- Open-source AI rests on a few companies like Meta and Mistral that create foundation models and hand them over for free to the AI community to fine-tune
- Closed AI is easier to regulate, which matters if you believe the existential risks or downsides outweigh the upside
- Open-source AI will lead to better and faster innovation and more equitable scenarios

Best people to follow (I’m in the open-source camp, so that’s the space I know better):

- @ylecun - AI researcher now leading efforts at Meta
- @AndrewYNg - AI researcher now building AI companies
- @bindureddy - AI CEO who understands tech and business
- @ClementDelangue - CEO of HuggingFace, a platform widely used by the AI community
- @Dan_Jeffries1 - futurist who often tweets about AI

author

Thank you so much Anu. I'll take some time with this and get back to you.

As an aside, I notice something here. I am generally quite candid about what I don't know, but weirdly, when talking to chess players especially about these things I actually feel a little sheepish and embarrassed because that's a domain where I have genuine expertise, and somehow the gap between speculative inquiry and expertise feels uncomfortable when they co-arise in the same context/conversation. Professionally I am obliged to learn more about AI, because my work now concerns futurology, rather than Grunfeld theory. On balance, I am very grateful for that...

J+

Nov 28, 2023 · Liked by Jonathan Rowson

Chess has given me so much. I can concentrate and think through issues much better. It’s great to have people like yourself with knowledge and insight across many domains looking at topics like AI. It takes honesty and humility to ask questions you are asking and it is a great way I believe to start a dialogue that leads to a common and clearer understanding of what is a complex and important issue. By the way, I also liked your thoughts on meditation, an area I am getting better at.


My journey with AI has led me to begin creating My Personal AI and I am looking for someone with a metamodern worldview to join me in my project. https://www.personal.ai/

author

Not me!! I have heard of this idea that everyone will soon have their own digital Jeeves, their own personalised AI, but I find the idea dystopian. I might grow out of that feeling though. I’m not sure what it would mean to encode an AI as metamodern, but I imagine Jason Ānanda Josephson Storm has already thought about it, and you’d probably need to get into the deep theoretical roots of things to make it work.


What I am trying to do is build my AI to be as much like the real me as possible. And My Personal AI will have a memory that is better than my limited biological memory. My AI will only be metamodern to the extent that I myself am trying to be so. If successful, it will help me achieve scale, interacting with more people than I could alone, without AI assistance. And I would love to interact, for example, with a Jonathan Rowson AI trained on everything you have put in the public domain.


Interesting piece. I'm hoping to find time to explore China's investment in cognitive (as opposed to generative) AI. I may start here, though can't vouch for it: https://cset.georgetown.edu/publication/chinas-cognitive-ai-research/

author

Yes I had heard about this paradigmatic divergence, though I’m not sure what to make of it. On the more general issue, I wonder if the presumption of AGI being obtainable is similar in spirit to the presumption that consciousness is ultimately an epiphenomenon of matter...It might be conventional wisdom among many experts and yet not true, though it could take decades to show that...


Interesting thought! In general, it seems as though the evolving conversation on AI is starting to help us re-think cognition, embodiment, consciousness - all these things, and the relationships between them. Marius Dorobantu is amongst those doing interesting work: https://research.vu.nl/en/persons/marius-dorobantu. It quickly becomes a bit technical for a humble historian like myself (!), so if you come across good communicators on these themes, please let me/us know! One thing I'm looking at now is how Japanese culture is shaping these debates - robots are a cliché, of course, but they do say something about the importance of embodiment in Japan.


Fantastic article! Your insights are not only well-researched but also presented clearly and engagingly. Artificial Superintelligence (ASI) represents the pinnacle of human ingenuity and technological advancement. As we venture into the realm of creating machines that surpass human intelligence, we stand at the precipice of a new era, one where the boundaries between science fiction and reality blur. I have also found an article I thought very informative; do read it: https://www.mobileappdaily.com/knowledge-hub/what-is-artificial-general-intelligence


Sweet Christmas, it's like you were reading my mind in those first few paragraphs 😂 Thanks for surfacing an emerging way forward for us NPCs 👾


And even tricky riddles...

Prompt: A man walks into an art gallery and concentrates on one particular oil painting. The watchman notices this and asks the man why he's so interested in that one oil painting. The man replies, "Brothers and sisters have I none, but that man's father is my father's son." Who's in the oil painting?

ChatGPT: The man in the oil painting is the son of the man who is looking at the painting.

Here's the explanation: When the man says, "that man's father is my father's son," he is referring to himself because "my father's son" is the man himself (he doesn't have any brothers, as stated at the beginning). So, he is essentially saying, "that man's father is me." Therefore, the man in the oil painting is his son.


Me: All Scottish people speak English. Jonathan is Scottish. Therefore what language does Jonathan speak?

ChatGPT: Based on the information provided, Jonathan, being Scottish, speaks English.


Hi Jonathan… in terms of embodiment as the enabler of “intelligence”, I would suggest you keep an eye on Mark Solms’s work. Using a rich blend of neuro-psychoanalysis and the systems maths of Karl Friston, he believes that a “felt” artificial consciousness is functionally generatable - arising from valence, serving evolutionary imperatives. The “embodiment” of that feeling entity could be in a virtual world obeying physical laws, as well as material-robotic (the “Markov blanket”, as a mathematics of interiority/exteriority, seems to be important). I’d recommend this from London Futurists as a way in: https://youtu.be/L9h8-HFmcjE?si=ZzedwDaKv8n3twOg


The scary part being when/if this “limbic” motivational AI system is linked up with the more “neo-cortical” extensions of LLMs and GPTs. Perspectiva should be getting involved with AI companies and preventing “chimp-like” emotions from driving computation. What if these things *begin* as bodhisattvas?
