Discussion about this post

Angus Grundy

Thank you for this piece. As a fan of both 'Chess for Zebras' and 'The Moves That Matter', I'm interested to hear you thinking out loud about topics in which you have slightly less confidence. While I find myself in a similar state of 'stealth denial' about AI, it's also exciting to feel part of a conversation that, until recently, excluded non-scientists like me. But that also puts more of the onus on us to explore the topic.

Whenever someone says, 'AI can't do [X]', though, it usually pops up doing it the following week. And it's easy to refute the claim that AI can't do basic reasoning. I typed into the ChatGPT-4 window: "All Scottish people speak English. Jonathan is Scottish. What language does Jonathan speak?"

ChatGPT's answer: "Based on the information provided: All Scottish people speak English. Jonathan is Scottish. Therefore, Jonathan speaks English. This conclusion is derived using deductive reasoning, where a specific conclusion is reached based on a general statement."

I've also uploaded logic trees into ChatGPT, which it can 'read', scrutinising your logic and suggesting causal links you've missed. So unless I'm missing the point, critical reasoning is definitely on the way.
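For what it's worth, the syllogism in that prompt is a single step of classical deduction (universal instantiation plus modus ponens), which a few lines of code can perform without any learning at all. A minimal forward-chaining sketch, with all names and the fact format invented purely for illustration:

```python
def forward_chain(facts, rules):
    """Apply rules of the form (predicate_in, predicate_out) to every
    matching fact until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pred_in, pred_out in rules:
            for fact in list(facts):
                pred, _, arg = fact.partition("(")
                if pred == pred_in:
                    derived_fact = f"{pred_out}({arg}"
                    if derived_fact not in facts:
                        facts.add(derived_fact)
                        changed = True
    return facts

# "All Scottish people speak English" becomes a rule over any individual;
# "Jonathan is Scottish" is the starting fact.
facts = {"scottish(jonathan)"}
rules = [("scottish", "speaks_english")]

derived = forward_chain(facts, rules)
print("speaks_english(jonathan)" in derived)  # True
```

The interesting part is that an LLM reaches the same conclusion from plain English, with no explicit rule engine at all.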

I've read Stuart Russell's Human Compatible, Marcus du Sautoy's The Creativity Code and many articles for a lay audience. I've got a bunch more books queued up: Nick Bostrom's Superintelligence is up next after Mustafa Suleyman's The Coming Wave. Of course, books come out more slowly than the pace of change, but they remind us that AI has been around for decades yet has only recently entered the general public's consciousness. I'll follow the links you shared. I look forward to reading more.

Anu

Fellow chess player here who has browsed your Grünfeld book and appreciated your thoughts on Hans Niemann. I'm a computer scientist, ex-corporate strategist and now entrepreneur, and I have been working with AI since 2014.

Thoughts on AGI, core concepts and people to follow on X

- The AI community is convinced that AGI will happen

- Expected time to AGI is now 7 years, i.e. 2030 (Metaculus)

- Robotics built on AGI is also relevant, as it affects manufacturing, agriculture, transport, construction, defence, etc.

- AGI upside: science, creativity, imagination, breakthroughs in organisational structures and productivity, new industries, and new forms of human-AI companionship

- AGI downside: restructuring, with some or many blue-collar and white-collar job losses

Very long-term implications of AGI

- Universal basic income?

What about AI regulation?

- Very hard to coordinate globally

- Algorithms are freely available via GitHub and research papers

- If you're a state actor or large corporation, all you need is data and compute

What is required to achieve AGI?

- More data, more compute (GPUs), and new breakthroughs in algorithms

- Data and compute are not the core issue

- New breakthroughs in algorithms will require research and experimentation

Is there an existential threat?

- Sketchy evidence presented by “AI doomers”

- The only credible near-term scenario I can envisage is as follows:

  - Imagine a future AI agent running on top of an AI model

  - This AI model is a next-generation large language model like ChatGPT, but it can also generate, host and execute software code

  - A human assigns this AI agent a goal such as “destroy humanity”

  - The AI agent uses the AI model to break “destroy humanity” down into sub-tasks

  - Lots of trial and error and iteration happen

  - Now imagine a swarm of AI agents with the same goal, using different versions of AI models for added variety

  - Eventually this might lead to bad scenarios
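The loop described above — a planner model decomposes a goal into sub-tasks, the agent executes them, and failures are retried — can be sketched with stubs. Everything here is hypothetical (the "model" is a fake, and the goal is a benign placeholder); a real agent would call an LLM API and real tools:

```python
# Minimal agent loop sketch: decompose a goal, execute sub-tasks,
# retry failures (the "trial and error" step). Stubs throughout.
import random

def stub_model_decompose(goal):
    """Stand-in for an LLM call that proposes sub-tasks for a goal."""
    return [f"{goal}: step {i}" for i in range(1, 4)]

def stub_execute(task, rng):
    """Stand-in for tool use / code execution; fails some of the time."""
    return rng.random() > 0.3  # succeeds with probability 0.7

def run_agent(goal, max_attempts=10, seed=0):
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    completed = []
    for task in stub_model_decompose(goal):
        for _ in range(max_attempts):
            if stub_execute(task, rng):
                completed.append(task)
                break
    return completed

done = run_agent("organise a conference")  # benign placeholder goal
print(done)
```

A swarm would simply be many of these loops running in parallel with different underlying models — the worry in the scenario above is what happens when the goal is not benign.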

Core concepts you might want to look at

- Large language models (LLMs) are the backbone of systems like ChatGPT

- Foundation models are LLMs trained on all books plus all internet text, images, videos, etc. They are expensive, costing hundreds of millions of dollars or more to build

- Fine-tuned models are also LLMs: smaller and trained for specific tasks. Take a foundation model and fine-tune it. They cost hundreds of thousands of dollars or less

- Costs are dropping fast while capabilities keep improving

- You can think of foundation models as generalists and fine-tuned models as specialists

- We are heading towards a world where both will exist and talk to each other, perhaps in some distributed manner
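The generalist/specialist split above can be pictured as a dispatcher that routes a query to a matching fine-tuned specialist and falls back to the foundation model otherwise. All models here are stubs and the keyword routing is a deliberate oversimplification (real systems might use classifiers, embeddings, or model-to-model negotiation):

```python
# Toy router: specialist (fine-tuned) models handle queries they match;
# everything else falls back to the generalist (foundation) model.

def foundation_model(query):
    return f"[generalist] answer to: {query}"

def legal_specialist(query):
    return f"[legal specialist] answer to: {query}"

def chess_specialist(query):
    return f"[chess specialist] answer to: {query}"

# Keyword -> specialist mapping; purely illustrative.
SPECIALISTS = {
    "contract": legal_specialist,
    "endgame": chess_specialist,
}

def route(query):
    for keyword, model in SPECIALISTS.items():
        if keyword in query.lower():
            return model(query)
    return foundation_model(query)

print(route("Review this contract clause"))  # handled by the legal specialist
print(route("What's the weather like?"))     # falls back to the generalist
```

The distributed version of this would put each model behind its own service, with the router as one more model in the conversation.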

Industry structures worth considering

- Do we want a closed AI oligopoly or open-source-led AI?

- This is an important question, as it has implications for the AI industry, for every other industry, and for the humans who will use AI and AGI

- Issues to consider include economic control, managing information flow in free societies, and how quickly AI innovation happens

- A closed AI oligopoly means a few VC-backed AI companies that are GPU-rich and can hire the best AI researchers to build foundation models

- Open-source AI is built on a few companies, such as Meta and Mistral, that create foundation models and hand them over for free to the AI community to fine-tune

- Closed AI is easier to regulate, which matters if you believe the existential risks or downsides outweigh the upside

- Open-source AI will lead to better and faster innovation, and to more equitable scenarios

Best people to follow

I’m in the open-source camp, so that’s the space I know better. The best people I follow are:

- @ylecun - AI researcher now leading efforts at Meta

- @AndrewYNg - AI researcher now building AI companies

- @bindureddy - AI CEO who understands tech and business

- @ClementDelangue - AI CEO of Hugging Face, a platform widely used by the AI community

- @Dan_Jeffries1 - futurist who often tweets about AI
