Why It’s More Important to Know What AI Can’t Do
The last decade has seen ebbs and flows of mass paranoia over AI automating jobs away. With all of the impressive, newsworthy developments in Generative AI throughout 2022, that anxiety has surged back to an all-time high.
By now, you’ve likely been mesmerized by at least one thing Generative AI created. Maybe it was the shockingly beautiful artwork. Maybe it was the 11-year-old who designed a game with ChatGPT. For me, it was when ChatGPT created a solid marketing campaign from scratch.
It’s human nature to take these moments of awe and extrapolate far-off, idealistic futures where everything develops along a perfect trajectory and AI replaces most of our work, actions, and habits. And you view this as either a utopia or a dystopia.
Regardless of your view, you’re projecting an immaculate image that is unlikely to materialize. As a result, you’re not actually operating with AI in the present reality.
At this point, knowing what AI can't do is more useful than knowing what it can. – swyx
What AI Can’t Do
In his blog L-Space Diaries, swyx published a great piece, What is AGI-Hard?, documenting the things that AI struggles with today. The struggles largely arise when the AI needs to exhibit human qualities. For example:
The goal for AI applications is not “replicate humans as closely as possible”. We don’t really care about that, and in fact, we want AI to be superhuman as long as it is aligned. But we care that it fills the job to be done, which in this case was “help me learn Chinese”. The chatty second question here might spur me to improve my Chinese, but it might cause someone else to hide in their shell. It would take empathy for an AI language tutor to determine what kind of learner I am, and adjust accordingly. We can all probably tell that GPT-3 does not have this capability, and, if theory-of-mind is AGI-hard, then we should also conclude that a level-adjusting AI language tutor product is not viable to work on (a shame, given this also means Bloom’s 2 sigma problem is AGI-hard). – swyx
What swyx means by calling this example AGI-hard is that solving it requires first achieving Artificial General Intelligence. Empathy is AGI-hard.
Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can.
In other words, AGI is the point at which we’ve built an AI that operates, consumes, and learns the way a human brain does. From there, the theory goes, the massive computing power behind that brain would let it expand its intelligence at an exponential rate, reaching what is called superintelligence. There are all sorts of theories about whether AGI is possible, how long it will take to create, and what the result would be for humanity. If you’re interested, I wrote about AGI here.
Furthermore, there are a lot of tough conceptual questions about what current and future AI models will be capable of. The people who will create billion- (maybe trillion-) dollar AI companies, or invent AIs that can reason and produce logic, are asking themselves questions like:
- Math: Can AI invent calculus from first principles?
- Physics: Can AI do Einsteinian thought experiments and derive relativity?
- Finance: Can AI look at option payoffs and know to borrow from the heat equation to create Black-Scholes?
- Music: Can AI have taste? Set trends? Make remixes and interpolations? How many infinite monkeys does it take an AI to come up with “you made a rebel of a careless man’s careful daughter”, and how many more to understand how special that line is?
- Programming: Can AI create AI? Quine LLMs?
As swyx points out, these physical and conceptual intuition-type problems are also likely to be AGI-hard.
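For the curious, the finance question is pointing at something concrete. The Black-Scholes equation for an option price V(S, t) is

```latex
\frac{\partial V}{\partial t}
  + \frac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  + r S \frac{\partial V}{\partial S}
  - r V = 0
```

and a standard change of variables reduces it to the one-dimensional heat equation, ∂u/∂τ = ∂²u/∂x². Recognizing that an option-pricing problem is secretly a diffusion problem is exactly the kind of cross-domain leap being asked about.

The programming question can be made concrete too. A quine is a program whose only output is its own source code; “Quine LLMs” asks whether a model could pull off the same trick at model scale. Here’s the classic Python sketch of the idea:

```python
# A quine: running this file prints exactly its own source code.
# The string s is a template that contains its own assignment statement;
# %r re-quotes the string, and %% escapes the literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Whether an AI could invent that trick unprompted, rather than recall it from training data, is the real question.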
What AI Can Do
For most of us, frankly, it’s unnecessary to think too deeply about AGI-hard concepts. Unless you’re willing to go deep on AI research and forge the path forward (unless you’re the one fine-tuning ChatGPT toward mathematics), it’s much better to focus on what AI can do today, or on what it’s capable of but just hasn’t done yet.
This is a moving target; the original CAPTCHA was a perfectly good Turing test until AI learned to solve it better than humans could, and it had to be retired. Not perfect, but useful.
My favorite recent example is detecting live deepfake video calls by asking the caller to turn sideways: most face-swapping models are trained on front-facing images and tend to fall apart when asked to render a profile view.
It’s tempting to look at what ChatGPT or Midjourney can do today and imagine worst-case (or ideal) scenarios where AI performs every facet of human creation and work at or above human level. That projection shapes your expectations of the future, and even your motivation to learn new skills, AI-related or otherwise.
The message I’m getting at is that when it comes to AI, you need to approach future projections with reason. There are fundamental cognitive capabilities that are incredibly challenging to program, and many of your visions for the future of AI may be impossible, or at least far too early to build toward.