Issue #1,105 | Celebrating 36 years writing about CAD | 20 September 2021
Another in the upFront.eZine series on problems posed by complexity
In the 2020s, artificial intelligence (AI) changed from a technology to a crutch:
- The blackbox crutch — executives can say decisions were made by AI, and so deny culpability
- The boombox crutch — marketing departments can claim their products employ AI, unlike competitors, and so you ought to buy the AI-enabled ones
How did AI sink so low?
- - -
Earlier this year, Boston Dynamics released a promotional video of three robots “flawlessly and soulfully dancing in rhythm.” An awestruck digital evangelist exclaimed, “Imagine what AI-powered machines will be able to do in the next 5-10 years.” 1
We can’t know whether the digital evangelist believes his prophecy; he is, after all, an evangelist, mandated to spread good digital news.
Predictive vision modeler Filip Piekniewski 2 pointed to the reality behind the staged dance: “A machine is performing a set of pre-programmed moves assisted with a bunch of PID [proportional integral derivative] controllers, just like any better CNC [computer numerical control] machine 20 years ago. In this case the machine is made in a form of a humanoid to fool naive people into believing it has a shred of intelligence in it.”
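For readers unfamiliar with the term, a PID controller is a decades-old feedback mechanism, not machine intelligence. Below is a minimal sketch of the idea in Python; the gains, setpoint, and trivially simple plant model are illustrative only, not drawn from any Boston Dynamics system.

# Minimal PID controller: correct the error between a setpoint and a
# measurement using proportional, integral, and derivative terms.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt                    # accumulated past error
        derivative = (error - self.prev_error) / dt    # rate of change of error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a joint angle toward 90 degrees, one time step at a time.
pid = PID(kp=0.5, ki=0.05, kd=0.02, setpoint=90.0)
angle = 0.0
for _ in range(50):
    angle += pid.update(angle, dt=0.1)   # apply the correction to the "joint"
print(round(angle, 1))                   # settles near 90.0: control, not cognition

Choreograph thousands of such loops, one per joint per moment, and you get a dancing robot; at no point does anything in the loop understand what dancing is.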
C3DevCon 2021
Software Development Conference
Join C3D Labs at its software developers’ conference, October 14 at 10:00am CEST. See the latest developments in the C3D Toolkit SDK, designed for engineering software developers.
To register for the conference, and learn more about the four-hour event, visit c3dlabs.com/en/blog/events/software-development-conference-c3devcon-2021/
‘Imagine what AI could do in the next x years’ has been a rallying cry of AI proponents since the 1960s. In 1967, cognitive computer scientist Marvin Minsky asserted that “Within a generation... the problems of creating artificial intelligence will be substantially solved.” 3
The birth of AI in the late 1950s is now closer to the start of World War One than it is to today, to misquote a meme. Rather than having solved the problems of AI, we are arriving at a better understanding of why AI doesn’t work well.
Melanie Mitchell, in her paper “Why AI is Harder Than We Think,” 4 proposes that the thinking of AI researchers is infected with fallacies.
Fallacy: Narrow Intelligence Is On a Continuum With General Intelligence
After DeepMind’s AlphaGo won at Go 5 (reputedly the hardest of all board games for a computer to master), AI enthusiasts used the event as proof that general AI is plausible. Then Watson went on to do poorly at predicting cancer diagnoses 6, while GPT-3 wrote lots of nonsensical articles 7.
Small advances are no proof of eventually arriving at an ultimate destination. Stuart Dreyfus, the engineer brother of philosopher Hubert Dreyfus, wrote in 2012, “It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon.” 8
Elon Musk had promised Level 5 (unattended) self-driving by 2020 to purchasers of Tesla electric cars. Then in 2021 he lamented, “Generalized self-driving is a hard problem, as it requires solving a large part of real-world AI. I didn’t expect it to be so hard, but the difficulty is obvious in retrospect. Nothing has more degrees of freedom than reality.” 9
I’ll rewrite the fallacy as reality: AI completing one task does not lead to AI completing all tasks.
Fallacy: Easy Things Are Easy, Hard Things Are Hard
When proponents boast of how AI is better than us humans at playing Go or at diagnosing health problems, they are stating the obvious. Computers are supposed to be better than us at carrying out boring and complex tasks. I expect FEA software to be faster than me doing long division.
AI is, however, terrible at tasks we humans find trivial, like walking through a crowded subway station or playing Charades. Psychologist Gary Marcus notes that Charades “requires acting skills, linguistic skills, and theory of mind — abilities that are far beyond anything AI...” 10
What blocks AI from arriving at its ultimate destination is its lack of common sense. Mitchell urges AI researchers to target the abilities of one-year-old children, instead of members of Mensa.
I’ll rewrite the fallacy as reality: Simple tasks performed by AI do not lead to AI completing complex tasks.
Fallacy: The Lure of Wishful Mnemonics
“Work on AI is replete with such wishful mnemonics, terms associated with human intelligence that are used to describe the behavior and evaluation of AI programs,” writes Mitchell. She gives examples:
- “Neural networks are loosely inspired by the brain, but with vast differences.”
- “Machine learning or deep learning methods do not really resemble learning in humans (or in non-human animals).”
IBM boasted in 2011 that “Watson can read all of the health-care texts in the world in seconds” and “...understands context and nuance in seven languages” (emphases added). 11 This fooled us into believing Watson thinks better than humans do and shaped the way AI researchers thought about their field.
If machines learned the way humans learn, then they could apply it to a wide variety of experiences — called “transfer learning.” But machines cannot. What AI software does learn are “shortcuts, statistical correlations to achieve high performance without learning the skill,” kind of like when we memorized facts just to pass a test in school.
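Here is a toy illustration of such a shortcut, using nothing but Python’s standard library. The “learner” below scores perfectly on its training data by latching onto a spurious flag (think of a watermark that happens to appear on every photo of one class), then collapses to coin-flipping when the flag stops correlating; the data and the one-line “learning” rule are invented for the example.

import random

random.seed(1)

# Each sample is (shortcut, signal, label). In training, the shortcut
# matches the label 100% of the time; the genuine signal only 80%.
def make_data(n, shortcut_reliability):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        shortcut = label if random.random() < shortcut_reliability else 1 - label
        signal = label if random.random() < 0.8 else 1 - label
        data.append((shortcut, signal, label))
    return data

train = make_data(1000, shortcut_reliability=1.0)   # shortcut always "works"
test = make_data(1000, shortcut_reliability=0.5)    # shortcut is now noise

# "Learning": pick whichever single feature agrees with the label most often.
def fit(data):
    scores = [sum(sample[i] == sample[2] for sample in data) for i in (0, 1)]
    return scores.index(max(scores))

feature = fit(train)   # picks the shortcut (feature 0), not the signal

accuracy = sum(sample[feature] == sample[2] for sample in test) / len(test)
print(f"test accuracy: {accuracy:.0%}")   # about 50%: the skill was never learned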
I’ll rewrite the fallacy as reality: Calling AI “human-like” does not make AI human-like.
Fallacy: Intelligence Is All In the Brain
Whether influenced by the story of Frankenstein or prodded by philosophical materialists of the Enlightenment, AI proponents assume that rational thinking occurs exclusively in the brain.
The goal is the Brain In the Vat: reproduce the brain as AI, and then replace our brains with hardware/software in our human bodies, or else place them in lifelike robots, as fictionalized by television series like Picard and Battlestar Galactica.
The flaw in the brain-in-a-vat approach, Mitchell points out, is that the brain is not a self-contained thinking machine: the rational part has one job, processing inputs from all the rest of the body. Isolated in a vat, the brain fails to operate as it was designed to do.
Deep-learning pioneer Geoffrey Hinton flippantly predicted in 2017, “We have trillions of connections [in our brains], but the biggest networks we have built so far only have billions of connections. So we’re a few orders of magnitude off, but I’m sure the hardware people will fix that.” 12 Declaring AI solved after throwing more hardware at the problem is myopic.
This may explain why enthusiasts keep pushing the date of the Singularity further into the future. (The Singularity is the date at which technology exceeds human ability.) It was first proclaimed to arrive by 2029, then around 2039, and now by 2045 or later. 13
I’ll rewrite the fallacy as reality: AI replicating the functions of living brains does not bring AI to life.
What Ralph Grabowski Thinks
We see obsessiveness over AI because it is the ultimate computing problem. Once AI is solved, there is nothing left for us to solve; AI will then go on to solve all other problems for us.
As humans, we benefit from a litany of perceptual twists that help us cope with what we don’t know, irrationalities like linear predictions, rationalizations, cognitive dissonances, confirmation biases, intuitions, groupthink, free will, common sense, bigotry, and analogies.
So AI researchers face this question: do they incorporate our irrationality into their computer code, or do they leave it out? Judging by the outrage expressed when AI results show race and other biases 14, the current sentiment is to edit out human irrationality.
For Singularity-level AI to work, it needs to incorporate the irrationality of human metaphysics; but it can’t. Yet, if it could, the Singularity would find itself becoming like the fictional Cylons, who launched a war to destroy humans just because we were too war-like. 15
In the 1960s, we were promised better brains. Instead, in the 2020s we are getting better emoji-picking: “Your Chromebook is getting a new emoji picker this month and 992 new emoji later this year” (Chrome Unboxed 16).
What is AI’s fatal flaw? It cannot replicate human metaphysics; the rational cannot conceive of the irrational.
Can CAD Do AI?
Some CAD vendors state that their software employs AI. I don’t think it does. Here are some of the claims.
MRU. One CAD program says it uses AI to predict the next command(s) users will want to choose. Let’s take an example in which the user has just drawn a circle. What would AI suggest as the next command? It doesn’t take a computer to guess:
- Draw another circle, or draw a different entity.
- Edit the circle, or erase it (undo).
- Print the circle.
This is not AI, in my view; this is a variation of MRU [most recently used], which keeps records of commands commonly employed by users.
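Such a predictor is a few lines of bookkeeping, as this sketch in Python shows; the command names are hypothetical.

from collections import Counter, defaultdict

# Record which command follows which, then suggest the most frequent
# followers. This is record-keeping, not reasoning.
class MruPredictor:
    def __init__(self):
        self.followers = defaultdict(Counter)
        self.last = None

    def record(self, command):
        if self.last is not None:
            self.followers[self.last][command] += 1
        self.last = command

    def suggest(self, command, n=3):
        return [cmd for cmd, _ in self.followers[command].most_common(n)]

mru = MruPredictor()
for cmd in ["circle", "circle", "erase", "circle", "print", "circle", "erase"]:
    mru.record(cmd)

print(mru.suggest("circle"))   # ['erase', 'circle', 'print']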
S&R. Another CAD system says it uses AI for mundane tasks, such as replacing junctions with details, like bolts and cuts to connecting beams. You choose one junction containing the detail, and as the software finds similar junctions, it adjusts them.
This is not AI, in my view; this is search and replace.
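Stripped of the geometry, the workflow amounts to matching a signature and copying a detail. A sketch in Python, in which the junction attributes, tolerance, and detail name are all invented for illustration:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Junction:
    beam_count: int
    angle: float                      # degrees between connecting beams
    detail: Optional[str] = None

def signature(junction, tolerance=5.0):
    # Junctions match when beam counts agree and angles fall in the same band.
    return (junction.beam_count, round(junction.angle / tolerance))

def replace_similar(template, junctions):
    for j in junctions:
        if signature(j) == signature(template):
            j.detail = template.detail   # plain search and replace

frame = [Junction(2, 90.0), Junction(2, 91.0), Junction(3, 45.0)]
template = Junction(2, 90.0, detail="4-bolt plate, coped flange")
replace_similar(template, frame)
print([j.detail for j in frame])   # the two right-angle junctions get the detail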
MRU and S&R reduce tedium and so are useful to CAD users. Just don’t call it AI.
- - -
AI and parallel processing have something in common: neither works well in CAD, and for the same reason.
Parallel processing cannot be used in most areas of CAD, because user actions cannot be predicted. Prediction is required before the software can split the command being processed among multiple CPUs, then merge the results together afterwards. It is the nature of CAD that the software cannot predict the outcome of our actions. The same holds for AI: the actions of users cannot be predicted sufficiently for AI to assist.
(Parallel processing is indeed used in areas of CAD outside the control of users. Examples include loading files, performing renderings, and calculating finite element analyses.)
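The contrast is easy to see in code. Here is a minimal sketch, using only Python’s standard library, of the kind of work that does parallelize: rendering, where the job splits into independent tiles that are known in advance.

from concurrent.futures import ProcessPoolExecutor

# Stand-in for rendering one tile of an image; each tile is independent
# of the others, so the work splits cleanly across CPUs.
def render_tile(tile_id):
    return tile_id, sum(i * i for i in range(200_000))   # fake pixel work

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        tiles = dict(pool.map(render_tile, range(16)))   # split, then merge
    print(f"rendered {len(tiles)} tiles")

An interactive command stream offers no such split known in advance: the next “tile” is whatever the user decides to do next.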
Creativity is chaos, as George Gilder brilliantly exposits in “Knowledge and Power.” AI can only take over the design process after human creativity is exhausted.
PS: 1950s AI’s Link to 2020s CAD
John McCarthy, who coined the term ‘artificial intelligence,’ wrote in 1958 the specification for LISP, a programming language able to operate on its own source code as a data structure. The LISt Processor was an early attempt at AI. 17
LISP subsequently appeared in many DWG-based CAD programs, after Autodesk embedded and extended XLISP — the free version written by David Betz 18 — into AutoCAD in 1985. It still runs today 19.
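McCarthy’s central idea, source code as ordinary data, can be sketched even in Python by treating nested lists as stand-in S-expressions; this is an illustration of the concept, not AutoLISP.

# In LISP, a program is a nested list, so programs can build and run
# other programs as data. A toy evaluator:
def evaluate(expr):
    if not isinstance(expr, list):
        return expr                        # a number evaluates to itself
    op, *args = expr
    values = [evaluate(a) for a in args]   # evaluate sub-expressions first
    if op == "+":
        return sum(values)
    if op == "*":
        product = 1
        for v in values:
            product *= v
        return product
    raise ValueError(f"unknown operator: {op}")

program = ["+", 1, ["*", 2, 3]]   # the LISP form (+ 1 (* 2 3))
print(evaluate(program))          # 7

program[0] = "*"                  # code rewriting code: now (* 1 (* 2 3))
print(evaluate(program))          # 6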
[You can download the 12-page paper “Why AI is Harder Than We Think” by Melanie Mitchell as a PDF from arxiv.org/pdf/2104.12871.]
And in Other News
Sadly, most conferences are still online. Here are six I plan to watch in the next six weeks:
Sept 21: Open Design Alliance “Summit 2021”
conference.opendesign.com
Oct 12-14: Hexagon MSC “HxGN LIVE Design & Engineering 2021”
events.hexagon.com/hxgnlive-designandengineering
Oct 13: Dassault Systemes “3DEXPERIENCE Modeling & Simulation Conference”
events.3ds.com/2021-modsim-conference
Oct 14: C3D Labs “C3DevCon 2021”
c3dlabs.com/en/blog/events/software-development-conference-c3devcon-2021/
Oct 20-21: Allplan “Build the Future”
meetyoo.live/register/1/ALLPLAN-Build-the-Future
Nov 1-3: Vectorworks “Design Summit”
vectorworks.net/vectorworks-designsummit-registration
Notable Quotable
“Tech can deliver many wondrous and terrible things, but it will always fall short of really knowing what makes us human.”
- Peter Pomerantsev
Thank You, Readers
Thank you to readers who donate to the operation of upFront.eZine:
- Neil Peterson (small company donation)
- Ewen @ CADbloke: “Cheers for all the insights and outlooks. :)”
To support upFront.eZine through PayPal.me, I suggest the following amounts:
- $25 for individuals > paypal.me/upfrontezine/25
- $150 for small companies > paypal.me/upfrontezine/150
- $750 for large companies > paypal.me/upfrontezine/750
Should PayPal.me not operate in your country, please use www.paypal.com with the account [email protected].
Or ask [email protected] about making a direct bank transfer through Wise (Transferwise).
Or mail a cheque (US$ or CDN$ only, please) to upFront.eZine Publishing, Ltd., 34486 Donlyn Avenue, Abbotsford BC, V2S 4W7, Canada.
3. M. L. Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, 1967, p. 2 (cited by Mitchell)
8. H. L. Dreyfus, “A history of first step fallacies,” Minds and Machines, 22(2):87-99, 2012 (cited by Mitchell)
10. G. Marcus, “Innateness, AlphaZero, and artificial intelligence,” arXiv:1801.05667, 2018 (cited by Mitchell)
11. S. Gustin, “IBM’s Watson supercomputer wins practice Jeopardy round,” Wired, 2011 (cited by Mitchell)
12. J. Patterson and A. Gibson, Deep Learning: A Practitioner’s Approach, O’Reilly Media, 2017, p. 231 (cited by Mitchell)