The AI Scam
Written by Mike James   
Wednesday, 11 June 2025

AI is not a scam, but the doubters are going to doubt. The latest attack is based on the idea that if you understand how it all works then it should be clear to you that it is a scam. What are they missing?

A recent opinion piece in The Atlantic, What Happens When People Don’t Understand How AI Works, explains that if you understood how LLMs work you wouldn't treat them in the way that you do. The argument is that a big autocomplete machine cannot claim to be intelligent, and that people using LLMs as best friends, therapists and general gurus are so wrong-headed as to be pitied.

For my money, the mistake is to assume that understanding the mechanism by which the AI comes into existence is the same as understanding the AI itself. One of the current big problems is that we don't have much of an idea how LLMs actually work. This is also one of the defining characteristics of anything you would hope to assign the label AI or, eventually, AGI - Artificial General Intelligence.

The problem that has long dogged the AI community is that as soon as you understand the magic - the magic is gone. When I first encountered Eliza, a very early chatbot, I was impressed - perhaps this was the breakthrough we were looking for. Then I found out how it worked and it was a simple trick. When I first encountered a good chess-playing program, I really thought it was thinking about strategy and weighing tactics just like I was. Then I learned about mini-max and alpha-beta pruning. As soon as the algorithm is laid bare, the magic has gone and it's not AI, just an A - i.e. an Algorithm.
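
To see how completely the magic evaporates once the trick is revealed, here is a minimal sketch of minimax with alpha-beta pruning over a hand-built toy game tree. The tree and its leaf values are invented purely for illustration - a real chess program would generate moves and evaluate positions - but laid out like this it is very obviously just an algorithm.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    # Return the minimax value of a node, skipping branches that cannot
    # change the result (the alpha-beta pruning part of the trick).
    if isinstance(node, (int, float)):        # a leaf is a position's evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                 # the opponent will never allow this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

# A two-ply toy tree: each inner list is a choice point, the numbers are evaluations.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, maximizing=True))       # prints 6 - the best value with best play by both sides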

For A to become AI there has to be something unexplained about it. Since the early days of neural networks we have understood the principles of training, but failed big time to explain how a trained network captures reality as a model. You can think of it as hyperplanes separating clusters of similar things, or as the fitting of some huge-dimensional function to the training data. All this is fine and very explicable, but when a neural network produces an output that is great (or disappointing) you are left without a specific explanation. You can say it's overfitted, underfitted or not generalizing or ... but you don't really know - it's a black box. This is not to say that there aren't techniques to try to understand and even improve the performance, but they aren't the same as really understanding what the network is up to.
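
To make the black-box point concrete, here is a minimal sketch, using nothing but numpy, of a tiny network learning XOR. Every line of the training loop is understandable, yet the weight matrix it ends up with does not, on its own, explain why any particular input gets the answer it does. The network size, learning rate and iteration count are arbitrary choices made for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of eight units, weights started at random.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)
lr = 1.0

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass: squared-error gradient
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))    # should be close to [0, 1, 1, 0] - it works...
print(np.round(W1, 2))     # ...but these numbers are not an explanation of why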

Of course, this is exactly the situation with natural intelligence. You can map neurons and theorise about internal states, but we all have to take it on trust that someone else's intelligence is much like our own.

Now to come back to Large Language Models - they are trained using a "leaving out" method. The network is trained to predict the missing parts of sentences, hence the idea that it is just a super autocomplete device. The reason for training the network using the "leaving out" method is the lack of labeled data. We have a huge body of language ready to use to train the machine, but it isn't annotated with its meaning. The training method was invented to make up for the fact that the raw text cannot automatically provide feedback on how well the network is understanding it - hiding words and predicting them turns the text itself into the labeled training data we don't otherwise have.
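
Here is a minimal sketch of that idea: unlabeled text is turned into (input, label) training pairs simply by hiding a word and asking for it back. The toy corpus and the word-level splitting are invented for illustration; real LLMs work with subword tokens and, in the GPT family, predict the next token rather than an arbitrary missing one.

corpus = "the cat sat on the mat because the mat was warm"
tokens = corpus.split()

def make_training_pairs(tokens, mask="<MASK>"):
    # Hide each word in turn; the hidden word becomes the label.
    for i, word in enumerate(tokens):
        context = tokens[:i] + [mask] + tokens[i + 1:]
        yield " ".join(context), word

for context, target in list(make_training_pairs(tokens))[:3]:
    print("input: ", context)
    print("label: ", target)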

And why language?

We train a foundational model on language because language is a ready-constructed model of the world. Instead of training the neural network on the world itself, which we will do in the near future, we train it on language, a model of the world that has already been constructed for us. Are you surprised that it gets things wrong? Language is an imperfect model of the world and that imperfection cannot be ironed out by more training.

The neural network will organize itself to represent language as economically as possible, and the most economical representation is one that adopts language's hidden internal structure. That is, an LLM learns the structure of language from training based on nothing more than guessing missing words - amazing!
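
You can see a faint shadow of this even in a toy model that does nothing but count which word follows which - the counts alone start to reflect the structure of the text. The corpus here is invented for illustration; an LLM does the same sort of thing on an incomparably richer scale.

from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1               # count what follows each word

# The most likely continuation after each word already looks grammar-like.
for word in ["the", "cat", "sat", "on"]:
    prediction = follows[word].most_common(1)[0][0]
    print(word, "->", prediction)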

The current AI is not a scam, even if the way some companies are pushing its use and claiming all sorts of outlandish properties for it is a scam. The current AI models capture the structure of language and as such are as flawed as our language is when describing the world.

In the future, the same AI will be trained on the real world rather than on language. Put a neural network into a robot body and you have embodied AI - one that can learn from feedback from the real world rather than from predicting missing words.

It is important not to confuse what you know about how a neural network came into existence with knowing how the network actually works.


More Information

What Happens When People Don’t Understand How AI Works

Related Articles

It Matters What Language AI Thinks In

The Triumph Of Deep Learning

Artificial Intelligence - Strong and Weak

The Paradox of Artificial Intelligence

Google's Large Language Model Takes Control

Runaway Success Of ChatGPT

Why Deep Networks Are Better

The Unreasonable Effectiveness Of GPT-3


 

