The mid-May issue of New York Magazine has a long article on how students are cheating their way through high school and college using ChatGPT. (Here I’m using “ChatGPT” as a stand-in for any of the large language models to which students can get free access. There are a few others, like Anthropic’s Claude, Perplexity, or Google’s Gemini.)
The article points out that critical thinking in colleges has all but vanished. Students accept whatever information and writing style they get back from their favorite chatbot and turn it in as their own work to a demoralized professor who, finding his life’s work rendered meaningless by a technology that has replaced him, is now just waiting for retirement.
That’s one end of the evaluation spectrum on the change in our lives being brought about by AI. Draw your own conclusion.
On the other end are the Silicon Valley experts whose technologies have caused this rapid decline of “real” education. While many of these guys publicly appear to be over the moon about their results, at least one of their critics is crying that the emperor has no clothes. Even the inventors of these technological marvels sometimes admit to skepticism.
No clothes after a trillion dollars of investment and all the environmental damage caused by the energy resources being thrown at data centers? Well then, what ARE those kids using?
They are using a very primitive capability, in which they have far too much faith. When AI first came on the market, it was fun, but we knew that it would hallucinate. Remember the lawyer who got in trouble for using it to write a brief, only to find out that the AI had cited nonexistent cases as precedents? Well, that was three years ago. Even though we are many model upgrades further along, the problem of hallucination has not gone away. Now we often ignore it. Especially the students.
For example, a recent request to an AI model for a complete list of the cities west of the Mississippi returned an answer that left out Billings, Montana. When the requester noticed this omission and asked the chatbot why it had left out Billings, the AI responded confidently that the city had been destroyed in an earthquake a few years ago. A bit of old-style Googling revealed that to be a total untruth. And when confronted, the bot politely apologized for its error. Oops.
I follow a cognitive scientist named Gary Marcus, an expert on deep learning and neural networks. You might say his view is that artificial intelligence, while not completely without clothes, is not sufficiently clothed to be seen in public.
After all the money thrown at our current models of artificial intelligence, the end product lacks important finishing touches. Not only is it unfinished, but it is not going to be finished using current product development methods, says Marcus.
Worse yet, Marcus says we don’t even know how these models work; they are “black boxes” even to the people who build them. And yet the hype around them is almost endless. Are they going to take all our jobs? Make humans unnecessary? Change the very nature of what it means to be human?
Not just yet.
Marcus argues that deep learning, while powerful in certain domains, has inherent limitations. While it excels at pattern recognition, it struggles with abstract reasoning, common sense, and understanding causality. In other words, while it can code, it can’t multiply!
Drawing on the past work of Daniel Kahneman (Thinking, Fast and Slow), Marcus says that current versions of AI are more like “fast” thinking: good at intuitive heuristics, but lacking the full suite of human capabilities necessary to be called artificial intelligence. He thus expresses concern about the excessive hype surrounding AI and the unrealistic expectations that are often set. Urging a more balanced and realistic view of what AI can and cannot do, he is also concerned about the ethical implications of AI, including issues of bias, fairness, and accountability.
Marcus highlights the importance of addressing these issues as AI becomes more integrated into society. After all this investment, all this hype, and all these trials, we deserve better than this!
True, yet I would argue that I know humans who can concatenate complete sentences but are not intelligent. One has even been our president twice now… 🤓