Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff

RETHINKING A.I. PART 2

YESTERDAY, Gary Marcus’ “The Fever Dream of Imminent ‘Superintelligence’ Is Finally Breaking,” a Guest Essay in The New York Times, described how today’s A.I. is failing to achieve Artificial General Intelligence, a be-all/end-all exceeding the human variety. Today in Part 2, Professor Marcus details three aspects that current systems lack.

Promising Ideas: Marcus posits that A.G.I. requires “proper world models,” “core knowledge rather than scratch-learning,” and “neurosymbolic constructions with different tools for different problems.” Here are gleanings about each.

Proper World Models. “First,” Marcus observes, “humans are constantly building and maintaining internal models of the world—or world models—of the people and objects around them, and how things work. For example, when you read a novel, you develop a kind of mental database for who each individual character is and what he or she represents.”

Image by Maria Mavropoulou for The New York Times.

Marcus continues, “We don’t just need systems that mimic human language; we also need systems that understand the world so that they can reason about it in a deeper way. Focusing on how to build a new generation of A.I. systems centered around world models should be a central focus of future research. Google DeepMind and Fei-Fei Li’s World Labs are taking steps in this direction.”
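By way of illustration (a toy sketch of my own, not anything from Marcus or the labs mentioned), here’s what a rudimentary world model might look like in Python: a reader maintains an explicit database of characters and their attributes as a story unfolds, then answers questions from that model rather than from the raw text. All names and details are invented for the example.

```python
# A toy "world model": maintain explicit state about story entities,
# then answer questions from that state rather than from surface text.

class WorldModel:
    def __init__(self):
        self.entities = {}  # name -> dict of known attributes

    def observe(self, name, **facts):
        """Update (or create) an entity with newly learned facts."""
        self.entities.setdefault(name, {}).update(facts)

    def query(self, name, attribute):
        """Answer from the maintained model of the world."""
        return self.entities.get(name, {}).get(attribute, "unknown")

model = WorldModel()
# Simulate reading a novel, one fact at a time.
model.observe("Holmes", occupation="detective", address="221B Baker Street")
model.observe("Watson", occupation="doctor")
model.observe("Holmes", mood="pensive")  # state updates as the story unfolds

print(model.query("Holmes", "address"))  # 221B Baker Street
print(model.query("Watson", "mood"))     # unknown -- the model knows its gaps
```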

Core Knowledge. “Second,” Marcus observes, “the field of machine learning (which has powered large language models) likes to task A.I. systems to learn absolutely everything from scratch by scraping data from the internet, with nothing built in.”

“But,” he notes, “as cognitive scientists like Steven Pinker, Elizabeth Spelke and me have emphasized, the human mind is born with some core knowledge of the world that sets us up to grasp more complex concepts. Building in basic concepts like time, space and causality might allow systems to better organize the data they encounter into richer starting points—potentially leading to richer outcomes. (Verses AI’s work on physical and perceptual understanding in video games is one step in this direction.)” 

Note how this core knowledge helps to establish an appropriate world model.
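To make the contrast concrete, here’s another toy sketch (mine, and purely illustrative) of what “building in” a core concept might buy: with time represented explicitly, a simple causal prior can rule things out before any learning happens at all.

```python
# Toy contrast: unstructured "scratch" data vs. observations organized
# through built-in core concepts of time and causality.

from dataclasses import dataclass

@dataclass
class Event:
    label: str
    time: float  # built-in "time": every event is temporally ordered

def could_cause(a: Event, b: Event) -> bool:
    """Built-in causal prior: a cause cannot come after its effect."""
    return a.time < b.time

# Scratch learning starts from data like this: strings, nothing to reason with.
scratch = ["glass shatters", "glass falls"]

# The same observations with core structure built in.
fall = Event("glass falls", time=1.0)
shatter = Event("glass shatters", time=2.0)

print(could_cause(fall, shatter))  # True: the ordering permits causation
print(could_cause(shatter, fall))  # False: ruled out before any learning
```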

A Neurosymbolic Approach. Marcus notes, “We need a new approach, closer to what Mr. Kahneman described [Daniel Kahneman’s fast, intuitive thinking versus slow, deliberate reasoning]. This may come in the form of ‘neurosymbolic’ A.I., which bridges statistically driven neural networks (from which large language models are drawn) with some older ideas from symbolic A.I. Symbolic A.I. is more abstract and deliberative by nature; it processes information by taking cues from logic, algebra and computer programming.”

That is, the neurosymbolic approach adds these three tools of logic, algebra and computer programming to the large language model, which on its own is essentially statistical “filling in the missing word.”
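Here’s a toy sketch of the hybrid pattern (my invention; no real LLM involved): a statistical guesser proposes an answer, and a symbolic layer checks it with exact arithmetic, overriding the guess whenever logic disagrees.

```python
# Toy neurosymbolic loop: a statistical guesser proposes; a symbolic
# checker verifies with exact arithmetic and corrects when needed.
import random

def neural_guess(a, b):
    """Stand-in for an LLM: usually right, occasionally 'hallucinates.'"""
    return a + b + random.choice([0, 0, 0, 1, -1])

def symbolic_check(a, b, proposed):
    """Symbolic layer: deliberative, exact, trustworthy for this task."""
    exact = a + b
    return proposed if proposed == exact else exact

for _ in range(5):
    a, b = random.randint(10, 99), random.randint(10, 99)
    guess = neural_guess(a, b)
    answer = symbolic_check(a, b, guess)
    status = "ok" if guess == answer else "corrected"
    print(f"{a} + {b}: guessed {guess}, final {answer} ({status})")
```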

Marcus says, “I have long advocated for a marriage of these two traditions. Increasingly, we are seeing companies like Amazon and Google DeepMind take such a hybrid approach (even OpenAI appears to be doing some of this, quietly). By the end of the decade, neurosymbolic A.I. may well eclipse pure scaling.”

Marcus continues, “Large language models have had their uses, especially for coding, writing and brainstorming, in which humans are still directly involved. But no matter how large we have made them, they have never been worthy of our trust.”

“To build A.I. that we can genuinely trust,” he concludes, “we need new ideas. A return to the cognitive sciences [psychology, child development, philosophy of mind and linguistics] might well be the next logical stage in the journey.” ds

© Dennis Simanaitis, SimanaitisSays.com, 2025   

4 comments on “RETHINKING A.I. PART 2”

  1. Tom Austin, Sr.
    September 6, 2025

    Gary Marcus works very hard to keep ’em honest. My hat’s off to him (and you too, Dennis, for your coverage of him, and so on).

    • simanaitissays
      September 6, 2025

      Thanks, Tom, for your kind words. Two A.I.-knowledgeable people I’ve come to respect are Zeynep Tufekci and Gary Marcus.

  2. sabresoftware
    September 6, 2025

    Professor Richard Sutton of the University of Alberta, a fellow and Chief Scientific Advisor at the Alberta Machine Intelligence Institute (AMII), developed the field of machine Reinforcement Learning (RL) along with Andrew Barto. RL basically trains machines to learn by trial and error, with feedback from process results and inputs from other sources (humans among them) improving the process. (A minimal code sketch of this loop appears after this comment.)

    Robot vacuums use RL to learn from experiences around the house (pets in the way, etc.). Our robot vacuum obviously isn’t very smart, because it keeps turning itself off on the spring-loaded door stops that trip its poorly located power switch! But in more advanced applications RL can certainly improve industrial, medical and other processes.

    LLMs are starting to apply RL techniques to improve, hopefully reducing or eliminating their hallucinations. My biggest fear is that the whole world data set that is scraped for LLM responses is being negatively reinforced by incorporating these hallucinations as part of the data set for future queries.

    Unfortunately we can’t make AI go away, but with intelligent planning and management AI could be an important tool to help mankind. Poorly planned/controlled AI could become a tool abused by autocrats/greedy people to subjugate/defraud us all.

    My personal experiences with AI include online customer support and Internet searches. Customer support is mostly (about 99%) hopeless, as the answers are either too simplistic or the answer to a different question. More detailed questions usually just confuse the system totally. Inevitably I end up asking for a human agent (and unfortunately some of these aren’t much better).

    On the Internet front, I have been pleasantly surprised by the quality of some responses. But I have also been disappointed when some queries don’t produce an AI-generated response, but in essence get multiple hits (often pages long), most of which are essentially the same answer. I miss the old MetaCrawler that scanned various search engines and came back with a decent variety of responses to a query.
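For readers curious what the trial-and-error loop sabresoftware describes looks like in code, here’s a minimal tabular Q-learning sketch (a toy illustration of mine, far simpler than Sutton and Barto’s real machinery): an agent learns, purely from reward feedback, to walk down a five-cell corridor to its goal.

```python
# Minimal tabular Q-learning: trial and error on a five-cell corridor.
# The agent starts at cell 0; reaching cell 4 earns a reward of +1.
import random

n_states, actions = 5, [-1, +1]        # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Explore sometimes; otherwise exploit current value estimates.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Feedback from results updates the estimate: the "learning" in RL.
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy marches straight toward the goal.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
# expected: [1, 1, 1, 1]
```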

  3. sabresoftware
    September 6, 2025
    sabresoftware's avatar

    I meant to also mention that Prof. Sutton was part of the DeepMind office in Edmonton until Google closed it. He and Barto were awarded the Turing Award in 2024.

