
Beyond the Hype: Moving AI Toward True Intelligence

Dan Carroll, Chief Scientist at Cranium

This blog continues the discussion from Part 1, where we explored the limitations of large language models (LLMs). Here, we’ll look at the innovations meant to augment LLMs—like Retrieval-Augmented Generation (RAG) and agent-based systems—and why they still fall short of delivering artificial general intelligence (AGI).

While LLMs have captivated the AI world, their standalone capabilities are limited. To address these shortcomings, researchers and developers have explored ways to augment their functionality. Let’s dive into two popular approaches, RAG and agents, and why they, too, fall short of moving us closer to AGI.

Retrieval-Augmented Generation (RAG)

Introduced in 2020 by Lewis et al., RAG pairs an LLM with an external knowledge source, such as a document index or search engine. By retrieving relevant information at query time and folding it into the prompt, RAG reduces hallucinations and improves the accuracy of outputs. While undeniably useful, RAG doesn’t address the underlying limitations of LLMs; it simply improves their inputs.
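
To make the retrieve-then-generate pattern concrete, here is a deliberately minimal Python sketch. The tiny document list, the lexical overlap scorer, and the call_llm stub are all placeholders; a production system would use a vector database and a real model endpoint.

```python
# Minimal RAG sketch: retrieve the most relevant text, then condition the
# model's prompt on it. The corpus, scorer, and call_llm are all placeholders.

from collections import Counter

DOCS = [
    "RAG retrieves external documents and adds them to the model's prompt.",
    "Chain-of-thought prompting asks the model to reason step by step.",
    "Agents let a model call external tools such as APIs and databases.",
]

def overlap(query: str, doc: str) -> int:
    """Crude lexical overlap score; real systems use dense vector retrieval."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(DOCS, key=lambda doc: overlap(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model endpoint."""
    return f"<answer conditioned on: {prompt[:60]}...>"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(rag_answer("How does RAG reduce hallucinations?"))
```

Note where the work happens: everything interesting occurs before the model is called. The LLM itself is untouched, which is exactly why retrieval cannot fix its underlying limitations.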

The hype around RAG has led many to believe it’s a step toward AGI. However, the core architecture of LLMs remains unchanged. Without semantic understanding or true reasoning capabilities, these systems are still fundamentally next-token predictors.

The Rise of Agents

Building on RAG and chain-of-thought reasoning, agents represent the latest trend in AI. These systems combine LLMs with access to APIs, databases, and other tools, orchestrating them in a loop so the model can “plan” and execute multi-step workflows, such as automating customer service or generating compliance reports.
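
A minimal sketch of that loop is below; the lookup_order tool and the canned call_llm stub are hypothetical stand-ins, not any particular agent framework. The point is the shape of the pattern: the model’s text output decides which backend function runs next.

```python
# Minimal agent loop: the LLM chooses a tool, we execute it, and the result is
# appended to the history for the next step. All pieces here are stand-ins.

def lookup_order(order_id: str) -> str:
    """Hypothetical backend call; a real agent would hit an API or database."""
    return f"Order {order_id}: shipped"

TOOLS = {"lookup_order": lookup_order}

def call_llm(prompt: str) -> str:
    """Canned stand-in for a real model: call a tool once, then finish."""
    if "->" in prompt:               # a tool result is already in the history
        return "FINISH: the order has shipped"
    return "lookup_order 12345"

def run_agent(task: str, max_steps: int = 3) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = call_llm("Choose a tool or FINISH.\n" + "\n".join(history))
        if reply.startswith("FINISH"):
            return reply
        name, arg = reply.split(maxsplit=1)
        history.append(f"{name}({arg}) -> {TOOLS[name](arg)}")  # run the tool
    return "Gave up after too many steps."

print(run_agent("Where is order 12345?"))
```

Even in this toy form, whatever text the model emits is executed directly against a backend function, which is the root of the security concern listed below.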

However, agents come with significant drawbacks:

  • Inefficiency for Simple Tasks: Many tasks are better handled by conventional code or a single scripted LLM call, which is faster and cheaper than a multi-step agent loop.
  • Limited Generalization: Agents often struggle with open-ended tasks that require true reasoning.
  • Security Risks: Agents with access to backend systems expand the attack surface; prompt injection, for example, can steer them into taking unintended or malicious actions.

The Bigger Picture

The hype around agents echoes the same overpromises made about RAG and LLMs. While these technologies are valuable, they are not a panacea. Overhyping their potential only risks disillusionment and an eventual AI winter.

What’s Next for AI?

To move closer to AGI, we need to challenge the current paradigm. Promising directions include:

  • Geometric Deep Learning: Researchers such as Michael M. Bronstein and Joan Bruna build symmetries in data, like invariance to rotations or permutations, directly into model architectures to make AI systems more robust and data-efficient (see the short sketch after this list).
  • Symbolic AI: Paul Lessard and Symbolica.ai explore mathematical formalizations like category theory to equip AI with human-like abstraction capabilities.
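
As a deliberately simplified illustration of what “leveraging symmetries” means, the toy model below is permutation-invariant by construction: reordering its input points cannot change its output. It is a minimal DeepSets-style sketch with random weights, assumed here purely for illustration, not an implementation from the book cited below.

```python
# Toy illustration of building symmetry into a model: a sum-pooled encoder
# gives the same output for any ordering of its inputs, by construction.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))          # per-point weights, shared across points
v = rng.normal(size=4)               # readout weights

def permutation_invariant(points: np.ndarray) -> float:
    """Encode each point with the same W, sum over points, then read out."""
    return float(np.tanh(points @ W).sum(axis=0) @ v)

points = rng.normal(size=(5, 3))     # a set of 5 points in R^3
shuffled = points[rng.permutation(5)]

print(permutation_invariant(points))    # identical outputs: the symmetry is
print(permutation_invariant(shuffled))  # guaranteed by design, not learned
```

The guarantee comes from the architecture (a symmetric sum over elements), not from training data, which is the kind of structural prior geometric deep learning advocates.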

The path to AGI lies in rethinking our assumptions and pursuing bold, innovative approaches. Only then can we achieve the dream of true artificial intelligence.


Sources Cited

  • Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv. Retrieved from https://arxiv.org/pdf/2005.11401
  • Bronstein, M. M., Bruna, J., Cohen, T., & Veličković, P. (2021). Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. Retrieved from https://geometricdeeplearning.com