In today's edition:


Though the question, “How far are we from achieving human-level intelligence in machines (or AGI, or ASI)?” predates the term “artificial intelligence” itself, it saw a significant resurgence on Twitter last week, prompted by the Musk vs. OpenAI lawsuit (Musk accuses OpenAI of abandoning open-source principles and prioritizing profit over safety, hindering the safe development of AGI). Far more interesting, though, were the papers and the article that came out last week tackling this question. Today, we will discuss “How Far Are We from Intelligent Visual Deductive Reasoning?”, “Design2Code: How Far Are We From Automating Front-End Engineering?”, and Stephen Wolfram’s article “Can AI Solve Science?” Together, these works offer fascinating explorations of the differences between human and artificial intelligence.

Intelligent Visual Deductive Reasoning

In “How Far Are We from Intelligent Visual Deductive Reasoning?”, researchers from Apple explore Vision-Language Models (VLMs), like GPT-4V, in visual-based deductive reasoning, a complex yet less studied area, using Raven’s Progressive Matrices (RPMs)*.

*Raven’s Progressive Matrices is a nonverbal intelligence test measuring abstract reasoning, using patterns to assess cognitive functioning without language.

What caught my attention was the finding that AI systems like VLMs struggle with tasks requiring abstract pattern recognition and deduction. The paper notes, “VLMs struggle to solve these tasks mainly because they are unable to perceive and comprehend multiple, confounding abstract patterns in RPM examples.” This inability to deal with abstract concepts marks a fundamental difference between computational processing and human cognitive abilities. Being a sophisticated pattern recognizer doesn’t equate to sentience.
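To make the kind of abstraction RPMs demand more concrete, here is a toy, text-based analogue (my own illustration, not from the paper): each complete row follows the same hidden rule, and the solver must deduce that rule before it can fill in the missing cell. The shapes and the XOR rule below are invented for the example.

```python
# Toy Raven's-style puzzle: in every row, the symmetric difference (set XOR)
# of the first two cells yields the third. Deducing that abstract rule,
# rather than matching surface features, is what the test measures.
rows = [
    ({"circle", "square"}, {"square"}, {"circle"}),
    ({"triangle", "dot"},  {"dot"},    {"triangle"}),
]
# Incomplete third row: given the first two cells, find the missing one.
a, b = {"star", "cross"}, {"cross"}

def deduce(rows, a, b):
    # Verify the candidate rule against every complete row, then apply it.
    if all(x ^ y == z for x, y, z in rows):
        return a ^ b
    return None

print(deduce(rows, a, b))  # -> {'star'}
```

A human solves this almost instantly; the paper's finding is that VLMs fail at the visual equivalent because they cannot isolate the confounding abstract patterns in the first place.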

Another intriguing point was the models’ overconfidence. The observation that “all the tested models never express any level of uncertainty” highlights the importance of doubt and uncertainty in human cognition, suggesting a nuanced aspect of intelligence that current AI lacks.

Automating Front-End Engineers

In “Design2Code: How Far Are We From Automating Front-End Engineering?”, researchers from Stanford University, Georgia Tech, Microsoft, and Google DeepMind have developed a benchmark for Design2Code, aiming to evaluate how well multimodal LLMs convert visual designs into code. Here, AI comes closer to replacing humans. Despite some limitations, the study shows considerable progress in using generative AI to convert designs into front-end code. It’s remarkable that “annotators think GPT-4V generated webpages can replace the original reference webpages in 49% of cases in terms of visual appearance and content; and in 64% of cases, GPT-4V generated webpages are considered better.” This finding challenges traditional notions of artistic and creative value, questioning whether creativity is uniquely human or can be algorithmically reproduced — or even surpassed.
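For readers curious what such a task looks like in practice, here is a hedged sketch of a Design2Code-style query to a vision-language model. This is not the paper's evaluation harness; the prompt wording and the placeholder screenshot bytes are my own assumptions, and only the request payload is assembled (no API call is made).

```python
# Sketch: build a chat-completion payload asking a VLM to reproduce a
# webpage screenshot as a single self-contained HTML file.
import base64
import json

def build_design2code_request(screenshot_bytes: bytes,
                              model: str = "gpt-4-vision-preview") -> dict:
    """Assemble a request with the screenshot embedded as a base64 data URL."""
    image_b64 = base64.b64encode(screenshot_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Reproduce this webpage screenshot as one "
                         "self-contained HTML file with embedded CSS."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }

# Placeholder bytes stand in for a real PNG screenshot.
payload = build_design2code_request(b"\x89PNG placeholder")
print(json.dumps(payload)[:80])
```

The benchmark's contribution is less the query itself than the evaluation: comparing the rendered output against the reference design, both automatically and with human annotators.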

However, significant limitations persist. VLMs struggle with “recalling visual elements from the input webpages and generating correct layout designs,” raising questions about understanding and interpretation.

So, the important question is actually not how far we are from AGI (whatever that means), but how to embrace human-AI collaboration most effectively.

AI Solving Science

In that sense, Stephen Wolfram’s blog post “Can AI Solve Science?” serves as an excellent example. At the very outset, he plainly states that AI cannot solve all scientific questions. However, there is significant value in AI assisting scientific progress. He discusses how LLMs can serve as a new kind of linguistic interface to computational capabilities, providing high-level “autocomplete” for scientific work. As he usually does, he emphasizes the transformative potential of representing the world computationally and suggests that pockets of computational reducibility* can be found by AI as well.

*A pocket of computational reducibility — a fascinating concept introduced by Wolfram — is a situation or problem within a complex system where, despite the system’s overall unpredictability, predictable patterns or simplified behaviors emerge, allowing for easier understanding or calculation.
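Wolfram's favorite illustration of this idea is elementary cellular automata, and a minimal sketch (my own, not Wolfram's code) makes the contrast tangible: rule 254 from a single seed is reducible — its state after any number of steps has a closed form — while rule 30 is the canonical irreducible case, with no known shortcut past step-by-step simulation.

```python
# Elementary cellular automaton: each cell's next value is determined by
# the 3-cell neighborhood (left, center, right) via the rule's bit table.

def step(cells, rule):
    # One synchronous update on a ring of 0/1 cells.
    n = len(cells)
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def evolve(rule, steps=20, width=31):
    # Start from a single 1 in the middle and iterate.
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Rule 254 turns a cell on if any neighbor is on: the live region grows by
# one cell per side per step, so the count is simply min(2*steps + 1, width).
print(sum(evolve(254)))  # -> 31: a "pocket" we can predict in closed form
# Rule 30 produces an irregular pattern with no known closed form:
print(sum(evolve(30)))
```

Finding such reducible pockets inside otherwise irreducible systems is, in Wolfram's framing, exactly the kind of search AI might help with.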

Wolfram argues that AI can significantly aid scientific discovery by providing new tools for analysis and exploration, but its ability to completely “solve” science is limited by fundamental principles such as computational irreducibility. The future of AI in science lies in its integration with human creativity and understanding, leveraging its strengths to uncover new knowledge within the constraints of what is computationally possible.

We might be able to survive without front-end developers (no offense intended), but scientists remain indispensable!

To summarize:

https://x.com/pmddomingos/status/1766945083314827455?s=20&embedable=true

News from The Usual Suspects ©

Cohere and its commitment to the research community

https://x.com/cohere/status/1767275128813928611?s=20&embedable=true

Hugging Face

https://x.com/maximelabonne/status/1767124527551549860?s=20&embedable=true

Russia’s talent is invisible

Inflection enhances its Pi

Chips

OpenAI: new members on the board

Elon’s Grok

https://x.com/elonmusk/status/1767108624038449405?s=20&embedable=true

Anthropic


Enjoyed This Story?

I write a weekly analysis of the AI world in the Turing Post newsletter, where we aim to equip you with comprehensive knowledge and historical insights so you can make informed decisions about AI and ML.


🎁 Bonus: The freshest research papers, categorized for your convenience

Enhancements in Language Models and Multimodal Understanding

Novel Training and Evaluation Techniques

Advances in Generative Models and Data Synthesis

Scalability and Efficiency in AI Systems

Exploring New Frontiers in AI and Machine Learning

Platforms and Tools for Model Evaluation and Interaction

