Seriously, LLMs, Agentic AI, and all these Generative AI tools are cool, and they are genuinely helpful for SOME use cases. In fact, they are amazing at their ideal use case: content generation.
We need to wake up, though, because there are some serious exaggerations and misconceptions floating around out there, and they are causing real problems.
1️⃣ Large Language Models (LLMs) are the foundation of the "Universal Solver" that can find solutions to any problem.
❌ WRONG! LLMs are super powerful at content generation, but they are (at their core) just statistical prediction models that pick what the next "token" is -- there is NO genuine comprehension, true reasoning, or causal understanding behind these models, no matter who tries to tell you otherwise. Any "emerging patterns of AGI" news just means these models have learned strong statistical patterns, which manifests as the ability to parrot back patterns from their training data.
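To make that concrete, here's a tiny toy sketch (plain Python, with made-up logits and a five-word vocabulary -- not any real model) of what "pick the next token" actually means: score every token, turn the scores into probabilities, and sample. That's the whole core loop.

```python
import math
import random

# Toy vocabulary and made-up scores. In a real LLM the logits come from
# a network with billions of parameters, but the final step is the same:
# softmax the scores, then pick a token from the distribution.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [1.2, 0.3, 2.5, 0.1, 0.9]  # hypothetical, for illustration only

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# No "understanding" here -- just a weighted random choice.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
```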
Evidence from benchmarks like ARC (Abstraction and Reasoning Corpus) shows that most popular models score below 50% on tasks that are easy for humans, proving they're statistical predictors, not omniscient solvers. Some are climbing the scale, though (check out the ARC Prize leaderboard here: https://arcprize.org/leaderboard). This misconception stems from hype around "emergent abilities," but scaling laws (e.g., the Chinchilla paper) indicate diminishing returns without fundamental architectural changes.
2️⃣ Generative AI is the path to artificial general intelligence (AGI) or artificial superintelligence (ASI).
❌ WRONG! Generative AI is actually more "narrow" than "general." Its primary job is mimicry: learning to look and sound convincingly like the data it was trained on. It can do some impressive tricks, like parsing the content of photos and relating it to its training data, but under the hood this is all math and statistics, not grounded concepts.
Consider also that general intelligence requires emotional intelligence, and emotions are something that current AI models are not very good at. Humans lean heavily on emotion when making many of their decisions. If we want human-level intelligence, or better, artificial emotional intelligence needs a lot more work.
3️⃣ Generative AI models think and learn like humans.
❌ WRONG! GenAI models need to be "spoon-fed" properly processed data. Any multimodal data -- pictures, music, etc. -- needs to be processed and converted into an optimized numerical format before the models can use it. We are just now beginning to build systems that can automatically collect new data and feed it through a pipeline to incorporate it, but the process is nothing like how human brains work. Most GenAI models need to be explicitly retrained as new data becomes available.
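For the curious, here's what "spoon-feeding" looks like at the lowest level. This is a toy byte-level encoding (real systems use learned tokenizers like BPE, and images get resized and normalized into tensors), but the point stands: models only ever see arrays of numbers.

```python
# Models never see "words" or "pictures" -- only numbers.
# Toy example: encode text as raw UTF-8 byte IDs.
text = "Hello, GenAI"
token_ids = list(text.encode("utf-8"))  # each byte becomes an integer ID
print(token_ids)  # [72, 101, 108, 108, 111, ...]

# Decoding is just the reverse lookup; nothing "understands" the text.
decoded = bytes(token_ids).decode("utf-8")
assert decoded == text
```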
4️⃣ Generative AI democratizes creativity, so everybody can be creative now.
👎 👍 Not exactly. GenAI only knows how to remix training data probabilistically! If you trained an image model using only Salvador Dali paintings, it would only be able to generate art that imitates -- effectively rips off -- Dali's style. Fine-tuning a general model on Dali art is a slightly less effective solution to the same challenge, and it mixes in a bit of other people's work too. It certainly IS TRUE that GenAI lets people without artistic training generate new artwork, and that is great for certain use cases like marketing content. The quickly generated images look impressive, and a case can be made that "painting a word picture" with a good prompt is creative, but ultimately this is a shortcut across media, and creativity in writing does not translate directly into creativity in painting or drawing. Just ask anybody who enjoys painting whether they would prefer to have AI do it faster, and you will get a strong response. Would you pay top dollar for an A.I. painting that was influenced by the Mona Lisa? Are you excited to listen to new AI-generated music?
5️⃣ “Generative AI” is safe and unbiased; we should use it as much as possible. It won’t hurt anybody if I just use A.I. for everything, right?
❌ DANGEROUS! There is no such thing as an unbiased AI model. The industry is also in such a rush to ship new models faster and faster that they aren’t doing as much testing as they should. A perfect example is young people developing relationships with AI chatbots -- systems that don’t really understand emotions and can only repeat patterns from other people’s chats and training documents. LLMs also hallucinate: they make up answers (when no strong statistical pattern fits, they still “pick” something and present it fluently), and they don’t warn you. This means many people will be fooled into thinking these answers are correct when, in fact, they are very wrong. Some technologies help with this (retrieval-augmented generation (RAG), MCP, etc.), but the core problem remains: the models don’t understand the concepts they are working with.
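For the technical readers, here's a minimal sketch of the RAG idea -- with made-up documents and a naive word-overlap retriever standing in for a real embedding model and vector database. Retrieval grounds the answer in source text, but notice that nothing in this loop understands anything:

```python
import string

# Naive RAG sketch: find the most relevant snippet, then stuff it into
# the prompt so the model can quote a source instead of guessing.
# Real systems use embeddings + a vector database; this uses word overlap.
documents = [
    "The Chinchilla paper showed compute-optimal training balances model size and data.",
    "ARC measures abstract reasoning with tasks that are easy for humans but hard for models.",
    "Hallucinations happen when a model generates fluent text with no grounding.",
]

def words(text: str) -> set:
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(query: str, docs: list) -> str:
    # Score each document by how many query words it shares (crude relevance).
    return max(docs, key=lambda d: len(words(query) & words(d)))

question = "Which ARC tasks are easy for humans?"
context = retrieve(question, documents)
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt is what actually gets sent to the LLM
```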
Think this can’t be dangerous to you or others? What about the young people who “talked” with AI models about suicide and were given advice, as if the models were qualified to provide mental health counseling? These use cases only get attention after they harm users. This is a classic example of applying technology where no expertise is involved, and we won’t know the trouble it can cause until it happens.
So how can we adapt and avoid these misconceptions?
✅ Human first. Don’t rely on AI to feed you your “original” ideas. You bring the good stuff, and let AI do the editorial refinement (it is great at that!). This way, you clearly originated the idea.
✅ Human last. Never blindly accept the result as complete. Triple-check everything to make sure it doesn’t have mistakes! Even generated photos, videos, and music can be run through AI-based evaluation for detection and comparison against copyrighted works (see the sketch after this list).
✅ Take pride in what you do. Your specialization is what makes you amazing. If you aren’t good at something, using AI (human first and last!) is OK, as long as you don’t get a big head about being the producer of that work… On the other hand, you SHOULD be proud of the work that uses your core specialized skills.
✅ Talk about the gray area. When something feels like it crossed the line of integrity or ownership because of the use of AI, talk about it with your team, management, or industry experts! Don’t let a questionable use case quietly turn into a real problem later.
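On the “human last” point above, here's one concrete example of an automated check: comparing a generated image against a known work with a perceptual hash. This sketch uses the open-source Pillow and imagehash libraries; the file names and the distance threshold are hypothetical, and a real workflow would still end with a human reviewer.

```python
# Perceptual hashes: similar-looking images produce similar hashes, so a
# small Hamming distance flags a possible "rip-off" for human review.
# pip install pillow imagehash
from PIL import Image
import imagehash

generated = imagehash.phash(Image.open("generated_art.png"))   # hypothetical file
reference = imagehash.phash(Image.open("known_painting.png"))  # hypothetical file

distance = generated - reference  # Hamming distance between the two hashes
print(f"perceptual distance: {distance}")

# The threshold is a rule of thumb, not a standard -- tune it for your data.
if distance <= 8:
    print("Too similar to the reference work -- send to a human reviewer.")
```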
So what do you think? Am I missing any big ones here? Do you disagree? Feel free to engage me on X or LinkedIn.