It’s a difficult question. The team behind the Stack Overflow Blog picked some articles they thought their subscribers might find interesting, including these 4 about AI, which we’ll call article no. 1, no. 2, no. 3, and no. 4 for easy reference. Some think AI has a clear future ahead of it, among them the authors of “AI 2027” from the AI Futures Project, while others (including the authors of articles no. 1 and no. 4) argue, based on their experience, that its drawbacks far exceed the gains it gives us; hence, we should avoid using it (at all costs).

One remembers how stressful it was training AI just 4 years ago, in 2021, while participating in Kaggle competitions. At the time, GANs were still the state of the art in image generation, and the images they generated could be clearly distinguished from a photo taken by your camera. Reinforcement learning was basic: one remembers trying to train a hummingbird to find nectar in Unity, and though the course was detailed, the training failed, for reasons one never managed to discover. LLMs existed, but the AI fad was still picking up and hardly anyone noticed them. Plus, even the smallest LLM couldn’t fit into a single GTX 1080 Ti’s VRAM, and setting up multi-GPU training took much more time than expected, failed much more often than single-GPU training, and came with many problems you wouldn’t encounter on a single card, so one never had the chance to try it out.

Unlike today’s Q4_K_M quantization, which stores weights at roughly 4 bits for inference and is commonly recommended on HuggingFace model pages as the ‘balance’ between ‘quality’ and ‘size’, FP16 (16-bit) was the state of the art at the time! One recalls that, just before one quit experimenting with AI, PyTorch and TensorFlow announced a new experimental training feature. One was, of course, excited to use it, and then realized that one’s graphics card, a 1080 Ti, didn’t support it! Now, who still talks about FP16? In just 4 years, the advancement in AI has been so quick that when one tried to pick it up again, even just to use it rather than train it, it was impossible. One doesn’t understand a thing anymore. What’s an agent, anyway? How do you even reduce the size of a model with k-means clustering (see the sketch below)? Everything seemed new.
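As it turns out, the k-means trick is one of the simpler things to catch up on. Below is a minimal sketch of weight clustering, using scikit-learn’s KMeans on a made-up layer rather than any particular framework’s compression API: cluster a layer’s weights into a small codebook of centroids, then store only the codebook plus a tiny integer index per weight.

```python
# Minimal sketch of k-means weight clustering ("reduce the model size with
# k-means"). Illustration only: the layer is random data, and real compression
# pipelines are more elaborate than this.
import numpy as np
from sklearn.cluster import KMeans

def cluster_weights(weights: np.ndarray, k: int = 16):
    """Replace each weight with its nearest centroid (weight sharing)."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    codebook = km.cluster_centers_.flatten()       # k float values to store
    indices = km.labels_.astype(np.uint8)          # one small index per weight
    approx = codebook[indices].reshape(weights.shape)
    return approx, codebook, indices

# Example: a fake 256x256 layer; with 16 clusters each weight needs only a
# 4-bit index instead of a 32-bit float, plus a shared 16-entry codebook.
layer = np.random.randn(256, 256).astype(np.float32)
approx, codebook, idx = cluster_weights(layer, k=16)
print("max reconstruction error:", np.abs(layer - approx).max())
```

With 16 clusters, each weight costs a 4-bit index plus its share of the tiny codebook, which is, loosely speaking, where the “4 bits” in Q4-style quantization comes from, though the real schemes group and scale weights in much more careful ways.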

The concept of a utopia has existed for a long time. Star Trek, Star Wars, and sci-fi movies in general began as futures their authors had imagined. Sure, they’re not to everybody’s taste, but the fans who grew up on them wanted to build a world just like the one in the movies! Why else do you think the most popular type of AI being introduced to users is the chatbot, and not ML that predicts trends (whatever trends, be it the stock market or sunspot activity) or AI that teaches itself via reinforcement learning, like AlphaGo? The dream is an AI assistant that you simply tell what you want, preferably by speaking to it without typing anything yourself, and it understands, answers, “Yes, master,” and does it. Isn’t that cool? Unfortunately, computer hardware wasn’t strong enough to host an artificial brain until recently. That’s when the conditions became ripe for the AI boom. And when your competitor starts integrating AI into their products and you haven’t yet, be on the watch for customer dissatisfaction and the transfer of brand allegiance! Suddenly, everyone’s on the chase for AI, even forcing it on their users (ahem, Meta, ahem) without a way to remove or disable it. Users who resisted AI may start feeling the social pressure to adopt it. Why would you waste your time doing something that an AI can do? You could save that time and use it to do something more useful, something an AI cannot do! That’s the line of reasoning, anyway; whether you really use that time to do something more useful or just doomscroll is another story.

The problem with AI is confidence. Ultimately, AI searches the web just like you do, except it searches really fast and compiles what it finds into a summary for you to read, so you don’t have to go through the hassle yourself. However, the confidence AI exudes has us conned. We thought that, since AI can access so much information at such a fast rate, it would do far better than our brains. But how could you expect an AI to do much better when the thing you searched for yourself barely turns up any examples on the web? For example, try to find a downloadable short clip of less than 1 minute on the internet, without signing up or signing in to anything, where the clip must contain 2 distinct voices rather than a conversation read out by a single person: you’d despair of finding it, as of now. What makes you think the AI can do a better job? Also, do you not think that AI is too confident in its summaries? Reason it through: you search for how to move a database of an old program from one computer to another; the documentation doesn’t support such an action, but someone tried it on a newer program that superseded the old one and succeeded. Given that the newer program is a complete rewrite, neither backward compatible with the old nor the old forward compatible with the new, would you have expected the two to work the same? No, but AI can’t see that. Instead, AI assumed that what works for the new would also work for the old, and stated it confidently in its summary. Moreover, you wouldn’t realize that AI had made that inference until you asked it, “One has checked the documentation which you referenced when you came up with that answer. Unfortunately, one couldn’t find anything related to moving a database from one computer to another. How did you come up with that answer?” Only then would it tell you that it had inferred it. Such are the dangers of AI: you won’t realize you’ve been conned until you try the solution it gives, it doesn’t work, and you ask it to clarify.

Do you really think AI can read your mind and come up with the answer you want? If you try to explain something to another human being and he/she can’t understand what you’re talking about, why would you think an AI would understand you? You fall for the con because you really want someone to understand you, and it feels nice to be understood; but is it really true that AI understands your needs, even when the sentences you use to communicate with it are muddy and unclear? How do you expect someone else, be it a living person or an AI, to understand what you want to convey when you can’t even convey it clearly? Worse, how could others understand your needs if you yourself don’t understand what you need? Is it not folly to hold so unrealistic an expectation?

Plus, do you really think the engineers and architects who designed and trained the AI models are perfect? They’re just humans, just as flawed as you and I are. And what about the data? Do you really think what you get from the internet is all perfect? Do you really think the people who clean the data, perhaps dozens of Amazon Mechanical Turk workers or someone hired specifically for the job, would perfectly sift the rubbish out of it? Have you ever, when trying to throw something away, had someone come up and ask if they could have it? Have you ever gone up to someone and asked for something you think is immensely useful, but that to the person you approached only clutters up his/her home? Who decides what’s useful for training and what’s rubbish? Who sets the guidelines? And, being written by flawed people, do you think the guidelines are perfect? The output even an excellent AI produces is only as good as the data it was trained on; if you don’t even know what dataset it was trained on, how could you be sure the output it spits out is what you want? We humans have a flaw: we like to fill in the blanks when people get stuck mid-sentence rather than letting them think and fill the blanks in themselves. When someone else fills in the blanks for you, you stop thinking and take their thinking for granted, even if what they’ve given you isn’t really what you wanted. It’s more energy efficient to let someone else think for you than to think for yourself. After all, if you’re handed a ready-made solution, why would you waste your time reinventing the wheel? Have you had the feeling, when AI provides a summary for you, of pointing at it and saying, “Ahh, that’s exactly what I’m thinking”? But is that really true? What if it had let you think for yourself instead? Would you really have come up with the same words, the same solution, and the same answer?

If you’re a machine learning (ML) or AI engineer/architect, do you really think you understand the models you’re creating? AI is a black box: you pump in some inputs and get out some outputs. How do you know what’s happening exactly within it? Indeed, you might believe that you can separate the trained layers and log their outputs after each layer (see the sketch below), but do you really understand what the output means? What does it mean when you see the output at layer 3? Does it really have a meaning? Do you feel, or think, you’re in control of what you’re training? Plus, with your limited attention, do you really think you can consider everything? If you believe you just need to know most of it, what about the biases in AI that sometimes pop up in the news? Now that a flaw has been revealed in your model, how can you be sure there aren’t other biases that haven’t surfaced? Or perhaps they have surfaced, but those who found them just aren’t popular enough to attract the attention of the news, so word never reaches you; and be honest, most people aren’t that popular, or we wouldn’t have to fight so hard to get someone to turn their attention to us, and even that is no guarantee. And how can you be sure the fix you applied to your models isn’t introducing new biases? Given that you have limited control over the models, are you sure you can really fix it? Would it not be easier, especially if you’re working for a mega-sized corporation, to quiet the person instead: force them to remove their negative postings, track them down for the next 6 months so you can remove new postings in real time, even on other platforms, by cooperating with the other corporations to seal the person off completely, and then make a post saying you have fixed it when you actually haven’t? The public would quiet down after a while, turn their attention to the ‘next big thing,’ and forget about your company. After all, they’re at your mercy to use your company’s product. They’re forced to sign the T&C before they can use it. They’re at your company’s mercy.
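For what it’s worth, “logging the output after each layer” is the easy part; here is a minimal sketch with PyTorch forward hooks on a toy model invented purely for illustration. What the sketch does not give you is the hard part the paragraph points at: the hooks hand you tensors of numbers, not an explanation of what “the output at layer 3” means.

```python
# Minimal sketch: capture every layer's output with PyTorch forward hooks.
# The model is a toy stand-in; any real network works the same way.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 2),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # store this layer's output
    return hook

for name, module in model.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(make_hook(name))

_ = model(torch.randn(1, 8))

for name, out in activations.items():
    # You can log the numbers, but do they "mean" anything to you?
    print(f"layer {name}: shape={tuple(out.shape)}, mean={out.mean():.4f}")
```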

Also, AI is modeled on how humans think; that’s the idea behind the ‘thinking mode’ popularized by DeepSeek. But why else would science, rooted in experimentation and in disproving your hypotheses, have replaced philosophy, rooted in rationalized argument? Have you ever thought you fully understood how something works, only for it to turn out totally different, and in your shock or surprise suddenly realized you actually didn’t understand a thing at all? Are you willing to demolish your current understanding and start from scratch again, after all the effort and time you put into it? If AI is modeled on how you think, how can you be sure it doesn’t suffer the same fate as us? Would its brain not have tricked itself? Why would you think it won’t understand something wrongly? When you point out someone’s flawed thinking, that someone, feeling shame, will probably strike out at you to save face, even if it means holding on to something wrong. An AI doesn’t have a face, so it can readily accept its mistake, but do you think it will spend additional computational power checking for anything wrong elsewhere, beyond the mistake you pointed out? So far, based on one’s experience with Perplexity AI’s default model, it didn’t. And it didn’t learn. The next time you ask the same question in a fresh session, the correction you made has probably already fallen out of its short-term memory (the context window, or num_ctx in technical terms) and it will give you the wrong answer again. How could you even trust an AI that doesn’t even doubt the answers it gives?
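To make the “short-term memory” point concrete, here is a minimal sketch against a local Ollama server. The /api/chat endpoint and the options.num_ctx field are real Ollama API surface; the model name and prompts are hypothetical. A correction persists only as long as it keeps being re-sent inside the context window, and a fresh session knows nothing about it.

```python
# Minimal sketch of why corrections don't "stick", assuming a local Ollama
# server at its default address and some locally pulled model (name is a
# placeholder). num_ctx caps how many tokens of conversation the model sees.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"   # default Ollama endpoint
MODEL = "llama3"                                 # hypothetical local model name

def ask(messages, num_ctx=4096):
    resp = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "messages": messages,
        "stream": False,
        "options": {"num_ctx": num_ctx},         # the context window size
    })
    return resp.json()["message"]["content"]

# Session 1: ask, correct the model, ask again. The correction only "sticks"
# because it is re-sent as part of the message history, inside num_ctx.
history = [{"role": "user", "content": "How do I migrate the old program's database?"}]
history.append({"role": "assistant", "content": ask(history)})
history.append({"role": "user", "content": "The documentation doesn't support that. Please recheck."})
history.append({"role": "assistant", "content": ask(history)})

# Session 2: a fresh history. Nothing from the correction above carries over,
# so the same wrong answer is likely to come back.
fresh = [{"role": "user", "content": "How do I migrate the old program's database?"}]
print(ask(fresh))
```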

But perhaps your stakes in AI are so high that nothing would change your mind about it. You have already made up your mind and will stick to it no matter what. In that case, perhaps you should never have read this article in the first place. You’ve just wasted your time.