Not long ago, AI agents were still in a nascent stage. Most of the industry wasn’t quite sure what they were, what they could realistically do, or whether they were just another marketing buzzword.
Fast forward to today, and the picture looks very different. Almost every other engineer seems to be building, experimenting with, or at least talking about agentic AI in some form. What was once a vague concept has quickly become a hands-on playground for technologists.
I remember when I first started hearing the term “AI agents”; it felt like one of those concepts everyone nodded along to but very few could clearly explain. Even I wasn’t sure whether it was a real shift or just another buzzword. But the more I explored, the more I realized this is something that will stick around, and something every data engineer should learn and understand.
One thing I noticed during my learning phase is that many existing blogs and explanations on this topic focus either on the theoretical foundations or on hands-on frameworks and tooling. What often gets less attention is why agentic AI exists in the first place and when it actually makes sense to use it.
In this article, I try to answer these foundational questions for readers who already have a tech background but haven’t worked with AI agents yet.
By the end, you’ll:
- Understand what Agentic AI actually means
- Learn the core concepts and terms
- Build a simple Agentic AI “Hello World” project you can run locally
What Is Agentic AI?
An AI agent is an AI system that doesn’t just answer questions, but can also take actions, make decisions, and work toward a goal.
Think of a normal AI chatbot like someone you ask:
What’s the best restaurant nearby?
and it replies with suggestions.
But an AI agent is like a real assistant who can do much more than make suggestions. It can:
- Look up restaurants
- Check ratings
- Book a table
- Add it to your calendar
- Set a reminder
So instead of just giving information, it acts on your behalf.
A simple way to think about it is with the following analogy:
LLM = Brain
Agent = Brain + Memory + Tools + Decision Loop
If an LLM answers questions, an agent gets things done.
Why Do We Even Need Agents?
Because real problems aren’t one-shot prompts. Building on the example from the previous section, here is another example to make clear how AI agents differ from the usual AI system and what specific gap they fill.
Let’s say we want to know what today’s weather looks like and, based on that, decide whether to go for a run (a rough sketch of this flow follows the lists below).
This would require:
- Fetching data
- Reasoning about it
- Making a decision
- Producing an action
An agent in this case can be helpful with:
- Deciding which tool to call
- Handling intermediate steps
- Retrying if something fails
- Stopping when the task is done
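To make this concrete, here is a rough sketch of that flow in plain Python. The get_weather() function is a made-up stand-in for a real weather API, and the if/else stands in for the reasoning an LLM would do in a real agent:

def get_weather(city):
    # Hypothetical tool: a real agent would call a weather API here.
    return {"condition": "clear", "temperature_c": 18}

def should_go_for_a_run(city):
    weather = get_weather(city)          # Fetch data (tool call)
    good_weather = (                     # Reason about it
        weather["condition"] == "clear"
        and 10 <= weather["temperature_c"] <= 25
    )
    if good_weather:                     # Make a decision
        return "Looks great outside: go for that run."   # Produce an action
    return "Maybe skip the run today."

print(should_go_for_a_run("Berlin"))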
Core Agentic AI Concepts
I hope by now you are pretty clear on what AI agents are and why we would even need them. Next, let’s dive into the world of Agentic AI and build some basic vocabulary that will make the more advanced concepts easier to learn.
1. Agent
An agent is an AI system that:
- Has a goal
- Can take actions
- Can reason over steps or action items
- Can use tools
Think of it as a long-running task executor, not a chatbot.
2. Goal / Objective
This is the north star for the agent.
Examples:
- “Summarize a document”
- “Answer a customer question using internal docs”
- “Monitor logs and alert on anomalies”
Agents don’t just respond — they work towards a goal.
3. Tools
Tools are external capabilities the agent can call.
Examples:
- APIs (weather, payments, search)
- Databases
- File system
- Functions you write in Python/JS
Agents decide when and how to use them.
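For example, a tool can be as simple as a plain Python function the agent refers to by name. This is just an illustrative sketch; the function names and the registry below are made up for this example, not part of any framework:

from datetime import datetime

def get_current_time():
    # Returns the current local time as HH:MM.
    return datetime.now().strftime("%H:%M")

def search_notes(keyword):
    # A real implementation might scan a folder or call a search API.
    return [f"notes/{keyword}.md"]

# The agent picks from this registry when it decides a tool is needed.
TOOLS = {
    "get_current_time": get_current_time,
    "search_notes": search_notes,
}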
4. Memory
Agents often need context beyond one prompt.
Memory can be:
- Short-term (conversation history)
- Long-term (stored facts, embeddings, vector DBs)
For a basic agent, short-term memory is enough.
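In the simplest case, short-term memory is nothing more than the list of messages you keep appending to and resend with every model call, for example:

# Short-term memory: the running conversation history.
messages = [
    {"role": "system", "content": "You are a helpful agent."},
    {"role": "user", "content": "What's the weather like today?"},
]

# After each step, append what happened so the next call has full context.
messages.append({"role": "assistant", "content": "Let me check the weather tool."})
messages.append({"role": "user", "content": "Tool result: clear, 18°C"})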
5. Planning / Reasoning Loop
This is the heart of agentic behavior.
Typical loop:
- Observe current state
- Think about next step
- Choose an action
- Execute the action
- Observe result
- Repeat or stop
You’ll often see this called ReAct or Plan-Act-Observe.
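Stripped of any particular framework, the loop has roughly this shape. The think() and execute() functions below are placeholders (in a real agent, think() would be an LLM call and execute() would dispatch to your tools):

def think(state):
    # Placeholder for the LLM: decide the next action from the current state.
    if state["observations"]:
        return {"action": "finish", "answer": "Done."}
    return {"action": "get_time"}

def execute(action):
    # Placeholder dispatcher: run the chosen tool and return its result.
    return "12:30" if action["action"] == "get_time" else None

def run(task, max_steps=5):
    state = {"task": task, "observations": []}
    for _ in range(max_steps):                 # Repeat...
        action = think(state)                  # Think about the next step, choose an action
        if action["action"] == "finish":
            return action["answer"]            # ...or stop when the task is done
        result = execute(action)               # Execute the action
        state["observations"].append(result)   # Observe the result
    return "Stopped after reaching max_steps."

print(run("What time is it?"))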
6. Multi-Agent System
A multi-agent system is a setup where multiple AI agents work together, each with its own role and responsibility, to solve a problem collaboratively like a team.
7. Environment
An environment is where the agent operates, for example a:
- browser
- file system
- calendar
- business tools
- apps
8. Agent Framework
These are the tools used to build agents. Some of the most commonly used frameworks are:
- LangChain
- AutoGPT
- CrewAI
- LlamaIndex
Time to get our hands dirty…
Enough theory; let’s jump in and build a basic AI agent, which should clear up any doubts you may still have.
We will build a Task Agent that:
- Takes a task from the user
- Decides whether it needs a tool
- Uses the tool
- Returns a final answer
What This Agent Will Do
Example task:
“What’s the current time and should I grab a coffee?”
The agent will:
- Decide it needs the current time
- Call a time tool
- Reason about coffee
- Respond
Step 1: Basic Setup
We will use Python and an OpenAI-compatible API.
Run the following command in your terminal or command prompt:
pip install openai
Step 2: Define a Tool
We will define the tool as a simple Python function in a single file, let’s say agent_hello_world.py. For a real project, you would likely organize tools into separate modules, but keeping everything together makes it easier to see how the agent loop works end to end.
from datetime import datetime

def get_current_time():
    # Tool: returns the current local time as HH:MM.
    return datetime.now().strftime("%H:%M")
Step 3: Define the Agent Prompt
This is where the agent’s behavior is designed. Add this to the Python file created in the previous step. Note that the prompt spells out the exact USE_TOOL trigger the agent loop will look for.
SYSTEM_PROMPT = """
You are a simple AI agent.
You can:
- Think step by step
- Decide when to call tools
- Use the tool: get_current_time
When the task is complete, respond with a final answer.
"""
Step 4: Agent Reasoning Loop
Next, we will add the reasoning logic to our code.
The snippet below starts by sending the user’s task, along with the system prompt, to the model through the Chat Completions API. It then enters a loop where it keeps reading the model’s responses. If a reply includes the trigger USE_TOOL:get_current_time, the agent calls the helper function to get the current time, adds both the tool request and the tool result back into the message history, and continues the conversation. If the reply doesn’t request a tool, the agent assumes the task is complete and returns the final response.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")

def run_agent(task):
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": task},
    ]

    while True:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=messages,
        )
        reply = response.choices[0].message.content

        if "USE_TOOL:get_current_time" in reply:
            # The model asked for the tool: run it and feed the result back.
            current_time = get_current_time()
            messages.append({"role": "assistant", "content": reply})
            # Pass the tool output back as a user message so the model can use it.
            messages.append({"role": "user", "content": f"Tool result: {current_time}"})
        else:
            # No tool requested: treat this as the final answer.
            return reply
Step 5: Test It
Finally, add the lines below to your script and run it to test the agent.
task = "What time is it right now, and should I grab a coffee?"
result = run_agent(task)
print(result)
Congratulations! You’ve just built a basic agent.
Not fancy, but a real one, built by yourself.
We just implemented a basic local AI agent with:
- Goal-driven behavior
- Tool usage
- A reasoning loop
- Stateful decision-making
This is the core of every agent framework out there.
LangChain, AutoGen, CrewAI - all of them abstract this exact pattern.
Final Thoughts
To me, agentic AI isn’t some overnight revolution. It is more like a slow and steady shift in the same direction we have been moving for years: automating things and making systems more intelligent. When you strip away the buzzwords, it is still just a system running in a loop, holding context, making decisions, and calling tools when needed. The only new part is that the “thinking” piece can now be done by an LLM.
I have found that agents make a lot more sense when you treat them as yet another engineering component, not a futuristic sci-fi concept. Build them the way you would build any other system, and you will realize that not every problem needs an agent; sometimes the simplest workflow is still the best one.