Introduction

Artificial intelligence (AI) has helped software developers automate difficult tasks, making them more productive and enabling them to deliver more systems of higher quality. One of the most important developments is the emergence of AI agents: digital entities that can reason, make decisions, and carry out tasks largely on their own, with little help from people. AI agents can understand natural language, plan over several steps, call external tools or APIs, and adjust the plan on the fly if the context changes while it is being carried out. This sets them apart from static automation, which relies on hard-wired, rule-based logic.

Thanks to advances in Large Language Models (LLMs), models can now reason in an agent-like way, understanding user intent and the structure of the complex responses an agent must produce. AI agents are now being entrusted with tasks such as coding, debugging, writing documentation, testing, and coordinating projects through knowledge bases, APIs, and development environments. Combining the reasoning of LLMs with the ability to act through practical tools is significantly transforming how software is created and maintained.

One area where AI agents have proved especially useful is software development. Modern systems have huge code bases and complicated dependency chains that weigh heavily on developers and development teams, along with a never-ending pressure to release more code faster. AI agents can lower this cognitive load, speed up delivery, and improve quality by automating repetitive tasks and giving advice tailored to the situation. Development teams that use coding agents such as GitHub Copilot report that they can build faster and with fewer bugs, which shows how these technologies play out in practice, and the sector has already seen significant productivity gains.

This article covers what AI agents are, how they work, and how they are used in software development. We start with a brief overview of the software architecture that ties together the LLM reasoning engine, external tools, control logic, and knowledge integration. We then walk through how an agent operates: how it interprets a query, which tools it uses to do its job, and how it combines the results. Finally, we close with a code example that shows developers how to build a simple AI agent and integrate it into their own applications. The goal is to give you both the theoretical background and the practical steps for using AI agents to change the way software is made.

UNDERSTANDING AI AGENTS

AI agents are intelligent software systems designed to act on their own. They can make decisions, carry out tasks, and adapt to changing conditions—all with little to no human input. Unlike traditional automation, which relies on fixed rules, these agents can reason, plan, and use multiple tools to handle complex, multi-step problems that demand flexibility and learning. [1].

AI agents can be grouped based on their capabilities, roles, skills, and the specific outcomes they’re designed to deliver. Here’s a look at some of the most common and emerging types being developed today:

Copilot Agents for Individual Support

Often called “copilots,” these agents are designed to work alongside individual users to boost their productivity. Think of them as smart assistants that help with everyday tasks like drafting emails, writing code, or pulling up relevant information. Tools like Microsoft 365 Copilot and ChatGPT fall into this category. Some copilots can even adapt to a user's specific workflow. That said, their overall impact depends heavily on how motivated and engaged the user is.

Agents for Workflow Automation

These agents focus on automating tasks—either single steps or entire workflows. They act as AI-driven process managers, helping streamline and execute existing business operations. Examples include Microsoft’s Copilot Studio and Salesforce’s upcoming Agentforce platform. Since these agents are often layered onto current systems, their success depends on strong implementation, effective change management, and continuous oversight.

Domain-Specific Generative AI Agents

These are purpose-built agents created for specific business areas or functions. For example, a customer service bot that handles queries end-to-end, or an AI tool integrated into the software development pipeline. Unlike more generic AI tools, these agents are designed with AI as a core component of the solution—not just an add-on to existing processes.

AI-Native Enterprises and Operating Models

At the highest level of integration are AI-native organizations, where agents are embedded throughout the entire operating model. This includes rethinking how teams work, how processes run, and even how the business generates value. It’s a full-scale shift—similar to the digital transformations many companies went through in the last decade, but now with AI as the foundation.

AI Virtual Workers

AI virtual workers are agents that function like full-fledged employees. They take on roles typically held by humans, working within current organizational structures. These agents give companies a way to gain the benefits of AI without completely overhauling their operations—potentially allowing faster returns while maintaining familiar systems (McKinsey, 2025).

These examples show just how diverse AI agents can be. Some are quite rigid, following predefined rules with little adaptability. Others are highly autonomous, capable of learning from experience, correcting their mistakes, and even working alongside other agents or users to reach complex goals. [2].

With the rapid growth of generative AI, these agents are stepping into even more dynamic and demanding roles. They're already helping to power everything from virtual assistants and recommendation engines to self-driving cars. In the world of software development, AI agents are starting to reshape the entire lifecycle—from writing and testing code to deployment and ongoing maintenance. As more organizations adopt both standalone and collaborative (multi-agent) systems, they're discovering new ways to streamline workflows, improve cross-team collaboration, and build AI tools customized for their unique business needs. [1].

AI Agent System Architecture and Components

A high-level overview of an AI agent system can be described as a collection of parts that work together to meet user needs. Figure 1 shows what a typical architecture looks like. The user makes a request, which the AI agent must handle. A Large Language Model (LLM) serves as the agent's "brain" for processing the query and coming up with solutions. The agent can use external tools or APIs (such as web search, calculators, or databases) and query a knowledge base (such as a vector database or a document repository) to obtain information the LLM does not already know. The agent plans these steps and then sends a response to the user. This structure fits the growing trend of tool-using AI systems: "An agent uses a LLM to know what actions to take and in what order" [3]. The LLM looks at the question and, if necessary, makes a plan, which could mean using a tool to fetch a piece of data and then coming up with a final answer.

Core Components

In short, an AI agent architecture has an LLM for reasoning, plug-in tools/APIs for actions, a memory or knowledge base for context, and a control loop that plans and carries out steps to help users reach their goals. This modular design is flexible, so programmers can easily plug in new tools or data sources as they become available. Several frameworks make this easier. For example, LangChain (popular in Python) adds abstractions for LLMs, tools, and memory so that these kinds of agents can be built quickly; a C# version of LangChain is currently under development [6]. Microsoft's Semantic Kernel is another option; it has been called "a lightweight SDK to make building AI agents easier by seamlessly plugging in the latest AI models and allowing you to write custom AI in C# or Python" [5]. Microsoft Copilot Studio, by contrast, is a low-code platform that lets non-coders build custom AI agents with drag-and-drop workflows, built-in connectors, and control over business logic. [7]
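To make these components concrete, here is a minimal, framework-agnostic sketch of how they might be expressed in C#. The interface and class names (ILanguageModel, ITool, IMemory, Agent) are our own illustration for this article, not the API of LangChain, Semantic Kernel, or any other library.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical component interfaces for an agent. These names are our own
// illustration and do not correspond to any particular framework's API.
public interface ILanguageModel
{
    Task<string> CompleteAsync(string prompt);        // the reasoning "brain"
}

public interface ITool
{
    string Name { get; }                              // e.g. "Calculator", "WebSearch"
    Task<string> InvokeAsync(string input);           // an action the agent can take
}

public interface IMemory
{
    void Append(string role, string message);         // record conversation turns
    string AsPromptContext();                         // render history into the prompt
}

// The control loop ties the pieces together; its run logic is sketched in the
// next section on the agent workflow.
public class Agent
{
    private readonly ILanguageModel _llm;
    private readonly IReadOnlyDictionary<string, ITool> _tools;
    private readonly IMemory _memory;

    public Agent(ILanguageModel llm, IEnumerable<ITool> tools, IMemory memory)
    {
        _llm = llm;
        _tools = tools.ToDictionary(t => t.Name);
        _memory = memory;
    }
}

Next, we'll discuss the agent workflow and then build a simple C# agent on these ideas.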

Agent Workflow – From User Query to Action and Answer

To understand what happens behind the scenes when an AI agent does its job, it helps to break down the step-by-step process that runs when the agent is given a task. The idea behind this way of working is that the agent thinks and acts in turn until the task is done. Figure 2 shows a simplified version of this agent logic loop.

  1. Receive a User Query: The process begins when the agent receives a user's question or request, for instance, "What were Company X's sales last year, and who was the CEO at the time?" The agent will probably need to obtain data (revenue) and look up the CEO's name for that year.

  2. LLM Reasoning Step: The agent first consults the LLM to figure out what the query means and what to do next. The agent's prompt to the LLM includes the question plus any relevant context and asks the model to decide on the next step. The LLM's output at this stage represents the agent's reasoning about what to do. It might say, "This question needs data retrieval, so I should call the financial database tool to obtain revenue and maybe the Wikipedia tool to obtain the CEO's name." The agent acts on this choice.

  3. Execute Action (Tool or Data Retrieval): The agent carries out the chosen action. It might call an external tool or query the knowledge base. In our case, the agent could make an API call to a financial database to find out how much money Company X made last year, or it could search the web for "Company X CEO 2022." At this point the agent leaves the world of pure LLM reasoning and starts to interact with the outside world. For instance, the finance API might return "Company X's revenue in 2022 was $15 billion," and Wikipedia might say "The CEO of Company X in 2022 was Jane Doe."

  4. Send the Result Back to the LLM (Context Update): After getting the outside information, the agent feeds it back to the LLM. This is often called "looping back to reasoning." The new data is added to the dialog (or prompt) so that the LLM can use it; in effect the agent tells the LLM, "Here is what I did and what I found; continue." The LLM then puts that information together and, if necessary, either decides on another action or produces an answer. For tasks with more than one step, steps 2–4 are repeated: the LLM may chain a series of tool calls, deciding what to do next each time based on the new information. This is what gives more advanced agents the ability to chain tasks. Most modern libraries handle this loop for you. For example, LangChain agents check whether the LLM output is another action or a final answer and loop accordingly. OpenAI's function-calling API makes this even easier by returning a function call as a JSON object that the code can run to keep the conversation going.

  5. Answer the User: Eventually the LLM decides it has enough information to answer the question (the "final answer" path). The agent then ends the loop and sends the answer back to the user. For the company question, the model might say something like, "Company X made $15 billion in sales last year, and Jane Doe was the CEO at the time." The agent formats this as the final answer and passes it on. The loop ends once the agent has done its best to fulfill the user's request.

This process keeps the agent flexible and lets it figure out the best way to solve a problem on the fly; the output of the AI model on each iteration decides what to do or say next. The same loop handles longer queries that require multiple steps or actions. For example, an agent trying to plan a trip will probably need to look up flights, then hotels, and then assemble an itinerary, which involves several action cycles.
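To make the loop concrete, here is a minimal C# sketch of the reason-act cycle described above. The callLlm delegate, the tools dictionary, and the text conventions "TOOL:<name>:<input>" and "FINAL:<answer>" are assumptions made for this illustration; they are not any library's actual protocol.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// A minimal sketch of the reason-act loop from Figure 2. Real frameworks use
// structured outputs or function calling instead of these text conventions.
public static class AgentLoop
{
    public static async Task<string> RunAsync(
        Func<string, Task<string>> callLlm,                              // step 2: reasoning
        IReadOnlyDictionary<string, Func<string, Task<string>>> tools,   // step 3: available actions
        string userQuery,
        int maxSteps = 5)
    {
        string context = $"User: {userQuery}";

        for (int step = 0; step < maxSteps; step++)
        {
            string output = await callLlm(context);

            if (output.StartsWith("FINAL:"))                             // step 5: answer the user
                return output.Substring("FINAL:".Length).Trim();

            if (output.StartsWith("TOOL:"))                              // step 3: execute an action
            {
                string[] parts = output.Substring("TOOL:".Length).Split(':', 2);
                string toolName = parts[0].Trim();
                string toolInput = parts.Length > 1 ? parts[1].Trim() : "";

                string observation = tools.TryGetValue(toolName, out var tool)
                    ? await tool(toolInput)
                    : $"Unknown tool '{toolName}'";

                // step 4: feed the observation back into the context and loop again
                context += $"\nAgent: {output}\nObservation: {observation}";
                continue;
            }

            return output;   // the model answered directly without using the convention
        }

        return "I could not complete the task within the allowed number of steps.";
    }
}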

An extra dimension for dialogue agents is the history of the conversation. When users pose follow-up questions, the agent recalls relevant exchanges from previous turns to maintain context. For this reason, many frameworks provide a conversation memory buffer; for example, LangChain's ConversationBufferMemory keeps chat history for prompts. [3].

The agent's prompt to the LLM then includes the conversation history so that the model knows what references like "this paragraph" or "that company" mean. The memory component described in the "AI Agent System Architecture and Components" section takes care of this link.
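As an illustration, here is a minimal conversation-memory buffer in C#. The ConversationMemory class below is our own sketch, modeled loosely on what LangChain's ConversationBufferMemory does, rather than an existing API.

using System.Collections.Generic;
using System.Text;

// A minimal conversation-memory buffer: it simply replays past turns into
// each new prompt so the model can resolve references from earlier turns.
public class ConversationMemory
{
    private readonly List<(string Role, string Message)> _turns = new();

    public void Append(string role, string message) => _turns.Add((role, message));

    // Included in every prompt so the model knows what "it" or "that company"
    // refers to from earlier in the conversation.
    public string AsPromptContext()
    {
        var sb = new StringBuilder();
        foreach (var (role, message) in _turns)
            sb.AppendLine($"{role}: {message}");
        return sb.ToString();
    }
}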

It's important to remember that not all agents use the same methods. You can write simpler agents in a purely rule-based way, such as "if the query contains word X, then call API Y." More advanced agents use the LLM as an intelligent controller to stay dynamic. Some agents only do one thing at a time, such as answering a question or using a tool, while others handle many things at once and run autonomously, continuing to work toward their own goals until they stop. Agents in multi-agent systems can even talk to each other or specialize in particular tasks, but that is as far as we'll go here.

The ReAct pattern we showed, where the LLM alternates between choosing an action and giving a direct answer, has become the standard model for agents that use tools, because it lets a single agent break a complex problem into smaller parts that can each be solved with the best available resource. In the past this was done with prompt patterns and by parsing the model's answers; as mentioned, it's now much more robust thanks to OpenAI's function-calling interface. The model can return a JSON object that represents a function call, the client code runs that function, and the model then picks up where it left off using the returned result. [4]. The loop in Fig. 2 is basically there to make sure the model's choices are carried out in a safe and organized way. For multi-step workflows in C# projects, Microsoft's Semantic Kernel uses the same idea with its Planner and Functions (Plugins). [8] [5].

Once you understand this flow, it becomes relatively easy to see where to incorporate custom logic. For example, you could add a post-processing step after the LLM answer to format it, or log every tool result for auditing. It also shows what the libraries do for you: they provide this loop out of the box so you don't have to write it yourself. Next, we'll get hands-on and show how to use C# to create agent-like behavior and connect an LLM to a tool, with a straightforward example to reinforce these concepts.

AI AGENTS IN SOFTWARE DEVELOPMENT

AI-driven agents are fundamentally changing the way software is developed and digital products are built. By automating repetitive tasks such as writing code and executing tests, these intelligent systems help developers work more efficiently and with greater accuracy. However, their influence extends well beyond simple time savings. AI agents are beginning to reshape the software development process itself—introducing smarter workflows and enabling higher-quality results with reduced manual effort.

AI is already making a noticeable difference in software engineering, particularly through tools like OpenAI’s Codex and GitHub Copilot. These systems can translate natural language prompts into functional code, helping developers automate routine and repetitive tasks. By cutting down on the time spent writing boilerplate code, they allow engineers to focus on more complex and creative challenges. In addition to generating code, AI agents can offer real-time suggestions, auto-completions, and even explain code snippets—making development faster and easier. This not only reduces mental strain for developers but also streamlines workflows, boosts productivity, and improves the overall efficiency of the software development process (Panyam & Gujar, 2025).

One of the most notable advancements in this space is agentic coding, a new paradigm in AI-powered development.

In this approach, autonomous or semi-autonomous agents go beyond basic assistance and take on core responsibilities—such as planning, implementing, and validating complex coding tasks—with minimal human oversight. These agents can interpret natural language instructions and translate them into fully functional, testable code. Supporting such capabilities requires a robust system architecture that integrates goal-based planning, task decomposition, execution environments, safety mechanisms, and continuous feedback loops to ensure accuracy and reliability (Sapkota et al., 2025).

AI agents are transforming software development by offering tailored support that boosts productivity, minimizes errors, and frees up time for creative problem-solving. By combining techniques like retrieval-augmented generation, code search, and fine-tuning, highly specialized and powerful agents can be built for specific roles (Criveti, 2023).

Hands-On Implementation in C# – Building a Simple AI Agent

In this part, I'll show you how to build a very basic AI agent in C#. Our agent will be based on an OpenAI GPT model and will be able to use at least one tool or external data source. The goal is to show how the different pieces fit together in code. Don't worry if you're a new developer; we'll explain it in simple steps. Readers who are already familiar with this material (mid-level engineers, architects, etc.) can use this simple demo as a starting point for more complicated, production-ready systems.



Setup: First you need access to an LLM API. We will use OpenAI's API, but you could also use an Azure OpenAI endpoint or another service. You need an OpenAI API key and a suitable .NET SDK or library. The OpenAI .NET SDKs (such as the community OpenAI_API package, or the Azure.AI.OpenAI SDK if you're using Azure) are available as NuGet packages. We will use the community OpenAI_API package to show how easy it is to call the OpenAI service [1]. In Visual Studio, you can install it through the NuGet Package Manager. Once everything is set up, we can write code that talks to the model.

Our simple agent will answer questions, and if it sees a math expression it will use a calculator. This is just a toy example, but it shows how an agent can decide to use a tool. If the user asks, "What is 4+5?", for instance, the agent can use the tool to compute the sum and then explain the result.

Let's put this into action. Here is a straightforward code example that illustrates the concept:

using System;
using System.Threading.Tasks;
using OpenAI_API;
using OpenAI_API.Completions;

class SimpleAgent
{
    static async Task Main(string[] args)
    {
        string openAiApiKey = "YOUR_API_KEY_HERE";
        var api = new OpenAIAPI(openAiApiKey);

        // Tool: A simple calculator function
        Func<string, double> Calculator = expr =>
        {
            // Very basic eval using DataTable (for demo purposes)
            try {
                using var dt = new System.Data.DataTable();
                var result = dt.Compute(expr, ""); 
                return Convert.ToDouble(result);
            } catch {
                throw new Exception("Invalid expression");
            }
        };

        // User query
        string userQuery = "What is 4 + 5? Explain the result in words.";

        // Prompt design: instruct the model about the CALC tool usage
        string systemInstructions = 
            "You are a smart agent. Answer the user's question. " +
            "If the question includes a math problem, respond with 'CALC:<expression>' to use the calculator.";

        // Combine system instructions and user question
        string prompt = systemInstructions + "\nUser: " + userQuery + "\nAgent:";

        // Call the OpenAI API for completion
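        // NOTE: "text-davinci-003" has since been retired by OpenAI; swap in a
        // currently available model (or the chat completions API) when running this.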
        var request = new CompletionRequest(prompt, model: "text-davinci-003", maxTokens: 150);
        var response = await api.Completions.CreateCompletionAsync(request);
        string agentReply = response.Completions[0].Text.Trim();

        Console.WriteLine("Raw agent reply: " + agentReply);

        // Check if agent requested calculation
        if(agentReply.StartsWith("CALC:"))
        {
            // Extract expression after "CALC:"
            string expression = agentReply.Substring("CALC:".Length).Trim();
            double value = Calculator(expression);
            Console.WriteLine("Calculator result: " + value);

            // Ask the model again, now providing the calculation result for context
            string followUpPrompt = systemInstructions + 
                $"\nUser: {userQuery}\nAgent: CALC:{expression}\nCalculator: {value}\nAgent:";
            var followUpRequest = new CompletionRequest(followUpPrompt, model: "text-davinci-003", maxTokens: 150);
            var followUpResponse = await api.Completions.CreateCompletionAsync(followUpRequest);
            string finalAnswer = followUpResponse.Completions[0].Text.Trim();

            Console.WriteLine("Agent final answer: " + finalAnswer);
        }
        else
        {
            // No calculation needed, output the answer directly
            Console.WriteLine("Agent final answer: " + agentReply);
        }
    }
}

So let's break down what this code does:

This is a pretty simple example, but it captures the loop: model decision → tool execution → model final answer. In effect, it's a manual version of the reasoning loop described in the workflow section. In a real system, the model output would be parsed and handled automatically and then looped back. For example, OpenAI's function-calling API would let us skip the string parsing step: the model would output a structured payload like {"name": "Calculator", "arguments": {"expression": "4+5"}}, and we could simply call the function and then send the result back in a follow-up API call with the role "function." Frameworks such as Semantic Kernel or LangChain would wire all of this together for us, but it's useful to understand the flow at the code level.
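For illustration, here is a rough C# sketch of how such a structured payload could be parsed and dispatched using System.Text.Json. The hard-coded payload and the dispatch logic are assumptions for this example; the exact response objects depend on the SDK you use (in the raw OpenAI API, the "arguments" field typically arrives as a JSON string that you parse the same way).

using System;
using System.Text.Json;

// Sketch of dispatching a structured function-call payload like the one above.
string payload = "{ \"name\": \"Calculator\", \"arguments\": { \"expression\": \"4+5\" } }";

using var doc = JsonDocument.Parse(payload);
string name = doc.RootElement.GetProperty("name").GetString();

if (name == "Calculator")
{
    string expression = doc.RootElement
        .GetProperty("arguments")
        .GetProperty("expression")
        .GetString();

    using var dt = new System.Data.DataTable();
    double value = Convert.ToDouble(dt.Compute(expression, ""));

    // The result would then go back to the model in a follow-up call with the
    // "function" (or "tool") role, so it can produce the final user-facing answer.
    Console.WriteLine($"Calculator result: {value}");
}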

If you run the C# program above, you might receive output like this:

Raw agent reply: CALC:4 + 5
Calculator result: 9
Agent final answer: 4 + 5 equals 9, so the answer is nine.

This shows that the agent recognized it needed to calculate 4+5, used the calculator tool (our code computed the result), and produced the final answer using it. If the user had asked a purely factual question, like "Who is the President of France in 2023?", the agent could have answered directly, or used a different tool if it had access to a knowledge base.

This repeatable agent pattern can be extended in many ways, for example by adding more tools, a knowledge-base lookup, or conversation memory in the same style.

Testing and Iteration: Building an agent usually means designing and testing prompts repeatedly. You may find that the model doesn't always return the tool format you want, or that it answers directly even when it should have used a tool. Tightening the system instructions, adding examples of the expected format, and validating the model's output before acting on it all help.

One of the first things you should add for production is error handling. For example, if the calculator throws an exception (say, the user entered a malformed expression), you might want the agent to handle it gracefully by apologizing or trying a different approach. In the same way, if the LLM output is malformed, the code can retry the call or fall back to a default response.
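As a small illustration, here is one way the calculator tool could be wrapped defensively in C#. The SafeCalculate helper and its error message are hypothetical choices for this sketch.

using System;
using System.Data;

// A minimal sketch of defensive tool handling: if the calculator fails, return
// a message the agent loop can relay (or react to) instead of crashing.
static string SafeCalculate(string expression)
{
    try
    {
        using var dt = new DataTable();
        return Convert.ToDouble(dt.Compute(expression, "")).ToString();
    }
    catch (Exception)
    {
        // Surface the failure so the LLM can apologize or ask the user to rephrase,
        // or so the surrounding code can retry / fall back to a default response.
        return "ERROR: could not evaluate the expression.";
    }
}

Console.WriteLine(SafeCalculate("4 + 5"));   // prints 9
Console.WriteLine(SafeCalculate("4 + "));    // prints the error message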

Advanced Use Cases and Future Directions

Now that we've covered the basics, let's move on to more advanced topics and how AI agents have been used in the real world, especially from the perspective of an experienced architect.

Enterprise and Multi-Agent Systems: In more complex situations, you could have a group of agents with different skills working together. For example, one agent might handle user interactions (a conversational front-end) but forward requests to specialist back-end agents (one for database access, one for math, and so on). Within a single simulation, agents can also play more than one role or personality (for example, two AI agents can converse with each other to play both sides of a negotiation). Agents can work together to solve challenging problems: research projects such as AutoGPT and GPT-Engineer show how agents can break down high-level goals, write code, and then critique and improve it over several rounds. Agents working together must agree on rules for communication and for sharing results. One study found that agents might specialize and cooperate, one handling data verification and another doing research, so they can tackle harder problems as a group. It's a very exciting new area, but it does make it harder to keep things consistent and to stop agents from falling into an endless loop of asking each other questions!
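As a rough illustration of the idea, the C# sketch below shows a coordinator that forwards requests to specialist agents. The ISpecialistAgent interface, the Coordinator class, and the keyword-based routing are all hypothetical simplifications; a real system would let an LLM make the routing decision.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// A toy sketch of a front-end coordinator delegating to specialist agents.
public interface ISpecialistAgent
{
    Task<string> HandleAsync(string request);
}

public class Coordinator
{
    private readonly IReadOnlyDictionary<string, ISpecialistAgent> _specialists;

    public Coordinator(IReadOnlyDictionary<string, ISpecialistAgent> specialists)
        => _specialists = specialists;

    public Task<string> RouteAsync(string request)
    {
        // Naive keyword routing as a stand-in for an LLM routing decision.
        string key = request.Contains("database", StringComparison.OrdinalIgnoreCase)
            ? "database"
            : "math";
        return _specialists[key].HandleAsync(request);
    }
}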

Integration with Business Workflows: Microsoft's most recent products, such as Copilot for Microsoft 365, Copilot Studio, and Azure AI Foundry, point to a future where AI agents are built directly into business processes. For example, an AI agent that works with Microsoft 365 can draft replies to emails, pull data from CRM systems, and set up appointments, acting like a digital personal assistant for a whole company. These use cases involve connecting the agent to enterprise data, whether on-premises or in the cloud, while keeping security and compliance in mind. Developers should also make sure the agent doesn't leak private information, that it can be audited (by logging what the agent did or said for review), and that it has fail-safes (for example, requiring human approval for some high-risk actions). The difference between Copilot (Extended), Copilot Studio, and Azure AI Foundry is mostly about how much control you have, since they target different goals: with Microsoft 365 Copilot you configure and use an existing application, whereas with a custom solution you build it yourself [7]. Copilot (Extended) might work for you if you need to connect some office apps with less code and faster integration; if you want to build your own agent with your own model choices and data orchestration, Azure AI Foundry (code-first) is the way to go. [7].

Performance and Scaling: If your agent calls external tools, you are adding latency; each tool call may be a web request or a database query, and if an agent takes multiple steps those delays add up. Caching results, parallelizing calls (when it makes sense), and keeping the number of LLM tokens (prompt size) down all help performance. Cost is also a factor: each LLM call consumes tokens, so an agent that chats back and forth a lot can run up a large bill. More advanced systems combine multiple questions into one prompt to cut down on loops, or reuse results from previous sessions so they don't repeat work. The simplest approach is to use the large model for every step, but a cheaper strategy is to use smaller models for most of the subtasks and only run the big model at the end or for the most difficult reasoning (a kind of model cascading).
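As a small illustration, here is a simple C# sketch of caching tool results. The CachingTool wrapper is a hypothetical helper for this example, not part of any SDK.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A minimal sketch of caching tool results so repeated calls (across steps or
// sessions) don't pay the latency and cost again.
public class CachingTool
{
    private readonly Func<string, Task<string>> _inner;                 // the real tool call
    private readonly ConcurrentDictionary<string, string> _cache = new();

    public CachingTool(Func<string, Task<string>> inner) => _inner = inner;

    public async Task<string> InvokeAsync(string input)
    {
        if (_cache.TryGetValue(input, out var cached))
            return cached;                                              // reuse a previous result

        string result = await _inner(input);
        _cache[input] = result;
        return result;
    }
}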

Trust and Reliability: Keep in mind that AI agents can sometimes be wrong or hallucinate. For instance, the LLM might make up a tool result if it doesn't actually have access to the tool or was given incorrect feedback (this is why it's important to feed it the real tool output). Cross-validation steps may be needed to ensure reliability; for instance, after the agent responds, a verification agent or function could check some of its claims against a trustworthy source. In more sensitive settings, such as medical or financial advice, it is best to have a person review the AI agent's work or to restrict it to a few well-defined output formats. The continuous feedback loops in the architecture help here: you can make the agent reflect on its own answer, asking "Did I actually answer the question?", and revise the response if not.
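One way to implement such a self-check is sketched below in C#. The AnswerVerifier helper and its prompts are assumptions made for illustration; a production system would tune the wording and might use a separate, cheaper model for the check.

using System;
using System.Threading.Tasks;

// Sketch of a simple self-check pass: after producing a draft answer, ask the
// model whether it actually answered the question, and request a revision if not.
public static class AnswerVerifier
{
    public static async Task<string> VerifyAsync(
        Func<string, Task<string>> callLlm, string question, string draftAnswer)
    {
        string checkPrompt =
            $"Question: {question}\nAnswer: {draftAnswer}\n" +
            "Does the answer fully address the question? Reply YES or NO.";

        string verdict = await callLlm(checkPrompt);
        if (verdict.TrimStart().StartsWith("YES", StringComparison.OrdinalIgnoreCase))
            return draftAnswer;

        // Ask for a corrected answer when the self-check fails.
        return await callLlm(
            $"The previous answer did not fully address the question.\n" +
            $"Question: {question}\nPrevious answer: {draftAnswer}\n" +
            "Provide a better answer.");
    }
}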

Use Case Spotlight: The examples already mentioned give a sense of the range: GitHub Copilot assisting individual developers, Copilot for Microsoft 365 handling office workflows, and AutoGPT-style agents decomposing and pursuing higher-level goals.

Future Outlook: There is a great deal of ongoing research on AI agents, and new frameworks and methods are on the way that will make them more capable and easier to build.

In summary, we create AI agents by combining the generative and reasoning capabilities of LLMs with external tools and data through a reasoning loop. We started with the theoretical background (architecture and components), then went through the steps an agent takes to think and act, and then used a simple C# example to make those ideas concrete. Finally, we looked at how the same ideas apply to more advanced agents in several other fields.

AI agents are already changing software development and many other fields; for example, McKinsey reports that development teams using AI agents completed some tasks almost twice as fast. Their impact will only grow. When you, as a software engineer or data scientist, start building AI agents, remember to begin with a small set of features, test thoroughly, and expand gradually. You now have more tools than ever to create agents that are reliable and useful, including LangChain, Semantic Kernel, and the newer platform offerings. We hope this series has given you the tools and inspiration to try out AI agents in your own work. In future parts, we'll look at a specific advanced use case, such as building a knowledge-retrieval QA bot or an agent that plugs into a particular business workflow. Have fun coding, and may your agents always choose the right tool for the job!

References

[1] IBM. (n.d.). AI agents. IBM. Retrieved September 15, 2025, from https://www.ibm.com/think/topics/ai-agents

[2] Vailshery, L. S. (2024, September 27). AI agents—statistics & facts. Statista. https://www.statista.com/topics/12433/ai-agent/

[3] Patel, M. (2023, August 22). Implementing agents in LangChain. C# Corner. https://www.c-sharpcorner.com/article/agents-in-langchain/

[4] Foy, P. (2023, June 16). Getting started with GPT‑4 function calling. MLQ.ai. https://blog.mlq.ai/gpt-function-calling-getting-started/

[5] Microsoft. (2024, June 24). Introduction to Semantic Kernel. Microsoft Learn. https://learn.microsoft.com/en-us/semantic-kernel/overview/

[6] TryAGI. (n.d.). LangChain .NET documentation. GitHub Pages. Retrieved September 15, 2025, from https://tryagi.github.io/LangChain/

[7] Spiridon, Ș. (2025, March 4). AI agents made easy: Choosing between Microsoft 365 Copilot (Extended), Microsoft Copilot Studio and Azure AI Foundry. ITMAGINATION Blog. https://www.itmagination.com/blog/ai-agents-microsoft-365-copilot-copilot-studio-ai-foundry

[8] Microsoft. (n.d.). Semantic Kernel [GitHub repository]. GitHub. Retrieved September 15, 2025, from https://github.com/microsoft/semantic-kernel

[9] Shen, Y., Song, K., Tan, X., Li, D., Lu, W., & Zhuang, Y. (2023). HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv. https://arxiv.org/abs/2303.17580

[10] Microsoft. (n.d.). Microsoft Learn. https://learn.microsoft.com

[11] Impressive-Fly3014. (2025, January). Best framework to build AI agents like crew Ai, LangChain, AutoGen ..?? [Discussion post]. Reddit. https://www.reddit.com/r/LLMDevs/comments/1i4742r/best_framework_to_build_ai_agents_like_crew_ai/

[12] Prakash, L. D. (2025, June 4). Agentic AI frameworks: Building autonomous AI agents with LangChain, CrewAI, AutoGen, and more. Medium. https://medium.com/@datascientist.lakshmi/agentic-ai-frameworks-building-autonomous-ai-agents-with-langchain-crewai-autogen-and-more-8a697bee8bf8