⚠️ This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the 🤖 in C# are 100% mine.

Hi there! 👋

If you’ve been keeping an eye on the AI developer world, you might have heard about the Microsoft Agent Framework. This is a brand-new (as of late 2025) open-source SDK and runtime from Microsoft that makes it much easier to build and orchestrate AI agents – including scenarios with multiple agents working together. In this post, I’ll give you a friendly rundown of what the Agent Framework is, why it’s exciting, and how you can start tinkering with it in C#. We’ll also point to official docs, blog posts, and videos so you can dive deeper.

What Is the Microsoft Agent Framework?

At its core, Microsoft Agent Framework (public preview) is a toolkit for building intelligent agents and multi-agent systems in a simpler, more unified way. It’s available for both .NET and Python, and it’s open-source. In fact, it’s the successor to two earlier Microsoft projects: Semantic Kernel (which focused on enterprise AI orchestration) and AutoGen (a Microsoft Research project on multi-agent interactions). The Agent Framework basically combines the best of both – you get AutoGen’s ease of orchestrating multiple agents together, plus Semantic Kernel’s robust features like state management, telemetry, and reliability. The goal is to give developers one unified framework for building AI agents, from quick prototypes all the way to production-grade, multi-agent applications.

Why do we need it? As more developers embed AI into apps (and as apps get agentic – where AIs take autonomous actions), it can get pretty complex managing everything. You might have to juggle calling various AI models, connecting to external tools/APIs, keeping track of conversation context, handling long processes, etc. Traditionally, you’d stitch together a lot of custom code or use separate libraries. The Agent Framework simplifies this by providing a cohesive set of APIs to handle agents, tools, workflows, memory, and more. Microsoft also built it to integrate nicely with Azure AI Foundry (their cloud platform for AI agents) so you can develop locally and then deploy to Azure with enterprise features like observability and compliance.

Key capabilities at a glance: With Microsoft Agent Framework, developers can…

  • Use any AI model provider – plug in Azure OpenAI, OpenAI, local models, or even community-hosted models (e.g. via GitHub or Ollama) as the brains of your agent.
  • Integrate tools and APIs easily – Agents can use external tools (like web search, databases, or Hugging Face pipelines) thanks to an open standard called Model Context Protocol (MCP). You can also bring in any API by exposing it via OpenAPI specs.
  • Have agents talk to agents – The framework supports Agent2Agent (A2A) communication, meaning an agent running in a Python process could call a C# agent or vice versa. This opens the door to cross-language agent collaboration.
  • Orchestrate complex workflows – You can chain multiple agents (and functions) into workflows that have conditional logic, parallel steps, human-in-the-loop pauses, etc. Think of workflows as a way to script multi-step processes or dialogues among agents in a controlled way.
  • Keep state and memory – Every agent can maintain conversation state using agent threads (so it can remember context from previous interactions). You can even persist this state (e.g. save the conversation history to reload later) to make agents that remember past sessions.
  • Observe and debug easily – The framework has built-in telemetry via OpenTelemetry and a fancy DevUI dashboard. This lets you trace what each agent is doing, measure performance, and even visualize the agent’s chain-of-thought in real time (very cool for debugging!).
  • Stay in control – Because it’s open source, you can run it anywhere, and there are hooks for governance (tool usage filters, content safety, etc.). And when deploying to Azure AI Foundry, you get enterprise features like persistent agents (agents as a cloud resource you can manage) and unified logging across all your agents.

In short, Microsoft Agent Framework is trying to make AI agent development developer-friendly and scalable. Experiment on your local machine, then deploy to the cloud when you’re ready, without rewriting everything.

(For all the official details, check out the Microsoft Learn documentation here: https://learn.microsoft.com/agent-framework/overview/agent-framework-overview .)

Agents and Workflows – The Building Blocks

The framework primarily revolves around AI Agents and Workflows:

  • AI Agents: An AI agent is an autonomous component that uses an LLM (large language model) to handle a task or conversation. You give an agent some instructions (e.g. a system prompt like “You are a travel assistant” or some skills it should have) and then you can feed it user queries. The agent will decide how to respond – possibly by calling tools or web services, invoking functions, or just answering with the model. Under the hood, an agent in this framework can have its own thread (conversation state) to remember context, a context provider for long-term memory or embeddings, and middleware to intercept its actions (for logging or modifying behavior). But at a basic level, you can think of an agent as “an AI-powered entity that takes an input, thinks/uses tools, and returns an answer.” For example, you might have an agent that is a math tutor: it receives a student’s question, maybe uses a calculator tool via MCP, and then returns an explanation.
  • Workflows: A workflow in Agent Framework is like a script or graph that connects multiple agents (and possibly deterministic functions) to accomplish a multi-step process. Instead of one big agent doing everything, you can have specialized agents for different sub-tasks and orchestrate them. Workflows support branching logic, loops, parallel execution, and even pausing for human input. You explicitly define how data flows between steps, which makes complex tasks more reliable. For instance, you could define a workflow for handling a customer support issue: first agent tries to answer from FAQ, if not confident then hand off to a second agent that does a database lookup, and finally maybe escalate to a human if needed. The framework provides an AgentWorkflowBuilder API that lets you easily chain agents sequentially or in other patterns (like Magentic pattern for multi-agent brainstorming). The nice thing is you can treat a whole workflow as an agent itself – meaning once you define a workflow of many agents, you get a single combined agent object that you can call with a user query, and it will internally run through the workflow steps.

Together, agents and workflows let you tackle simple and complex scenarios. Use a single agent for open-ended tasks, or multiple coordinated agents for jobs that benefit from specialization and structured flow.

(Fun fact: The Agent Framework is designed to handle scenarios where a single agent might struggle – for example, something needing more than 20 tool calls or lots of decisions. In those cases, breaking it into a workflow of multiple agents can be more effective.)

Quick Example: Defining and Running an Agent in C#

Enough talk – let’s see a bit of code! 🧑‍💻 Suppose you want to create a simple agent in .NET that can use some external tools. The Agent Framework is available as a NuGet package, so first you’d install it (e.g. dotnet add package Microsoft.Agents.AI – plus any specific extensions you need). For this example, let’s say we have access to some tools from Hugging Face’s MCP (Model Context Protocol) server – perhaps an image generator tool. We’ll create an agent that can generate a fun image based on a prompt. Here’s how you could set it up:

// Create a chat client for an LLM (Azure OpenAI, GitHub Models, local, etc.)
var chatClient = ChatClientProvider.GetChatClient();

// (Optional) Connect to Hugging Face MCP to get a set of tools 
var (mcpClient, tools) = await HuggingFaceMCP.GetHuggingFaceMCPClientAndToolsAsync();

// Build an AI agent with a name, instructions, and attach the tools we got
// Build an AI agent with a name, instructions, and attach the tools we got
var imageAgent = chatClient.CreateAIAgent(
    name: "Image Generator",
    instructions: "Use Hugging Face tools to create pixelated images.",
    tools: tools);

// Run the agent on a user query
string userQuery = "create an image of a raccoon in Canada";
var result = await imageAgent.RunAsync(userQuery);
Console.WriteLine(result.Text);

// Cleanup
await mcpClient.DisposeAsync();

In the code above, we first initialize a chatClient (this represents our connection to whatever model backend we choose – it could be Azure OpenAI with a GPT-4 model, or a local model, depending on your configuration). Then we retrieve some tools from a Hugging Face MCP server (for example, one tool might allow text-to-image generation). Using the Agent Framework, we create a new AI agent via CreateAIAgent, giving it a friendly name, some instructions, and plugging in the list of tools it can use. Finally, we call RunAsync with a prompt, and the agent returns a response. In this scenario, the agent will likely call the image generation tool behind the scenes, then return perhaps a URL or description of the generated image.

What’s neat is how little code was required to stand up an agent with a custom capability. The framework handles hooking the tool into the agent’s reasoning process. Also, note that we could easily swap out which model the chatClient uses (e.g., use a local Ollama instance or Azure OpenAI, just by how we configure ChatClientProvider), without changing the agent logic.
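To make that swap concrete, here is one hypothetical way a `ChatClientProvider` helper like the one above might look. This is my own sketch (not the official sample's code), and it assumes the Microsoft.Extensions.AI abstractions plus the Ollama client package for a local model; the endpoint and model name are illustrative:

```csharp
using System;
using Microsoft.Extensions.AI;

// Hypothetical helper: centralizes which model backend the app talks to.
// Because it returns the IChatClient abstraction, swapping backends
// (local Ollama, Azure OpenAI, etc.) never touches the agent code.
public static class ChatClientProvider
{
    public static IChatClient GetChatClient()
    {
        // Local model served by Ollama; replace this line with an Azure
        // OpenAI or OpenAI client adapted to IChatClient to target the cloud.
        return new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.2");
    }
}
```

The agent-creation code from before stays byte-for-byte identical whichever backend this helper returns.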

Orchestrating Multiple Agents

The true power of the framework shines when you start composing agents. For example, let’s say you have three agents: agentA, agentB, and agentC, each responsible for a different step of a task (perhaps Researcher, Writer, and Reviewer agents for generating a report, as shown in one of the demo blogs). You can chain them into a workflow easily:

// Orchestrate agents sequentially: A -> B -> C
Workflow workflow = AgentWorkflowBuilder.BuildSequential(agentA, agentB, agentC);
AIAgent orchestratorAgent = await workflow.AsAgentAsync();

// Now run the combined workflow-agent on an input
var finalOutcome = await orchestratorAgent.RunAsync("Some initial prompt or question");

In a sequential workflow like the above, agentA will process the input first and pass its output to agentB, then agentC, and so on. We wrap the workflow into an AIAgent (orchestratorAgent) using AsAgentAsync(), so that from the outside you can treat the whole chain as if it were a single agent. The final result you get (finalOutcome) is produced after all three have done their part. The Agent Framework supports other orchestration patterns too (parallel execution, conditional branches, even agent hand-offs), but sequential is the simplest to grasp.

Tip: The framework automatically takes care of threading conversation context through the workflow. And you can monitor each step with telemetry – e.g., see how long each agent took, what model it used, etc., if you’ve enabled OpenTelemetry logging.
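Wiring up that telemetry is mostly a matter of wrapping the chat client before you build agents from it. Here's a minimal sketch, assuming the Microsoft.Extensions.AI pipeline; "AgentDemo" is an arbitrary source name I picked, and the OpenTelemetry exporters themselves (console, OTLP, etc.) are configured separately with the OpenTelemetry SDK:

```csharp
using Microsoft.Extensions.AI;

// Base client from the earlier sample helper
IChatClient baseClient = ChatClientProvider.GetChatClient();

// Wrap the client so every model call emits OpenTelemetry activities
IChatClient instrumented = new ChatClientBuilder(baseClient)
    .UseOpenTelemetry(sourceName: "AgentDemo")
    .Build();

// Any agent built from the instrumented client is traced end to end,
// including each step it runs inside a workflow
var tracedAgent = instrumented.CreateAIAgent(
    name: "Traced Agent",
    instructions: "You are a helpful assistant.");
```

Once an exporter is registered for the "AgentDemo" source, each agent invocation shows up as spans in your tracing backend of choice.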

Cool Features to Explore

Microsoft Agent Framework comes with a bunch of interesting features and integrations. Here are a few that new users find exciting:

  • Persistent Conversations: Agents can remember what was said before. The framework allows you to serialize an agent’s AgentThread (its conversation history) to disk or a database, and load it back later. This means you can build a chat bot that doesn’t forget everything when you restart the app or between user sessions. (If you’ve ever been annoyed by an AI that forgets context, this is a big deal!) The official samples include demos of saving threads to a file or using a ChatMessageStore (e.g., backed by Cosmos DB or Blob Storage) for persistent memory.
  • Background Responses: This is a lifesaver for long-running answers. Suppose an agent is generating a lengthy response (say summarizing a 100-page report) and it gets interrupted (network drop, user navigates away, etc.). With background response mode, the agent can resume from where it left off using a continuation token – no need to start over. Essentially, the framework can checkpoint the generation mid-way. You can even intentionally pause an answer, handle another quick user question, and then resume the original answer. This feature is great for keeping your AI agent’s train of thought, and it only takes one line of code to enable (AllowBackgroundResponses = true in the run options).
  • DevUI (Developer UI): When running a .NET application with agents, you can enable a special debug dashboard in your browser. DevUI shows a live visualization of agents and workflows: you can see each message exchanged, the internal reasoning steps (in a safe, readable form), and the sequence of tool calls or function calls. It’s like an inspector for the agent’s brain 🕵️‍♂️. This dramatically helps in understanding what your AI agents are doing under the hood. To use it, you add the Microsoft.Agents.AI.DevUI package and map the DevUI in your app, then navigate to the /devui endpoint while the app is running. If you’re building serious multi-agent apps, DevUI can speed up your development and troubleshooting.
  • Azure AI Foundry Integration: While you can use Agent Framework entirely locally, it’s built to integrate with Azure’s Foundry service for cloud deployment. Azure AI Foundry provides a managed environment for agents – including a Foundry Agent Service that can host your agents/workflows with durability and at scale. One feature there is Persistent Agents: you can create an agent in the cloud via code or the Azure portal, and it lives as a resource you can reuse and share. (For example, a team could have a persistent “ResearcherAgent” deployed that everyone’s apps can invoke.) The framework SDK has hooks to create and manage these cloud agents. So you could prototype an agent locally, then create a persistent version in Azure with a single SDK call, gaining features like conversation logs, monitoring, and team collaboration. This is great for enterprise scenarios where multiple apps or users need to invoke the same AI agent and you want oversight of it in Azure.
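The persistent-conversations flow from the first bullet can be sketched in a few lines. This is based on the preview API surface shown in the official samples (`GetNewThread`, `AgentThread.Serialize`, `AIAgent.DeserializeThread`); names may shift while the framework is in preview, and `imageAgent` here is the agent from the earlier example:

```csharp
using System.IO;
using System.Text.Json;
using Microsoft.Agents.AI;

// Start a conversation on a dedicated thread so the agent keeps context
AgentThread thread = imageAgent.GetNewThread();
await imageAgent.RunAsync("Hi! I'm planning a trip to Canada.", thread);

// Persist the thread (conversation history) as JSON, e.g. to disk
JsonElement state = thread.Serialize();
await File.WriteAllTextAsync("thread.json", state.GetRawText());

// Later (even after an app restart), restore the thread and continue
JsonElement saved = JsonDocument.Parse(
    await File.ReadAllTextAsync("thread.json")).RootElement;
AgentThread restored = imageAgent.DeserializeThread(saved);
var reply = await imageAgent.RunAsync("Remind me: where was I going?", restored);
```

The same serialize/deserialize round trip is what the `ChatMessageStore` samples build on when they swap the file for Cosmos DB or Blob Storage.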

There’s a lot more (like support for OpenTelemetry tracing across agents, interoperability with other frameworks like LangChain via OpenTelemetry, etc.), but the above are some highlights that demonstrate the framework’s potential to build powerful AI-driven apps with relative ease.

Getting Started and Next Steps

If you’re excited to try the Microsoft Agent Framework, here are some resources and links to get you going:

  • Persisting Threads in C# – Microsoft Agent Framework in Action (video)

  • DevUI for .NET: Debugging Agents Like Magic (video)

Plus, the .NET community has covered the Agent Framework in events like the “.NET AI Community Standup”. If you prefer an official intro, you can find the Oct 2025 standup video featuring the Agent Framework as well.


Wrapping up: The Microsoft Agent Framework is an exciting development for us as developers because it abstracts a lot of the heavy lifting in making advanced AI agents. Whether you just want a smart chatbot that can use a couple of tools, or you’re orchestrating a fleet of AI workers solving complex tasks, this framework provides a consistent, extensible approach to do it in C# (and Python). It’s still in preview, so expect rapid improvements and maybe some changes, but it’s a great time to experiment and give feedback. I’ve found it pretty straightforward to get started – and quite fun to watch multiple AI agents collaborate within my applications!

Give it a try, check out the docs and examples, and happy coding with your new AI agents. 🤖🚀

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info in https://beacons.ai/elbruno

