Azure Container Apps AI
MCP server implementing an AI agent with TODO list tools via HTTP and SSE protocols
This project showcases how to use the MCP protocol with OpenAI, Azure OpenAI and GitHub Models. It provides a simple demo terminal application that interacts with a TODO list Agent. The agent has access to a set of tools provided by the MCP server.
The current implementation consists of three main components:
```mermaid
flowchart TD
    user(("fa:fa-users User"))
    host["VS Code, Copilot, LlamaIndex, Langchain..."]
    client[MCP SSE Client]
    clientHttp[MCP HTTP Client]
    server([MCP SSE Server])
    serverHttp([MCP HTTP Server])
    agent[Agent]
    AzureOpenAI([Azure OpenAI])
    GitHub([GitHub Models])
    OpenAI([OpenAI])
    tools["fa:fa-wrench Tools"]
    db[(DocumentDB Local)]

    user --> hostGroup
    subgraph hostGroup["MCP Host"]
        host -.- client & clientHttp & agent
    end
    agent -.- AzureOpenAI & GitHub & OpenAI
    client a@ ---> |"Server Sent Events"| server
    clientHttp aa@ ---> |"Streamable HTTP"| serverHttp
    subgraph container["ACA Container (*)"]
        server -.- tools
        serverHttp -.- tools
        tools -.- add_todo
        tools -.- list_todos
        tools -.- complete_todo
        tools -.- delete_todo
    end
    add_todo b@ --> db
    list_todos c@ --> db
    complete_todo d@ --> db
    delete_todo e@ --> db

    %% styles
    classDef animate stroke-dasharray: 9,5,stroke-dashoffset: 900,animation: dash 25s linear infinite;
    classDef highlight fill:#9B77E8,color:#fff,stroke:#5EB4D8,stroke-width:2px
    class a animate
    class aa animate
    class b animate
    class c animate
    class d animate
    class e animate
    class container highlight
```
This demo application provides two MCP server implementations: one using HTTP and the other using SSE (Server-Sent Events). The MCP host can connect to both servers, allowing you to choose the one that best fits your needs.
| Feature | Completed |
|---|---|
| SSE (legacy) | ✅ |
| HTTP Streaming | ✅ |
| AuthN (token based) | wip |
| Tools | ✅ |
| Resources | #3 |
| Prompts | #4 |
| Sampling | #5 |
To get started with this project using Docker, follow the steps below:
```shell
git clone https://github.com/Azure-Samples/azure-container-apps-ai-mcp.git
cd azure-container-apps-ai-mcp
docker-compose up
```
To get started with this project, follow the steps below:
```shell
npm install --prefix mcp-host
npm install --prefix mcp-server-http
npm install --prefix mcp-server-sse
```
This sample supports the following LLM providers:
| Provider | Supported API |
|---|---|
| Azure OpenAI | Responses API |
| OpenAI | Responses API |
| GitHub Models | ChatCompletion API |
> [!NOTE]
> Accessing Azure OpenAI using Managed Identity is not supported when running in a Docker container (locally). You can either run the code locally without Docker or use a different authentication method, such as AZURE_OPENAI_API_KEY key authentication.
To use keyless authentication with Azure Managed Identity, provide the AZURE_OPENAI_ENDPOINT environment variable in the .env file:
```shell
AZURE_OPENAI_ENDPOINT="https://<ai-foundry-openai-project>.openai.azure.com"
MODEL="gpt-4.1"
# (optional) Set the Azure OpenAI API key if you are not using Managed Identity
# AZURE_OPENAI_API_KEY=your_azure_openai_api_key
```
Then log in to your Azure account with the Azure CLI and follow the instructions to select your subscription:
```shell
az login
```
To use the OpenAI API, you need to set your OPENAI_API_KEY key in the .env file:
```shell
OPENAI_API_KEY=your_openai_api_key
MODEL="gpt-5"
```
To use the GitHub models, you need to set your GITHUB_TOKEN in the .env file:
```shell
GITHUB_TOKEN=your_github_token
MODEL="openai/gpt-5"
```
This project includes a DevContainer configuration that allows you to run the MCP servers in a containerized environment. This is the recommended way to run the MCP servers, as it ensures that all dependencies are installed and configured correctly.
Once you have opened the project in a DevContainer, you can run the MCP servers by following the steps in the Docker section below.
You can run both MCP servers in Docker containers using the provided Docker Compose file. This is useful for testing and development purposes. To do this, follow these steps:
Run docker compose in your terminal to check that Docker Compose is installed, then build and start the containers:

```shell
docker compose up -d --build
```
This command will build and start the HTTP and SSE MCP servers, as well as the DocumentDB database container.
```shell
docker exec -it mcp-host bash
```
```shell
npm start --prefix mcp-server-http
npm start --prefix mcp-server-sse
```
> [!NOTE]
> For demo purposes, the MCP host (see below) is configured to connect to both servers (on ports 3000 and 3001). However, this is not a requirement, and you can choose which server to use. If a server is not available, the host will print an error and continue to scan for other servers. If no server is available, no tools will be available to the agent.
```shell
npm start --prefix mcp-host
```
You should now be able to use the MCP host to interact with the LLM agent. Try asking questions about adding or listing items in a shopping list. The host will then try to fetch and call tools from the MCP servers.
You can use the DEBUG environment variable to enable verbose logging for the MCP host:
```shell
DEBUG=mcp:* npm start --prefix mcp-host
```
Debugging is enabled by default for both MCP servers.
This project is licensed under the MIT License. See the LICENSE file for details.