# Graphiti
Neo4j-powered knowledge graph server for AI agents with MCP integration and OpenAI-based entity extraction
🌟 A powerful knowledge graph server for AI agents, built with Neo4j and integrated with Model Context Protocol (MCP).
## Quick Start

Clone the repository and enter it:

```bash
git clone https://github.com/gifflet/graphiti-mcp-server.git
cd graphiti-mcp-server
```

Copy the sample environment file:

```bash
cp .env.sample .env
```

Edit `.env` with your configuration:

```bash
# Required for LLM operations
OPENAI_API_KEY=your_openai_api_key_here
MODEL_NAME=gpt-4.1-mini

# Optional: Custom OpenAI endpoint (e.g., for proxies)
# OPENAI_BASE_URL=https://api.openai.com/v1

# Neo4j Configuration (defaults work with Docker)
NEO4J_URI=bolt://neo4j:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=demodemo
```
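The entries above are plain `KEY=VALUE` pairs. As a minimal sketch of how such lines map to a configuration dictionary (illustrative only — the server uses its own environment loading, and `parse_env_line` is a hypothetical helper):

```python
def parse_env_line(line):
    """Parse one KEY=VALUE line from a .env file.

    Returns None for blank lines and comments (hypothetical helper,
    not the server's actual loader).
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

sample = [
    "# Required for LLM operations",
    "OPENAI_API_KEY=your_openai_api_key_here",
    "MODEL_NAME=gpt-4.1-mini",
    "",
    "NEO4J_URI=bolt://neo4j:7687",
]

# Comments and blank lines parse to None and are filtered out
env = dict(p for p in map(parse_env_line, sample) if p)
print(env["MODEL_NAME"])
```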
Start the services:

```bash
docker compose up -d
```

Verify everything is running:

```bash
# Check if services are running
docker compose ps

# Check logs
docker compose logs graphiti-mcp
```
Alternatively, you can pass environment variables directly:

```bash
OPENAI_API_KEY=your_key MODEL_NAME=gpt-4.1-mini docker compose up
```
## Services

| Service | Port | Purpose |
|---|---|---|
| Neo4j Browser | 7474 | Web interface for graph visualization |
| Neo4j Bolt | 7687 | Database connection |
| Graphiti MCP | 8000 | MCP server endpoint |
## Configuration

### LLM Configuration

| Variable | Required | Default | Description |
|---|---|---|---|
| OPENAI_API_KEY | ✅ | - | Your OpenAI API key |
| OPENAI_BASE_URL | ❌ | - | Custom OpenAI API endpoint (consumed by the OpenAI SDK) |
| MODEL_NAME | ❌ | gpt-4.1-mini | Main LLM model to use |
| SMALL_MODEL_NAME | ❌ | gpt-4.1-nano | Small LLM model for lighter tasks |
| LLM_TEMPERATURE | ❌ | 0.0 | LLM temperature (0.0–2.0) |
| EMBEDDER_MODEL_NAME | ❌ | text-embedding-3-small | Embedding model |
### Neo4j Configuration

| Variable | Required | Default | Description |
|---|---|---|---|
| NEO4J_URI | ❌ | bolt://neo4j:7687 | Neo4j connection URI |
| NEO4J_USER | ❌ | neo4j | Neo4j username |
| NEO4J_PASSWORD | ❌ | demodemo | Neo4j password |
### Server Configuration

| Variable | Required | Default | Description |
|---|---|---|---|
| MCP_SERVER_HOST | ❌ | - | MCP server host binding |
| SEMAPHORE_LIMIT | ❌ | 10 | Concurrent operation limit for LLM calls |
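SEMAPHORE_LIMIT caps how many LLM calls run at once. The sketch below shows the standard `asyncio.Semaphore` pattern behind such a limit — it is an illustration of the technique, not the server's actual code, and `call_llm` is a hypothetical stand-in for a real API call:

```python
import asyncio

SEMAPHORE_LIMIT = 3  # illustrative value; the server defaults to 10
peak = 0             # highest number of calls observed running at once
active = 0

async def call_llm(i, sem):
    """Hypothetical LLM call, gated by the semaphore."""
    global peak, active
    async with sem:  # blocks while SEMAPHORE_LIMIT calls are in flight
        active += 1
        peak = max(peak, active)
        await asyncio.sleep(0.01)  # stand-in for the actual API request
        active -= 1
        return i

async def main():
    sem = asyncio.Semaphore(SEMAPHORE_LIMIT)
    return await asyncio.gather(*(call_llm(i, sem) for i in range(10)))

results = asyncio.run(main())
print("peak concurrency:", peak)
```

Lowering the limit trades throughput for fewer rate-limit errors; raising it does the opposite.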
### Azure OpenAI

For Azure OpenAI deployments, use these environment variables instead of the standard OpenAI configuration:
| Variable | Required | Default | Description |
|---|---|---|---|
| AZURE_OPENAI_ENDPOINT | ✅* | - | Azure OpenAI endpoint URL |
| AZURE_OPENAI_API_VERSION | ✅* | - | Azure OpenAI API version |
| AZURE_OPENAI_DEPLOYMENT_NAME | ✅* | - | Azure OpenAI deployment name |
| AZURE_OPENAI_USE_MANAGED_IDENTITY | ❌ | false | Use Azure managed identity for auth |
| AZURE_OPENAI_EMBEDDING_ENDPOINT | ❌ | - | Separate endpoint for embeddings |
| AZURE_OPENAI_EMBEDDING_API_VERSION | ❌ | - | API version for embeddings |
| AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME | ❌ | - | Deployment name for embeddings |
| AZURE_OPENAI_EMBEDDING_API_KEY | ❌ | - | Separate API key for embeddings |
\* Required when using Azure OpenAI
Notes:

- OPENAI_BASE_URL is consumed directly by the OpenAI Python SDK, which is useful for proxy configurations or custom endpoints.
- SEMAPHORE_LIMIT controls the number of concurrent LLM API calls: decrease it if you encounter rate limits, increase it for higher throughput.
- Default Neo4j configuration: user neo4j, password demodemo, URI bolt://neo4j:7687 (within the Docker network).
For Azure OpenAI:

```bash
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com \
AZURE_OPENAI_API_VERSION=2024-02-01 \
AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment \
OPENAI_API_KEY=your_key \
docker compose up
```
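The selection logic implied by the tables above — Azure settings take over when AZURE_OPENAI_ENDPOINT is set, otherwise the standard OpenAI variables apply — can be sketched as follows. This is a hypothetical resolver for illustration; `resolve_llm_config` and the returned keys are not part of the repository:

```python
def resolve_llm_config(env):
    """Pick Azure settings when AZURE_OPENAI_ENDPOINT is present,
    otherwise fall back to standard OpenAI variables (illustrative)."""
    if env.get("AZURE_OPENAI_ENDPOINT"):
        return {
            "provider": "azure",
            "endpoint": env["AZURE_OPENAI_ENDPOINT"],
            "api_version": env.get("AZURE_OPENAI_API_VERSION"),
            "deployment": env.get("AZURE_OPENAI_DEPLOYMENT_NAME"),
        }
    return {
        "provider": "openai",
        "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    }

azure = resolve_llm_config({"AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com"})
plain = resolve_llm_config({"OPENAI_API_KEY": "your_key"})
print(azure["provider"], plain["provider"])
```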
## MCP Client Configuration

For stdio transport:

```json
{
  "mcpServers": {
    "Graphiti": {
      "command": "uv",
      "args": ["run", "graphiti_mcp_server.py"],
      "env": {
        "OPENAI_API_KEY": "your_key_here"
      }
    }
  }
}
```
For SSE transport:

```json
{
  "mcpServers": {
    "Graphiti": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```
Cursor rules file: graphiti_cursor_rules.mdc

The server supports standard MCP transports:

- SSE: http://localhost:8000/sse
- WebSocket: ws://localhost:8000/ws

## Development Setup

Install dependencies:

```bash
# Using uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync

# Or using pip
pip install -r requirements.txt
```
Start a local Neo4j instance:

```bash
docker run -d \
  --name neo4j-dev \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/demodemo \
  neo4j:5.26.0
```
Run the server:

```bash
# Set environment variables
export OPENAI_API_KEY=your_key
export NEO4J_URI=bolt://localhost:7687

# Run with stdio transport
uv run graphiti_mcp_server.py

# Or with SSE transport
uv run graphiti_mcp_server.py --transport sse --use-custom-entities
```
## Testing

```bash
# Run basic connectivity test
curl http://localhost:8000/health

# Test MCP endpoint
curl http://localhost:8000/sse
```
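Right after `docker compose up`, the endpoint may not be ready yet, so a one-shot curl can fail spuriously; polling with a timeout is more robust. A hypothetical Python helper (the `wait_for` function is not part of the repo; the demo spins up a throwaway local HTTP server as a stand-in for the real endpoint):

```python
import http.server
import threading
import time
import urllib.request

def wait_for(url, timeout=5.0, interval=0.25):
    """Poll `url` until it responds or `timeout` elapses.

    Returns the HTTP status code, or None on timeout (illustrative helper).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                return resp.status
        except OSError:
            time.sleep(interval)
    return None

# Demo against a throwaway local server (port 0 = pick a free port)
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

status = wait_for(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
print(status)
```

Against a real deployment you would point `wait_for` at `http://localhost:8000/health` instead.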
## Troubleshooting

```bash
# Clean up and restart
docker compose down -v
docker compose up --build

# Check disk space
docker system df
```

```bash
# View all logs
docker compose logs -f

# View specific service logs
docker compose logs -f graphiti-mcp
docker compose logs -f neo4j

# Enable debug logging (docker compose up has no -e flag;
# set the variable in the shell environment instead)
LOG_LEVEL=DEBUG docker compose up
```
## Architecture

Services are defined in docker-compose.yml.

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   MCP Client    │    │   Graphiti MCP   │    │      Neo4j      │
│    (Cursor)     │◄──►│      Server      │◄──►│    Database     │
│                 │    │   (Port 8000)    │    │   (Port 7687)   │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌──────────────────┐
                       │    OpenAI API    │
                       │   (LLM Client)   │
                       └──────────────────┘
```
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
Need help? Open an issue or check our troubleshooting guide above.