# Perplexica MCP Server
A Model Context Protocol (MCP) server that provides search functionality using Perplexica's AI-powered search engine.
Important: If you are using Claude Code for development, this project requires the use of the container-use MCP server for all development operations. All file operations, code changes, and shell commands must be executed within container-use environments.
When contributing to this project using Claude Code, you must:
Use `container-use log <env_id>` to view the development log and `container-use checkout <env_id>` to check out your environment:

```bash
# Create a new environment for your work
container-use create --title "Your feature description"

# Make your changes using container-use tools
# (All file operations handled by container-use)

# Share your work with others
container-use log <your-env-id>
container-use checkout <your-env-id>
```
This ensures consistency, reproducibility, and proper version control for all development activities when using Claude Code.
If you are not using Claude Code, you can develop normally using your preferred tools and IDE. The container-use requirement does not apply to regular development workflows.
```bash
# Install directly from PyPI
pip install perplexica-mcp

# Or using uvx for isolated execution
uvx perplexica-mcp --help
```
```bash
# Clone the repository
git clone https://github.com/thetom42/perplexica-mcp.git
cd perplexica-mcp

# Install dependencies
uv sync
```
To use this server with MCP clients, you need to configure the client to connect to the Perplexica MCP server. Below are configuration examples for popular MCP clients.
Important: All transport modes require proper environment variable configuration, especially:
- `PERPLEXICA_BACKEND_URL`: URL to your Perplexica backend API
- `PERPLEXICA_CHAT_MODEL_PROVIDER` and `PERPLEXICA_CHAT_MODEL_NAME`: Chat model configuration
- `PERPLEXICA_EMBEDDING_MODEL_PROVIDER` and `PERPLEXICA_EMBEDDING_MODEL_NAME`: Embedding model configuration

These variables must be set either in your environment or provided in the MCP client configuration.
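A quick way to sanity-check this configuration before launching a client is a small validation helper. This is a hypothetical sketch for illustration only (it is not part of the package); it encodes the rule that a model provider and model name must be set as a pair:

```python
REQUIRED = "PERPLEXICA_BACKEND_URL"

def check_config(env):
    """Return a list of configuration problems (empty list = OK).

    `env` is any mapping of variable names to values, e.g. os.environ.
    Illustrative helper; the real server may validate differently.
    """
    problems = []
    if not env.get(REQUIRED):
        problems.append(f"{REQUIRED} is not set")
    # Provider and name must be set together for each model pair.
    for kind in ("CHAT", "EMBEDDING"):
        provider = env.get(f"PERPLEXICA_{kind}_MODEL_PROVIDER")
        name = env.get(f"PERPLEXICA_{kind}_MODEL_NAME")
        if bool(provider) != bool(name):
            problems.append(
                f"PERPLEXICA_{kind}_MODEL_PROVIDER and "
                f"PERPLEXICA_{kind}_MODEL_NAME must be set together"
            )
    return problems
```

Running `check_config(os.environ)` before starting a client surfaces incomplete model pairs early instead of failing at search time.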
Add the following to your Claude Desktop configuration file:
Location: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows)
```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uvx",
      "args": ["perplexica-mcp", "stdio"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search",
        "PERPLEXICA_CHAT_MODEL_PROVIDER": "openai",
        "PERPLEXICA_CHAT_MODEL_NAME": "gpt-4o-mini",
        "PERPLEXICA_EMBEDDING_MODEL_PROVIDER": "openai",
        "PERPLEXICA_EMBEDDING_MODEL_NAME": "text-embedding-3-small"
      }
    }
  }
}
```
Alternative (from source):
```json
{
  "mcpServers": {
    "perplexica": {
      "command": "uv",
      "args": ["run", "python", "-m", "perplexica_mcp", "stdio"],
      "cwd": "/path/to/perplexica-mcp",
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search",
        "PERPLEXICA_CHAT_MODEL_PROVIDER": "openai",
        "PERPLEXICA_CHAT_MODEL_NAME": "gpt-4o-mini",
        "PERPLEXICA_EMBEDDING_MODEL_PROVIDER": "openai",
        "PERPLEXICA_EMBEDDING_MODEL_NAME": "text-embedding-3-small"
      }
    }
  }
}
```
Note: When running from source, ensure all required environment variables are set. The stdio transport requires proper model provider and model name configuration to communicate with the Perplexica backend.
#### SSE Transport
For SSE transport, first start the server:
```bash
uv run src/perplexica_mcp/server.py sse
```

Then configure Claude Desktop:
```json
{
  "mcpServers": {
    "perplexica": {
      "url": "http://localhost:3001/sse"
    }
  }
}
```
Add to your Cursor MCP configuration:
```json
{
  "servers": {
    "perplexica": {
      "command": "uvx",
      "args": ["perplexica-mcp", "stdio"],
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search",
        "PERPLEXICA_CHAT_MODEL_PROVIDER": "openai",
        "PERPLEXICA_CHAT_MODEL_NAME": "gpt-4o-mini",
        "PERPLEXICA_EMBEDDING_MODEL_PROVIDER": "openai",
        "PERPLEXICA_EMBEDDING_MODEL_NAME": "text-embedding-3-small"
      }
    }
  }
}
```
Alternative (from source):
```json
{
  "servers": {
    "perplexica": {
      "command": "uv",
      "args": ["run", "python", "-m", "perplexica_mcp", "stdio"],
      "cwd": "/path/to/perplexica-mcp",
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search",
        "PERPLEXICA_CHAT_MODEL_PROVIDER": "openai",
        "PERPLEXICA_CHAT_MODEL_NAME": "gpt-4o-mini",
        "PERPLEXICA_EMBEDDING_MODEL_PROVIDER": "openai",
        "PERPLEXICA_EMBEDDING_MODEL_NAME": "text-embedding-3-small"
      }
    }
  }
}
```
Add to your VS Code MCP configuration file (.vscode/mcp.json):
```json
{
  "servers": {
    "perplexica": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "python", "-m", "perplexica_mcp", "stdio"],
      "cwd": "/path/to/perplexica-mcp",
      "env": {
        "PERPLEXICA_BACKEND_URL": "http://localhost:3000/api/search",
        "PERPLEXICA_CHAT_MODEL_PROVIDER": "openai",
        "PERPLEXICA_CHAT_MODEL_NAME": "gpt-4o-mini",
        "PERPLEXICA_EMBEDDING_MODEL_PROVIDER": "openai",
        "PERPLEXICA_EMBEDDING_MODEL_NAME": "text-embedding-3-small"
      }
    }
  }
}
```
For any MCP client supporting stdio transport:
```bash
# Command to run the server (PyPI installation)
uvx perplexica-mcp stdio

# Command to run the server with .env file (PyPI installation)
uvx --env-file .env perplexica-mcp stdio

# Command to run the server (from source)
uv run python -m perplexica_mcp stdio

# Environment variables (can be exported or set inline)
export PERPLEXICA_BACKEND_URL=http://localhost:3000/api/search
export PERPLEXICA_CHAT_MODEL_PROVIDER=openai
export PERPLEXICA_CHAT_MODEL_NAME=gpt-4o-mini
export PERPLEXICA_EMBEDDING_MODEL_PROVIDER=openai
export PERPLEXICA_EMBEDDING_MODEL_NAME=text-embedding-3-small

# Or set inline for single execution (all required vars)
PERPLEXICA_BACKEND_URL=http://localhost:3000/api/search \
PERPLEXICA_CHAT_MODEL_PROVIDER=openai \
PERPLEXICA_CHAT_MODEL_NAME=gpt-4o-mini \
PERPLEXICA_EMBEDDING_MODEL_PROVIDER=openai \
PERPLEXICA_EMBEDDING_MODEL_NAME=text-embedding-3-small \
uvx perplexica-mcp stdio
```
For HTTP/SSE transport clients:
```bash
# Start the server (PyPI installation)
uvx perplexica-mcp sse  # or 'http'

# Start the server (from source)
uv run /path/to/perplexica-mcp/src/perplexica_mcp/server.py sse  # or 'http'

# Connect to endpoints
# SSE:  http://localhost:3001/sse
# HTTP: http://localhost:3002/mcp/
```
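Clients need to derive the right URL from the transport mode. As a hypothetical illustration (this helper is not part of the package), the endpoint scheme above can be captured in a few lines:

```python
def endpoint_for(transport, host="localhost", port=None):
    """Build the URL an MCP client should connect to.

    Ports and paths mirror the documented defaults: SSE on 3001 at /sse,
    Streamable HTTP on 3002 at /mcp/ (note the trailing slash).
    """
    defaults = {"sse": (3001, "/sse"), "http": (3002, "/mcp/")}
    if transport not in defaults:
        raise ValueError(f"unsupported transport: {transport}")
    default_port, path = defaults[transport]
    return f"http://{host}:{port or default_port}{path}"
```

For example, `endpoint_for("http", "example.com", 8080)` yields `http://example.com:8080/mcp/`.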
Notes:

- Replace `/path/to/perplexica-mcp/` with the actual path to your installation
- Ensure `PERPLEXICA_BACKEND_URL` points to your running Perplexica instance
- Ensure `uvx` is installed and available in your PATH (or `uv` for source installations)

If the server fails to start, verify that:

- `uvx` (or `uv` for source) is installed and the path is correct
- `PERPLEXICA_BACKEND_URL` is properly set

Create a `.env` file in the project root with your Perplexica configuration:
```bash
# Perplexica Backend Configuration
PERPLEXICA_BACKEND_URL=http://localhost:3000/api/search

# Default Model Configuration (Optional)
# If set, these models will be used as defaults when no model is
# specified in the search request

# Chat Model Configuration
PERPLEXICA_CHAT_MODEL_PROVIDER=openai
PERPLEXICA_CHAT_MODEL_NAME=gpt-4o-mini

# Embedding Model Configuration
PERPLEXICA_EMBEDDING_MODEL_PROVIDER=openai
PERPLEXICA_EMBEDDING_MODEL_NAME=text-embedding-3-small
```
| Variable | Description | Default | Example |
|---|---|---|---|
| `PERPLEXICA_BACKEND_URL` | URL to Perplexica search API | `http://localhost:3000/api/search` | `http://localhost:3000/api/search` |
| `PERPLEXICA_CHAT_MODEL_PROVIDER` | Default chat model provider | None | `openai`, `ollama`, `anthropic` |
| `PERPLEXICA_CHAT_MODEL_NAME` | Default chat model name | None | `gpt-4o-mini`, `claude-3-sonnet` |
| `PERPLEXICA_EMBEDDING_MODEL_PROVIDER` | Default embedding model provider | None | `openai`, `ollama` |
| `PERPLEXICA_EMBEDDING_MODEL_NAME` | Default embedding model name | None | `text-embedding-3-small` |
Note: The model environment variables are optional. If not set, you'll need to specify models in each search request. When set, they provide convenient defaults that can still be overridden per request.
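The precedence described here (per-request value wins, environment defaults fill the gap) can be sketched in a few lines. This is a hypothetical illustration of the rule, not the server's actual merge code:

```python
def resolve_model(request_model, provider_env, name_env):
    """Pick the model for a request.

    Precedence: an explicit request value wins; otherwise fall back to the
    environment defaults; otherwise None (caller must supply a model).
    """
    if request_model is not None:
        return request_model
    if provider_env and name_env:
        return {"provider": provider_env, "name": name_env}
    return None
```

So with `PERPLEXICA_CHAT_MODEL_PROVIDER=openai` and `PERPLEXICA_CHAT_MODEL_NAME=gpt-4o-mini` set, a request that names no chat model still resolves to the OpenAI default, while a request that does name one overrides it.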
The server supports three transport modes:
```bash
# PyPI installation
uvx perplexica-mcp stdio

# From source
uv run src/perplexica_mcp/server.py stdio
```
```bash
# PyPI installation
uvx perplexica-mcp sse [host] [port]

# From source
uv run src/perplexica_mcp/server.py sse [host] [port]

# Default: localhost:3001, endpoint: /sse
```
```bash
# PyPI installation
uvx perplexica-mcp http [host] [port]

# From source
uv run src/perplexica_mcp/server.py http [host] [port]

# Default: localhost:3002, endpoint: /mcp
```
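The CLI shape shown above (a transport mode followed by optional host and port, with per-transport default ports) can be modeled with `argparse`. This is a hypothetical reimplementation for illustration, not the package's actual entry point:

```python
import argparse

def parse_args(argv):
    # Mirrors the documented CLI: transport, then optional host and port.
    parser = argparse.ArgumentParser(prog="perplexica-mcp")
    parser.add_argument("transport", choices=["stdio", "sse", "http"])
    parser.add_argument("host", nargs="?", default="localhost")
    parser.add_argument("port", nargs="?", type=int, default=None)
    args = parser.parse_args(argv)
    if args.port is None:
        # Defaults from the docs: SSE on 3001, HTTP on 3002; stdio has no port.
        args.port = {"sse": 3001, "http": 3002}.get(args.transport)
    return args
```

`parse_args(["sse"])` then yields `localhost:3001`, while `parse_args(["http", "0.0.0.0", "8080"])` binds HTTP to the given host and port.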
The server includes Docker support with multiple transport configurations for containerized deployments.
The Docker setup requires an external Docker network named `backend` (for integration with Perplexica):

```bash
docker network create backend
```
```bash
# Build and run with HTTP transport
docker-compose up -d

# Or build first, then run
docker-compose build
docker-compose up -d
```
```bash
# Build and run with SSE transport
docker-compose -f docker-compose-sse.yml up -d

# Or build first, then run
docker-compose -f docker-compose-sse.yml build
docker-compose -f docker-compose-sse.yml up -d
```
Both Docker configurations support environment variables:
```bash
# Create .env file for Docker
cat > .env << EOF
PERPLEXICA_BACKEND_URL=http://perplexica-app:3000/api/search
EOF

# Uncomment env_file in docker-compose.yml to use .env file
```
Or set environment variables directly in the compose file:
```yaml
environment:
  - PERPLEXICA_BACKEND_URL=http://your-perplexica-host:3000/api/search
```
| Transport | Container Name | Port | Endpoint | Health Check |
|---|---|---|---|---|
| HTTP | perplexica-mcp-http | 3001 | /mcp/ | MCP initialize request |
| SSE | perplexica-mcp-sse | 3001 | /sse | SSE endpoint check |
Both containers include health checks:
```bash
# Check container health
docker ps
docker-compose ps

# View health check logs
docker logs perplexica-mcp-http
docker logs perplexica-mcp-sse
```
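The HTTP container's health check sends an MCP `initialize` request. As a rough sketch of what such a JSON-RPC request body looks like (the protocol version and client name here are illustrative, not taken from the actual health check):

```python
import json

def initialize_request(request_id=1):
    """Serialize a minimal MCP 'initialize' JSON-RPC request body.

    Illustrative values: protocolVersion and clientInfo are assumptions,
    not the exact payload the container's health check uses.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "healthcheck", "version": "0.0.1"},
        },
    })
```

A healthy server answers such a request with an `initialize` result over the Streamable HTTP endpoint.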
The Docker setup assumes Perplexica is running in the same Docker network:
```yaml
# Example Perplexica service in the same compose file
services:
  perplexica-app:
    # ... your Perplexica configuration
    networks:
      - backend

  perplexica-mcp:
    # ... MCP server configuration
    environment:
      - PERPLEXICA_BACKEND_URL=http://perplexica-app:3000/api/search
    networks:
      - backend
```
Both containers use `restart: unless-stopped` for reliability.

#### Search Tool

Performs AI-powered web search using Perplexica.
Parameters:
- `query` (string, required): Search query
- `focus_mode` (string, required): One of `webSearch`, `academicSearch`, `writingAssistant`, `wolframAlphaSearch`, `youtubeSearch`, `redditSearch`
- `chat_model` (string, optional): Chat model configuration
- `embedding_model` (string, optional): Embedding model configuration
- `optimization_mode` (string, optional): `speed` or `balanced`
- `history` (array, optional): Conversation history
- `system_instructions` (string, optional): Custom instructions
- `stream` (boolean, optional): Whether to stream responses

Run the comprehensive test suite to verify all transports:
```bash
uv run src/test_transports.py
```
This will test:
- SSE transport at `http://host:port/sse`
- Streamable HTTP transport at `http://host:port/mcp/` (trailing slash on `/mcp/` as per the protocol)

The server is built using:
```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   MCP Client    │◄──►│  Perplexica MCP  │◄──►│   Perplexica    │
│                 │    │      Server      │    │   Search API    │
│  (stdio/SSE/    │    │    (FastMCP)     │    │                 │
│     HTTP)       │    │                  │    │                 │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                               │
                               ▼
                       ┌──────────────┐
                       │   FastMCP    │
                       │  Framework   │
                       │ ┌──────────┐ │
                       │ │  stdio   │ │
                       │ │   SSE    │ │
                       │ │   HTTP   │ │
                       │ └──────────┘ │
                       └──────────────┘
```
This project is licensed under the MIT License - see the LICENSE file for details.
Reviewers can inspect contributed work with `container-use log <env_id>` and `container-use checkout <env_id>`.

For issues and questions: