Job Matching
An intelligent job-matching MCP server over HTTP/SSE that extracts job requirements and ranks candidates.
A Model Context Protocol (MCP) server that provides intelligent job matching capabilities. Extract structured job requirements from job descriptions and find/rank candidates from your LlamaCloud resume index.
Tools:

- extract_job_requirements(job_description_text: str) - Extract structured job requirements from job description text
- find_matching_candidates(required_qualifications: str, preferred_qualifications: str, top_k: int, enable_reranking: bool) - Find and rank candidates from the LlamaCloud index
- search_candidates_by_skills(skills: str, top_k: int) - Search candidates by specific skills
- score_candidate_qualifications(candidate_resume: str, required_qualifications: str, preferred_qualifications: str, job_title: str, job_description: str) - Score a candidate against job requirements
- add / subtract / multiply - Basic math functions (backward compatibility)

extract_job_requirements(jd_text: str) -> str
Input: Job description text (copied from job posting)
Output: JSON string containing:
- title: Job title
- company: Company name
- location: Job location
- required_qualifications: Array of required qualifications
- preferred_qualifications: Array of preferred qualifications
- description: Job summary
- experience_level: Experience level (entry/mid/senior)
- employment_type: Employment type (full-time/contract/etc.)

find_matching_candidates(required_qualifications: str, preferred_qualifications: str, top_k: int, enable_reranking: bool) -> str
Input:
- required_qualifications: Comma-separated required qualifications
- preferred_qualifications: Comma-separated preferred qualifications
- top_k: Maximum candidates to retrieve (default: 10)
- enable_reranking: Whether to enable reranking (default: True)

Output: JSON string containing:

- candidates: Array of candidates with scores and analysis
- total_candidates: Number of candidates found
- search_parameters: Details about search configuration
search_candidates_by_skills(skills: str, top_k: int) -> str
Input:
- skills: Comma-separated list of skills or keywords
- top_k: Number of top candidates to retrieve (default: 10)

Output: JSON with matching candidates and their scores
score_candidate_qualifications(candidate_resume: str, required_qualifications: str, preferred_qualifications: str, job_title: str, job_description: str) -> str
Input:
- candidate_resume: The candidate's resume text
- required_qualifications: Comma-separated required qualifications
- preferred_qualifications: Comma-separated preferred qualifications
- job_title: Job title for context (optional)
- job_description: Job description for context (optional)

Output: A comprehensive analysis of the candidate's fit against the requirements
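Note that find_matching_candidates takes comma-separated strings while extract_job_requirements returns JSON arrays, so a caller has to convert between the two. A minimal sketch of such an adapter (the helper name is ours; the field names follow the output spec above):

```python
import json


def to_match_args(job_reqs_json: str, top_k: int = 10) -> dict:
    """Convert extract_job_requirements output into find_matching_candidates arguments."""
    reqs = json.loads(job_reqs_json)
    return {
        "required_qualifications": ", ".join(reqs.get("required_qualifications", [])),
        "preferred_qualifications": ", ".join(reqs.get("preferred_qualifications", [])),
        "top_k": top_k,
        "enable_reranking": True,
    }
```

The resulting dict can then be passed directly as the tool's arguments.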
You have two options for configuration:
```bash
# Required: OpenAI API Key
export OPENAI_API_KEY="your-openai-api-key"

# Required: LlamaCloud Configuration
export LLAMA_CLOUD_API_KEY="your-llamacloud-api-key"
export LLAMA_CLOUD_INDEX_NAME="your-resume-index-name"
export LLAMA_CLOUD_PROJECT_NAME="your-project-name"
export LLAMA_CLOUD_ORGANIZATION_ID="your-organization-id"

# Optional: Server Configuration
export PORT="8080"
export HOST="0.0.0.0"
export REQUEST_TIMEOUT="30.0"
export OPENAI_TEMPERATURE="0.1"
```
Open config.py and replace the placeholder values:
```python
# Replace these placeholder values with your actual API keys:
OPENAI_API_KEY = "your-actual-openai-api-key-here"
LLAMA_CLOUD_API_KEY = "your-actual-llamacloud-api-key-here"
LLAMA_CLOUD_ORGANIZATION_ID = "your-actual-org-id-here"
LLAMA_CLOUD_INDEX_NAME = "your-actual-index-name"
```
⚠️ Security Warning: If you edit config.py directly, never commit your API keys to version control!
Without LlamaCloud credentials, the server uses mock candidate data for testing.
If you're planning to commit this code to version control, create a .gitignore file to protect your sensitive information:
```bash
# Create .gitignore file
cat > .gitignore << EOF
# Environment variables and sensitive files
.env
.env.local
.env.production
config_local.py

# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
.venv/
venv/

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db
EOF
```
Alternative: You can also create a separate config_local.py file with your actual keys and import it in config.py, then add config_local.py to .gitignore.
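The config_local.py approach might look like the following sketch inside config.py (the try/except keeps the module importable when no local override file exists):

```python
# config.py (sketch of the config_local override pattern)
OPENAI_API_KEY = "your-openai-api-key-here"  # placeholder default

try:
    # Real keys live in config_local.py, which is listed in .gitignore
    from config_local import *  # noqa: F401,F403
except ImportError:
    pass  # no local overrides present; placeholders remain
```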
```bash
# Install using uv (recommended)
uv sync

# Or using pip
pip install fastmcp httpx
```
```bash
python server.py
```
Server starts on http://localhost:8080/mcp (or PORT environment variable)
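Before running the test client, you can sanity-check that something is listening at the endpoint. This stdlib-only helper is a sketch and not part of the server:

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen


def endpoint_reachable(url: str = "http://localhost:8080/mcp") -> bool:
    """Return True if something answers HTTP at the given URL."""
    try:
        urlopen(url, timeout=5.0)
        return True
    except HTTPError:
        return True   # an HTTP error status still means the server is up
    except URLError:
        return False  # connection failed: nothing is listening
```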
Run the comprehensive test suite:
```bash
# Start the server first
python server.py

# In another terminal, run tests
python test_server.py
```
The test suite will:
Job Description Processing
Candidate Retrieval
Intelligent Scoring
Match Calculation
Weighted Score = (Required Total × 2) + Preferred Total
Match % = (Weighted Score / Max Possible Score) × 100
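The two formulas above can be sketched in Python. The per-qualification maximum (max_per_item) is an assumption for illustration; the source only defines the 2x weighting of required qualifications:

```python
def match_percentage(required_scores: list[float], preferred_scores: list[float],
                     max_per_item: float = 10.0) -> float:
    """Weighted Score = (Required Total x 2) + Preferred Total,
    normalized against the maximum possible weighted score."""
    weighted = 2 * sum(required_scores) + sum(preferred_scores)
    max_possible = (2 * max_per_item * len(required_scores)
                    + max_per_item * len(preferred_scores))
    return 100.0 * weighted / max_possible if max_possible else 0.0
```

For example, a candidate scoring 5/10 on both required qualifications and 0 on the one preferred qualification gets 100 * 20 / 50 = 40%.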
```mermaid
graph TD
    A[Job Description Text] --> B[extract_job_requirements]
    B --> C[Structured Requirements JSON]
    C --> D[find_matching_candidates]
    E[LlamaCloud Index] --> D
    D --> F[Ranked Candidates with Scores]
    C --> G[score_candidate_qualifications]
    H[Individual Resume] --> G
    G --> I[Detailed Analysis & Recommendations]
```
```python
import json

# Extract requirements from the job posting (returns a JSON string)
job_reqs = json.loads(extract_job_requirements(job_posting_text))

# Find top candidates from your resume database
result = json.loads(find_matching_candidates("Python, JavaScript, React", "AWS, Docker", 10, True))

# Get a detailed analysis of the most promising candidates
for candidate in result["candidates"][:5]:
    analysis = score_candidate_qualifications(
        candidate["resume"],
        "Python, JavaScript, React",
        "AWS, Docker",
        job_reqs["title"],
        job_reqs["description"],
    )
```
Make sure you have the following set up:
```bash
export PROJECT_ID=<your-project-id>

# 1. Install dependencies
uv sync

# 2. Configure API keys (choose one method):

# Method A: Set environment variables
export OPENAI_API_KEY="your-openai-api-key"
export LLAMA_CLOUD_API_KEY="your-llamacloud-api-key"
export LLAMA_CLOUD_INDEX_NAME="your-index-name"
export LLAMA_CLOUD_ORGANIZATION_ID="your-org-id"

# Method B: Edit config.py directly (see configuration section above)

# 3. Run server
python server.py
```
The server will start on http://localhost:8080/mcp and log which configuration it's using:
```
[INFO]: LlamaCloudService initialized with index: your-index-name
[INFO]: MCP server starting on 0.0.0.0:8080
```
You can verify your configuration is working by running:
```bash
python -c "from config import OPENAI_API_KEY, LLAMA_CLOUD_API_KEY, LLAMA_CLOUD_INDEX_NAME; print(f'OpenAI: {OPENAI_API_KEY[:10]}..., LlamaCloud: {LLAMA_CLOUD_API_KEY[:10]}..., Index: {LLAMA_CLOUD_INDEX_NAME}')"
```
If you see placeholder values like "your-openai-api-key-here", your configuration needs to be updated.
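A slightly more robust check can be done in Python. This helper is a sketch; the placeholder prefix it looks for is an assumption based on the defaults shown in this README:

```python
def find_placeholder_settings(settings: dict[str, str]) -> list[str]:
    """Return the names of settings that are empty or still hold placeholder values."""
    markers = ("your-", "your_")  # placeholder prefixes used in this README
    return [name for name, value in settings.items()
            if not value or value.lower().startswith(markers)]
```

For example, passing your loaded config values as a dict returns the names that still need real keys.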
```bash
# Build image
docker build -t job-matching-mcp .

# Run container
docker run -p 8080:8080 \
  -e OPENAI_API_KEY="your-key" \
  -e LLAMA_CLOUD_API_KEY="your-key" \
  job-matching-mcp
```
📚 Reference Documentation: Build and Deploy a Remote MCP Server to Google Cloud Run in Under 10 Minutes
```bash
# Build and push to Artifact Registry
gcloud builds submit --region=us-central1 \
  --tag us-central1-docker.pkg.dev/$PROJECT_ID/remote-mcp-servers/mcp-server:latest

# Deploy to Cloud Run
gcloud run deploy mcp-server \
  --image us-central1-docker.pkg.dev/$PROJECT_ID/remote-mcp-servers/mcp-server:latest \
  --region=us-central1 \
  --no-allow-unauthenticated \
  --set-env-vars OPENAI_API_KEY="your-key",LLAMA_CLOUD_API_KEY="your-key"
```
After making code changes to your MCP server, follow these steps to redeploy:
Step 1: Rebuild the container and push to Artifact Registry
```bash
gcloud builds submit --region=us-central1 \
  --tag us-central1-docker.pkg.dev/$PROJECT_ID/remote-mcp-servers/mcp-server:latest
```
Step 2: Re-deploy the updated container to Cloud Run
```bash
gcloud run deploy mcp-server \
  --image us-central1-docker.pkg.dev/$PROJECT_ID/remote-mcp-servers/mcp-server:latest \
  --region=us-central1 \
  --no-allow-unauthenticated
```
Step 3: Test the deployment (optional)
Start the Cloud Run proxy to test your updated server:
```bash
gcloud run services proxy mcp-server --region=us-central1
```
Then run your test script:
```bash
uv run test_server.py
```
```
mcp-on-cloudrun/
├── config.py                     # Configuration constants
├── models.py                     # Data structures
├── server.py                     # Main MCP server
├── Dockerfile                    # Container configuration
├── pyproject.toml                # Python dependencies
├── services/
│   ├── openai_service.py         # OpenAI API integration
│   └── llamacloud_service.py     # LlamaCloud integration
├── tools/
│   ├── math_tools.py             # Math operations (add, subtract, multiply)
│   ├── job_tools.py              # Job description extraction
│   └── candidate_tools.py        # Candidate search and scoring
└── test_server.py                # Test client
```
Modify the scoring prompts in services/openai_service.py to adjust evaluation criteria:
For production deployment with real candidate data, set real LlamaCloud credentials in config.py (or via environment variables) instead of the mock-data fallback.

Change the model in config.py:
```python
OPENAI_MODEL = "gpt-4o-mini"  # Fast and cost-effective
# OPENAI_MODEL = "gpt-4o"     # Higher quality, more expensive
```
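If you want different models for different tools, one hypothetical pattern is a per-task mapping (the dict, function, and task keys below are illustrative only; config.py as shipped defines a single OPENAI_MODEL):

```python
# Hypothetical per-task model selection; names and keys are illustrative only
MODEL_BY_TASK = {
    "extract_job_requirements": "gpt-4o-mini",   # fast and cost-effective
    "score_candidate_qualifications": "gpt-4o",  # higher quality for critical analysis
}


def model_for(task: str, default: str = "gpt-4o-mini") -> str:
    """Pick a model for a task, falling back to the cheap default."""
    return MODEL_BY_TASK.get(task, default)
```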
- Use gpt-4o-mini for most operations, gpt-4o for critical analysis
- Use --no-allow-unauthenticated to require authentication for Cloud Run
- Grant callers the roles/run.invoker IAM role to access the server

"Invalid API key" errors:
- Check your API keys in config.py or environment variables

"LlamaCloud index not found":
- Verify that LLAMA_CLOUD_INDEX_NAME matches your actual index name
- Verify that LLAMA_CLOUD_ORGANIZATION_ID is correct

Server shows placeholder values:
- Update your keys via environment variables or by editing config.py directly
- If port 8080 is already in use, run lsof -ti:8080 | xargs kill -9 to free it up

View Cloud Run logs:
```bash
gcloud run services logs tail mcp-server --region=us-central1
```
View local server logs:
```bash
# Server logs are printed to console when running locally
python server.py
```
MIT License - see LICENSE file for details.
Ready to revolutionize your hiring process with AI-powered job matching! 🎯✨