A simple CLI chatbot that demonstrates the integration of the Model Context Protocol (MCP).
This example shows how to integrate MCP into a simple command-line chatbot. The implementation showcases MCP's flexibility by supporting multiple tools through MCP servers, and it is compatible with any LLM provider that follows the OpenAI API standard.
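Because the chatbot only assumes an OpenAI-compatible endpoint, the underlying LLM call can be a plain HTTP request. The snippet below is a minimal sketch of such a call using the requests library; the endpoint URL and model name are placeholders, not values defined by this project.

```python
# Minimal sketch of a chat completion request against an OpenAI-compatible API.
# The URL and model name below are placeholders; swap in your provider's values.
import os
import requests

def chat(messages: list[dict]) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",  # placeholder endpoint
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"model": "gpt-4o-mini", "messages": messages},  # placeholder model name
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(chat([{"role": "user", "content": "Hello!"}]))
```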
If you find this project helpful, don’t forget to ⭐ star the repository or buy me a ☕ coffee.
The project depends on the following Python packages, listed in requirements.txt:
python-dotenv
requests
mcp
uvicorn
Clone the repository:
git clone https://github.com/3choff/mcp-chatbot.git
cd mcp-chatbot
Install the dependencies:
pip install -r requirements.txt
Set up environment variables:
Create a .env file in the root directory and add your API key:
LLM_API_KEY=your_api_key_here
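A minimal sketch of how this key can be read at startup with python-dotenv (one of the listed dependencies); the variable name matches the .env entry above:

```python
# Sketch: load LLM_API_KEY from the .env file using python-dotenv.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
api_key = os.getenv("LLM_API_KEY")
if not api_key:
    raise RuntimeError("LLM_API_KEY is not set; add it to your .env file")
```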
Configure servers:
The servers_config.json file follows the same structure as Claude Desktop's configuration, allowing for easy integration of multiple servers.
Here's an example:
```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "./test.db"]
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```
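For context, each entry in this file maps naturally onto an MCP stdio connection. The sketch below shows how a single server definition could be loaded and queried with the MCP Python SDK; it is an illustration under that assumption, not the project's exact implementation.

```python
# Sketch: read one entry from servers_config.json, start the server over stdio,
# and list the tools it exposes, using the MCP Python SDK client.
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_server_tools(name: str) -> None:
    with open("servers_config.json") as f:
        config = json.load(f)["mcpServers"][name]

    params = StdioServerParameters(
        command=config["command"],
        args=config.get("args", []),
        env=config.get("env"),
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_server_tools("sqlite"))
```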
Environment variables are supported as well. Pass them as you would with the Claude Desktop App.
Example:
```json
{
  "mcpServers": {
    "server_name": {
      "command": "uvx",
      "args": ["mcp-server-name", "--additional-args"],
      "env": {
        "API_KEY": "your_api_key_here"
      }
    }
  }
}
```
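One way to forward these values to the spawned server process is to merge them into the parent environment before building the server parameters. A small sketch, assuming the stdio transport from the MCP Python SDK:

```python
# Sketch: overlay the optional "env" block from a server entry onto the
# parent process environment when spawning the server.
import os
from mcp import StdioServerParameters

server_cfg = {
    "command": "uvx",
    "args": ["mcp-server-name", "--additional-args"],
    "env": {"API_KEY": "your_api_key_here"},
}

params = StdioServerParameters(
    command=server_cfg["command"],
    args=server_cfg["args"],
    env={**os.environ, **server_cfg.get("env", {})},  # inherit parent env, then apply config values
)
```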
Run the client:
python main.py
Interact with the assistant:
The assistant will automatically detect available tools and can respond to queries based on the tools provided by the configured servers.
Exit the session:
Type quit or exit to end the session.
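For reference, the automatic tool detection mentioned above comes down to translating the tools reported by each MCP server into a format the LLM understands. The sketch below shows one possible mapping onto the OpenAI function-calling tool format; the project may format tools differently (for example, as plain text in the system prompt).

```python
# Sketch: convert MCP Tool objects (name, description, inputSchema) into
# OpenAI-style "tools" definitions for function calling.
def mcp_tools_to_openai(tools) -> list[dict]:
    return [
        {
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description or "",
                "parameters": tool.inputSchema,  # MCP already describes inputs as JSON Schema
            },
        }
        for tool in tools
    ]
```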
```mermaid
flowchart TD
    A[Start] --> B[Load Configuration]
    B --> C[Initialize Servers]
    C --> D[Discover Tools]
    D --> E[Format Tools for LLM]
    E --> F[Wait for User Input]
    F --> G{User Input}
    G --> H[Send Input to LLM]
    H --> I{LLM Decision}
    I -->|Tool Call| J[Execute Tool]
    I -->|Direct Response| K[Return Response to User]
    J --> L[Return Tool Result]
    L --> M[Send Result to LLM]
    M --> N[LLM Interprets Result]
    N --> O[Present Final Response to User]
    K --> O
    O --> F
```
Initialization: The client loads servers_config.json, starts the configured MCP servers, and discovers the tools they expose.
Runtime Flow: Each user message is sent to the LLM together with the available tool descriptions; the LLM either answers directly or requests a tool call, and the tool result is fed back to the LLM before the final response is shown.
Tool Integration: Tools discovered from the MCP servers are formatted for the LLM so the model knows what it can call, and each tool execution is routed back to the server that provides it.
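A condensed sketch of the runtime flow described above, using OpenAI-style tool calling and an initialized MCP ClientSession. Here chat_completion is an assumed helper that wraps the LLM API call and returns the assistant message as a dict; the names are illustrative only, not the project's actual functions.

```python
# Sketch of one conversational turn from the flowchart: send user input plus
# tool definitions to the LLM, execute any requested tool via the MCP session,
# feed the result back, and return the final answer.
import json

async def handle_turn(user_input: str, messages: list, tools: list, session) -> str:
    messages.append({"role": "user", "content": user_input})
    reply = chat_completion(messages, tools)          # LLM decision (assumed helper)
    tool_calls = reply.get("tool_calls")
    if not tool_calls:                                # direct response branch
        messages.append(reply)
        return reply["content"]

    messages.append(reply)                            # keep the assistant's tool request
    for call in tool_calls:                           # tool call branch
        result = await session.call_tool(
            call["function"]["name"],
            json.loads(call["function"]["arguments"]),
        )
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(result.content),           # tool result goes back to the LLM
        })
    final = chat_completion(messages, tools)          # LLM interprets the tool result
    messages.append(final)
    return final["content"]
```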
Feedback and contributions are welcome. If you encounter any issues or have suggestions for improvements, please create a new issue on the GitHub repository.
If you'd like to contribute to the development of the project, feel free to submit a pull request with your changes.
This project is licensed under the MIT License.