Today we are releasing the first version of the awaBerry MCP Server — a Model Context Protocol server that exposes awaBerry Agentic API capabilities to AI agents, orchestration frameworks, and any tool that speaks MCP. It is open source and available now on GitHub.
What Is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard for connecting AI agents to external tools and data sources. By speaking MCP, an AI assistant — whether it is Claude, GPT, Gemini, or a custom agent built with LangChain or LlamaIndex — can discover and use external capabilities at runtime, without bespoke integration code for each tool.
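Conceptually, MCP's runtime discovery boils down to two exchanges: the agent asks the server what tools it offers, then invokes one by name with structured arguments. The sketch below illustrates that shape in plain Python — the tool name, schema, and dispatch logic are hypothetical, purely for illustration; real MCP carries these messages over JSON-RPC (`tools/list` and `tools/call`).

```python
# Illustrative sketch of MCP-style runtime tool discovery.
# The tool catalogue here is hypothetical, not the awaBerry server's
# actual capability list.

# A server advertises its tools as structured descriptors with JSON Schemas,
# so any agent can learn how to call them at runtime.
TOOL_CATALOGUE = [
    {
        "name": "read_file",
        "description": "Read a file from a registered device",
        "inputSchema": {
            "type": "object",
            "properties": {
                "device": {"type": "string"},
                "path": {"type": "string"},
            },
            "required": ["device", "path"],
        },
    },
]

def list_tools():
    """Analogue of MCP's tools/list: the agent discovers capabilities at runtime."""
    return [t["name"] for t in TOOL_CATALOGUE]

def call_tool(name, arguments):
    """Analogue of MCP's tools/call: dispatch by name, validate required args."""
    tool = next((t for t in TOOL_CATALOGUE if t["name"] == name), None)
    if tool is None:
        raise KeyError(f"unknown tool: {name}")
    missing = [k for k in tool["inputSchema"]["required"] if k not in arguments]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    # A real server would perform the action here; this sketch echoes the request.
    return {"tool": name, "arguments": arguments}
```

Because the schemas travel with the tools, the agent needs no hardcoded knowledge of the server: it reads the catalogue and constructs valid calls on its own.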
The awaBerry MCP Server bridges this ecosystem to your physical devices. Once running, any MCP-capable AI agent can use it to execute commands on your devices, read files, query device state, and run automation workflows — all through the secure, authenticated awaBerry Agentic API.
What the awaBerry MCP Server Enables
With the MCP Server running, your AI agent can:
- Execute commands on registered devices through natural language instructions
- Read file contents and directory listings from remote devices
- Query system state (CPU, memory, disk, running processes)
- Trigger automation workflows defined in awaBerry Agentic projects
- Chain multi-device operations in a single agent interaction
Security by Design
Every action taken through the MCP Server is governed by the same fine-grained access controls introduced in awaBerry Agentic 1.4. The AI agent operates within the permission boundaries you have defined — specific folders, specific commands, specific devices. The MCP Server never bypasses those controls.
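To make the idea of a permission boundary concrete, here is a minimal sketch of an allowlist-style policy check — specific devices, specific commands, specific folders. This is not awaBerry's actual implementation; the class and field names are assumptions chosen for illustration.

```python
# Illustrative permission-boundary check (hypothetical; not awaBerry's
# real access-control code). The agent's request is validated against
# an explicit allowlist before anything executes.
from dataclasses import dataclass
from pathlib import PurePosixPath

@dataclass
class Policy:
    allowed_devices: frozenset   # devices the agent may touch
    allowed_commands: frozenset  # commands the agent may run
    allowed_roots: tuple         # folders the agent may read under

    def may_run(self, device: str, command: str) -> bool:
        """Permit a command only on an allowed device, from the allowed set."""
        return device in self.allowed_devices and command in self.allowed_commands

    def may_read(self, device: str, path: str) -> bool:
        """Permit reads only inside the configured folder roots."""
        if device not in self.allowed_devices:
            return False
        p = PurePosixPath(path)
        return any(str(p) == root or p.is_relative_to(root)
                   for root in self.allowed_roots)
```

A request that falls outside the boundary — an unlisted device, an unapproved command, a path outside the permitted folders — is simply refused, which is the behavior the section above describes.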
Get Started
The awaBerry MCP Server is open source and available at:
https://github.com/awaberry/mcp_server_awaberry
The repository includes full setup documentation, configuration examples, and integration guides for popular AI orchestration frameworks. Contributions and feedback are welcome — open an issue or pull request on GitHub, or reach out to us via the contact form.
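Many MCP clients register servers through a small JSON configuration block. The fragment below shows the general shape only — the launch command, argument list, and environment variable name are placeholders, not the server's documented settings; consult the repository's setup documentation for the actual values.

```json
{
  "mcpServers": {
    "awaberry": {
      "command": "path/to/awaberry-mcp-server",
      "args": [],
      "env": {
        "AWABERRY_API_KEY": "<your-api-key>"
      }
    }
  }
}
```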