awaBerry Agentic MCP Server — LLM Device Integration in Action

One of the most consequential shifts in the security and privacy landscape right now is not happening in traditional infrastructure. It is happening at the boundary between large language models and the physical world. LLMs are no longer confined to text processing in sandboxed environments. They are being given tools. They are being given access to external systems. They are being asked to read from real devices, write to real databases, and trigger real actions on real machines.

This is genuinely exciting. It is also, from a security standpoint, a category of risk that demands careful architectural thinking. The Model Context Protocol (MCP) — Anthropic's open standard for connecting LLMs to external tools and data sources — is one of the most important developments in this space. And the awaBerry Agentic MCP Server is how we bring zero-trust device access into that ecosystem.

What the Model Context Protocol Does

MCP defines a standardised way for language models to invoke external tools: read a file, execute a query, call a service, retrieve data from a device. Applications like Claude Desktop implement the MCP client side. External systems — databases, APIs, device managers — implement the MCP server side. When an LLM needs data or needs to take an action, it calls the appropriate MCP server, receives a structured result, and incorporates it into its reasoning.

The power of this architecture is modularity and composability. The LLM does not need to know how to talk to your specific device. It just needs to know that an MCP tool exists that can retrieve data from it. The security of the system depends entirely on how that MCP server is implemented — specifically, on whether it enforces appropriate access controls between the LLM's requests and the devices they target.
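To make the client/server split tangible, here is a minimal sketch of the JSON-RPC 2.0 `tools/call` exchange that MCP defines. The tool name `read_device_file`, its arguments, and the handler are hypothetical illustrations, not awaBerry's actual tool surface:

```python
# Minimal sketch of an MCP "tools/call" round trip (JSON-RPC 2.0).
# "read_device_file" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_device_file",
        "arguments": {"device": "station-01", "path": "data/readings.csv"},
    },
}

def handle_tools_call(req):
    """Dispatch a tools/call request to a registered tool handler."""
    handlers = {
        # A real server would proxy this read through the device tunnel;
        # here we simply echo what would be fetched.
        "read_device_file": lambda a: f"read {a['path']} from {a['device']}",
    }
    params = req["params"]
    text = handlers[params["name"]](params["arguments"])
    # MCP results carry a content array the model folds into its context.
    return {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    }
```

The point of the structured result is that the model never parses device protocols itself; it only ever sees the content the server chooses to return.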

The awaBerry Agentic MCP Server

The awaBerry Agentic MCP Server exposes your registered awaBerry devices as MCP tools. When configured in Claude Desktop (or any MCP-compatible client), it allows the LLM to:

  • Read data from specific directories on specific devices
  • Execute allowed commands and retrieve their output
  • Load structured data files for analysis
  • Write results back to defined output locations (if write permission is granted)

All of this happens through the awaBerry Agentic API — which means every interaction is governed by a Project Key with precisely scoped permissions, tunnelled over an outbound-only HTTPS connection, and logged with a full audit trail.
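As an illustration, MCP servers are registered with Claude Desktop in its `claude_desktop_config.json` under the `mcpServers` key. The server name, command, package name, and environment variable below are hypothetical placeholders, not the actual awaBerry installation instructions:

```json
{
  "mcpServers": {
    "awaberry": {
      "command": "npx",
      "args": ["-y", "@awaberry/agentic-mcp-server"],
      "env": {
        "AWABERRY_PROJECT_KEY": "<your-scoped-project-key>"
      }
    }
  }
}
```

Keeping the Project Key in the client configuration, rather than in the model's context, means the credential itself is never visible to the LLM.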

A Concrete Scenario

Let me make this concrete. Imagine a research team that collects environmental sensor data on a fleet of remote monitoring devices — temperature, humidity, air quality readings logged to structured CSV files on each device. They want to use Claude to analyse patterns in this data and generate a written summary of anomalies for the weekly research report.

With the awaBerry Agentic MCP Server configured in Claude Desktop:

  1. The researcher types a prompt: "Analyse this week's sensor data from the field stations and summarise any anomalies or trends I should be aware of."
  2. Claude identifies the relevant MCP tool — the awaBerry device data reader — and calls it with the appropriate parameters.
  3. The MCP server authenticates against the awaBerry Agentic API using the configured Project Key, opens a scoped tunnel to each registered field station device, and reads the relevant data files.
  4. The raw sensor data is returned to Claude as structured context.
  5. Claude analyses the data, identifies anomalies, and writes a clear, structured summary — directly in the conversation.

The researcher receives a high-quality, AI-generated analysis of data that lives on remote physical devices, without writing a single line of integration code, and without exposing those devices to the internet in any way.
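Step 5 above — the analysis itself — can be pictured with a small sketch of the kind of anomaly screening involved. The column names, readings, and 1.5σ threshold are invented for illustration:

```python
import csv
import io
import statistics

# Invented sample of the structured CSV a field station might log.
raw = """timestamp,temp_c
2024-05-01T00:00,21.4
2024-05-01T01:00,21.9
2024-05-01T02:00,22.1
2024-05-01T03:00,35.8
2024-05-01T04:00,21.7
"""

rows = list(csv.DictReader(io.StringIO(raw)))
temps = [float(r["temp_c"]) for r in rows]
mean = statistics.mean(temps)
stdev = statistics.pstdev(temps)

# Flag readings more than 1.5 population standard deviations from the mean.
anomalies = [r for r, t in zip(rows, temps) if abs(t - mean) > 1.5 * stdev]
```

In practice the model performs this reasoning itself over the returned context; the sketch just shows that the raw rows arriving from the device are all it needs.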

Why the Security Architecture Matters Here

I want to be direct about the security properties that make this approach responsible to deploy — because "give an LLM access to your devices" is, without the right controls, a sentence that should make any security professional uncomfortable.

The awaBerry Agentic API's permission model applies to MCP server interactions exactly as it does to any other programmatic access:

  • Privilege scope: The LLM can only operate with the privilege level defined in the project configuration — standard user by default, root only if explicitly enabled.
  • Filesystem scope: Read access can be restricted to named directories. The LLM cannot browse the entire filesystem unless the project explicitly permits it.
  • Write permissions: Write access is off by default. It must be explicitly enabled per project, and can be scoped to specific paths.
  • Command scope: An explicit allowlist of permitted commands can be defined. The MCP server will not execute anything outside that list, regardless of what the LLM requests.
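A minimal sketch of how scopes like these might be enforced on each incoming request. The configuration fields, paths, and command names are assumptions for illustration, not the real awaBerry project schema:

```python
from pathlib import PurePosixPath

# Hypothetical per-project scope configuration.
PROJECT = {
    "allowed_commands": {"cat", "df", "sensors"},
    "readable_roots": ["/data/sensors"],
    "writable_paths": [],  # write access is off by default
}

def command_allowed(cmd: str) -> bool:
    # Only binaries named in the explicit allowlist may run,
    # regardless of what the LLM requests.
    return cmd.split()[0] in PROJECT["allowed_commands"]

def path_readable(path: str) -> bool:
    # Read access is confined to the named directories.
    p = PurePosixPath(path)
    return any(p.is_relative_to(root) for root in PROJECT["readable_roots"])

def write_allowed(path: str) -> bool:
    # No writable paths are granted unless explicitly configured.
    p = PurePosixPath(path)
    return any(p.is_relative_to(root) for root in PROJECT["writable_paths"])
```

The essential property is that the checks run server-side, before any tunnel traffic: a prompt-injected or confused model can ask for anything, but only requests inside the configured scope ever reach a device.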

Every MCP interaction generates a full audit log entry. If something unexpected happens — if the LLM requests something it should not, or if an anomalous pattern appears in the access logs — the security team has a complete, structured record to investigate.
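The shape of such a record might look like the sketch below. The field set is an assumption about what a full audit trail would capture, not the actual awaBerry log schema:

```python
import time
import uuid

def audit_entry(project_id: str, tool: str, arguments: dict,
                allowed: bool, result_summary: str) -> dict:
    """Build one structured audit record for a single MCP interaction."""
    return {
        "event_id": str(uuid.uuid4()),  # unique, so entries can be cross-referenced
        "timestamp": time.time(),
        "project_id": project_id,
        "tool": tool,
        "arguments": arguments,          # exactly what the LLM asked for
        "allowed": allowed,              # whether scope checks permitted it
        "result": result_summary,
    }

entry = audit_entry("research-fleet", "read_device_file",
                    {"device": "station-01", "path": "/data/sensors/week.csv"},
                    True, "returned 10080 rows")
```

Logging denied requests with `allowed: False` is just as important as logging successful ones — that is where anomalous patterns first show up.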

And instant revocation means the risk exposure is strictly bounded in time: when the project is deleted, access terminates completely, with zero residual artefacts.

Device as a Service, Done Securely

The phrase "device as a service" is increasingly common in the AI infrastructure space. What it means in practice varies enormously. At one end of the spectrum, it means broadly exposing device data to any LLM that asks for it. At the other end — the awaBerry end — it means making device data available to LLMs within a zero-trust boundary that is precisely scoped, fully audited, and instantly revocable.

LLMs are going to interact with your devices. The question is whether you are comfortable with the architecture that mediates that interaction. Explore the Agentic API →