MCP Tooling
Make your data, including private databases, queryable by AI agents, all under the control of the data holder.
When the Machine Wants to Ask a Question
Your data lives in databases, catalogues, APIs, file stores. People query it through dashboards and search interfaces built for humans. But AI agents don’t use dashboards. They need an interface they can talk to — structured, predictable, machine-readable. The Model Context Protocol (MCP) is that interface.
MCP Tooling builds MCP servers that sit in front of your data sources — public open data portals, private databases, internal APIs, document stores — and let AI agents query them directly. The agent can search, inspect, filter, and retrieve data through a standard protocol. No screen-scraping, no brittle workarounds. The agent talks to your data the way a developer would talk to an API, but through a protocol designed for machines.
The crucial point: the data holder stays in control. You decide which data sources to expose, which operations to allow, what access controls to enforce. The MCP server is your gatekeeper. The AI agent sees exactly what you choose to show it — no more, no less. Private databases remain private. Sensitive fields stay hidden. Access policies are enforced at the server, not trusted to the agent.
The immediate result: you can ask an AI agent “what transport datasets do we have?” or “show me last quarter’s sales by region” and get a real answer drawn from live data, not from training data that might be months old — all without giving the agent direct access to your database.
A Controlled Window Into Your Data
Imagine you have a room full of filing cabinets. Some drawers are public, some are confidential. A very helpful robot wants to answer questions about what’s in the cabinets, but you don’t want to give it the keys to every drawer.
MCP Tooling builds a service hatch in the wall. The robot asks questions through the hatch: “what files do you have about transport?”, “can you summarise the budget report?” The person behind the hatch — your MCP server — decides which drawers to open and what to pass through. The robot gets answers. It never gets the keys.
That’s MCP Tooling. It lets AI agents query your data — public or private — without ever having direct access. You control the hatch.
From Agent to Answer
1. Connect a data source
Point the MCP server at your data source: a CKAN portal, a PostgreSQL database, a REST API, a document store. Configuration specifies the connection details and access credentials. The server starts locally or on your infrastructure and registers with your AI client (Claude Desktop, or any MCP-compatible agent).
2. Define the tools
The server exposes a set of tools the agent can call. For a CKAN portal, that might be package_search and datastore_search. For a database, it might be query_table and list_schemas. You choose which operations to expose. The agent sees only what you allow.
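The pattern above can be sketched in plain Python. This is a simplified illustration of a tool registry, not the MCP SDK's actual API: each exposed tool gets a name, a description, and a handler, and anything not registered simply does not exist from the agent's point of view. The tool name and its stub behaviour here are illustrative.

```python
from typing import Callable

# Registry of agent-callable tools. The agent can only call what is here.
TOOLS: dict[str, dict] = {}

def tool(name: str, description: str):
    """Register a handler as an agent-callable tool."""
    def decorator(fn: Callable) -> Callable:
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return decorator

@tool("package_search", "Search datasets on the portal by free-text query.")
def package_search(q: str) -> dict:
    # In a real server this would call the data source; stubbed here.
    return {"query": q, "results": []}

def list_tools() -> list[str]:
    """What the agent sees when it asks which tools exist."""
    return sorted(TOOLS)
```

Tools you never register, such as a hypothetical `delete_table`, are not hidden behind a permission check; they are simply absent from the interface.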
3. Enforce access control
The server enforces your policies. Read-only access by default. Specific tables or fields can be excluded. Row-level filtering can restrict what the agent sees. API keys and authentication are handled server-side — never exposed to the agent.
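A minimal sketch of that enforcement step, applied server-side to every result before it reaches the agent. The field names and the row-level rule are illustrative, not part of the actual configuration format:

```python
# Fields the agent must never see, and a row-level rule, both enforced
# on the server before results leave it.
EXCLUDED_FIELDS = {"customer_name", "email"}

def row_allowed(row: dict) -> bool:
    # Example row-level rule: hide rows flagged as confidential.
    return not row.get("confidential", False)

def apply_policy(rows: list[dict]) -> list[dict]:
    """Strip excluded fields and drop disallowed rows."""
    return [
        {k: v for k, v in row.items() if k not in EXCLUDED_FIELDS}
        for row in rows
        if row_allowed(row)
    ]

rows = [
    {"shipment_id": 1, "customer_name": "Acme", "confidential": False},
    {"shipment_id": 2, "customer_name": "Beta", "confidential": True},
]
print(apply_policy(rows))
```

Because the filtering happens inside the server, a misbehaving or compromised agent has nothing to bypass: the redacted fields never cross the wire.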
4. Query
When the agent calls a tool, the MCP server translates the request into a query against your data source, fetches the result, and returns structured JSON. The agent never connects to the data source directly — the server mediates every request.
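The mediation step can be sketched like this, with an in-memory SQLite table standing in for the real data source. The tool name, table, and columns are illustrative; the important details are that agent input is bound as a query parameter rather than interpolated into SQL, and that the server, not the agent, holds the connection:

```python
import json
import sqlite3

# SQLite stands in for the real data source; the server owns the connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER, route TEXT, delayed INTEGER)")
conn.executemany("INSERT INTO shipments VALUES (?, ?, ?)",
                 [(1, "DUB-CRK", 1), (2, "DUB-GWY", 0)])

def handle_tool_call(name: str, args: dict) -> str:
    """Translate an agent's tool call into a query, return structured JSON."""
    if name != "query_table":
        raise ValueError(f"unknown tool: {name}")
    # Never interpolate agent input into SQL: bind it as a parameter.
    cur = conn.execute("SELECT id, route FROM shipments WHERE delayed = ?",
                       (int(args["delayed"]),))
    rows = [{"id": r[0], "route": r[1]} for r in cur.fetchall()]
    return json.dumps({"tool": name, "rows": rows})

print(handle_tool_call("query_table", {"delayed": 1}))
```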
5. Compose
The agent combines results from multiple tool calls to answer complex questions. “Find all transport datasets, check which ones were updated this month, and summarise the changes.” Each step is a separate tool call; the agent orchestrates them.
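The composition above amounts to sequential tool calls whose outputs feed the next step. A toy sketch, with stubbed results standing in for the MCP round-trip (tool names follow the CKAN example; the data is invented):

```python
def call_tool(name: str, args: dict) -> dict:
    # Stub standing in for the MCP round-trip to the server.
    stubs = {
        "package_search": {"datasets": ["bus-routes", "rail-usage"]},
        "package_show": {"last_updated": "2025-06-12"},
    }
    return stubs[name]

# Step 1: find transport datasets.
found = call_tool("package_search", {"q": "transport"})["datasets"]
# Step 2: inspect each dataset's metadata.
updates = {d: call_tool("package_show", {"id": d})["last_updated"]
           for d in found}
# Step 3: the agent would summarise `updates` in natural language.
print(updates)
```

In practice the agent chooses the sequence itself; the server only executes each call.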
AI Agents That Query Your Data, On Your Terms
Open data discovery
A policy analyst asks an AI agent: “what environmental datasets does the EPA publish?” The agent queries the open data portal via MCP, filters by organisation, and returns a summary of matching datasets with descriptions and formats. No need to visit the portal or learn its search syntax. The data is public; MCP just makes it machine-readable.
Private database querying
A logistics company wants its operations team to ask questions of their shipment database in plain language. The MCP server exposes read-only access to three tables — shipments, routes, depots — with customer names redacted. An analyst asks: “which routes had the most delays last month?” The agent queries the database through the MCP server and returns a summary. No SQL knowledge required. No direct database access granted.
Compliance and audit
A data protection officer needs to check what personal data is stored across multiple internal systems. An MCP server is configured for each system with read-only, metadata-only access — the agent can see table structures and column names but not row data. The agent scans each system and reports which tables contain fields that look like personal data. The DPO gets a cross-system inventory without anyone having to query each database manually.
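A metadata-only check of that kind can be sketched as a column-name heuristic. The token list and schema are illustrative, and a real scan would be more careful, but the key property holds: the function only ever sees table and column names, never row data.

```python
# Column-name tokens that suggest personal data. Illustrative, not exhaustive.
SUSPECT_TOKENS = ("name", "email", "phone", "address", "dob", "ssn")

def flag_personal_columns(schema: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per table, the columns whose names suggest personal data."""
    return {
        table: [c for c in cols if any(t in c.lower() for t in SUSPECT_TOKENS)]
        for table, cols in schema.items()
    }

schema = {
    "orders": ["order_id", "customer_email", "total"],
    "depots": ["depot_id", "city"],
}
print(flag_personal_columns(schema))
```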
Cross-source analysis
A researcher studying housing data connects agents to a public open data portal and a university’s internal research database simultaneously. Each data source has its own MCP server with its own access rules. The agent combines public statistics with internal survey data to produce an analysis that neither source could provide alone — without the research database ever being exposed beyond the university’s MCP server.
Under the Bonnet
Protocol
Model Context Protocol (MCP) — an open standard for connecting AI agents to external tools and data sources. The server implements the MCP tool specification.
Supported data sources
CKAN portals (Action API v3), PostgreSQL databases, REST APIs. Additional connectors can be built to the same pattern. Each data source gets its own MCP server with its own access configuration.
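The "same pattern" for new connectors can be pictured as a small interface that every data source implements and the MCP server drives. The method names and the toy in-memory connector below are illustrative, not the project's actual connector API:

```python
from typing import Protocol

class Connector(Protocol):
    """Interface a data-source connector exposes to the MCP server."""
    def list_tools(self) -> list[str]: ...
    def call(self, tool: str, args: dict) -> dict: ...

class MemoryConnector:
    """Toy connector over an in-memory table, standing in for a file store."""
    def __init__(self, rows: list[dict]):
        self.rows = rows

    def list_tools(self) -> list[str]:
        return ["search"]

    def call(self, tool: str, args: dict) -> dict:
        if tool != "search":
            raise ValueError(f"unknown tool: {tool}")
        needle = args["q"].lower()
        return {"rows": [r for r in self.rows
                         if any(needle in str(v).lower() for v in r.values())]}

c = MemoryConnector([{"name": "bus-routes"}, {"name": "rail-usage"}])
print(c.call("search", {"q": "bus"}))
```

Because every connector speaks the same interface, access policies and the MCP plumbing are written once and reused across sources.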
Access control
Read-only by default. Table and field exclusion lists. Row-level filtering. API keys and database credentials held server-side, never passed to the agent. All queries mediated by the server — no direct data source access.
Tools (CKAN example)
package_search, package_show, organization_list, organization_show, group_list, group_show, resource_show, datastore_search, tag_list, site_read.
Configuration
Environment variables for connection details. CKAN_URL for portals, DATABASE_URL for databases, API_BASE_URL for REST sources. Optional API_KEY for authenticated operations.
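A sketch of how a server might resolve those settings at startup. The variable names match the ones listed above; the validation rule and example value are illustrative:

```python
import os

def load_config() -> dict:
    """Read connection settings from the environment."""
    cfg = {
        "ckan_url": os.environ.get("CKAN_URL"),
        "database_url": os.environ.get("DATABASE_URL"),
        "api_key": os.environ.get("API_KEY"),  # optional
    }
    if not (cfg["ckan_url"] or cfg["database_url"]):
        raise RuntimeError("no data source configured")
    return cfg

os.environ["CKAN_URL"] = "https://data.gov.ie"  # example value
print(load_config()["ckan_url"])
```

Keeping credentials in the environment rather than in the tool definitions means they live with the server process and are never part of what the agent can inspect.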
Stack
Python 3.13+, aiohttp for async HTTP, MCP SDK. Deployable locally or via Docker.
Tested against
CKAN portals (data.gov.ie, 21,830 datasets). PostgreSQL 14+. Compatible with any data source that can be wrapped in a Python connector.