What MCP actually solved
The Model Context Protocol, introduced by Anthropic in November 2024, took the M*N integration problem of "M agent frameworks talking to N data sources" and collapsed it to M+N. Before MCP, every agent platform shipped its own connectors for Slack, GitHub, Postgres, Salesforce, Confluence, your warehouse - each platform duplicating work every other platform had already done. The combinatorics were brutal. MCP fixed that with three primitives: resources (read-only context), tools (functions an agent can invoke), and prompts (reusable templates). Any agent that speaks MCP can talk to any MCP server. Any data source that exposes an MCP server is reachable by any agent.
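A schematic of the three primitives helps make the M+N point concrete. This is not the real MCP SDK - just a plain-Python stand-in for what a server's surface looks like (all keys and the `query_warehouse` tool are invented):

```python
# Schematic only: the three primitive kinds an MCP server exposes.
# Not the actual MCP SDK API; names are illustrative.
server = {
    "resources": {"warehouse://schema/orders": "read-only context: the orders schema"},
    "tools": {"query_warehouse": lambda sql: f"rows for: {sql}"},
    "prompts": {"summarise_table": "Summarise the table {name} for an analyst."},
}

# Any MCP-speaking agent can enumerate this surface and call it uniformly -
# which is exactly what collapses M*N bespoke connectors to M+N.
print(sorted(server))  # ['prompts', 'resources', 'tools']
```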
By 2026, this had become the dominant pattern. OpenAI, Google DeepMind, Microsoft, AWS, and Cloudflare all ship MCP support. Eighteen months after launch, the protocol was donated to a Linux Foundation directed fund (the Agentic AI Foundation), making it formally vendor-neutral. Enterprise data platforms - including Snowflake, Databricks, and the major catalogs - have shipped first-party MCP servers. SQL and MCP increasingly sit side by side as the two protocols over which enterprise data is reachable.
What MCP did not solve
MCP standardises access. It does not standardise meaning. That distinction is the whole reason this article exists.
Concretely, an MCP server for your warehouse exposes tools like query_warehouse(sql) or list_tables(). The protocol lets the agent call them; the protocol does not tell the agent what "revenue" means in your warehouse, which join paths are valid, which definition of revenue applies to which scope, or which rows the agent is even allowed to see under the requesting user's policy. All of that is left to the agent.
And the agent gets it wrong. Generic LLMs over MCP-exposed warehouses produce SQL that is syntactically valid, returns plausible-looking numbers, and is silently incorrect a substantial fraction of the time. The agent does not know the orders table excludes voided transactions; it does not know the finance department uses a different revenue definition than the product team; it does not know it is not authorised to see refund data for accounts outside its scope. The protocol does not encode any of that. The semantic layer does.
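The "silently incorrect" failure is easy to reproduce in miniature. The toy table below (all names - `orders`, `voided`, the amounts - are invented for illustration) shows two queries that are both syntactically valid and return plausible numbers, only one of which matches the finance definition:

```python
# Toy "orders" table: one voided transaction that finance excludes from revenue.
orders = [
    {"id": 1, "amount": 100.0, "voided": False},
    {"id": 2, "amount": 250.0, "voided": True},   # voided: excluded by finance
    {"id": 3, "amount": 75.0,  "voided": False},
]

# What a generic agent tends to generate: SELECT SUM(amount) FROM orders
naive_revenue = sum(o["amount"] for o in orders)

# What the finance team actually means by "revenue": voided rows excluded
finance_revenue = sum(o["amount"] for o in orders if not o["voided"])

print(naive_revenue)    # 425.0 - valid SQL, plausible number, silently wrong
print(finance_revenue)  # 175.0 - the governed definition
```

Nothing in the protocol distinguishes the two results; only the semantic definition does.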
Three architectures of MCP-to-data integration
To make this concrete, consider three ways an enterprise can wire MCP to its data estate. Two of them are common in 2026 production. One is what we recommend. Their failure modes differ sharply.
Architecture 1: Bare-metal MCP server over the warehouse
The simplest pattern. The warehouse vendor ships an MCP server that exposes tools like execute_sql, get_schema, and list_tables. The agent calls them directly. There is no layer between the agent and the raw schema.
This works for a hackathon. It does not work at production scale because:
- The agent is responsible for inventing correct SQL from raw schema. It will frequently fabricate joins.
- Governance is enforced (if at all) only via warehouse-side row-level security, which the agent has no way to anticipate. The agent generates a query, the warehouse silently filters rows, and the agent presents an incomplete result as if it were complete.
- Every agent has to re-derive what every other agent already figured out. Two agents on the same warehouse compute the same metric two different ways.
- There is no audit trail of why a particular SQL query ran - no proof of join validity, no record of which definition of which metric was used.
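The second failure mode above - the warehouse silently filtering rows the agent cannot anticipate - can be sketched in a few lines. Everything here (`execute_sql`, the `refunds` rows, the region-based policy) is hypothetical:

```python
# Sketch of Architecture 1's silent-filter failure. Row-level security runs
# inside the warehouse, after the agent has generated its query; the result
# carries no signal that anything was removed.
ALL_REFUNDS = [
    {"account": "A1", "region": "EU", "amount": 40.0},
    {"account": "A2", "region": "US", "amount": 60.0},
]

def execute_sql(sql: str, caller_region: str):
    # Stand-in for a bare MCP warehouse tool: rows outside the caller's
    # RLS policy are silently dropped.
    return [r for r in ALL_REFUNDS if r["region"] == caller_region]

rows = execute_sql("SELECT * FROM refunds", caller_region="EU")
total = sum(r["amount"] for r in rows)

# The agent presents 40.0 as "total refunds" - incomplete, and nothing in
# the protocol tells it so (the true total is 100.0).
print(total)
```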
Architecture 2: MCP server over a metric store / BI semantic model
Slightly better. The MCP server exposes pre-defined metrics from a tool like dbt Semantic Layer, Cube, or LookML. The agent can call get_metric("revenue", filters) and receive a governed number.
This solves part of the problem. Where a metric exists, the agent gets a consistent definition. But it has three structural limits:
- Pre-defined metrics only. The moment the agent needs to reason multi-hop ("show me churned EU customers in the BFSI segment whose contract value dropped 20% YoY"), the metric layer cannot resolve it because the metric was never authored.
- BI-shaped, not AI-shaped. Metric stores were designed for dashboards, where humans pick fields from a curated menu. Agents need to compose novel queries. The metric layer is not a graph; it cannot prove a join the modeller did not pre-author.
- Single-scope. Most metric layers do not encode the multi-scope semantics (global -> datastore -> persona -> user) that real enterprise governance requires. One revenue definition fits all - which is why finance and product still disagree.
Architecture 3: MCP server in front of a semantic execution layer
This is the pattern we ship at Colrows. The MCP server exposes a single semantic intent tool: the agent describes the question in natural language or structured intent, and the semantic execution layer compiles it. The compilation does the work the agent should not have to do.
- The agent sends intent over MCP: "churn risk for EU BFSI segment, finance scope, last quarter."
- The semantic execution layer resolves every term against the typed semantic graph. "Churn risk" is a versioned concept; "finance scope" selects the policy plane; "EU BFSI segment" resolves to entities with provable relationships.
- It proves the join paths through the graph. If a path does not exist, compilation fails - the agent receives a structured error, not a fabricated answer.
- It applies compile-time governance. RBAC + ABAC + row/column predicates are evaluated before SQL is emitted. Unauthorised intent fails compilation; data is never read.
- It emits dialect-perfect SQL against the warehouse, executes, and returns a structured result with a full audit trail (graph version, definitions used, executed SQL, identity context).
- The agent receives a typed result and the audit trail. The next agent, asking the same question in the same scope, gets the same answer - because the compiler is deterministic.
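The steps above can be sketched as a toy compile pipeline. Every name here (`GRAPH`, `SCOPES`, `compile_intent`, the `"v1"` graph version) is illustrative - the real semantic graph, policy planes, and audit format are far richer - but the control flow is the point: governance and join proof happen before any SQL exists:

```python
# Toy compile pipeline behind a single MCP intent tool.
GRAPH = {  # typed join edges the compiler is allowed to prove
    ("Account", "Subscription"): "account.id = subscription.account_id",
    ("Subscription", "UsageEvent"): "subscription.id = usage_event.subscription_id",
}
SCOPES = {"finance": {"EU", "US"}, "sales": {"EU"}}  # toy ABAC: regions per scope

def compile_intent(entities: list[str], scope: str, region: str) -> dict:
    # 1. Compile-time governance: unauthorised intent fails before any read.
    if region not in SCOPES.get(scope, set()):
        return {"error": "unauthorised", "scope": scope, "region": region}
    # 2. Prove every join path; a missing edge is a structured error,
    #    never a fabricated join.
    joins = []
    for a, b in zip(entities, entities[1:]):
        edge = GRAPH.get((a, b))
        if edge is None:
            return {"error": "no_proven_join_path", "between": [a, b]}
        joins.append(edge)
    # 3. Deterministic emission plus audit trail.
    sql = "SELECT ... WHERE " + " AND ".join(joins)  # schematic SQL
    return {"sql": sql, "audit": {"scope": scope, "joins": joins, "graph_version": "v1"}}

ok = compile_intent(["Account", "Subscription", "UsageEvent"], "finance", "EU")
bad_path = compile_intent(["Account", "UsageEvent"], "finance", "EU")
bad_scope = compile_intent(["Account", "Subscription"], "sales", "US")
```

Because the compiler is a pure function of intent, scope, and graph version, the same question in the same scope always yields the same SQL and the same audit trail.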
What MCP carries is intent and result. What the semantic layer carries is meaning, governance, and proof. Together, protocol and layer become the addressing-and-routing pair that production enterprise agents need.
Why "tools/list -> call_tool" is the wrong granularity for data
One subtlety worth naming. The natural MCP pattern is to expose many tools: one per data domain, one per metric, one per system. For application-style integrations (Slack, Linear, GitHub), this is the right granularity. For governed enterprise data, it is the wrong granularity. The reason is that agents do not know which tool to call when. Faced with twelve tools that all return revenue-shaped numbers, the agent picks one - often the wrong one - and runs it.
Putting a semantic execution layer behind a single MCP tool inverts this. The agent does not have to choose between twelve dialects of revenue. It expresses intent. The compiler chooses. Tool surface goes down; correctness goes up; the agent stops failing in the choose-a-tool layer.
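The choose-a-tool failure can be illustrated in miniature. The tool names and the scope-to-definition mapping below are invented; the point is the contrast between fuzzy tool selection and deterministic scope resolution:

```python
# Many revenue-shaped tools: the agent must guess which one the asker means.
MANY_TOOLS = [
    "get_revenue_finance", "get_revenue_product", "get_revenue_booked",
    "get_arr", "get_recognised_revenue", "get_gross_revenue",
]

def naive_pick(question: str) -> str:
    # What tool selection effectively amounts to: fuzzy-match the question
    # against tool names. Several tools match "revenue"; one wins arbitrarily.
    return next(t for t in MANY_TOOLS if "revenue" in t)

def intent_tool(question: str, scope: str) -> str:
    # Single intent tool: the requesting scope, not the agent, selects the
    # definition (toy mapping, for illustration only).
    return {"finance": "recognised", "product": "booked"}[scope]

print(naive_pick("What was Q3 revenue?"))              # arbitrary among matches
print(intent_tool("What was Q3 revenue?", "finance"))  # deterministic per scope
```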
What this looks like end-to-end
Picture an enterprise with a governance team that has standardised on Anthropic's MCP for all agent connectors. They have MCP servers for Slack, Salesforce, Confluence, GitHub, Jira, the warehouse, the data catalog, and an internal documentation index. An agent receives: "Draft a renewal proposal for Account 7421 reflecting their current usage and any contractual constraints."
- The agent calls the Salesforce MCP server (resource): pulls the account record.
- It calls the Confluence/MSA RAG MCP server (tool): retrieves the relevant clauses of the master service agreement that govern renewal terms.
- It calls the semantic execution layer MCP server with structured intent: "current usage for Account 7421, sales scope, last 90 days, broken down by product line." The semantic layer compiles the intent against the typed graph, applies the sales scope, proves the join paths through Account -> Subscription -> UsageEvent, governs against the requesting user's RBAC + ABAC policy, emits SQL, and returns a structured usage breakdown with audit trail.
- The agent composes the proposal: contract clauses cited from the document index, usage numbers from the semantic layer, all with provenance.
Three MCP servers. Three different jobs. The semantic layer is the only one that produces governed structured truth - which is why it sits exactly where structured truth is required, and nowhere else.
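The composition step is worth sketching: three MCP calls, one assembled answer, provenance attached. Every server name, tool name, and payload shape below is a stub invented for illustration:

```python
# End-to-end sketch of the renewal-proposal flow with each MCP server stubbed.
def call_mcp(server: str, tool: str, args: dict) -> dict:
    stubs = {
        ("salesforce", "get_account"): {"account": "7421", "tier": "enterprise"},
        ("msa_rag", "retrieve_clauses"): {"clauses": ["renewal cap 5%"],
                                          "source": "MSA v3, renewal section"},
        ("semantic_layer", "compile_intent"): {
            "usage_by_product": {"core": 1800, "addons": 240},
            "audit": {"scope": "sales", "graph_version": "v1"},
        },
    }
    return stubs[(server, tool)]

account = call_mcp("salesforce", "get_account", {"id": "7421"})
clauses = call_mcp("msa_rag", "retrieve_clauses", {"topic": "renewal"})
usage = call_mcp("semantic_layer", "compile_intent",
                 {"intent": "usage for account 7421, sales scope, last 90 days"})

proposal = {
    "account": account["account"],
    "constraints": clauses["clauses"],
    "usage": usage["usage_by_product"],
    # Provenance: every clause and number is traceable to its source server.
    "provenance": [clauses["source"], usage["audit"]],
}
```

Only the semantic-layer result arrives with a compile-time audit trail; the agent's job shrinks to composition.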
What enterprise architects should plan for
If you are designing an enterprise agentic AI architecture in 2026, here is the practical guidance:
- Standardise on MCP for connectors. The protocol has won. Build your agents to consume MCP; expose internal data sources as MCP servers.
- Do not expose your warehouse over a bare MCP server. Either expose only governed metrics, or - better - expose a semantic intent surface backed by a semantic execution layer.
- Treat the semantic layer as the governed half of your MCP fleet. Documents, tickets, and CRM records can be RAG-style MCP resources. Governed structured data should be one MCP tool that compiles intent.
- Audit at the protocol level. MCP gateways can enforce policy across the tools/list -> call_tool cycle. Use that surface for cross-cutting concerns (rate limits, scope checks). Use the semantic layer for semantic governance (RBAC + ABAC + row/column predicates against typed entities).
- Watch for fragmentation, not just integration. MCP solves connector fragmentation. It does not solve definitional fragmentation. Two agents over the same MCP fleet with no semantic layer behind it will still disagree about revenue.
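The gateway-versus-semantic-layer split in the guidance above can be sketched as follows. The gateway interface, rate limit, and scope naming here are all hypothetical - the point is that these checks are coarse and cross-cutting, not semantic:

```python
# Toy MCP gateway: cross-cutting checks wrap every call_tool; semantic
# governance (definitions, join proofs, row/column predicates) stays behind
# the semantic layer, not here.
from collections import defaultdict

RATE_LIMIT = 2
_calls = defaultdict(int)

def gateway_call_tool(agent_id: str, scopes: set[str], tool: str, args: dict):
    _calls[agent_id] += 1
    if _calls[agent_id] > RATE_LIMIT:
        return {"error": "rate_limited"}
    if tool.split(".")[0] not in scopes:  # coarse scope check, not semantics
        return {"error": "out_of_scope", "tool": tool}
    return {"ok": True, "forwarded": tool}  # would forward to the real server

print(gateway_call_tool("a1", {"warehouse"}, "warehouse.compile_intent", {}))
print(gateway_call_tool("a1", {"warehouse"}, "slack.post_message", {}))
```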
MCP made every connector look the same. The semantic execution layer makes every answer mean the same thing. Both are required. One without the other ships agents that can connect to everything and agree on nothing.
How Colrows fits
Colrows is engineered to be the semantic execution layer behind your MCP fleet. Agents emit intent over MCP; Colrows compiles that intent through the typed semantic graph, enforces compile-time governance, emits dialect-perfect SQL across 16+ engines, and returns a structured, audited result. For the wider architecture see The Emergence of the Semantic Operating System and, on the orthogonal half of agent context, RAG vs Semantic Layer.
Closing thought
The protocol war is over. MCP won. The interesting question for the next eighteen months is not how agents talk to data - it is what they find when they get there. Enterprises that put a semantic execution layer behind their MCP servers ship agents that produce governed, reproducible, audited answers. Those that skip it ship agents that fluently invoke the wrong tools and confidently report contradictions. Same protocol. Different outcomes.
