Colrows vs Looker Semantic Layer: Which Is Right for AI?

Looker is a BI tool with a hand-authored LookML semantic layer optimised for human dashboards. Colrows is a semantic execution layer for AI agents. Both call themselves "semantic," but one resolves meaning at presentation time for people clicking dropdowns; the other resolves meaning at compile time for agents generating SQL.

Executive summary

Looker (now part of Google Cloud) is a mature BI platform whose central innovation, LookML, gave the analytics world a real semantic layer years before "semantic layer" became fashionable again. LookML lets data teams declare views, explores, measures, dimensions, and access controls in a versioned modelling language. Looker resolves user clicks into SQL and renders results as dashboards, looks, and visualisations. For human-facing analytics, this stack works well.

Colrows is built for what comes after BI. It is a semantic execution layer designed to operate the AI-agent layer of the enterprise - autonomously building a typed semantic graph across the data estate, then compiling every agent intent into governed, deterministic, dialect-perfect SQL. Where LookML is hand-authored and resolved at presentation time, the Colrows graph is built and maintained autonomously and resolved at compile time.

In short:

  • Looker helps people see data through dashboards.
  • Colrows enables systems to compile intent into governed SQL.

For organisations whose roadmap runs from BI to AI agents, the architectures diverge - and the difference is foundational, not incremental.

Two philosophies, two outcomes

Looker assumes meaning is declared once and consumed by humans. Analytics engineers write LookML; users explore through Looker's UI; the platform composes SQL on demand. The semantic layer is rich within LookML's vocabulary - measures, dimensions, joins, derived tables, access filters - but it is bound to Looker's runtime and its assumption that the consumer is a person clicking through curated explores.

Colrows assumes meaning is distributed across systems and consumed by machines. Definitions live in catalogues, dbt models, BI tools (including Looker), runbooks, Confluence, and historical query usage. Colrows ingests these sources and constructs a typed semantic graph autonomously. The graph is multi-scope (global, datastore, persona, user) and uses multi-vector embeddings - a definition vector, a usage vector, and a combined vector per concept. The consumer is an agent that expresses intent in natural language and needs that intent compiled deterministically.
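
To make the multi-scope, multi-vector idea concrete, here is a minimal Python sketch. The class names, scope precedence, and averaging rule are illustrative assumptions, not the Colrows implementation.

```python
from dataclasses import dataclass, field

# Scope precedence, narrowest last: a user-level definition overrides a
# persona-level one, and so on down to global. (Illustrative assumption.)
SCOPES = ("global", "datastore", "persona", "user")

@dataclass
class ConceptNode:
    """One concept in a multi-scope semantic graph (hypothetical shape)."""
    name: str
    scope: str
    definition_vec: list[float]   # embedding of the written definition
    usage_vec: list[float]        # embedding of observed query usage
    combined_vec: list[float] = field(default_factory=list)

    def __post_init__(self):
        if self.scope not in SCOPES:
            raise ValueError(f"unknown scope: {self.scope}")
        if not self.combined_vec:
            # Combined view: here simply the mean of the two embeddings.
            self.combined_vec = [
                (d + u) / 2 for d, u in zip(self.definition_vec, self.usage_vec)
            ]

def resolve(candidates: list[ConceptNode]) -> ConceptNode:
    """Most specific definition wins: user > persona > datastore > global."""
    precedence = {s: i for i, s in enumerate(SCOPES)}
    return max(candidates, key=lambda c: precedence[c.scope])
```

A persona-scoped "revenue" would then shadow the global definition for that persona, while every other consumer keeps resolving the global one.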

The downstream consequence: Looker scales with the LookML modelling effort your team can sustain. Colrows scales with the data, documentation, and usage signals you already have.

BI semantic layer vs. semantic execution layer

The most important architectural distinction is what the layer is built to serve.

Looker is presentation-time semantics. A user picks dimensions, applies filters, and clicks Run. LookML resolves the click into SQL inside Looker's runtime. The semantic layer is an authoring artifact that lives in version control and gets compiled when humans request it.

Colrows is compile-time semantics. An AI agent generates intent. The intent is resolved against a typed semantic graph; join paths are proven; RBAC, ABAC, and row/column-level predicates are injected into the SQL before any byte is fetched from the warehouse. The semantic layer is an execution artifact that lives at the centre of every agent query, with a point-in-time reproducible audit trail.
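
The compile-time idea can be sketched in a few lines of Python. The policy table, role names, and SQL wrapping below are invented for illustration; the point is only that governance lands in the SQL text before anything is executed.

```python
# Hypothetical role -> row-level predicate table. In a real system these
# policies would come from RBAC/ABAC configuration, not a dict literal.
POLICIES = {
    "analyst_eu": "region = 'EU'",
    "compliance": "enforcement_status = 'active'",
}

def compile_intent(base_sql: str, role: str) -> str:
    """Wrap generated SQL so the caller's predicate is enforced at compile time."""
    predicate = POLICIES.get(role)
    if predicate is None:
        # Unauthorised intent fails compilation; no query reaches the warehouse.
        raise PermissionError(f"no policy for role {role!r}: compilation refused")
    # The predicate is part of the SQL itself, so filtered rows are never read.
    return f"SELECT * FROM ({base_sql}) AS q WHERE {predicate}"
```

Contrast this with a filter applied inside a BI runtime session: here the restriction is visible in the emitted SQL and auditable after the fact.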

Both are real semantic layers. They are not interchangeable.

A concrete scenario: regulated portfolio review

A financial services firm runs Looker for its analyst-facing dashboards. The LookML project is well-maintained, access filters segment customer data by region, and the analytics team is rightly proud of its work. Analysts run their portfolio reviews from saved Looker dashboards every Monday morning.

Now the firm wants to build an autonomous portfolio review agent: "Investigate why our distressed-asset recovery rate dropped in Q3, and propose corrective actions while staying inside RBI SARFAESI and DRT regulatory scope." The agent must reason across:

  • Loan, Borrower, RecoveryEvent, Collateral, and RegulatoryScope entities and the relationships between them.
  • The compliance officer persona's access to specific borrower-PII columns, but only when scoped to active enforcement files.
  • Predicates that vary per regulatory framework, which change as new RBI circulars are issued.
  • An audit trail proving which definitions and policies were in force when the agent reached its conclusion.

Looker can render the historical recovery-rate chart, and LookML access filters can hide rows for some users. But the agent's reasoning is multi-hop, the governance is multi-scope, the policies evolve mid-quarter, and the audit must be defensible to a regulator. None of this is a failure of Looker: it was never designed to be the runtime for autonomous agents.

With Colrows, the agent compiles its intent through a typed semantic graph that already encodes the entities, relationships, and regulatory scopes. Compile-time governance injects the right predicates per persona, per file, per date. The audit trail is point-in-time reproducible. New circulars ingested as documentation update the graph without a LookML deploy.
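
What "proving" a join path over those entities might look like can be sketched as a graph search. The entity names come from the scenario; the edge table and function below are illustrative assumptions, not the Colrows API.

```python
from collections import deque

# Hypothetical typed relationships between the scenario's entities.
EDGES = {
    ("Loan", "Borrower"): "loans.borrower_id = borrowers.id",
    ("Loan", "RecoveryEvent"): "recovery_events.loan_id = loans.id",
    ("Loan", "Collateral"): "collateral.loan_id = loans.id",
}

def prove_join_path(start: str, goal: str) -> list[str]:
    """Return the join conditions linking two entities, or fail loudly."""
    graph: dict[str, list[tuple[str, str]]] = {}
    for (a, b), cond in EDGES.items():
        graph.setdefault(a, []).append((b, cond))
        graph.setdefault(b, []).append((a, cond))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for nxt, cond in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cond]))
    # No provable path: refuse to compile rather than guess a join.
    raise LookupError(f"no proven join path from {start} to {goal}")
```

Joining Borrower to RecoveryEvent resolves through Loan in two hops; asking for a path to an unrelated entity fails compilation instead of producing a speculative cross join.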

Why BI semantic layers cannot do this

Looker is excellent at one thing: helping humans explore curated data through visualisations. It is not designed to:

  • Compile free-text agent intent into governed SQL deterministically.
  • Inject row/column-level predicates at compile time across persona, tenant, and regulatory scope.
  • Build or maintain itself from documentation, runbooks, and usage signals.
  • Produce machine-readable compile traces for AI audit.
  • Resolve the same concept differently across multi-scope semantics without forking the LookML view.

Each of those is foundational for putting AI agents into production. Adding them on top of Looker is not a configuration change - it is a different architecture.

Reframing the comparison

Looker and Colrows operate at different layers of the modern data stack.

Looker = presentation-time BI semantic layer. Optimised for human-facing dashboards and curated exploration. LookML is hand-authored. Resolution happens inside Looker's runtime.

Colrows = compile-time semantic execution layer. Optimised for governed, deterministic compilation of AI agent intent. The semantic graph is built and maintained autonomously. Resolution happens before any SQL touches the warehouse, with policy injected at the SQL level.

The two coexist cleanly: keep Looker for analyst dashboards; add Colrows for the agent-execution surface that compiles intent into governed SQL.

The bottom line

If your goal is mature human-facing BI - dashboards, embedded analytics for customers, and a hand-authored modelling language your analytics team owns - Looker is a capable choice and a sensible default for many enterprises.

If your goal is to put AI agents into production - to compile their intent into governed SQL with proven joins, compile-time policy enforcement, and a semantic graph that evolves autonomously with your business - Looker alone is not the right architecture. The semantic execution layer above it is what matters, and that is what Colrows provides.

At the core

  • Built for - Looker: BI dashboards and exploration. Colrows: semantic execution for AI agents.
  • Primary consumer - Looker: humans clicking dashboards. Colrows: AI agents and autonomous workflows.
  • Core strength - Looker: visual exploration and embedded BI. Colrows: governed compilation of agent intent.
  • Intelligence lives in - Looker: LookML files and the Looker UI. Colrows: the typed semantic graph.

Semantic and intelligence handling

  • Semantic model - Looker: manual LookML. Colrows: autonomous and continuously maintained.
  • Resolution time - Looker: presentation time (human click). Colrows: compile time (agent intent).
  • Multi-scope resolution - Looker: per-user via access filters. Colrows: global, datastore, persona, and user scopes.
  • Context preservation - Looker: per-session. Colrows: cross-workflow and persistent.
  • AI readiness - Looker: indirect via the Looker API. Colrows: agent-native by design.
  • Explainability - Looker: SQL-runner inspection. Colrows: end-to-end (intent → semantics → SQL → source data).

Engineering and operational reality

  • Compile-time governance - Looker: partial (access filters and grants). Colrows: yes; RBAC/ABAC injected into generated SQL.
  • Join path proof - Looker: schema-checked via explores. Colrows: proven against the typed graph at compile time.
  • Drift handling - Looker: manual LookML updates. Colrows: autonomous detection plus proposed mappings.
  • Cloud lock-in - Looker: tied to Google Cloud. Colrows: warehouse and cloud agnostic.
  • Audit trail - Looker: System Activity logs. Colrows: point-in-time reproducible compilation trace.
  • Coexistence - Looker: not applicable. Colrows: ingests LookML; Looker continues for dashboards.

Frequently asked questions

Is Colrows a Looker replacement?

For dashboarding and visual exploration, no - Looker is a BI tool and Colrows is not. For the semantic layer underneath your AI agents, yes - Colrows replaces LookML as the source of truth for compiled, governed query intent. Many enterprises run Colrows as the agent-execution layer while keeping Looker for human-facing dashboards.

Can Colrows ingest LookML definitions?

Yes. Views, explores, measures, dimensions, and access grants from a LookML project can seed the Colrows semantic graph. From there the graph evolves autonomously, ingests other documentation sources, and exposes the agent-native compile pipeline that Looker does not provide.

Why is a BI semantic layer not enough for AI agents?

BI semantic layers like LookML resolve meaning at presentation time, primarily for humans clicking dropdowns and writing filters. AI agents generate intent in free text. The layer between them and the warehouse must compile that intent deterministically, with proven joins and compile-time governance - not just look up a metric definition. Those are different architectural problems.

Does Colrows support visualisation?

Colrows includes self-serve dashboards built on its semantic graph, but visualisation is not the product's centre of gravity. The centre of gravity is compiling agent intent into governed SQL. For full BI-tool feature breadth, Looker remains a stronger fit. For trustworthy AI execution, Colrows is built for it.

How does Colrows compile-time governance compare to LookML access controls?

LookML supports access_filter, access_grant, and PDT permissions that the BI layer enforces against authenticated user sessions. Colrows enforces RBAC, ABAC, and row/column-level predicates at compile time, before any SQL is generated. Filtered-out rows are never read; unauthorised intent fails compilation rather than reaching the warehouse. This matters more for AI agents than for humans because agent volume and unpredictability stress the governance layer differently.

Is Colrows tied to Google Cloud the way Looker is?

No. Colrows is warehouse-agnostic and runs across Snowflake, Databricks, BigQuery, Redshift, Postgres, MySQL, ClickHouse, Trino, and 8+ more. Deployment options include Cloud, dedicated, and fully private VPC across AWS, Azure, and GCP.

Further reading

See Colrows compile a real agent query.