Colrows AI

Colrows AI is the natural-language interface to your governed semantic graph. It answers business questions with results that are repeatable across teams, auditable across time, and explainable to anyone reading the trace.

What it does

Most "AI for data" tools generate plausible SQL from a prompt and a schema. Colrows AI is different in one structural way: it does not generate SQL directly. It compiles an intent through the same four-stage pipeline every other Colrows request goes through - context resolution, constrained planning, governed execution - and presents the result with a complete reasoning trace.

That means every answer Colrows AI gives is:

  • Grounded in the governed semantic graph, not in a model's guess about your column names.
  • Repeatable - the same question, asked tomorrow, returns the same answer (and any difference is traceable to a definition change).
  • Explainable - every concept, join, constraint, and policy used in the answer is shown alongside it.
  • Governed - personas, scopes, and row/column predicates apply at compile time. There is no path to data outside the persona's allowed subgraph.
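
To make those properties concrete, here is a minimal sketch (in Python) of what an answer-plus-trace payload could look like. The field names below - concepts_used, join_path, dialect_sql, and so on - are illustrative assumptions, not the documented Colrows API.

```python
# Illustrative only: an assumed shape for an answer that carries its own trace.
from dataclasses import dataclass
from typing import Any

@dataclass
class ReasoningTrace:
    concepts_used: list[str]        # governed concepts the question was bound to
    join_path: list[str]            # the entity path the planner proved
    constraints_applied: list[str]  # grain, time-window, and policy predicates
    dialect_sql: str                # the exact SQL that ran on the warehouse
    graph_version: str              # semantic state in force when the answer was produced

@dataclass
class Answer:
    question: str
    result: Any                     # the value or table returned by the data plane
    trace: ReasoningTrace           # shown alongside every answer
```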

How it works

A natural-language question travels the same compile-then-execute path as any other request (a toy sketch of the full flow follows the steps below):

  1. Intent normalization & binding

    The question is parsed and binds to entities, metrics, events, and concepts in Consensus. Multi-vector embeddings handle natural-language drift; structural reasoning makes the final call.

  2. Persona & scope resolution

    The requester's persona resolves an allowed subgraph. Anything out of scope is invisible to the rest of the pipeline - there is no way to "smuggle" a column past the planner.

  3. Join path proof

    Multi-entity questions are solved as a constrained graph traversal. Ambiguous paths fail compilation with an explainable error rather than silently generating the wrong number.

  4. Constraint solving

    Grain compatibility, time-window predicates, and policy restrictions are applied before any plan is produced.

  5. Dialect-perfect execution

    The plan is specialized to your warehouse dialect. The query runs. The trace is captured for audit.
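
The sketch below is a toy, self-contained illustration of that compile-then-execute flow. All the names in it (bind_intent, prove_join_path, CompilationError, and the in-memory "graph") are hypothetical stand-ins for the five stages, not Colrows internals; it only shows the control flow of binding, scoping, proving the path, applying constraints, and handing a plan off to execution.

```python
# Toy illustration of the pipeline above. Every name is a hypothetical stand-in
# for the five stages described in the steps, not Colrows internals.
from dataclasses import dataclass

class CompilationError(Exception):
    """Raised instead of guessing: out-of-scope or ambiguous requests stop here."""

@dataclass
class Plan:
    concepts: list
    join_path: list
    constraints: list

def bind_intent(question, subgraph):
    # 1. Normalize the question and bind it to governed concepts (toy: substring match).
    hits = [c for c in subgraph["concepts"] if c in question.lower()]
    if not hits:
        raise CompilationError("no concept in the persona's scope matches the question")
    return hits

def prove_join_path(concepts, subgraph):
    # 3. A real planner solves a constrained graph traversal; the toy just looks up
    #    precomputed candidates and refuses unless exactly one exists.
    candidates = subgraph["paths"].get(tuple(sorted(concepts)), [])
    if len(candidates) != 1:
        raise CompilationError(f"join path for {concepts} is ambiguous or missing")
    return candidates[0]

def compile_question(question, persona, graph):
    subgraph = graph[persona]                    # 2. persona resolves the allowed subgraph
    concepts = bind_intent(question, subgraph)   # 1. intent normalization & binding
    path = prove_join_path(concepts, subgraph)   # 3. join path proof
    constraints = subgraph["policies"]           # 4. constraints applied before any plan exists
    return Plan(concepts, path, constraints)     # 5. a later stage renders dialect SQL and executes it

# Toy graph: one persona, two concepts, exactly one proven join path.
graph = {
    "analyst": {
        "concepts": ["revenue", "region"],
        "paths": {("region", "revenue"): [["orders", "order_lines"]]},
        "policies": ["region IN allowed_regions"],
    }
}
print(compile_question("revenue by region last quarter", "analyst", graph))
```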

Privacy & governance

The model never sees your data.

Colrows AI sends the LLM only the relevant slice of the semantic graph - concept names, definitions, and structural relationships scoped to the requester's persona. Row data is fetched by the SQL engine, after compilation, on the data plane.

  • Compile-time governance - RBAC, ABAC, row/column predicates, and persona scope shape the allowed subgraph before planning.
  • No raw schema leakage - only governed semantic surface area is exposed to the LLM.
  • Audit trail by construction - every answer carries a structured trace that is retained indefinitely.
  • Point-in-time reproducibility - re-run any historical question against the semantic state that was active when it was first asked.
  • Model-agnostic - bring your own LLM provider (OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, or self-hosted Llama / Mistral / Qwen).
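
A minimal sketch of that boundary, using made-up field names rather than a documented Colrows payload: the LLM context carries only governed semantic surface area, while row data is only touched by the SQL engine after compilation.

```python
# Illustrative only: assumed field names, not a documented Colrows payload.
llm_context = {
    "persona": "emea_sales_analyst",
    "concepts": [
        {"name": "net_revenue", "definition": "gross revenue minus refunds, at order grain"},
        {"name": "region", "definition": "sales region dimension on the account entity"},
    ],
    "relationships": [
        {"from": "orders", "to": "accounts", "via": "account_id"},
    ],
    # Deliberately absent: table rows, raw column values, the full warehouse schema.
}

# Row data is fetched only after compilation, by the SQL engine on the data plane.
data_plane_sql = "SELECT region, SUM(net_revenue) AS net_revenue FROM orders GROUP BY region"  # never sent to the LLM
```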

Key capabilities

  • Multi-hop questions - Resolves questions that span multiple entities and time windows without ambiguity - including "why did X change?" causal questions.
  • Explainable answers - Every answer is paired with the concepts used, the join path traversed, the constraints applied, and the dialect SQL that ran.
  • Schedule & recur - Promote a question into a Signal - runs on a cron, alerts on threshold, with full lineage to the underlying graph (see the sketch after this list).
  • One-click visualization - Charts and dashboards generated from the same compiled plan. Same definitions, same governance, no copy-paste.
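
A sketch of what promoting a question into a recurring Signal could look like; the configuration keys below (schedule, alert, lineage) are assumptions for illustration, not a documented Signal format.

```python
# Hypothetical Signal definition promoted from an ad-hoc question.
# The keys (schedule, alert, lineage) are illustrative assumptions.
signal = {
    "question": "What was net revenue by region last week?",
    "schedule": "0 7 * * MON",             # standard cron: every Monday at 07:00
    "alert": {
        "metric": "net_revenue",
        "condition": "pct_change < -10",   # fire when the week-over-week drop exceeds 10%
    },
    "lineage": "inherit",                  # the compiled plan and trace travel with the Signal
}
```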

Supported datasources

Colrows AI runs on top of the SQL engine and supports every dialect Colrows supports - see Datasources. In practice that means most enterprise warehouses, lakehouses, and modern OLAP engines: Snowflake, Databricks, BigQuery, Postgres, ClickHouse, Trino / Starburst, Oracle, SQL Server, Exasol, Redshift, and more.
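
To illustrate what dialect specialization means in practice, here are two hand-written renderings of the same logical request ("the ten most recent orders"); the SQL strings are illustrative examples, not Colrows output.

```python
# Hand-written examples of one logical request rendered per dialect; not Colrows output.
renderings = {
    "postgres":  "SELECT * FROM orders ORDER BY order_date DESC LIMIT 10",
    "sqlserver": "SELECT TOP 10 * FROM orders ORDER BY order_date DESC",
}
```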

When Colrows AI refuses

Sometimes the right answer is "no". Colrows AI fails compilation, with an explainable error, when:

  • A concept can't be uniquely resolved in the requester's scope.
  • The join path between two entities is ambiguous and no anchor breaks the tie.
  • A constraint is violated - incompatible grain, non-comparable time windows, or denied PII columns.
  • A metric depends on a node outside the persona's allowed subgraph.

That refusal is the feature. A weak system gives you a confident wrong number; a governed system gives you a clean error you can fix.
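
For illustration, a refusal might surface as a structured error like the sketch below; the shape and field names are assumptions, not the documented Colrows error format.

```python
# Hypothetical shape of an explainable refusal; field names are assumptions.
refusal = {
    "status": "compilation_failed",
    "reason": "ambiguous_join_path",
    "detail": "Two candidate paths connect 'accounts' to 'revenue': "
              "via 'orders' and via 'invoices'; no anchor breaks the tie.",
    "suggestion": "Name the anchor explicitly, e.g. 'revenue from invoices by account'.",
}
```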