How a leading Indian asset reconstruction company eliminated subjectivity, reduced evaluation time by over 95%, and deployed institutional knowledge at machine scale — using the Colrows semantic reasoning layer.
In India's asset reconstruction market, the window to acquire a distressed retail NPA portfolio is brutally narrow. The firm that bids first — and bids accurately — wins. Every day spent evaluating a portfolio is a day a competitor could pre-empt the deal.
The institution had assembled a formidable multi-disciplinary team: technologists, legal specialists, field investigators, and senior executives. But the evaluation process was fundamentally bottlenecked by human bandwidth and sequential workflows.
A standard portfolio evaluation required the team to manually segment thousands of accounts, assess borrower behavioural traits, review KYC and loan documentation, and interpret an evolving regulatory landscape — before arriving at an indicative valuation.
"The process was sound. The people were excellent. But the market doesn't wait for excellent processes."
Internal assessment, pre-deployment review

End-to-end portfolio assessment took several months, constraining the firm's ability to bid competitively in fast-moving auctions.
Correct valuation demanded fluency in RBI SARFAESI, DRT frameworks, and overlapping sub-regulations — expertise that varied across analysts.
Segmentation logic and file interpretation were anchored to individual analysts' experience — creating inconsistent outputs and key-person risk.
Each borrower file contained layered KYC, payment history, field reports, and behavioural indicators — requiring skilled human review throughout.
The obvious question is whether a machine learning model trained on historical NPA portfolio outcomes would solve the problem. It's a legitimate hypothesis — and one the institution examined carefully. The answer reveals something important about the nature of the problem itself.
Predicts recovery probability for borrower profiles it has seen before — fast and scalable for routine accounts.
Surfaces accounts that resemble past defaulters or high-recovery profiles with high throughput.
A model trained on 2021 data doesn't know a 2024 RBI circular changed enforcement thresholds. It reasons from an outdated legal reality — confidently.
Training data reflects portfolios the firm bid on and won — not the full evaluation universe. Edge cases and passed deals are invisible to the model.
The accounts that determine bid profitability — atypical collateral, complex DRT histories, contested SARFAESI enforcements — are precisely where frequency-based models fail.
An investment committee bidding hundreds of crores cannot accept "the model says 34 paise on the rupee." A rationale is required, not a score.
Historical data still powers pattern recognition — but anchored to the semantic graph, not free-floating.
SARFAESI and DRT provisions are modeled as nodes and relationships. Update the graph once; every subsequent evaluation inherits the change instantly.
Unusual accounts are evaluated against the full regulatory and institutional knowledge graph — not extrapolated from statistical frequency.
Every valuation output carries a reasoning trail — which provisions applied, which borrower signals were weighted, which heuristics drove the final figure.
Outputs are structured as defensible rationales, not probability scores — designed to survive scrutiny at the bid approval stage.
Legal heuristics, executive judgment, and field-team expertise are encoded explicitly — not hoped to emerge from training data.
The fundamental insight: NPA evaluation is not primarily a data problem. The data existed and was accessible. The bottleneck was interpretation — knowing which regulatory provision applied, what a payment pattern signalled about borrower intent, how to weight collateral quality against recovery probability in a DRT proceeding. Interpretation requires reasoning, not pattern matching.
Colrows was deployed as a semantic reasoning layer — not a rules engine, not a dashboard, not a conventional AI model. The regulatory graph, tribal knowledge, and trained AI agent work as a unified system. Each component does what it does best.
The AI agent sits above both layers — using the ML component for speed and scale on routine accounts, and the semantic graph for correctness and explainability on complex ones. Neither alone is sufficient. Together, they cover the full evaluation surface.
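The routing idea can be sketched as follows. The routineness test, field names, and recovery figures are all hypothetical; the point is only the division of labour between a fast statistical scorer and graph-anchored reasoning.

```python
# Sketch of the two-layer routing: routine accounts go to the fast
# statistical component, atypical ones to graph-anchored reasoning.
# Thresholds, fields, and recovery figures are hypothetical.

def is_routine(account):
    # Illustrative routineness test: a frequently seen profile with
    # no contested enforcement history.
    return account["profile_frequency"] > 0.05 and not account["contested"]

def ml_score(account):
    # Stand-in for the trained statistical component: fast, no rationale.
    return {"recovery": 0.30, "basis": "statistical", "rationale": None}

def graph_reason(account):
    # Stand-in for graph-anchored reasoning: slower, fully explained.
    return {"recovery": 0.45, "basis": "semantic_graph",
            "rationale": "provision-by-provision reasoning trail"}

def evaluate(account):
    return ml_score(account) if is_routine(account) else graph_reason(account)

routine = {"profile_frequency": 0.12, "contested": False}
atypical = {"profile_frequency": 0.001, "contested": True}
print(evaluate(routine)["basis"])    # statistical
print(evaluate(atypical)["basis"])   # semantic_graph
```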
The agent ingests the raw portfolio and pre-classifies accounts against the semantic parameter graph.
Accounts are segmented by regulatory standing, recovery probability, borrower profile, and collateral quality — simultaneously, at scale.
The agent selects representative samples and performs deep file analysis — surfacing behavioural traits, compliance gaps, and risk indicators.
Drawing on historical outcomes and the regulatory model, the agent derives an indicative portfolio value with a full reasoning trail.
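The four stages above can be sketched end to end. The segment keys, recovery rates, and compliance haircut below are hypothetical stand-ins for the agent's actual logic; what the sketch shows is the shape of the flow, with the reasoning trail carried alongside the number.

```python
# Illustrative end-to-end shape of the four evaluation stages.
# Segment keys, recovery rates, and the compliance haircut are
# hypothetical stand-ins for the agent's actual logic.
from collections import defaultdict

def segment(accounts):
    """Stages 1-2: pre-classify along several axes simultaneously."""
    buckets = defaultdict(list)
    for a in accounts:
        buckets[(a["standing"], a["collateral"])].append(a)
    return buckets

def deep_review(sample):
    """Stage 3: deep file analysis (here reduced to a KYC gap count)."""
    return {"compliance_gaps": sum(1 for a in sample if not a["kyc_complete"])}

def indicative_value(buckets, review):
    """Stage 4: derive a value and keep the reasoning trail with it."""
    trail, total = [], 0.0
    for (standing, collateral), accts in buckets.items():
        rate = 0.45 if (standing, collateral) == ("clean", "strong") else 0.20
        total += rate * sum(a["outstanding"] for a in accts)
        trail.append(f"{len(accts)} accounts [{standing}/{collateral}] at {rate:.0%}")
    haircut = 0.05 * review["compliance_gaps"]   # hypothetical penalty
    trail.append(f"compliance haircut: {haircut:.0%}")
    return total * (1 - haircut), trail

portfolio = [
    {"standing": "clean", "collateral": "strong",
     "outstanding": 1_000_000, "kyc_complete": True},
    {"standing": "contested", "collateral": "weak",
     "outstanding": 500_000, "kyc_complete": False},
]
# For this tiny example, the whole portfolio stands in for the sample.
value, trail = indicative_value(segment(portfolio), deep_review(portfolio))
print(round(value))  # 522500
```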
In NPA markets, the firms that consistently overpay are almost never the ones with bad analysts. They are the ones whose evaluation frameworks have quietly fallen behind the regulatory environment. Regulatory drift is silent, cumulative, and expensive — and it is the specific failure mode that a purely data-trained model cannot detect.
A model trained before the amendment continues applying the old threshold to every account it evaluates. Entire account categories shift legal enforceability status — the model doesn't know. Valuations drift from reality, silently.
Historical recovery timelines underpin ML probability estimates. When DRT processing times shift due to procedural changes or case backlog, every timeline-dependent valuation becomes systematically miscalibrated — and the model has no way to surface this to the analyst.
When a regulatory change occurs, the Colrows graph is updated at the node and relationship level. Every subsequent evaluation — that day, that hour — automatically reflects the new regulatory reality. No retraining cycle. No lag. No silent miscalibration compounding across a portfolio.
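What node-level updating looks like in practice can be sketched in a few lines. The node name and threshold amounts are illustrative, not actual regulatory figures; the point is that evaluations read the live graph, so there is no frozen snapshot to retrain.

```python
# Sketch of node-level update propagation. Node names and threshold
# amounts are illustrative, not actual regulatory figures.

graph = {"sarfaesi_min_outstanding": {"value_inr": 100_000}}

def enforceable(account):
    # Every evaluation reads the live graph; there is no frozen snapshot.
    return account["outstanding_inr"] >= graph["sarfaesi_min_outstanding"]["value_inr"]

account = {"outstanding_inr": 250_000}
print(enforceable(account))   # True under the old threshold

# An amendment lands: the node is updated once...
graph["sarfaesi_min_outstanding"]["value_inr"] = 500_000

# ...and the very next evaluation inherits the new reality. No retraining.
print(enforceable(account))   # False under the new threshold
```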
In asset reconstruction, a valuation that cannot be explained is a valuation that cannot be defended — to an investment committee, to a regulator, or to a counterparty challenging the bid. Explainability is not a nice-to-have. It is a governance requirement.
The Colrows agent produces structured reasoning outputs alongside every valuation, tracing exactly which regulatory provisions, borrower signals, and institutional heuristics drove each conclusion. Below is a representative output structure for a single evaluated account.
Reasoning: Clean legal standing + settlement intent + strong collateral coverage supports above-average recovery. Colrows recommends inclusion in bid at upper-band pricing.
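A structured rendering of such an output might look like the following. The field names, figures, and cited provision are illustrative, not the actual Colrows output schema.

```python
import json

# Illustrative shape of a per-account reasoning output. Every field
# name and value here is hypothetical, not the actual Colrows schema.
evaluation = {
    "account_id": "ACCT-0001",               # hypothetical identifier
    "indicative_recovery": 0.45,             # fraction of outstanding
    "recommendation": "include_at_upper_band",
    "reasoning_trail": [
        {"signal": "legal_standing", "finding": "clean",
         "provision": "SARFAESI demand notice validly served"},
        {"signal": "borrower_intent", "finding": "settlement_inclined",
         "evidence": "two unsolicited settlement offers in file"},
        {"signal": "collateral", "finding": "strong_coverage",
         "heuristic": "collateral value exceeds 1.5x outstanding"},
    ],
}
print(json.dumps(evaluation, indent=2))
```

Each entry in the trail ties a conclusion to a provision, a piece of file evidence, or an encoded heuristic, which is what lets the number survive investment-committee scrutiny.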
"The investment committee didn't ask us to justify the number. For the first time, the number came with its own justification — traceable to regulation, to precedent, to the account itself."
Post-deployment reflection

The deployment reframed what evaluation means for the institution, shifting it from a capacity constraint to a strategic capability.
What previously consumed multiple months is now completed within hours — enabling the firm to evaluate and bid on portfolios that were previously logistically inaccessible.
Every account evaluation is automatically cross-referenced against the full SARFAESI and DRT regulatory surface — not just the provisions an individual analyst happened to know.
Legal, technical, field, and executive knowledge — previously distributed across individuals and siloed in experience — now operates as a single coherent reasoning system.
With evaluation latency eliminated, the firm can participate in a significantly larger share of portfolio auctions — including fast-moving deals that previously expired before assessment was complete.
Conventional AI approaches can surface patterns. But they cannot reliably reason about why those patterns matter, or apply that reasoning consistently against a shifting regulatory backdrop.
Colrows constructs a semantic graph — a structured representation of the domain in which concepts, relationships, rules, and heuristics are explicit nodes and edges, not latent parameters. The AI agent is then anchored to this graph, ensuring every inference is traceable, auditable, and grounded in the firm's own institutional reasoning.
The result is a system that combines the scale of ML with the correctness of structured legal and institutional knowledge — and produces outputs that survive both market scrutiny and investment committee review.