Signals & Noise
Structural Analysis 2026-02-23 EST: 12_MIN

Decoding the 2025 DILR Trap Sets

A post-mortem of the infamously opaque routing puzzles from last year's paper, and the precise moment at which 90% of candidates lost their 99th percentile.

[ TRANSMISSION ]

The 2025 CAT DILR section produced one of the sharpest performance cliffs in recent memory. The median score in the top decile dropped by approximately 4 scaled points compared to 2024, and the distribution of correct attempts in the routing puzzle sets showed a peculiar pattern: most candidates who attempted these sets got the first two questions correct and the final two wrong. This is not randomness. It is a structural trap.

The routing puzzles in Slot 2 were designed around a single, non-obvious constraint. The constraint was not hidden — it was stated in the second line of the problem description. But it was stated in a way that the vast majority of candidates, under time pressure, pattern-matched to a more familiar constraint type and proceeded on an incorrect assumption.

The Architecture of a Trap Set

CAT setters do not write hard questions by making the logic obscure. They write hard questions by exploiting the gap between what you read and what you process. A trap set has three structural components: a surface structure that resembles a familiar problem type, a hidden asymmetry in the constraint, and a final question that is trivially easy if the constraint was correctly parsed and impossible if it was not.

In the 2025 routing puzzles, the surface structure was a standard sequencing problem — five couriers assigned to five routes with exclusion conditions. The hidden asymmetry was that two of the exclusion conditions were bidirectional (if A cannot follow B, then B cannot follow A) while one was unidirectional. Candidates who parsed all three as bidirectional could construct a consistent partial table for the first two questions. By the third question, the table collapsed.
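The over-constraining effect of that misreading is easy to demonstrate. The sketch below uses hypothetical courier labels and constraints (the actual 2025 set is not reproduced here) to show what happens when a unidirectional exclusion is parsed as bidirectional: the flawed model is a strict subset of the correct one, so it can agree with the correct model on early questions and still be unsolvable later.

```python
from itertools import permutations

COURIERS = "ABCDE"  # hypothetical labels, not the actual 2025 set

# Hypothetical constraints in the spirit of the set described above.
# A forbidden pair (x, y) means y may not appear immediately after x.
BIDIRECTIONAL = [("A", "B"), ("C", "D")]  # forbidden in both directions
UNIDIRECTIONAL = [("A", "E")]             # E may not follow A; A after E is fine

def forbidden_pairs(misread_as_bidirectional: bool) -> set:
    pairs = set()
    for x, y in BIDIRECTIONAL:
        pairs.add((x, y))
        pairs.add((y, x))
    for x, y in UNIDIRECTIONAL:
        pairs.add((x, y))
        if misread_as_bidirectional:
            # the trap: forbidding a direction the problem never forbade
            pairs.add((y, x))
    return pairs

def valid_orders(misread: bool) -> list:
    pairs = forbidden_pairs(misread)
    return [p for p in permutations(COURIERS)
            if not any((p[i], p[i + 1]) in pairs for i in range(len(p) - 1))]

correct = valid_orders(misread=False)
misread = valid_orders(misread=True)

# The misread model rules out orderings the setter allows, so every
# ordering it admits is also admitted by the correct model -- but not
# vice versa. Questions probing only the shared orderings look fine;
# a question whose answer lives in the excluded region collapses.
assert set(misread) < set(correct)
print(len(correct), len(misread))
```

The point of the sketch is structural, not numerical: because the misread constraint set is a superset of the real one, a candidate's partial table can stay internally consistent for two questions and only contradict itself when a later question forces an ordering the misreading had wrongly excluded.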

The Two-Minute Entry Decision

The correct response to this set, in retrospect, was a 90-second triage at entry: scan all constraints before writing a single deduction. Candidates who identified the unidirectional constraint at the start solved all four questions in under nine minutes. Candidates who began deducing immediately built a flawed model and spent the rest of their time defending it.

This is the meta-skill that separates 99th-percentile DILR performance from 95th-percentile performance. It is not raw logical speed. It is the discipline to invest the first two minutes of a set into structural parsing rather than solution generation. The time cost of this discipline is approximately 90 seconds. The time benefit, on a correctly selected set, is five minutes of clean, unambiguous deduction.

What to Practice Instead

The practical implication is that DILR practice should be structured around set triage and constraint classification, not just solution speed. For every set you attempt, the productive training question is not 'Did I get it right?' but 'How quickly did I correctly identify the binding constraint, and how did I know which constraint was binding?'

A high-quality DILR error analysis tags each incorrect attempt with the constraint that was misread and the deduction step where the model first broke. This is exactly the data that AdaptHub's telemetry captures — not just whether you were right or wrong, but which specific reasoning step failed. Over 30 or more tagged attempts, your personal constraint-blindness pattern becomes visible, correctable, and ultimately eliminated.
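For self-study without telemetry, the same tagging discipline can be kept by hand. The sketch below is a minimal, hypothetical schema (the field names and tag vocabulary are illustrative, not AdaptHub's actual format): record each attempt with the misread constraint type and the step where the model broke, then aggregate over attempts to surface the dominant blindness pattern.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttemptTag:
    set_id: str                        # which practice set
    correct: bool                      # was the attempt right overall
    misread_constraint: Optional[str]  # e.g. "unidirectional-as-bidirectional"
    first_broken_step: Optional[int]   # deduction step where the model diverged

def blindness_profile(tags):
    # Count misread-constraint types across incorrect attempts only
    return Counter(t.misread_constraint for t in tags
                   if not t.correct and t.misread_constraint)

# Illustrative log (fabricated set IDs and tags for the example):
tags = [
    AttemptTag("routing-1", False, "unidirectional-as-bidirectional", 3),
    AttemptTag("grid-4", True, None, None),
    AttemptTag("routing-2", False, "unidirectional-as-bidirectional", 2),
    AttemptTag("venn-2", False, "subset-as-disjoint", 5),
]
print(blindness_profile(tags).most_common(1))
# the most common tag is the pattern worth drilling first
```

The aggregation is deliberately coarse: the goal is not a precise error taxonomy but a ranked list of which constraint type you misread most often, so that targeted re-reading drills can be pointed at it.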
