Formal Philosophy
Tutorial 31

Hybrid geometry-changing loops

Study loops that first change ambiguity geometry and then compile or control the residue, rather than only learning a front-end for a verifier.

Built 2026-04-07

The motivating contrast

Verifier-compilers are one important neuro-symbolic loop family. They try to learn a reusable symbolic front-end for an exact verifier. But they are not the only strong loops. Another family works differently. Instead of compiling the whole task family at once, it first changes the geometry of the task space and only then compiles or controls the smaller residue that remains. That is the design pattern this tutorial isolates.

Vocabulary note

Quotient means a partition of the task family into cases the current loop cannot yet tell apart. Residue means the smaller leftover problem after that first partition. Controller means the compact symbolic rule that handles that leftover problem. Frontier means the best checked tradeoff on a named bounded family.

The front stage can take several forms. It might build a witness basis, apply a quotient map, change the temporal basis, or decompose the task family into structural regimes. The back stage then acts on whatever has been exposed by that reshaping step. It might ask a smaller sequence of questions, run a residual verifier, or compile a direct amount law for the cases that remain. The common idea is always the same: pay to simplify the geometry first, then solve a smaller problem second.

Direct control versus hybrid control

Figure: hybrid loop comparison. A teaching figure comparing three loop designs on the same requirements-discovery task. The comparison is honest only when acquisition, pure resolved mass, and residual controller depth are priced separately, via U = alpha * pure_resolved_mass + beta * depth_saving - gamma * acquisition.
- direct (raw-family controller): one controller acts on every omission set directly; acquisition = 0, pure mass = 0, depth = n.
- pair + atom (geometry first, then atoms): pair witnesses remove every non-singleton case before the residual controller even starts; acquisition = C(n, 2), pure mass = 2^n - n - 1, depth = n - 1.
- pair + block (geometry first, then blocks): the same pair basis leaves only singleton residue, then block questions cut that residue logarithmically; acquisition = C(n, 2), pure mass = 2^n - n - 1, depth = ceil(log2 n).
Deep lesson: the hybrid win is not only a smaller controller. It comes from settling most of the family before the controller begins.
The decisive difference is not only controller size. The hybrid loop pays for front-stage geometry change, settles a large mass of cases early, then uses a smaller residual controller.

Part I: the cleanest bounded example

The clearest example so far comes from requirements discovery. On exact missing-set identification with the same block-query language, a direct raw-family controller needs depth n, while a pair-basis plus block-separator residual controller needs only ceil(log2 n). That is a real structural win on the controller side.

The gain comes from changing geometry first. The pair basis does not magically answer the whole problem. It reorganizes the family so that the remaining ambiguity collapses toward a singleton residue before the residual controller even starts. Once the loop reaches that smaller residue, the back-stage controller is solving a much easier problem than the direct controller was asked to solve at the start.
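The bookkeeping behind that win can be checked numerically. The sketch below is my own illustration (the function names are not from the tutorial); it computes, for a family of size n, the three quantities the comparison prices: front-stage acquisition C(n, 2), pure resolved mass 2^n - n - 1, and residual controller depth ceil(log2 n) versus the direct loop's depth n.

```python
import math

def direct_depth(n):
    # Direct raw-family controller: one question tree over every omission set.
    return n

def hybrid_costs(n):
    # Pair-basis front stage plus block-separator residual controller.
    acquisition = math.comb(n, 2)      # pair witnesses collected up front
    pure_mass = 2**n - n - 1           # non-singleton cases settled early
    depth = math.ceil(math.log2(n))    # residual controller on singleton residue
    return acquisition, pure_mass, depth

for n in (4, 8, 16):
    acq, mass, depth = hybrid_costs(n)
    print(f"n={n}: direct depth {direct_depth(n)}, "
          f"hybrid acquisition {acq}, pure mass {mass}, depth {depth}")
```

Even at n = 8 the depth gap (8 versus 3) is visible, while the acquisition column shows what the hybrid loop paid to get it.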

Part II: why cost models matter

That does not mean the hybrid loop is automatically better in every regime: witness acquisition still costs something. The bounded comparison line introduced a cleaner way to talk about that tradeoff:

U = alpha * pure_resolved_mass + beta * depth_saving - gamma * acquisition

Formula refresher

`alpha` weights the value of cases settled early. `beta` weights the value of a smaller residual controller. `gamma` weights the cost of collecting the front-stage structure. The formula is therefore a scored tradeoff, not a universal law of nature. It says which loop wins under the chosen cost model.

This matters because “better” becomes conditional. Some loops win when all cases are priced equally. Others win only when acquisition cost is expensive enough to matter. The right comparison is therefore never “which loop is smarter in the abstract.” It is always “which loop wins under the stated weights.”
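The conditional nature of "better" can be made concrete by scoring each loop under stated weights. A minimal sketch, assuming the cost table from the comparison figure and reading depth_saving as depth reduction relative to the direct loop (my reading; the tutorial does not spell that out):

```python
import math

def utility(alpha, beta, gamma, pure_mass, depth_saving, acquisition):
    # U = alpha * pure_resolved_mass + beta * depth_saving - gamma * acquisition
    return alpha * pure_mass + beta * depth_saving - gamma * acquisition

def winner(n, alpha, beta, gamma):
    # (pure mass, depth saving vs direct, acquisition) for each loop design.
    loops = {
        "direct":     (0, 0, 0),
        "pair+atom":  (2**n - n - 1, 1, math.comb(n, 2)),
        "pair+block": (2**n - n - 1, n - math.ceil(math.log2(n)), math.comb(n, 2)),
    }
    return max(loops, key=lambda name: utility(alpha, beta, gamma, *loops[name]))

# Cheap acquisition favors the hybrid; expensive acquisition hands the win back
# to the direct loop under the very same cost table.
print(winner(8, alpha=1.0, beta=1.0, gamma=0.1))
print(winner(8, alpha=1.0, beta=1.0, gamma=100.0))
```

Sweeping gamma while holding alpha and beta fixed is exactly the "which loop wins under the stated weights" question in executable form.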

Hands-on exploration

The Hybrid Loop Comparison Lab keeps the weighting question concrete: change the front-stage acquisition cost and watch the preferred loop change with it.

Part III: graph-side regime compilers

The corrected graph line suggests a second kind of hybrid loop. Here the direct compiler does not come from one monolithic family. It comes from taking the maximum of several exact regimes, namely balanced star forests, repaired multipartite structure, and, on the checked small domain, one explicit point correction. That is already a geometry-changing loop because the frontier is being decomposed before anything is compiled.

Instead of learning one front-end for one verifier, the loop carves the frontier into exact regions and compiles those regions directly. On the checked high band, that overlap compresses into a piecewise compiler with a low plateau, a middle interval handled by the star-plus-leaf family, a near-top multipartite peak, and a top tie. The important point is not the list of regions by itself. The important point is that the competition between structural families has become a direct symbolic object that can be compiled.
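The dispatch structure of such a piecewise compiler can be sketched directly. Everything below is illustrative scaffolding: the tutorial only names the regimes, so `toy_regimes` uses placeholder stand-ins rather than the actual formulas, and only the shape of the dispatch, balanced stars on the low-edge band and a maximum over competing exact regimes in the overlap band, reflects the text.

```python
def compile_frontier(n, m, regimes):
    """Piecewise regime-max compiler over the checked band.

    `regimes` is a list of exact regime functions f(n, m) -> value, with the
    balanced-star regime first.
    """
    if m <= n - 1:
        # Low-edge regime: balanced star forests are exact on their own.
        return regimes[0](n, m)
    if m <= 2 * n - 3:
        # Overlap band: the exact frontier is the max over competing regimes.
        return max(f(n, m) for f in regimes)
    raise ValueError("beyond the checked band: still open on this branch")

# Toy stand-ins (NOT the tutorial's formulas), just to exercise the dispatch.
toy_regimes = [lambda n, m: n + m, lambda n, m: 2 * m]
print(compile_frontier(10, 5, toy_regimes))   # low-edge band: first regime only
print(compile_frontier(10, 12, toy_regimes))  # overlap band: max of regimes
```

The point of the sketch is that the competition between structural families is itself an ordinary symbolic object: a max over exact region functions, guarded by band boundaries.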

High-band overlap on the corrected graph line

Figure: corrected graph regime map. The corrected graph frontier splits into a low-edge balanced-star regime and a higher overlap band where star-plus-low-edge and repaired multipartite compete; the odd line is a special selector slice inside the higher band.
- Low-edge regime (m up to n - 1): balanced star forests are exact.
- High overlap band (m up to 2n - 3): exact frontier = max(star-plus-low-edge, repaired multipartite).
- Star-plus-low-edge family: full star plus low-edge leaf correction, OPT_star_leaf(n, m) = 2^(n-1) - 1 + F_bal(n-1, e); the odd-line middle rung is a perfect matching on leaves.
- Repaired multipartite side: base partitions plus up to two internal edges; wins on some middle budgets before the star side returns.
- Odd-line selector slice (n = 2r + 1, m = 3r): r = 1 is a tie, r = 2 multipartite wins, r ≥ 3 star-plus-leaf is the only selector in family.
- Beyond the checked band: still open on this branch.
Teaching rule: the corrected frontier is easiest to understand as a regime map with a genuine overlap band.
The high checked band is now a genuine overlap zone. The exact compiler chooses between star-plus-low-edge and repaired multipartite rather than treating the graph line as one monolithic family.

Hands-on exploration

The Graph Regime Compiler Lab turns the regime-overlap story into a live piecewise compiler, so the reader can watch the frontier split into exact regions instead of only reading the description.

Part IV: the low-edge concentration mechanism

The low-edge analysis changed character at this point. It no longer has only a family compiler. It now has a proof-shaped hybrid with two distinct stages. First each connected component is starified. Then the resulting star sizes are balanced.

Advanced detail

Part IV is the densest section of the tutorial. The reader only needs the reshape-then-balance picture on a first pass. The rest of the section explains how local move language, depth bounds, and finite alphabets refine that same story.

Graph vocabulary note

Star means the graph-theory shape with one hub vertex and several leaf vertices attached to it. Starify means reshape a component into that form. Balance star sizes means redistribute leaves so the component stars are as even in size as the model allows. A pendant subtree is a subtree hanging off the rest of the graph at one attachment point. A hub-target move is a local move that pushes mass toward a hub rather than away from it.

The first stage survives in a surprisingly clean form on the checked tree domain. An improving pendant-subtree move can always be chosen so that it moves a pendant subtree with one internal fork toward a hub. That hub-target form survives, while the weaker leaf-only and pendant-star-only variants fail. The second stage is then exact balancing on the star-family side. So the hybrid is not merely an interpretation placed on top of the data. It is a two-stage mechanism whose first half and second half can each be stated precisely.

The composed two-stage result

This subsection sharpens the depth bound and move alphabet. It can be skipped on a first reading without losing the main hybrid picture.

The low-edge line is another hybrid, but of a different kind. It begins with a concentration stage and only then applies an exact balancing stage. The newer composition result says that this is not merely a convenient interpretation. On the checked low-edge forest family, it is the surviving mechanism.

The tree-side subproblem sharpens that claim. The concentration path itself can stay inside the smaller local move language, so the method does not need to fall back to arbitrary pendant-subtree moves after the first step. The newest checked refinement is stronger again: the path survives even if each step is restricted to a smallest available hub-target move of the same form.

That controller now has an exact checked depth cutoff. On the checked tree domain, the smallest surviving branching-depth bound is 2. The cutoff is not arbitrary. A terminal-cherry ladder family, a tree family built by stacking terminal cherry attachments, witnesses it because the h = 2 case is exactly the shape that forces depth 2. The positive side is exact too. Every move selected by that depth-2 controller belongs to a finite rooted alphabet consisting of leaves, cherries, three-leaf stars, broom-1 shapes, and broom-2 shapes. Here the broom names refer to two small rooted tree templates with one long handle and a short branching head. That alphabet is already minimal on the checked domain, so no proper subset survives.

The exceptional route also fits the hybrid pattern

One checked two-fan state still escapes the stable local controller, namely NTF(2, 8, 2) on n = 13. At first that looked like bad news for compression because it introduced a special route through several named intermediate families.

The newer result reverses that impression. The route now has a compact amount law. On the feeder side, the exact count splits into a Fibonacci term plus a four-step periodic correction. If B(r, t) names that feeder count, then the full route deficit is simply the star amount minus B(r, t):

Delta(r, t) = 2^(r + t + 5) - 1 - B(r, t)
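The amount law is direct to state in code. The sketch below takes the feeder count B(r, t) as given, passed in as a function, because the tutorial says it splits into a Fibonacci term plus a four-step periodic correction without spelling that split out; only the deficit formula itself comes from the text.

```python
def route_deficit(r, t, B):
    # Delta(r, t) = 2^(r + t + 5) - 1 - B(r, t):
    # the star amount minus the feeder-side exact count.
    star_amount = 2 ** (r + t + 5) - 1
    return star_amount - B(r, t)

# With a stand-in feeder count of 0, the deficit is just the star amount.
print(route_deficit(2, 3, lambda r, t: 0))
```

Whatever the internal structure of B turns out to be, the route deficit stays a one-line correction against the star amount.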

So even the exceptional case ends up following the same hybrid pattern. The loop first moves into better structural coordinates and then exposes a much smaller symbolic mechanism there. The special case did not break the hybrid picture. It forced the hybrid picture to become more explicit.

Low-edge concentration and balancing

Figure: low-edge concentration map. The corrected low-edge graph branch can be read as a two-stage process whose endpoint is the balanced star forest. On the checked low-edge branch, the best loop does not jump straight to a formula: it first concentrates shape, then balances size.
- Start: positive low-edge forest with arbitrary component shapes, same total vertices, same edge budget.
- Stage 1 (starify components, pendant subtree to hub): component shape is concentrated, though the size profile may still be uneven.
- Stage 2 (balance sizes, pairwise smoothing on star sizes): shape concentrated and size profile balanced, giving F_bal(n, m).
Teaching rule: the low-edge branch is no longer only a frontier fit. It is a two-stage concentration process.
The low-edge analysis is a hybrid too. It first concentrates component shape, then applies an exact size-balancing law.

Part V: recurring hybrid patterns

The strongest future loop candidates are unlikely to be single-stage loops. The current bounded evidence points toward three recurring patterns: a quotient stage followed by a residual controller, a regime decomposition followed by a direct compiler, and a concentration stage followed by exact balancing. Those patterns look different on the surface, but they express the same idea: the front stage pays an acquisition cost to reshape the problem, and the back stage reaps the savings on a smaller residue. These loops reshape first and solve the smaller residue second.

Part VI: what these bounded results have achieved

This tutorial line can now teach two broad lessons about loop design. The first is that geometry change can beat direct control, but only when the comparison is priced honestly. The pair-basis plus block-separator example remains the cleanest case, and the cost model from Part II explains why witness acquisition and residual savings have to be evaluated together rather than separately.

The second is that hybrid loops come in more than one exact form. Some become regime compilers, as on the graph line, where the frontier itself decomposes into exact regions. Others become concentration processes, as on the low-edge line, where a reshape stage exposes a much smaller balancing law. Together, those lessons justify hybrid geometry-changing loops as a tutorial line in their own right.

This page sits closest to Tutorial 27: Verifier-compiler loops, Tutorial 29: Loop-space geometry, and Tutorial 30: Counterexample-guided requirements discovery. Tutorial 27 is the best comparison if the question is when a front-end can be compiled directly. Tutorial 29 is the better next step if the focus is loop-space shape. Tutorial 30 is the better next step if the interest is requirements discovery as a geometry-changing loop.