Formal Philosophy
Tutorial 29

Loop-space geometry

Map neuro-symbolic loops into witness languages, quotients, separator policies, label bases, and compiled artifacts, then study the bounded results that show why some loops are structurally stronger than others.

Built 2026-04-01

The motivating question

Two loops can solve the same task and still be very different.

One loop might:

  • ask the verifier again
  • collect one more example
  • and continue almost unchanged

Another loop might:

  • change the witness basis
  • collapse the task into ambiguity classes
  • and then run a much smaller controller on what remains

That difference is the reason to talk about loop-space geometry.

The best current bounded results suggest that the strongest loops are usually not just:

  • better prompts
  • larger models
  • or a bigger verifier budget

They are stronger because they reshape the remaining search problem.

Working Vocabulary

  • Witness means a small concrete object that exposes structure, for example a counterexample, signature, trace, or parameter choice. It does not mean eyewitness testimony.
  • Quotient means a grouping of cases that the current loop cannot distinguish. If two hidden targets induce the same stored state, they land in the same quotient class.
  • Residue means what is still unresolved after that grouping. It is the smaller problem that remains after the first compression.
  • Controller means a compact symbolic rule for what to do next, for example route, ask, accept, reject, or refine. It does not mean a hardware controller.
  • Frontier means the best checked tradeoff or best checked value on a named bounded family. It does not mean an unexplored frontier in the loose everyday sense.
  • Carve means make an initial coarse split of the family, or restrict attention to a smaller subfamily, before using a finer second-stage language.
  • Star means the graph-theory shape with one hub vertex connected to several leaves. A star forest is a disjoint union of those shapes.
  • Repair in the nearby verifier and software-repair tutorials means a structured correction object that fixes a mixed or failing case, not just informal cleanup.

Part I: the main axes of loop space

A neuro-symbolic loop can be factored into at least six axes.

  • Witness language: what counts as an admissible failure, trace, or local certificate
  • Stored state: what the loop keeps after collecting evidence
  • Ambiguity quotient: which hidden targets still look the same after that state is stored
  • Separator language: what extra questions, tests, or residual policies are allowed above the quotient
  • Label basis: the coordinates in which the loop names outcomes
  • Target artifact: what reusable symbolic object the loop is trying to compile

That factorization is useful because it separates changes that are often mixed together in informal descriptions.
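As an illustration only, the factorization can be written down as a plain record. The field values below describe the pair-basis requirements loop discussed later; the field names are this tutorial's own vocabulary, not an established API:

```python
from dataclasses import dataclass

@dataclass
class LoopSpec:
    """One point in loop space, factored along the six axes above."""
    witness_language: str    # admissible failures, traces, local certificates
    stored_state: str        # what the loop keeps after collecting evidence
    ambiguity_quotient: str  # which hidden targets remain indistinguishable
    separator_language: str  # extra questions/tests allowed above the quotient
    label_basis: str         # coordinates in which outcomes are named
    target_artifact: str     # reusable symbolic object being compiled

# The pair-basis requirements loop, described along the same six axes:
pair_loop = LoopSpec(
    witness_language="all pairs S with S ⊆ M",
    stored_state="observation map O_pair(M)",
    ambiguity_quotient="singleton uncertainty",
    separator_language="block-intersection queries",
    label_basis="flat requirement atoms",
    target_artifact="question policy of depth ceil(log2 n)",
)
```

Writing the axes as explicit fields makes it easy to see which axis a proposed loop change actually moves.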

Loop-space geometry map

[Figure: loop-space geometry map. Stages: witness language (singletons, pairs, temporal traces, motifs) → observation map O(x) (stored traces, residual evidence) → ambiguity quotient (what still looks the same; geometry changes here) → separator policy (membership, block query, residual verifier) → compiled artifact (question policy, small controller, regime compiler, repair grammar). The label basis (flat labels, monitor cells, temporal traces) is a further axis. Teaching rule: the best loop is often not the one that predicts better; it is the one that leaves a simpler residue after the first stage.]
A loop gets stronger when it changes witness geometry first, then compiles or controls only the smaller residue that survives.

The cleanest mental model is a sequence of compressions:

  1. observe failures or witnesses
  2. store them in a state
  3. collapse the hidden family into ambiguity classes
  4. run the smallest separator policy needed above that quotient
  5. compile the surviving structure into a controller, question policy, or regime law

The point is not speed alone. A geometry-changing loop reshapes what remains to be searched.

Part II: the first big correction

The simplest story says:

  • collect counterexamples
  • repair the missing part
  • ask a stakeholder when needed

The bounded results correct that story.

After saturated witness collection, the right object is not a bag of examples. It is an observation map:

O_W(M) = {S in W | S ⊆ M}

where:

  • W is a witness library
  • M is the hidden target, for example a missing requirement set

Quick Logic Refresher

  • `⊆` means "is a subset of". It says every element on the left is contained in the set on the right.
  • `S ⊆ M` means the witness signature S is fully contained inside the hidden target M.
  • `O_W(M)` is the set of all witnesses from W that fit inside M.
  • `M ~ M'` means the loop cannot currently tell M and M' apart. Here ~ means indistinguishable under the current observation state, not vague similarity.

That moves the loop from example collection to quotient construction.

Two hidden targets are equivalent for the loop when they induce the same observation state:

M ~ M'  iff  O_W(M) = O_W(M')

A plain direct controller acts on the raw target family. A geometry-changing loop acts on this quotient first, then controls only the residue.
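Both constructions can be sketched in a few lines. The witness library and target family below are toy assumptions chosen to mirror the four-atom example used later, not data from the tutorial's checked runs:

```python
from collections import defaultdict
from itertools import combinations

def observation_map(W, M):
    """O_W(M): all witness signatures in the library W that fit inside M."""
    return frozenset(S for S in W if S <= M)

def quotient_classes(W, family):
    """Group hidden targets by observation state: M ~ M' iff O_W(M) = O_W(M')."""
    classes = defaultdict(list)
    for M in family:
        classes[observation_map(W, M)].append(M)
    return list(classes.values())

atoms = {1, 2, 3, 4}
# Pair witness library over four atoms.
W_pair = {frozenset(p) for p in combinations(atoms, 2)}
# Singleton omission family: the hidden target is one missing atom.
family = [frozenset({a}) for a in atoms]

# No pair fits inside a singleton, so every singleton induces the same
# (empty) observation state: the quotient is a single class of size 4.
print(quotient_classes(W_pair, family))
```

The single surviving class is exactly the residual singleton ambiguity that a separator policy then has to resolve.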

Part III: the requirements-discovery case study

The requirements case study is the clearest worked example.

Its core question is:

When a counterexample exposes a missing requirement, how much of the repair is already determined by witness structure, and how much still needs follow-up questions?

Three bounded corrections matter here.

1. Pure recovery is not only about singleton witnesses

The first step says direct recovery works when every missing requirement has a singleton witness. That is real, but it is not the whole story.

Even without singleton witnesses, pure recovery can still succeed if the full observation map is injective on the omission family.

So the real question is whether the stored state already separates the hidden targets, not only whether a singleton witness exists.
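That criterion can be checked mechanically. In the toy sketch below (this tutorial's notation, illustrative sets), the witness library contains no singletons at all, yet the observation map is still injective on the family, so pure recovery succeeds:

```python
def observation_map(W, M):
    """O_W(M): witnesses in W contained in the hidden target M."""
    return frozenset(S for S in W if S <= M)

def is_injective_on(W, family):
    """Pure recovery is possible iff distinct targets induce distinct states."""
    states = [observation_map(W, M) for M in family]
    return len(set(states)) == len(states)

# Pair-only witness library over three atoms: no singleton witnesses exist.
W = {frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})}
# Omission family of two-atom targets.
family = [frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})]

# Each target contains exactly its own pair, so the states are distinct.
print(is_injective_on(W, family))
```

The same check fails on the singleton omission family from the figure below the pair basis, which is why that case needs follow-up questions.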

2. Scope matters sharply

On unrestricted omission families, singleton witnesses remain a global bottleneck.

On scoped families, especially pair-lobotomy families, oracle help becomes strictly stronger.

That means requirements discovery is not one monolithic task. The omission family changes the loop geometry.

3. Pair basis plus good separators is a real hybrid

Once the witness library contains all pairs:

  • the residual ambiguity collapses to singleton uncertainty
  • the remaining difficulty moves to separator language

Then the bounded ladder becomes:

  • pair-subset queries, no help
  • singleton-membership queries, linear depth
  • block-intersection queries, logarithmic depth

That is the first clean example of a loop becoming stronger by changing geometry first and only then using a stronger residual controller.
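The logarithmic-depth rung can be sketched as binary splitting against an intersection oracle. The oracle here is simulated and the function names are illustrative, not part of the tutorial's checked artifacts:

```python
from math import ceil, log2

def identify_singleton(atoms, intersects):
    """Find the hidden missing atom with block-intersection queries.

    `intersects(block)` answers: does the hidden set M meet this block?
    With a singleton residue, binary splitting needs ceil(log2 n) questions.
    """
    candidates = sorted(atoms)
    depth = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        depth += 1
        candidates = half if intersects(set(half)) else candidates[len(half):]
    return candidates[0], depth

atoms = {1, 2, 3, 4}
hidden = {3}
atom, depth = identify_singleton(atoms, lambda block: bool(hidden & block))
print(atom, depth, ceil(log2(len(atoms))))   # 3 2 2
```

This is the same two-question run shown in the requirements-discovery figure: first split {1, 2} from {3, 4}, then isolate the answer.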

Counterexamples, quotient, then separator policy

[Figure: requirements discovery ladder, with four requirement atoms and a hidden singleton omission {r3}. Raw family of possible missing sets: {r1}, {r2}, {r3}, {r4}. Pair-basis observation: no pair S satisfies S ⊆ {r3}, so O_pair({r3}) = ∅ and every singleton still looks possible. Residual singleton family after the pair basis: r1, r2, r3, r4; only a separator question is missing. Block separators: question 1, "does M intersect {r1, r2}?", answer no → {r3, r4}; question 2, "does M intersect {r3}?", answer yes → identify r3; depth = ceil(log2 4) = 2. Teaching rule: the pair basis changes geometry first; block questions only solve the small singleton residue that remains.]
The pair basis collapses the hidden family to a much smaller singleton residue. Only then do block questions do the remaining work.


Part IV: temporal labels are also part of loop space

Loop-space geometry is not only about witnesses and separators.

It also includes the label basis.

The temporal-label result makes a small but important point:

  • raw monitor-cell labels are strictly finer than flat two-step trace labels on the full family
  • after the right first coarse split, the richer temporal basis stops buying a finer partition

That means a stronger basis is not always the right global basis. Sometimes it is the right second-stage basis.

The easiest analogy is coordinates in geometry.

Polar coordinates can be the right language for one subproblem and the wrong language for another. Temporal labels behave the same way in this bounded loop setting.

Part V: graph-side compilers

The corrected graph line teaches a different lesson.

At first it looked like a large collection of unrelated graph families. Once the total-domination metric was corrected, the analysis compressed much more sharply.

The stable bounded survivors group into three families:

Multipartite family results:

  • complete multipartite families on the corrected frontier
  • repaired multipartite additivity
  • a corrected small-domain single-block compiler
  • a structurally explained point correction at (7, 9)

Low-edge family results:

  • exact low-edge and repaired-block optimizers
  • a full-star-plus-low-edge family compiler

Cross-family results:

  • a wider two-family overlap compiler on the checked high band

So loop space contains exact regime compilers, not only question policies, verifier front-ends, and witness languages.

Corrected graph regime map

[Figure: corrected graph regime map over edge budgets m = 0, m = n - 1, m = 2n - 3, and higher. Low-edge regime: balanced star forests are exact. High overlap band: exact frontier = max(star-plus-low-edge, repaired multipartite). Star-plus-low-edge family: full star plus low-edge leaf correction, OPT_star_leaf(n, m) = 2^(n-1) - 1 + F_bal(n - 1, e); the odd-line middle rung uses a perfect matching on leaves. Repaired multipartite side: base partitions plus up to two internal edges, winning on some middle budgets before the star side returns. Odd-line selector slice n = 2r + 1, m = 3r: r = 1 tie, r = 2 multipartite wins, r ≥ 3 star-plus-leaf only in family. Beyond the checked band the branch is still open. Teaching rule: the corrected frontier is easiest to understand as a regime map with a genuine overlap band.]
The corrected graph line is easiest to understand as a regime map. Low budgets live on the left. Higher budgets enter an overlap band where two exact structural families compete.

Part VI: the low-edge concentration mechanism

The newest part of the graph line is the clearest “show what was achieved” story.

It no longer says only:

  • a balanced star forest fits the checked frontier

It now has a mechanism.

Stage 1: starify components

Here “starify” means: turn each connected component into a star-shaped graph, one hub with several leaves, without changing the bounded objective being optimized.

On checked connected trees, every non-star class has an improving pendant-subtree move.

The analysis then compresses further:

  • on checked trees with n in {8, 9}, the improving move can always be chosen so that the reattachment target is a maximum-degree hub
  • and on the same checked domain, the moved subtree can be chosen with at most one internal fork
    • leaf-only moves fail
    • pure pendant-star moves fail

So the surviving move language is not “some local repair happens somewhere.” It is:

  • move a pendant subtree with one internal fork toward a hub

Stage 2: balance star sizes

Once component shape is concentrated, the star-family product law can be optimized exactly by balancing component sizes.

That gives a second clean stage:

  • smooth an uneven star profile into the balanced one
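The smoothing move itself can be sketched directly. The tutorial's exact product law is not reproduced here; the sketch only shows the size-profile operation, which preserves the total vertex count at every step:

```python
def smooth(sizes):
    """Pairwise-smooth a star-size profile: repeatedly move one vertex
    from the largest star to the smallest until all sizes differ by at
    most 1. The total vertex count is preserved at every step."""
    sizes = sorted(sizes)
    while sizes[-1] - sizes[0] > 1:
        sizes[0] += 1
        sizes[-1] -= 1
        sizes.sort()
    return sizes

print(smooth([9, 2, 1]))   # → [4, 4, 4]
```

Under the checked product law, each smoothing step is non-decreasing, which is why the balanced profile is the exact optimum on this family.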

The composed result

The checked low-edge analysis is now a two-stage concentration process:

  1. starify components
  2. balance star sizes

The newer composition result tightens that from a teaching paraphrase into the checked mechanism itself. On the checked low-edge forest family, those are the two surviving stages.

The tree-side subproblem then tightened through several steps, each narrowing the surviving move language further.

First, on the checked tree domain, the whole monotone path to the star can stay inside the smaller move language, repeatedly moving pendant subtrees (a subtree hanging from one cut edge) with one internal fork into hubs.

Second, the path still survives if each step uses a smallest available hub-target move of that same form.

Third, there is an exact checked depth cutoff on that local controller. Depth 0 fails, depth 1 fails, and depth 2 is the smallest surviving bound on the checked tree domain. The terminal-cherry ladder family is the named witness: its smallest surviving hub-target move has branching depth exactly h, so the h = 2 case is the clean family-level reason depth 1 fails and depth 2 survives.

Finally, on the same checked domain, every move selected by that depth-2 controller falls into one finite rooted alphabet of five templates: leaf, cherry (a hub with two leaves), three-leaf star, broom-1 (a star with one extended arm), and broom-2 (a star with two extended arms). The full five-template set is the unique surviving controller subset on the checked domain.

The exceptional route also compressed

Most checked low-edge states follow that stable depth-2 controller directly.

One checked two-fan state does not:

  • NTF(2, 8, 2) on n = 13

Earlier, that state looked like a named escape route through several intermediate families. That was already useful, but it still looked like a route table.

The newer collapse is cleaner. On the feeder side, the exact amount separates into:

  • a Fibonacci growth term
  • and a repeating period-4 correction

In symbols:

B(r, t) = A_r F_t + B_r F_{t + 1} + C_r cos(pi t / 2) + D_r sin(pi t / 2)

and the full deficit to the final star is:

Delta(r, t) = 2^(r + t + 5) - 1 - B(r, t)

Quick Formula Refresher

  • `F_t` means the t-th Fibonacci number.
  • `cos(pi t / 2)` and `sin(pi t / 2)` are only a compact way to encode a four-step repeating pattern. They do not mean the graph itself is circular or wave-like.
  • `2^(r + t + 5) - 1` is the exact amount for the target star on the same number of vertices.
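A direct transcription of the closed form, with four independent placeholder coefficients (the true coefficients depend on the feeder parameter r and are not reproduced in this tutorial). The sketch mainly shows that the trigonometric terms encode a period-4 pattern on integer t:

```python
from math import cos, sin, pi

def fib(t):
    """F_t with F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(t):
        a, b = b, a + b
    return a

def B(r, t, coeff):
    """Feeder amount: Fibonacci growth plus a period-4 trigonometric
    correction. `coeff` holds placeholder coefficients for parameter r."""
    a, b, c, d = coeff
    return a * fib(t) + b * fib(t + 1) + round(c * cos(pi * t / 2) + d * sin(pi * t / 2), 9)

def delta(r, t, coeff):
    """Deficit to the final star on the same number of vertices."""
    return 2 ** (r + t + 5) - 1 - B(r, t, coeff)

# The cos/sin pair only encodes a four-step repeating pattern:
pattern = [round(cos(pi * t / 2) + sin(pi * t / 2)) for t in range(8)]
print(pattern)   # repeats with period 4: [1, 1, -1, -1, 1, 1, -1, -1]
```

On integers, the cosine and sine terms only take the values 0, 1, and -1, so the "wave" really is just a lookup table of length four.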

So the exceptional route is no longer only a chain of checked moves. It now has a compact symbolic description:

  • exponential target term
  • minus Fibonacci-periodic feeder term

That is a much better fit for loop-space geometry. Once the right structural coordinates are found, the route becomes simple enough to compile.

Low-edge concentration map

[Figure: low-edge concentration map. A positive low-edge forest (arbitrary component shapes, same total vertices, same edge budget) passes through stage 1, starify components (pendant subtree to hub), giving a starified forest whose component shape is concentrated but whose size profile may still be uneven; stage 2, balance sizes (pairwise smoothing on star sizes), gives the balanced star forest with shape concentrated and size profile balanced, value F_bal(n, m). Teaching rule: the low-edge branch is no longer only a frontier fit; it is a two-stage concentration process.]
The low-edge analysis is no longer only a frontier fit. It is a two-stage concentration process with a geometric front stage and an exact balancing back stage.


Part VII: what these bounded results have already achieved

The stable bounded ladder that is now strong enough to teach runs through four main ideas.

  1. Observation quotients
    • the loop should be analyzed through the state it stores, not only through the examples it sees
  2. Witness-basis and separator ladders
    • pair bases, separator expressivity, and staged label choices are real loop axes
  3. Exact regime compilers
    • some problems compress into exact piecewise families rather than into one monolithic controller
  4. Concentration mechanisms
    • some strong loops work by reshaping the problem in stages before the final exact law is even applied

Together, these give the strongest bounded reason so far that useful neuro-symbolic loops exist beyond plain verifier-compilation.