Classifier logic, signed encodings, and a visible learning loop.
This lab keeps two worlds separate on purpose. The upper half is backed by replayed Tau traces, so every accepted step is evidence. The lower half is a host-side learning theater that makes the weights move, the boundary rotate, and the signed state visible in Tau-friendly offset form.
One step, three storage choices
External unsigned weights, signed-offset weights, and internal parameters all implement the same basic classifier relation. What changes is the interface boundary.
In the unsigned lane, the workbench mirrors the same bounded relation the Tau spec checks directly.
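To make "same relation, different interface boundary" concrete, here is a minimal host-side sketch. It is not the lab's actual code: the function names and the 4-bit bound (`OFFSET = 8`, signed weights in [-8, 7]) are assumptions for illustration.

```python
OFFSET = 8  # assumed bound: signed weights live in [-8, 7], offsets in [0, 15]

def classify_signed(ws, b, xs):
    """Reference relation: fire iff the signed dot product plus bias is positive."""
    return int(sum(w * x for w, x in zip(ws, xs)) + b > 0)

def classify_offset(us, ub, xs):
    """Offset lane: decode unsigned words back to signed, then reuse the relation."""
    return classify_signed([u - OFFSET for u in us], ub - OFFSET, xs)

# The interface boundary changes; the accepted relation does not.
ws, b = [2, -3], 1
us, ub = [w + OFFSET for w in ws], b + OFFSET
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert classify_signed(ws, b, x) == classify_offset(us, ub, x)
```

Both lanes agree on every input, which is the point: the storage choice moves the encode/decode step across the boundary without changing what is accepted.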
Install Tau upstream, then let the lab call the local bridge
The site does not ship Tau. It can only talk to a localhost helper that the reader starts deliberately after installing Tau from the official upstream repository.
Local setup from the repo root:

1. Install Tau from the official Tau GitHub repository.
2. Run: python3 scripts/tau_local_bridge.py
3. Click "Check local bridge", then "Run current step with local Tau".
What the local Tau runs actually said
These cards are populated from the recorded trace bundle generated by the local runner. They are the evidence layer.
Watch the boundary move
This is a host-side learning loop. It uses signed weights and bias, then shows the offset-encoded Tau-facing form of the same state.
Each bar is the number of mistakes after one full pass through the current dataset.
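The loop behind those bars can be sketched as a classic perceptron pass counter. This is a stand-in, not the page's actual script; the unit learning rate and zero-initialized integer weights are assumptions.

```python
def perceptron_epochs(data, epochs=20):
    """Train a signed-weight perceptron; return final state plus mistakes per full pass."""
    w, b = [0, 0], 0
    history = []
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), label in data:
            err = label - int(w[0] * x1 + w[1] * x2 + b > 0)  # -1, 0, or +1
            if err:
                mistakes += 1
                w = [w[0] + err * x1, w[1] + err * x2]
                b += err
        history.append(mistakes)  # one bar per epoch
    return w, b, history

# On a separable dataset like AND, the bars eventually flatten to zero.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, bars = perceptron_epochs(AND)
assert bars[-1] == 0
```

Each entry in `history` is one bar: the mistake count over a full pass with the weights as they stood during that pass.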
Signed learning, Tau-facing encoding
The learner updates signed weights. The lower readout shows how that same signed state would be serialized into the offset-encoded lane from the tutorial.
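A hypothetical serializer for that readout might look like the following. The helper names and the bound (`OFFSET = 8`, i.e. a 4-bit signed range) are assumptions, not the tutorial's actual encoding constants.

```python
OFFSET = 8  # assumed: signed values in [-8, 7] map to unsigned words in [0, 15]

def to_offset_lane(ws, b):
    """Serialize signed weights and bias into the offset-encoded lane."""
    encoded = [w + OFFSET for w in ws] + [b + OFFSET]
    assert all(0 <= u < 2 * OFFSET for u in encoded), "state out of encodable range"
    return encoded

def from_offset_lane(encoded):
    """Recover the signed state from its offset-encoded form."""
    *us, ub = encoded
    return [u - OFFSET for u in us], ub - OFFSET

# Round trip: the signed learner state survives the Tau-facing encoding.
ws, b = [3, -2], -1
assert from_offset_lane(to_offset_lane(ws, b)) == (ws, b)
```

The round-trip assertion is the invariant the lower readout is meant to display: the learner never stops being signed, it is merely shown through the offset lens.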
XOR is here on purpose. A single perceptron cannot separate it with one line. A failed convergence story is still a useful experiment, because it teaches what this model class cannot express.
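The non-convergence claim is easy to check on the host side. A sketch under the same assumptions as before (unit updates, zero-initialized weights): if any epoch ever finished with zero mistakes, the then-current line would separate XOR, which no line can, so the mistake count stays positive forever.

```python
def epoch_errors(data, epochs=50):
    """Perceptron mistake count per full pass, unit-step updates."""
    w, b = [0, 0], 0
    history = []
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), label in data:
            err = label - int(w[0] * x1 + w[1] * x2 + b > 0)
            if err:
                mistakes += 1
                w = [w[0] + err * x1, w[1] + err * x2]
                b += err
        history.append(mistakes)
    return history

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
# No single line separates XOR, so no epoch ever reaches zero mistakes.
assert min(epoch_errors(XOR)) > 0
```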