Tau Perceptron Lab

Classifier logic, signed encodings, and a visible learning loop.

This lab keeps two worlds separate on purpose. The upper half is backed by replayed Tau traces, so every accepted step is evidence. The lower half is a host-side learning theater that makes the weights move, the boundary rotate, and the signed state visible in Tau-friendly offset form.

Replayable Tau traces
Unsigned and signed lanes
Internal-parameter variant
Host-side perceptron learning
Current Tau Lane
Unsigned external
The host provides the weights; Tau checks the claim.
Current Score
24
Weighted sum before thresholding.
Learning Preset
OR
A single perceptron can converge on this dataset.
Boundary Status
Untrained
The learning theater starts from zero weights.
Tau classifier workbench

One step, three storage choices

External unsigned weights, signed-offset weights, and internal parameters all implement the same basic classifier relation. What changes is the interface boundary.

Score
24
Actual class
1
Claimed class
1
Tau-style acceptance
accept


In the unsigned lane, the workbench mirrors the same bounded relation the Tau spec checks directly.
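A rough host-side sketch of that acceptance check. The threshold value and the function names here are illustrative assumptions, not the lab's actual spec:

```python
# Hypothetical sketch of the unsigned-lane acceptance check.
# THRESHOLD and accept_step are illustrative names, not the lab's API.

THRESHOLD = 16  # assumed threshold; the spec's actual bound may differ

def score(w1, w2, x1, x2):
    """Weighted sum before thresholding; all values unsigned."""
    return w1 * x1 + w2 * x2

def accept_step(w1, w2, x1, x2, claimed):
    """Tau-style check: the claimed class must equal the thresholded score."""
    actual = 1 if score(w1, w2, x1, x2) >= THRESHOLD else 0
    return actual == claimed

# A host-provided step: weights (24, 8), input (1, 0), claimed class 1.
print(accept_step(24, 8, 1, 0, 1))  # score 24 >= 16, claim accepted: True
```

The point of the lane split is visible here: the host computes nothing Tau trusts; Tau only verifies that the claimed class is consistent with the bounded relation.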

Optional live local Tau run

Install Tau upstream, then let the lab call the local bridge

The site does not ship Tau. It can only talk to a localhost helper that the reader starts deliberately after installing Tau from the official upstream repository.

Official Tau GitHub
Download spec files
Bridge
Unchecked
Tau path
Not detected
Mode
Current workbench mode
Local setup from the repo root:
1. Install Tau from the official Tau GitHub.
2. Run: python3 scripts/tau_local_bridge.py
3. Click "Check local bridge", then "Run current step with local Tau".
Certified trace ledger

What the local Tau runs actually said

These cards are populated from the recorded trace bundle generated by the local runner. They are the evidence layer.

Loading trace bundle…
Learning theater

Watch the boundary move

This is a host-side learning loop. It updates signed weights and a signed bias, then shows the offset-encoded, Tau-facing form of the same state.

OR is linearly separable. A single perceptron should converge quickly from zero weights.
Weights
0, 0
Bias
0
Current sample
(0,0)
Prediction
0
Accuracy
3/4
Mistakes
1
Min signed margin
0
Autoplay
Idle

Dataset ledger
Epoch error chart

Each bar is the number of mistakes after one full pass through the current dataset.
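A minimal sketch of that loop on the OR preset, assuming the classic perceptron update rule and a zero-weight cold start. The function name, learning rate, and epoch cap are illustrative:

```python
# Host-side perceptron loop on OR, counting mistakes per full pass.
# Classic update rule; starts from zero weights like the lab's cold start.

OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def train(data, epochs=10, lr=1):
    w1 = w2 = b = 0
    errors_per_epoch = []
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), y in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            if pred != y:
                mistakes += 1
                delta = lr * (y - pred)
                w1 += delta * x1
                w2 += delta * x2
                b += delta
        errors_per_epoch.append(mistakes)
        if mistakes == 0:  # converged: a clean pass with no updates
            break
    return (w1, w2, b), errors_per_epoch

weights, errors = train(OR_DATA)
print(weights, errors)  # ends with a zero-mistake epoch: OR is separable
```

Each entry of `errors` corresponds to one bar in the chart; convergence shows up as the final bar dropping to zero.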

State history

Signed learning, Tau-facing encoding

The learner updates signed weights. The lower readout shows how that same signed state would be serialized into the offset-encoded lane from the tutorial.
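A sketch of how that serialization might work, assuming the offset is the 127 visible in the cold-start cards (zero signed weights encode to 127):

```python
# Offset encoding for the Tau-facing readout: shift signed values into
# an unsigned range. The offset 127 is taken from the cold-start cards;
# the exact lane width is an assumption.

OFFSET = 127

def encode(signed_value):
    """Serialize a signed weight into the unsigned, Tau-friendly lane."""
    return signed_value + OFFSET

def decode(encoded_value):
    """Recover the signed weight from its offset-encoded form."""
    return encoded_value - OFFSET

# Cold start: signed (w1, w2, bias) = (0, 0, 0) serializes to 127 each.
print([encode(v) for v in (0, 0, 0)])  # [127, 127, 127]
```

The learner never sees the encoded form; only the Tau-facing readout applies the shift, so negative weights stay representable in an unsigned lane.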

Encoded w1
127
Encoded w2
127
Encoded bias
127
Status
Cold start
Weight trajectory (w1 vs w2)

XOR is here on purpose. A single perceptron cannot separate it with one line. A failed convergence story is still a useful experiment, because it teaches what this model class cannot express.
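A self-contained sketch of that failure, again assuming the classic update rule; the epoch cap is arbitrary, and the mistake count stays above zero for every pass:

```python
# A single perceptron on XOR: no epoch ever finishes mistake-free,
# because no single line separates the two classes.

XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def run_xor_epochs(epochs=50, lr=1):
    """Return the mistake count for each epoch; it never reaches zero."""
    w1 = w2 = b = 0
    errors_per_epoch = []
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), y in XOR_DATA:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            if pred != y:
                mistakes += 1
                delta = lr * (y - pred)
                w1 += delta * x1
                w2 += delta * x2
                b += delta
        errors_per_epoch.append(mistakes)
    return errors_per_epoch

print(min(run_xor_epochs()))  # stays above 0: the loop cycles forever
```

Contrast with the OR preset: there, a zero-mistake epoch eventually appears and the loop can stop; here, the weight trajectory cycles without settling.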