Merged
41 commits
aabf01d
Rebase onto main; reconcile journey figures with new journey structure
claude May 10, 2026
f0a9549
Restore viz preview workflow
adewale May 10, 2026
19fbc6c
Fix figure scaling and strip duplicate prose from inside SVGs
claude May 10, 2026
6c1b711
Sizing + prose-duplication root-cause rules; ship 12 example figures
claude May 10, 2026
7b49338
Smoke test viz preview after deploy
adewale May 10, 2026
365b59e
Major coverage push: 50 examples attached (was 13); Workers figures r…
claude May 10, 2026
b66b435
Third coverage push: 90/109 examples attached (82.6%); 84 figures reg…
claude May 10, 2026
b699995
Fourth coverage push: 100% (109/109); 103 figures; lessons captured
claude May 10, 2026
05d8a17
Fifth pass: lift 5 figures off the 8.0 reuse floor; tighten workers s3
claude May 11, 2026
728ba1f
Production layout: cells stay 2-col; figures sit in banner rows between
claude May 11, 2026
96513f4
Sixth iteration + rubric saturation analysis
claude May 11, 2026
ceda658
Fix marginalia lint and cache manifest
adewale May 11, 2026
ac79011
Example-figure rubric v2: 'earns its place', caption quality, page co…
claude May 11, 2026
245e41c
Auto-resolve asset manifest conflicts during merge/rebase
claude May 11, 2026
888c6ec
Remove footer worker note; drop unused --subtle CSS variable
claude May 11, 2026
649c25b
Tune layout scale per impeccable layout rubric
claude May 11, 2026
8c619a6
Unify responsive collapse at 780px
claude May 11, 2026
1759a20
Fix 45 figures clipping content outside viewBox
claude May 11, 2026
014cd37
TDD red-green-refactor: enforce figure geometry contracts in CI
claude May 11, 2026
04a16bf
Extend marginalia contracts: text collisions, registration, grammar
claude May 11, 2026
eaad0df
Three more forward-locking contracts for the figure system
claude May 11, 2026
c92248f
Enforce emphasis scarcity: at most one orange accent per figure
claude May 11, 2026
deea0f2
Audit gestalt paint functions too; fix six gestalt regressions
claude May 11, 2026
52a1db6
Gestalt = production: marginalia-gestalt renders FIGURES, not duplicates
claude May 11, 2026
ed9ce82
Align misaligned dashed lines on three figures
claude May 11, 2026
62da521
exception-group-peel: tree edges now meet the dots cleanly
claude May 11, 2026
4d3f77f
Audit for internal consistency and duplication
claude May 11, 2026
d9c81ac
Tighten rubric and Lessons Learned from the contract-driven audit pass
claude May 11, 2026
64d54f6
Render journey-section figures inline on /journeys/<slug>
claude May 11, 2026
13f7978
Figures scale up to fill available space across all breakpoints
claude May 11, 2026
a62cefd
PAD_X 8 → 14 so lane labels stop clipping on the left
claude May 11, 2026
273af6f
TDD pass for the lane-label clipping fix
claude May 11, 2026
197d2df
Replace the networking figure; rewrite both code chunks' prose
claude May 11, 2026
00a8288
Audit unsupported-cell prose; fix three more examples with the same bug
claude May 11, 2026
c8d6cc9
Audit code cells for unwrappable lines; fix three offenders + safety net
claude May 11, 2026
6eced3c
strings: add French "café" to the first comparison
claude May 11, 2026
076bbb5
Document the comparison-loop lesson from the strings fix
claude May 11, 2026
9abf7ae
Cut 8 stale journey prototype pages superseded by production
claude May 11, 2026
681bdaa
Cut 5 stale artifacts (plus 1 transitively-stale) + 1 dead test
claude May 11, 2026
7ff89b7
Fix CI: regenerate stale generated files; remove unused import
claude May 11, 2026
5cbc4f0
Regenerate golden parity fixture after example edits
claude May 11, 2026
7 changes: 7 additions & 0 deletions .gitattributes
@@ -0,0 +1,7 @@
# `src/asset_manifest.py` is generated by `scripts/fingerprint_assets.py`.
# On merge/rebase, keep our side of the conflict — the post-merge and
# post-rewrite hooks regenerate the file deterministically afterwards.
# This works once `scripts/install-git-hooks.sh` has been run locally,
# which registers `merge.ours.driver = true` and points `core.hooksPath`
# at `.githooks/`.
src/asset_manifest.py merge=ours
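The referenced `scripts/install-git-hooks.sh` is not part of this diff. Based on the comment above, a minimal sketch of the registration it performs might look like the following — the exact contents are an assumption, and `demo-repo` is a stand-in repository for illustration:

```shell
# Sketch of the registration described in .gitattributes — an
# assumption, not the actual contents of scripts/install-git-hooks.sh.
git init -q demo-repo && cd demo-repo
# Define the "ours" merge driver: the `true` command succeeds without
# touching the working-tree copy, so our side of the conflict is kept.
git config merge.ours.name "Keep our side of generated-file conflicts"
git config merge.ours.driver true
# Route hooks through the versioned .githooks/ directory so the
# post-merge / post-rewrite hooks regenerate the manifest afterwards.
git config core.hooksPath .githooks
```

With this in place, a conflicting merge on `src/asset_manifest.py` resolves to our side automatically, and the hooks below rebuild the file from the merged tree.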
9 changes: 9 additions & 0 deletions .githooks/post-merge
@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Regenerate the asset manifest after a merge or pull so the digest
# reflects the merged tree, not whichever parent won the conflict.
set -e
cd "$(git rev-parse --show-toplevel)"
uv run python scripts/fingerprint_assets.py >/dev/null
if ! git diff --quiet src/asset_manifest.py public/_headers; then
echo "post-merge: asset manifest regenerated; stage and amend if needed"
fi
9 changes: 9 additions & 0 deletions .githooks/post-rewrite
@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Regenerate the asset manifest after rebase/amend so the digest matches
# the rewritten history, not whichever commit happened to win each step.
set -e
cd "$(git rev-parse --show-toplevel)"
uv run python scripts/fingerprint_assets.py >/dev/null
if ! git diff --quiet src/asset_manifest.py public/_headers; then
echo "post-rewrite: asset manifest regenerated; stage and amend if needed"
fi
74 changes: 74 additions & 0 deletions .github/workflows/preview-viz.yml
@@ -0,0 +1,74 @@
name: Preview viz

on:
  push:
    branches:
      - claude/tuftean-marginalia-viz-TB0fw
  workflow_dispatch:

permissions:
  contents: read

concurrency:
  group: preview-viz
  cancel-in-progress: true

jobs:
  upload-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          enable-cache: false
      - uses: actions/setup-python@v5
        with:
          python-version: '3.13'
      - uses: actions/setup-node@v4
        with:
          node-version: '22'
      - name: Install dependencies
        run: uv sync --all-groups
      - name: Build generated assets
        run: make build
      - name: Verify Cloudflare auth
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
        run: npx --yes wrangler whoami
      - name: Sync Python Workers vendor
        run: uv run pywrangler sync
      - name: Upload Cloudflare Preview
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
        run: |
          set -x
          uv run pywrangler preview \
            --name viz \
            --message "${{ github.sha }}" \
            --json
      - name: Smoke test deployed Preview
        run: |
          set -euo pipefail
          base="https://viz-pythonbyexample.adewale-883.workers.dev"
          for path in \
            "/" \
            "/examples/values" \
            "/prototyping/journey-figures-gestalt"; do
            url="${base}${path}"
            echo "Checking ${url}"
            curl --fail --show-error --silent --location --output /tmp/preview-smoke.html --write-out "%{http_code} %{url_effective}\n" "${url}"
            if grep -qiE "error code: 1101|PythonError|Traceback" /tmp/preview-smoke.html; then
              echo "Preview rendered an exception for ${url}"
              head -200 /tmp/preview-smoke.html
              exit 1
            fi
          done
      - name: Dump wrangler logs on failure
        if: failure()
        run: |
          find ~ /tmp /root -name "*.log" -path "*wrangler*" 2>/dev/null | while read f; do
            echo "=== $f ==="
            tail -300 "$f" || true
          done
6 changes: 6 additions & 0 deletions README.md
@@ -63,6 +63,12 @@ Install dependencies with `uv`, then run:
python3 -m unittest discover -s tests -v
```

After cloning, install the local git hooks once so merges and rebases regenerate `src/asset_manifest.py` instead of producing conflicts:

```bash
./scripts/install-git-hooks.sh
```

Run locally on Workers:

```bash
209 changes: 209 additions & 0 deletions docs/example-figure-rubric.md
@@ -0,0 +1,209 @@
# Example figure rubric

Parallel to `docs/journey-visualisation-rubric.md`, but for the figures
that attach to **example pages** (literate-program lessons), not journey
sections. The journey rubric scores the figure beside a section heading;
this one scores the figure that sits between prose and code inside a
single cell of an example walkthrough.

The two rubrics share craft criteria (palette, primitives, emphasis
scarcity) and diverge on content criteria, because the audience and
task differ. A journey-section figure depicts the *conceptual shift*
unifying multiple lessons; an example figure depicts the *single move*
the surrounding cell discusses.

Score each example figure on a 10-point scale. This is version 2 of
the rubric, applied 2026-05; see `docs/rubric-saturation.md` for the
reasoning that produced these upgrades. The previous criterion 2
("match the running variables") and criterion 5 ("caption asserts")
have been replaced, and a new page-level coherence rubric joins the
per-figure scoring.

## Content (5.5)

1. **Cell fidelity (0-1.5)** — the figure depicts the move the cell's
prose discusses, not the example's title. If the example is
"Mutability" but cell 1 is about immutable strings, a figure on
cell 1 must depict immutability, not aliasing. Wrong cell, wrong
figure.
2. **The figure earns its place (0-1.0)** — the figure surfaces
something the prose cannot show in the same word count: a
relationship, a before/after, a hidden mechanism, an invariant.
A figure that merely restates the prose in diagram form earns
0.5; a figure that adds nothing the prose hasn't already said
earns 0. Generic placeholders (`a`, `b`, `xs`) are fine; what
matters is whether the figure carries pedagogical weight beyond
the prose. (Replaces v1's "match the running variables", which
punished honest reuse of library figures across multiple cells.)
3. **One conceptual move (0-1.0)** — exactly one shift, before-state
to after-state, or one mechanism. Squint test: a reader should
identify the figure's single point in two seconds.
4. **Mechanism over metaphor (0-1.0)** — the figure shows the actual
machinery (the cell, the binding, the dispatch, the iterator),
not a cartoon of it. Knuth's rule.
5. **Caption quality (0-1.0)** — `figcaption` declares what is true,
in the section summary's voice; it does not narrate what the
figure does. "Two names share one mutable list — appending
through one name changes the object visible through both."
earns 1.0. "The figure shows two names pointing at one list."
earns 0 (narration, not assertion). Mixed-voice captions earn
0.5. The SVG itself contains no prose duplicating the caption;
only diagrammatic labels (`stdout`, `iter()`, panel tags, type
signatures). See pipeline invariant 2 in the spec.

## Craft (3.0)

6. **Grammar conformance (0-1.0)** — composed exclusively from
`Canvas` primitives in `src/marginalia_grammar.py`. No bespoke
SVG, no new colours, no stroke weights outside the locked set.
7. **Emphasis scarcity (0-1.0)** — at most one accent mark per
figure. The accent goes on the single element the cell prose
names (the live mutation, the captured cell, the dispatch arrow).
Three accent marks competing for attention is no emphasis at all.
8. **Restraint (0-1.0)** — no decoration that does not carry
information. No drop shadows, gradients, ornamental rules,
non-orthogonal tilts, or marks placed for "balance".

## Context (1.5)

9. **Banner-row fit (0-1.0)** — the figure's intrinsic width sits
comfortably inside `.cell-banner`'s auto-fit grid. Intrinsic widths
beyond ~360 px clamp to the column without growing past it; much
narrower viewBoxes leave whitespace on either side of the centred
figure. Aim for an intrinsic viewBox between 200 and 360 px wide.
10. **Pairs with the surrounding cell (0-0.5)** — the banner sits
AFTER the named cell, so the eye reads cell-prose → cell-code →
banner. The figure should summarise the move the surrounding
cell just made, not stand alone as a generic illustration of the
example title.

## Topic gates (cell-shape specific)

- **Binding cells** (assignments, `=`) — show the name-arrow with the
type tag and the resulting value. The canonical Python picture.
- **Mutation cells** — show before-state and after-state with the
same object identity, OR rebinding with a new identity. The
difference is the lesson.
- **Iteration cells** — show the iterator advance: a caret moving,
or `iter()`+`next()` producing values one at a time.
- **Function-definition cells** — show the signature with parameter
separators (`/`, `*`) explicit when relevant, or the
caller→body→return shape.
- **Class cells** — show state and methods bundled, or the
instance→class→type triangle, or MRO chain. Pick one, not all.
- **Exception cells** — show the lanes (try/except/else/finally)
with a single traced path, or the exception-cause arrow (`__cause__`
vs `__context__`).
- **Async cells** — show two parallel lanes (loop · coroutine) with
await handoffs.

## Release gates outside the score

These are not scored; a figure that violates any of them does not
ship. The geometry, palette, font, stroke, emphasis, registration,
and caption gates are now enforced by automated contracts in
`tests/test_marginalia_geometry.py` (Contracts 1-9). CI fails before
the figure can merge.

- **One figure per cell, at most.** Two figures on one cell signal
the cell is doing two things; split the cell instead.
- **figcaption present and declarative.** Captions in the form
"Two names share one mutable list — appending through one name
changes the object visible through both." Not "this shows X" or
"see how Y".
- **figcaption agrees with the cell's prose.** The cell's prose
paragraph in the markdown and the figure's figcaption assert the
same thing in different words. If they disagree, one is wrong.
- **figcaption is unique across slugs.** A reused figure can serve
multiple lessons (`iter-protocol` attaches to four), but each
lesson must frame the figure in its own voice. Verbatim caption
reuse copies the lesson voice the same way verbatim code reuse
copies the example. *Contract 5b — FigureCaptionContract.*
- **No clipping.** Every `<rect>`, `<text>`, `<line>`, `<circle>`,
`<path>` lives inside the padded viewBox. Text width counts: a
long mono string in a too-narrow box clips even if the geometry
looks right at first glance. *Contract 1.*
- **No element collision.** Text that overlaps a rect must be
fully contained by that rect. A type tag sitting on top of the
box above it (the `/examples/values` STR-LIST-DICT bug) is the
canonical violation. *Contract 2.*
- **No text-text overlap.** Two text elements may not occupy
overlapping bounding boxes (the `itertools-chain` "ITER A" /
"1 · 2" collision in a too-narrow box). *Contract 3.*
- **Palette discipline.** Only `INK`, `INK_SOFT`, `EMPHASIS`,
`SOFT_FILL`, or `"none"` may appear as fill or stroke. *Contract
5a — FigureGrammarContract.*
- **Font discipline.** Only `FONT_SERIF`, `FONT_MONO`, `FONT_SANS`
may appear as `font-family`. *Contract 5b.*
- **Stroke-weight discipline.** Only `W_HAIRLINE`, `W_STROKE`,
`W_EMPHASIS`, `W_GHOST`. *Contract 5c.*
- **Emphasis scarcity, enforced.** At most ONE accent mark
(`EMPHASIS`-coloured arrowhead, caret, dot, or rect stroke) per
figure. Was a soft v1 criterion; now hard. *Contract 9.*
- **Banner-fit, enforced.** Every figure's intrinsic width
(Canvas.w + 2 · PAD_X) must fit within `.cell-banner--1`'s 440px
ceiling. *Contract 8.*
- **Twin consistency.** When two figures depict parallel concepts
(`kw-only-separator` ↔ `positional-only-separator`,
`class-triangle` ↔ `metaclass-triangle`), their metrics must
match coordinate-for-coordinate where the concepts coincide. A
fix to one is a fix to both, in the same commit.
- **Geometric termination.** Lines that connect to dots, circles,
or rects must terminate AT the element's edge — not 1-2px short
(looks disconnected) and not inside the glyph (looks broken).
When in doubt, end the line at the centre and let the dot draw
on top.
- **Mono character alignment.** When a vertical divider marks a
position in mono text, its x must match the character's actual
centre. JetBrains Mono advances ~6px per char at fs=10; visually
similar x-values like `82` and `75` are not interchangeable.
- **Pipeline invariants** (see spec) hold: SVG renders at intrinsic
size; SVG contains no prose duplicating the caption.
- **Gestalt = production.** Review pages under `/prototyping/*`
must render the same paint code as the production attachments.
Parallel `e_*` paint functions for "gestalt versions" drift from
production and hide bugs; we eliminated 76 of them in May 2026.
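Two of the gates above are pure arithmetic, which makes them easy to check mechanically. A minimal sketch, assuming the 440px ceiling from Contract 8, the PAD_X of 14 from this PR, and the ~6px mono advance stated above — the function names are illustrative, not from the codebase:

```python
PAD_X = 14            # horizontal padding, per the "PAD_X 8 → 14" commit
BANNER_MAX = 440      # .cell-banner--1 ceiling (Contract 8)
MONO_ADVANCE = 6.0    # JetBrains Mono, approx px per char at fs=10

def fits_banner(canvas_w: float) -> bool:
    """Banner-fit gate: intrinsic width is Canvas.w + 2 * PAD_X."""
    return canvas_w + 2 * PAD_X <= BANNER_MAX

def divider_x(text_x: float, char_index: int) -> float:
    """Mono-alignment gate: x of the centre of the char_index-th
    character in a mono string whose first glyph starts at text_x."""
    return text_x + (char_index + 0.5) * MONO_ADVANCE

print(fits_banner(360))   # 360 + 28 = 388 <= 440 -> True
print(fits_banner(420))   # 420 + 28 = 448 >  440 -> False
print(divider_x(40, 3))   # divider over the 4th character -> 61.0
```

The same arithmetic explains the `82` vs `75` bullet: at a 6px advance, a divider one character off lands 6px from the glyph it is meant to mark.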

## Page-level coherence (per slug, multi-figure)

A separate 0-1.0 score applied to slugs whose `ATTACHMENTS[slug]`
list contains more than one figure. Multi-figure pages must form a
coherent set, not three angles on the same point.

- **1.0** — figures show distinct aspects of the lesson in a
natural reading order (intro picture, mid-walkthrough mechanism,
summary). Each banner earns its placement.
- **0.5** — figures are individually fine but redundant; one would
do the work of two. The page reads as cluttered.
- **0** — figures contradict each other, or one figure is on the
wrong cell, or the page has three figures where one would teach
better.

For single-figure slugs (today, all 109 of them), page coherence is
trivially 1.0 and does not enter the per-figure score. As multi-figure
attachments grow, this criterion will become the discriminator that
prevents the "more figures is better" failure mode.

## Quality bands

- **9.0-10.0** — depicts the cell's move in two seconds; the figcaption
could only describe this figure; reads pleasantly on return visits.
- **8.0-8.9** — depicts the right move but uses generic placeholders
where specific names would land harder, or the caption hedges, or
one secondary mark steals attention from the primary one.
- **7.0-7.9** — depicts the cell but loses something in scope: shows
the example title rather than the specific cell's move; or topic
gate not satisfied.
- **below 7.0** — wrong cell, wrong shape, multiple primary ideas
competing, or accent marks scattered rather than scarce. Redesign
before promoting.

## Project gate

A cell figure may ship to production once it scores **≥ 8.5**. The
example's figure average should exceed **8.7** so a multi-figure
example reads as a coherent set rather than independently authored
diagrams.
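The gate is mechanical enough to sketch. The thresholds come from the paragraph above; the function name is illustrative, and the soft "should exceed 8.7" average target is treated as a hard check purely for illustration:

```python
SHIP_FLOOR = 8.5   # per-figure minimum to ship
AVG_TARGET = 8.7   # example-level average the figure set should exceed

def may_ship(figure_scores: list[float]) -> bool:
    """Every figure clears 8.5 and the example average exceeds 8.7."""
    return (all(s >= SHIP_FLOOR for s in figure_scores)
            and sum(figure_scores) / len(figure_scores) > AVG_TARGET)

print(may_ship([8.6, 9.0]))   # avg 8.8  -> True
print(may_ship([8.5, 8.6]))   # avg 8.55 -> False
```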

The score is a guide, not a substitute for reading the cell beside
its surrounding prose.
32 changes: 0 additions & 32 deletions docs/example-graph-score-impact.md

This file was deleted.
