A Design Book demo
Procedural Tokens — encoding design decisions
Most token systems already separate values from semantics. The interesting part is the connection between them. Usually it is a fixed link: one token points at another and stops there. I am interested in making that connection work more like a spreadsheet cell. It can stay static, choose one value from a set, or apply a small formula. That is not just a technical convenience. It is a way to bake flexibility into the system early enough that changing the basics can produce new outcomes instead of a repair job. In that sense, procedural tokens are closer to design programs than static collections: systems with internal logic that keep generating outcomes.
1. Static links are the baseline
Most token systems already have the familiar split: raw values in one place, semantic names in another. A small pool of source values like values.gray800, values.gray50, and values.blue500 is enough.
Those inputs become useful once semantic tokens give them a job:
- color.brand — points at values.gray800
- color.surface — points at values.gray50
- color.onSurface — paired with color.surface; the text colour that goes on it
Static links are good. They are direct, inspectable, and
predictable. I do not want to replace them. I want to stop using
them for relationships they are too simple to express. A pair like
surface / onSurface already hints at the
limit: two pinned choices are standing in for one design decision.
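The baseline can be sketched in a few lines. This is a minimal illustration, not a real token library: the value and ref shapes are hypothetical, and the hex values are placeholders.

```typescript
// Raw values in one table; semantic tokens as static refs into it.
// All names and hex values here are illustrative placeholders.
const values: Record<string, string> = {
  gray800: "#262626",
  gray50: "#fafafa",
  blue500: "#3b82f6",
};

// A static link just stores the path it points at, nothing more.
type Ref = { kind: "ref"; path: string };
const ref = (path: string): Ref => ({ kind: "ref", path });

const semantic = {
  "color.brand": ref("values.gray800"),
  "color.surface": ref("values.gray50"),
  "color.onSurface": ref("values.gray800"),
};

// Resolving a static link is a single lookup: direct and inspectable.
function resolve(token: Ref): string {
  const key = token.path.replace(/^values\./, "");
  return values[key];
}
```

The simplicity is the appeal: one lookup, no logic, nothing to re-evaluate.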
2. Some links are really formulas
Some semantic tokens are not really aliases. They are tiny
policies. color.buttonText is a good example. Writing
ref('values.white') stores an answer, but not the
reasoning behind it: white against what, chosen for what purpose,
valid for how long?
That is where the spreadsheet analogy helps. A token can behave like a cell with a fixed link, or like a cell with a simple formula. The fixed link remains the default. The point is simply to allow formulas when the relationship is the thing you actually care about.
The button label already works like that. The brittle version
would be color.buttonText = ref('values.white'). The
stronger version stores the rule instead of the answer:
color.buttonText = bestContrastWith(color.brand, ramp)
Re-point the brand colour and the label re-decides itself. The token does not just expose the outcome. It exposes the logic that produced it.
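One way to implement that rule is the WCAG contrast ratio: compute relative luminance for each candidate and keep the one that contrasts most with the background. A sketch, assuming hex colours and a flat candidate list; the article's own bestContrastWith may differ.

```typescript
// WCAG 2.x relative luminance of a #rrggbb colour.
function luminance(hex: string): number {
  const channel = (i: number) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * channel(1) + 0.7152 * channel(3) + 0.0722 * channel(5);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), range 1..21.
function contrast(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Pick the candidate with the highest contrast against the background.
function bestContrastWith(background: string, ramp: string[]): string {
  return ramp.reduce((best, c) =>
    contrast(background, c) > contrast(background, best) ? c : best
  );
}
```

Re-point the background and the answer changes with it, which is exactly the behaviour the token is meant to encode.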
3. Choose one of a set
One kind of dynamic connection looks at a pool of candidates and picks one. It is useful whenever the decision is comparative: the most readable foreground, or the faintest line that still clears a threshold.
The most readable foreground
color.onSurface starts as a plain ref to
values.gray900. That works for a light surface, and
fails the moment the surface stops being light.
color.onSurface = bestContrastWith(color.surface, values)
Edit the surface and the foreground re-decides itself. The pair surface / onSurface stops being two separate pinned choices and becomes one relationship the system can maintain.
Paired treatment
surfaceTreatment.default = { surface, onSurface }
Treat these as one named decision, not two unrelated variables: edit the surface and the readable foreground re-decides itself to keep the pair coherent.
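That grouping can be made literal: derive onSurface from surface at construction time so the pair cannot drift apart. The picker below is a deliberately crude stand-in (summed RGB as a lightness test), and all names are illustrative.

```typescript
type Treatment = { surface: string; onSurface: string };

// Crude stand-in for a contrast picker: dark text on light surfaces,
// light text on dark ones, judged by summed RGB channels.
const pickReadable = (bg: string): string => {
  const sum = [1, 3, 5]
    .map((i) => parseInt(bg.slice(i, i + 2), 16))
    .reduce((a, b) => a + b, 0);
  return sum > 382 ? "#1a1a1a" : "#fafafa";
};

// One named decision: the readable foreground is computed, never stored.
function surfaceTreatment(surface: string): Treatment {
  return { surface, onSurface: pickReadable(surface) };
}
```

Because onSurface is derived rather than stored, there is no state in which the pair is incoherent.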
The faintest line that still works
The same pattern can choose not the strongest option, but the weakest one that still satisfies a condition.
color.line = minContrastWith(color.surface, ramp, { ratio: … })
Where bestContrastWith picks the highest
contrast available, minContrastWith picks the
lowest step that still clears the threshold. The token is
not saying "use this grey." It is saying "pick the weakest
boundary that still reads as a boundary."
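A sketch of that selection rule: walk an ordered ramp, faintest first, and return the first step that clears the threshold. The precomputed contrast values and the fallback behaviour are assumptions made to keep the example short.

```typescript
// Each ramp step carries its contrast against the current surface,
// precomputed here to keep the sketch self-contained.
type RampStep = { color: string; contrast: number };

// Return the faintest step that still clears the ratio; fall back to
// the strongest step if nothing does.
function minContrastWith(ramp: RampStep[], ratio: number): string {
  const step = ramp.find((s) => s.contrast >= ratio);
  return (step ?? ramp[ramp.length - 1]).color;
}

// Illustrative grey ramp, ordered faintest to strongest.
const grays: RampStep[] = [
  { color: "#eeeeee", contrast: 1.2 },
  { color: "#cccccc", contrast: 1.9 },
  { color: "#999999", contrast: 3.4 },
  { color: "#555555", contrast: 7.5 },
];
```

With a threshold of 3, the rule lands on the third step: the weakest boundary that still reads as a boundary.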
4. Do something to a value
The second kind of dynamic connection does not choose from a set. It takes an input and applies a small operation to it. This is where utility or transform tokens become useful: scale, mix, darken, lighten, offset.
Scale from a base
Every gap inside the example panel (title to paragraph, paragraph to chart, chart to button) is a multiple of space.base. Pull the base and the whole rhythm scales together via spacingScale.
Edit the scale here: space.base is the single input, and every other step derives from it.

space.xs = spacingScale(base, { multiplier: … })
space.s  = spacingScale(base, { multiplier: … })
space.m  = spacingScale(base, { multiplier: … })
space.l  = spacingScale(base, { multiplier: … })
space.xl = spacingScale(base, { multiplier: … })
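The mechanism itself is tiny. A sketch, with multipliers of my own choosing standing in for the live panel's values:

```typescript
// Every step is a multiple of one base value, so changing the base
// rescales the whole rhythm at once.
const spacingScale = (base: number, opts: { multiplier: number }): number =>
  base * opts.multiplier;

// Illustrative base and multipliers; the article's panel sets its own.
const base = 8;
const space = {
  xs: spacingScale(base, { multiplier: 0.5 }),
  s: spacingScale(base, { multiplier: 1 }),
  m: spacingScale(base, { multiplier: 2 }),
  l: spacingScale(base, { multiplier: 3 }),
  xl: spacingScale(base, { multiplier: 5 }),
};
```

Change `base` from 8 to 10 and every gap moves in proportion; the ratios between steps are the part the system actually stores.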
Shift a value
The simplest version of this kind of rule takes one input and
nudges it. The link's hover state in the example panel is one of
those: a darker shade of color.interaction computed
on the fly, not stored.
color.linkHover = darken(color.interaction, …)
Move the amount. Every link that uses the hover token shifts at once because there is only ever one decision to re-evaluate.
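A naive darken can be sketched by scaling each RGB channel; a production version would more likely work in a perceptual space such as OKLCH. The function shape is my assumption, not the article's implementation.

```typescript
// Darken a #rrggbb colour by scaling each channel down by `amount`
// (0..1). Naive RGB scaling: fine as a sketch, crude perceptually.
function darken(hex: string, amount: number): string {
  const scaled = [1, 3, 5].map((i) =>
    Math.round(parseInt(hex.slice(i, i + 2), 16) * (1 - amount))
  );
  return "#" + scaled.map((c) => c.toString(16).padStart(2, "0")).join("");
}

// The hover state is computed from the interaction colour, not stored.
const linkHover = darken("#3b82f6", 0.2);
```

Because the shade is derived at resolve time, re-pointing the interaction colour updates every hover state with no second token to edit.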
Blend two inputs
The seven bars in the chart are not seven hand-picked colours. They are seven steps of a ramp generated between two inputs:
ramp.sN = colorMix(shade(…, 10%), …, ratio)
Drag either endpoint and the bars re-tone together because the connection stores a formula, not seven frozen outputs. A hover shade or pressed state works the same way; the scale changes, not the idea.
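The ramp mechanism can be sketched as a linear interpolation between two endpoints. colorMix here is a plain RGB lerp, the shade() tweak is omitted, and the endpoint colours are placeholders.

```typescript
const hexToRgb = (hex: string): number[] =>
  [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));

const rgbToHex = (rgb: number[]): string =>
  "#" + rgb.map((c) => Math.round(c).toString(16).padStart(2, "0")).join("");

// Mix two colours channel-by-channel; ratio 0 gives `a`, ratio 1 gives `b`.
function colorMix(a: string, b: string, ratio: number): string {
  const [ra, rb] = [hexToRgb(a), hexToRgb(b)];
  return rgbToHex(ra.map((c, i) => c + (rb[i] - c) * ratio));
}

// Seven bars as seven samples along the line between two endpoints.
const ramp = Array.from({ length: 7 }, (_, n) =>
  colorMix("#1f2937", "#93c5fd", n / 6)
);
```

Move either endpoint and all seven samples move with it, because only the two endpoints and the sampling rule are stored.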
5. Truly procedural design
Tokenising everything used to feel like the death of creative freedom. Every choice locked in. Every relationship buried in a dependency graph that no human wanted to climb. Once the system was set up, exploration moved elsewhere — back into Figma, back into a script, back to ad-hoc decisions made outside the system and then patched in.
Procedural tokens flip that. The exploration tool can live inside the system. Every value token in this article — every grey, every accent — is now a point on a single colour line. Drag any anchor and the entire system shifts together: the values move, the refs that point at them follow, the ramp re-interpolates, the contrast rules re-pick. One small input. Hundreds of coherent outputs.
Each dot on the wheel is an anchor; the line between them is
the palette. Every values.* token is sampled
somewhere on that line — even the ones named for greys.
The names values.gray800, values.red500
stay where they were. Their colours don't. That is the point: a
token system that can be explored rather than just maintained.
Coherence is enforced by the structure; freedom is restored at
the input.
Each space.* token reads its multiplier from this
curve. Drag the two handles to reshape the scale — gap, padding
and the chart bars' spacing in the panel all follow.
6. Better links
This is not an argument against static refs. Most links should stay static, because static is clear. The point is simply to stop forcing every relationship to pretend it is an alias.
When a connection is really a choice or a transformation, the token should be allowed to say so. That makes systems safer to change: change the brand colour, the surface, the base spacing, or the threshold, and the dependent choices can be recalculated instead of fossilising old assumptions.
Too often the workflow goes the other way around: generate a ramp, freeze it into tokens, then build the design around that frozen result. It works until dark mode arrives, or a second theme, or a broader brand shift. Then all the flexibility that should have been inside the system has to be retrofitted after the fact.
Once a token system can encode relationships instead of only results, consistency turns into discovery. It can preserve coherence across more complexity than anyone wants to track by hand, and it can surface combinations that would not have been picked manually.
That is the part that feels genuinely exploratory. Change the surface, shift the interaction colour, alter the spacing base, and the system answers back. Sometimes the answer is simply robust. Sometimes it is surprising. That is where emergence starts to matter: not as mystique, but as the practical result of encoding relationships instead of pinning every outcome in advance.
That is the shift I care about. Not more tokens. Not cleverness for its own sake. Just better links: static when enough, formula-based when necessary. Colour and spacing make the pattern easy to see, but the same logic applies anywhere the relationship matters more than the stored result: type treatments, motion rules, icon choices, grouped decisions like a surface/on-surface pair. At that point a token system stops being a catalogue of frozen answers and starts behaving more like a design spreadsheet, or at its best, a small design program: one that makes flexibility part of the design instead of a chore that shows up later.