The tree
Darwin's tree of life is one of the most powerful images in the history of science. Species diverge from common ancestors. They branch, speciate, radiate. The image was so productive that it migrated. Historical linguists borrowed it for language families — Indo-European, the proto-languages, the branches and sub-branches reconstructed from shared sound changes and cognate vocabularies. Then cultural evolutionists borrowed it again, for pottery styles, manuscript traditions, folk tales, tool designs.
The template is elegant. It assumes vertical inheritance: parent to offspring, ancestor to descendant, one generation to the next. In biology, where genes flow predominantly from parent to offspring, this assumption largely holds.
In culture, it largely does not. People borrow from neighbors. Traditions blend. Stories converge independently. A shared feature between two folk tales might be common ancestry — or it might be last week's traveler. A borrowed word is not a branch point. A borrowed cooking technique is not speciation.
The tree algorithm arrives in its new domain and does not work. Not because the algorithm is wrong — it is mathematically sound. But because the rules for connecting it to cultural data have not been built. The model has structure. It does not yet have an interface.
The interface
The bottleneck of model transfer is never the template. It is the connection between the template and the domain it enters.
To make the tree work for cultural data, researchers must decide: what counts as a "character change"? When two artifacts share a feature, is it inherited from a common ancestor or borrowed from a neighbor? How do you handle horizontal transmission — influence between branches, not just descent within them? Without these decisions, the algorithm runs and produces trees, but the trees mean nothing. Every output looks formally correct and evidentially empty.
These decisions are not footnotes to the model. They are where the epistemic content lives. The tree algorithm is the skeleton. The rules connecting it to data — character coding conventions, observation models, decisions about what constitutes noise — are everything else.
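The weight carried by these coding decisions can be made concrete. In the sketch below, the same nearest-neighbour comparison, run over the same three artifacts, groups them differently depending on which rim feature the coding scheme records. Everything here — the artifacts, the features, both schemes — is invented for illustration; the point is only that the interface decision, not the distance algorithm, determines which artifacts cluster together.

```python
# Invented artifacts: each rim carries a technique and a motif.
ARTIFACTS = {
    "A": {"rim": "painted-wave",   "handle": "loop"},
    "B": {"rim": "painted-zigzag", "handle": "loop"},
    "C": {"rim": "incised-wave",   "handle": "loop"},
}

def code_by_technique(artifact):
    # Coding 1: the rim character records technique (painted vs incised).
    return (artifact["rim"].split("-")[0], artifact["handle"])

def code_by_motif(artifact):
    # Coding 2: the rim character records motif (wave vs zigzag) instead.
    return (artifact["rim"].split("-")[1], artifact["handle"])

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def nearest(focal, coding):
    # The same "algorithm" throughout: nearest neighbour under Hamming distance.
    coded = {name: coding(feats) for name, feats in ARTIFACTS.items()}
    others = [name for name in ARTIFACTS if name != focal]
    return min(others, key=lambda name: hamming(coded[focal], coded[name]))

print(nearest("A", code_by_technique))  # "B": technique groups A with B
print(nearest("A", code_by_motif))      # "C": motif groups A with C
```

The algorithm never changes; only the coding convention does. Yet the convention alone decides which artifacts end up on the same branch.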
The same bottleneck appears wherever models travel. Network diffusion: physicists' equations for how things spread through connected systems, applied to cultural influence. The model assumes that influence flows through social ties. But correlation between connected people could be homophily — similar people connecting — rather than influence — connected people becoming similar. The interface must separate these. Without it, the model sees influence everywhere, even where there is only clustering.
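A toy simulation makes the confound visible. In the sketch below, tastes are fixed at birth and ties form only between already-similar people, so there is no influence by construction; the population size, similarity threshold, and seed are all arbitrary choices for the illustration. The naive diffusion statistic — connected pairs are more similar than random pairs — still passes.

```python
import random

rng = random.Random(0)

# 200 people, each with a taste assigned once and never updated:
# zero social influence, by construction.
N = 200
taste = [rng.random() for _ in range(N)]

# Homophily: ties form only between people whose tastes are already close.
edges = [(i, j) for i in range(N) for j in range(i + 1, N)
         if abs(taste[i] - taste[j]) < 0.05]

def mean_gap(pairs):
    return sum(abs(taste[i] - taste[j]) for i, j in pairs) / len(pairs)

# The naive reading -- "connected people are similar, so ties must
# transmit influence" -- passes its own test here, wrongly:
random_pairs = [(rng.randrange(N), rng.randrange(N)) for _ in range(1000)]
assert mean_gap(edges) < mean_gap(random_pairs)
```

The data are exactly what a diffusion model expects to see, and the model would report influence everywhere. Only an interface that separates tie formation from tie transmission can tell the two apart.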
Natural selection applied to culture: "fitness" in biology means differential reproductive success. In culture, fitness could mean prestige, utility, memorability, emotional resonance. The concept of selection travels intact. Its explanatory power depends entirely on what you plug into the fitness slot — and that slot is not part of the template. It is part of the interface.
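The slot can be made literal. In this sketch the selection machinery is one function taking `fitness` as a parameter; the two tales, the prestige and memorability proxies, and their numbers are all invented. The template — copying proportional to fitness — is identical in both runs. Which tale sweeps the population is decided entirely by what the interface plugs into the slot.

```python
import random

def select(population, fitness, rng):
    # One generation: each new member copies an existing variant with
    # probability proportional to whatever `fitness` says it is worth.
    weights = [fitness(v) for v in population]
    return rng.choices(population, weights=weights, k=len(population))

# Two invented fillings for the fitness slot -- neither comes from the
# template; both are interface decisions.
def by_prestige(v):     return v["prestige"]
def by_memorability(v): return v["memorability"]

tale_a = {"name": "tale-A", "prestige": 5, "memorability": 1}
tale_b = {"name": "tale-B", "prestige": 1, "memorability": 5}

def run(fitness, seed=0, generations=100):
    rng = random.Random(seed)
    pop = [tale_a] * 50 + [tale_b] * 50
    for _ in range(generations):
        pop = select(pop, fitness, rng)
    return sum(v["name"] == "tale-A" for v in pop) / len(pop)

# Same selection machinery, opposite outcomes:
assert run(by_prestige) > 0.9       # tale-A sweeps under the prestige proxy
assert run(by_memorability) < 0.1   # tale-B sweeps under the memorability proxy
```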
In each case, the formal machinery is domain-general. What makes it domain-specific — what turns an analogy into an explanation — is the set of decisions that connect the model to the data. The interface determines whether the borrowed model has anything to say.
The reshaping
As the interface matures, something happens that is easy to miss. The new domain starts to look like what the model expects.
Cultural data starts to look tree-shaped. Social networks start to look like diffusion channels. Cultural change starts to look like selection. The model's home domain — its assumptions about how inheritance works, how influence spreads, how competition operates — begins to appear in the domain it was borrowed into.
This is not necessarily wrong. Languages really do branch. Some cultural traits really do spread through networks. Some innovations really do compete for adoption. Sometimes the borrowed model tracks genuine structure — joints in the new domain that a model built for the old one was suited to find.
But sometimes the model has reshaped the domain in its own image. The character coding conventions were refined until tree-shaped results emerged. The observation model was tuned until diffusion-like dynamics appeared. The fitness proxy was chosen because it was the one that made selection look operative. The interface was adjusted — over years, across dozens of studies, through the ordinary refinement of method — until the data confirmed the template.
When the interface becomes familiar enough — embedded in graduate training, enshrined in standard methodology, taken for granted in peer review — its assumptions stop looking like choices. They start looking like features of the domain itself. The borrowed structure appears to describe what was always there. The model's home assumptions have colonized the new domain so completely that no one remembers they were imported.
The loss
The entanglement runs both ways. Models reshape domains. But the reverse also happens: fit gets captured by codification, and something is lost in the capture.
A regularity that works gets preserved. A cooking technique becomes a tradition. A posture becomes a norm of propriety. A folk taxonomy becomes a scientific classification. A working interface becomes a standard methodology. The preservation mechanism codifies what was once fluid — and the codification always loses resolution relative to the practice it tries to capture.
The protocol cannot handle every case the experienced practitioner could. The tradition preserves the form but loses the responsiveness to conditions that made the form work. The rule captures the regularity but not the judgment that produced it. Standard methodology preserves the interface but not the awareness that the interface was a choice.
In model transfer, this means: the interface that matures also hardens. Coding conventions that began as provisional choices — recognized as choices, open to revision — become defaults. The decisions that were once visible as assumptions become invisible as standard practice. The interface loses the flexibility that made it productive. A field that cannot revise its interface stalls — not because the template is wrong but because the connection between template and domain has become rigid.
This is the paradox of successful transfer. The better the model works — the more productive the interface, the more results it generates, the more careers it supports — the harder it becomes to question the assumptions the interface encodes. Success makes the interface invisible. And invisible assumptions are the ones that do the most reshaping.
The reflection
How do you know whether a borrowed model is tracking something real or seeing its own reflection?
Change the interface. Use different coding conventions. Apply different observation models. Choose a different fitness proxy. If the results hold — if the pattern reappears under different decisions about what counts — the model is tracking genuine structure. If the results collapse, the model was generating the pattern it appeared to discover.
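That check can be stated in a few lines. In the sketch below, `survives` reruns one placeholder analysis under three coding conventions and returns a verdict only when all three agree; the data, the thresholds, and the "treelike" analysis itself are invented stand-ins for whatever a real study would vary.

```python
def conclusion(data, coding):
    # `coding` is one set of interface decisions: what counts as a signal.
    coded = [coding(x) for x in data]
    return "treelike" if sum(coded) > len(coded) / 2 else "not treelike"

def survives(data, codings):
    results = {conclusion(data, c) for c in codings}
    # One shared verdict: the pattern is robust to the interface.
    # Disagreement: the interface was generating the pattern.
    return results.pop() if len(results) == 1 else None

codings = [lambda x: x > 0.5, lambda x: x > 0.25, lambda x: round(x)]

robust  = [0.9, 0.8, 0.7, 0.2]    # same verdict under every coding
fragile = [0.6, 0.4, 0.3, 0.45]   # verdict flips with the threshold

assert survives(robust, codings) == "treelike"
assert survives(fragile, codings) is None
```

A result that holds across the first dataset's codings was found; a result that exists only under one coding, as in the second, was imposed.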
Progress in fields that borrow heavily is not accumulating truer templates. It is building more reliable, explicit, and revisable interfaces — and knowing the difference between structure that was found and structure that was imposed. A field can advance while its core questions remain open, as long as the connection between its tools and its domain is becoming more honest about what it assumes.
The model changes the domain. The domain captures the model. What travels between them is never just structure. It is also the assumptions the structure carries — and the assumptions the new domain adds on the way in.
This argument will be presented at Model Transfer, Leibniz University Hannover, April 2026.


