How do societies retain models that are useful but not maximally legible?

David Austin
PhD Candidate in Computer Science
BLUE Fellowship
2026

Background

Public discourse about AI systems is dominated by anthropomorphic explanations of model behavior. Although such explanations offer little mechanistic insight, they persist because they are cognitively legible and easy to transmit. This project explores the resulting tension between a model's utility and its transmissibility.

What makes a model useful? What makes a model easy to understand, remember, and communicate? Where are these properties in tension and where are they mutually reinforcing? To what extent are they innate features of human cognition versus products of experience and institutional scaffolding? By examining these questions, I aim to understand how cultural and social institutions can preserve accumulated wisdom across changing contexts without becoming dogmatic or overcommitted to fragile abstractions.
