Alignment Deconstructed: Questioning the Axioms of Machine Learning Systems

Simon Giustini
Undergraduate Student in Economics & Political Science
BLUE Fellowship
Fall 2026

Background

This project breaks down machine learning systems to analyze how they generate information and examines some of the assumptions implied by these ML architectures. With this understanding, each assumption can be treated as its own alignment problem, allowing a more nuanced approach to the problem as a whole.

The project examines ML systems' use of prediction, feedback, correlation, and identification capabilities, as well as their biases, and focuses on the implications of these concepts for alignment at large.

More scholars