Biological cognition depends on learning structured representations in ambiguous environments. Computational models of structure learning typically frame this as an inference problem, but often overlook the temporally extended dynamics that shape learning trajectories under ambiguity. In this paper, we reframe structure learning as an emergent consequence of constraint-based dynamics. Informed by the literature on the role of constraints in complex biological systems, we develop a constraint-based approach to computational cognitive modelling and provide a proof-of-concept model. The model consists of an ensemble of components, each comprising an individual learning process, whose internal updates are locally shaped by both external observations and system-level relational constraints. This is formalised using Bayesian probability as a description of constraint satisfaction rather than of epistemic inference. Representational structure is not encoded directly in the model equations but emerges over time through the interaction, stabilisation, and elimination of components under these constraints. Through a series of simulations in environments with varying degrees of ambiguity, we demonstrate that the model reliably differentiates the observation space into stable representational categories. We further analyse how global parameters controlling internal constraint strength and initial component precision shape learning trajectories and long-term behavioural alignment with the environment. Finally, we discuss the formal relationship between the present approach and Bayesian inference accounts, and argue that a constraint-based approach offers a conceptually distinct foundation for relating computational models to biological systems.