The ability to express broad families of equivariant or invariant functions over attributed graphs is a popular measuring stick for whether graph neural networks should be employed in practical applications. However, it is equally important to find deep local minima of losses (i.e., minima with much smaller loss values than other minima), even when architectures cannot express global minima. In this work we introduce the architectural property of attracting GNN optimization trajectories to local minima as a means of achieving smaller losses. We take first steps toward satisfying this property for losses defined over attributed, undirected, unweighted graphs with a novel architecture, called Universal Local Attractor (ULA). ULA refines each dimension of end-to-end trained node feature embeddings based on graph structure so as to track the optimization trajectories of losses satisfying some mild conditions; the refined dimensions are then linearly pooled to create predictions. We experiment on 10 tasks, ranging from node classification to clique detection, on which ULA is comparable with or significantly outperforms popular alternatives of similar or greater theoretical expressive power.
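The abstract names the overall pipeline (graph-based refinement of each embedding dimension, then linear pooling into predictions) without specifying the refinement rule. The sketch below illustrates one plausible instantiation under stated assumptions: a generic diffusion-style refinement with symmetric normalization and an anchoring coefficient `alpha`, followed by a learned linear pooling matrix `W`. The function name `refine_and_pool` and all parameters are hypothetical illustrations, not ULA's actual update.

```python
import numpy as np

def refine_and_pool(adj, H, W, alpha=0.9, iters=10):
    """Hypothetical sketch of the refine-then-pool pipeline.

    adj : (n, n) adjacency matrix of an undirected, unweighted graph
    H   : (n, d) end-to-end trained node feature embeddings
    W   : (d, k) linear pooling weights over the refined dimensions
    """
    # Symmetrically normalize the adjacency matrix (a common GNN choice;
    # the abstract does not specify ULA's normalization).
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1.0))
    A_hat = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Refine every embedding dimension over the graph structure,
    # anchoring each iteration to the original embeddings.
    X = H.copy()
    for _ in range(iters):
        X = alpha * (A_hat @ X) + (1.0 - alpha) * H

    # Linearly pool the refined dimensions into predictions.
    return X @ W
```

Because the refinement acts independently on each column of `H`, it applies the same structure-aware operator to every embedding dimension, matching the abstract's description of per-dimension refinement followed by linear pooling.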