Natural selection encodes learned information in the genome. Learned solutions may be tuned narrowly to past challenges, failing when the environment changes. Alternatively, solutions may be general, capturing the essential structure of a challenge and performing well across variations within the abstract class. For example, a neural system might recognize only the exact outline of a rattlesnake and miss other snakes, or it might recognize the essence of snakeness. How a system generalizes is a fundamental aspect of evolvability, the ability of a system to learn broad solutions to novel challenges. In recent years, machine learning has significantly advanced our understanding of when systems generalize their learned solutions and how they do so. One surprising discovery overturned conventional wisdom about learning. Large systems, with more adjustable parameters than the number of data observations, do not simply memorize the data in the way traditional theory predicts. Instead, systems with more parameters often generalize better than smaller systems. Because natural selection is a learning algorithm, this new theory of generalization applies to biological evolution. Specifically, increasing regulatory complexity and parameterization are associated with greater evolvability in the discovery of general solutions. This link between genomic complexity and generalization may have been a primary driving force in evolutionary history.
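
To make the overparameterization claim concrete, the following minimal numerical sketch (not from the original text) fits minimum-norm least squares on fixed random ReLU features, so the number of adjustable parameters equals the number of features, and compares test error below and above the point where parameters match the training observations. The random-feature model, the target function, and all parameter values are illustrative assumptions; the typical outcome is that test error is worst near the interpolation threshold and improves again as the parameter count grows well beyond it, though exact numbers depend on the seed and setup.

```python
# Stylized demonstration: models with more parameters than data points
# need not merely memorize; the minimum-norm fit can generalize well.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Simple smooth "true" relationship the learner must generalize.
    return np.sin(2 * np.pi * x)

n_train, noise = 20, 0.1
x_train = rng.uniform(-1, 1, n_train)
x_test = rng.uniform(-1, 1, 500)
y_train = target(x_train) + noise * rng.standard_normal(n_train)
y_test = target(x_test)

def random_relu_features(x, n_features, seed=1):
    # Fixed random hidden weights; only the output layer is fit, so the
    # number of adjustable parameters is n_features.
    r = np.random.default_rng(seed)
    w = r.standard_normal(n_features)
    b = r.uniform(-1, 1, n_features)
    return np.maximum(0.0, np.outer(x, w) + b)

for n_features in [5, 10, 20, 40, 100, 1000]:
    phi_train = random_relu_features(x_train, n_features)
    phi_test = random_relu_features(x_test, n_features)
    # pinv returns the minimum-norm solution; when n_features > n_train it
    # interpolates the training data with the smallest parameter norm.
    beta = np.linalg.pinv(phi_train) @ y_train
    test_mse = np.mean((phi_test @ beta - y_test) ** 2)
    regime = "over" if n_features > n_train else "under/critical"
    print(f"{n_features:5d} features ({regime}-parameterized): test MSE = {test_mse:.3f}")
```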