Frequency, the physical quantity describing the rate at which periodic events occur, offers a crucial perspective for observing and understanding the world through versatile frequency transforms. Built on Fourier analysis, frequency-domain methods initially played a central role in signal processing and have gradually become an indispensable part of deep learning for solving complex problems. Deep learning in the frequency domain (a.k.a. Fourier domain), which we term \textbf{F}requency-principled \textbf{D}eep \textbf{L}earning (FDL), has been widely employed across diverse scenarios owing to its compelling advantages, such as a global receptive field, high computational efficiency, inherent data decomposition, and explainability. Deep neural networks also exhibit distinctive properties when viewed from a frequency-domain perspective, which provides valuable insights for model design and refinement. Despite growing attention to frequency-domain approaches in deep learning and computer vision, the absence of a systematic synthesis makes it difficult to grasp the current landscape, identify core methodologies, recognize open challenges, and chart a course for future research. Moreover, a comprehensive explanation of why frequency-domain methods aid problem-solving is still lacking. This survey addresses this gap by providing a comprehensive and structured overview of frequency-principled vision and learning. Unlike previous reviews that focus on isolated aspects, our work connects and systematizes the field through a unified taxonomy. Specifically, we survey and analyze the existing literature from multiple perspectives: the frequency principle (theory), implementations (algorithms), applications, challenges, and future frontiers of FDL across various tasks.
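The advantages cited above (global receptive field, computational efficiency) rest on the convolution theorem. A minimal, illustrative sketch, not drawn from any specific method in the survey: pointwise multiplication in the Fourier domain equals circular convolution in the signal domain, so a single spectral operation mixes every input position in $O(n \log n)$ time via the FFT.

```python
# Convolution theorem demo: multiplying spectra pointwise is equivalent
# to circular convolution in the signal domain. This is the mechanism
# behind the "global receptive field at low cost" claim for many
# frequency-domain layers. Illustrative only; signal and filter are random.
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)  # input signal
k = rng.standard_normal(n)  # filter spanning the entire signal (global support)

# Direct circular convolution: O(n^2) pairwise products.
direct = np.array(
    [sum(x[j] * k[(i - j) % n] for j in range(n)) for i in range(n)]
)

# Frequency-domain equivalent: FFT, pointwise multiply, inverse FFT -- O(n log n).
spectral = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

# The two computations agree to numerical precision.
assert np.allclose(direct, spectral)
```

A learned frequency-domain layer typically replaces the fixed spectrum of `k` with trainable complex weights, so each forward pass couples all positions globally at FFT cost.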