Foundation-model agents increasingly rely on reusable skills to support tool use, long-horizon planning, and cross-task adaptation. Yet the term "skill" remains ambiguous in the literature, where it may refer to prompt packages, executable workflows, learned routines, or repository artifacts. This ambiguity makes it difficult to compare methods, evaluate progress, and reason clearly about security and governance. We study agent skills as reusable and adaptive units of competence that sit between model capability and situated task execution. The survey first distinguishes skills from nearby constructs such as prompts, tools, memory, and policies. It then organizes the literature along five axes: representation, lifecycle and orchestration, evaluation, security and governance, and application domains. The evidence suggests that skill quality alone is not enough: useful skills also depend on abstraction choices, retrieval and composition mechanisms, ecosystem structure, and infrastructure security. We therefore treat agent skills as a research object in their own right and identify open problems in automatic induction, cross-environment transfer, longitudinal evaluation, and trustworthy sharing in open agent ecosystems.