Machine learning and deep learning models trained on a source domain often suffer performance degradation when deployed in new target domains, owing to domain shifts arising from differences in data distributions, acquisition conditions, or temporal variations. Domain adaptation addresses this issue by transferring knowledge from labeled source data to unlabeled target data. However, acquiring labels for target-domain samples is often costly or impractical in real-world applications. To improve label efficiency, active domain adaptation (ADA) and active continual learning (ACL) integrate active learning strategies into domain adaptation and continual learning frameworks. ADA selectively queries the most informative target samples for annotation to enhance adaptation performance, while ACL extends this paradigm to sequential settings, enabling models to adapt to evolving data streams while mitigating catastrophic forgetting. This survey provides a systematic review of ADA and ACL, focusing on their methodological advances and applications. We further examine extensions of ADA, including source-free ADA, integration with semi-supervised learning, and advanced techniques for handling challenging adaptation scenarios. In addition, we summarize applications across computer vision, medical imaging, robotics, natural language processing, and scientific and engineering tasks. Finally, we discuss open challenges and future directions, including robust adaptation under complex distribution shifts and reliable semi-supervised adaptation.