Decision tree ensembles, such as Random Forests and Gradient Boosting Machines, achieve high predictive accuracy but often suffer from limited transparency due to their structural complexity. This lack of transparency raises interpretability challenges in domains where model understanding, accountability, and trust are essential. Consequently, many interpretability/explainability techniques have been proposed for tree-based ensembles. However, although numerous surveys and overviews address interpretability/explainability in artificial intelligence and machine learning in general, very few focus specifically on interpretability/explainability for tree-based ensembles. This paper provides an overview of recent approaches to interpretability and explainability in decision tree ensembles. We present two categorizations: one based on the kind of technique/architecture used and one based on the level of scope. The former is a unified taxonomy of acquired (or post-hoc) and inherent methods, further analyzed at two additional levels. The latter concerns the distinction between local (or instance-related) and global (or model-related) methods. We additionally survey the interpretability/explainability methods and techniques used in various application domains, such as healthcare, finance, law, and privacy preservation. This overview clarifies the current landscape of interpretable/explainable ensemble learning and explicitly addresses emerging challenges. Ultimately, it aims to support researchers and practitioners in selecting and developing ensemble models that move beyond the traditional accuracy-interpretability trade-off, aligning predictive power with strict regulatory, operational, and domain-specific transparency requirements.