
Quantitative Evaluation of University Elective Sports Courses Using an AHP-FCE Framework: A Case Study of Badminton

Submitted: 13 September 2025
Posted: 15 September 2025


Abstract
Background: Teaching evaluation of university badminton elective courses remains limited in terms of systematization, quantification, and feasibility, particularly given their skill-oriented nature and the challenges of handling fuzzy information. Methods: This study developed and applied an evaluation framework integrating the Analytic Hierarchy Process (AHP) with Fuzzy Comprehensive Evaluation (FCE). The framework included four dimensions—teaching resources, teaching process, student participation, and skill development—with weights determined through expert consultation. Skill development received the highest weight (35.22%). A total of 175 valid student questionnaires were analyzed using AHP-derived weights and the FCE method. Results: Findings indicated that the overall evaluation of the sampled courses reached a “good” level. Among the dimensions, skill development was identified as the most influential factor, while teaching resources served as a critical foundation. Conclusions: The AHP-FCE framework offers a reliable quantitative tool for assessing the teaching quality of badminton elective courses. It underscores the central role of skill development and supports the advancement of evaluation methods toward greater scientific rigor and precision in physical education.

1. Introduction

Badminton, as an integral component of university physical education curricula, plays a critical role in enhancing students’ physical literacy and skill development (Jing, 2024). In recent years, the demand for badminton elective courses has increased with the advancement of sports education reform and the implementation of the “Healthy China” strategy (Gao, 2019). However, existing evaluation systems often lack theoretical rigor and contain overly generalized indicators, making it difficult to capture the specific features of badminton courses (Ye et al., 2024). Traditional evaluations largely rely on subjective judgment and lack robust quantitative tools, resulting in low reliability and limited practical guidance. Moreover, as a skill-dominant course, badminton instruction is highly dependent on facilities, training methods, and student participation. This underscores the urgent need for a scientific and systematic evaluation framework that is aligned with course characteristics and capable of improving teaching practices (Casebolt & Zhang, 2020). This study addresses these challenges by focusing on the evaluation of badminton elective courses and aims to promote greater standardization and professionalization in sports education assessment.
From a theoretical perspective, research on physical education evaluation is grounded in educational measurement and management principles, emphasizing multidimensional and quantitative analysis. The Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE) are widely recognized decision-making tools that have been validated in sports contexts (Li & Wang, 2022; Pengju, 2015; Wang et al., 2023). For instance, Wang and Lu (2018) applied them to assess university physical education quality, while Cai et al. (2022) employed AHP to determine skill-related weights in tennis courses. Policy documents such as the National Medium- and Long-Term Education Reform and Development Plan highlight the importance of strengthening physical education and call for innovative evaluation mechanisms. Guided by these policies and the learning characteristics of badminton, this study constructs a four-dimensional framework—teaching resources, teaching process, student participation, and skill development. Building on Li and Wang’s (2024) four-dimensional model, the framework introduces badminton-specific indicators, such as equipment sufficiency and frequency of multi-shuttle training, thereby enhancing disciplinary relevance. By integrating cross-disciplinary methods, expert interviews, and statistical analysis, the study ensures both theoretical grounding and empirical validity.
The literature indicates that physical education evaluation research is shifting from general to specialized frameworks, yet badminton courses remain underexplored (Tang, 2024). Most existing studies focus on macro-level indicators such as student satisfaction or course design, with insufficient attention to skill-specific characteristics and process management (Peng & Liu, 2024). For example, Özkan et al. (2021) achieved quantification in tennis but did not address badminton, while Li and Wang’s (2024) framework, though valuable, suffers from high generality and weak specificity. The gaps are evident in three areas: (1) disciplinary boundaries that hinder integration of educational and management sciences; (2) methodological limitations, with reliance on subjective scoring rather than integrated models such as AHP-FCE; and (3) lack of empirical validation, as indicator weights are often based on expert opinion without consistency testing. This study seeks to address these deficiencies by constructing a specialized evaluation system for badminton courses that emphasizes both operational feasibility and empirical rigor.
The research adopts an AHP-FCE hybrid model applied to university badminton elective courses. The design proceeds in three stages: (1) literature analysis and semi-structured interviews with 12 experts to establish a four-level indicator hierarchy; (2) weight assignment using AHP, with consistency testing (CR < 0.1) to ensure mathematical rigor; and (3) fuzzy comprehensive evaluation based on questionnaire data (N = 175). Core findings reveal a skill-oriented weighting structure, with skill development carrying the highest weight (35.22%). Within this dimension, key indicators emerged, including the overhead clear success rate (44.16%), serve success rate (29.84%), and doubles coordination (45.78%). In the teaching process dimension, the proportion of multi-shuttle training was particularly influential (56.92%). Overall, the courses evaluated achieved a “good” level. These findings respond directly to identified research gaps, validating both the reliability and practical value of the proposed indicator system.
Theoretically, this study advances the field of physical education evaluation by developing a badminton-specific AHP-FCE model, shifting from general to specialized paradigms. Its contributions include: (1) model innovation by integrating AHP weighting with FCE membership functions to better handle fuzzy data; (2) mechanism insights by highlighting patterns of skill acquisition and the importance of process management; and (3) methodological advancement by providing a replicable framework for other skill-oriented courses. Practically, the system offers a quantitative tool for physical education instructors, supporting the optimization of teaching design, resource allocation, and student motivation. For administrators, weight results (e.g., the threshold effect of facility adequacy at 59.24%) offer evidence-based insights for policy and resource planning, thereby enhancing course quality and decision-making.
The structure of this paper is as follows: Section 2 reviews the relevant literature on AHP, FCE, and physical education evaluation; Section 3 details the selection of evaluation indicators, the AHP weighting procedure, and the empirical analysis using FCE; and Section 4 presents the results, discusses the findings and theoretical contributions, and closes with limitations, practical recommendations, and directions for future research. This organization ensures a coherent narrative that enables readers to grasp the study comprehensively.

2. Literature Review

The evaluation of teaching quality in university physical education courses has long been a central issue at the intersection of sports pedagogy and management science (Han & Wan, 2025). Badminton, as a widely popular and comprehensive exercise, has become a common elective in universities (Shen & Espeso, 2025). However, systematic evaluation of its instructional effectiveness remains limited (W. Cai et al., 2025). Existing studies often rely on qualitative descriptions or single-dimension satisfaction surveys, lacking multi-indicator, hierarchical, and quantifiable evaluation systems (Bu, 2024). This deficiency not only reduces the accuracy of teaching feedback but also impedes the optimization of course design and the rational allocation of resources.
Constructing a scientific, structured, and operable evaluation model is therefore essential for improving the quality of badminton elective courses. This study focuses on integrating the Analytic Hierarchy Process (AHP) with Fuzzy Comprehensive Evaluation (FCE) to systematically measure multidimensional performance by combining qualitative and quantitative approaches. This integration addresses existing limitations related to comprehensiveness, rationality of weight assignment, and clarity of result interpretation (Wei, 2025).
In teaching evaluation, AHP has been widely adopted for its systematic and hierarchical advantages in handling multi-criteria decision-making problems (Fiqih Satria, 2022). Proposed by Saaty (1980), AHP constructs judgment matrices, calculates weight vectors, and applies consistency tests, enabling the quantification of qualitative indicators and prioritization of their importance (Mendoza et al., 2019). It is particularly suited for evaluations involving hierarchical relationships and subjective judgments. Recent studies have applied AHP in assessing sports teaching environments, evaluating teacher competencies, and optimizing course design, confirming its reliability and adaptability in structured decision-making (Jianbo & Dandan, 2020).
Despite these strengths, AHP is limited in addressing fuzziness and uncertainty, which are inherent in educational assessments. FCE, grounded in fuzzy mathematics, complements AHP by converting imprecise or linguistic evaluations into quantitative results (Emrouznejad & Ho, 2017). Its application in education has demonstrated strong explanatory power, especially in contexts with multiple layers, numerous factors, and high subjectivity (Ayca & Hasan, 2017). Recently, scholars have combined AHP with FCE in fields such as physical education course evaluation (Press, 2024) and practical training management (Wu, 2025), confirming the integrated model’s ability to enhance comprehensiveness and robustness of results.
This study builds on these established approaches by employing AHP to determine indicator weights and FCE for comprehensive evaluation. The modular design of this framework not only provides a solid theoretical basis and wide empirical validation but also allows for adjustment and extension according to specific course characteristics.
Although AHP and FCE originated in operations research and management science, their cross-disciplinary applications have proven both adaptable and insightful. In engineering management, environmental assessment, and service quality evaluation, the AHP-FCE model has been used to address complex multi-attribute decision-making challenges characterized by diverse indicators, fuzzy judgments, and hierarchical structures (Xu et al., 2025). These challenges are highly analogous to those in educational evaluation.
Such successful practices from adjacent fields offer two key lessons for this study: (1) systematic and hierarchical design can ensure that evaluation dimensions are comprehensive without redundancy; and (2) established methods of designing fuzzy membership functions and evaluation sets can improve the model’s validity and credibility in handling uncertainty and linguistic variables.
In addition, emerging trends in educational data science, such as multi-model fusion approaches (e.g., machine learning for weight optimization or result validation), present future opportunities for extending this research. While the current study is grounded in the classical AHP-FCE framework, these perspectives underscore its forward-looking and flexible potential.
In summary, the literature reveals a lack of systematic, quantitative, and practical evaluation models for badminton elective courses. The integration of AHP and FCE offers both strong theoretical grounding and extensive cross-disciplinary validation. Its hierarchical weighting and fuzzy evaluation capabilities are particularly well suited to the multifactorial, multi-level, and uncertainty-rich nature of educational decision-making. Thus, the adoption of the AHP-FCE model in this study is both reasonable and reliable, aiming to advance the evaluation of public sports courses toward greater scientific rigor, precision, and systematic development.

3. Research Methods

3.1. Construction of the Evaluation Index System Based on AHP

3.1.1. Selection and Determination of Evaluation Indicators

To address the lack of theoretical frameworks for evaluating badminton-specific courses, this study focused on four core dimensions of university badminton elective courses: teaching resources, teaching process, student participation, and skill development. Indicators with weak relevance to badminton, such as “online teaching quality,” were excluded. The rationale lies in the skill-oriented nature of badminton instruction, where teaching effectiveness depends heavily on facilities and equipment, training methods, student engagement, and mastery of techniques. These four dimensions collectively capture the critical aspects of course implementation.
This study adopts the AHP-FCE model to integrate both subjective and objective data, a method that has demonstrated notable advantages in physical education evaluation. For example, Zhao et al. (2024) employed the model to quantitatively assess the quality of physical education in universities in Shaanxi Province, confirming its ability to operationalize fuzzy concepts such as “teaching quality.” Similarly, Xing et al. (2018) applied AHP to a tennis course, assigning weights to skill assessment indicators (e.g., a 30% weight for “serve success rate”), offering a quantifiable paradigm that informed the evaluation of technical performance in this study.
The construction of the indicator system drew upon established frameworks in the field. Li and Wang (2024) proposed a four-dimensional model consisting of teaching team, teaching content, teaching implementation, and teaching outcomes, which provided an important reference for the hierarchical structure of this study. Building on this foundation and considering the unique characteristics of badminton, the present study added sport-specific indicators such as equipment adequacy and the frequency of multi-shuttle training, thereby enhancing the specificity and operability of the evaluation system.
The limitations of current evaluation systems—including disciplinary fragmentation and methodological constraints—underscore the need for an evidence-based paradigm tailored to badminton courses. This research integrates perspectives from sports pedagogy, management science, and educational technology to design an AHP-FCE evaluation model. This approach extends the trend of applying quantitative tools in physical education assessment while innovatively broadening its applicability through cross-disciplinary methods.
To ensure both theoretical and empirical robustness, the research team conducted semi-structured interviews (via email and face-to-face communication) with 12 university physical education experts. Through multiple rounds of indicator screening, feedback collection, and scoring, a systematic set of evaluation indicators for university badminton elective courses was finalized (Table 1). This process ensured the organic integration of theoretical construction and practical validation.

3.1.2. Determination of Weight Coefficients for Evaluation Indicators

This study employed the Analytic Hierarchy Process (AHP) to assign scientific weights to the evaluation index system. By constructing a hierarchical structure model, expert judgments were systematically transformed into an analytical process. Specifically, a complete index system consisting of primary, secondary, and tertiary indicators was established. Based on expert scoring, pairwise comparison matrices were constructed, from which the relative importance among indicators at each level was calculated. Consistency tests were then conducted to ensure logical soundness. This approach integrates qualitative analysis with quantitative computation, deriving indicator weights through the geometric mean (root) method. The final output is a mathematically rigorous comprehensive evaluation model that provides a quantitative basis for development prediction and process control.
(1) Construction of the judgment matrix
The judgment matrix is a fundamental component of AHP. In this matrix, elements at the same level are compared in pairs, and numerical values are assigned to reflect their relative importance. This study adopted Saaty’s 1–9 scale method for expert scoring. The scale of relative importance and its interpretation are presented in Table 2.
By comparing each pair of elements according to the scale in Table 2, an n-order pairwise comparison matrix (A) can be constructed.
The judgment matrix of order $n$ takes the form:

$$A_{n \times n} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$
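For illustration, a reciprocal judgment matrix can be assembled from upper-triangle Saaty scores as in the minimal Python sketch below; the function name and the example scores are hypothetical and do not reproduce the experts' actual judgments.

```python
import numpy as np

def build_judgment_matrix(upper):
    """Assemble a reciprocal Saaty judgment matrix from upper-triangle scores.

    `upper` maps index pairs (i, j), i < j, to the 1-9 scale value a_ij;
    the diagonal is 1 and the lower triangle is filled with reciprocals.
    """
    n = max(max(pair) for pair in upper) + 1
    A = np.ones((n, n))
    for (i, j), score in upper.items():
        A[i, j] = score
        A[j, i] = 1.0 / score
    return A

# Illustrative judgments for four primary indicators (not the experts' actual scores)
A = build_judgment_matrix({(0, 1): 1/2, (0, 2): 1/2, (0, 3): 1/3,
                           (1, 2): 1,   (1, 3): 1/2, (2, 3): 1/2})
print(A)
```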
(2) Aggregation of Expert Matrices
The geometric mean method was employed to aggregate the scoring matrices provided by the m experts (k = 1, 2, …, m). Specifically, the corresponding elements of the individual expert matrices were multiplied element-wise, and the m-th root of each product was taken to obtain a single aggregated matrix $\bar{A}$. The formula is as follows:

$$\bar{a}_{ij} = \left( \prod_{k=1}^{m} a_{ij}^{(k)} \right)^{1/m}$$
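A minimal sketch of this aggregation step (a NumPy implementation assumed here for illustration, not taken from the paper):

```python
import numpy as np

def aggregate_expert_matrices(matrices):
    """Element-wise geometric mean of m expert judgment matrices (all n x n, entries > 0)."""
    stacked = np.stack(matrices)                 # shape (m, n, n)
    return np.exp(np.log(stacked).mean(axis=0))  # m-th root of the element-wise product
```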
(3) Calculation of Relative Weights of the Judgment Matrix
The geometric mean method (root method) is applied to the aggregated judgment matrix to calculate the weights, with the formula as follows:

$$W_i = \frac{\left( \prod_{j=1}^{n} \bar{a}_{ij} \right)^{1/n}}{\sum_{i=1}^{n} \left( \prod_{j=1}^{n} \bar{a}_{ij} \right)^{1/n}}, \quad i = 1, 2, \ldots, n$$
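A corresponding sketch of the root-method weight calculation (again an assumed NumPy implementation, not the authors' code):

```python
import numpy as np

def ahp_weights(A):
    """Relative weights of an n x n judgment matrix via the geometric mean (root) method."""
    geo = np.prod(A, axis=1) ** (1.0 / A.shape[0])  # geometric mean of each row
    return geo / geo.sum()                          # normalize so the weights sum to 1
```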
(4) Consistency of the Judgment Matrix
In practice, experts may produce inconsistent judgments when performing pairwise comparisons of indicators. Therefore, it is necessary to conduct a consistency test on the constructed judgment matrix to ensure the rationality of the assigned weights. In the literature, the Consistency Ratio (CR) is commonly used as the standard for evaluating matrix consistency. CR is defined as the ratio of the Consistency Index (CI) to the Random Consistency Index (RI). If CR<0.1, the matrix is considered consistent, and no modifications are required. Otherwise, experts are asked to revise the judgments, and the process is repeated until CR<0.1 is achieved, ensuring reliable weight estimation. The calculation of CR is expressed as follows:
$$CR = \frac{CI}{RI} = \frac{\lambda_{\max} - n}{(n - 1)\, RI}$$
Here $\lambda_{\max}$ is the maximum eigenvalue of the judgment matrix, calculated as shown in Equation (5), where $\bar{A}$ is the aggregated judgment matrix, $W$ is the weight vector, and $[\bar{A}W]_i$ is the $i$-th component of the vector $\bar{A}W$:
$$\lambda_{\max} = \sum_{i=1}^{n} \frac{[\bar{A}W]_i}{n\, W_i}$$
The Random Consistency Index (RI) depends on the order of the judgment matrix. The corresponding RI values for matrices of different orders are presented in Table 3.
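The consistency test can be scripted directly from the formulas above; the sketch below (an assumed NumPy implementation) encodes the RI values of Table 3 and returns λmax, CI, and CR:

```python
import numpy as np

# Random Consistency Index for matrix orders 1-12 (Table 3)
RI = {1: 0.0, 2: 0.0, 3: 0.52, 4: 0.89, 5: 1.12, 6: 1.26,
      7: 1.36, 8: 1.41, 9: 1.46, 10: 1.49, 11: 1.52, 12: 1.54}

def consistency_ratio(A, w):
    """Return (lambda_max, CI, CR) for an n x n judgment matrix A (n >= 3) and its weights w."""
    n = A.shape[0]
    lam_max = float(np.sum(A @ w / (n * w)))  # lambda_max = sum_i [A w]_i / (n * w_i)
    ci = (lam_max - n) / (n - 1)              # consistency index
    cr = ci / RI[n]                           # accept the matrix if CR < 0.1
    return lam_max, ci, cr
```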

3.1.3. Analysis of Indicator Weights

For the evaluation of university badminton elective courses, the four primary indicators—Teaching Resources (A), Teaching Process (B), Student Participation (C), and Skill Development (D)—were aggregated into a matrix. The geometric mean of the pairwise comparison scores provided by 12 experts for each primary indicator was calculated to obtain the aggregated matrix, as shown in Table 4.
Subsequently, using formulas (3), (4), and (5), the relative weights of the primary indicators were computed, and the consistency of the judgment matrix was tested. The resulting weight values and consistency indices are presented in Table 5.
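To make this computation reproducible, the standalone Python sketch below (illustrative, not the authors' code) applies the root-method weights and the consistency test to the aggregated matrix in Table 4; up to rounding it returns the Table 5 values (weights of approximately 0.1788, 0.2486, 0.2205, and 0.3522, with λmax ≈ 4.0122 and CR ≈ 0.0046).

```python
import numpy as np

# Aggregated pairwise comparison matrix of the primary indicators (Table 4)
A = np.array([[1.0,    0.6552, 0.7628, 0.5925],
              [1.5263, 1.0,    1.1247, 0.6444],
              [1.311,  0.8891, 1.0,    0.5875],
              [1.6877, 1.5518, 1.7021, 1.0]])

geo = np.prod(A, axis=1) ** (1 / 4)   # row geometric means
w = geo / geo.sum()                   # weights ~ [0.1788, 0.2486, 0.2205, 0.3522]

lam_max = np.sum(A @ w / (4 * w))     # ~ 4.0122
ci = (lam_max - 4) / 3                # ~ 0.0041
cr = ci / 0.89                        # RI = 0.89 for n = 4, so CR ~ 0.0046 < 0.1
print(w.round(4), round(lam_max, 4), round(cr, 4))
```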
Similarly, following the same approach and computational procedures used to determine the weights of the first-level indicators, the sub-weights of all secondary and tertiary indicators were calculated. This process yielded the final integrated weight system for the evaluation of badminton elective courses in universities. The results, including the sub-weights and overall weights of each indicator, are presented in Table 6.
The distribution of weights demonstrates a clear gradient. Specifically, the skill improvement dimension accounts for 35.2%, followed by teaching process (24.9%), student engagement (22.1%), and teaching resources (17.8%), highlighting the skill-oriented nature of the evaluation system for university badminton elective courses. Within the skill improvement dimension, technical indicators represent the core component (49.06%), with key observation points including clear shot accuracy (44.16%), serve success rate (29.84%), and doubles coordination (45.78%). This weight distribution aligns closely with the open-skill acquisition characteristics inherent to badminton.
In the teaching process dimension, innovative methods such as multi-shuttle training (56.92%) and scientific grouping strategies (43.08%) strengthen dynamic management, with methodological weight (45.42%) significantly exceeding that of course design (39.69%) and classroom management (14.88%). The student engagement mechanism is structured as a three-dimensional observation model: emotional engagement (40.22%) emphasizing competition participation (40.05%), behavioral engagement (35.15%) focusing on extracurricular practice duration (25.32%), and cognitive engagement (24.63%) centered on tactical understanding (44.85%).
It is noteworthy that although the teaching resources dimension carries the lowest overall weight, foundational elements such as teacher professional qualifications (44.99%) and court compliance rate (59.24%) exert threshold effects critical to system operation. By integrating process-oriented and outcome-oriented indicators, the evaluation system establishes a dynamic monitoring model that combines the specific characteristics of badminton with broader educational objectives.

3.2. Empirical Study of Fuzzy Comprehensive Evaluation (FCE)

3.2.1. Principles of Fuzzy Comprehensive Evaluation

The fuzzy comprehensive evaluation method (FCE), grounded in fuzzy mathematics, is a multi-criteria decision-making approach that reduces multidimensional indicators through the principle of maximum membership degree. By applying fuzzy relational synthesis, FCE transforms qualitative assessments into quantitative analyses. It should be emphasized that the scientific determination of the weight set is a prerequisite for ensuring the validity of FCE.

3.2.2. Empirical Process of Fuzzy Comprehensive Evaluation

(1) Establishment of the Fuzzy Comprehensive Evaluation
Let the first-level factor set be U = {UA, UB, UC, UD}, with second-level factor sets such as UA = {UA1, UA2, UA3} and third-level factor sets such as UA1 = {UA11, UA12, UA13}. The evaluation set is defined as V = {Excellent, Good, Medium, Pass, Fail}, and the weight set is defined as Wi = {a1, a2, a3, ...}.
(2) Construction of the Fuzzy Judgment Matrix and Determination of Membership Degrees
A questionnaire survey was conducted using a random sampling method among students enrolled in badminton elective courses at a local university. A total of 200 questionnaires were distributed, and 175 valid responses were recovered, yielding an effective response rate of 87.5%. The frequency distribution of each tertiary indicator against the evaluation set V was calculated to obtain the initial quantified evaluation values. These initial evaluation scores are presented in Table 7.
Based on the evaluation set, each tertiary indicator was assessed using the membership degree F, defined as the proportion of respondents assigning a given rating to the total number of valid responses. After quantification, the membership degrees were used to construct the corresponding fuzzy relation matrix.
$$F = \begin{pmatrix} F_{11} & F_{12} & \cdots & F_{15} \\ F_{21} & F_{22} & \cdots & F_{25} \\ \vdots & \vdots & \ddots & \vdots \\ F_{n1} & F_{n2} & \cdots & F_{n5} \end{pmatrix}$$
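As a brief illustration (a hypothetical Python sketch assuming the 175 valid questionnaires), membership degrees are simply rating frequencies divided by the number of valid responses; applying this to indicator A11 from Table 7 reproduces its row in Table 8 after rounding.

```python
import numpy as np

def membership_degrees(counts, n_valid=175):
    """Rating frequencies divided by the number of valid questionnaires."""
    return np.asarray(counts, dtype=float) / n_valid

# Indicator A11 "Teacher qualifications" from Table 7: 41 / 82 / 33 / 19 / 0 of 175 responses
print(membership_degrees([41, 82, 33, 19, 0]).round(2))  # -> [0.23 0.47 0.19 0.11 0.  ]
```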
(3) Calculation of Comprehensive Evaluation Results
The first step of evaluation was to calculate the membership degree of each indicator by dividing the number of respondents selecting each rating in Table 7 by the total sample size. The aggregated results are presented in Table 8.
The second stage of the fuzzy evaluation was conducted by applying a weighted average operator to the membership degrees obtained for the tertiary indicators. Combining these results with the AHP-determined weights, the secondary indicators were evaluated through fuzzy aggregation. In the third stage, following the principle of maximum membership degree, the largest membership value in the fuzzy evaluation results was taken as the final evaluation.
The second-level fuzzy evaluation matrices $U_1$–$U_4$, corresponding to Teaching Resources (A), Teaching Process (B), Student Participation (C), and Skill Development (D), respectively, are:

$$U_1 = \begin{pmatrix} 0.2141 & 0.4881 & 0.1923 & 0.1044 & 0.0000 \\ 0.3779 & 0.2687 & 0.2419 & 0.1115 & 0.0000 \\ 0.3590 & 0.3097 & 0.2366 & 0.0968 & 0.0070 \end{pmatrix}$$

$$U_2 = \begin{pmatrix} 0.4454 & 0.2331 & 0.2263 & 0.0952 & 0.0027 \\ 0.1743 & 0.4743 & 0.2143 & 0.1371 & 0.0000 \\ 0.3136 & 0.3460 & 0.2129 & 0.1229 & 0.0064 \end{pmatrix}$$

$$U_3 = \begin{pmatrix} 0.3307 & 0.3142 & 0.2162 & 0.1323 & 0.0034 \\ 0.2650 & 0.3790 & 0.2260 & 0.1283 & 0.0026 \\ 0.1621 & 0.4868 & 0.2195 & 0.1266 & 0.0055 \end{pmatrix}$$

$$U_4 = \begin{pmatrix} 0.1623 & 0.4862 & 0.2203 & 0.1268 & 0.0044 \\ 0.3141 & 0.3264 & 0.2224 & 0.1246 & 0.0065 \\ 0.4226 & 0.2194 & 0.2215 & 0.1365 & 0.0000 \end{pmatrix}$$
Analysis of the UA matrix indicates that, across the dimensions of teaching resources, teaching process, student engagement, and skill improvement, the evaluation of university badminton elective courses consistently reached a “Good” level or higher. Furthermore, examination of the overall U matrix demonstrates that the entire evaluation indicator system can be characterized as operating at a “Good” level.
$$U_A = \begin{pmatrix} 0.3076 & 0.3669 & 0.2207 & 0.1049 & 0.0018 \\ 0.3026 & 0.3594 & 0.2188 & 0.1183 & 0.0020 \\ 0.2627 & 0.3828 & 0.2554 & 0.1522 & 0.0036 \\ 0.2672 & 0.3776 & 0.2211 & 0.1536 & 0.0639 \end{pmatrix}$$

$$U = \begin{pmatrix} 0.2823 & 0.3476 & 0.2280 & 0.1358 & 0.0753 \end{pmatrix}$$
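The following minimal Python sketch (illustrative, not the authors' computation) shows the weighted-average operator and the maximum-membership reading for the Teaching Resources dimension: combining $U_1$ with the second-level weights of A1–A3 from Table 6 reproduces, up to rounding, the first row of $U_A$ and yields a “Good” rating.

```python
import numpy as np

grades = ["Excellent", "Good", "Medium", "Pass", "Fail"]

# Second-level evaluation matrix U1 (Teaching Resources) and the AHP weights
# of its second-level indicators A1-A3 (Table 6)
U1 = np.array([[0.2141, 0.4881, 0.1923, 0.1044, 0.0000],
               [0.3779, 0.2687, 0.2419, 0.1115, 0.0000],
               [0.3590, 0.3097, 0.2366, 0.0968, 0.0070]])
w_A = np.array([0.4001, 0.3465, 0.2534])

B_A = w_A @ U1                       # weighted-average operator: B = W . R
print(B_A.round(4))                  # ~ [0.3076 0.3669 0.2207 0.1049 0.0018], first row of U_A
print(grades[int(np.argmax(B_A))])   # maximum membership degree -> "Good"
```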

4. Results and Discussion

4.1. AHP Weight Analysis

The AHP method was used to assign scientific weights to the evaluation indicators. Based on scoring data from 12 experts, the judgment matrix was constructed and weights were calculated. The weights for the first-level indicators are as follows (Table 5): Teaching Resources (A) 0.1788, Teaching Process (B) 0.2486, Student Engagement (C) 0.2205, and Skill Improvement (D) 0.3522. The consistency ratio (CR) was 0.0046 (<0.1), indicating logical consistency and a reasonable distribution of weights.
Weights for secondary and tertiary indicators were calculated using the same methodology, with detailed results shown in Table 6. The distribution indicates that the Skill Improvement dimension carries the highest weight (35.22%), with technical indicators (D1) accounting for 0.4906. Key tertiary indicators include Clear Shot Accuracy (D12) with a global weight of 0.0763, Serve Success Rate (D11) at 0.0516, and Doubles Coordination (D33) at 0.0410. Within the Teaching Process dimension, Teaching Methods (B2) weigh 0.4542, and the Proportion of Multi-Shuttle Training (B21) carries a global weight of 0.0643. For Student Engagement, Affective Engagement (C2) has a weight of 0.4022, with Competition Participation (C22) contributing 0.0355. Although Teaching Resources has the lowest weight overall, foundational indicators such as Teacher Professional Qualification (A11, 0.0322) and Court Compliance Rate (A21, 0.0367) play a critical threshold role.

4.2. Fuzzy Comprehensive Evaluation Results

Data were collected via questionnaire (N=200; valid responses = 175; 87.5% response rate) to establish the fuzzy judgment matrix. Membership degrees for tertiary indicators are summarized in Table 8, reflecting distributions across the evaluation set V={Excellent, Good, Fair, Pass, Fail}. For example, Teacher Professional Qualification (A11) has membership degrees: Excellent 0.23, Good 0.47, Fair 0.19, Pass 0.11, Fail 0.00; Clear Shot Accuracy (D12) shows: Excellent 0.14, Good 0.50, Fair 0.24, Pass 0.11, Fail 0.01.
Secondary and first-level indicators were then evaluated using a weighted average operator, combining the membership degrees with AHP-derived weights. The final fuzzy comprehensive evaluation vector U indicates the overall results: Excellent 0.2823, Good 0.3476, Fair 0.2280, Pass 0.1358, Fail 0.0753. According to the maximum membership principle, the overall teaching evaluation for the course falls at a “Good” level (highest membership degree = 0.3476). Each first-level indicator also exhibits a dominant “Good” rating: Teaching Resources (UA) 0.3669, Teaching Process (UB) 0.3594, Student Engagement (UC) 0.3828, Skill Improvement (UD) 0.3776.

4.3. Discussion

Weight analysis reveals that Skill Improvement (35.22%) holds the highest weight, consistent with the skill-dominant nature of badminton courses (Singer, 2000). Within technical indicators, Clear Shot Accuracy and Doubles Coordination stand out, reflecting the open-skill learning principle where mastery of technique and tactical execution is emphasized (Xing, 2017). The Teaching Process dimension is second in weight (24.86%), with multi-shuttle training and grouping strategy carrying substantial influence, underscoring the importance of dynamic teaching methods in skill acquisition (Li, 2022). Student Engagement (22.05%) is modeled in three dimensions—behavioral, affective, and cognitive—highlighting student agency in accordance with constructivist learning theory. Teaching Resources is the lowest weighted dimension (17.88%), yet critical thresholds in teacher qualifications and facility compliance suggest that resource insufficiency can significantly affect overall outcomes (Zhao, 2023).
Fuzzy comprehensive evaluation indicates an overall “Good” rating, with variations across dimensions: Student Engagement and Skill Improvement exhibit higher “Good” membership degrees (0.3828 and 0.3776), whereas Teaching Resources and Teaching Process are slightly lower (0.3669 and 0.3594). This discrepancy may reflect uneven resource allocation or differences in instructional implementation, highlighting areas for course optimization in resource support and process management.
The results show partial consistency and divergence with existing literature. Weight distributions align with Zhao et al. (2024), emphasizing skill output, but this study incorporates badminton-specific indicators such as multi-shuttle training frequency, enhancing domain specificity. Compared with Xing et al. (2018) in tennis, skill indicators in this study are weighted higher (35.22% vs. 30%), reflecting badminton’s higher technical precision demands. Li and Wang’s (2024) four-dimension framework (Course Team, Content, Implementation, Effectiveness) aligns well with this study’s dimensions (Teaching Resources, Teaching Process, Student Engagement, Skill Improvement), supporting the generalizability of educational evaluation frameworks. However, by integrating AHP-FCE, this study surpasses traditional qualitative approaches, achieving quantitative depth.
The “Good” evaluation aligns with most empirical studies of university physical education courses (Wang, 2020), while a low “Fail” membership (0.0753) indicates overall high course quality, potentially due to resource advantages in the sampled institution. A notable divergence is the relatively low weight for Teaching Resources, whereas some studies (Smith, 2018) emphasize foundational resources, likely due to differences in cultural context or course type (elective vs. major courses).
The findings support the initial hypothesis that the AHP-FCE model can effectively evaluate badminton elective course teaching quality. Weight assignments passed consistency checks, fuzzy evaluation outputs were reasonable, and results aligned with expert opinions and student feedback. The model successfully integrates multidisciplinary methods (Physical Education, Management) and provides an evidence-based evaluation paradigm.
In practice, the weight results can guide course design priorities: Skill Improvement should remain the core focus, increasing the emphasis on Clear Shot and Doubles training; the Teaching Process should optimize multi-shuttle training and grouping strategies. Policymakers can use the system for resource allocation, such as strengthening teacher training and facility maintenance. Theoretically, the study extends AHP-FCE applications in physical education, demonstrating its efficacy in handling fuzzy concepts and offering reference for evaluating other skill-based courses.
Limitations include a sample from a single local university (N=175), which may affect generalizability, and a limited number of experts (12), though consistency checks were performed. Future studies could expand the sample across multiple institutions, include longitudinal data to track course effects, and explore machine learning methods to optimize weight calculations and improve model dynamism.

Funding

This research received joint funding from the Bijie Federation of Social Sciences Circles and Guizhou University of Engineering Science [Grant Number BSLB-202402].

Conflicts of Interest

The authors declare that there are no competing interests.

References

  1. Ayca, C., & Hasan, K. (2017). An application of fuzzy analytic hierarchy process (FAHP) for evaluating students project. Educational Research and Reviews, 12(3), 120–132. [CrossRef]
  2. Bu, W. (2024). Construction of multi-object evaluation index tutoring system for physical education and teaching based on intelligent CAD. Computer-Aided Design and Applications, 211–223. [CrossRef]
  3. Cai, Q., Cheng, Y., & Ke, Y.-J. (2022). Construction of evaluation index system for training quality of high-level tennis team. Frontiers in Psychology, 13. [CrossRef]
  4. Cai, W., Hengsuko, E., & Sukityarn, P. (2025). A study on the current situation for the badminton elective course at minnan normal university. UBRU International Journal Ubon Ratchathani Rajabhat University, 5(2), 209–220.
  5. Casebolt, K., & Zhang, P. (2020). An authentic badminton game performance assessment rubric. Strategies, 33(1), 8–13. [CrossRef]
  6. Emrouznejad, A., & Ho, W. (2017). Fuzzy analytic hierarchy process. Chapman and Hall/CRC. [CrossRef]
  7. Fiqih Satria, M. T. I. (2022). Sistem pendukung keputusan penilaian kinerja guru terbaik pada min kedondong menggunakan ahp (analytic hierarchy process). https://www.academia.edu/86668317/Sistem_Pendukung_Keputusan_Penilaian_Kinerja_Guru_Terbaik_Pada_Min_Kedondong_Menggunakan_Ahp_Analytic_Hierarchy_Process_.
  8. Gao, H. (2019). Research on the reform and development of public physical education in colleges and universities under the background of ‘healthy china’ strategy. Advances in Higher Education, 3(2), 55–58. [CrossRef]
  9. Han, K., & Wan, J. (2025). Evaluation of sports teaching quality in universities based on fuzzy decision support system. Scientific Reports, 15, 30392. [CrossRef]
  10. Jianbo, W., & Dandan, Y. (2020). Evaluation method of physical education teaching in higher vocational colleges based on analytic hierarchy process. 2020 13th International Conference on Intelligent Computation Technology and Automation (ICICTA), 104–107. [CrossRef]
  11. Jing, F. (2024). Испoльзoвание средств бадминтoна в физическoм вoспитании студентoв кнр. Вестник Пoлoцкoгo Гoсударственнoгo Университета. Серия E. Педагoгические Науки, 1, 54–57. [CrossRef]
  12. Li, D., & Wang, X. (2022). Risk assessment of large-scale sports events based on fuzzy analytic hierarchy process. Journal of Computational Methods in Sciences and Engineering, 22(3), 777–790. [CrossRef]
  13. Li, T., & Wang, J. (2024). Research on the evaluation of red culture teaching in college physical education courses based on the AHP-FCE model. Bulletin of Sport Science & Technology, 32(7), 191–194, 215. [CrossRef]
  14. Mendoza, A., Solano, C., Palencia, D., & Garcia, D. (2019). Aplicación del proceso de jerarquía analítica (AHP) para la toma de decisión con juicios de expertos. Ingeniare. Revista Chilena De Ingeniería, 27(3), 348–360. [CrossRef]
  15. Özkan, B., Karasan, A., & Kaya, İ. (2021, July 1). A fuzzy based performance model for the assessment of individual sport branches: A case study for tennis players. EBSCOhost. https://openurl.ebsco.com/contentitem/gcd:152390009?sid=ebsco:plink:crawler&id=ebsco:gcd:152390009.
  16. Peng, J., & Liu, L. (2024). An empirical analysis of a theoretical model of satisfaction with university physical education courses. Proceedings of the 2024 Guangdong-Hong Kong-Macao Greater Bay Area International Conference on Education Digitalization and Computer Science, 69–74. [CrossRef]
  17. Pengju, M. (2015). Study on the badminton athlete selection evaluation index system based on AHP. Hubei Sports Science. https://consensus.app/papers/study-on-the-badminton-athlete-selection-evaluation-index-pengju/119d1d4f0ac85238b7a614680f21bc4d/.
  18. Press, C. (2024). A methodological study on the quality assessment of online-offline blended teaching in physical education courses using the assignment approach. Journal of Combinatorial Mathematics and Combinatorial Computing, Volume 127b, 2439–2454. [CrossRef]
  19. Shen, Y., & Espeso, L. (2025). Impact of badminton course on the physical, emotional and social wellness status of the college students. International Journal of Education and Humanities, 18(1), 132–135. [CrossRef]
  20. Tang, K. (2024). Construction of quality evaluation system for blended physical education teaching in colleges and universities. Applied Mathematics and Nonlinear Sciences, 9(1). [CrossRef]
  21. Wang, D., & Lu, W. (2018, June 9). Research on the development of university sports industry based on fuzzy comprehensive evaluation method. 2017 3rd International Conference on Innovation Development of E-commerce and Logistics. https://webofproceedings.org/proceedings_series/article/artId/1953.html.
  22. Wang, D., Wang, S., Hou, J., & Yin, M. (2023, September 30). Construction of a sport-specific strength and conditioning evaluation index system for elite male wheelchair badminton athletes by the delphi method. [CrossRef]
  23. Wei, Z. (2025). AHP and fuzzy evaluation methods for improving cangzhou honey date supplier performance management. International Journal of Advanced Computer Science and Applications, 16(4). [CrossRef]
  24. Wu, L. (2025). Evaluation model construction of civic and political teaching quality of physical education courses oriented to lifelong learning–based on hierarchical analysis method and fuzzy comprehensive evaluation. Journal of Combinatorial Mathematics and Combinatorial Computing, Volume 127b. [CrossRef]
  25. Xing, Y., Ma, Y., Cui, J., & Wang, L. (2018). Construction of the evaluation system for tennis teaching in colleges and universities. Sports and Cultural Goods and Technology, 13, 125–126.
  26. Xu, R., Zhong, G., Li, L., Wang, G., & Xie, C. (2025). Evaluation of Smart Highway Operation and Maintenance Risk: Based on AHP-FCE Model. Journal of Advanced Transportation, 2025(1), 2036525. [CrossRef]
  27. Ye, B., Zhu, H., Yang, Z., He, Z., Liu, G., Pan, H., & Guo, H. (2024). Construction and analysis of the physical fitness evaluation index system for elite male singles badminton players: Based on delphi and AHP methods. Life, 14(8), 944. [CrossRef]
  28. Zhao, J., Liu, J., Zhou, Z., & Liu, Z. (2024). Construction of physical education teaching evaluation indicators in universities under the background of high-quality development: Based on the AHP-fuzzy comprehensive evaluation method. Contemporary Sports Technology, 14(17), 55–59. [CrossRef]
Table 1. Evaluation Index System for University Badminton Elective Courses.
First-level Indicator Second-level Indicator Third-level Indicator Evaluation Standard
Teaching Resources (A) Faculty Allocation (A1) A11 Teacher qualifications Level of badminton teaching certificate
A12 Student–teacher ratio Number of students per class / number of teachers
A13 Teaching experience Years of badminton teaching experience
Facilities (A2) A21 Facility compliance rate Compliance with GB/T 22517-2008 standards
A22 Equipment adequacy rate Rackets per student ≥ 1:2
A23 Maintenance frequency Number of inspections per week
Teaching Materials (A3) A31 Textbook suitability Degree of match (self-compiled/adopted textbook)
A32 Availability of video resources Number of demonstration videos for skills
A33 Lesson plan completeness Includes warm-up, skills, and cool-down
Teaching Process (B) Course Design (B1) B11 Rationality of schedule Progression of technical instruction
B12 Integration of ideological elements Number of ideological/political elements per class
B13 Completeness of safety measures Availability of injury prevention protocols
Teaching Methods (B2) B21 Proportion of multi-shuttle drills Class time allocation ratio
B22 Scientific grouping strategy Appropriateness of homogeneous/heterogeneous grouping
Classroom Management (B3) B31 Time utilization rate Proportion of effective teaching time
B32 Timeliness of error correction Response time to incorrect movements
B33 Classroom atmosphere Frequency of student laughter observed
Student Participation (C) Behavioral Engagement (C1) C11 Attendance rate Actual attendance / expected attendance
C12 Practice intensity Number of shots per minute
C13 Extracurricular practice time Weekly hours of self-training
Emotional Engagement (C2) C21 Frequency of class interaction Number of teacher–student Q&A exchanges per class
C22 Competition participation Registration rate for class tournaments
C23 Course satisfaction Positive response rate on end-of-course surveys
Cognitive Engagement (C3) C31 Rule comprehension Score on referee knowledge test
C32 Tactical understanding Accuracy rate in simulated match decisions
C33 Accuracy of self-evaluation Difference between self-evaluation and teacher evaluation
Skill Development (D) Technical Skills (D1) D11 Serve success rate Number of successful serves out of 10 attempts
D12 Overhead clear success rate Completion rate from baseline to baseline
D13 Net shot quality Percentage of shots within ≤10 cm of the net
Physical Fitness (D2) D21 Shuttle run improvement Difference between beginning and end of semester
D22 Agility test improvement Reduced time in hexagon jump
D23 Endurance performance Ability to sustain multi-rally exchanges
Match Competence (D3) D31 Tactical execution Proportion of pre-set tactics successfully applied
D32 Psychological stability Success rate in critical points
D33 Doubles coordination Rating of rotational coordination and teamwork
Table 2. Scale of Relative Importance in the Analytic Hierarchy Process (AHP).
No. Meaning of the Scale Value
1 Elements i and j are equally important aij=1
2 Element i is slightly more important than element j aij=3
3 Element i is obviously more important than element j aij=5
4 Element i is strongly more important than element j aij=7
5 Element i is absolutely more important than element j aij=9
6 The importance of i vs. j falls between the above judgments. aij=2,4,6,8
7 If the relative importance of element i to element j is aij, then the relative importance of j to i is its reciprocal aji = 1/aij
Table 3. Average Random Consistency Index (RI) for Judgment Matrices of Different Orders.
Matrix Order (n) 1 2 3 4 5 6 7 8 9 10 11 12
RI Value 0 0 0.52 0.89 1.12 1.26 1.36 1.41 1.46 1.49 1.52 1.54
Table 4. Pairwise Comparison Matrix of Primary Indicators.
Primary Indicator Teaching Resources (A) Teaching Process (B) Student Participation (C) Skill Development (D)
Teaching Resources (A) 1.0 0.6552 0.7628 0.5925
Teaching Process (B) 1.5263 1.0 1.1247 0.6444
Student Participation (C) 1.311 0.8891 1.0 0.5875
Skill Development (D) 1.6877 1.5518 1.7021 1.0
Table 5. Weighting Results and Consistency Test for First-Level Indicators.
Primary Indicator Relative Weight
Teaching Resources (A) 0.1788
Teaching Process (B) 0.2486
Student Participation (C) 0.2205
Skill Development (D) 0.3522
λmax = 4.0122, CI = 0.00406, CR = 0.0046 < 0.1; consistency test passed.
Table 6. Weights of Indicators at All Levels and Their Integrated Values.
First-Level Indicator Weight Second-Level Indicator Weight Third-Level Indicator Weight Integrated Weight
A 0.1788 A1 0.4001 A11 0.4499 0.0322
A12 0.2956 0.0211
A13 0.2546 0.0182
A2 0.3465 A21 0.5924 0.0367
A22 0.248 0.0154
A23 0.1596 0.0099
A3 0.2534 A31 0.5091 0.0231
A32 0.2979 0.0135
A33 0.193 0.0087
B 0.2486 B1 0.3969 B11 0.3943 0.0389
B12 0.2671 0.0264
B13 0.3386 0.0334
B2 0.4542 B21 0.5692 0.0643
B22 0.4308 0.0486
B3 0.1488 B31 0.3574 0.0132
B32 0.1742 0.0064
B33 0.4685 0.0173
C 0.2205 C1 0.3515 C11 0.4043 0.0313
C12 0.3425 0.0265
C13 0.2532 0.0196
C2 0.4022 C21 0.343 0.0304
C22 0.4005 0.0355
C23 0.2565 0.0227
C3 0.2463 C31 0.3939 0.0214
C32 0.4485 0.0244
C33 0.1576 0.0086
D 0.3522 D1 0.4906 D11 0.2984 0.0516
D12 0.4416 0.0763
D13 0.26 0.0449
D2 0.2551 D21 0.4051 0.0364
D22 0.351 0.0315
D23 0.2439 0.0219
D3 0.2542 D31 0.3452 0.0309
D32 0.197 0.0176
D33 0.4578 0.0410
Table 7. Initial Quantified Evaluation Values.
Indicator Evaluation Outcome
Excellent Good Medium Pass Fail
A11 Teacher Professional Qualification 41 82 33 19 0
A12 Student–Teacher Ratio 38 86 34 17 0
A13 Teaching Experience (Years) 31 91 35 18 0
A21 Court Compliance Rate 76 37 45 17 0
A22 Equipment Availability Rate 70 45 35 25 0
A23 Facility Maintenance Frequency 26 87 42 20 0
A31 Textbook Appropriateness 73 50 36 15 1
A32 Richness of Video Resources 69 39 49 18 0
A33 Lesson Plan Completeness 26 87 42 19 1
B11 Reasonableness of Progress 65 45 46 19 0
B12 Integration of Ideological Education 87 36 37 14 1
B13 Completeness of Safety Plan 85 39 35 16 0
B21 Proportion of Multi-Shuttle Training 29 82 37 27 0
B22 Scientific Grouping Strategy 32 84 38 21 0
B31 Classroom Time Utilization 79 42 35 19 0
B32 Timeliness of Error Correction 74 40 38 22 1
B33 Classroom Atmosphere 29 83 38 23 2
C11 Attendance Rate 32 83 35 25 0
C12 Practice Density 78 35 42 19 1
C13 Extracurricular Practice Duration 72 39 37 27 0
C21 Frequency of Classroom Interaction 29 82 40 24 0
C22 Competition Participation Rate 73 41 38 23 0
C23 Course Satisfaction 27 86 41 20 1
C31 Mastery of Rules 27 86 41 20 1
C32 Tactical Understanding 29 85 37 24 0
C33 Accuracy of Self-Evaluation 29 83 38 23 2
D11 Serve Success Rate 31 84 35 25 0
D12 Clear Shot Accuracy 25 88 42 19 1
D13 Net Shot Quality 32 82 37 24 0
D21 Shuttle Run Improvement 30 84 38 21 2
D22 Agility Test Improvement 70 41 39 25 0
D23 Endurance Performance 76 37 41 20 1
D31 Tactical Execution 74 38 40 23 0
D32 Psychological Stability 72 42 37 24 0
D33 Doubles Coordination 75 37 39 24 0
Table 8. Evaluation Results of Tertiary Indicators.
First-Level Indicator Weight Second-Level Indicator Weight Third-Level Indicator Evaluation Results
Excellent Good Fair Pass Fail
A 0.1788 A1 0.4001 A11 0.23 0.47 0.19 0.11 0.00
A12 0.22 0.49 0.19 0.10 0.00
A13 0.18 0.52 0.20 0.10 0.00
A2 0.3465 A21 0.43 0.21 0.26 0.10 0.00
A22 0.40 0.26 0.20 0.14 0.00
A23 0.15 0.50 0.24 0.11 0.00
A3 0.2534 A31 0.42 0.29 0.21 0.09 0.01
A32 0.39 0.22 0.28 0.10 0.00
A33 0.15 0.50 0.24 0.11 0.01
B 0.2486 B1 0.3969 B11 0.37 0.26 0.26 0.11 0.00
B12 0.50 0.21 0.21 0.08 0.01
B13 0.49 0.22 0.20 0.09 0.00
B2 0.4542 B21 0.17 0.47 0.21 0.15 0.00
B22 0.18 0.48 0.22 0.12 0.00
B3 0.1488 B31 0.45 0.24 0.20 0.11 0.00
B32 0.42 0.23 0.22 0.13 0.01
B33 0.17 0.47 0.22 0.13 0.01
C 0.2205 C1 0.3515 C11 0.18 0.47 0.20 0.14 0.00
C12 0.45 0.20 0.24 0.11 0.01
C13 0.41 0.22 0.21 0.15 0.00
C2 0.4022 C21 0.17 0.47 0.23 0.14 0.00
C22 0.42 0.23 0.22 0.13 0.00
C23 0.15 0.49 0.23 0.11 0.01
C3 0.2463 C31 0.15 0.49 0.23 0.11 0.01
C32 0.17 0.49 0.21 0.14 0.00
C33 0.17 0.47 0.22 0.13 0.01
D 0.3522 D1 0.4906 D11 0.18 0.48 0.20 0.14 0.00
D12 0.14 0.50 0.24 0.11 0.01
D13 0.18 0.47 0.21 0.14 0.00
D2 0.2551 D21 0.17 0.48 0.22 0.12 0.01
D22 0.40 0.23 0.22 0.14 0.00
D23 0.43 0.21 0.23 0.11 0.01
D3 0.2542 D31 0.42 0.22 0.23 0.13 0.00
D32 0.41 0.24 0.21 0.14 0.00
D33 0.43 0.21 0.22 0.14 0.00