1. Introduction
Assessment plays a pivotal role in competence-based education, serving as a critical tool for enhancing the quality of teaching and learning. Trainers benefit from the ability to evaluate trainees' proficiency in professional tasks, gaining valuable insights for improvement and providing feedback to both learners and teachers (Gulikers et al., 2010). In the specific context of Technical and Vocational Education and Training (TVET), assessment is integral to processes such as obtaining college credit and ensuring access to further education (Black & Wiliam, 2005; Napanoy & Peckley, 2020; Ojerinde, 2009).
In addition, becoming a TVET institution trainer requires competence across diverse domains, encompassing institutional, national occupational competence, and recruitment assessments (Alemayehu, 2010; FTA, 2014a). However, the current landscape for private TVET teachers in Ethiopia reveals a gap in their education, particularly in the fields of business and health. Graduates often lack exposure to both pre-service and in-service training related to students' educational assessments, along with essential teaching methodologies. Notably, governmental emphasis on evaluating teachers' competence leans more towards national occupational assessments than towards their proficiency in teaching methodology and assessment (FTA, 2014a). This discrepancy underscores the necessity for a holistic approach to teacher competence, bridging the gap between subject-specific expertise and effective teaching and assessment practices.
Assessors' methods and tools hinge on the assessment model employed, potentially influencing trainees' competence positively or negatively (Bellotti et al., 2013). Recognizing the fundamental role of quality assessments in effective teaching and learning, this study emphasizes the need for a comprehensive evaluation of teacher competence. The quality of assessment practices within the classroom directly correlates with the quality of learning experiences for students (Rezaei & Bayat, 2015). As such, a thorough understanding of assessment practices becomes imperative for developing a more effective and holistic approach to teacher training and development in the TVET context.
This study addresses the critical need for teachers to possess adequate assessment competence, acknowledging its profound influence on assessment quality and student competency (Mertler, 2002; Popham, 2006; Stiggins, 1999). Teachers' assessment literacy, encompassing the understanding of different assessment types, is essential for interpreting assessment data, engaging with students about their understanding, and setting meaningful learning goals (Siegel & Wissehr, 2011; Gottheiner & Siegel, 2012).
Teachers are expected to demonstrate competence in various assessment-related responsibilities, including evaluating learners' understanding, adapting curricula, organizing classroom layouts, and selecting suitable organizations for technical training (NEA, 2008; Hailaya, 2014). However, a significant number of educators lack the necessary information, skills, and effective assessment practices, leading to misconceptions about evaluating students' academic achievement (McMillan, 2001).
Research consistently indicates a general weakness in teachers' assessment competence, resulting in inaccurate assessments and potentially hindering students from realizing their full potential (Brookhart, 2001; Campbell, 2002; Murphy & Beggs, 2007). Studies in the Amhara National Regional State have revealed that secondary school teachers often struggle to demonstrate competence in educational assessments, particularly across seven essential areas of educational assessment standards (Lake, 2014). Similarly, preliminary investigations in Bahir Dar underscored disappointing and poor-quality educational assessments by teachers (Asenake & Lake, 2017). Research by Mertler & Campbell (2005) has highlighted the lack of teacher competence in practicing a variety of classroom assessment tasks. These findings underscore the critical need to address teachers' assessment competence to ensure accurate and effective assessment of students' learning outcomes (Stiggins, 2001).
Ethiopia, specifically in private TVET colleges within the Amhara Regional State, is characterized by a notable scarcity of extensive studies on the effectiveness of teachers in assessing students' educational competence. Despite abundant global research on this subject across primary, secondary, and higher education levels, Ethiopia faces a distinct lack of similar investigations. This gap is underscored by a report from the Ministry of Education (MOE, 2008), which emphasizes the insufficient attention given to teachers' educational assessment of students within TVET colleges. In response to this critical gap, a preliminary study was initiated in private TVET colleges within the Amhara Regional State. This study focuses on evaluating teachers' assessment competence while delving into their perceived and observed practices within classrooms.
As a result, recognizing the inadequacy of attention to teachers' competency in educational assessment within a competency-based education system, the researcher was motivated to undertake this study. The investigation evaluated teacher competence in competency-based assessment (CBA) within Amhara private colleges, with a concentrated emphasis on the South Gonder Administrative Zone. The study aimed to comprehensively assess teachers' assessment literacy and their actual and perceived practices related to CBA, and to explore the relationship between these dimensions. Additionally, the study emphasizes the importance of understanding how teachers perceive, enact, and improve their students' educational assessments.
The information derived from this research is anticipated to offer valuable insights and assistance to students, teachers, assessors, colleges, as well as educational authorities such as AOCACA, ATVETB, and FTA, among others. The emphasis is on directing special attention to enhance teacher assessment competence. By addressing this competency gap, the researcher envisions a positive impact on the overall educational assessment practices in private colleges.
1.2. Research Questions
In light of the background information and the problem statement presented above, this study aims to investigate the following research questions:
1. What are the perceived practices of teachers in educational assessments of students?
2. What is the level of assessment literacy among teachers in the educational assessment of students?
3. What are the observed practices of teachers in educational assessments of students?
4. Is there a significant correlation between assessment literacy, perceived practices, and observed practices among private TVET teachers?
5. In what ways do teachers perceive, practice, and enhance their educational assessment methods for students?
1.3. Methods
Survey research was used to gather data to address the research objectives. Questionnaires, interviews, and observation checklists were used to collect data from the participants. An assessment competence inventory was used to determine the assessment literacy levels of teachers at private TVET colleges. In addition, a five-point Likert scale questionnaire was used to analyze teachers' perceptions of their assessment practice with respect to the basic competency-based assessment premise. A five-point Likert scale observation checklist was used to assess the sample institutions' observed assessment practices. Finally, semi-structured interviews and observations were used to triangulate the results of the questionnaires.
1.4. Study Group of the Research
The study group for the quantitative study consisted of 49 private TVET teachers working in different colleges and 14 assessors who came from industry and worked in different private and governmental institutions in the Amhara Regional State, Ethiopia. For the qualitative study, five teachers, four assessors, five students, two deans, and one assessment coordinator participated. The convenience sampling method was used to select teachers and assessors.
1.5. Instrumentation
The study utilized questionnaires, interviews, and observations as research instruments. Three distinct types of questionnaires were employed: the Teacher Assessment Literacy Questionnaire, comprising 35 closed-ended items to measure teachers' knowledge across seven competency areas; the Teachers' Self-Perceived Assessment Practice Questionnaire, with 35 closed-ended items to assess teachers' perceived assessment practices in the seven competency areas; and the Teachers' Assessment Practice Observational Checklist, including 49 closed-ended items to evaluate teachers' actual assessment practices in the classroom. The design of the questionnaires drew on the "Standards for Teacher Competence in the Educational Assessment of Students," collaboratively developed by the American Federation of Teachers (AFT), the National Education Association (NEA), and the National Council on Measurement in Education (NCME) in 1990. The AFT, NCME, and NEA (1990) standards served as the foundation for creating the three distinct measures. Five multiple-choice questions were generated for each competency area, resulting in a 35-item multiple-choice test for assessing teacher assessment literacy. In adapting these items, 15 were sourced from Phye's (1996) Handbook of Classroom Assessment: Learning, Adjustment, and Achievement, while an additional 20 were drawn from Gray's (1997) and Brookhart's (2011) lists of educational assessment knowledge and skills for teachers. Furthermore, the Assessment Literacy Inventory (ALI) test, developed by Mertler and Campbell in 2005, adapted into Turkish by Bütüner et al., and utilized by Al-Bahlani in 2019, was also drawn upon. The instruments featured a four-option multiple-choice response format for the literacy test, a five-point Likert scale for self-perceived assessment practices (ranging from 1 for never used to 5 for always used), and a five-point Likert scale for the assessment practice observation checklist (ranging from 1 for not observed to 5 for excellent). The reliability coefficients for these instruments, assessed through the Spearman-Brown method, were .59, .67, and .89, respectively, surpassing the values reported by Plake, Impara, and Fager (1993) and Lake (2014). To strengthen the questionnaires' validity, two steps were taken. Firstly, the questions, along with the interview guide, were reviewed by six experts to ensure appropriateness. Secondly, a pilot test involving ten teachers was conducted. The interviews sought to enhance the questionnaires' validity and gain deeper insight into teachers' opinions on assessment practices.
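To make the reliability procedure concrete, the sketch below shows one common way to compute a split-half reliability coefficient with the Spearman-Brown correction. It is an illustration in Python with simulated 0/1 item data, not the authors' actual SPSS routine, and all variable names are hypothetical.

```python
# Illustrative sketch (not the study's SPSS procedure): odd-even split-half
# reliability, stepped up with the Spearman-Brown prophecy formula.
import numpy as np
import pandas as pd

def spearman_brown(items: pd.DataFrame) -> float:
    """Return the Spearman-Brown corrected split-half reliability."""
    odd = items.iloc[:, 0::2].sum(axis=1)    # total score on odd-numbered items
    even = items.iloc[:, 1::2].sum(axis=1)   # total score on even-numbered items
    r_half = np.corrcoef(odd, even)[0, 1]    # correlation between the two halves
    return 2 * r_half / (1 + r_half)         # step up to full test length

# Simulated 0/1 responses of 63 teachers to a 35-item test (placeholder data)
rng = np.random.default_rng(42)
demo_items = pd.DataFrame(rng.integers(0, 2, size=(63, 35)))
print(f"Split-half (Spearman-Brown) reliability: {spearman_brown(demo_items):.2f}")
```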
2. Data Collection Procedures
Depending on the respondents' preference and English language proficiency, the tools were offered in either English or Amharic. Each respondent received the questionnaire and the interview in person at their college or workplace. To observe the whole assessment process, structured observation checklists and interviews were administered for each teacher and each class during daily instruction and classroom assessments. When the tools were delivered, the study's objectives, directions, and purposes were explained. Respondents were also made aware that their answers would be kept private and used exclusively for research purposes.
3. Data Analysis
In the first instrument, the teacher assessment literacy questionnaire, responses to the 35 multiple-choice questions were coded 0 if answered incorrectly and 1 if answered correctly. The second instrument, the teachers' self-perceived assessment practice questionnaire, used a five-point Likert scale (1 = never used, 5 = always used) to rate teachers' perceived practice. Thirdly, a structured observation checklist with a five-point Likert scale (1 = not observed, 5 = excellent) was used to rate teachers' assessment practices. The data were entered into a computer and analyzed using SPSS version 26 for Windows. Thematic and content analysis were used for the follow-up interviews. Cases with missing values were excluded from the analysis. Descriptive analyses were used to summarize the results of the teachers' Assessment Literacy Inventory test, identifying respondents who answered correctly and those who did not, and characterizing the overall competence of the whole group. The results for each of the seven standards, or competency areas, were examined using descriptive and inferential statistical analyses for the teacher competence assessment inventory, the perceived assessment practice questionnaire, and the structured observation checklist. The Pearson correlation coefficient was used to assess the relationships among the teacher assessment competence inventory test, the teachers' self-perceived assessment practice questionnaire, and the teachers' observed assessment practice in the classroom.
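Although the study's analyses were run in SPSS 26, the same tests are straightforward to express in code. The following Python sketch, using scipy on simulated placeholder data, illustrates the 0/1 coding, the one-sample t-test, and the Pearson correlations described above; none of the numbers are the study's actual data.

```python
# Illustration of the analysis pipeline (the study itself used SPSS 26);
# the data generated here are placeholders, not the study's responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 63  # quantitative participants

# Literacy test: 35 items coded 1 (correct) / 0 (incorrect), summed per teacher
item_scores = rng.integers(0, 2, size=(n, 35))
literacy = item_scores.sum(axis=1)

# Likert-based totals for perceived and observed practice (simulated)
perceived = rng.normal(109, 28, size=n)
observed = rng.normal(82, 23, size=n)

# One-sample t-test of literacy totals against a fixed test value (17.5,
# the value used for total literacy in Table 3)
t_stat, p_val = stats.ttest_1samp(literacy, popmean=17.5)
print(f"one-sample t = {t_stat:.2f}, p = {p_val:.3f}")

# Pearson correlations among the measures (as in Table 5)
for name, other in [("perceived", perceived), ("observed", observed)]:
    r, p = stats.pearsonr(literacy, other)
    print(f"literacy vs {name}: r = {r:.3f}, p = {p:.3f}")
```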
4. Ethical Considerations
The study protocol, with approval number 06/IRB/23 from the Bahir Dar University Institutional Review Board, was implemented with the informed consent of the participants. A comprehensive consent form, outlining the research's objectives, the researcher's contact information, as well as the potential benefits and risks involved, was prepared and distributed to all participants. Rigorous measures were undertaken to ensure safety, such as encoding participants' names and crafting study reports to uphold privacy and confidentiality standards.
5. Results
5.1. Characteristics of Respondents
The study group for the quantitative study was made up of 63 private TVET teachers and assessors who were employed in various colleges and industries. For the qualitative study, five teachers, four assessors, five students, two deans, and one assessment center coordinator participated.
In Table 1, socio-demographic background information, including sex, experience, educational level, and department of the participants, is presented. The data were obtained from a total of 63 participants, and the analysis was conducted on the complete dataset from all 63 participants. The socio-demographic findings indicate that the largest proportion of respondents (33.33%) belonged to the nursing department. Regarding gender distribution, the majority of participants were male (84.13%). In terms of educational qualifications, equal percentages (44.44%) of participants held Level 4 and BA/BSC qualifications. Concerning teaching experience, the majority (60.32%) reported having four to nine years of service in educational institutes.
5.2. Teachers’ Perceived Practice of Assessment Literacy Competency of Students’ Educational Assessment
The first objective of the study was to examine teachers’ perceived assessment practice of students’ educational assessments. The assessment covered seven competency areas of self-perceived practice of students’ educational assessments. Thirty-five questionnaire items with a five-point Likert scale were designed and presented to the teachers to assess their self-perceived practices in the educational assessment of students. To do this, a one-sample t-test was computed, and the results are presented in Table 2.
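For reference, the one-sample t-statistic reported throughout Tables 2-4 compares a group mean against a fixed test value; in the notation below (introduced here for illustration), $\bar{X}$ is the sample mean, $\mu_0$ the test value, $s$ the standard deviation, and $n = 63$:

$$ t = \frac{\bar{X} - \mu_0}{s/\sqrt{n}} $$

As a check against the first row of Table 2, $t = (11.7 - 12)/(5.23/\sqrt{63}) \approx -0.46$, which matches the reported magnitude of .46.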
In Table 2, the results of a one-sample t-test indicated that the mean value of teachers' competency in choosing assessment methods was slightly lower than the test value (M=11.7, SD=5.23, t=.46, p > .05), and the t-value was not significant. This indicates that teachers perceive their competency in choosing assessment methods to be at an average level. Similarly, the mean value of teachers' competency in communicating assessment results was lower than the test value (M=11.84, SD=3.37, t=.373, p > .05), and the t-statistic was not significant, indicating an average level of perceived competence in this area.
On the other hand, the mean value of teachers' competency in administering, scoring, and interpreting assessments was slightly higher than the test value (M=19.11, SD=6.01, t=1.47, p > .05), and the difference was not significant. This indicates that teachers perceive their competency in this area at a moderate level. Similarly, the mean value of teachers' competency in using assessments for grading was slightly higher than the test value (M=13.59, SD=7.49, t=1.69, p > .05), and the t-value was not significant, indicating a moderate level of perceived competence in this area.
The results further demonstrated that the mean value of teachers' competency in developing assessment methods was significantly lower than the test value (M=18.33, SD=10.16, t=2.88, p < .05). This shows that teachers perceive their competency in developing assessment methods at a lower level. The mean value of teachers' competency in using assessments for decision-making was significantly higher than the test value (M=17.52, SD=7.64, t=2.62, p < .05), indicating a relatively higher level of perceived competence in this area. Similarly, the mean value of teachers' competency in recognizing unethical practices was significantly higher than the test value (M=16.21, SD=4.46, t=2.15, p < .05), showing a moderately high level of perceived competence.
In general, although the mean value of teachers' total perceived practice of assessment was slightly higher than the test value (M=108.97, SD=27.7, t=1.13, p > .05), the t-statistic was not significant. This suggests that teachers' overall perceived practice of CBA in their colleges is at a moderate level.
5.3. Teachers’ Literacy in the Seven Competency Areas of Educational Assessment of Students
The second objective of the study was to examine teachers’ assessment literacy competency knowledge. The assessment covered seven competency areas. The test consisted of 35 multiple-choice items, each with only one correct answer among the four alternatives given. Each teacher received one point for each correct answer. The number of teachers who participated in this study was 63. The results were computed using a one-sample t-test and are presented in Table 3.
In Table 3, the results of a one-sample t-test showed that, although the mean value of teachers' knowledge of choosing assessment methods was moderately higher than the test value (M=1.60, SD=0.93, t=0.89, p > .05), the t-value was not significant. This suggests that teachers' knowledge of choosing assessment methods is at an average level.
On the other hand, the results determined that the mean value of teachers' knowledge of developing assessment methods was significantly lower than the test value (M=0.97, SD=0.76, t=10.76, p < .001). Similarly, the mean value of teachers' knowledge of administering, scoring, and interpreting assessments was significantly lower than the test value (M=1.92, SD=1.24, t=10.15, p < .001). The mean value of teachers' knowledge of using assessments for decision-making was also significantly lower than the test value (M=1.38, SD=1.24, t=13.6, p < .001). Correspondingly, the mean value of teachers' knowledge of using assessments for grading was significantly lower than the test value (M=0.90, SD=0.98, t=12.93, p < .001). Likewise, the mean value of teachers' knowledge of communicating assessment results was significantly lower than the test value (M=0.92, SD=0.92, t=4.90, p < .001), and the mean value of teachers' knowledge of recognizing unethical practices was significantly lower than the test value (M=1.11, SD=1.18, t=12.71, p < .001). The mean value of teachers' overall cumulative assessment literacy was likewise significantly lower than the test value (M=8.81, SD=3.57, t=19.30, p < .001). This indicates that teachers' overall assessment literacy in private TVET colleges is at a low level.
These results demonstrate that TVET teachers in the country have unexpectedly low assessment literacy.
5.4. Teachers’ Observed Practices of Students’ Educational Assessment
The third objective of this study was to assess trainers’ observed practice in seven areas of students’ educational assessment. To attain this, a one-sample t-test was run, and the results are presented in Table 4. The observation checklist comprised 49 items, rated from 1 (not observed) to 5 (excellent) during the observation sessions, to gauge how effectively teachers conducted students’ educational assessments in actual classroom practice.
In Table 4, the results of a one-sample t-test revealed that the mean value of teachers' observed practice in choosing assessment methods was significantly lower than the test value (M=10.59, SD=2.88, t=12.15, p < .001). This indicates that teachers' observed practice of choosing assessment methods for students' educational assessment was low. Similarly, the mean value of teachers' observed practice in developing assessment methods was significantly lower than the test value (M=8.32, SD=1.52, t=66.13, p < .001). Likewise, the mean value of teachers' observed practice in administering, scoring, and interpreting assessments was significantly lower than the test value (M=19.75, SD=3.59, t=35.96, p < .001). The mean value of teachers' observed practice in using assessments for decision-making was also significantly lower than the test value (M=13.22, SD=6.33, t=13.50, p < .001), as were the mean values for using assessments for grading (M=6.80, SD=3.52, t=11.70, p < .001), communicating assessment results (M=13.97, SD=7.04, t=35.97, p < .001), and recognizing unethical practices (M=8.97, SD=4.45, t=10.77, p < .001). Finally, the mean value of teachers' overall cumulative observed practice of students' educational assessment was significantly lower than the test value (M=81.62, SD=23.32, t=22.26, p < .001). This shows that teachers' cumulative practice of students' educational assessment is at a low level.
These findings corroborate that TVET teachers in the country's TVET institutes demonstrate a strikingly low level of actual practice in students' educational assessment.
5.5. Relationship Between Teachers’ Perceived Assessment Practice, Assessment Literacy, and Observed Assessment Practices of Students’ Educational Assessment
The fourth objective of this study was to examine whether there exists a significant relationship between teachers’ perceived assessment practice, assessment literacy competency knowledge, and observed practices of students’ educational assessment among TVET teachers. To attain this, Pearson correlation coefficients were computed, and the results are presented in Table 5.
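For reference, the Pearson coefficient between paired measures $x$ and $y$ is

$$ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}. $$

With $n = 63$ paired observations (df = 61), a coefficient of roughly $|r| \geq .25$ is significant at the .05 level (two-tailed), a threshold all three coefficients in Table 5 exceed.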
As indicated in Table 5, there was a statistically significant positive correlation between assessment literacy and perceived assessment practice (r = .358, p < .05). Likewise, there was a statistically significant positive correlation between assessment literacy and observed assessment practice (r = .648, p < .05), and a statistically significant correlation between perceived assessment practice and observed assessment practice (r = .376, p < .05). These findings show that the three facets of teachers’ assessment literacy competence are interrelated.
6. Interview Result
The interview aimed to assess teachers' knowledge, perceived practice, and actual practice in a CBA among teachers at private TVET colleges. Additionally, it sought to understand how teachers perceive, practice, and enhance students' educational assessments. The thematic and content analysis of the interview findings brings to light several crucial aspects of teachers' assessment practices and competence.
The results reveal differences between the self-perceived practices of TVET teachers and the outcomes of an assessment knowledge inventory test. Moreover, a disparity is evident between these self-perceived practices and the observed practices of these teachers. Both the knowledge inventory test and observed practices point to a poor level of teachers' competence, whereas the perceived practice indicated a modest level of competence.
Teachers in the study exhibited a lack of competence in educational assessments, encompassing both traditional and competency-based methods. They engaged in traditional assessments such as assignments, group projects, true-false, matching, multiple-choice, and quizzes without a proper assessment approach, but often overlooked essential methods such as continuous assessments, practical assessments, observation, 360-degree feedback, final exams, and midterms. This deficiency is attributed to the absence of pre-service, in-service, and continuous professional development training in teaching methodology and educational assessment of students. Furthermore, the reuse of test items over multiple academic years was a predominant practice, aggravated by the absence of regular assessment-related meetings and a lack of criteria and support from the colleges.
Interviewees, including deans, students, instructors, assessors, and assessment center coordinators, highlighted a significant misconception regarding competence-based assessment principles. During the interviews, an instructor shared:
"I believe I misunderstood competence-based assessment, deviating significantly from its principles. It seems that my colleagues, college administrators, and even TVET professionals held similar misconceptions. We never received any guidance on assessment or instructional strategies. I sincerely apologize for the numerous mistakes I've made in this regard."
Participants acknowledge a lack of instruction on assessment or instructional strategies, leading to various mistakes in understanding and implementing the competence-based assessment. Students report not being graded based on adherence to CBA principles, further emphasizing the deviation from CBA beliefs in assessment procedures.
Furthermore, the study reveals a disconnect between the missions and goals of colleges, which predominantly prioritize fundraising, expanding capacity to accommodate more students, and promotion, rather than addressing the imperative need to enhance teachers' assessment competence. Despite this awareness, no concrete actions are taken to improve the assessment competence of teachers.
According to a dean's opinion, "The institution's mission and objectives were primarily focused on fundraising and expanding the college's capacity to admit more students, rather than prioritizing efforts to enhance teachers' assessment competency."
Mistakes in student assessments were predominant, with all learners being considered competent regardless of their competence levels. Teachers' lack of understanding of assessment literacy, valid and reliable assessments, and quality assessment characteristics negatively impact students' motivation, engagement, and overall competence. The study demonstrates instances of students resorting to unethical practices, such as copying answers, to navigate assessments.
An assessor respondent stated the following:
"Six years ago, there was an applicant for the NOCA Level Four accounting and finance department. Although she applied for the position, she lacked the necessary competence for Level Four accounting and finance at NOCA. In an attempt to obtain the NOCA assessment tool through unethical means, she resorted to copying answers provided by someone else. Despite three unsuccessful attempts to obtain her version of the assessment tools, on the fourth try she managed to secure the necessary tools and successfully cheated her way through the assessment, ultimately gaining the certificate. This illegal certificate provided her with the opportunity to continue her undergraduate education. Today, she has successfully earned her first degree in accounting and finance and is gainfully employed in an Ethiopian commercial bank."
Participants stressed the need for a multifaceted approach to improve teachers' assessment competence, encompassing pre-service, in-service, and continuous professional development. Raising awareness among college owners, leaders, and regulatory bodies about potential misunderstandings regarding teachers' assessment competence is crucial. An alternative perspective suggests adopting the assessment methods utilized by teachers in Ethiopian Orthodox church education. Additionally, it advocates for cultivating a robust ethical framework for assessments among TVET teachers to enhance assessment competence in the context of Competency-Based Education.
A teacher respondent put it this way: "In the Ethiopian Orthodox church’s educational system, students actively engage in learning, and teacher-student interaction is predominantly centered around assessment purposes. The learning pace varies, with some students completing their studies in a year, while others may take up to ten years. Self-assessment and peer assessment are common practices, and public criticism of incompetent students is considered part of the learning process. Remarkably, teachers can evaluate up to 50 students individually per day without compromising integrity, making it economically feasible to teach up to four thousand students. To enhance the assessment competence of TVET teachers in Competency-Based Education (CBE), it is recommended to embrace the assessment methods employed by church education teachers, foster a robust ethical framework for assessments among TVET educators, and adapt the church’s educational system for TVET education."
In conclusion, the interview findings emphasize the urgent need for comprehensive interventions to address the shortcomings in TVET teachers' assessment practices and competence. The study highlights the potential consequences for students' competency and the overall quality of private TVET education in the region. It calls for prioritizing assessment competence in professional development courses to alleviate these challenges and ensure the alignment of assessment practices with competency-based principles.
7. Discussion
The significance of educational assessment in the classroom, as emphasized by Mertler (2003) and Tanujaya (2017), lies in its essential role in influencing student academic motivation and performance. Teachers' knowledge and beliefs are crucial influences on their assessment practices, and undesirable attitudes can compromise the quality of assessment results (Bloome & Green, 1991; Coombe et al., 2012; Popham, 2009). Therefore, it is crucial to fully understand teachers' perceptions, competence, knowledge, and practices regarding educational assessment.
The study's results indicated that teachers generally perceived themselves as moderately competent in student assessment (M = 3.11, SD = .84). This is consistent with the results of earlier research by Alkharusi (2011b, 2011c), Alkharusi et al. (2011), Lyon (2013), Mertler (2003), Mertler and Campbell (2005), Ogan-Bekiroglu and Suzuk (2014), Plake and Impara (1992), and Zhang and Burry-Stock (2003). However, there was a clear disparity between self-perception and observed practice, revealing lower competence levels in actual student assessment practices (M = 1.67, SD = .48). This corroborates findings from studies by Al-Bahlani (2017), Fan et al. (2011), Koh et al. (2011), and Quilter and Gallini (2000).
Assessment knowledge is a crucial competency for teachers, essential for preparing and implementing assessments that not only align with learning objectives but also offer students meaningful feedback. According to Amua-Sekyi (2016), this literacy involves the ability to develop reliable assessments, administer them, and score them, thereby facilitating valid instructional decisions in line with state or provincial educational standards (Yan & Pastore, 2022). Unfortunately, our study reveals a deficiency in teachers' knowledge of assessment, with an average score of 8.8 out of 35 items (25% correct) across all seven areas measuring students' educational competence.
These findings align with prior research (Al-Bahlani, 2017; Alkharusi, 2011, 2012; Asenake and Lake, 2017; Campbell et al., 2002; Davidheiser, 2013; Büyükkarcı, 2016; Lake, 2014; Mertler, 2003; Perry, 2013; Plake, 1993; Plake et al., 1993) that consistently reports subpar levels of teacher assessment literacy.
The thematic analysis strengthens the quantitative outcomes, emphasizing teachers' shortcomings in educational assessments. These encompass a lack of comprehension of assessment literacy, valid and reliable assessments, and the characteristics of quality assessments. In particular, there is a shortage of awareness regarding inappropriate assessment practices, along with deficiencies in designing effective assessment instruments, selecting suitable assessment techniques, and effectively communicating results to students. Moreover, the analysis highlights an inadequate understanding of CBA principles.
The fourth research question concerned the relationship between TVET teachers' assessment knowledge, practices, and perceived skills. The results indicate statistically significant positive correlations between assessment literacy and perceived assessment practice (r = .358, p < .05), between assessment literacy and observed assessment practice (r = .648, p < .05), and between perceived assessment practice and observed assessment practice (r = .376, p < .05). These findings indicate an interrelation among the three aspects of teachers' assessment competence. The interview findings suggest that teachers' assessment competence is perceived to be modest, with low levels of assessment literacy and practice, and that this low assessment literacy impacts students' motivation, engagement, and overall competence. This finding is consistent with the results of other studies (Al-Bahlani, 2017; Alkharusi, 2011b, 2011c; Alkharusi et al., 2011, 2012; Lyon, 2011; Mertler & Campbell, 2005). Overall assessment literacy was low, with an average score of 8.81/35 (25%), compared with teachers' perceived practice (total mean score 3.11, moderately competent) and observed practice (total mean score 1.66).
For the last research question, interview findings supported the quantitative data, indicating that teachers' assessment competence is perceived as modest, impacting students' motivation, engagement, and overall competence. This aligns with previous studies (Al-Bahlani, 2017; Alkharusi, 2011b, 2011c; Alkharusi et al., 2011, 2012; Lyon, 2011; Mertler & Campbell, 2005). The study recommends a multifaceted approach, incorporating pre-service, in-service, and continuous professional development, to enhance teachers' assessment competence. The low overall assessment literacy underscores the need for such interventions, given their essential role in building teachers' assessment competence.
Teachers' assessment practices significantly influence the student learning experience, affecting not only students' perceptions of learning but also their future learning efforts (Airasian, 2001; Mellati & Khademi, 2018; Stiggins & Chappuis, 2005; Napanoy & Peckley, 2020). The study suggests that adopting assessment methods used by Ethiopian Orthodox Church education teachers and fostering an ethical framework for assessments among TVET teachers could enhance their competence in Competency-Based Education (CBE). It stresses the importance of assessment competence for teachers to ensure quality education, prevent inadequately designed assessments, and promote effective use of assessment data for improved student outcomes (Mellati & Khademi, 2018; Wood et al., 2021). In conclusion, the study calls for a concentrated effort to address the deficiencies in teachers' assessment competence for the benefit of student learning experiences and outcomes.
8. Conclusions
The study's results emphasize a significant disparity among the assessment literacy inventory test, observed practice, and teachers' self-reported perceived practice outcomes. Private TVET teachers in the Amhara National Regional State fall short of meeting the seven standards in both test results and observed practice in student assessment. These findings suggest limitations in teachers' abilities to effectively select, develop, implement, and validate appropriate classroom assessments. Notably, teachers' literacy, perception, and practice in competency-based assessment were misaligned: although teachers' perceived practice was moderate, their actual practice and literacy across the seven assessment standards were inadequate. The study employed semi-structured interviews and observation questionnaires to assess teachers' perceptions and practices in educational assessment. Thematic analysis revealed that only a small percentage of teachers demonstrated proficiency in specific assessment ideas, while the majority performed poorly on most assessment principles. Consequently, teachers appear to require professional assistance in educational assessment methods, as most expose their students to insufficient skill development. These findings highlight the need for targeted interventions and professional development initiatives to bridge the existing gaps in teachers' assessment literacy and practices.
9. Recommendations
The following recommendations are made to enhance the general competency of teachers in student assessment. The ATVETB is urged to offer training on the educational assessment of students and to issue license certificates before assigning teachers and assessors to institutions and assessment centers. Emphasizing pre-service and in-service training, along with continuous professional development initiatives, should be a priority for private TVET colleges to ensure teachers are well equipped for effective student educational assessment.
Furthermore, to better understand the nationwide landscape and complexity of the issue, additional studies should be conducted. These studies should encompass diverse subgroups and involve a substantial sample size, providing a more in-depth exploration of the competency of teachers and assessors in the educational assessment of students at the national level.
References
- Airasian, P. W. (2001). Classroom assessment: Concepts and applications. McGraw-Hill, PO Box 548, Blacklick, OH 43003. [https://eric.ed.gov/?id=ED446097].
- Al-Bahlani, S. M. (2019). Assessment literacy: A study of EFL teachers' assessment knowledge, perspectives, and classroom behaviors (Doctoral dissertation). The University of Arizona. [https://www.proquest.com/].
- Alkharusi, H. (2011b). A logistic regression model predicting assessment literacy among in-service teachers. Journal of Theory and Practice in Education, 7, 280-291. [https://www.researchgate.net/].
- Alkharusi, H. (2011c). Teachers’ classroom assessment skills: Influence of gender, subject area, grade level, teaching experience, and in-service assessment training. Journal of Turkish Science Education, 8, 39-48.
- Alkharusi, H. (2012). A Generalizability approach to the measurement of score reliability of the Teacher Assessment Literacy Questionnaire. Journal of Studies in Education, 2(2), 157-164. [CrossRef]
- American Federation of Teachers (AFT), National Council on Measurement in Education (NCME), & National Education Association (NEA). (1990). Standards for teacher competence in educational assessment of students. Lincoln, NE: Buros Center for Testing.
- Amua-Sekyi, E. T. (2016). Assessment, Student Learning, and Classroom Practice: A Review. Journal of Education and Practice, 7(21), 1-6. [https://eric.ed.gov/?id=EJ1109385].
- Bayat, K., & Rezaei. (2015). Importance of teachers’ assessment literacy. International Journal of English Language Education, 3(1). [CrossRef]
- Bellotti, F., Kapralos, B., Lee, K., Moreno-Ger, P., & Berta, R. (2013). Assessment in and of serious games: An overview. Advances in Human-Computer Interaction, 2013. [CrossRef]
- Black, P., & Wiliam, D. (2005). Inside the black box: Raising standards through classroom assessment. Granada Learning. [https://books.google.com.et/].
- Bloome, D., & Green, J. L. (1991). Educational contexts of literacy. Annual review of applied linguistics, 12, 49-70. [CrossRef]
- Brookhart, S. M. (2001). Successful students' formative and summative uses of assessment information. Assessment in Education: Principles, Policy & Practice, 8(2), 153-169. [CrossRef]
- Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practice, 30(1), 3-12. [CrossRef]
- Büyükkarcı, K. (2016). Identifying the areas for English language teacher development: A study of assessment literacy. Pegem Journal of Education and Instruction, 6(3), 333-346. [https://www.researchgate.net/]. [CrossRef]
- Campbell, C., Murphy, J. A., & Holt, J. K. (2002). Psychometric analysis of an assessment literacy instrument: Applicability to preservice teachers. Paper presented at the annual meeting of the Mid-Western Educational Research Association, Columbus, OH. [https://files.eric.ed.gov/].
- Coombe, C., Troudi, S., & Al-Hamly, M. (2012). Foreign and second language teacher assessment literacy: Issues, challenges, and recommendations. The Cambridge guide to second language assessment, 1, 20-29. [https://books.google.com.et/].
- Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8(1). Retrieved from http://epaa.asu.edu/epaa/vol8.html.
- Davidheiser, S. A. (2013). Identifying Areas for High School Teacher Development: A Study of Assessment Literacy in the Central Bucks School District. ProQuest LLC. 789 East Eisenhower Parkway, PO Box 1346, Ann Arbor, MI 48106. [https://eric.ed.gov/?id=ED554020].
- Eckhout, T., Davis, S., Mickelson, K., & Goodburn, A. (2005). A method for providing assessment training to in-service and pre-service teachers. Paper presented at the annual meeting of the Southwestern Educational Research Association, New Orleans, LA. [https://www.researchgate.net/].
- Fan, Y. C., Wang, T. H., & Wang, K. H. (2011). A web-based model for developing assessment literacy of secondary in-service teachers. Computers & Education, 57(2), 1727-1740. [CrossRef]
- FTA. (2014a). Occupational assessment and certification directive. Federal Democratic Republic of Ethiopia, Federal TVET Agency. [https://www.academia.edu/35492129/].
- Gottheiner D. M., & Siegel M. A. (2012). Experienced Middle School Science Teachers’ Assessment Literacy: Investigating Knowledge of Students’ Conceptions in Genetics and Ways to Shape Instruction. The Association for Science Teacher Education, USA J Sci Teacher Educ 23, 531–557. [CrossRef]
- Gray, P. J., & Banta, T. W. (Eds.). (1997). The campus-level impact of assessment: Progress, problems, and possibilities. New Directions for Higher Education, no. 100. San Francisco: Jossey-Bass. [http://eduq.info/xmlui/handle/11515/15792].
- Gulikers, J. T., Baartman, L. K., & Biemans, H. J. (2010). Facilitating evaluations of innovative, competence-based assessments: Creating understanding and involving multiple stakeholders. Evaluation and Program Planning, 33(2), 120-127. [CrossRef]
- Hailaya, W. M. (2014). Teacher assessment literacy and student outcomes in the province of Tawi-Tawi, Philippines (Doctoral dissertation). Retrieved from [https://hdl.handle.net/2440/9909].
- Hussain, S., Idris, M., & Akhtar, Z. (2021). Perceptions of teacher educators and prospective teachers on the assessment literacy and practices. Gomal University Journal of Research, 37(1), 71-83. [http://www.gujr.com.pk/index.php/GUJR/article/view/1341].
- Koh, K., Burke, L. E. C. A., Luke, A., Gong, W., & Tan, C. (2018). Developing the assessment literacy of teachers in Chinese language classrooms: A focus on assessment task design. Language Teaching Research, 22 (3), 264-288. [CrossRef]
- Lake, B. (2014). Secondary school teachers' competence in educational assessment of students in Bahir Dar Town. Bahir Dar Journal of Education, 14(2), 54-63. [https://www.ajol.info/index.php/bdje/article/view/249060].
- Lukin, L. E., Bandalos, D. L., Eckhout, T. J., & Mickelson, K. (2004). Facilitating the development of assessment literacy. Educational Measurement: Issues and Practice, 23(2), 26–32. [CrossRef]
- Lyon, E. G. (2013). “Assessment as Discourse”: A Pre-Service Physics Teacher’s Evolving Capacity to Support an Equitable Pedagogy. Education Sciences, 3(3), 279-299. [CrossRef]
- McMillan, J. H. (2001). Secondary teachers' classroom assessment and grading practices. Educational Measurement: Issues and Practice, 20 (1), 20-32. [CrossRef]
- Mellati, M., & Khademi, M. (2018). Exploring teachers' assessment literacy: Impact on learners' writing achievements and implications for teacher development. Australian Journal of Teacher Education (Online), 43(6), 1-18. [https://search.informit.org/doi/abs/10.3316/informit.689580391330186]. [CrossRef]
- Mertler, C. A. (2002). Classroom assessment literacy inventory. (Adapted from the Teacher Assessment Literacy Questionnaire (1993), by Barbara S. Plake.) [CrossRef]
- Mertler, C. A. (2003, October 15-18). Pre-service versus in-service teachers’ assessment literacy: Does classroom experience make a difference? [Paper presentation]. Annual meeting of the Mid-Western Educational Research Association, Columbus, Ohio. [https://eric.ed.gov/?id=ED482277].
- Mertler, C. A., & Campbell, C. (2005, April 11-15). Measuring teachers’ knowledge and application of classroom assessment concepts: Development of the assessment literacy inventory [Paper presentation]. Annual meeting of the AERA, Montreal, Quebec, Canada. [https://eric.ed.gov/?id=ED490355].
- Mertler, C. A. (2005). Secondary teachers' assessment literacy: Does classroom experience make a difference? American Secondary Education, 33, 76-92. [https://www.jstor.org/stable/41064623].
- MOE. (2008). National Technical and Vocational Education and Training (TVET) Strategy. Addis Ababa: Federal Democratic Republic of Ethiopia, Ministry of Education. [https://www.google.com/].
- Murphy, C., Neil, P., & Beggs, J. (2007). Primary science teacher confidence revisited: Ten years on. Educational research, 49 (4), 415-430. [CrossRef]
- Napanoy, J. B., & Peckley, M. K. (2020). Assessment literacy of public elementary school teachers in the indigenous communities in Northern Philippines. Universal Journal of Educational Research, 8 (11B), 5693-5703. [CrossRef]
- Ogan-Bekiroglu, F., & Suzuk, E. (2014). Pre-service teachers’ assessment literacy and its implementation into practice. The Curriculum Journal, 25(3), 344-371. [CrossRef]
- Ojerinde, D. (2009). Using assessment for the improvement of tertiary education in Nigeria: The Joint Admissions and Matriculation Board (JAMB) role. In 35th IAEA Conference, Brisbane, Australia.
- Perry, M. (2013). Teacher and Principal Assessment Literacy. The University of Montana. [https://www.proquest.com/].
- Phye, G. D. (1996). Handbook of classroom assessment: Learning, achievement, and adjustment. Academic Press. [https://scholar.google.com/].
- Plake, B. S., & Impara, J. C. (1992). Teacher competencies questionnaire description. Lincoln, NE: University of Nebraska, 312. [https://files.eric.ed.gov/fulltext/ED358131.pdf].
- Plake, B. S., Impara, J. C., & Fager, J. J. (1993). Assessment competencies of teachers: A national survey. Educational Measurement: Issues and Practice, 12 (4), 10-12. [CrossRef]
- Popham, W. J. (2006). Needed: A dose of assessment literacy. Educational Leadership, 63, 84-85. [https://eric.ed.gov/?id=EJ745569].
- Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental. Theory into practice, 48(1), 4-11. [CrossRef]
- Quilter, S. M. (1999). Assessment literacy for teachers: Making a case for the study of test validity. The Teacher Educator, 34(4), 235-243. [CrossRef]
- Quilter, S. M., & Gallini, J. K. (2000). Teachers’ assessment literacy and attitudes. The Teacher Educator, 36 (2), 115-131. [CrossRef]
- Siegel, M. A., & Wissehr, C. (2011). Preparing for the plunge: Preservice teachers’ assessment literacy. Journal of Science Teacher Education, 22 (4), 371–391. [CrossRef]
- Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77(3), 238. [https://www.proquest.com/].
- Stiggins, R. J. (1999). Evaluating classroom assessment training in teacher education programs. Educational measurement: Issues and practice, 18 (1), 23-27. [CrossRef]
- Stiggins, R. J. (2001). The unfulfilled promise of classroom assessment. Educational Measurement: Issues and Practice, 20(3), 5-15. [CrossRef]
- Tagele, A., & Bedilu, L. (2015). Teachers competence in the educational assessment of students: The case of secondary school teachers in the Amhara National Regional State. The Ethiopian Journal of Education, 35(2), 163-191. [http://ejol.ethernet.edu.et/index.php/EJE/article/view/666].
- Tanujaya, B. (2017). Application of assessment as learning in mathematics instruction. Proceedings of the 5th South East Asia Development International Conference 2017, pp. 140-143. Atlantis Press. [http://repository.unipa.ac.id:8080/xmlui/handle/123456789/276].
- Wood, E., Bhalloo, I., McCaig, B., Feraru, C., & Molnar, M. (2021). Towards the development of guidelines for virtual administration of pediatric standardized language and literacy assessments: Considerations for clinicians and researchers. SAGE Open Medicine, 9, 20503121211050510. [CrossRef]
- Yan, Z., & Pastore, S. (2022). Are teachers literate in formative assessment? The development and validation of the Teacher Formative Assessment Literacy Scale. Studies in Educational Evaluation, 74, 101183. [CrossRef]
- Zhang, Z., & Burry-Stock, J. A. (2003). Classroom assessment practices and teachers' self-perceived assessment skills. Applied measurement in education, 16(4). [CrossRef]
Table 1. Demographic characteristics of the quantitative study participants.

| Variable | Category | Frequency | Percentage |
|---|---|---|---|
| Gender | Female | 10 | 15.87 |
| | Male | 53 | 84.13 |
| Experience (years) | 1–3 | 9 | 14.29 |
| | 4–6 | 19 | 30.16 |
| | 7–9 | 19 | 30.16 |
| | 10–12 | 16 | 25.40 |
| Level of education | Level 4 | 28 | 44.44 |
| | BA/BSC | 28 | 44.44 |
| | MA/MSC | 7 | 11.11 |
| Department | Nursing | 21 | 33.33 |
| | Accounting | 10 | 15.87 |
| | Pharmacy | 9 | 14.29 |
| | Laboratory | 9 | 14.29 |
| | Database | 14 | 22.22 |
Table 2. Means, standard deviations, and one-sample t-test of teachers’ perceived assessment practice of students’ educational assessment.

| Competency area | Mean | SD | Test value | t | sig |
|---|---|---|---|---|---|
| Choosing assessment methods | 11.70 | 5.23 | 12 | .46 | .649 |
| Developing assessment methods | 18.33 | 10.16 | 21 | 2.88 | .041 |
| Administering, scoring, and interpreting assessments | 19.11 | 6.01 | 18 | 1.47 | .148 |
| Using assessments for decision-making | 17.52 | 7.64 | 15 | 2.62 | .011 |
| Using assessments for grading | 13.59 | 7.49 | 12 | 1.69 | .097 |
| Communicating assessment results | 11.84 | 3.37 | 12 | .373 | .710 |
| Recognizing unethical practices | 16.21 | 4.46 | 15 | 2.15 | .036 |
| Total perceived practice | 108.97 | 27.70 | 105 | 1.13 | .263 |
Table 3. Means, standard deviations, and one-sample t-test of teachers’ assessment literacy competency knowledge test.

| Assessment literacy competency knowledge | Mean | SD | Test value | t | sig |
|---|---|---|---|---|---|
| Choosing assessment methods | 1.60 | .93 | 1.5 | .885 | .380 |
| Developing assessment methods | .97 | .76 | 2 | 10.76 | .001 |
| Administering, scoring, and interpreting assessments | 1.92 | 1.24 | 3.5 | 10.15 | .001 |
| Using assessments for decision-making | 1.38 | 1.24 | 3.5 | 13.60 | .001 |
| Using assessments for grading | .90 | .98 | 2.5 | 12.93 | .001 |
| Communicating assessment results | .92 | .92 | 1.5 | 4.90 | .001 |
| Recognizing unethical practices | 1.11 | 1.18 | 3 | 12.71 | .001 |
| Total assessment literacy | 8.81 | 3.57 | 17.5 | 19.30 | .001 |
Table 4. Trainers’ observed practice in seven areas of students’ educational assessment.

| Competency area | Mean | SD | Test value | t | sig |
|---|---|---|---|---|---|
| Choosing assessment methods | 10.59 | 2.88 | 15 | 12.15 | .001 |
| Developing assessment methods | 8.32 | 1.52 | 21 | 66.13 | .001 |
| Administering, scoring, and interpreting assessments | 19.75 | 3.59 | 36 | 35.96 | .001 |
| Using assessments for decision-making | 13.22 | 6.33 | 24 | 13.50 | .001 |
| Using assessments for grading | 6.80 | 3.52 | 12 | 11.70 | .001 |
| Communicating assessment results | 13.97 | 7.04 | 24 | 35.97 | .001 |
| Recognizing unethical practices | 8.97 | 4.45 | 15 | 10.77 | .001 |
| Total observed practice | 81.62 | 23.32 | 147 | 22.26 | .001 |
Table 5. Correlation coefficients between teachers’ assessment test scores, perceived practices, and observed practices.

| No. | Variable | Mean | SD | 1 | 2 | 3 |
|---|---|---|---|---|---|---|
| 1 | Assessment literacy | 8.81 | 3.57 | 1 | | |
| 2 | Perceived practice | 108.94 | 27.67 | .358* | 1 | |
| 3 | Observed practice | 81.62 | 23.32 | .648* | .376* | 1 |

* p < .05.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).