Submitted:
22 November 2025
Posted:
24 November 2025
Abstract
Keywords:
- Does the implementation of digital self-assessment quizzes improve student performance compared to traditional assessments?
Literature Review
Enhancing Engagement and Interactivity
Contradictory Evidence: Tech Fatigue and Cognitive Overload
Immediate Feedback and Learning Outcomes
Adaptive Learning and Personalization
Efficiency, Accessibility, and Digital Readiness
South Asian Contextual Barriers
Synthesis and Gaps
Methodology
Reliability, Validity, and Ethical Considerations
Results
| Effect | SS | df | MS | F | p | Partial η² |
| --- | --- | --- | --- | --- | --- | --- |
| Group | 44.10 | 1 | 44.10 | 13.26 | <.001 | .145 |
| Error | 259.40 | 78 | 3.33 | --- | --- | --- |

| Contrast | Mean Diff | SE | p | 95% CI |
| --- | --- | --- | --- | --- |
| Test 1 → Test 2 | −0.95* | .131 | <.001 | [−1.21, −0.69] |
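As an arithmetic cross-check, the between-subjects statistics follow directly from the sums of squares reported above (F = MS_Group / MS_Error; partial η² = SS_Group / (SS_Group + SS_Error)). A minimal Python sketch using only the tabled values:

```python
# Recompute the between-subjects ANOVA statistics from the
# sums of squares reported in the table above.
ss_group, df_group = 44.10, 1
ss_error, df_error = 259.40, 78

ms_group = ss_group / df_group            # 44.10
ms_error = ss_error / df_error            # ≈ 3.33
f_ratio = ms_group / ms_error             # ≈ 13.26
partial_eta_sq = ss_group / (ss_group + ss_error)  # ≈ .145
```

Both derived values match the table to the reported precision.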
Discussion
Interpretation of Core Findings
Contextual Considerations
Comparison with Previous Research
Limitations
- Sample size and scope: only 80 students from a single girls-only college limits generalizability.
- Limited item bank: restricted the adaptive or diagnostic power of digital quizzes.
- Practice effects: repeated-measures design may have inflated performance improvements despite slight difficulty adjustments.
Pedagogical Implications
Conclusion
Recommendations and Future Implementation
- Educators should attend workshops and seminars on digital tools such as MS Forms. Training should focus on best practices in quiz design, interactivity, and formative assessment strategies to maximize student engagement (Graham, Borup, & Smith, 2012).
- Implementing a system to gather students’ feedback on digital assessments can help identify usability issues, technical challenges, and the perceived value of instant feedback (Nicol & Macfarlane-Dick, 2006).
- Encouraging educators to engage in regular discussions, forums, or digital communities can promote sharing of effective strategies, co-development of assessments, and continuous improvement (Pryor & Crossouard, 2008).
- Orientation sessions and workshops should be provided to improve students’ confidence with online platforms, particularly in contexts where access to technology is limited.
References
- Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–75.
- García-Peñalvo, F. J., Corell, A., Abella-García, V., & Grande, M. (2020). Online assessment in higher education in the time of COVID-19. Education in the Knowledge Society (EKS), 21, Article 12. [CrossRef]
- Godsk, M., & Møller, K. L. (2024). Engaging students in higher education with educational technology. Education and Information Technologies, 30(6), 2941–2976. [CrossRef]
- Graham, C. R., Borup, J., & Smith, N. B. (2012). Using TPACK as a framework to understand teacher candidates’ technology integration decisions. Journal of Computer Assisted Learning, 28(6), 530–546. [CrossRef]
- Hamari, J., Shernoff, D. J., Rowe, E., Coller, B., Asbell-Clarke, J., & Edwards, T. (2016). Challenging games help students learn: An empirical study on engagement, flow and immersion in game-based learning. Computers in Human Behavior, 54, 170–179. [CrossRef]
- Huang, W., Stephens, J. M., & Brown, G. T. L. (2025). Feedback assisted by technology: A systematic review of empirical research. International Journal of Technology in Education (IJTE), 8(2), 421–444. [CrossRef]
- Jamil, S., & Muschert, G. (2023). The COVID-19 pandemic and E-learning: The digital divide and educational crises in Pakistan’s universities. American Behavioral Scientist, 68(9), 1161–1179. [CrossRef]
- Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. [CrossRef]
- Nikou, S., & Aavakare, M. (2021). An assessment of the interplay between literacy and digital technology in higher education. Education and Information Technologies, 26, 3893–3915. [CrossRef]
- Pryor, J., & Crossouard, B. (2008). A socio-cultural theorisation of formative assessment. Oxford Review of Education, 34(1), 1–20. [CrossRef]
- Saleem, F., Chikhaoui, E., & Malik, M. I. (2024). Technostress in students and quality of online learning: Role of instructor and university support. Frontiers in Education, 9, Article 1309642. [CrossRef]
- Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. [CrossRef]
- Simon, P. D., & Zeng, L. M. (2024). Behind the scenes of adaptive learning: A scoping review of teachers’ perspectives on the use of adaptive learning technologies. Education Sciences, 14(12), 1413. [CrossRef]
- Sweller, J. (2020). Cognitive load theory and educational technology. Educational Technology Research and Development, 68, 1–16. [CrossRef]
- Tomlinson, C. A. (2014). The differentiated classroom: Responding to the needs of all learners (2nd ed.). ASCD.
- Waqar, Y., Rashid, S., Anis, F., & Muhammad, Y. (2024). Digital divide & inclusive education: Examining how unequal access to technology affects educational inclusivity in urban versus rural Pakistan. Journal of Social & Organizational Matters, 3(3), 1–13. [CrossRef]
- Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker, J. F. Jr. (2004). Can e-learning replace classroom learning? Communications of the ACM, 47(5), 75–79. [CrossRef]
- Zheng, M., Bender, D., & Lyon, C. (2021). Online learning during COVID-19 produced equivalent or better student course performance as compared with pre-pandemic: empirical evidence from a school-wide comparative study. BMC Medical Education, 21(1), Article 495. [CrossRef]
| Test | Group | Mean | Std. Deviation | N |
| --- | --- | --- | --- | --- |
| Test 1 | Control | 11.10 | 1.44 | 40 |
| Test 1 | Experimental | 12.20 | 1.41 | 40 |
| Test 2 | Control | 12.10 | 1.50 | 40 |
| Test 2 | Experimental | 13.10 | 1.29 | 40 |
| Factor | Level | Mean | SE | 95% CI |
| --- | --- | --- | --- | --- |
| Time | Test 1 | 11.65 | .165 | [11.32, 11.98] |
| Time | Test 2 | 12.60 | .151 | [12.30, 12.90] |
| Group | Control | 11.60 | .204 | [11.19, 12.00] |
| Group | Experimental | 12.65 | .204 | [12.24, 13.06] |
| Effect | SS | df | MS | F | p | Partial η² |
| --- | --- | --- | --- | --- | --- | --- |
| Time | 36.10 | 1 | 36.10 | 52.34 | <.001 | .402 |
| Time × Group | 0.10 | 1 | 0.10 | 0.15 | .704 | .002 |
| Error (Time) | 53.80 | 78 | 0.69 | --- | --- | --- |
| Group | Test 1 Mean | Test 2 Mean | Improvement |
| --- | --- | --- | --- |
| Control | 11.10 | 12.10 | +1.00 |
| Experimental | 12.20 | 13.10 | +0.90 |
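The within-subjects effects and the per-group gains can likewise be cross-checked from the tabled values alone (F = SS_Effect / MS_Error(Time); partial η² = SS_Effect / (SS_Effect + SS_Error)). A short Python sketch, using only numbers reported in the tables above:

```python
# Cross-check the within-subjects effects and per-group gains
# against the summary tables above (all inputs come from those tables).
ss_time, ss_interaction, ss_error_time, df_error = 36.10, 0.10, 53.80, 78

ms_error = ss_error_time / df_error        # ≈ 0.69, matching Error (Time) MS
f_time = ss_time / ms_error                # ≈ 52.34
eta_time = ss_time / (ss_time + ss_error_time)                        # ≈ .402
eta_interaction = ss_interaction / (ss_interaction + ss_error_time)   # ≈ .002

# F for Time × Group comes out ≈ 0.14 from the rounded SS values;
# the table reports 0.15 (the small gap is rounding in the reported SS).
f_interaction = ss_interaction / ms_error

# Per-group improvement from the descriptive means
gain_control = 12.10 - 11.10       # +1.00
gain_experimental = 13.10 - 12.20  # +0.90
```

The agreement (to reported precision) suggests the tables are internally consistent.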
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).