Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Do Written Responses to Open-Ended Questions on Fourth-Grade Online Formative Assessments in Mathematics Help Predict Scores on End-of-Year Standardized Tests?

F. Urrutia *,‡ and R. Araya *,‡
These authors contributed equally to this work.
Version 1: Received: 3 July 2022 / Approved: 12 July 2022 / Online: 12 July 2022 (04:14:37 CEST)

A peer-reviewed article of this Preprint also exists.

Urrutia, F.; Araya, R. Do Written Responses to Open-Ended Questions on Fourth-Grade Online Formative Assessments in Mathematics Help Predict Scores on End-of-Year Standardized Tests? J. Intell. 2022, 10, 82.

Abstract

Predicting long-term student achievement is a critical task for teachers and for educational data mining. However, most models do not consider two situations that are typical of real-life classrooms. First, teachers develop their own questions for online formative assessment, which produces a huge number of possible questions, each answered by only a few students. Second, online formative assessment often involves open-ended questions that students answer in writing. Such questions are highly valuable, but analyzing the responses automatically is a complex process. In this paper, we address these two challenges. We analyzed 621,575 answers to closed-ended questions and 16,618 answers to open-ended questions given by 464 fourth-graders from 24 low-SES schools. Using linguistic features of the answers and an automatic incoherent-response classifier, we built a linear model that predicts the score on an end-of-year national standardized test. We found that even though students answered 36.4 times fewer open-ended questions than closed-ended questions, including features of their open responses in the model improved our prediction of their end-of-year test scores. To the best of our knowledge, this is the first time that a predictor of end-of-year test scores has been improved by using automatically detected features of answers to open-ended questions on online formative assessments.
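To make the modeling approach concrete, the sketch below shows the general kind of linear model the abstract describes: per-student features derived from formative-assessment answers (closed-ended accuracy, simple linguistic features of open responses, and the output of an incoherent-response classifier) regressed against an end-of-year score. All feature names, values, and coefficients here are illustrative assumptions, not the authors' actual data or pipeline.

```python
# Hypothetical sketch of a feature-based linear predictor; the real
# feature set and fitted model are described in the paper, not here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_students = 464  # cohort size reported in the abstract

# Illustrative per-student features (assumed, for demonstration only):
X = np.column_stack([
    rng.uniform(0.0, 1.0, n_students),   # accuracy on closed-ended questions
    rng.uniform(0.0, 40.0, n_students),  # mean word count of open responses
    rng.uniform(0.0, 1.0, n_students),   # fraction flagged as incoherent
])
# Synthetic target standing in for an end-of-year standardized score.
y = 250 + 60 * X[:, 0] + 0.5 * X[:, 1] - 30 * X[:, 2] \
    + rng.normal(0.0, 10.0, n_students)

model = LinearRegression().fit(X, y)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")
```

Comparing the cross-validated fit of a model with only closed-ended features against one that also includes open-response features would mirror the improvement the abstract reports.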

Keywords

Computational Linguistics; Online Learning; Student Model; Online Formative Assessments; Student Achievement

Subject

Social Sciences, Education


