Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Do Written Responses to Open-Ended Questions on Fourth-Grade Formative Assessments in Mathematics Help Predict Scores on End-of-Year Standardized Tests?

Version 1 : Received: 31 March 2022 / Approved: 1 April 2022 / Online: 1 April 2022 (04:37:03 CEST)

How to cite: Urrutia, F.; Araya, R. Do Written Responses to Open-Ended Questions on Fourth-Grade Formative Assessments in Mathematics Help Predict Scores on End-of-Year Standardized Tests? Preprints 2022, 2022040002. https://doi.org/10.20944/preprints202204.0002.v1

Abstract

Predicting long-term student learning is a critical task for teachers and for educational data mining. However, most models do not consider two typical situations in real-life classrooms. The first is that teachers develop their own questions for formative assessment. Therefore, there is a huge number of possible questions, each of which is answered by only a few students. Second, formative assessment often involves open-ended questions that students answer in writing. These types of questions are highly valuable in formative assessment, but analyzing the responses automatically can be a complex process. In this paper, we address these two challenges. We analyzed 621,575 answers to closed-ended questions and 16,618 answers to open-ended questions by 464 fourth-graders from 24 low-SES schools. We constructed a classifier to detect incoherent responses to open-ended mathematics questions. We then used it in a model to predict scores on an end-of-year national standardized test. We found that, despite students answering 36.4 times fewer open-ended questions than closed-ended questions, including features of the students’ open responses in our model improved our prediction of their end-of-year test scores. To the best of our knowledge, this is the first time that a predictor of end-of-year test scores has been improved by using automatically detected features of answers to open-ended questions on formative assessments.
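The abstract mentions a classifier for detecting incoherent written responses but does not specify its design. As a purely illustrative sketch (not the authors' method), a first-pass detector could flag answers using simple surface features such as length, the share of alphanumeric characters, and keyboard-mashing runs; the thresholds below are assumptions for the example.

```python
# Illustrative sketch only: the paper's actual classifier is not described here.
# A toy rule-based detector that flags likely-incoherent written answers using
# simple surface features (length, symbol ratio, repeated-character runs).
import re


def surface_features(answer: str) -> dict:
    """Extract simple surface features from a written answer."""
    text = answer.strip()
    alnum = sum(ch.isalnum() for ch in text)
    return {
        "length": len(text),
        "alnum_ratio": alnum / len(text) if text else 0.0,
        # longest run of one repeated character, e.g. "ffff" -> 4
        "max_char_run": max(
            (len(m.group(0)) for m in re.finditer(r"(.)\1*", text)), default=0
        ),
    }


def looks_incoherent(answer: str) -> bool:
    """Heuristic flag: very short, mostly symbols, or keyboard mashing."""
    f = surface_features(answer)
    return f["length"] < 3 or f["alnum_ratio"] < 0.5 or f["max_char_run"] >= 5


answers = [
    "I added 12 and 7 to get 19 because both buses carry students.",
    "asdfffffff",
    "??",
]
flags = [looks_incoherent(a) for a in answers]
print(flags)  # the first answer passes; the last two are flagged
```

In practice such heuristics would only be a baseline; a learned classifier trained on labeled responses would replace the hand-set thresholds.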

Keywords

Computational linguistics; elementary mathematics; formative assessments; student models

Subject

Social Sciences, Education

