Preprint
Article

This version is not peer-reviewed.

Interactional Competence in a Videoconferencing-Based Speaking Test: Conceptualised from Teachers’ Perception and Rating

Submitted: 07 March 2025
Posted: 11 March 2025


Abstract
Despite the growing acknowledgement of interactional competence (IC) in speaking tests, teachers remain uncertain in their views on and understanding of IC assessment, especially given the rapid incorporation of online tests in tertiary institutions. The way IC has long been conceptualised, and how teachers rate IC in speaking tests, may therefore need to be revisited with a critical eye to suit new circumstances. To fill these two gaps, a qualitative study was conducted at a university in Vietnam to explore EFL teachers’ understanding of IC and IC assessment in videoconferencing speaking tests in the context of the transition from face-to-face to virtual learning. Semi-structured interviews were conducted with five teachers who were involved in the assessment of Zoom-based dialogues between second-year English majors. Thematic analysis of the interviews shows that the teachers had their own definitions and conceptualisations of IC features, held several misconceptions of IC, and found it challenging to assess IC in the videoconferencing test, as the IC features they observed in face-to-face and Zoom-based tests were not entirely the same. The teacher participants also reported mainly employing Questions and responses, Asking for clarification, Cooperating, Turn-taking, and Body language while never or minimally utilising Planning, Compensating, and Monitoring and Repair. These findings offer insights into context-specific videoconferencing-based assessment of speaking competence and provide useful information for teacher training as well as the incorporation of IC into assessment scales.
Subject: Social Sciences – Education

1. Background to the Study

The concept of interactional competence (IC) has evolved significantly since Kramsch (1986) introduced it to address gaps in existing language proficiency models that overlooked co-constructed communication skills. IC is now widely recognized as a critical component of communicative proficiency, requiring individuals to engage in meaningful and purposeful interactions while adapting to specific socio-cultural and pragmatic contexts (Hall & Doehler, 2011; May et al., 2020; Roever & Kasper, 2018). This involves employing strategies such as turn-taking, non-verbal cues, and interactive listening to facilitate effective communication. These features have informed classroom activities and speaking assessments, highlighting the need for task-specific opportunities to evaluate IC.
Research on IC has primarily focused on face-to-face interactions, exploring how features such as topic development, turn-taking, and pragmatic competence can be assessed and integrated into rating scales (Galaczi, 2014; S. J. Young, 2015). Efforts to develop valid IC rating criteria include role-play analyses (S. J. Young, 2015) and examiner reports (May et al., 2020), both of which emphasize the context-sensitive nature of IC assessment and the need for adaptable frameworks. While these studies have advanced understanding of IC in traditional settings, research on IC in videoconferencing-based assessments remains sparse. Studies comparing face-to-face and virtual interactions reveal notable differences in interactional dynamics, such as increased use of clarification requests in video-mediated tasks due to technical limitations (Nakatsuhara et al., 2017) and reduced non-verbal cues impacting interactional behaviors (L. Davis et al., 2018; Kim & Craig, 2012). These findings call for further exploration of how other IC features such as turn-taking and engagement manifest in virtual environments.
Given the critical role of IC in developing students’ communication skills (Vurdien, 2019), understanding how teachers conceptualize and assess IC is essential for creating a comprehensive and context-sensitive list of IC features. This becomes particularly urgent in light of the recent shift from face-to-face to online teaching, learning, and assessment modes driven by rapid technological advancement. At the university in Vietnam where the researchers work, speaking tests previously conducted in person have recently been administered via Zoom, a videoconferencing tool. This is likely the case for numerous other institutions as well, given the global proliferation of advanced educational technology. This shift not only presents new challenges for assessing IC in a virtual context, but also underscores the need to critically revisit traditional conceptualizations of IC and adapt them to new assessment scenarios (Salaberry & Burch, 2021).
Additionally, much of the existing research on IC relies on insights from trained raters of commercial tests (e.g., May, 2009, 2011; Vo, 2021) or language students in Applied Linguistics or TESOL programs (e.g., L. Davis et al., 2018; Vurdien, 2019), leaving classroom teachers—who play a pivotal role in assessing speaking performance—largely absent from the discussion. Few studies have examined teachers’ IC-related insights, especially those of university teachers who have never been formally trained in IC. Likewise, few studies have employed follow-up interviews to explore teachers’ perceptions of IC, particularly in Zoom-mediated speaking tests. These gaps highlight the need for more context-specific research that examines how classroom teachers conceptualize and evaluate IC in both face-to-face and virtual environments.
This study aims to address these gaps by investigating EFL teachers’ understanding of IC and their rating practices in videoconferencing-based speaking tests at the tertiary level. Specifically, it seeks to answer the following research questions:
1. What are teachers’ perceptions of interactional competence?
2. How do teachers rate interactional competence in Zoom-based speaking tests?
By addressing these questions, the study contributes to the development of more effective IC assessment criteria tailored to virtual environments. It also offers crucial insights for teacher training and the design of speaking assessment frameworks that align with the evolving needs of both face-to-face and online teaching contexts.

2. Literature Review

2.1. Definitions of Interactional Competence

Nearly 40 years ago, Kramsch (1986) introduced the concept of interactional competence (IC) within applied linguistics, noting that existing proficiency models overlooked co-constructed communication skills. She argued that interaction is fundamentally co-constructed: effective communication arises from participants assisting each other in conveying their intended meanings through questioning, paraphrasing, and revising their original statements. Building on Kramsch’s call, it is now widely acknowledged that effective communication hinges on individuals’ consideration of the communication context and their interlocutors (Ross, 2018; R. F. Young, 2011). This implies that adept interaction necessitates a comprehensive understanding of language in general, the specific linguistic features suitable for particular contexts, the context itself (including individual roles within interactions), and the skills to utilise this knowledge effectively for communication (Hall & Doehler, 2011; R. F. Young, 2008). For instance, individuals may adjust their speech, such as formality levels, based on their familiarity with their conversation partner. Likewise, conversational style may vary depending on the setting (e.g., a classroom discussion versus a casual coffee shop chat). According to Mehan (1979), IC encompasses two types of competence: "the competence necessary for effective interaction" and "the competence that is available in the interaction between participants" (p. 130). Scholars have further elucidated the concept of IC and explored its features, evident in classroom activities and speaking assessment tasks.
Roever and Kasper (2018) identify IC features as generic interactional properties, including turn-taking for action performance, opening/closing interactions, and conversational repair. Participants deploy various conversational strategies according to specific interactional contexts involving their interlocutors. The ability to employ appropriate turn-taking and to open and close interactions is contingent upon the interactional behaviours of participants, underscoring the inherently co-constructed nature of interaction. Some scholars propose that successful communication entails speakers initiating topics, employing non-verbal behaviours (e.g., facial expressions, postures, and eye gaze), and using interactive listening strategies (e.g., signalling comprehension, requesting clarification, negotiating meaning) (Lam, 2018; Plough et al., 2018). These conceptualisations of IC can significantly influence task design, scoring, and the inferences drawn about test-takers’ abilities from their scores (Roever & Kasper, 2018). For example, if learners’ turn-taking behaviours are to be assessed as part of their IC, assessment tasks should be crafted to facilitate opportunities for participants to take turns. Classroom activities such as information-gap jigsaw tasks, where participants possess partial information required for task completion and are expected to share it, exemplify contexts in which test-takers can demonstrate turn-taking, topic initiation, and negotiation-of-meaning strategies. Accordingly, to comprehend IC as a component of speaking proficiency and to devise assessment and classroom tasks, it is beneficial to examine the descriptors of existing rating scales or frameworks to discern how IC features are integrated into speaking scales (see Iwashita, 2022).

2.2. Research on Interactional Competence

Research on interactional competence (IC) has primarily been conducted through the analysis of test-taker discourse and raters’ verbal protocols. Initially, studies focused on identifying IC features in face-to-face interactions, particularly in classroom paired or group assessments (e.g., Galaczi, 2008; Gan, 2010; He & Dai, 2006; Luk, 2010; Nakatsuhara, 2010; Vo, 2021). Subsequent research explored the integration of IC features into existing rating scales (e.g., Galaczi, 2014; Ikeda, 2017; Lam, 2018; Roever & Kasper, 2018; S. J. Young, 2015) to better understand their scalability and applicability. For instance, Galaczi (2014) identified vital interactional features such as topic development, turn-taking, and listener support across different proficiency levels in pair speaking tests. Lam (2018) observed variations in students’ responses based on their proficiency levels during group assessment tasks in Hong Kong. Roever and Kasper (2018) found quantitative and qualitative differences in the use of preliminaries between higher- and lower-level second language (L2) speakers. Examining pragmatic competence within interaction, Ikeda (2017) and Young (2015) conducted validation studies in a university English for Academic Purposes (EAP) setting. Through detailed discourse analysis of speaking performances during role-plays, these researchers developed rating scales specifically targeting features of pragmatic competence, which were subsequently employed for assessment purposes. Both studies revealed that the rating scales effectively distinguished performance features across proficiency levels, demonstrating their discriminative capacity for the tasks designed for the studies.
Several studies have analysed raters’ verbal reports collected during rating sessions to identify the IC features in test performance. For instance, May (2009) uncovered three key features of IC proposed by Jacoby and Ochs (1995) in her examination of verbal reports from 12 raters. Similarly, Ducasse and Brown (2009) highlighted non-verbal interpersonal communication (such as the use of body language and gaze), interactive listening (referring to test-takers’ manner of displaying attention or engagement), and interactional management (concerning the management of topics and turns) as the principal features of interaction. These studies underscore that aspects of interactional performance salient in raters’ verbal reports are often overlooked in general scales, and they advocate for the development of scales that accurately reflect the complexities of IC in paired speaking tests. Furthermore, the research aligns with the proposition of Fulcher et al. (2011) that a performance data-driven approach (PDA), which prioritises observations of language performance, could furnish a more comprehensive description of test-taker performance. Expanding on prior research, May et al. (2020) undertook a thematic analysis of examiner comments extracted from stimulated verbal reports on paired interactions. This analysis revealed nine main categories and over 50 sub-categories, with the aim of providing scales tailored for classroom assessment purposes.

2.3. Studies in Virtual Environments

Many studies have been conducted in virtual environments utilising video-conferencing technologies to expand upon the research on IC in face-to-face interaction. For instance, Kim and Craig (2012) compared the quality of oral interviews conducted in face-to-face and video-conference environments. They observed that the online interview mode enhanced interactivity between interviewers and test-takers. However, they also noted that test-takers’ interactive behaviours were influenced by factors such as low familiarity with computers, interviewer effects, and limited non-linguistic cues in communication. Similarly, Nakatsuhara et al. (2017) sought to explore whether video-conferencing technology could be a viable alternative to face-to-face interaction. They compared test-taker performance in the face-to-face and video-conference modes, examining various aspects, including test scores, linguistic output, raters’ notes on test administration, and verbal reports on rating behaviours across the two modes. Although they found no significant differences in test scores, they reported substantial variations in language functions. For example, asking for clarification was more common in the video-conference mode due to factors such as sound quality and time lag. In contrast, functions like comparing and suggesting were more prevalent in the face-to-face condition, possibly because of the ease of relating to the rater. Additionally, they observed more challenges in examiners’ interactional behaviours in the video-conferencing condition, such as limited nodding and back-channelling, reduced use of body language, and difficulties in turn management.
Building on the earlier study, Nakatsuhara et al. (2021) compared test-takers’ performances under different rating conditions: live, audio, and video. They analysed examiners’ test scores, verbal reports, and written notes to understand how these conditions affected the assessment process and outcomes. The analysis revealed that audio rating scores were consistently the lowest across all criteria compared with the live and video rating conditions, indicating that the absence of visual cues in the audio-only condition might have led examiners to evaluate performances less favourably. Examiners identified similar negative performance features under the audio and video rating conditions; however, these negative features were translated into lower scores specifically in the audio condition, suggesting that the lack of visual information in audio-only assessments might amplify perceived performance issues. Verbal reports indicated that the visual information available in both the live and video conditions aided raters’ comprehension of test-taker performance, highlighting the importance of non-verbal cues in enhancing raters’ understanding and evaluation of performances. The study found comparable scores between the live and video rating conditions; however, while the scores were similar, the characteristics of the interactions were not entirely identical, suggesting that although outcomes may be comparable, differences may remain in the dynamics and nuances of interaction between these modes.
In addition to examining the quality of interactions in virtual environments, other studies have explored the feasibility of assessment tasks conducted via video-conferencing modes and the potential use of new technology to replace human interlocutors. For instance, Davis et al. (2018) investigated the feasibility and usability of prototype speaking tasks delivered via Skype, encompassing both monologic and dialogic speech, as well as participants’ perceptions of video-mediated speaking tests. Their findings revealed three primary challenges associated with video-mediated speaking tests: concerns regarding test reliability, test-takers’ familiarity with accessing technology, and the limited visual input provided by video compared to face-to-face conversations. Furthermore, Ockey et al. (2017) delved into virtual environments (VEs) to enable contextualised group discussions and facilitate interactive speaking assessments. While the use of avatars (computer-animated human characterisations) aimed to enhance non-verbal representation, it was found inadequate for conveying the full range of non-verbal communication, including facial gestures and hand motions. Nevertheless, virtual environments were found capable of eliciting aspects of IC, such as turn-taking, responding, and building on other participants’ contributions (Iwashita et al., 2021). Ockey et al. (2017) emphasised the importance of incorporating non-verbal cues into interaction to foster a more authentically social presence.
In summary, research on IC has identified various features essential for effective communication in both face-to-face and virtual environments. Continued exploration and refinement of IC assessment criteria are crucial for accurately evaluating test-takers’ communicative abilities across different contexts and proficiency levels. Although the studies reviewed above contribute to the incorporation of interactional competence into speaking assessments in various contexts, the raters involved in these studies are mostly trained raters of commercial tests and/or graduate students in Applied Linguistics/TESOL programmes. Although May et al. (2020) provide a comprehensive list of IC features for classroom teachers to adapt to their contexts, there remains a paucity of research on the assessment of IC by classroom teachers. Despite the wide recognition of IC as part of speaking proficiency, how IC is assessed in classroom contexts is not clear. At the same time, little research has examined teachers’ perceptions of IC in the context of web-based speaking assessment, especially in higher education. To fill these gaps, this paper addresses the following two research questions:
1. What are teachers’ perceptions of interactional competence?
2. How do teachers rate interactional competence in videoconferencing-based speaking tests?

3. Methodology

This qualitative exploratory study was conducted at the English Department of a large urban foreign language-specialised university in Vietnam when it had just transitioned from face-to-face assessment to fully online videoconferencing-based assessment; in particular, second-year English majors’ end-of-term B1 speaking tests were conducted and marked in pairs via Zoom or Google Meet.
Five of the 10 Language Foundation Division teachers were recruited on a voluntary and purposive basis; inclusion depended on whether the teachers taught the Term-2 speaking unit and rated their students’ end-of-term pair dialogues. All were female, aged between 25 and 37, had three to 12 years of teaching experience, held an MA degree in TESOL, and had not been formally trained in IC.
A semi-structured interview protocol was developed to explore the teachers' perceptions of IC, including its definition, constructs, role in communication, and their practices in rating IC during the speaking test. Exploratory interviews were deemed suitable for this purpose, as they allow for the emergence of new topics that could enrich the study’s findings (Gill et al., 2008).
A pilot interview was conducted with two teachers to test the clarity and effectiveness of the interview questions. This 25-minute Zoom-based interview session included 10 questions, with three subsequently reworded for better clarity. The main interviews were then conducted online via Zoom, lasting 20 to 30 minutes, and scheduled based on the availability and convenience of the five participating teachers.
Following the initial data analysis, follow-up interviews of 5 to 10 minutes were conducted with all participants. These follow-ups served to clarify specific statements and gather additional relevant information regarding the teacher participants’ examples of IC constructs, ensuring the data captured a comprehensive understanding of the teachers’ perceptions and practices regarding IC.
The interviews were audio-recorded, transcribed, and analysed by the research team, which adopted an abductive application of Braun and Clarke’s (2006) six-phase thematic analysis. Central to the analysis was the development of a coding framework derived from the interview questions and the identification of themes that emerged from the interview transcripts. Initial overarching themes from the literature review (for example, IC operationalisations) formed a baseline coding framework, to which further categories, codes, and sub-codes were added.
Each interview was double-coded by two researchers, who discussed any discrepancies and refined code definitions until agreement was reached. To enhance the reliability and validity of the findings, the team followed the trustworthiness strategies recommended by Nowell et al. (2017), such as thorough documentation, frequent meetings, constant comparison and reflection, and ongoing triangulation.
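To make the double-coding step concrete, the sketch below shows one way agreement between two coders could be quantified before discrepancies are discussed, using Cohen’s kappa. This is an illustrative aid only, under assumed data: the study reconciled codes through discussion rather than reporting an agreement statistic, and the segment labels below are hypothetical.

```python
# Illustrative sketch only: quantifying agreement between two coders who
# labelled the same interview segments, before discrepancy discussion.
# Labels and data are hypothetical; the study itself resolved disagreements
# through discussion rather than a reported statistic.
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length sequences of code labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of segments given the same code.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical IC codes assigned to ten transcript segments by two coders.
coder_1 = ["turn-taking", "repair", "cooperating", "repair", "planning",
           "turn-taking", "cooperating", "repair", "planning", "turn-taking"]
coder_2 = ["turn-taking", "repair", "cooperating", "compensating", "planning",
           "turn-taking", "cooperating", "repair", "planning", "cooperating"]

print(f"kappa = {cohen_kappa(coder_1, coder_2):.2f}")
# Segments where the coders diverge (e.g., repair vs. compensating) flag
# code definitions that need refinement before re-coding.
```

In a workflow like the one described above, a low kappa on particular pairs of codes (e.g., Compensating vs. Repair) would signal exactly the kind of definitional ambiguity that the researchers resolved through discussion.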

4. Findings

4.1. Teacher Perceptions of IC in Videoconferencing-Based Speaking Tests

One purpose of this study was to discover teachers’ perceptions of IC in videoconferencing-based oral assessment. Findings show that the interviewed teachers appeared to be aware of the importance of IC in communication, were able to define IC in their own ways, held some implicit understanding of IC, misconceived some of its components, and were conscious of the differences between videoconferencing-based and face-to-face IC assessment.

4.1.1. Importance of IC

It was consistently found that all the interviewed participants acknowledged the importance of IC in communication in different settings where natural conversations happen, and at different levels, from the international and national to the institutional, as Teacher 1 said,
We are in the 21st century, and in order to become a global citizen, students should be equipped with the ability to interact, not just in a team, but maybe nationally and internationally.
One interviewee, Teacher 4, suggested that IC would play a big role in a natural setting where people speak in pairs or have a bigger group discussion.
Most of the time in a natural setting, you would need to interact with other people speaking in pairs or having a bigger group discussion. So interactional competence would play a big role in those settings.
This last quote indicates that the teacher had an initial, fundamental understanding of the conditions under which IC can be captured, which is presented in more detail in the findings below.

4.1.2. Definitions of IC

When asked how they defined IC, each teacher offered their own definition, some broad and others narrower. For instance, Teacher 4 broadly referred to IC as
a pattern of asking questions, responding to questions, and using verbal and nonverbal feedback to show that you're really paying attention. It could be in the form of using some backchannels like, ‘Uh huh! Yeah! Right!’ just to show that you're listening to the partner... So everything. And of course, language too.
This definition covers quite a number of components, such as verbal and non-verbal language, asking and answering questions, and backchannelling. Like Teacher 4, Teacher 5 mentioned body language as part of what constituted IC; however, she added one more component: turn-taking.
It [IC] means they are aware of how to take turns in order to maintain the conversations as well as to start the conversation, [and] how they are going to use body language if they want to mean something. (Teacher 5)
Both Teacher 5 and Teacher 4 attached importance to the two-way nature of IC, reflected in how the flow of communication was kept through “paying attention” and “maintaining the conversation.” Likewise, Teacher 2 briefly defined IC as “the ability of students to respond to others’ opinions and develop the content together.”

4.1.3. Implicit Understanding of IC

One interesting finding is that during the interviews, the majority of participants (four out of five) seemed to understand what underlay IC without ever explicitly mentioning it. For instance, Teacher 3 described her expectations of students’ communicative competence in collaborative discussions without being aware that she was describing the presence of IC in conversations. She said, “In discussion, students are expected to showcase their turn-taking skills and their ability to express their opinions and defend their arguments.”
Some teachers considered IC components while marking students’ pair dialogues without realising that they were doing so, as Teacher 4 recounted,
I didn't really have in mind a list of all the things that I needed to pay attention to outside of the rubric. But during the assessment if I observed certain patterns, then I'd be like, 'Oh, this is a good sign. The person is a pretty good conversationalist.' I remember thinking when someone could clarify or ask for more information to build on what the other partner said, I'd be like, 'Okay, this is a good thing. One extra point.'

4.1.4. Misconceptions of IC Components

A somewhat expected finding, given the participants’ lack of formal IC training, is that all of them misconceived one IC component as another. To illustrate, Teacher 1 mistook Compensating for Repair.
This one [Compensating] usually comes in the subsequent part of the discussion, and I did mark them [the students] on whether they knew how to correct themselves. (Teacher 1)
The two constructs of Compensating and Repair are distinct in nature: part of the former refers to finding substitutes or paraphrasing, while part of the latter refers to correcting slips in tenses and expressions.
Additionally, Teacher 3 mis-categorised Monitoring and Repair as Grammar (“Monitoring and Repair falls under Grammar.”). Monitoring and Repair partly concerns self-correction of incorrect tense use during dialogues to avoid miscommunication, which might have led to this false classification, since tense usage is part of grammar. However, unlike Monitoring and Repair, which is among the constructs used to assess IC, Grammar is among the marking criteria for speaking competency in general. Moreover, of grammatical points, only tense correction is included in Monitoring and Repair.
Another similar instance concerns one participant’s misunderstanding of Compensating and Monitoring and Repair as language use (“I personally think that Compensating and Monitoring and Repair reflect more of language use ability.”, Teacher 2). To clarify, Compensating and Monitoring and Repair are used more to assess how responsive conversationalists are to different statements or situations through verbal language, which differs from evaluating their language use ability.
Furthermore, Teacher 4 confused Planning with Content when she stated that “It [Planning] is more about the content and whether she or he can find any evidence to support his or her arguments.” Planning concerns how to get one’s points (content) across; it is not the content itself.

4.1.5. Videoconferencing-Based vs. Face-to-Face Assessment

An interesting theme that emerged from the data is that the teacher participants could rate some IC components via a videoconferencing tool but not others. As expected, two of the five participants admitted that they faced challenges in rating IC via Zoom. Teacher 2 said that she did not have much chance to rate Monitoring and Repair via Zoom since “the sound quality via Zoom cannot be as good as in face-to-face conversation. The listener might have to exert more effort to comprehend the overall message first. As a result, less attention would be paid to detailed errors.”
Similarly, Teacher 4 expressed her concern that “Compensating was hard to be assessed online because students were given time and topics to prepare and, therefore, they might have scripted the dialogues and knew what vocabulary they should use.”
This was not the case for rating Cooperating, Asking for Clarification, and Planning. These components were apparently not hard to observe via Zoom because, without them, “the flow of ideas will be interrupted” (Teacher 1).

4.2. How Teachers Rated IC in Videoconferencing-Based Speaking Tests

Another purpose of the current study was to explore how the EFL teachers rated IC; specifically, whether they used the IC criteria designated for their speaking assessments and the examples of IC components they observed in their students' performances. The sections that follow present the relevant findings.

4.2.1. How Teachers Rated IC According to the Specified Criteria

This section presents the teacher participants’ feedback on the designated IC criteria utilised in their speaking assessment. The findings indicate that four criteria - Questions and Responses, Asking for clarification, Cooperating, and Turn-taking - were reportedly employed in their speaking assessment. Notably, Questions and Responses was deemed the most significant, while the others received comparatively less emphasis.
Questions and Responses
All five participants reported that Questions and Responses was the most prioritised criterion; however, their assessment approaches to this criterion were not always the same. Teachers 1 and 4 emphasised the relationship between asking questions and responding. For Teacher 1, this criterion “weights the most compared to other criteria” and falls under “discussion and wording”. She further explained that she paid attention to the students’ ability to make “logical questions and also providing responses with good rationales when students exchange conversation or engage in a discussion”. This means that the students “have logical thinking, and they use language resources in order to express their logical thinking and as their inner thoughts”. In addition, she said she would “ask the students to respond to their friends or their partners’ utterances immediately” and would assess “whether they have the ability for improvisation in order to respond to the peer’s questions”. Similarly, Teacher 4 valued “how students formed questions and how well they responded to those questions”. Her reported priority was “whether the questions were meaningful because the discussion was more focused on sharing ideas rather than reaching an agreement”. She expected her students to pose authentic questions, genuinely respond to their peers’ ideas, and inquire out of genuine interest. Teachers 2 and 3 “focused more on the way the two students interact with each other”. Teacher 3 classified Questions and Responses as part of “Interaction with partner”, so she examined whether the two students “interacted naturally and efficiently with each other”; her focus was specifically on “the types of phrases they used in their discussion with each other”. Teacher 2 explained, “I checked whether they understand what the other person talks about, and whether they can raise relevant questions to the content presented by the other person”. Teacher 5, however, valued the students’ ability to maintain the conversation: she would like to see “how they are going to raise the questions, interact with each other, and answer the question, like maintaining the conversation”.
Asking for clarification
Among the five participants, Teachers 1, 2, 3, and 4 reported that they utilised the Asking for clarification criterion. Teacher 1 said that, as she had taught her students various phrases “carrying different functions and clarifications, and elaborating”, she rated the students’ “proper use of phrases”, e.g., “Could you please clarify what you meant by ...?; I’m afraid I’m not following”. Likewise, Teacher 3 said she rated this component “based on the accuracy of the questions used for clarification”. Teacher 2 “checked whether they (the students) know how to clarify the question in case this is too general or whether the question is ambiguous” and would give “higher marks for those who can restate and ask for clarity for confirmations”. Teacher 4 also said she would give the students one extra point when “someone could clarify or ask for more information to build on what the other partner said”.
Cooperating
Regarding Cooperating, Teachers 1, 3, and 4 valued this criterion. Teacher 4 believed it is important for keeping the conversation going; she expected her students to use “appropriate expressions to keep the conversation going and ask genuine, meaningful questions”. Although Teacher 3 paid attention to this criterion when rating, she confessed that “it’s counted in the assessment, but not much”, and that the students did not do very well; in fact, they “just say something very short, like Yeah, agree with you, and they move on to their opinion without really or actually cooperating with their partner”. Similarly, Teacher 1 expected her students to “explore basic repertoire of language and strategies to keep the conversation and discussion going”. For instance, they could use expressions to react to their friends’ contributions, such as “Oh, I see what you mean” or “That’s interesting”, or they could ask follow-up questions on certain matters. An illustrative exchange could be:
“I’ve read an article about how wonderful human memory is.”
“Oh, that’s interesting. What was it about?”
Another suggested strategy could be paraphrasing a partner’s previous point. For example,
“So you’re saying that human memory is now negatively affected by technology, aren’t you?”
“Yeah, that’s what I mean.”
“Well, I couldn’t agree with you more, as we can see the evidence not only in our lives but also in many research studies.”
Turn-taking
In terms of Turn-taking, Teachers 1 and 4 reported employing this criterion in their assessment. Teacher 1 expected the students to “use phrases in order to take turns and to interrupt where necessary”. Teacher 4 expected her students to use phrases when intervening or interrupting, such as “Let me go first; Do you mind if I add something here; Sorry for interrupting but ...; That’s really interesting. What else do you think is the reason for that?” Teacher 2 did not pay attention to this criterion because, as she explained, “the students were provided with a topic and then they rehearsed with each other before”.

4.2.2. Additional Noteworthy Aspects in the Assessment of Speaking

As mentioned in the Methodology section, it was found that while marking students’ dialogues via Zoom, the teacher participants unconsciously took into account several IC components that were not present in the specified rubrics. This section illustrates these additional prominent aspects in the assessment of speaking.
Students’ body language
Besides emphasising the students’ verbal interaction, Teacher 4 examined their non-verbal interaction. She looked for “how they used body language like they would normally do in a real conversation”; for example, she checked whether the students were reading from a script or were looking directly into the webcam and using hand gestures as they talked.
Natural flow of conversation
According to Teacher 4, a natural flow of conversation is important. She argued that when students were provided with a list of questions, they knew what kinds of questions their partners would ask and had a chance to prepare and practise the answers in advance. Thus, she expected the conversation to be delivered in a natural manner, not a “memorised” one.
Initiating and maintaining the conversation
Teacher 1 appreciated the students’ ability to initiate and maintain the conversation. She rated these criteria according to “the ability to use such phrases and initiate, maintain and close simple conversation on topics that are familiar and personal interest”. She illustrated her point with an example: when discussing the topic “the importance of good memory”, the leader should be able to initiate the discussion by telling a story or offering a few lead-in statements, instead of just saying “Let’s discuss the topic of Memory”. Moreover, when an interlocutor has finished one of their points, the other should know how to build on that previous response and maintain the flow of the discussion (to prevent the discussion from falling into silence).

4.2.3. Teachers’ Feedback on the Criteria That Were Either Less Utilised or Not Employed

Apart from the provided criteria, the teachers were shown a full version of the CEFR-based IC assessment rubric and asked whether they had considered those components while marking, and to provide relevant examples. The following part presents the participants’ reflections on the criteria that were either less utilised or not employed in their speaking assessment.
Planning
This criterion appeared unfamiliar to, or was less utilised by, the participants. Among the five respondents, only Teacher 1 acknowledged Planning as part of students’ prior preparation for their responses. This involved allocating one minute to prepare for the test and “exploiting any resources available” to “work out how to expand the main points of the discussion”. For instance, “in discussing the topic of Environment, students were usually asked to do extensive research before class”. “Students should have the skill to select, categorise those pieces of uncluttered information and work out the main points. Specifically, when reading about the causes of climate change, students must need to categorise and break the information down into two main causes, namely humans’ activities and the climate’s natural shifts”.
Compensating
Compensating was also new to the teachers. Even though only two teachers acknowledged this criterion, their interpretations were not the same. Teacher 4 said that compensating is a form of correction of others and self-correction, and “that is one important feature of discussion”. She said that in the first part, “the main discussion is scripted, and they did not have that many opportunities to correct each other or to self-correct”. However, she paid attention to compensating in the subsequent part of the discussion, where students should know “how to paraphrase, whether they know how to correct themselves”. In her view, she “gives them extra marks if they have the ability to compensate themselves”. Teacher 2, however, categorised this descriptor under the “interactional category”, which “reflects more of language use ability” or “students’ effort to express ideas despite limited vocabulary”. She further elaborated that when students try to “define the features of something concrete for which he or she can’t remember the words”, they “can convey meanings by qualifying a word meaning something similar” or can “use simple words meaning, something similar to the concept he or she wants to convey”. Although she was aware of this criterion, she “did not see much of the skills in the students”. Techniques the students could use included “invite corrections” and “ask for confirmation”.
Monitoring and repair
Out of the five participants, only Teacher 3 recognised the importance of this criterion, claiming that the students’ ability in monitoring and repair demonstrates that “they are very natural and spontaneous in their speech”. She normally looked for “the tenses or expressions” they used.

5. Discussion

The findings from this study reveal important insights into teachers’ conceptual understanding of interactional competence (IC) and their application of IC-related criteria in videoconferencing-based speaking assessments. While teachers demonstrated a strong intuitive grasp of key IC components, several misconceptions were observed, and distinct challenges emerged in the context of video-mediated assessments. These results have significant implications for IC assessment practice and theory.
Teachers’ conceptual understanding vs. practical application of IC
Despite not having formal training in IC, the definitions provided by teachers closely aligned with those proposed by experts such as Galaczi and Taylor (2018) and Vurdien (2019), who emphasize both background knowledge and interactional components like turn-taking, cooperating, and body language. This alignment suggests that teachers, through their experience in teaching and assessing speaking skills, have developed an implicit understanding of IC that corresponds well with established theoretical frameworks.
However, the participating teachers’ lack of formal IC-specific training likely contributed to the misconceptions observed during the study. These misconceptions may also be attributed to the fact that this was the first time teachers encountered such a detailed IC rubric, and they were given limited time during the interview to familiarize themselves with each component. Although all participants held an MA in TESOL and had been trained in general speaking assessment, they lacked specialized training in IC assessment, which could explain the conflation of some IC constructs, such as compensating and repair, or monitoring and grammar. These findings underscore the need for IC-specific professional development to help teachers better distinguish between closely related components and apply them consistently in practice.
Implicit and explicit use of IC criteria
Another theme across the findings is the implicit use of IC components. Teachers often assessed criteria such as turn-taking, asking for clarification, and cooperating without explicitly recognizing them as distinct IC constructs. This mirrors findings from previous studies (May et al., 2020; Nakatsuhara et al., 2017), which report that teachers frequently rely on intuitive judgment when evaluating interactional behaviors. While this implicit approach indicates that teachers have a natural awareness of IC elements, it also points to variability in assessment practices, suggesting that standardized rubrics with clear descriptions could enhance consistency.
Additionally, teachers reported evaluating aspects not formally included in the rubric, such as body language and the natural flow of conversation, indicating a broader understanding of interactional competence beyond what is explicitly outlined. Teacher 4, for instance, examined students’ use of non-verbal cues, including eye contact and hand gestures, to determine whether they were engaging naturally or relying on scripted responses. This finding aligns with research emphasizing the importance of non-verbal communication in authentic interaction (Roever & Kasper, 2018). However, without standardized guidelines for evaluating non-verbal behaviors, the assessment of such aspects may remain subjective.
Misconceptions of IC components
The observed misconceptions regarding certain IC components, such as compensating, planning, and monitoring and repair, point to potential gaps in teachers’ understanding. For example, compensating, which involves strategies for overcoming communication breakdowns (e.g., paraphrasing or substituting unknown words), was often conflated with correction strategies. Similarly, monitoring and repair was interpreted by some teachers as a general grammatical skill rather than a specific interactional strategy. These findings align with prior research highlighting the complexity of IC assessment and the need for clearer differentiation between components (May et al., 2020; S. J. Young, 2015).
It is worth noting that these misconceptions likely arose due to teachers’ limited exposure to detailed IC rubrics. Given their background in general speaking assessment rather than IC-specific assessment, it is understandable that they may have initially struggled to differentiate between closely related constructs. This highlights the importance of providing teachers with sufficient time, training, and hands-on practice to familiarize themselves with IC rubrics and develop a clearer understanding of the various components.
Challenges in videoconferencing-based assessment
The findings also highlight unique challenges in assessing IC via videoconferencing platforms. Teachers reported difficulties in evaluating certain components, particularly compensating and monitoring and repair, due to the nature of online communication. For instance, poor sound quality during Zoom sessions often made it harder to observe students’ repair strategies, as noted by Teacher 2. Additionally, the structured nature of online tasks, where students were given time to prepare and rehearse responses, reduced opportunities to assess spontaneous compensating behaviors, as pointed out by Teacher 4.
These findings suggest that while videoconferencing platforms offer valuable opportunities for remote language assessment, they may necessitate adaptations to existing IC rubrics. Specifically, criteria like compensating and monitoring and repair may need to be revised or supplemented with additional descriptors suited to video-mediated contexts. This recommendation aligns with Salaberry and Burch’s (2021) argument that long-standing conceptualizations of IC must be critically revisited to account for the evolving nature of communication in digital environments.
Despite these challenges, teachers reported that certain IC components, such as cooperating, asking for clarification, and planning, were easier to assess via Zoom because their absence would disrupt the natural flow of conversation. This suggests that while online assessments may limit access to non-verbal cues and spontaneous speech, they can still effectively capture collaborative interactional behaviors if tasks are designed appropriately.

6. Conclusion

This study set out to explore EFL teachers' perceptions of interactional competence (IC) and their application of IC-related criteria in videoconferencing-based speaking assessments at a university in Vietnam. The findings revealed that while teachers intuitively grasped the importance of key IC components, such as Questions and Responses, Asking for Clarification, Cooperating, and Turn-taking, their conceptual understanding was not always consistent, and misconceptions about certain criteria, such as Compensating, Planning, and Monitoring and Repair, were evident. Additionally, teachers encountered unique challenges in assessing IC via videoconferencing platforms, particularly in evaluating spontaneous interaction and non-verbal cues due to technological constraints and scripted responses.
Implications for practice
The findings underscore several key recommendations for improving IC assessment in both face-to-face and online contexts. First, there is a need for a more detailed rubric than the existing one, with clear descriptions and examples of IC components to guide both teachers and students (May et al., 2020; S. J. Young, 2015). A detailed rubric can help reduce variability in assessment and ensure that important interactional behaviors are consistently evaluated. Second, training workshops should be organized to familiarize teachers with IC rubrics, address common errors in rating, and provide exposure to sample student responses. This recommendation aligns with Fulcher et al. (2011), who emphasize the importance of rater training in ensuring valid and reliable assessments. Workshops could also focus on helping teachers distinguish closely related components, such as Compensating, Repair, and Monitoring, which were often misunderstood in this study.
Furthermore, test preparation strategies should be designed to prevent memorized or rehearsed dialogues in speaking tests. As noted by Nakatsuhara et al. (2017), scripted responses undermine the authenticity of interaction, making it difficult to assess spontaneous conversational skills. Task designs could include prompts that require on-the-spot thinking, real-time problem-solving, or dynamic role-plays to better elicit genuine interaction.
In videoconferencing-based assessments, task design should aim to elicit authentic, unscripted interaction by incorporating such strategies. Additionally, technological improvements, such as better audio-visual quality and real-time feedback tools, could support more accurate assessments of IC in online environments.
Implications for theory and future research
This current study contributes to the theoretical understanding of IC by highlighting the context-specific nature of its assessment and the variability in teachers' perceptions and practices. The findings support the growing recognition that IC assessment must be adapted to different modalities, such as face-to-face and videoconferencing contexts. Future research should further investigate the impact of different task types on IC performance in online environments, focusing on how various interactional behaviors are elicited and assessed. Longitudinal studies could also explore how teachers’ understanding and assessment of IC evolve over time with continued practice and training.
Moreover, given that most existing research has focused on trained raters of commercial tests, more studies are needed to explore IC assessment by classroom teachers in real-world educational settings. Comparative studies across different institutions and cultural contexts could provide additional insights into how IC is conceptualized and assessed globally, contributing to the development of universally applicable IC frameworks.
Limitation
One limitation of this study is the small number of participants, which limits the generalisability of the findings. While the insights gathered offer valuable contributions to understanding IC assessment in this specific context, future research with larger and more diverse samples is necessary to validate these findings.
Final thoughts
As communication skills become increasingly essential in both academic and professional settings, fostering and assessing IC in language learners has never been more critical. This study highlights the complexities teachers face in assessing IC and the urgent need for clearer guidelines, tailored training, and improved task designs. By addressing these challenges, educators can more effectively prepare students to engage in authentic, meaningful communication in diverse contexts—both locally and globally. Moving forward, collaboration between researchers, educators, and policymakers will be key to ensuring that IC assessment continues to evolve in line with the changing educational landscape.

References

  1. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [CrossRef]
  2. Davis, L., Timpe-Laughlin, V., & Gu, L. (2018). Face-to-face speaking assessment in the digital age: Interactive speaking tasks online. In J. M. Davis, J. M. Norris, M. E. Malone, T. H. McKay, & Y.-A. Son (Eds.), Useful assessment and evaluation in language education (pp. 115–130). Georgetown University Press. [CrossRef]
  3. Ducasse, A. M., & Brown, A. (2009). Assessing paired orals: Raters’ orientation to interaction. Language Testing, 26(3), 423–443. [CrossRef]
  4. Fulcher, G., Davidson, F., & Kemp, J. (2011). Effective rating scale development for speaking tests: Performance decision trees. Language Testing, 28(1), 5–29. [CrossRef]
  5. Galaczi, E. D. (2008). Peer–peer interaction in a speaking test: The case of the First Certificate in English examination. Language Assessment Quarterly, 5(2), 89–119. [CrossRef]
  6. Galaczi, E. D. (2014). Interactional competence across proficiency levels: How do learners manage interaction in paired speaking tests? Applied Linguistics, 35(5), 553–574. [CrossRef]
  7. Gan, Z. (2010). Interaction in group oral assessment: A case study of higher- and lower-scoring students. Language Testing, 27. [CrossRef]
  8. Gill, P., Stewart, K., Treasure, E., & Chadwick, B. (2008). Methods of data collection in qualitative research: Interviews and focus groups. British Dental Journal, 204(6), 291–295. [CrossRef]
  9. Hall, J. K., & Doehler, S. P. (2011). L2 interactional competence and development. In J. K. Hall, J. Hellermann, & S. Pekarek Doehler (Eds.), L2 Interactional Competence and Development (pp. 1–15). Multilingual Matters. [CrossRef]
  10. He, L., & Dai, Y. (2006). A corpus-based investigation into the validity of the CET-SET group discussion. Language Testing, 23(3), 370–401. [CrossRef]
  11. Ikeda, N. (2017). Measuring L2 oral pragmatic abilities for use in social contexts: Development and validation of an assessment instrument for L2 pragmatics performance in university settings. http://hdl.handle.net/11343/191879.
  12. Iwashita, N. (2022). Speaking Assessment. In T. M. Derwing, M. J. Munro, & R. I. Thomson, The Routledge Handbook of Second Language Acquisition and Speaking (1st ed., pp. 130–143). Routledge. [CrossRef]
  13. Iwashita, N., May, L., & Moore, P. (2021). Operationalising interactional competence in computer-mediated speaking tests. In M. R. Salaberry & A. R. Burch (Eds.), Assessing speaking in context: Expanding the construct and its applications (pp. 283–320). Multilingual Matters.
  14. Jacoby, S., & Ochs, E. (1995). Co-construction: An introduction. Research on Language and Social Interaction, 28(3), 171–183. [CrossRef]
  15. Kim, J., & Craig, D. A. (2012). Validation of a videoconferenced speaking test. Computer Assisted Language Learning, 25(3), 257–275. [CrossRef]
  16. Kramsch, C. (1986). From language proficiency to interactional competence. The Modern Language Journal, 70(4), 366–372. [CrossRef]
  17. Lam, D. M. K. (2018). What counts as “responding”? Contingency on previous speaker contribution as a feature of interactional competence. Language Testing, 35(3), 377–401. [CrossRef]
  18. Luk, J. (2010). Talking to score: Impression management in L2 oral assessment and the co-construction of a test discourse genre. Language Assessment Quarterly, 7(1), 25–53. [CrossRef]
  19. May, L. (2009). Co-constructed interaction in a paired speaking test: The rater’s perspective. Language Testing, 26(3), 397–421. [CrossRef]
  20. May, L. (2011). Interactional competence in a paired speaking test: Features salient to raters. Language Assessment Quarterly, 8(2), 127–145. [CrossRef]
  21. May, L., Nakatsuhara, F., Lam, D., & Galaczi, E. (2020). Developing tools for learning oriented assessment of interactional competence: Bridging theory and practice. Language Testing, 37(2), 165–188. [CrossRef]
  22. Mehan, H. (1979). Learning lessons: Social organization in the classroom. Harvard University Press.
  23. Nakatsuhara, F. (2010). Interactional competence measured in group oral tests: How do test-taker characteristics, task types and group sizes affect co-constructed discourse in groups? https://uobrep.openrepository.com/handle/10547/623717?show=full.
  24. Nakatsuhara, F., Inoue, C., Berry, V., & Galaczi, E. (2017). Exploring the use of video-conferencing technology in the assessment of spoken language: A mixed-methods study. Language Assessment Quarterly, 14(1), 1–18. [CrossRef]
  25. Nakatsuhara, F., Inoue, C., Berry, V., & Galaczi, E. (2021). Video-conferencing speaking tests: Do they measure the same construct as face-to-face tests? Assessment in Education: Principles, Policy & Practice, 1–20. [CrossRef]
  26. Nguyen, G. N. H. (2021). Thematic analysis of pre-service teachers’ design talks. Journal of Foreign Language Studies, 67, 26–43.
  27. Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1609406917733847. [CrossRef]
  28. Ockey, G. J., Gu, L., & Keehner, M. (2017). Web-based virtual environments for facilitating assessment of L2 oral communication ability. Language Assessment Quarterly, 14(4), 346–359. [CrossRef]
  29. Plough, I., Banerjee, J., & Iwashita, N. (2018). Interactional competence: Genie out of the bottle. Language Testing, 35(3), 427–445. [CrossRef]
  30. Roever, C., & Kasper, G. (2018). Speaking in turns and sequences: Interactional competence as a target construct in testing speaking. Language Testing, 35(3), 331–355. [CrossRef]
  31. Ross, S. (2018). Listener response as a facet of interactional competence. Language Testing, 35(3), 357–375. [CrossRef]
  32. Salaberry, M. R., & Burch, A. R. (2021). Assessing speaking in the post-Covid-19 era: A look towards the future. In M. R. Salaberry & A. R. Burch (Eds.), Assessing speaking in context: Expanding the construct and its applications (pp. 303–310). Multilingual Matters.
  33. Vo, S. (2021). Evaluating interactional competence in interview and paired discussion tasks: A rater cognition study. TESOL Journal, 12(2), e563. [CrossRef]
  34. Vurdien, R. (2019). Videoconferencing: Developing students’ communicative competence. 4(2), 30.
  35. Young, R. F. (2008). Language and interaction: An advanced resource book. Routledge.
  36. Young, R. F. (2011). Interactional competence in language learning, teaching, and testing. In Handbook of Research in Second Language Teaching and Learning. Routledge.
  37. Young, S. J. (2015). Validity argument for assessing L2 pragmatics in interaction using mixed methods. Language Testing, 32(2), 199–225. [CrossRef]