Preprint
Article

This version is not peer-reviewed.

Contextual Influence on Pattern Separation During Encoding

A peer-reviewed article of this preprint also exists.

Submitted: 20 November 2024

Posted: 21 November 2024


Abstract

Pattern separation is considered a crucial process that allows us to distinguish among the highly similar and overlapping experiences that constitute our episodic memory. Not only do different episodes share common features, but it is often the case that they share the context in which they occurred. While a great number of studies have investigated pattern separation and its behavioural counterpart, a process known as mnemonic discrimination, research exploring the influence of context on pattern separation or mnemonic discrimination has, surprisingly, been less common. Part of the available evidence showed that similar items with similar contexts led to a failure in pattern separation, due to a high degree of similarity that triggers overlap between events. On the other hand, others have shown that pattern separation can take place even under these conditions, allowing humans to distinguish between events with similar items and contexts, as different hippocampal subfields would play complementary roles in enabling both pattern separation and pattern completion. Despite the fact that pattern separation is by definition an encoding computation, the existing literature has focused on the retrieval phase. Here, we used a subsequent memory paradigm in which we manipulated the similarity of context during the encoding of visual objects selected from diverse categories. Specifically, we manipulated the encoded context of each object category (four items within a category), so that some categories had the same intercategory context (same context) and others had different intercategory contexts (different context). This approach allowed us to test not only the items presented, but also the conditions that entail the greatest demand on pattern separation. After a 20-minute period, participants performed a visual mnemonic discrimination task in which they had to differentiate between old, similar, and new items by providing one of the three options for each tested item. Similarly to previous studies, we found no interaction between judgments and contexts, and participants were able to discriminate between old and lure items at the behavioural level in both conditions. Moreover, when averaging the ERPs of all the items presented within a category, a significant subsequent memory effect (SME) emerged between hits and new misses, but not between hits and old false alarms or similar false alarms. These results suggest that item recognition emerges from the interaction with subsequently encoded information, and not only from item memory strength and retrieval processes.


1. Introduction

Episodic memory refers to the ability to form and retrieve memories of specific past events [1]. This simple description, however, hides a highly complex process. Human beings encode an immense amount of life episodes, each of them associated with a spatial and temporal moment [2], which constitute the context. Commonly, context refers to the general information that is associated with a specific event at the time of encoding [3]. The use of this contextual information is a defining feature of episodic memory [1,4,5,6]. Although each event is unique, it is often the case that it shares common features with previous episodes, and even shares the context in which they occurred. When we encode similar information, there is a risk of interference between memories due to overlapping neural representations [7,8]. How, then, can humans distinguish among these memories? Our brain uses a process known as pattern separation to form unique representations for similar experiences, allowing us to recollect specific details [9,10,11,12,13,14]. Pattern separation, thus, implies the capacity to resolve interference from an overlap in stimulus features and associated neural responses [15].
In the last decade there have been a great number of studies investigating the neural correlates of pattern separation and its behavioral counterpart, a process known as mnemonic discrimination [16,17]. Surprisingly, research exploring the influence of context on pattern separation or mnemonic discrimination has been less common [18,19,20,21,22,23,24,25]. In contrast to previous pattern separation research, these studies integrated pictured objects into contexts. Although with certain variations, they commonly presented images of objects against a unique background scene at encoding. Interestingly, they showed that background context facilitated target recognition but also increased the rate of false recognition of similar lure items [19,23,24,25], reflecting reduced mnemonic discrimination. Crucially, Libby, Reagh, Bouffard, Ragland, and Ranganath [22] found that hippocampal activity generalized across similar objects that were encoded in the same context [see also [20]].
In the present study we were interested in investigating the effect of context on mnemonic discrimination and its neural counterpart (i.e., pattern separation). Several aspects differentiate our study from previous research. First, even though pattern separation is by definition an encoding computation [26], the above-mentioned studies have focused on the retrieval phase. Second, several studies have shown that interference between memory traces is frequently accounted for by their associations with similar contexts [20]. Thus, in contrast to previous research, we explored whether successful target identification and mnemonic discrimination can be predicted by ERPs during encoding [21,27,28], when the object-context binding is formed [29]. Finally, previous research has demonstrated that mnemonic discrimination decreases when an increased number of related items is studied [30,31]. Thus, visual objects were selected from diverse categories and, in contrast to previous studies, four exemplars were presented for each of these categories. With this aim, we used EEG to register neural activity during a mnemonic discrimination task [10,33] and applied a subsequent memory approach.
Since most of the previous studies have used fMRI, very little is known about the temporal dynamics of pattern separation [21,28,33]. Thus, by recording ERPs during our mnemonic discrimination task we can increase the understanding of the temporal dimension of pattern separation. Furthermore, as recently stated by Amer and Davachi [15], there is an increased interest in the contribution of extra-hippocampal regions to the process of pattern separation, which is best considered as a process supported by a network of brain regions [15,34,35,36]. Finally, ERPs are considered a valuable tool for investigating neural activity at encoding in the subsequent memory paradigm due to the fast and brief nature of neurocognitive processes, allowing different subprocesses to be separated [37].
Since we were interested in studying the influence of contextual stability at encoding [38], in this task participants learned visual objects from a category which were presented either on the same or on a different background context. Following previous studies [38,39], in the recognition phase visual objects were presented without their corresponding background context [23,29] and participants had to differentiate between old (studied object), similar (new items from a previously seen category) and new items (new objects from a new category). Considering the available evidence, we predicted that accuracy for target identification would increase for objects presented in the same background context relative to objects presented in different contexts. On the other hand, correct identification of a similar lure (i.e., mnemonic discrimination) would decrease for objects presented in the same background context relative to objects presented in different contexts, because they match both object and context information to a greater degree [30,40]. This can be explained by a generalization process that occurs at the expense of detailed memory for those objects [41]. Regarding the ERP correlates, a recent review [37] proposed a functional organization of ERP SMEs into three main components: two early frontal and parietal components starting at about 300 ms after stimulus onset, reflecting semantic processing and the binding of multiple features of the event, respectively; and a third component, described as a sustained late frontal effect (starting at around 550 ms), reflecting associative and conceptual encoding. Previous studies have also suggested that the mnemonic operations promoting later memorability can be divided into subprocesses, one starting at 300 ms and the other at around 500 ms [42,43,44]. Accordingly, we hypothesized that correct target identification would be related to a greater positive amplitude starting at around 300 ms and around 500 ms in frontal and centroparietal electrodes at the encoding phase. Similar lures incorrectly identified as old objects would exhibit a greater positive amplitude in frontal and centroparietal electrodes at the encoding phase, similar to the increase for correct recalls [20,21,31]. We also predicted that objects encoded in the same context would show greater amplitude than those encoded in different contexts.

2. Methods

2.1. Participants

Twenty-six students participated in this research. None of the participants reported a medical history of neurological or psychiatric disorders. The group of participants consisted of 20 females and 6 males, with a mean age of 26.77 years (SD = 4.21). Each participant signed an informed consent form detailing the study procedures in accordance with the 1991 Declaration of Helsinki. All participants completed the encoding and recognition phases of the study under the same conditions. Sample size was calculated using G*Power [45]. With the aim of detecting an effect size of 0.25 with a statistical power of 0.95, the required sample size was determined to be at least 18 participants.
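As a rough illustration only (the sample-size calculation above was done with the G*Power application), a comparable computation can be sketched in Python with statsmodels. Note that the routine below is a between-groups ANOVA approximation that ignores the correlation among repeated measures, so it generally returns a much larger N than G*Power's repeated-measures module; the alpha level and number of groups are assumptions, not values reported in the text.

```python
# Hedged sketch: between-groups ANOVA power as a rough proxy for the G*Power
# repeated-measures calculation reported above (which yields a smaller N).
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(
    effect_size=0.25,  # Cohen's f, as reported in the text
    alpha=0.05,        # assumed significance level (not stated explicitly)
    power=0.95,
    k_groups=2,        # assumption for illustration: same vs. different context
)
print(f"Approximate total N under a between-groups approximation: {n_total:.1f}")
```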

2.2. Stimuli and Procedure

The stimuli and procedure used in this study were adapted from the methods described by Poch, Prieto, Hinojosa, and Campo [12]. A total of 1280 images were shown to participants during the study phase. Participants were instructed to pay attention to all the images, but were told that they would only be asked about the objects appearing in them. These images represented objects (without human body parts or animals) from different categories, and each category contained four images (see Figure 1). The categories were divided into two groups: same context (160 categories) or different context (160 categories). A category from the same context group had four different category objects (a total of 640 images), all of them with the same background. A category from the different context group also had four different category objects (a total of 640 images), but each one of them with a different background. In addition, backgrounds were not repeated across categories, i.e., each category had a unique background (or backgrounds, in the case of belonging to the “different context” group). The presentation of each image lasted for 1500 ms, and a 1000 ms grey screen was displayed after each image. In addition, the images were presented in four blocks of 320 images each, allowing participants to take untimed breaks between blocks. Participants had a 20-minute break after completing the study phase, and then moved on to the discrimination phase.
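To make the design concrete, the following sketch shows one way the study-phase list described above could be assembled (320 categories of four exemplars, half with a single shared background and half with four distinct backgrounds, and backgrounds never reused across categories). This is an illustration only; the category and background identifiers are hypothetical and are not the materials used in the study.

```python
import random

N_CATEGORIES = 320            # 160 "same context" + 160 "different context" categories
EXEMPLARS_PER_CATEGORY = 4    # four object images per category (320 x 4 = 1280 study trials)

def build_study_list(seed=0):
    """Assemble the study-phase trial list: each category gets a unique background,
    shared by its four exemplars ("same") or one new background per exemplar ("different")."""
    rng = random.Random(seed)
    categories = [f"cat_{i:03d}" for i in range(N_CATEGORIES)]
    rng.shuffle(categories)
    trials, next_bg = [], 0
    for idx, cat in enumerate(categories):
        condition = "same" if idx < N_CATEGORIES // 2 else "different"
        if condition == "same":
            backgrounds = [next_bg] * EXEMPLARS_PER_CATEGORY
            next_bg += 1
        else:
            backgrounds = list(range(next_bg, next_bg + EXEMPLARS_PER_CATEGORY))
            next_bg += EXEMPLARS_PER_CATEGORY
        for exemplar, bg in enumerate(backgrounds, start=1):
            trials.append({"category": cat, "exemplar": exemplar,
                           "context": condition, "background": f"bg_{bg:03d}"})
    rng.shuffle(trials)
    return trials

study_trials = build_study_list()
assert len(study_trials) == 1280
```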
In the discrimination phase, 400 images were randomly presented to each participant. In line with the methods of Poch, Prieto, Hinojosa, and Campo [12], we tested the first object presented within each category to ensure that the effects were due to stimulus interference. These images were divided into 160 old items (previously presented in the study phase), 160 lure items (new images, but from a previously seen object category), and 80 new items (new images from new object categories). Furthermore, the old and similar items were divided into two groups: category encoded with the same context or category encoded with different contexts. Thus, there were 80 old items from categories encoded with the same context, 80 old items from categories encoded with different contexts, 80 lure items from categories encoded with the same context, 80 lure items from categories encoded with different contexts, and finally 80 new items. All images were presented without background for 1500 ms each and followed by a 2000 ms white screen. Participants were instructed to press key 1, 2, or 3 while the image was on the screen to indicate whether the item was old, lure, or new, respectively. Participants were given an untimed pause at the midpoint of the discrimination phase, after the first 200 images.
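For completeness, the composition of the 400-trial discrimination list described above can be summarised in a few lines; this is a sketch of the trial counts only, not the authors' experimental code.

```python
import random

# Hypothetical composition of the 400-trial discrimination phase: 160 old items and
# 160 lures (half from same-context, half from different-context categories),
# plus 80 new items from unstudied categories.
test_list = ([("old", "same")] * 80 + [("old", "different")] * 80 +
             [("lure", "same")] * 80 + [("lure", "different")] * 80 +
             [("new", None)] * 80)
random.shuffle(test_list)
assert len(test_list) == 400
```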

2.3. EEG Acquisition and Processing

Data acquisition for the encoding phase used a Biosemi ActiveTwo system equipped with 128 electrodes, along with extra electrodes for vertical and horizontal electrooculography (EOG) and a nose-tip reference. Initial online referencing was conducted via sensors located in the cap’s posterior region (CMS and DRL). The data were digitized at a sampling rate of 2048 Hz and later re-referenced offline to the nose tip. Data processing continued in Matlab R2019b (The MathWorks, Natick, MA), using the FieldTrip toolbox (version 20201113, available at www.fieldtriptoolbox.org), where data were down-sampled to 256 Hz.
The raw continuous data were divided into epochs of 1700 ms duration, spanning from -500 to 1200 ms around trial presentation. Following segmentation, an infomax independent component analysis was conducted to remove artifacts related to horizontal eye movements and blinks. Epochs containing other types of artifacts were discarded based on visual inspection. The signal underwent low-pass filtering with a cutoff frequency of 30 Hz and was averaged for each condition and participant separately. The event-related potentials (ERPs) were baseline corrected using the 200 ms pre-stimulus interval before being averaged across conditions and participants.
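The preprocessing above was carried out in FieldTrip under Matlab. As an illustration only, a roughly equivalent pipeline could be written in Python with MNE-Python; the file name, reference channel name and ICA settings below are placeholders and assumptions, not the authors' configuration, and the baseline is applied at epoching rather than after filtering.

```python
import mne

# Placeholder file name; one 128-channel Biosemi recording per participant is assumed.
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)
raw.resample(256)                                # down-sample from 2048 Hz
raw.set_eeg_reference(["Nose"])                  # offline nose-tip re-reference (channel name assumed)

events = mne.find_events(raw)                    # assumes triggers on the Biosemi Status channel
epochs = mne.Epochs(raw, events, tmin=-0.5, tmax=1.2,     # -500 to 1200 ms epochs
                    baseline=(-0.2, 0.0), preload=True)   # 200 ms pre-stimulus baseline

ica = mne.preprocessing.ICA(method="infomax", random_state=0)
ica.fit(epochs)
ica.exclude = []          # indices of blink / horizontal eye-movement components, set after inspection
ica.apply(epochs)

epochs.filter(l_freq=None, h_freq=30.0)          # 30 Hz low-pass
evokeds = {cond: epochs[cond].average()          # per-condition ERPs
           for cond in epochs.event_id}
```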

2.4. Statistical Analyses

2.4.1. Behavioral Data

Analyses were performed using IBM SPSS Statistics 21.0 for Windows. As new items had no context, we performed two different repeated measures analyses of variance (ANOVAs). First, following Stark, Kirwan, and Stark [17], we ran a repeated measures ANOVA in which Category was a within-subject factor with nine levels (old, similar and new responses to targets, lures and foils). Second, we performed a repeated measures ANOVA in which Category was a within-subject factor with six levels (old, similar and new responses to targets and lures), and Context was treated as a two-level (same and different) within-subject factor. Following the ANOVA tests, we applied Bonferroni-corrected pairwise comparisons to identify which means exhibited statistically significant differences. The same analytical procedures were applied to the reaction time data, but the old response to foil condition was excluded from the first analysis due to missing values.
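Although the analyses reported here were run in SPSS, an equivalent two-way repeated-measures ANOVA with Bonferroni-corrected pairwise comparisons could be specified in Python with the pingouin package, as in the hedged sketch below; the data frame is synthetic and only illustrates the required long format (one proportion per subject, response category and context).

```python
import numpy as np
import pandas as pd
import pingouin as pg   # pingouin >= 0.5.2 for pairwise_tests

# Synthetic long-format data standing in for per-subject response proportions
# (subject x response category x context); numbers are random, for illustration only.
rng = np.random.default_rng(0)
categories = ["old|target", "similar|target", "new|target",
              "old|lure", "similar|lure", "new|lure"]
df = pd.DataFrame([
    {"subject": s, "category": c, "context": ctx, "proportion": rng.uniform(0, 100)}
    for s in range(1, 27) for c in categories for ctx in ("same", "different")
])

aov = pg.rm_anova(data=df, dv="proportion", within=["category", "context"],
                  subject="subject")
posthoc = pg.pairwise_tests(data=df, dv="proportion", within=["category", "context"],
                            subject="subject", padjust="bonf")
print(aov)
```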
Additionally, we calculated a lure discrimination index (LDI) for each context condition, defined as the ability to reject similar lures and calculated as the proportion of correctly identified lures corrected for the baseline rate of similar responses to novel items [33]. We used a paired samples t-test to compare the LDI between the same and different context conditions.
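For clarity, the LDI defined above amounts to p("similar" | lure) − p("similar" | foil), computed per participant and context condition. A minimal sketch of the index and the paired t-test, using made-up proportions, follows.

```python
import numpy as np
from scipy.stats import ttest_rel

# Made-up per-subject proportions (26 participants), for illustration only.
rng = np.random.default_rng(1)
p_similar_to_lure_same = rng.uniform(0.30, 0.60, size=26)
p_similar_to_lure_diff = rng.uniform(0.30, 0.60, size=26)
p_similar_to_foil = rng.uniform(0.10, 0.25, size=26)   # foils have no context

# LDI = p('similar' | lure) - p('similar' | foil)
ldi_same = p_similar_to_lure_same - p_similar_to_foil
ldi_diff = p_similar_to_lure_diff - p_similar_to_foil

t_value, p_value = ttest_rel(ldi_same, ldi_diff)
print(f"t(25) = {t_value:.3f}, p = {p_value:.3f}")
```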

2.4.2. EEG Data

ERPs were analyzed using a nonparametric cluster-based random permutation approach [46] in the two windows of interest (300–500 ms and 500–800 ms). This type of analysis makes it possible to identify the spatial distribution of statistical effects while effectively handling the multiple-comparisons problem. In this analysis, permutation tests were employed to calculate the sampling distribution for a cluster-based statistic. Cluster-based statistics involve aggregating spatially and temporally adjacent variables, such as t or F values, into clusters. The definition of the cluster statistic may rely on its maximum value, extent, or a combination of these factors [43]. The analytical procedure followed these steps: First, a parametric statistical test was conducted for each electrode within the specified time window of interest. p-values below 0.05 were used to identify clusters formed by adjacent electrodes, with a minimum of two channels required for cluster formation. The cluster-level statistic was determined by summing the individual t or F values within each cluster. Subsequently, a null distribution was generated by calculating 1000 randomized cluster-level test statistics. The observed cluster-level test statistic was then compared to the null distribution, considering as significant only those clusters that exceeded the 95th percentile.
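The cluster-based permutation procedure described above was run in FieldTrip. As an illustration of the same logic, a within-subject (paired) spatio-temporal cluster test on one time window could be set up in MNE-Python as follows, here on random data standing in for the per-subject condition differences; electrode names, window length and permutation settings are assumptions.

```python
import numpy as np
import mne
from mne.stats import spatio_temporal_cluster_1samp_test

# Random data standing in for per-subject condition differences (e.g., same minus
# different context), cropped to one window of interest (~200 ms at 256 Hz).
montage = mne.channels.make_standard_montage("biosemi128")
info = mne.create_info(montage.ch_names, sfreq=256.0, ch_types="eeg")
info.set_montage(montage)

n_subjects, n_times = 26, 52
rng = np.random.default_rng(0)
X = rng.normal(size=(n_subjects, n_times, len(montage.ch_names)))  # (subjects, times, channels)

# Spatial adjacency of the electrodes (note: unlike FieldTrip's minnbchan option,
# MNE clusters any adjacent supra-threshold samples).
adjacency, _ = mne.channels.find_ch_adjacency(info, ch_type="eeg")

t_obs, clusters, cluster_pv, _ = spatio_temporal_cluster_1samp_test(
    X, adjacency=adjacency, n_permutations=1000, tail=0, seed=0)
significant_clusters = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]
```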

2.4.3. Encoding analysis

Based on a previous study [27], three different analyses were carried out: category analysis, first item analysis, and last item analysis, which are described below.
First, in the Category analysis, similar presented items within a given category were averaged as a function of their associated response [see [31]]. We used a repeated measures ANOVA with two within-subject factors: Response, a factor with five levels (old hit, old false alarm, new miss, similar false alarm, similar correct rejection), and Context, a factor with two levels (same context and different context).
In the First Item analysis, we compared the ERPs of a specific item, namely the first object presented within a given category. In this case we used a two-way repeated measures ANOVA with two factors, where Response was a within-subject factor with three levels (old hit, old false alarm, and new miss), and Context was a within-subject factor with two levels (same context and different context).
Finally, in the Last Item analysis, we analyzed the ERP responses to the last studied item within a category (i.e., the fourth presentation), averaged according to the recognition judgement of the probed item belonging to that category. We then used a two-way repeated measures ANOVA as in the Category analysis.
Therefore, these analyses not only test the items presented, but also include the conditions that entail the greatest demand on pattern separation [19].
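As a schematic of how the three analyses differ, the sketch below groups study-phase epochs by the subsequent response given to the probed item of each category, either over all four presentations (Category analysis), only the first presentation (First Item analysis), or only the fourth presentation (Last Item analysis). Field names and the synthetic data are hypothetical, for illustration only.

```python
import numpy as np

RESPONSES = ("old_hit", "old_fa", "new_miss", "similar_fa", "similar_cr")

def encoding_erps(epoch_data, trials, which="category"):
    """Average study-phase epochs (trials x channels x samples) by the subsequent
    response to the probed item of each category.
    which: "category" (all four presentations), "first_item" (position 1 only),
    or "last_item" (position 4 only)."""
    erps = {}
    for response in RESPONSES:
        rows = [t for t in trials if t["response"] == response]
        if which == "first_item":
            rows = [t for t in rows if t["position"] == 1]
        elif which == "last_item":
            rows = [t for t in rows if t["position"] == 4]
        if rows:
            idx = [t["epoch_idx"] for t in rows]
            erps[response] = epoch_data[idx].mean(axis=0)
    return erps

# Tiny synthetic example: 8 epochs, 128 channels, 307 samples (~1.2 s at 256 Hz).
rng = np.random.default_rng(0)
epoch_data = rng.normal(size=(8, 128, 307))
trials = [{"epoch_idx": i, "position": (i % 4) + 1,
           "response": "old_hit" if i < 4 else "new_miss"} for i in range(8)]
category_erps = encoding_erps(epoch_data, trials, which="category")
```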

3. Results

3.1. Behavioral Results

The first ANOVA showed a significant main effect of Category (F(8, 25) = 79.714, p < .001, ηp² = .761). As expected, Bonferroni pairwise comparisons showed that New responses to Foils (M = 81.183; SD = 2.035) were higher than all other responses (all ps < .001), and Old responses to Foils (M = 1.887; SD = .473) were lower than all other responses (all ps < .001). Old responses to Targets (M = 42.900; SD = 2.947) were higher than New responses to Targets (M = 20.933; SD = 2.258) (p < .01), Old responses to Lures (M = 22.209; SD = 2.245) (p < .001), and Similar responses to Foils (M = 16.930; SD = 1.838) (p < .001). Crucially, Similar responses to Lures (M = 48.674; SD = 3.004) were higher than Old responses to Lures (M = 22.209; SD = 2.245) (p < .001), Similar responses to Targets (M = 36.167; SD = 3.093) (p < .001), New responses to Targets (M = 20.933; SD = 2.258) (p < .001), New responses to Lures (M = 29.117; SD = 2.692) (p < .05), and Similar responses to Foils (M = 16.930; SD = 1.838) (p < .001). Similar responses to Targets (M = 36.167; SD = 3.093) were higher than Similar responses to Foils (M = 16.930; SD = 1.838) (p < .001). New responses to Lures (M = 29.117; SD = 2.692) were higher than New responses to Targets (M = 20.933; SD = 2.258) (p < .001) (see Figure 2: A).
Table 1. Response proportions on different trial types.
Mean proportion of responses M SD Significant differences*
Old response to Target (1) 42.900 2.947 2,3,6,7,9
Old response to Lure (2) 22.209 2.245 1,3,5,9
Old response to Foil (3) 1.887 .473 1,2,4,5,6,7,8,9
Similar response to Target (4) 36.167 3.093 3,5,6,9
Similar response to Lure (5) 48.674 3.004 2,3,4,6,7,8,9
Similar response to Foil (6) 16.930 1.838 1,3,4,5,9
New response to Target (7) 20.933 2.258 1,3,5,8,9
New response to Lure (8) 29.117 2.692 3,5,7,9
New response to Foil (9) 81.183 2.035 1,2,3,4,5,6,7,8
*See Appendix A for pairwise comparisons
The second ANOVA showed a significant main effect of Category (F(5, 25) = 14.104, p < .001, ηp² = .361). Neither a significant effect of Context (F(1, 25) = 1.401, p > .20, ηp² = .053) nor a significant Category by Context interaction (F(5, 25) = .436, p > .75, ηp² = .017) was found. Bonferroni pairwise comparisons showed that Similar responses to Lures (M = 48.677; SD = 3.003) were higher than Old responses to Lures (M = 22.197; SD = 2.243) (p < .001), Similar responses to Targets (M = 36.176; SD = 3.097) (p < .001), New responses to Targets (M = 20.929; SD = 2.259) (p < .001), and New responses to Lures (M = 29.126; SD = 2.693) (p < .05). Moreover, Old responses to Targets (M = 42.895; SD = 2.950) were higher than Old responses to Lures (M = 22.197; SD = 2.243) and New responses to Targets (M = 20.929; SD = 2.259) (all ps < .001). Similar responses to Targets (M = 36.176; SD = 3.097) were higher than New responses to Targets (M = 20.929; SD = 2.259) (p < .05). New responses to Lures (M = 29.126; SD = 2.693) were higher than New responses to Targets (M = 20.929; SD = 2.259) (p < .001) (see Figure 2: B).
The paired-samples t-test did not show a significant difference in LDI between the same and different contexts (t(25) = .445, p > .05) (see Figure 3).

3.2. Reaction time results

The reaction time analysis (see Figure 4) showed a significant main effect of Category (F(3.301, 82.523) = 13.805, p < .01, ηp² = .356) in the first ANOVA. Bonferroni pairwise comparisons showed that Similar responses to Lures (M = 1.054; SD = .079) were slower than Old responses to Targets (M = .962; SD = .082) (p < .01), Similar responses to Targets (M = 1.030; SD = .086) (p < .01), New responses to Targets (M = .955; SD = .113) (p < .001), Old responses to Lures (M = .986; SD = .083) (p < .05), New responses to Lures (M = .955; SD = .114) (p < .01), and New responses to Foils (M = .933; SD = .099) (p < .001). In addition, Similar responses to Foils (M = 1.040; SD = .098) were slower than Old responses to Targets (M = .962; SD = .082) (p < .05), New responses to Targets (M = .955; SD = .113) (p < .001), New responses to Lures (M = .955; SD = .114) (p < .01), and New responses to Foils (M = .933; SD = .099) (p < .001). Similar responses to Targets (M = 1.030; SD = .017) were slower than New responses to Targets (M = .955; SD = .113) (p < .01), New responses to Lures (M = .955; SD = .114) (p < .05), and New responses to Foils (M = .933; SD = .099) (p < .001). The second ANOVA did not show a significant interaction between Category and Context (F(4.228, 105.695) = .400, p > .05, ηp² = .142), but showed a main effect of Category (F(.176, .017) = 10.438, p < .001, ηp² = .996). Bonferroni pairwise comparisons showed that Similar responses to Lures (M = 1.054; SD = .016) were slower than Old responses to Targets (M = .962; SD = .016) (p < .01), Old responses to Lures (M = .986; SD = .016) (p < .05), Similar responses to Targets (M = 1.030; SD = .017) (p < .01), New responses to Targets (M = .955; SD = .022) (p < .001), and New responses to Lures (M = .955; SD = .022) (p < .01). In addition, Similar responses to Targets were slower than Old responses to Targets (M = .962; SD = .016) (p < .05), New responses to Targets (M = .955; SD = .022) (p < .01), and New responses to Lures (M = .955; SD = .022) (p < .01).
Table 2. Reaction times on different trial types.
Reaction time responses M SD Significant differences*
Old response to Target (1) 0.962 0.083 4,5
Old response to Lure (2) 0.986 0.084 4
Similar response to Target (3) 1.030 0.087 4,6,7,8
Similar response to Lure (4) 1.054 0.079 1,2,3,6,7,8
Similar response to Foil (5) 1.040 0.099 1,6,7,8
New response to Target (6) 0.955 0.114 3,4,5
New response to Lure (7) 0.955 0.114 3,4,5
New response to Foil (8) 0.933 0.100 3,4,5
*See Appendix B for pairwise comparisons

3.3. ERP Results

3.3.1. Encoding

The Category analysis showed a significant difference in ERP amplitudes between 500 and 800 ms. This effect was explained by a more positive deflection in a central-anterior cluster of electrodes for old hits compared to new miss responses, which was more evident on the right side (p < .05) (Figure 5).
Statistical analyses also revealed a Context main effect. There was a greater ERP amplitude in the same context condition (average of the ERPs of all the items presented within the same context) compared to the different context condition (average of the ERPs of all the items presented within the different context) in both temporal windows. Between 300 and 500 ms we found a cluster of centroposterior electrodes (p < .05), especially on the left side (Figure 6: Left). In the late temporal window (500–800 ms) we observed a cluster of central-anterior electrodes (p < .05) (Figure 6: Right). The Category by Context interaction did not reach statistical significance (p > .05).
No statistical differences between response categories were found in the First Item analysis or in the Last Item analysis. Nonetheless, in the Last Item analysis, event-related potential amplitudes were significantly higher in the same context condition in both temporal windows. The first temporal window (300–500 ms) revealed a cluster of centroposterior electrodes (p < .05) (Figure 7: Left). Similarly, the late temporal window (500–800 ms) revealed a cluster of centroposterior electrodes (p < .05) (Figure 7: Right).

4. Discussion

In the current study we sought to determine the effects of context stability at encoding on mnemonic discrimination and its neural substrate [38,39]. To the best of our knowledge, this is the first study focusing on the encoding phase. Previous studies focused on the recognition stage and showed that keeping the background context consistent at the encoding and recognition phases increased correct target recognition, but also led to false recognition of lure items (i.e., similar items that were never actually learned), thus reducing the ability to discriminate (i.e., behavioral pattern separation) [19,20,22,23,24,25]. This modulation of background context on recognition and discrimination has been interpreted as an increase in familiarity due to the reappearance of the encoding context, which aided target identification but at the same time reduced mnemonic discrimination due to the increase in familiarity of the similar lure items [21,24]. This interpretation appears to be supported by the results of Libby, Reagh, Bouffard, Ragland, and Ranganath [22], who showed that hippocampal activity patterns were different for similar elements that had different encoding contexts, but overlapped for similar elements that shared contextual information. Likewise, Herz, Bukala, Kragel, and Kahana [20] found that false recalls that shared greater contextual similarity with the target context were associated with a reduction in hippocampal low-frequency activity, similar to the reduction associated with correct recalls. Contrary to these studies, we did not find a significant modulation of background context on target identification or mnemonic discrimination performance. Our results aligned with those of Bouffard, Fidalgo, Brunec, Lee, and Barense [47], who tested how the distinctiveness of objects or scenes aided memory. Participants studied 34 scene-object pairs under three conditions: distinct scenes paired with similar objects, similar scenes paired with distinct objects, and similar scenes paired with similar objects. After the study phase, participants performed a single item recognition test (a single image, either a scene or an object), followed by an associative memory judgment. They found that, regardless of whether objects and scenes were similar or distinct, participants showed intact single item recognition of scenes and objects, which suggests that they relied on distinct objects (not scenes) to distinguish between similar memories [see also [48]]. Likewise, other authors [49] provided evidence suggesting that reinstatement of content- and context-based information occurred within separate cortical circuits, so that semantic representations can cue memories in a context-independent manner. Similarly, Stevenson, Reagh, Chun, Murray, and Yassa [36] proposed that source memory and pattern separation are separable processes that might be supported by distinct neural mechanisms. Gronau and Shachar [38] also reported that when relatively long exposure durations were used during encoding (i.e., 2 s), the influence of contextual information on recognition was eliminated. Additionally, Palmer, Grilli, Lawrence, and Ryan [23] showed that participants were worse at identifying similar objects when they were placed in a scene context, repeated or novel, compared to a repeated white background. Finally, it should also be taken into account that the influence of context may change or even reverse depending on the presence or absence of contextual cues during recall [39].
Although we did not find a significant influence of background context, we observed a main effect of Category. Remarkably, mnemonic discrimination took place, since similar lure correct rejections were comparable to correct recalls and higher than the rest of the response types (Figure 2). Additionally, old hits were statistically different from similar FAs (marking a lure as old), suggesting that participants were able to effectively differentiate between old items and lures. Altogether, these results support the idea of successful target recognition and behavioral pattern separation between similar items [33,47]. Previous studies have shown that participants are commonly able to correctly identify targets as ‘old’ and foils as ‘new’ in a high proportion of cases, while having more difficulty identifying lures as ‘similar’ [17]. Interestingly, it seems that participants in our study were more likely to classify old items as similar than previously reported. Further research is needed to fully understand this discrepancy.
In addition, reaction time analyses showed that similar CRs were significantly slower than old false alarms, old hits, similar FAs, and new CRs. Moreover, old false alarms were slower than old hits and new CRs. These results are consistent with previous literature, such as García-Rueda, Poch, and Campo [10], who found significantly slower reaction times for similar CRs in comparison with old hits, similar FAs and new CRs. This slower reaction time for similar CRs suggests that discriminating between items within a category is a more complex process than recognizing previously encoded items or items from an unseen category. Hayes, Nadel, and Ryan [29] similarly found that participants were slower to respond to “scene lures” (similar to targets, but a novel object presented in a novel scene) than to any other condition, supporting the idea that distinguishing related items and scenes involves a more complex processing effort. Moreover, pairwise comparisons between memory judgment (correct or incorrect) and conditions revealed that participants were fastest to respond to “object.object” correct trials (old object presented on a white background during the encoding and testing phases), followed by “scene.scene” correct trials (old object presented on an old background). Participants were slowest to respond to “scene.object” correct trials (an old object without background which was previously presented with a background during study), which did not differ in response times from object lure correct rejections (similar to targets, but a novel object on a white background) and scene lure correct rejections. These results are also consistent with our study, as they suggest that participants experienced greater difficulty discriminating the object when the context was lost, and that this difficulty was comparable to correctly rejecting a lure.

ERP Findings

Participants processed images belonging to different categories during the study phase (four per category). ERPs from the encoding of image categories (when averaging all items) [see [31]] that were correctly identified as old (old hits) were equivalent to the ERPs for those categories that were subsequently marked as similar (old false alarms, indicating recognition of the category) and for those categories that were lures which attracted old responses (similar false alarms). Furthermore, ERPs from the encoding of image categories that were subsequently recognized (old hits) were significantly different from those of categories that were subsequently marked as new (new misses, indicating both category and item non-recognition). These results suggest a category recollection in which the consecutive presentation of similar items within a category created a strong category-related memory trace [20,31]. This trace is reflected in an increased ERP positivity for those categories which were previously seen and recognized in comparison to non-recognized items marked as new, as the latter implied less strengthening of memory encoding and consequently a weaker trace. These results are consistent with global matching models, which propose that the memory strength of a tested item arises from the similarity between its representation and all other representations from studied items (known as global similarity) [13]. Thus, higher neural global similarity during encoding leads to an increase in recognition memory [7,8,13]. However, strengthening of categories could also lead to an increased ERP positivity for similar false alarms, as these items were new but belonged to previously seen categories. In this way, old hits and similar FAs had a more positive ERP than new misses (Figure 5), although only the old hits showed differences at a statistical level.
ERPs from the encoding of category images (when averaging all items) from the same context group showed higher activation than those from the different context group in both the early and late temporal windows, respectively associated with familiarity and recollection. These results suggest a stronger memory trace for those category images that have the same inter-category context, regardless of the response (old, similar or new). Thus, a common context during encoding could facilitate the formation of a stronger memory for images within a category and support the idea that associative memory formation is facilitated by similarity across encoding patterns [51]. However, this ERP increase could merely reflect recognition of the context, without implying a strengthening of the item associated with it, so that, in the absence of context, this recognition of common contexts would not necessarily translate into an advantage in remembering the specific items that appeared in common contexts. In another experiment, Hayes, Nadel, and Ryan [29] found that recognition of context-free objects that were previously studied with a scene was lower than recognition of objects that were previously studied with a white background or of context-presented objects that were previously studied with that scene. Therefore, it could be concluded that context is an element of the single episodic trace that is not determinant for recovering the specific item, although it could be used as a cue when presented and could even decrease item recognition when it is changed, compared to those conditions that continue with the same backgrounds (both white and visually rich scenes) [38].
Consistently, Dohm-Hansen and Johansson [19] designed an experiment in which they presented pictures of objects in different contexts. During the test, they presented objects and contexts in different conditions (old object-old context, new object-new context, similar object-similar context, old object-similar context, and similar object-old context) and participants had to respond whether the object and context were old, similar or new. Results showed that the hit rate for objects was higher when the accompanying context had been presented previously than when the context was a lure. Contextual information may therefore influence object recall. Similarly, participants had a higher hit rate for contexts when the accompanying objects had been previously presented than when they were lures, so the object may also influence the ability to remember the context. ERPs from the encoding of the first presented items (the item subsequently tested in the discrimination phase) that were subsequently recognized did not differ from either the old false alarms or the new misses in any of the contexts. Taken together, the results of the category condition and the first item condition suggest that it is not the encoding of the tested item, but the encoding of all the studied items, that influences subsequent recognition through familiarity with the whole object category [31,52]. ERPs from the encoding of the fourth presentation (the last item studied within each category) showed both an increase in amplitude for the common contexts and no differences between the category responses, which is consistent with the non-differentiation between old hits and similar false alarms, indicating that there is a recognition of the category. Previous literature suggests that the hippocampus enhances differences between events, even though they share item or context information [[9,13], although see [22,53]]. However, there is also previous research suggesting that, although neural coding in the hippocampus may differentiate between events with some overlapping attributes, it assigns overlapping neural representations when the average amount of overlap between stimuli is high [54].
Overall, our results suggest a category-based recognition, as old hits did not differ from old false alarms or similar FAs during encoding. One possible explanation is that episodic memory formation is a single process, meaning that different forms of memory, which contribute to different aspects of the trace (such as context), reflect differing levels of a single encoding mechanism that forges distinct object and contextual representations into a coherent episodic trace [4]. In this way, the increased amplitude of the ERPs from the same context condition could reflect a repetition factor that is not determinant for recovering the item trace at the behavioral level.

5. Conclusions

Our results suggest that behavioural pattern separation can take place even when there is similarity of context and item between events. Cortical activity also suggests that the encoding of all items studied, rather than the encoding of the subsequently tested item, influences subsequent recall. Finally, the differences in cortical activity between contexts at encoding suggest a process of familiarity through repetition of the same context that is not determinant in behavioural discrimination. These differences thus support the idea that different forms of memory are recruited during encoding to build a single episodic trace.

Author Contributions

Laura García-Rueda: Writing – original draft, Formal analysis, Data curation, Conceptualization. Joaquín Macedo-Pascual: Formal analysis, Writing – revised draft. Claudia Poch: Writing – original draft, Validation, Supervision, Methodology, Investigation, Formal analysis, Data curation, Funding acquisition. Pablo Campo: Writing – original draft, Validation, Supervision, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization.

Funding

Universidad Nebrija. This work was supported by a grant from Fundación Tatiana Pérez de Guzman el Bueno to PC. LG-R was supported by a grant from the Regional ministry of research and education of the Community of Madrid and European Regional Development Fund to PC [H2019/HUM-5705]. This work was partially funded by the Ministerio de Ciencia, Innovación y Universidades under grant PID2019-111335GA-I00 to CP.

Institutional Review Board Statement

The ethical committee of the Universidad Autónoma de Madrid approved the study (CEI-82-15349).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

No potential conflict of interest was reported by the authors.

Appendix A

Pairwise comparisons
(I)Response (J)Response Mean differences (I-J) Sig.b
Old response to target New response to target 21.967* .001
Old response to lure 20.691* .000
New response to foil -38.283* .000
Similar response to foil 25.970* .000
Old response to foil 41.013* .000
Similar response to target Similar response to lure -12.507* .000
New response to foil -45.016* .000
Similar response to foil 19.237* .000
Old response to foil 34.280* .000
New response to target Similar response to lure -27.742* .000
New response to lure -8.184* .000
New response to foil -60.250* .000
Old response to foil 19.046* .000
Similar response to lure Old response to lure 26.466* .000
New response to lure 19.558* .036
New response to foil -32.509* .000
Similar response to foil 31.745* .000
Old response to foil 46.787* .000
Old response to lure New response to foil -58.974* .000
Old response to foil 20.322* .000
New response to lure New response to foil -52.066* .000
Old response to foil 27.230* .000
New response to foil Similar response to foil 64.253* .000
Old response to foil 79.296* .000
Similar response to foil Old response to foil 15.043* .000
b. Adjustment for multiple comparisons: Bonferroni.

Appendix B

Pairwise comparisons
(I)Response (J)Response Mean differences (I-J) Sig.b
Old response to target Similar response to lure -.092* .002
Similar response to foil -.078* .011
Similar response to target New response to target .075* .003
Similar response to lure -.024* .004
New response to lure .075* .015
New response to foil .097* .000
New response to target Similar response to lure -.099* .000
Similar response to foil -.085* .000
Similar response to lure Old response to lure .068* .041
New response to lure .100* .001
New response to foil .121* .000
New response to lure Similar response to foil -.086* .001
New response to foil Similar response to foil -.107* .000
b. Adjustment for multiple comparisons: Bonferroni.

References

  1. Tulving, E. Episodic and semantic memory. In Organization of Memory; Tulving, E., Tulving, W., Eds.; Academic Press: New York, NY, USA, 1972; pp. 382–403. [Google Scholar]
  2. Kesner, R.P.; Rolls, E.T. A computational theory of hippocampal function, and tests of the theory: New developments. Neurosci. Biobehav. Rev. 2015, 48, 92–147. [Google Scholar] [CrossRef] [PubMed]
  3. Bachevalier, J.; Nemanic, S.; Alvarado, M.C. The influence of context on recognition memory in monkeys: effects of hippocampal, parahippocampal and perirhinal lesions. Behav. Brain. Res. 2015, 285, 89–98. [Google Scholar] [CrossRef] [PubMed]
  4. Davachi, L. Item, context and relational episodic encoding in humans. Curr. Opin. Neurobiol. 2006, 16, 693–700. [Google Scholar] [CrossRef] [PubMed]
  5. Hasselmo, M.E.; Eichenbaum, H. Hippocampal mechanisms for the context-dependent retrieval of episodes. Neural Netw. 2005, 18, 1172–1190. [Google Scholar] [CrossRef] [PubMed]
  6. Theves, S.; Grande, X.; Düzel, E.; Doeller, C.F. Pattern completion and the medial temporal lobe memory system. In The Oxford Handbook of Human Memory, Two Volume Pack: Foundations and Applications; Kahana, M.J., Wagner, A.D., Eds.; Oxford University Press: Oxford, UK, 2024; pp. 988–1016. [Google Scholar]
  7. Davis, T.; Xue, G.; Love, B.C.; Preston, A.R.; Poldrack, R.A. Global neural pattern similarity as a common basis for categorization and recognition memory. Journal of Neuroscience, 2014, 34, 7472–84. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  8. LaRocque, K.F.; Smith, M.E.; Carr, V.A.; Witthoft, N.; Grill-Spector, K.; Wagner, A.D. Global similarity and pattern separation in the human medial temporal lobe predict subsequent memory. Journal of Neuroscience, 2013, 33, 5466–74. [Google Scholar] [CrossRef]
  9. Dimsdale-Zucker, H.R.; Ritchey, M.; Ekstrom, A.D.; Yonelinas, A.P.; Ranganath, C. CA1 and CA3 differentially support spontaneous retrieval of episodic contexts within human hippocampal subfields. Nature Communications, 2018, 9, 294. [Google Scholar] [CrossRef]
  10. García-Rueda, L.; Poch, C.; Campo, P. Forgetting Details in Visual Long-Term Memory: Decay or Interference? Frontiers in Behavioral Neuroscience, 2022, 16, 1662–5153. [Google Scholar] [CrossRef]
  11. Kirwan, C.B.; Stark, C.E. Overcoming interference: an fMRI investigation of pattern separation in the medial temporal lobe. Learn Mem, 2007, 14, 625–633. [Google Scholar] [CrossRef]
  12. Poch, C.; Prieto, A.; Hinojosa, J.A.; Campo, P. The impact of increasing similar interfering experiences on mnemonic discrimination: electrophysiological evidence. Cognitive Neuroscience, 2019, 10, 129–138. [Google Scholar] [CrossRef]
  13. Xue, G. The Neural Representations Underlying Human Episodic Memory. Trends in Cognitive Sciences, 2018, 22, 544–561. [Google Scholar] [CrossRef] [PubMed]
  14. Yassa, M.A.; Stark, C.E. Pattern separation in the hippocampus. Trends Neurosci, 2011, 34, 515–525. [Google Scholar] [CrossRef] [PubMed]
  15. Amer, T.; Davachi, L. Extra-hippocampal contributions to pattern separation. Elife 2023, 12. [Google Scholar] [CrossRef] [PubMed]
  16. Leal, S.L.; Yassa, M.A. Integrating new findings and examining clinical applications of pattern separation. Nat Neurosci, 2018, 21, 163–173. [Google Scholar] [CrossRef] [PubMed]
  17. Stark, S.M.; Kirwan, C.B.; Stark CE, L. Mnemonic Similarity Task: A Tool for Assessing Hippocampal Integrity. Trends in Cognitive Sciences 2019. [Google Scholar] [CrossRef]
  18. Aldi, G.A.; Lange, I.; Gigli, C.; Goossens, L.; Schruers, K.R.; Cosci, F. Validation of the Mnemonic Similarity Task - Context Version. Braz J Psychiatry, 2018, 40, 432–440. [Google Scholar] [CrossRef]
  19. Dohm-Hansen, S.; Johansson, M. Mnemonic discrimination of object and context is differentially associated with mental health. Neurobiology of Learning and Memory, 2020, 173, 107268. [Google Scholar] [CrossRef]
  20. Herz, N.; Bukala, B.R.; Kragel, J.E.; Kahana, M.J. Hippocampal activity predicts contextual misattribution of false memories. Proc. Natl. Acad. Sci. USA 2023, 120, e2305292120. [Google Scholar] [CrossRef]
  21. Hollarek, M. Remembering Objects in Contexts: ERP Correlates of a Modified Behavioral Pattern Separation Task. Master Thesis, University of Amsterdam/Lund University, 2015; p. 10608753. [Google Scholar]
  22. Libby, L.A.; Reagh, Z.M.; Bouffard, N.R.; Ragland, J.D.; Ranganath, C. The Hippocampus Generalizes across Memories that Share Item and Context Information. Journal Cognitive Neuroscience 2019, 31, 24–35. [Google Scholar] [CrossRef]
  23. Palmer, J.M.; Grilli, M.D.; Lawrence, A.V.; Ryan, L. The impact of context on pattern separation for objects among younger and older apolipoprotein ϵ4 carriers and noncarriers. J Int Neuropsychol Soc, 2023, 29, 439–449. [Google Scholar] [CrossRef]
  24. Racsmány, M.; Bencze, D.; Pajkossy, P.; Szőllősi, Á.; Marián, M. Irrelevant background context decreases mnemonic discrimination and increases false memory. Sci Rep, 2021, 11, 6204. [Google Scholar] [CrossRef] [PubMed]
  25. Szőllősi, Á.; Pajkossy, P.; Bencze, D.; Marián, M.; Racsmány, M. Litmus test of rich episodic representations: Context-induced false recognition. Cognition, 2023, 230, 105287. [Google Scholar] [CrossRef] [PubMed]
  26. Motley, S.E.; Kirwan, C.B. A parametric investigation of pattern separation processes in the medial temporal lobe. J Neurosci, 2012, 32, 13076–13085. [Google Scholar] [CrossRef] [PubMed]
  27. García-Rueda, L.; Poch, C.; Campo, P. Pattern separation during encoding and Subsequent Memory Effect. Neurobiol Learn Mem, 2024, 216, 107995. [Google Scholar] [CrossRef] [PubMed]
  28. Rollins, L.; Khuu, A.; Bennett, K. Event-related potentials during encoding coincide with subsequent forced-choice mnemonic discrimination. Sci Rep, 2024, 14, 15859. [Google Scholar] [CrossRef]
  29. Hayes, S.M.; Nadel, L.; Ryan, L. The effect of scene context on episodic object recognition: parahippocampal cortex mediates memory encoding and retrieval success. Hippocampus, 2007, 17, 873–889. [Google Scholar] [CrossRef]
  30. Arndt, J. The role of memory activation in creating false memories of encoding context. J Exp Psychol Learn Mem Cogn, 2010, 36, 66–79. [Google Scholar] [CrossRef]
  31. Wing, E.A.; Geib, B.R.; Wang, W.C.; Monge, Z.; Davis, S.W.; Cabeza, R. Cortical Overlap and Cortical-Hippocampal Interactions Predict Subsequent True and False Memory. J Neurosci, 2020, 40, 1920–1930. [Google Scholar] [CrossRef]
  32. Garcia-Munoz, A.C.; Alemán-Gómez, Y.; Toledano, R.; Poch, C.; García-Morales, I.; Aledo-Serrano, Á.; Gil-Nagel, A.; Campo, P. Morphometric and microstructural characteristics of hippocampal subfields in mesial temporal lobe epilepsy and their correlates with mnemonic discrimination. Front Neurol 2023, 14, 1096873. [Google Scholar] [CrossRef]
  33. Morcom, A.M. Resisting false recognition: An ERP study of lure discrimination. Brain Research 2015, 1624, 336–348. [Google Scholar] [CrossRef]
  34. Nash, M.I.; Hodges, C.B.; Muncy, N.M.; Kirwan, C.B. Pattern separation beyond the hippocampus: A high-resolution whole-brain investigation of mnemonic discrimination in healthy adults. Hippocampus, 2021, 31, 408–421. [Google Scholar] [CrossRef] [PubMed]
  35. Pidgeon, L.M.; Morcom, A.M. Cortical pattern separation and item-specific memory encoding. Neuropsychologia, 2016, 85, 256–271. [Google Scholar] [CrossRef] [PubMed]
  36. Stevenson, R.F.; Reagh, Z.M.; Chun, A.P.; Murray, E.A.; Yassa, M.A. Pattern Separation and Source Memory Engage Distinct Hippocampal and Neocortical Regions during Retrieval. J Neurosci, 2020, 40, 843–851. [Google Scholar] [CrossRef] [PubMed]
  37. Mecklinger, A.; Kamp, S.M. Observing memory encoding while it unfolds: Functional interpretation and current debates regarding ERP subsequent memory effects. Neurosci Biobehav Rev, 2023, 153, 105347. [Google Scholar] [CrossRef] [PubMed]
  38. Gronau, N.; Shachar, M. Contextual consistency facilitates long-term memory of perceptual detail in barely seen images. J Exp Psychol Hum Percept Perform, 2015, 41, 1095–1111. [Google Scholar] [CrossRef]
  39. Cox, W.R.; Dobbelaar, S.; Meeter, M.; Kindt, M.; van Ast, V.A. Episodic memory enhancement versus impairment is determined by contextual similarity across events. Proc. Natl. Acad. Sci. USA 2021, 118. [Google Scholar] [CrossRef]
  40. Hicks, J.L.; Starns, J.J. Remembering source evidence from associatively related items: explanations from a global matching model. J Exp Psychol Learn Mem Cogn, 2006, 32, 1164–1173. [Google Scholar] [CrossRef]
  41. Melega, G.; Sheldon, S. Conceptual relatedness promotes memory generalization at the cost of detailed recollection. Sci Rep, 2023, 13, 15575. [Google Scholar] [CrossRef]
  42. Fell, J.; Klaver, P.; Elger, C.E.; Fernandez, G. The interaction of rhinal cortex and hippocampus in human declarative memory formation. Rev Neurosci, 2002, 13, 299–312. [Google Scholar] [CrossRef]
  43. Fell, J.; Klaver, P.; Lehnertz, K.; Grunwald, T.; Schaller, C.; Elger, C.E.; Fernandez, G. Human memory formation is accompanied by rhinal-hippocampal coupling and decoupling. Nat Neurosci, 2001, 4, 1259–1264. [Google Scholar] [CrossRef]
  44. Fernandez, G.; Effern, A.; Grunwald, T.; Pezer, N.; Lehnertz, K.; Dumpelmann, M.; Elger, C.E. Real-time tracking of memory formation in the human rhinal cortex and hippocampus. Science, 1999, 285, 1582–1585. [Google Scholar] [CrossRef] [PubMed]
  45. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 2007, 39, 175–191. [Google Scholar] [CrossRef] [PubMed]
  46. Maris, E.; Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 2007; 164, 177–190. [Google Scholar] [CrossRef] [PubMed]
  47. Bouffard, N.R.; Fidalgo, C.; Brunec, I.K.; Lee AC, H.; Barense, M.D. Older adults can use memory for distinctive objects, but not distinctive scenes, to rescue associative memory deficits. Aging, Neuropsychology, and Cognition 2023. [Google Scholar] [CrossRef] [PubMed]
  48. Brady, T.F.; Konkle, T.; Alvarez, G.A.; Oliva, A. Real-world objects are not represented as bound units: independent forgetting of different object details from visual memory. J Exp Psychol Gen, 2013, 142, 791–808. [Google Scholar] [CrossRef]
  49. Kragel, J.E.; Ezzyat, Y.; Lega, B.C.; Sperling, M.R.; Worrell, G.A.; Gross, R.E.; Jobst, B.C.; Sheth, S.A.; Zaghloul, K.A.; Stein, J.M.; Kahana, M.J. Distinct cortical systems reinstate the content and context of episodic memories. Nat. Commun. 2021, 12, 4444. [Google Scholar] [CrossRef]
  50. Anderson, M.L.; James, J.R.; Kirwan, C.B. An event-related potential investigation of pattern separation and pattern completion processes. Cognitive Neuroscience 2016, 1–15. [Google Scholar] [CrossRef]
  51. Wagner, I.C.; Van Buuren, M.; Bovy, L.; Fernández, G. Parallel Engagement of Regions Associated with Encoding and Later Retrieval Forms Durable Memories. Journal of Neuroscience, 2016, 36, 7985–95. [Google Scholar] [CrossRef]
  52. Szőllősi, Á.; Bencze, D.; Racsmány, M. Behavioural pattern separation is strongly associated with familiarity-based decisions. Memory, 2020, 28, 337–347. [Google Scholar] [CrossRef] [PubMed]
  53. Schacter, D.L.; Guerin, S.A.; St Jacques, P.L. Memory distortion: an adaptive perspective. Trends in Cognitive Sciences, 2011, 15, 467–74. [Google Scholar] [CrossRef]
  54. Norman, K.A.; O’Reilly, R.C. Modeling hippocampal and neocortical contributions to recognition memory: a complementary-learning-systems approach. Psychological Review, 2003, 110, 611–46. [Google Scholar] [CrossRef]
Figure 1. Test structure and examples of the items employed. The 1280 images of objects from different categories were presented at the study phase. Each image was presented for 1500 ms, followed by a 1000 ms grey screen. The similarity of the context presented for each category was manipulated to “same” or “different” context. Twenty minutes after the study phase, the discrimination phase took place. During the task, participants were shown 400 images of objects and were instructed to classify each image as old (previously seen in the study phase), similar (new object, but from a category previously seen), or new (new object from a new category). Each image appeared for 1500 ms and was followed by a 2000 ms white screen.
Figure 2. A. Mean proportion of old, similar and new responses to targets, lures and foils. B. Mean proportion of old, similar and new responses to targets from same or different context and lures from same or different context.
Figure 3. Lure Discrimination Index. Proportion of correctly identified lures corrected for the baseline rate of similar responses to novel items in terms of context.
Figure 4. Reaction time responses of old, similar and new responses to targets, lures and foils.
Figure 5. Encoding, Category analysis (Response). Differences reaching statistical significance are marked with an asterisk in the ERPs and with black circles in the topography. Left: ERPs from “old hit” (blue line), “similar FA” (red line), “similar CR” (green line), “old FA”, and “new miss” in the 500–800 ms time window. Right: topographic map of the grand-average of “old hits-new miss” in the 500–800 ms time window.
Figure 6. Encoding, Category analysis (Context). Significant differences are marked with an asterisk in the ERPs and with black circles in the topography. Top and Left: ERPs from “same context” (grey line), and “different context” (black line) in the 300–500 ms time window. Bottom and Left: topographic map of the grand-average of “same context-different context” in the 300–500 ms time window. Top and Right: ERPs from “same context” (grey line), and “different context” (black line) in the 500–800 ms time window. Bottom and Right: topographic map of the grand-average of “same context-different context” in the 500–800 ms time window.
Figure 7. Encoding, Last Item analysis (Context). Significant differences are marked with an asterisk in the ERPs and with black circles in the topography. Top and Left: ERPs from “same context” (grey line), and “different context” (black line) in the 300–500 ms time window. Bottom and Left: topographic map of the grand-average of “same context-different context” in the 300–500 ms time window. Top and Right: ERPs from “same context” (grey line), and “different context” (black line) in the 500–800 ms time window. Bottom and Right: topographic map of the grand-average of “same context-different context” in the 500–800 ms time window.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.