Alterations
In the sections that follow, we draw on foundational neurobiological principles (Dayan & Abbott, 2001; Kandel et al., 2021; Purves et al., 2018) and the primary studies referenced therein to interpret the involvement of different brain areas in FTSD.
Basal Ganglia
Consistent with the M1-centric framework developed in this paper, we propose that the prominent basal ganglia alterations identified by Simonyan et al. (2017) are emergent “secondary by-products” of repeated use of the maladaptive dystonic synergy. When a dystonic synergy in M1 is repetitively activated for the same specialized movement—such as writing in writer’s cramp or phonation in laryngeal dystonia—the descending signals systematically converge onto the corresponding somatotopic sector of the striatum. This repeated, high-frequency cortical input preferentially engages the medium spiny neurons (MSNs) in that striatal territory.
The essence of “learning” at the level of substantia nigra pars compacta (SNc) dopaminergic neurons boils down to a remodeling of both their afferent synapses—excitatory glutamatergic inputs from the pedunculopontine nucleus (PPN) and subthalamic nucleus (STN) plus inhibitory GABAergic inputs from striatal direct-pathway neurons—and activity-dependent tuning of their intrinsic membrane properties (ion-channel makeup and signaling cascades that determine whether the cell fires tonically, bursts, or stays quiescent). When a movement produces an unexpectedly good outcome—a positive reward-prediction error—those excitatory afferents fire in tight temporal register with the action, driving a precisely timed, high-frequency burst in SNc neurons that delivers a phasic surge of dopamine to the corresponding sector of striatum. However, when the same movement consistently yields erroneous or suboptimal results, a cascade of molecular and synaptic plasticity events within the dopaminergic neurons progressively suppresses that bursting mechanism for that particular movement context. One can break down the “how” of this plastic reconfiguration into three interdependent layers.
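The reward-prediction-error logic outlined above can be expressed as a toy temporal-difference sketch, purely for illustration (the learning rate and reward values are arbitrary assumptions, not physiological measurements): a positive error models a phasic SNc burst, and repeated negative outcomes drive the predicted value, and hence the modeled burst, toward suppression.

```python
# Toy reward-prediction-error (RPE) model of phasic dopamine (illustrative only).
# delta = r - V: a positive delta stands in for a phasic SNc burst; repeatedly
# negative outcomes drive V down so no burst is ever emitted for that context.

def run_trials(rewards, alpha=0.1):
    """Return the sequence of prediction errors for a fixed movement context."""
    v = 0.0                      # learned value of the movement outcome
    deltas = []
    for r in rewards:
        delta = r - v            # reward-prediction error
        v += alpha * delta       # value update (learning rate alpha is assumed)
        deltas.append(delta)
    return deltas

# A movement that reliably succeeds: early positive RPEs (bursts), then silence.
good = run_trials([1.0] * 50)
# A dystonic synergy that reliably fails: every RPE is negative or zero,
# so the model never emits a phasic burst for this movement.
bad = run_trials([-1.0] * 50)
```

The point of the sketch is only the sign structure: once outcomes are consistently erroneous, the error signal that would drive bursting is never positive, matching the suppressed phasic firing proposed in the text.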
First, synaptic plasticity at excitatory glutamate inputs onto SNc dendrites. The dopaminergic neurons in SNc receive glutamatergic inputs from multiple sources, including the STN, PPN, and others. These synapses are capable of both LTP and LTD, much like cortical or striatal synapses, depending on patterns of pre- and postsynaptic firing and the resultant intracellular calcium transients. When a motor action yields a negative or neutral outcome—repeatedly failing to produce the expected sensorimotor match—we hypothesize that various basal ganglia feedback loops (particularly from the striatum via pallidal or nigral reticulata circuits) provide inhibitory or desynchronizing signals that discourage these glutamatergic inputs from firing in tight synchrony with movement onset. At the same time, the dopaminergic neuron itself experiences partial or asynchronous excitatory drive that does not push membrane potentials into the high-threshold bursting range.
Because the dystonic synergy fails to produce a coherent or “successful” motor outcome, we hypothesize that the afferent inputs that typically drive a strong, synchronized excitatory burst in the dopaminergic neurons arrive in an uncoordinated fashion or with insufficient amplitude whenever that synergy is attempted. Normally, if a movement matches internal predictions or yields positive reinforcement, the STN, PPN, and other excitatory sources coordinate their firing to coincide tightly with the moment of correct action, pushing the dopaminergic neuron’s membrane potential above a critical threshold and enabling a phasic burst. In the dystonic scenario, however, negative or neutral prediction-error signals flowing through basal ganglia loops disrupt this synchronization. Instead of a single well-timed “surge” of excitatory drive, the midbrain neurons experience smaller or ill-timed bursts from their inputs: sometimes excitatory volleys are too spread out or arrive at off-peak moments, sometimes inhibitory signals from pallidal or nigral reticulata circuits interject. The end result is that each piece of excitatory input by itself is insufficient to overcome the intrinsic potassium currents, calcium channels, and other membrane ionic regulatory mechanisms that demand a tightly clustered, high-amplitude depolarization to elicit a burst. With each repetition of the synergy, these excitatory drives remain fragmented or asynchronous, never coalescing into the strong depolarizing pulse that triggers the characteristic high-frequency dopaminergic discharge. Consequently, the dopaminergic neuron remains only partly activated, entering brief or moderate depolarizations that fall short of the level needed to initiate a full phasic burst.
Over time, repeated episodes of subthreshold activation further reinforce this pattern by inducing forms of synaptic depression at excitatory inputs, making it even less likely that future volleys, arriving out of sync, will sum enough to propel the neuron into burst-firing territory.
More specifically, in dopaminergic neurons of the SNc, high-frequency bursts typically require a sufficiently large and synchronous excitatory input to overcome multiple stabilizing currents (e.g., potassium conductances, calcium-dependent after-hyperpolarizations) and push the membrane potential into a plateau or “bursting” voltage range. In a healthy reward-predictive context, excitatory projections from structures like the PPN or STN arrive in tight temporal coordination precisely when the movement outcome is deemed better than expected, producing a robust depolarization that triggers a phasic burst. By contrast, if the excitatory drive is partial, asynchronous, or arrives at the wrong time relative to the motor event, the neuron does not receive the consolidated depolarization needed to cross that bursting threshold. Instead, the neuron experiences multiple subthreshold episodes where each incoming volley of glutamate partially depolarizes the membrane but fails to initiate a full burst.
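The summation argument, that the same total excitatory charge crosses the bursting threshold only when volleys arrive in tight temporal register, can be illustrated with a minimal leaky-integrator neuron. The time constant, weights, and threshold below are illustrative assumptions, not measured SNc values.

```python
# Minimal leaky integrator: identical sets of excitatory volleys cross an
# assumed burst threshold only when they arrive nearly simultaneously.
# Threshold, leak time constant, and volley weights are illustrative.

def peak_depolarization(spike_times, weight=1.0, tau=5.0, dt=0.1, t_end=100.0):
    """Peak membrane deviation for pulse inputs onto a leaky membrane."""
    v, peak, t = 0.0, 0.0, 0.0
    remaining = sorted(spike_times)
    while t < t_end:
        v *= (1.0 - dt / tau)                    # passive leak each step
        while remaining and remaining[0] <= t:   # deliver any due volleys
            v += weight
            remaining.pop(0)
        peak = max(peak, v)
        t += dt
    return peak

THRESHOLD = 3.5                                  # assumed burst threshold
synchronous = peak_depolarization([20.0, 20.1, 20.2, 20.3])   # clustered volleys
asynchronous = peak_depolarization([10.0, 30.0, 50.0, 70.0])  # spread-out volleys
# Same four volleys in each case: only the clustered pattern sums past THRESHOLD
# before the leak dissipates the earlier depolarizations.
```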
The significance of these repeated subthreshold depolarizations—and why they lead to synaptic depression rather than potentiation—comes down to the intracellular calcium dynamics and receptor-trafficking rules in dopaminergic neurons. When the neuron depolarizes strongly and synchronously (for example, due to coherent afferent firing), a large calcium influx typically enters through voltage-gated calcium channels and/or NMDA receptors, activating kinases like CaMKII or PKC that phosphorylate key receptor subunits (AMPA or NMDA), stabilizing them in a potentiated state. In a subthreshold scenario, however, the depolarization is weaker or briefer, generating only small or short-lived increases in intracellular calcium. These smaller calcium signals frequently engage phosphatase-dominated pathways—particularly calcineurin (PP2B)—and can also recruit internalization machinery (such as β-arrestin-mediated endocytosis for glutamate receptors). NMDA receptor subunits that remain only partially activated, instead of being fully engaged in a high-calcium “LTP-like” event, end up dephosphorylated, which discourages their synaptic incorporation and their stabilization by scaffolding proteins. AMPA receptors also become more susceptible to endocytosis. We propose that repeated occurrences of this partial or poorly synchronized excitatory input—each time the synergy misfires—push the neuron’s glutamatergic synapses toward LTD. The cell’s logic, so to speak, is that “I keep receiving excitatory drive at times that do not align with a meaningful reward or successful action, so I will reduce my sensitivity to these ineffective inputs.” Each such “error repetition” increases the likelihood that these STN/PPN-SNc synapses shed receptor content and reduce synaptic weights, diminishing their ability to deliver enough depolarization for subsequent bursts.
Because the synergy continues to produce the same partially coherent or off-timed excitation during each failed attempt, the dopaminergic neuron is continually subject to subthreshold calcium transients and partial receptor activation, consolidating the LTD-like mechanism. NMDA receptor subunits might be dephosphorylated at certain key residues (like on the NR2B subunit), AMPA receptor subunits (like GluA1) can be internalized, and local scaffolding proteins (e.g., PSD-95) may also be downregulated or displaced. As a result, the neuron’s capacity to generate a strong synchronized burst in subsequent attempts is lowered because the synapses that would normally build the depolarization are now weaker. With repeated occurrences, these depressed inputs no longer arrive with the high-gain, coordinated excitatory potential that would be needed to push the dopaminergic neuron above its threshold for bursting. Consequently, the phasic release of dopamine becomes more and more blunted.
In short, the “why” is that the consistent mismatch of timing and amplitude signals the cell that these excitatory inputs are unproductive or even “erroneous,” triggering a shift toward synaptic depression rather than reinforcement. The “how” is that partial or asynchronous excitatory drive leads to calcium transients too small or too brief to initiate kinase-dependent LTP, favoring phosphatase-heavy LTD processes that dephosphorylate NMDA/AMPA receptor subunits and stimulate receptor endocytosis. Over multiple failed trials, this LTD-like plasticity becomes consolidated, so later volleys from the STN/PPN—even if they occasionally become more synchronous—will find the synapses already depressed and incapable of pushing the neuron into burst mode. We propose that this mechanism thus locks in the reduced phasic dopamine release that characterizes the dystonic state.
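The calcium-threshold logic just summarized (small transients favoring phosphatase-driven LTD, large ones kinase-driven LTP) can be written as a minimal two-threshold update rule in the spirit of the calcium-control hypothesis. The threshold values and step sizes below are illustrative assumptions.

```python
# Minimal calcium-control plasticity rule (illustrative thresholds and steps):
# calcium below both thresholds -> no change; intermediate calcium -> phosphatase-
# dominated LTD (dephosphorylation/endocytosis); large calcium -> kinase-driven LTP.

THETA_D, THETA_P = 0.3, 0.7      # assumed LTD and LTP calcium thresholds

def weight_change(ca):
    if ca >= THETA_P:            # strong, synchronous depolarization: CaMKII/PKC
        return +0.05
    if ca >= THETA_D:            # weak or brief depolarization: calcineurin (PP2B)
        return -0.05
    return 0.0                   # below both thresholds: no plasticity

def simulate(ca_transients, w=1.0):
    for ca in ca_transients:
        w = max(0.0, w + weight_change(ca))
    return w

# Repeated mid-range (subthreshold-for-bursting) calcium, as during misfired
# synergy attempts, cumulatively depresses the STN/PPN -> SNc synapse, while
# repeated strong, synchronized bursts would potentiate it.
depressed = simulate([0.5] * 10)
potentiated = simulate([0.9] * 10)
```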
Second, homeostatic or adaptive changes in intrinsic membrane conductances. Dopaminergic neurons have distinctive pacemaker properties mediated by voltage-gated calcium channels (particularly Cav1.3 L-type channels), hyperpolarization-activated cyclic nucleotide-gated channels (HCN), potassium channels (SK, K-ATP, etc.), and more. Phasic bursts are usually triggered when afferent excitation converges at the right time to push the membrane into a plateau potential or high-frequency firing mode. However, if repeated sensorimotor mismatches cause negative prediction errors, the dopaminergic neuron experiences perturbations in intracellular calcium that activate a range of phosphatases and other signaling molecules (for instance, calcineurin or specific protein kinases). We propose that over multiple failed attempts, these signals can alter phosphorylation states or expression levels of channels that support bursting. For example, T-type calcium channels in distal dendrites may become less available for activation, or SK channels (which hyperpolarize the cell after bursts) may be upregulated or remain persistently open at lower thresholds. The net effect is to make the neuron more resistant to the high-voltage plateau needed for a strong phasic burst. In other words, the cell “tunes” itself to require more robust excitatory synchrony before it will fire a burst, and such synchrony seldom occurs if the synergy is consistently errant. This adaptation is akin to a negative feedback loop: the cell sees that each time it partially depolarizes, the outcome is not rewarded, so it lowers overall excitatory gain to avoid wasting metabolic resources on fruitless bursts.
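The negative-feedback tuning described in this layer amounts to a homeostatic threshold: each unrewarded partial depolarization nudges the required excitatory synchrony upward. A minimal sketch, with step sizes that are assumptions rather than measured channel kinetics:

```python
# Homeostatic burst-threshold adaptation (illustrative step sizes): each partial
# depolarization not followed by reward raises the effective burst threshold
# slightly (e.g., via SK-channel upregulation); rewarded bursts relax it back
# toward a baseline floor.

def adapt_threshold(outcomes, threshold=1.0, up=0.05, down=0.05, floor=1.0):
    for rewarded in outcomes:
        if rewarded:
            threshold = max(floor, threshold - down)
        else:
            threshold += up       # cell demands more synchrony next time
    return threshold

# Twenty consecutive failed synergy attempts: the threshold drifts well above
# baseline, making a phasic burst ever harder to trigger.
after_failures = adapt_threshold([False] * 20)
```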
Third, negative reward-prediction error signals shaping the mesocircuit. The basal ganglia have a sophisticated architecture in which outputs from the striatum, globus pallidus (GPi/GPe), and substantia nigra pars reticulata (SNr) can modulate the excitatory or inhibitory drive that arrives at SNc. In typical successful motor learning, when the movement is better than expected, striatal signals suppress certain pallidal/nigral outputs that would otherwise inhibit the burst-driving circuits. This suppression allows the STN or PPN excitatory projections to the SNc to align tightly with movement onset, generating a pulse of dopamine release. We propose that when the dystonic synergy keeps generating involuntary contractions and prediction errors, this gating reverses: GPi/SNr inhibition stays high, STN and/or PPN excitation is damped or desynchronized, and additional GABA tone from SNr collaterals to SNc rises. Repetition cements these changes: each unsuccessful attempt slightly retunes the loop so that future STN/PPN volleys are weaker or mistimed. The dopaminergic neuron, as part of this circuit, thus “learns” to withhold bursts when it senses repeated mismatch or negative outcome signals, effectively generating a state of lowered phasic release specific to that maladaptive motor pattern. Hence, the synergy no longer triggers the kind of strong excitatory gating or synchronous conduction that would produce a robust dopamine transient.
Because this plastic reshaping persists over numerous attempts, the synergy never receives large, timely pulses of dopamine. Nonetheless, the striatum can still get low-level or tonic dopamine. From the dopaminergic neurons’ standpoint, continuing to fire large bursts for a motor program that produces consistent errors is a wasteful, undesirable scenario. Through the synaptic and intrinsic changes described, we propose that these neurons reduce the probability of high-frequency firing. The result is a stable (albeit maladaptive) decrease in the amplitude and consistency of phasic dopamine release each time the dystonic synergy is attempted. This is how the dopaminergic system “learns” at a detailed, mechanistic level: repetitive negative prediction errors initiate a chain of synaptic LTD in excitatory inputs, channel phosphorylation state changes that heighten the threshold for bursting, and circuit-level modifications that feed more inhibition or reduced excitatory synchrony onto SNc neurons, effectively locking in a low-bursting regime for that specific motor context.
In addition, despite this reduction in phasic dopamine, there remains at least some residual dopaminergic tone in the striatal compartments where the maladaptive synergy projects—composed of either small or erratically timed pulses and a tonic baseline that never disappears entirely. The cortical drive itself, being abnormally high and repetitive, guarantees that the direct-pathway medium spiny neurons (those expressing D1 receptors) are strongly depolarized and often in an up-state when dopamine arrives, even if that dopaminergic signal is relatively weak or late. D1 receptors couple to Gs/olf proteins, causing an upregulation of cAMP and subsequent activation of protein kinase A (PKA) whenever they bind dopamine during neuronal depolarization. Because the synergy is initiated over and over again, these partial dopamine-plus-glutamate coincidences happen frequently enough to trigger incremental but persistent synaptic and transcriptional changes. PKA phosphorylates key effectors such as DARPP-32 and NMDA receptor subunits, which stabilizes excitatory inputs on D1-expressing cells. Moreover, we propose that repeated partial activation can engage immediate early genes such as c-Fos and ΔFosB that induce long-lasting epigenetic modifications, gradually increasing the number of D1 receptors on the membrane or boosting their downstream signaling potency. Essentially, the repeated large glutamate pulses ensure that D1 neurons remain above threshold for potentiation each time even a trickle of dopamine arrives, so they can accumulate enough intracellular signals to achieve a net strengthening (rather than a net weakening) of the direct pathway. Over many iterations, this process manifests as D1 receptor upregulation, despite the overall depressed phasic dopamine environment. 
The crux is that D1-MSNs do not require perfectly timed or robust bursts of dopamine so long as the cortical drive consistently pushes them into a depolarized state and there is still some dopamine available to bind D1 receptors repeatedly. Once these neurons begin upregulating D1 receptor density, each subsequent partial dopaminergic pulse has an even greater impact on cAMP and PKA cascades, accelerating the maladaptive reinforcement of the direct pathway within the corresponding somatotopic region. The result is a self-perpetuating cycle of lowered phasic dopamine release for that task alongside a paradoxical but persistent increase in D1 receptor-mediated plasticity, which further cements the dystonic synergy’s dominance over the affected movement.
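The self-perpetuating loop proposed here (cortical up-states plus a dopamine trickle driving D1 density up, which in turn amplifies each subsequent pulse) can be sketched as a simple multiplicative update. All gains and rates below are illustrative assumptions.

```python
# Toy D1-MSN coincidence loop (illustrative parameters): a small dopamine pulse
# arriving while cortical glutamate holds the cell in an up-state yields a
# cAMP/PKA signal proportional to current D1 density; that signal increments
# density, producing the self-reinforcing upregulation described in the text.

def d1_loop(n_trials, density=1.0, da_pulse=0.2, up_state=True, rate=0.05):
    history = [density]
    for _ in range(n_trials):
        if up_state:                      # coincidence of glutamate and dopamine
            pka = density * da_pulse      # PKA signal scales with receptor density
            density += rate * pka         # receptor insertion / DARPP-32 cascade
        history.append(density)
    return history

with_upstate = d1_loop(100)                     # dystonic context: density grows
without_upstate = d1_loop(100, up_state=False)  # no coincidence: density static
```

Because each increment is proportional to the current density, growth compounds across trials, which is the positive-feedback character the text attributes to D1 upregulation under a weak but repeated dopamine signal.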
Furthermore, when we discuss D2 receptor downregulation under conditions of low phasic dopamine plus hyperexcitable cortical (glutamate) input, it helps to distinguish between simple “dopamine deficiency” at the receptor and the dynamic, synaptic plasticity mechanisms triggered by repeated mismatches in timing and amplitude of dopamine relative to excitatory drive. In a typical scenario such as partial denervation (e.g., Parkinson’s disease) where there is an outright dopamine shortfall, we often see a compensatory upregulation of D2 receptors (Blesa et al., 2017; and references therein), but that happens largely when the neurons are starved of dopamine in a relatively quiescent or normal pattern of excitatory input. In focal dystonia, we hypothesize that the difference is that the relevant medium spiny neurons (MSNs) in the indirect pathway are bombarded by powerful, disorganized bursts of glutamate that arrive without well-timed reinforcing dopamine pulses. This mismatch of strong cortical input plus erratic or minimal dopamine triggers a plasticity process that actively suppresses D2 receptor expression at the membrane rather than promoting a compensatory increase.
The principal reason behind this outcome is that indirect-pathway spiny projection neurons (iSPNs) depend on coherent, instructive dopamine signals to stabilize or adapt their role in suppressing unwanted movements. When the motor cortex is hyperexcitable and repeatedly sends large glutamate volleys to iSPNs, but dopamine release is not only reduced overall but also arrives at the “wrong” times (e.g., too late, too small, or unpredictably absent), the neuron receives recurring “error-laden” activity. Instead of a constructive reinforcement of which synapses should maintain or upregulate D2 receptors, the cell repeatedly experiences partial activation of D2 receptors in a context that does not match a successful or reward-consistent movement. We reason that these partial, non-optimal stimulations can engage intracellular pathways (for instance, involving β-arrestin) that tag the receptor for internalization rather than maintain it. Over many such cycles, the net effect is a genuine downregulation of functional D2 receptor density on the cell surface, because the neuron is effectively “learning” that this pattern of input is not beneficial or is consistently associated with a negative or neutral outcome. It is not a simple homeostatic response to less dopamine—rather, it is an active plasticity process in which large, misaligned excitatory drive arrives in the absence of a coherent dopaminergic reinforcing signal, pushing the neuron toward pruning its receptors.
Mechanistically, the intracellular signaling in iSPNs that leads to downregulation of D2 can be traced to the interplay between glutamate-induced depolarization and low-level, mistimed dopamine. Under normal conditions, well-timed phasic dopamine bursts help iSPNs refine or depress unwanted excitatory inputs and preserve receptor expression in a context that supports balanced inhibition. Here, however, it is conceivable that each time the cortex fires off its excessive burst, iSPNs are hit with too much or erratic glutamate, which drives the cell toward a heightened or prolonged up-state. The minimal or off-phase dopamine then does not confer a clear “LTD” or stabilizing signal but instead can promote partial receptor phosphorylation patterns that favor internalization or degradation. This is sometimes referred to as maladaptive plasticity: it is not merely that dopamine is low; it is that the timing is so incompatible with the excitatory input that the iSPNs interpret this as persistent motor “error” rather than a scenario where upregulating D2 would help. As an additional feedback loop, persistent negative reward-prediction error signals propagate through basal ganglia circuits to the substantia nigra, likely further reducing phasic dopamine for that motor context, so each trial of the dystonic movement replays the mismatch. With repetition, these iSPNs systematically shift toward lower receptor availability.
Put simply, the presence of large, frequent, pathologically timed glutamate bursts changes the rules for how a neuron reacts to low dopamine levels. We propose that instead of the typical upregulation that might be seen if dopamine were absent but everything else normal, the iSPNs are forced into repeated partial stimulation under conditions that never match successful or rewarding movement. The cell’s internal signaling machinery responds by internalizing or failing to replace D2 receptors at the membrane, culminating in a real downregulation of D2 function. The “why” is because the mismatch of excitatory input and insufficient, untimely dopamine triggers repeated “negative feedback” states that drive synaptic and receptor-level plasticity away from D2 receptor maintenance. The “how” is through the molecular cascades (like β-arrestin, abnormal calcium transients, or altered phosphorylation patterns) that cause trafficking of D2 away from the membrane rather than upregulating its expression. The end result is that indirect-pathway MSNs lose their receptor population, further undermining their ability to inhibit the maladaptive motor output.
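The trafficking argument can be sketched the same way: under this hypothesis, each partial, mistimed D2 activation during a cortical glutamate barrage tags a fraction of surface receptors for β-arrestin-mediated internalization, while slow recycling cannot keep pace. Both rates below are illustrative assumptions.

```python
# Toy D2 surface-receptor trafficking on iSPNs (illustrative rates): mistimed
# partial activation during cortical barrages biases receptors toward
# beta-arrestin-tagged endocytosis; slow recycling toward baseline cannot
# compensate, so surface density settles well below its starting level.

def d2_surface(n_trials, surface=1.0, internalize=0.04, recycle=0.01):
    for _ in range(n_trials):
        surface -= internalize * surface      # partial activation -> endocytosis
        surface += recycle * (1.0 - surface)  # slow recycling toward baseline
    return surface

# Repeated dystonic trials: net downregulation of functional surface D2.
after_mismatch = d2_surface(200)
```

The fixed point of this update lies where internalization and recycling balance, far below the initial density, mirroring the stable D2 downregulation proposed in the text.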
Moreover, we reason that the topological disorganization of D1- and D2-expressing zones, as well as dopaminergic release sites in dystonia, emerges from the same repeated mismatch conditions—strong, poorly timed glutamate from cortex plus reduced and erratic phasic dopamine—that simultaneously drive D1 upregulation, D2 downregulation, and depressed burst firing in SNc. Under healthy conditions, direct and indirect pathway neurons (and their attendant dopaminergic inputs) form partially overlapping fields in the striatum: a single somatotopic zone that controls, for example, finger flexion, will contain interwoven populations of D1-MSNs and D2-MSNs, both of which can be modulated by consistent dopamine bursts arriving in that region. This creates a functional overlap so that any movement—especially a finely tuned action—can be sculpted by a balance of direct-pathway facilitation and indirect-pathway inhibition, all reinforced by well-timed dopaminergic pulses.
In dystonia, prolonged “error-laden” input from the cortex causes large, repeated excitatory signals to converge upon discrete patches of striatum, but the normal, synchronized pulses of dopamine do not arrive. Instead, there is a partial or asynchronous dopaminergic trickle, which has a profoundly different plastic impact on D1- vs. D2-MSNs. Over many training episodes, D1-MSNs in that region get strongly potentiated—both at their glutamatergic synapses and by an upregulation of D1 receptor density—while D2-MSNs in the same territory become downregulated and fail to maintain or expand their receptor population. Crucially, this differential plasticity could begin to carve out sub-territories within the same somatotopic striatal zone in which only the D1-upregulated neurons remain robustly connected to the pathologically active cortical inputs. The D2-MSNs can shrink their dendritic arbors or downregulate synaptic spines, leading to a near “exclusion zone” where the indirect pathway no longer effectively participates. This separation is further enhanced by the fact that the subregions of the striatum where dopamine release typically overlaps with both direct and indirect pathway neurons also undergo reorganization: the misalignment of excitatory bursts and reduced phasic dopamine release fosters localized LTD of excitatory inputs onto dopaminergic neurons (e.g., from STN or PPN) that would otherwise service both D1- and D2-rich compartments.
As a result, we hypothesize that the dopaminergic midbrain no longer provides phasic release in a single, well-shared region of the striatum; instead, the few partial pulses that do occur become channeled or reinforced primarily to the patch of D1-dominated neurons. Because these same D1-MSNs keep receiving robust cortical drive, any dopaminergic pulses that trickle in become relevant almost exclusively to that “winning” D1 subcircuit. Meanwhile, the iSPNs in nearby areas—once part of a shared network—have lost receptor density and synaptic strength, and the net dopaminergic innervation to them is even more attenuated or out of phase. This sets off a vicious cycle of functional segregation: the direct-pathway subregion effectively “grows” in terms of receptor potency and remains tied to the minimal dopaminergic input, whereas the indirect-pathway subregion no longer participates effectively in the same movement domain.
Hence, our model predicts that the previously overlapping topography—where healthy striatal microzones contained a blend of D1- and D2-expressing neurons all subject to the same bursts of dopamine—becomes replaced by more isolated clusters. One cluster is heavily D1-based, hyperresponsive to cortical inputs, and still sees some modicum of dopaminergic effect (leading to upregulation). The other, previously overlapping indirect cluster is diminished, with D2 receptors downregulated or internalized, failing to capture the scarce dopamine that remains. In imaging terms, these appear as distinct or non-overlapping territories for D1 vs. D2 expression, alongside a minimal or missing overlap with dopamine release sites.
An important cell-biological component is that repeated maladaptive motor attempts may drive local microcircuits to reorganize spines and receptor composition in ways that physically confine D1 upregulation to certain dendritic “islands” (and the synapses that feed them). In healthy subjects, microcolumns of direct and indirect pathway neurons interdigitate, each receiving shared dopaminergic terminals. But if the D1 and D2 subpopulations respond differently to the same erroneous input—one fraction being potentiated, the other depressed—they can effectively reorganize their local arbors, losing the shared dopaminergic microenvironments that once overlapped. Meanwhile, the dopaminergic neurons themselves are plastic, reducing or redirecting their excitatory drive inputs in ways that reinforce a specialized, narrower output domain (the subcircuit that still responds). Over time, this combination of structural and synaptic changes could yield the discrete, topologically segregated D1-dominant vs. D2-deficient zones we see in dystonia, rather than the healthy pattern of partial overlap among D1, D2, and dopaminergic release.
Therefore, we posit that the basal ganglia alterations observed by Simonyan et al.—the reduced phasic dopamine release during the affected task, the upregulation of D1 receptors in the striatum, the downregulation of D2 receptors, and the breakdown of the typical overlap between dopaminergic inputs and direct/indirect pathways—are best interpreted as secondary adaptations to the repetitive usage of the maladaptive synergy.
Cerebellum
Extending the M1-centric view, we suggest that the cerebellar abnormalities reported in FTSD represent adaptive responses to the chronic output of the dystonic synergy rather than a primary lesion within the cerebellum itself. We propose that repeated hyperexcitable signals—originating from a persistent, maladaptive dystonic synergy in the M1 and conveyed indirectly through intermediary pathways—may cause noisy disruptions that gradually reshape cerebellar function. In a healthy system, each climbing fiber (CF) burst that reaches a Purkinje cell triggers a large, transient surge of Ca2+ in the dendrites, facilitated by voltage-gated calcium channels (notably P/Q-type Ca2+ channels in Purkinje somato-dendritic membranes) and the robust depolarization caused by that uniquely powerful synapse. These Ca2+ transients drive downstream second-messenger cascades—most notably involving protein kinase C (PKC), Ca2+/calmodulin-dependent protein kinases (CaMKs), and other signaling intermediaries in the postsynaptic density—that mediate synaptic plasticity (e.g., long-term depression of parallel fiber-Purkinje synapses). Under normal circumstances, CF bursts are relatively infrequent, so they are interpreted as salient “error signals” requiring an adaptive change. However, in FTSD, we hypothesize that if CF barrages become chronically large and frequent—when the cerebellum is continuously exposed to persistent hyperexcitable drive from the dystonic synergy—then intracellular Ca2+ and its downstream signaling elements could enter a state of near-constant upregulation.
When these strong bursts are no longer rare but the new norm, multiple homeostatic and compensatory mechanisms ramp up within Purkinje cells. One prominent and likely response is the biochemical “saturation” of the plasticity machinery: high levels of Ca2+-induced signaling can drive partial inactivation or desensitization of key enzymes such as PKC and CaMKII. Such a phenomenon is inferred from other chronic stimulation paradigms in cerebellar research. Mechanistically, it can happen via diverse routes: for instance, chronic phosphorylation of certain kinase subunits can paradoxically render them less effective, while excessive Ca2+ triggers phosphatase pathways (e.g., calcineurin) that dephosphorylate crucial receptor or enzyme sites. In parallel, negative-feedback loops can alter subunit composition or anchoring of these kinases, further reducing their efficacy over time.
Simultaneously, Purkinje cells undergo homeostatic shifts in intrinsic excitability. Chronically elevated dendritic Ca2+ leads to changes in both the expression and posttranslational modification of voltage-gated Ca2+ channels (including P/Q-type channels), reducing their overall density or modifying subunits to decrease conductance. Parallel adjustments in various K+ channels—such as SK channels, which modulate afterhyperpolarizations, or large-conductance BK channels—and HCN “pacemaker” channels can limit dendritic depolarization triggered by climbing fiber input. Each step curtails the amplitude of subsequent Ca2+ transients and the postsynaptic depolarization that climbing fibers elicit. Over longer time frames, transcriptional regulation (e.g., changes in immediate-early genes) cements these adaptations, ensuring the cell’s baseline biophysical state is reset to resist further CF-induced plasticity surges. Taken together, we propose that Purkinje neurons drift toward an intrinsically “down-scaled” excitability when exposed to persistent, high-gain CF activity.
At the synaptic level, repeated large-amplitude CF events may also disrupt the crucial “coincidence detection” that underpins cerebellar learning, in line with Marr-Albus-Ito theory (Yamazaki & Lennon, 2019). Ordinarily, CF bursts arriving with precisely timed parallel fiber (PF) activity instruct Purkinje cells to strengthen or weaken the relevant PF-Purkinje synapses. This process calibrates which motor commands are flagged as errors. We posit that when CF bursts become near-constant (or so frequent that they coincide with almost all PF firing), no meaningful discrimination occurs. Rather than marking an isolated subset of PF inputs as erroneous, the CF is effectively labeling “everything” as an error, far too often. Consequently, Purkinje neurons can move to a new equilibrium where further parallel fiber-Purkinje synaptic depression (or potentiation) is minimized because their downstream molecular cascades—already saturated or chronically engaged—fail to implement further PF-Purkinje LTD or LTP with the usual specificity. Thus, Purkinje cells no longer produce strong corrective adjustments with each subsequent CF burst; the repeated strong signal is effectively downgraded to what can be considered “background noise”. The normal CF-driven plasticity events do not initiate as robustly, depriving the system of the carefully targeted corrections that typically refine movement.
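The loss of specificity described above can be illustrated by comparing how selectively PF synapses are depressed when the CF “error” signal is rare versus near-constant. The probabilities, learning rate, and trial counts below are illustrative assumptions in the spirit of Marr-Albus-Ito, not a model of real cerebellar kinetics.

```python
import random

# Toy Marr-Albus-Ito sketch (illustrative rates): PF -> Purkinje synapses
# undergo LTD only when PF activity coincides with a CF burst. A rare,
# error-locked CF selectively depresses the "erroneous" PF inputs; a
# near-constant CF depresses everything, erasing the discrimination between
# correct and erroneous inputs.

def train(cf_prob_on_error, cf_prob_otherwise, n_trials=2000, eta=0.01, seed=0):
    rng = random.Random(seed)
    w_err, w_ok = 1.0, 1.0       # weights of "erroneous" vs. "correct" PF inputs
    for _ in range(n_trials):
        erroneous = rng.random() < 0.5   # which PF population is active this trial
        cf = rng.random() < (cf_prob_on_error if erroneous else cf_prob_otherwise)
        if cf:                           # CF + PF coincidence -> LTD
            if erroneous:
                w_err = max(0.0, w_err - eta)
            else:
                w_ok = max(0.0, w_ok - eta)
    return w_err, w_ok

healthy = train(cf_prob_on_error=0.8, cf_prob_otherwise=0.02)    # selective LTD
saturated = train(cf_prob_on_error=0.95, cf_prob_otherwise=0.95) # CF "always on"
# healthy: the erroneous input is depressed far below the correct one;
# saturated: both populations are depressed alike, so the teaching signal
# carries no information about which input was at fault.
```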
In parallel, upstream alterations in the inferior olive (IO) can occur, compounding the problem. Olivary neurons rely on electrotonic coupling (via gap junctions) and subthreshold oscillations to generate highly synchronized “teaching” bursts. Under conditions of uniformly high excitatory input—such as persistent signals from a hyperexcitable M1 synergy—the IO may reduce the degree of synchronous spiking or undergo a shift in its gap-junction conductance, effectively desynchronizing the ensemble that normally fires CF bursts together. Moreover, while direct evidence is limited in FTSD, IO cells can display homeostatic changes in their intrinsic membrane properties (for instance, regulating T-type Ca2+ channels or K+ currents), further degrading the amplitude or timing precision of the CF bursts. As a result, the reliability of the CF “teaching signal” declines. Instead of delivering sharp, well-phased error bursts at carefully timed intervals, the IO provides a duller, less coherent excitatory train. This diminishes the “salience” of each burst and would reinforce the cerebellum’s tendency to accept the elevated CF input as baseline.
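The proposed loss of olivary ensemble synchrony under reduced gap-junction coupling can be sketched with a generic Kuramoto-style model of coupled phase oscillators (a standard abstraction, not an IO-specific biophysical model; the frequencies, coupling strengths, and population size are assumptions):

```python
import numpy as np

def ensemble_synchrony(coupling, n=50, steps=2000, dt=0.01, seed=2):
    """Mean-field Kuramoto sketch: n subthreshold oscillators coupled with
    strength 'coupling' (a stand-in for gap-junction conductance). Returns
    the order parameter R in [0, 1]; R near 1 means the ensemble could fire
    tightly synchronized CF bursts."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    omega = rng.normal(10.0, 0.5, n)   # heterogeneous intrinsic frequencies
    for _ in range(steps):
        z = np.exp(1j * theta).mean()  # complex order parameter
        theta += dt * (omega + coupling * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

r_intact = ensemble_synchrony(coupling=4.0)    # strong electrotonic coupling
r_reduced = ensemble_synchrony(coupling=0.2)   # down-regulated gap junctions
```

Below a critical coupling the ensemble desynchronizes; in the IO analogy, this corresponds to duller, less coherent CF “teaching” bursts.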
Moreover, downstream of Purkinje cells, the deep cerebellar nuclei (DCN) normally integrate the inhibitory output from Purkinje neurons and project excitatory signals back toward the motor thalamus and, ultimately, M1. We speculate that, when Purkinje cells chronically fire in a manner shaped by saturated CF input—meaning they no longer deliver strongly modulated inhibitory commands—DCN neurons can undergo their own homeostatic or plastic adaptations. Over time, and as inferred from studies of cerebellar plasticity in other disorders, DCN cells may reduce their responsiveness to Purkinje inhibition or alter the expression of specific receptors and ion channels that govern how they interpret Purkinje input. For example, persistent mild hyperinhibition (or disorganized inhibitory bursts) can trigger DCN remodeling that blunts further inhibitory influence, effectively ratcheting down the net inhibitory drive traveling up the cerebello-thalamo-cortical pathway. This is the final neural link that would ordinarily “clamp” motor cortical excitability. Hence, once the DCN adapt to a background of excessive or chronically unstructured Purkinje firing, cerebello-cortical inhibition may be functionally weakened.
Altogether, these convergent processes—kinase/phosphatase saturation, channel regulation, altered transcription, local circuit feedback in Purkinje cells, gap-junction changes in the IO, and DCN homeostatic changes—lead the cerebellar circuit to “redefine” what it considers a significant deviation from expectation, though it is important to note that such mechanisms are primarily inferred from animal models and general principles of cerebellar learning. Rather than treating these high-intensity CF bursts as salient error events, the system shifts to an ongoing baseline state wherein the plastic changes that would normally curtail hyperactivity are no longer triggered. Purkinje neurons effectively reset their threshold for responding to the CF, desensitizing to further large inputs so that chronically high CF activity fades into the background.
Consequently, Brighina et al.’s (2009) observation that cerebellar stimulation fails to suppress M1 excitability in dystonia reflects the downstream consequence of this altered plasticity. Under normal conditions, Purkinje cells and DCN together would robustly dampen an overactive M1, but here they are tuned to regard that hyperexcitability as an acceptable baseline. The cerebello-cortical feedforward loop therefore weakens its inhibitory influence on the dystonic synergy. Kita et al.’s (2021) data, showing elevated cerebellar BOLD in pianists with overt dystonia, align with this scenario: despite having “accepted” a high excitatory baseline, the cerebellum remains constantly engaged, performing micro-corrections on local deviations caused by unintended antagonist co-contractions stemming from diminished reciprocal inhibition. Each unanticipated mismatch triggers climbing fiber and granule cell activation, raising the metabolic demand in the cerebellar cortex and DCN. Thus, this system is paradoxically overactive (high BOLD) while simultaneously failing to clamp M1 excitability. Instead of fully suppressing the synergy, it “chases” moment-to-moment motor errors around a chronically heightened setpoint. In healthy individuals, a sudden burst of excessive cortical drive is swiftly recognized and downregulated; in FTSD, we propose that it is processed merely as a modest deviation from an abnormally large baseline. The result is continuous partial corrections, which consume significant cerebellar resources without restoring a normal synergy. Over time, such “locking” of synergy perpetuates the FTSD condition at both the cortical and subcortical levels, with the cerebellum’s “error threshold” effectively reprogrammed to accommodate the persistent hyperexcitability from M1.
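The notion that the cerebellum comes to treat sustained M1 hyperexcitability as an “acceptable baseline” amounts to error detection against a slowly adapting set-point, which a few lines of code capture (the leaky-average baseline and its time constant are illustrative assumptions):

```python
import numpy as np

def perceived_error(drive, tau=50.0):
    """Deviation of cortical drive from a slowly adapting baseline: a leaky
    running average gradually absorbs any sustained change in input."""
    baseline = drive[0]
    errors = []
    for d in drive:
        errors.append(d - baseline)
        baseline += (d - baseline) / tau   # set-point drifts toward the drive
    return np.array(errors)

# M1 drive steps from 1.0 to a sustained 3.0 at t = 100
drive = np.concatenate([np.ones(100), np.full(400, 3.0)])
err = perceived_error(drive)
```

The step is flagged as a large error at onset (err[100] = 2.0) but is then absorbed into the baseline, so later fluctuations are judged relative to the elevated set-point: the system “chases” small residual deviations instead of correcting the hyperexcitability itself.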
Primary Somatosensory Cortex
Cortical “smudging” in FTSD can be viewed as a secondary by-product of maladaptive plasticity within S1. As noted earlier, neocortical inhibition is divided among three molecular classes; of these, PV interneurons—already established as the engines of rapid surround inhibition—are most critical here, while the SST and 5HT3aR lineages supply slower, modulatory braking. Under normal conditions, each digit occupies a partly distinct S1 territory. Rodent slice and in vivo work show that thalamocortical EPSPs in layer-4 principal cells are followed ~1-3 ms later by disynaptic feed-forward IPSCs from PV interneurons, producing an integration window on the order of a single millisecond; this window can broaden to ~10 ms during sustained 10 Hz activity as inhibition depresses (Cruikshank et al., 2007; Gabernet et al., 2005). By extension, a rapid PV discharge in the neighboring digit column is thought to curb spill-over and dampen unwanted co-activation, thereby preserving relatively discrete zones of balanced excitation and inhibition for every finger.
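The millisecond-scale window created by delayed feed-forward inhibition, and its broadening when inhibition arrives late or weakened, can be illustrated with a toy synaptic-summation model (exponential PSP shapes in arbitrary units; the delays, amplitudes, and threshold are assumptions, not fits to the cited data):

```python
import numpy as np

def integration_window(ipsc_delay, threshold=1.2, dt=0.1, t_max=20.0,
                       tau_e=2.0, tau_i=5.0, g_i=3.0):
    """A thalamocortical EPSP at t=0 is followed ipsc_delay ms later by a
    disynaptic (PV-mediated) IPSP. A second EPSP at lag L fires the cell
    only if the summed potential still crosses threshold. Returns the
    largest lag (ms) at which the pair evokes a spike."""
    t = np.arange(0.0, t_max, dt)
    epsp = lambda t0: np.where(t >= t0, np.exp(-(t - t0) / tau_e), 0.0)
    ipsp = np.where(t >= ipsc_delay, -g_i * np.exp(-(t - ipsc_delay) / tau_i), 0.0)
    last_ok = 0.0
    for lag in np.arange(0.0, 10.0, dt):
        v = epsp(0.0) + epsp(lag) + ipsp
        if v.max() >= threshold:
            last_ok = lag
    return last_ok

w_fast = integration_window(ipsc_delay=1.5)   # intact PV feed-forward inhibition
w_late = integration_window(ipsc_delay=10.0)  # depressed / late-arriving inhibition
```

With a ~1.5 ms inhibitory delay the summation window closes at roughly that delay; when the IPSP arrives late, the window more than doubles, qualitatively reproducing the broadening described above.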
However, in FTSD, repetitive co-activations of two or more digits frequently occur. This can happen when a forceful contraction of a symptomatic digit (say, digit 4) mechanically drags or co-activates its neighbor, digit 3—a phenomenon often termed “enslaving” (Zatsiorsky et al., 2000); when the symptomatic finger co-contracts with an unaffected neighboring digit because surround inhibition is inadequate; or when multiple fingers are themselves symptomatic. The simultaneous signals from these multi-digit co-activations arrive at S1 as highly correlated presynaptic volleys, including in the horizontal connections (e.g., layer 2/3 pyramidal neuron collaterals) that span digit columns, thereby engaging precisely the form of spike-timing-dependent plasticity (STDP) that produces LTP among the participating neurons. Over many thousands of repeated co-activations, STDP can begin to strengthen excitatory synapses linking the digit columns, while simultaneously undermining the timed inhibitory connections that once enforced clear demarcation between them. Additionally, we propose that S1 “learns” that inhibiting the adjacent digit is less effective in this context and begins to synaptically downregulate these inhibitory synapses through either LTD or outright synaptic pruning. At the receptor level, repeatedly failing inhibition may involve reduced GABAA receptor function—e.g., internalization or subunit changes driven by calcineurin-dependent dephosphorylation or β-arrestin-mediated pathways that accelerate receptor endocytosis.
What ensues is a cycle in which strengthening of cross-column excitatory synapses (i.e., “neurons that fire together wire together”) and weakening of time-mismatched inhibition (the “lose their link” phenomenon for ineffective inhibitory firing) combine to degrade the once-robust boundary between digit 3 and digit 4 in S1. Furthermore, this abnormal reorganization is magnified by unmasking previously silent or subthreshold connections: once lateral inhibition wanes through the process described above, any anatomically extant but functionally suppressed connections between the two columns become functionally active. In other words, even before structural LTP fully consolidates the two-digit maps, the sudden “unmasking” of existing synaptic pathways can produce an apparent enlargement or overlap of cortical representation. Such LTP-like changes often involve molecular cascades (e.g., CaMKII, PKA, or CaMKIV activation) that increase AMPA receptor (e.g., GluA1) insertion at the bridging synapses. Repetitive usage cements these connections through Hebbian plasticity, culminating in significantly smudged territories for the affected digits. The well-documented phenomenon in nonhuman primates described by Allard et al. (1991)—where artificially syndactylized fingers received correlated stimulation and ended up with fused cortical territories—provides an elegant parallel to the repetitive co-activations in dystonic hands. Human data from Elbert et al. (1998) reinforce the notion that in focal dystonia (particularly in highly trained musicians), representations of adjacent digits collapse inward and show a reduced distance between peak cortical sites, consistent with losing the carefully timed feedforward inhibition. Furthermore, Yoshie et al. (2015) demonstrate how restoring temporally segregated inputs can reverse or reduce that smudging. 
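The asymmetry between correlated and uncorrelated firing can be sketched with a standard pair-based STDP rule (the amplitudes and time constants are generic textbook-style assumptions, not measured S1 values):

```python
import numpy as np

rng = np.random.default_rng(1)

def stdp_total(lags_ms, a_plus=0.010, a_minus=0.012, tau=20.0):
    """Pair-based STDP: pre-before-post pairs (positive lag) potentiate,
    post-before-pre pairs depress, each with exponential fall-off."""
    dw = np.where(lags_ms >= 0,
                  a_plus * np.exp(-lags_ms / tau),
                  -a_minus * np.exp(lags_ms / tau))
    return float(dw.sum())

# Co-activated digit columns: cross-column pre/post lags cluster at small
# positive values, so potentiation dominates
dw_coactive = stdp_total(rng.normal(3.0, 2.0, 1000))
# Independently activated digits: lags are broad and symmetric, so LTP and
# LTD largely cancel (slightly LTD-biased, since a_minus > a_plus)
dw_independent = stdp_total(rng.uniform(-50.0, 50.0, 1000))
```

Thousands of near-synchronous co-activations therefore accumulate net LTP across the bridging synapses, whereas normal, temporally separated finger use does not.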
Their case study of a pianist who practiced SDE (performing deliberate, slow, and distinct finger movements, much like BATR) exemplifies how re-introducing unique, offset patterns of digit activation likely gives interneurons the correct time window to reassert effective inhibition—thus reversing the overlap in the receptive fields and improving two-point discrimination thresholds.
In sum, the process can be conceived of at three levels: (1) at the cellular level, STDP governs synaptic strength: excitatory synapses bridging two digit columns become potent when repeatedly coactivated, while inhibitory synapses that fail to curtail excitatory drive undergo LTD (expressed through GABAA receptor downregulation or synaptic pruning); (2) at the local circuit level, the interneuron timing that was once finely tuned to deliver hyperpolarization a few milliseconds after one digit’s firing is lost, because both digit columns fire too synchronously; the inhibitory bursts arrive in phase with, or even behind, the strong depolarizing events, yielding minimal net suppression. Basket cells (a type of PV interneuron) or chandelier cells may no longer provide a well-timed clamp on the excitatory layer 2/3 microcircuits that bridge adjacent digits; and (3) at the larger network level, the representational boundary in S1 dissolves, and previously discrete digit zones fuse into a single smudged territory. This breakdown of surround inhibition, plus the unmasking of latent excitatory pathways, means the cortex now treats the two digits as a partially unified entity, producing the enlarged or merged receptive fields characteristic of FTSD. Critically, once these maladaptive patterns become entrenched, each further co-activation of the digits reinforces them. However, as Yoshie et al. (2015) highlighted with SDE, providing each finger with distinct, well-timed sensory input can shift excitatory correlations out of phase and re-enable inhibition at precisely the right offset. Over time, these more physiologically adaptive patterns re-empower the local interneurons that impose digit-specific gating, allowing the cortical columns to regain their separation.
In some instances, neuromodulators such as acetylcholine or norepinephrine can further shape synaptic plasticity thresholds by modulating spike timing windows and receptor trafficking, potentially facilitating either more rapid strengthening or faster downregulation, depending on the context. Ultimately, this is why carefully guided retraining can restore digit independence and reverse at least some of the cortical smudging. The system that learned maladaptive timing can, given consistent uncoupled input, learn once more how to keep digit representations distinct.
Moreover, to understand the crux of how S1 “learns” that inhibiting the adjacent digit is ineffective, we must consider when and how effectively these interneurons are firing relative to the excitatory neurons they are supposed to modulate. Recall that inhibition in S1 depends crucially on precise timing. Surround inhibition for digit representations is most efficient when a PV interneuron discharges a few milliseconds after the “active” column fires, pre-emptively silencing the adjacent column. That small offset ensures that the excitatory drive coming from one digit’s afferents arrives in the adjacent column at a different phase, allowing the inhibitory burst to arrive in time to diminish or shunt excitatory depolarization there. In FTSD, no study in humans has directly recorded spike-by-spike timing of PV interneurons in S1 (technical and ethical constraints make that impossible), but it can be inferred that inhibitory interneurons fire “in phase” (i.e., nearly simultaneously) with local excitatory events during the co-activation of digits. For example, data from cortical reorganization studies in both nonhuman primates and human dystonic patients (e.g., Elbert et al., 1998; Yoshie et al., 2015) demonstrate that once adjacent finger zones are chronically co-activated, the representational boundary effectively collapses. A logical explanation is that inhibition is not operating in a well-timed, “slightly delayed” manner that would keep the zones separate, but rather is swamped, or arrives when the excitatory wave is too large and too synchronous. This repeated mismatch fosters a scenario in which the network “learns” that a short-lived or in-phase inhibitory discharge does not reduce spiking—hence synaptic depression or pruning ensues.
Essentially, when two or more digits in FTSD simultaneously produce highly correlated sensory volleys that converge onto S1 columns, the excitatory neurons in each digit’s column repeatedly fire together, and the inhibitory interneurons for the digit-3 and digit-4 columns remain chronically in phase with them; the well-timed hyperpolarization that would actually reduce spiking in the target excitatory cells is therefore never produced. In practice, the biological “credit-assignment” for whether an inhibitory synapse is doing its job relies on feedback signals—calcium transients, membrane potentials, second-messenger cascades (e.g., calcineurin or CaMKIV), and so forth—within both the interneuron and its postsynaptic excitatory targets. If, over thousands of co-activations, excitatory synapses linking digit columns persistently strengthen while the net depolarization of the excitatory neuron is never meaningfully lessened by the inhibitory input, that situation triggers the molecular conditions for LTD. Conceptually, one can imagine each cortical synapse “asking”: “When I fire, does it make any difference?” If the answer is repeatedly “No”—because the excitatory drive from the other column is strong, in sync, and unstoppable—then gene transcription and receptor trafficking gradually shift so that the inhibitory synapses lose synaptic strength, or entire interneuronal branches (dendrites or axon terminals) are pruned, since the local circuit has no impetus to preserve an inhibitory connection that never achieves the desired effect (i.e., a measurable drop in the excitatory neuron’s activity). Meanwhile, excitatory synapses bridging the two columns can become positively reinforced through additional AMPA receptor insertion—a classic Hebbian (“neurons that fire together wire together”) rule.
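The “does my firing make any difference?” logic can be caricatured as an efficacy-gated plasticity rule (a deliberately abstract sketch; the drive level, detection threshold, and learning rate are arbitrary assumptions):

```python
def inhibitory_weight(effective, w0=1.0, eta=0.02, trials=300):
    """Toy credit assignment for an inhibitory synapse: after each
    co-activation, compare the postsynaptic response with and without the
    inhibitory contribution. A measurable reduction maintains the weight;
    no measurable effect drives depression toward pruning."""
    w = w0
    for _ in range(trials):
        excitation = 2.0                       # strong cross-column drive (a.u.)
        if effective:                          # well-timed, slightly delayed IPSP
            reduction = excitation - max(0.0, excitation - w)
        else:                                  # in-phase / swamped: no net effect
            reduction = 0.0
        if reduction > 0.1:
            w += eta * (1.0 - w)               # efficacy detected: maintain
        else:
            w += eta * (0.0 - w)               # "no difference": LTD / pruning
    return w

w_timed = inhibitory_weight(effective=True)    # stays near its set-point
w_swamped = inhibitory_weight(effective=False) # decays toward zero
```

Under this rule, only the inhibitory synapse whose firing demonstrably lowers the postsynaptic response retains its strength; the in-phase synapse is progressively depressed, mirroring the LTD-and-pruning outcome described above.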
Once the bridging inhibitory connections have weakened, the boundary between the two digit representations dissolves, and the incoming afferent signals begin to appear “shared” by both columns—manifesting as enlarged, overlapping receptive fields (i.e., cortical smudging).