Power Spectral Analysis of Disfluent Utterances in Adults Who Stutter: A qEEG-Based Investigation

Purpose: The present study investigated power spectral dynamics of the stuttering state in adults who stutter (AWS) as they answered written questions, using quantitative electroencephalography (qEEG). Materials and Methods: A 64-channel EEG setup was used for data acquisition in 9 AWS. Since speech, and especially stuttering, causes significant noise in the EEG, three conditions, speech preparation (SP), imagined speech (IS), and simulated speech (SS), were analyzed in a 7-band format, and the signals were source-localized with the standardized low-resolution electromagnetic tomography (sLORETA) tool in fluent and disfluent states. Results: Having extracted enough fluent and disfluent utterances, significant differences were noted. Consistent with previous studies, a lack of beta suppression during SP, especially in the beta2 and beta3 bands and to some extent in the gamma band, was localized to the supplementary motor area (SMA) and premotor area in the disfluent state. The delta band was the best marker of stuttering, shared across all 3 experimental conditions. Decreased delta power in the bilateral SMA and right premotor area during SP, in fronto-central regions and the right angular gyrus during IS, and in the bilateral SMA during SS were notable qEEG features of disfluent speech. Conclusion: The dynamics of the beta and delta frequency bands may help explain the neural networks involved in stuttering. On this basis, neurorehabilitation may be better formulated for the treatment of speech disfluency, namely stuttering.


Introduction
Stuttering is predominantly a neurodevelopmental disorder (Smith & Weber, 2016) which appears in almost 5% of children and persists into adulthood in 1% of cases (Yairi & Ambrose, 2004). The condition is characterized by involuntary prolongations, repetitions, hesitations, and blocks at the sound, syllable, and word levels, which interfere with the normal speech stream (Etchell, Civier, Ballard, & Sowman, 2017; Jiang, Lu, Peng, Zhu, & Howell, 2012). Pathogenic theories of stuttering have referred to underpinning factors such as distorted inner patterns of motor speech control resulting in overreliance on sensory feedback (Max, Guenther, Gracco, Ghosh, & Wallace, 2004), linguistic bases (Howell, 2007), sensory-motor integration problems (Sengupta, Shah, Gore, Loucks, & Nasir, 2016), and/or specific motor speech deficits (Namasivayam & van Lieshout, 2011). Besides its detrimental effects on the quality of life, interpersonal relationships, and socioeconomic opportunities of adults who stutter (AWS) (Craig, Blumgart, & Tran, 2009; Yaruss, 2010), stuttering has been a challenging disorder from both clinical and pathological perspectives. Conventional behavioral interventions in AWS are costly and time-consuming, are more compensatory than therapeutic (Blomgren, 2013), and not all AWS respond to them in the same way (Nang & Ciccone, 2016). It is therefore necessary to pursue more efficient interventional approaches.
Should there prove to be neural networks that support fluent speech in AWS, brain stimulation techniques could target such networks to remediate stuttering.
However, little is known about how activity in the brains of AWS may differ during stuttering as compared to fluent speech. Some investigators have highlighted the challenge of systematically observing stuttering in brain imaging settings, because in the laboratory AWS often stutter less frequently than they typically would (Connally et al., 2018; Sowman, Crain, Harrison, & Johnson, 2012). In other words, the experimental tasks applied in current brain research methods, e.g., single word or syllable production, typically do not induce enough instances of stuttered speech (Sowman et al., 2012). As such, the elicitation of naturally stuttered versus fluent speech is considered a challenge in functional neuroimaging studies of stuttering (Ouden, Montgomery, & Adams, 2013).
Metronome-timed and choral speech are common fluency-inducing techniques used in functional neuroimaging investigations to elicit fluent speech (Peter T. Fox et al., 1996; Toyomura, Fujii, & Kuriki, 2011; Wu et al., 1995). Wu and colleagues reported significant decreases in regional glucose metabolism in Broca's area, Wernicke's area, and the frontal pole in the stuttering vs. non-stuttering condition during choral reading (Wu et al., 1995). In addition, Fox et al. found that induced fluency could decrease or eliminate the over-activity observed in most motor areas and largely reverse the auditory-system under-activations (Peter T. Fox et al., 1996). According to Stager et al., brain areas that were either uniquely activated during fluency-inducing conditions (i.e., paced speech with a metronome, and singing), or in which the magnitude of activation was significantly greater during fluency-induced than dysfluent speech, were concurrently activated with the auditory association areas that process speech and voice as well as the motor regions controlling the larynx and oral articulators (Stager, Jeffries, & Braun, 2003). On the other hand, Connally and colleagues pursued a sparse-sampled functional MRI investigation during two overt speech tasks (sentence reading and picture description), showing greater activation of the bilateral inferior frontal and premotor cortices (extending into the frontal operculum) in the disfluent vs. fluent state. These differences were seen in both the reading and picture description tasks, while the subcortical state effects differed by task (Connally et al., 2018).
In the same vein, other investigators have reported significant differences in speech preparation processes between AWS and fluent speakers (FS) (Achim, Braun, & Collin, 2008; Mersov, Jobst, Cheyne, & De Nil, 2016; Ning, Peng, Liu, & Yang, 2017). Vanhoutte and colleagues found increased activity related to speech motor preparation preceding fluently produced words in AWS in comparison with FS (Vanhoutte et al., 2015) and evaluated whether this increase reflects a successful compensation strategy in stuttering. To this end, a contingent negative variation (CNV), a slow, negative event-related potential known to reflect motor preparation generated by the basal ganglia-thalamo-cortical (BGTC) loop, was evoked during a picture naming task in 7 AWS and measured with EEG. Though no difference emerged between the CNV preceding AWS stuttered words and the CNV preceding FS fluent words, a significant reduction was observed when comparing the CNV preceding AWS stuttered words to the CNV preceding AWS fluent words. The researchers considered the latter a confirmation of the compensation hypothesis: the increased CNV prior to AWS fluent words appears to be a successful compensation strategy, especially when it occurs over the right hemisphere, such that words are produced fluently by virtue of enlarged activity during speech motor preparation (Vanhoutte et al., 2016).
Having hypothesized that stuttering disfluencies would involve characteristic anomalies in the neuronal oscillations that precede speech onset, reflecting neural miscommunication during motor planning within the speech motor network, Sengupta et al. sought differences in the cortical dynamics of the speech preparation network between stuttered and fluent speech in AWS (Sengupta et al., 2017). They adopted a paradigm that involved overt reading of long and complex nonsense tokens under continuous EEG recording to elicit disfluency, and subtracted the neural activity for fluent utterances from that of their disfluent counterparts. Differences in qEEG spectral power involving the alpha, beta, and gamma bands were observed prior to the production of the disfluent utterances (Sengupta et al., 2017). The functional roles of neural oscillations in the different stages of speech planning and production are less articulated even in fluent speakers. While a broadband low-frequency effect was found for overt and covert preparation in bilateral prefrontal cortices, preparation for overt speech production was specifically associated with left-lateralized alpha and beta suppression in temporal cortices and beta suppression in motor-related brain regions (Gehrig, Wibral, Arnold, & Kell, 2012). Both overt syllable and word production yielded similar alpha/beta inhibition which preceded production and was strongest during muscle activity (Jenson et al., 2014). AWS showed stronger beta (15-25 Hz) suppression in the speech preparation stage, followed by stronger beta synchronization in the bilateral mouth motor cortex in comparison with controls. Additionally, the right mouth motor cortex in AWS was found to be activated significantly earlier in the speech preparation stage than in controls (Mersov et al., 2016).
Moreover, a considerable body of evidence supports the notion that imagined motor activities, such as arm or finger movements, produce neural activations similar to those occurring during actual movement execution (Grafton, Arbib, Fadiga, & Rizzolatti, 1996; Lotze et al., 1999). Motor imagery (MI) is another condition which has been used in stuttering research; MI has shown considerable agreement with overt stuttering in single word reading in terms of regional brain activation (Ingham, Fox, Costello Ingham, & Zamarripa, 2000; Wymbs, Ingham, Ingham, Paolini, & Grafton, 2013).
To the best of our knowledge, owing to methodological complexities, no neuroimaging investigation has systematically studied natural oral speech in fluent and disfluent states. Fluency-inducing methods generate utterances which do not necessarily represent natural fluent speech, and the extraction of stuttered speech through reading and picture description produces speech samples unlike common conversational contexts, which require message communication and the processing of semantic and syntactic correlates.
Low-resolution methods such as LORETA and sLORETA can estimate the area of primary activity yet suffer from poor spatial resolution. Although it is possible to correctly localize sources by finding well-separated maxima in the image, these low-resolution approaches can perform poorly in recovering multiple sources when the point-spread functions of the sources overlap (Liu et al., 2005).

Meanwhile, high-resolution methods such as FOCUSS are able to localize the focal sources that relate to specific diseases or functions of the brain. Nonetheless, these methods are not generally robust for distributed activity and may generate "over-focal" results; it should be emphasized that "high-resolution" is not necessarily better than "low-resolution". SSLOFO is a modest attempt to combine the advantages of both low- and high-resolution methods in an automated fashion (Liu et al., 2005).
Some research has shown that combining the sLORETA and SSLOFO algorithms on EEG signals can be a useful tool for physiologists to find the neural sources of primary circuits in the brain (Sabeti, Boostani, & Rastgar, 2018; Sabeti, Katebi, & Rastgar, 2015). As such, we applied both sLORETA and SSLOFO to EEG signals recorded from 45 channels to estimate the spectral and spatial cortical distribution of the acquired signals.
That being said, we adopted a novel paradigm based on answering 50 carefully designed questions/command statements (e.g., "What is your mother's name?", "Name two fruits starting with 'A'", etc.) and examined the neural substrate of the stuttering state using continuous task-concurrent EEG recording. Subjects were asked to answer the questions/command statements (verbal stimuli) aloud. They were trained to imagine and to whisper (simulated speech production) their own productions exactly as in the original answer (the same words, the same fluency pattern). Neural oscillations related to speech preparation (SP), imagined speech production (IS), and simulated speech production (SS) were analyzed, respectively. The present study aimed to test whether anomalies in oscillatory brain dynamics occur in SP, IS, and SS in AWS and, if so, to what extent their patterns are similar in terms of band power and anatomical brain regions. Delta, theta, alpha, beta1, beta2, beta3, and gamma band dynamics were investigated. We were able to extract enough stuttered and fluent natural samples to compare with the standardized low-resolution electromagnetic tomography (sLORETA) and shrinking sLORETA-FOCUSS algorithms.

Participants
Eleven native Persian-speaking men (mean age: 35± 4 years) with persistent developmental stuttering and no known history of hearing, speech and language (other than stuttering) disorders, and neurological problems participated in this study and received compensation for their participation. Subjects were right-handed (Oldfield, 1971) and their stuttering severity was mild to severe based on the Persian version of Stuttering Severity Instrument-version 4 (SSI-4) (Tahmasebi, Shafie, Karimi, & Mazaheri, 2018) evaluated by an independent speech and language pathologist (SLP). Two participants' signals were not properly marked due to problems with parallel port systems and were therefore excluded from the analysis. All experimental procedures were approved by the institutional ethical review board at Shiraz University of Medical Sciences (IR.SUMS.REC.1397.712). Written informed consent was obtained from all participants.
Every verbal trial lasted 15 seconds, as shown in Figure 1, and comprised 8 icons presented on a PC screen with Psychtoolbox-3, open-source MATLAB and GNU Octave functions for vision and neuroscience research (available at http://psychtoolbox.org/docs/DownloadPsychtoolbox). Participants were trained to read the verbal stimuli in silence, say the best answer immediately after the appearance of a speaker icon on the screen, and stop articulating once the icon disappeared, even if they had not finished their answer. They then had 1 second to decide whether they had stuttered, and to push a button if so. The next icon, which signaled the imagination task, appeared on the screen and stayed for 3 seconds; the icon that signaled whispering likewise lasted 3 seconds. Participants received written and verbal instructions in a separate session, and a training block was introduced before the main experiments. Three events related to the 3 tasks in these time-windows were selected for analysis and comparison [i.e., the 2.5 seconds before the appearance of the speaker icon (SP), about 3 seconds of imagination (IS), and about 3 seconds of whispering (SS)]. Since speech, and especially stuttering, produces very intense noise in the EEG, most studies prefer not to include the overt-speech segment in signal analysis, and we therefore used the above-mentioned conditions.
A well-trained SLP supervised the procedure and pressed her push-button if she believed the participant had stuttered and the trial had been correctly accomplished. Conversely, if a participant did not stutter but mistimed the task execution, the SLP pushed the button to mark the trial as rejected. The two push-buttons were synchronized with the EEG recording system through a trigger box, which marked the trials. Two triggers were considered a disfluent trial, no trigger a fluent trial, and one trigger a rejected trial.
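This trigger-count convention amounts to a simple per-trial lookup. A minimal sketch of the classification logic (in Python rather than the MATLAB used in the actual pipeline; the function name is hypothetical):

```python
def classify_trial(trigger_count: int) -> str:
    """Map the number of button-press triggers in a trial to its label:
    2 presses (participant + SLP) -> disfluent, 0 presses -> fluent,
    1 press (SLP only, wrong timing) -> rejected."""
    labels = {0: "fluent", 1: "rejected", 2: "disfluent"}
    if trigger_count not in labels:
        raise ValueError(f"unexpected trigger count: {trigger_count}")
    return labels[trigger_count]

print([classify_trial(n) for n in (0, 1, 2)])  # ['fluent', 'rejected', 'disfluent']
```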
Participants were familiarized with the verbal stimuli 1-3 weeks before the experiment and were asked to score the stimuli based on semantic difficulty and anticipation of their stuttering for later investigations (Figure 1).

Figure 1. The study design representing task blocks. Verbal stimuli and tasks were designed in such a way that their probable replies encompass various phonological, semantic, and syntactic contexts. Verbal stimuli were randomly repeated 5 times in 50-verbal-stimuli blocks and consisted of 5-10 syllables and morphemes and fewer than 7 words in Persian.

Electrical neuroimaging: EEG acquisition

EEG data were obtained at a sampling rate of 512 Hz using a 64-channel Brainvision amplifier (Mulgrave, Australia). The active electrodes were mounted on an elastic cap using the standard 10-20 system of electrode placement, and the electrical impedance of the scalp electrodes was kept below 10 kΩ. Participants were instructed to minimize eye blinks and body movements during the 3 experimental sessions. Pauses of about 1 minute between blocks were provided to avoid fatigue and muscle tension. The parallel port system delivered a trigger pulse at the moment each icon was displayed in order to align the EEG signals for proper offline analyses. Data from the two parallel ports were recorded on channels 65 and 66 for the push-buttons and event triggers, respectively.

Filtering and artifact rejection
EEG signals of every subject were imported into the MATLAB-based EEGLAB 14.1.2b open-source software (Delorme & Makeig, 2004) and bandpass-filtered offline between 0.75 and 45 Hz using a second-order Butterworth filter. Signals were referenced at electrode AFz (Sengupta et al., 2017). Independent Component Analysis (ICA) training was accomplished with the "runica" algorithm in EEGLAB. ICA decomposition yielded 63 independent components (ICs) for each participant, corresponding to the number of recording electrodes (64 data channels minus AFz, which was removed after referencing). ICA is an optimal tool for unmixing volume-conducted EEG signals into components arising from neural and non-neural sources (e.g., muscular, artefactual sources) (Bell & Sejnowski, 1995; Olbrich, Jödicke, Sander, Himmerich, & Hegerl, 2011). The ADJUST 1.1.1 tool (an open-source EEGLAB plugin available at https://sccn.ucsd.edu/wiki/Plugin_list_all) was used to detect non-neural sources. ADJUST is a completely automatic algorithm that identifies artefactual independent components by combining stereotyped artifact-specific spatial and temporal features. Features are optimized to capture blinks, eye movements, and generic discontinuities on a feature selection dataset. The algorithm thus provides a fast, efficient, and automatic way to use ICA for artifact removal (Mognon, Jovicich, Bruzzone, & Buiatti, 2011).
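The offline bandpass step can be reproduced with standard tools outside EEGLAB. The sketch below uses SciPy, assuming the 512 Hz sampling rate and 0.75-45 Hz passband reported above; note that zero-phase (forward-backward) filtering effectively doubles the filter order, so this is an approximation of the EEGLAB step, not a replica of it:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 512.0  # sampling rate in Hz

def bandpass(data, low=0.75, high=45.0, fs=FS, order=2):
    """Zero-phase second-order Butterworth bandpass along the time axis.
    `data` is an (n_channels, n_samples) array."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)

# Example: filter 3 s of synthetic 64-channel noise.
eeg = np.random.randn(64, int(3 * FS))
filtered = bandpass(eeg)
print(filtered.shape)  # (64, 1536)
```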
After removing the detected components using the plugin, visual inspection for components resulting from muscle tension artifacts and for bad channels was conducted by an experienced neuroscientist. Components corresponding to blinking or speaking-related tension were also removed, and bad channels, if any, were interpolated for every participant. The scalp electrodes above the sensory and motor regions supporting the speech motor task were selected; thus, electrodes over the occipital, frontopolar, and extreme temporal regions were excluded. Data from the remaining 45 electrodes were analyzed. This set of electrodes was in fact less prone to muscle artifacts (Figure 2). MATLAB R2018b was employed to process and analyze each subject's data.

Separation of trials related to fluent and disfluent utterances
Separation of fluent and disfluent trials was done in MATLAB, based on the 65-channel data for every participant. Verbal stimuli which contained both fluent and disfluent trials were retained, and those which were entirely fluent or entirely disfluent across all 5 blocks were excluded. Accordingly, 325 out of 2250 trials (14.4%) were extracted as fluent, 215 (9.6%) as disfluent, 240 (10.7%) as rejected, and 1470 (65.3%) as excluded. Signals of the fluent and disfluent trials in each of the 3 tasks (i.e., SP, IS, and SS) were separately averaged across all electrodes.
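The trial bookkeeping above can be verified with simple arithmetic:

```python
# Trial counts reported for the 2250 total trials (9 subjects x 50 stimuli x 5 blocks).
counts = {"fluent": 325, "disfluent": 215, "rejected": 240, "excluded": 1470}
total = sum(counts.values())
print(total)  # 2250
shares = {label: round(100 * n / total, 1) for label, n in counts.items()}
print(shares)  # {'fluent': 14.4, 'disfluent': 9.6, 'rejected': 10.7, 'excluded': 65.3}
```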

Estimation of the EEG spectral powers and source-localization
The spectral power was averaged for the fluent and disfluent answers in every subject over the non-rejected repetitions of the non-excluded verbal stimuli for each task separately, and then averaged over all participants. Consequently, two 45×1537 matrices (for IS and SS) and one 45×1280 matrix (for SP) were generated, where 45 was the number of selected electrodes and the 1537 and 1280 values follow from the sampling rate multiplied by the task time-window. Signals were filtered with a Butterworth IIR filter using a MATLAB function to obtain the power of the 7 frequency bands: delta (0.75-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta1 (12.5-16 Hz), beta2 (16.5-20 Hz), beta3, and gamma (30-45 Hz). The sLORETA and SSLOFO algorithms were then used to estimate the locations of the distributed intracerebral sources generating the band powers of the bandpass-filtered EEG signals. Given the above, there were 3×7 conditions to be compared between the fluent and disfluent states. For the statistical analyses, paired t-tests were used to assess the significance of the 21 conditions between the two states, and the localization algorithms were applied to conditions in which significant differences were observed (p ≤ 0.05).
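The per-band power extraction and the paired comparison between states can be sketched as follows. This is a SciPy stand-in for the MATLAB/sLORETA pipeline, run here on toy random data; the beta3 cutoffs are an illustrative assumption, since the source does not state them:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.stats import ttest_rel

FS = 512.0
# Band edges from the text; the beta3 limits are an illustrative assumption.
BANDS = {"delta": (0.75, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta1": (12.5, 16), "beta2": (16.5, 20), "beta3": (20.5, 30),
         "gamma": (30, 45)}

def band_power(data, low, high, fs=FS, order=4):
    """Mean squared amplitude of the band-limited signal, per channel."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    x = sosfiltfilt(sos, data, axis=-1)
    return np.mean(x ** 2, axis=-1)

# Toy paired comparison: per-subject mean delta power over 45 electrodes
# in 'fluent' vs 'disfluent' SP epochs (2.5 s -> 1280 samples), 9 subjects.
rng = np.random.default_rng(0)
fluent = [band_power(rng.standard_normal((45, 1280)), *BANDS["delta"]).mean()
          for _ in range(9)]
disfluent = [band_power(rng.standard_normal((45, 1280)), *BANDS["delta"]).mean()
             for _ in range(9)]
t_stat, p_val = ttest_rel(fluent, disfluent)
print(f"t={t_stat:.2f}, p={p_val:.3f}")
```

In the study itself this comparison was run for each of the 21 task-by-band conditions, with localization applied only where p ≤ 0.05.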

Results
The present investigation aimed to (1) compare the brain oscillation dynamics of fluent and disfluent speech signals over 7 frequency bands in the SP, IS, and SS tasks, and (2) compare the brain oscillation dynamics of the 3 tasks. According to the paired t-tests, the localized power of the frequency bands showed significant differences in all 21 investigated conditions between the fluent and disfluent signals (p ≤ 0.05).

The speech preparation (SP) dataset
Neural oscillations showed marked differences in the delta, beta2, beta3, and gamma bands between the two speech states (Figure 3, A). Preparation for fluent speech was accompanied by delta synchronization in the superior and medial frontal gyri [Brodmann area (BA) 8] over the left hemisphere (supplementary motor area) and in the middle frontal gyri (BA6), the premotor area, and BA8 over the right hemisphere. In preparation for disfluent speech, by contrast, delta synchronization was predominantly observed in BAs 9, 10, and 11 in the left hemisphere. Moreover, the left middle and inferior frontal gyri (IFG) and BA18 were active in the delta band during preparation for disfluent speech (Table 1). While the energy distribution in the theta band did not show a prominent difference between the two states, with increasing frequency the energy shifted from the anterior regions of the right prefrontal cortex (BAs 9 and 10) in the fluent state to more posterior left frontal regions (BAs 6, 8, and 9) in preparation for disfluent speech, as shown in Figure 3. The oscillatory activity in the beta and gamma bands followed a relatively similar pattern in the fluent state. Results of the SSLOFO application to the speech preparation state are shown in Table 1.
The difference between the two speech states in the left hemisphere was noticeable. At a glance, all bands in the stuttering state showed a significant power increase in the left hemisphere relative to the right, especially in BAs 9 and 10; for the right hemisphere, this was remarkable only in the delta band and in the fluent speech preparation condition. On the other hand, speech preparation in the fluent state was associated with more energy-concentrated activity in all bands, whereas the disfluent state was associated with more distributed yet lower energy levels. There was notable prefrontal activity, especially in the right hemisphere, upon SP and across spectra in both the fluent and disfluent speech states.
The power scales for each band (in dB) and the results of the sLORETA application are demonstrated in Figure 3. (Figure 3, Table 1)

The imagined speech (IS) dataset
The most prominent distinction between the fluent and disfluent states in the IS condition was seen in the delta and gamma bands. A peak-energy shift was observed from the supplementary motor areas (SMA) (BA8) of both hemispheres in the fluent state to the frontopolar area (BA10) of the right hemisphere in the disfluent state (Figure 4, A). Another interesting finding was the high delta-band activity in the right angular gyrus (AG) in the fluent state, which was absent in the disfluent state (Figure 4, C). Synchronization over BA18 in the delta band in the left hemisphere and in the gamma band in the right hemisphere during imagined disfluency was a further distinction between the two speech states, according to the SSLOFO results outlined in Table 2. In the theta band, energy was very low in the left hemisphere in the fluent state, whereas in the disfluent state BAs 6, 9, and 10 were active in the left hemisphere based on the SSLOFO results shown in Table 2. Moreover, the left hemisphere showed gamma desynchronization in the fluent state, while in the disfluent state the premotor area over both hemispheres and the left SMA were active in the gamma frequency band.
In sum, imagined fluent speech was associated with more energy-concentrated activity in all bands except delta, while the disfluent state was associated with more distributed yet lower energy levels. There was notable prefrontal activity, especially in the right hemisphere, upon IS and across spectra in both the fluent and disfluent speech states. (Figure 4, Table 2)

The simulated speech (SS) dataset
In the delta and theta bands, the SMA was predominantly active in both hemispheres while whispering fluent rather than disfluent speech (Figure 5, B). Distributed low-level activity through the left precentral gyrus and right postcentral gyrus was obtained using SSLOFO in the delta range during disfluency (Table 3). In the left IFG, stronger synchronization was seen in the beta1, beta3, and gamma bands while whispering disfluent speech, whereas the alpha and beta2 bands were more synchronous while whispering fluent speech. The right IFG showed no activity in either speech state (Figure 5, D). Meanwhile, the right premotor area was active in the gamma band in the disfluent state, although the motor areas were markedly silent in this frequency range during fluent speech.
The power scales for each band (in dB) and the results of the sLORETA application are demonstrated in Figure 5. (Figure 5, Table 3)

The cross-comparison of three conditions (SP, IS, SS)
The peak activity in the motor areas in all three conditions (SP, IS, and SS) was found in the delta band and over the left hemisphere in the fluent state. The SS condition represented real speech with lower amplitude and muscle strength, wherein the motor areas were active in the theta and alpha bands in the fluent but not the disfluent state. IS was an exception because of left SMA activity in the disfluent state. With respect to the high frequencies (beta3 and gamma in SP and IS; gamma in SS), the activation of the left motor areas was associated more with the disfluent than the fluent state.
In a nutshell, the oscillatory patterns of the fluent speech state were more similar across the three conditions of SP, IS, and SS over the band spectra than those of the disfluent speech state. The fluent state was associated with heightened power across spectra in all three conditions, whereas neural activity in the disfluent state was more widely distributed while retaining lower energy.

Discussion
This study investigated neural oscillations in the stuttering state through the 3 conditions of SP, IS, and SS over 7 frequency bands. Some previous studies have found particular neurophysiological markers of stuttering in the theta (Ghaderi et al., 2018), alpha (Jenson, Reilly, Harkrider, Thornton, & Saltuklaroglu, 2018; Saltuklaroglu et al., 2017; Wells & Moore, 1990), beta (Jenson et al., 2018; Mersov et al., 2016; Saltuklaroglu et al., 2017), and gamma (Sengupta et al., 2017) ranges in various speech-related tasks. We, however, observed different and somewhat conflicting neural oscillation patterns in different frequency bands. Event-related desynchronization (ERD) in the left SMA was prominent in the beta and gamma bands upon fluent speech. Of note, the beta2, beta3, and gamma ranges extended to the premotor area during preparation for disfluent speech. This is consistent with the anomalous beta suppression theory of stuttering: there appear to be structural and functional abnormalities in brain areas such as the basal ganglia and SMA which provide the substrate for the internal timing assumed necessary for motor planning and execution, and stuttering has been proposed to be a disorder of internal timing represented by modulations of oscillatory power within the beta band (Etchell, Johnson, & Sowman, 2014). In our study, however, this phenomenon spanned a wider frequency range.
The role of the delta band in speech processing has recently been articulated in the literature. Delta-band entrainment has been linked to the processing of sequential attributes of speech sounds (Boucher, Gilbert, & Jemel, 2019). Bilateral frontal (F3, Fz, and F4) delta activation reflects adaptive neural processes by which vocal production errors are monitored, and some evidence suggests that delta-band synchronization is associated with updating the state of sensory-motor networks to drive and control subsequent verbal functions (Behroozmand, Ibrahim, Korzyukov, Robin, & Larson, 2015). In agreement with these studies, we found delta synchronization in the bilateral SMA during preparation for fluent speech in AWS. Interestingly, this was not found for SP preceding disfluent speech. However, delta activation in the right premotor cortex was more prominent than in the left in the SP condition (Figure 3A and B). This is consistent with the findings of Vanhoutte et al. (Vanhoutte et al., 2016; Vanhoutte et al., 2015), who found that increased activation in basal ganglia-thalamo-cortical circuits, especially in the right hemisphere, during SP is a successful strategy resulting in the production of fluent words. Delta activation in the frontocentral area for IS and in the SMA for SS was observed in both hemispheres only during fluent speech (Figure 4B and Figure 5A). Poor beta suppression seems to be associated with being stuck in the current state, and delta desynchronization with a failure to detect speech-related errors.
According to our findings, the left inferior and middle frontal gyri were activated in the disfluent state in SP (across band spectra) and SS (in the beta1, beta3, and gamma bands), whereas this was not observed in the fluent state. Based on the report by Kell et al., structural changes in the left inferior frontal region are attributed to stuttering pathology (Kell et al., 2009). Their results demonstrated a hyperconnectivity, originating from a failure to eliminate rudimentary synapses during development, which ultimately results in superfluous and irrelevant information transmission. In other words, recovery from stuttering is not necessarily linked to the reform of such structures. It appears that using these malformed neural networks in AWS results in disfluency. Meanwhile, there are compensatory networks in the bilateral SMA and right premotor area which tend to synchronize in the delta band upon recovery from stuttering.
In the same vein, Tian and colleagues investigated the effects of motor simulation and sequential perceptual estimation in mental imagery of speech (Tian & Poeppel, 2012). According to their findings, there appears to be a noisy perceptual estimation in stuttering which is mismatched with the external feedback predominantly originating from the somatosensory and auditory domains. The posterior superior temporal and inferior parietal cortices provide a substrate for the integration of articulatory planning and sensory feedback, as well as for the execution of articulatory movements, through their connections with the motor cortex (Watkins et al., 2008). On the other hand, the AG is known to be involved in the interface of atypical planning and execution in stuttering by virtue of its abnormal connectivity with the IFG, putamen, and premotor area (Lu et al., 2010). The increased IS-related delta-band activation of the right AG can be attributed to a successful compensatory strategy. However, in a PET imaging study, Ingham and colleagues found that some parietal regions were significantly activated during imagined, but not overt, stuttering. Such a disparity in results can be partly explained by the methodological differences.
One of our salient findings was the prefrontal activity across spectra, especially in the right hemisphere, in all 3 experimental conditions and in both the fluent and disfluent states. The involvement of multiple regions of the rostral prefrontal cortex in time- and event-based prospective memory (Okuda et al., 2007) can potentially account for this finding. In our study, the complexity of the submitted tasks required appropriate attention, decision-making, working memory, coordinated functions, and top-down processes in which the role of the prefrontal cortex is prominent (Miller & Cohen, 2001). Given the importance of the right dorsolateral prefrontal cortex in self-evaluation (Schmitz, Kawahara-Baccus, & Johnson, 2004), most of the right prefrontal activity potentially corresponds to making decisions about speech fluency.

Conclusion
We found a distinct power spectral pattern, especially in the SMA and premotor area, in AWS upon answering questions in the stuttering state. Of note, the neural oscillatory activities of some bands had unique patterns.
Motor imagery is commonplace in brain-computer interface (BCI) research; meanwhile, the neurodynamics of IS in fluent and disfluent speech need further investigation. Despite the significant noise in the EEG and all the challenges of qEEG analysis, the approach still seems a proper method for stuttering research. Also, as far as we know, no study has systematically investigated stuttering in the SS condition.
We found some similarities among the neural activities of the three conditions, yet the differences remain noticeable. Another open question is the possible role of subcortical circuits, as well as cortical-subcortical communication, in stuttering. Continued research along these lines can further unravel the pathology behind stuttering and help applied neuroscientists introduce novel therapeutic approaches.