Preprint
Article

This version is not peer-reviewed.

Auditory Processing in Musicians: A Basis for Auditory Training Optimization

A peer-reviewed article of this preprint also exists.

Submitted: 24 May 2023
Posted: 26 May 2023


Abstract
Previous research has documented better auditory processing in musicians. As musicians differ in their practice methods and performance environments, we aimed to assess auditory perception in Greek musicians with respect to their musical specialization. If differences exist, they may provide a basis for better shaping auditory training in individuals with auditory processing disorder. The auditory tests administered assessed speech in noise (Speech in Babble), speech in noise with and without a rhythmic advantage (Word Recognition—Rhythm Component), short-term and working memory (Digit Span, forward and backward), temporal resolution (Gaps-In-Noise) and frequency discrimination threshold (DFL). Groups consisted of classical musicians, Byzantine chanters, percussionists, and non-musicians (12 participants per group). Statistical analysis revealed significant differences in: (i) word recognition in noise with a preceding synchronized pulse, where classical musicians outperformed Byzantine musicians, (ii) frequency discrimination in the 2000 Hz region, where Byzantine musicians outperformed non-musicians, and (iii) working memory, where an advantage was detected in musicians. Considering all the above, we conclude that musicians have superior auditory perception, regardless of musical specialization. Musical training enhances elements of auditory processing and may be used as an additional rehabilitation tool during auditory training, focusing on specific types of music for specific auditory processing deficits.

1. Introduction

Current research provides evidence of enhanced auditory processing in musicians as compared to non-musicians. Capitalizing on this neuroplasticity-based improvement may lead to more focused auditory training for individuals with Auditory Processing Disorder (APD), aiming at better results and faster rehabilitation. Hearing is a prerequisite for communication, work and learning for the average person, as well as an essential sense for every musician. Hearing evaluation with the gold-standard pure-tone audiogram may miss aspects of hearing that are important for everyday life [1]. Communication through the auditory modality requires intact temporal processing, speech-in-noise perception, working memory and frequency discrimination [2,3]. Auditory processing takes place at the level of the central auditory nervous system. Hearing (i.e., hearing sensitivity and auditory processing) contributes to the formation of cognition, and cognition contributes to hearing [4]. The superior auditory processing performance of musicians versus non-musicians is explained by the enhanced use and training of their hearing sense and listening skills. Musical training extends beyond auditory training to reading and comprehending complex symbols and translating them into motor activity [5]. Of interest, recent research shows that frequency precision is more strongly correlated with musical sophistication than with cognition [6].
Perception of music and speech is thought to be distinct, although the two share many acoustic and cognitive characteristics [7]. Pitch, timing and timbre cues may be considered commonalities for auditory information transfer [8]. Memory and attention are cognitive skills required for both music and speech processing. Pitch is the psychoacoustic analogue of the frequency of a sound. Timing refers to specific turning points in the sound (for example, its onset and offset), and timbre is multidimensional, including spectral and temporal features. Musicians’ superior auditory processing is attributed to enhanced accuracy of neural sound encoding [7,9,10,11] as well as better cognitive function [12,13]. Musical practice involves experience with specific sound components as well as their joint integration during performance. Extracting meaning from a complex auditory scene may be a transferable skill for tracking a talker’s voice in a noisy environment [14].
Musicians are at an advantage compared to non-musicians in processing the pitch, timing and timbre of music [15]. They demonstrate strengthened neural encoding of the timbre of their own instrument [16,17,18], but also show enhancements in processing speech [7,19,20,21,22,23] and non-verbal communication sounds [24]. Musical experience promotes more accurate perception of meaningful sounds in communication contexts other than musical ones [7,10,21,25]. Music training is reported to change brain areas in a specific way that may be predicted from the performance requirements of the training instrument [26]. Musicians’ perceptual skills are also influenced by the style of music they play [27,28].
Auditory processing [29] consists of mechanisms that analyze, preserve, organize, modify, refine, and interpret information from the auditory signal. The skills that support these mechanisms, known as auditory processing elements, are auditory discrimination, temporal processing and binaural processing. Temporal processing refers to auditory pattern recognition and temporal aspects of audition, divided into four subcomponents: temporal integration, temporal resolution/discrimination (e.g., gap detection), temporal ordering and temporal masking [30]. Binaural processing includes sound localization and lateralization and auditory performance with challenging or degraded acoustic signals (including dichotic listening) [31]. Auditory discrimination involves the perception of acoustic stimuli in very rapid succession, requiring accuracy of the information carried to the brain [32,33]. These processes may affect phoneme discrimination, speech-in-noise comprehension, duration discrimination, rhythm perception, and prosodic distinction [34,35]. Temporal resolution, defined as the shortest period of time over which the ear can discriminate two signals [36], may be linked to language acquisition and cognition in both children [37,38,39,40] and adults [41,42,43,44].
The American Speech-Language-Hearing Association (ASHA) uses the term Central Auditory Processing Disorder (CAPD) to refer to deficits in the neural processing of auditory information in the Central Auditory Nervous System (CANS), including bottom-up and top-down neural connectivity [45], that are not a consequence of cognition or higher-order language [31]. Deficits in auditory information processing in the central nervous system (CNS) are demonstrated by poor performance in one or more elements of auditory processing [46]. (C)APD may coexist with, but is not derived from, dysfunction in other modalities. In some cases of (C)APD, poor hearing and auditory comprehension are present despite the absence of any substantial audiometric findings. Moreover, (C)APD can be associated with, co-exist with, or lead to difficulties in speech, language, attention, social, learning (e.g., spelling, reading) and developmental functions [31,48]. In the International Statistical Classification of Diseases and Related Health Problems, 11th edition (ICD-11), auditory processing disorder (APD) is classified under code AB5Y as a hearing impairment. (C)APD affects both children and adults, including the elderly [49], and it is linked to functional disorders beyond the cochlea [50,51]. According to the WHO [48], prevalence estimates of APD in children range from 2% to 10%, and the disorder can affect psychosocial development, academic achievement, social participation, and career opportunities.

1.1. Speech perception in Noise

Speech perception in noise is at the core of auditory processing, as it is the most easily explained test in terms of a real-life situation. The temporal elements required to perceive speech may be similar to those needed for music, with rhythm thought to act as a bridge between speech and music [51]. Highly trained musicians have been reported in some studies to have superior performance on different measures of speech in noise [20,51,52,53], although this advantage is not always present [54,55,56].
Consolidating evidence of improved speech-in-noise perception in musicians may have rehabilitation implications for individuals with hearing impairment [54]. Research outcomes reveal that rhythm perception benefits are present at different levels of speech, from words to sentences [57,58]. Compared with vocalists, percussionists were found to perform relatively better on sentences in noise than on words in noise, while significantly outperforming non-musicians [51]. There is limited research evaluating speech perception in noise among musicians of different musical styles.

1.2. Temporal resolution

Auditory temporal processing refers to the processing of duration-related elements of sound within a specific time interval [49]. The ability of the auditory system to respond to rapid changes over time is a component of temporal processing called temporal resolution, which is linked to the perception of stop consonants in running speech [60,61].
Temporal processes are necessary for auditory processing and for the perception of rhythm, pitch and duration, and for separating foreground from background [2,34]. Chermak and Musiek [61] highlighted the role of temporal processing across a range of language processing skills, from phonemic to prosodic distinctions and ambiguity resolution. Temporal resolution underlies the discrimination of voiced from unvoiced stop consonants [62] and is clinically evaluated using the Gaps-In-Noise (GIN) or Random Gap Detection Test (RGDT) [63]. Evaluating an individual's ability to perceive a millisecond-range gap in noise or between two pure tones provides information on possible deficits in temporal resolution and can lead to better shaping of rehabilitation [49]. Older adults are generally found to have poorer (longer) gap thresholds than younger adults [4].
Early and frequent exposure to music training over years improves timing ability across sensory modalities [64]. Musicians present with better temporal resolution [65,66,67,68,69]. Musicians of different instruments and styles were found to have superior timing abilities compared to non-musicians [66,70]. Longer daily training in music leads to a better gap detection threshold [71]. Neuroplasticity resulting from music training yields temporal resolution in children that is comparable to that of adults [69]. To our knowledge, no research publications exist evaluating possible differences in temporal resolution across musicians of different musical styles.

1.3. Working memory

Auditory and visual memory skills are enhanced in musicians and linked with early, frequent and formal musical training [3,68,69,70,71,72,78]. In rare cases, no difference between musicians and non-musicians is documented [54]. A meta-analysis reported a medium effect for short-term and working memory, with musicians performing better. The advantage was large with tonal stimuli, moderate with verbal stimuli, and small or null with visuospatial stimuli [83]. This points to an auditory-specific working memory advantage rather than a more general one. Working memory improves because auditory processing is enhanced through music education; hearing improves cognition.

1.4. Frequency discrimination

During speech processing, pitch carries paralinguistic characteristics that provide information on emotion and intent [88], as well as linguistic characteristics. Musicians outperform non-musicians in frequency discrimination [11,20,66,70,85,89,90,91,92,93,94,96]. This advantage was hypothesized to be a contributing factor in the better speech-in-noise perception found in musicians [20]. Classical musicians were reported to have superior frequency discrimination abilities compared to musicians with a contemporary music (e.g., jazz, modern) background [95]. To our knowledge, there is no study researching possible differences across musical styles that include Byzantine music.

1.5. Different music styles and instruments

The musician groups selected for the present study differ in style and music training. Byzantine music (BM), or Byzantine chant (BC), is the traditional ecclesiastical music of the Orthodox church. It is vocal music sung by one or more chanters [97], always monophonic in character and based on eight modes (“echos”) [98]. The chanters are usually male, and no musical instrument is involved apart from the human voice [99,100]. This contrasts with Western classical music, which is polyphonic and frequently includes male and female voices together with instruments. Percussionists are extensively trained in rhythmic skills, timing and physical flexibility, and in this study were experienced in both tuned and untuned percussion.
The ordinary tuning system of Western music is 12-tone equal temperament, which subdivides the octave into 12 equal semitones [100]. By contrast, the BC tuning system divides the octave into 72 equal subdivisions, or “moria”, according to the Patriarchal Music Committee (PMC) [101]. Compared to Western music, where the octave is based on 12 equal units (semitones), in BM each semitone corresponds to 6 moria [97]. The semitone (a minor second) consists of 100 logarithmically equal micro-intervals called cents; thus, the octave consists of 12 semitones × 100 cents = 1200 cents [102]. The PMC’s music experts indicate [101] that the smallest audible musical interval is considered to be 1 moria, or 16.7 cents, a considerably smaller interval than those used in classical Western music. Likewise, Sundberg [103] argues that an interval of 20 cents (1.2 moria) is barely audible to a listener. In BM, each micro-interval differs from its neighbors by at least 2 moria [100], and the frequency steps made in Byzantine music, compared to Western music, may be as small as 1 Hz in the bass voice range.
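The interval sizes quoted above follow directly from the definitions of the two tuning systems; a minimal worked calculation, using the standard logarithmic definition of the cent, is given below.

```latex
% Cents between two frequencies f_1 and f_2 (standard definition):
c = 1200 \log_2\!\left(\frac{f_2}{f_1}\right)
% One octave (f_2/f_1 = 2) therefore spans 1200 cents, or 72 moria in the BC system,
% so 1 moria = 1200/72 \approx 16.7 cents, and 1 semitone = 100 cents = 6 moria.
% Sundberg's 20-cent limen corresponds to 20/16.7 \approx 1.2 moria.
```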
In the literature, superior auditory processing abilities are documented for musicians in speech-in-noise recognition, temporal resolution, frequency discrimination, and working memory. This auditory advantage in musicians is explained as a result of neuroplasticity through music training. The primary aim of this study is to evaluate possible differences across musicians of different instruments and/or different musical styles. Could training in different musical instruments or styles lead to enhanced auditory processing in specific elements that differ across musicians of different styles? If this proves to be the case, it would provide more insight towards more individualized rehabilitation approaches for individuals with deficits in auditory processing. A secondary aim is to verify that musicians have better auditory processing skills compared to non-musicians, specifically when examined with the auditory processing diagnostic test battery used in the Greek population.

2. Materials and Methods

2.1. Participants

In the present study, 36 Greek professional musicians participated, divided into three groups according to specialization: 12 in Byzantine music (four females), 12 in Western classical music (seven females) and 12 in percussion (four females). Musicians were performing at a professional level with at least 10 years of musical experience (M = 27.58, SD = 10.83). The control group comprised 12 non-musicians (10 females). The non-musician group had not received any formal music education apart from music lessons in primary and secondary mainstream education. The three experimental groups and the control group did not differ in average age (Table 1). All participants were recruited by word of mouth and via Facebook posts, and none had a diagnosed neurological, language or attention disorder. All signed informed consent forms before testing. All procedures were approved by the Ethics and Bioethics Committee of the Aristotle University of Thessaloniki (6613/14 June 2022).

2.2. Procedure

Tests were administered in a randomised order and included the Speech in Babble test (SinB) [104,105], Gaps-In-Noise (GIN) [49,106], Digit Span [107], Word Recognition Rhythm Component test (WRRC) [57,58], and Frequency Discrimination Limen test (DFL). These tests were administered to all musicians and non-musicians to assess speech perception, temporal resolution, short-term and working memory, and speech comprehension with a rhythm effect. The DFL test was created based on other frequency discrimination limen and just-noticeable-difference tests [92,108,109,110].
Hearing sensitivity thresholds were evaluated for all participants at 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz and 8 kHz for each ear. Forty-five of them had typical hearing, and three had elevated hearing thresholds, as shown in Table 2. For the auditory processing evaluation, we ensured that all participants had fully understood the instructions and had successfully completed the practice items of each test before initiating the standard test procedure. All participants were tested in a soundproof booth. Auditory stimuli were presented in each ear at 50 dB HL [101].

2.2.1. Speech-in-Babble

The Speech-in-Babble (SinB) test [104,112] is administered monaurally. It includes two different lists of 50 phonetically balanced bisyllabic Greek words presented in background multi-talker babble. Each word is preceded by a carrier phrase (“pite tin lexi,” i.e., “say the word”) and participants are instructed to repeat the word heard after each trial. Five signal-to-noise ratios (SNRs) of +7, +5, +3, +1 and −1 dB are used, with each SNR applied to ten words in each list. The SNR at which 50% of the items are correctly identified is calculated using the Spearman–Karber formula [112]. Poorer performance is reflected in higher scores (measured in dB SNR).
For each participant, two scores were calculated for each ear based on one administration of the SinB: word-based scores (SinB_RE_words and SinB_LE_words for the right and left ear, respectively) and syllable-based scores (SinB_RE and SinB_LE, respectively). For word-based scoring, the number of correctly identified bisyllabic words supplies the number of items in the Spearman–Karber formula. If the participant repeats only one of the two syllables presented, that word is scored as incorrect. For syllable-based scoring, the number of items in the Spearman–Karber formula is based on the number of correctly identified individual syllables. Therefore, if one syllable of a bisyllabic word is correctly recognized, that syllable is scored as correct and the non-recognized syllable as incorrect.
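As a minimal illustration of this scoring, the sketch below assumes the form of the Spearman–Karber estimator commonly used for descending-SNR word lists (highest SNR plus half the step size, minus the step size times the number correct divided by the items per level); the exact constants used for the SinB norms are not reproduced here, so the function is illustrative only.

```python
# Illustrative sketch (not the published SinB scoring procedure): Spearman-Karber
# estimate of the 50%-correct SNR for one ear, using the test layout described
# above (SNRs +7 to -1 dB in 2 dB steps, ten items per SNR level).

def spearman_karber_snr50(correct_per_level, start_snr=7.0, step=2.0, items_per_level=10):
    """Estimated SNR (dB) at which 50% of items are identified; lower is better."""
    total_correct = sum(correct_per_level)
    return start_snr + step / 2.0 - step * total_correct / items_per_level

# Example: 10, 9, 7, 4 and 2 words correct at +7, +5, +3, +1 and -1 dB SNR
print(spearman_karber_snr50([10, 9, 7, 4, 2]))  # -> 1.6 dB SNR
```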

2.2.2. Gaps-in-Noise

The Gaps-in-Noise (GIN) test [49,106] is administered monaurally with a different list of approximately 30 trials for each ear. A practice session of 10 trials is given before the main test. Each trial consists of 6 s of white noise with a 5 s inter-trial interval. Each broadband noise segment contains 0 to 3 gaps (silent intervals), the location of which varies. Gap durations are 2, 3, 5, 6, 8, 10, 12, 15 and 20 ms, and each gap duration is presented six times during the test. Participants are told to indicate when they detect a gap by knocking their hand on the table. The gap detection threshold is calculated per ear as the shortest gap duration detected on at least four out of six gaps, with consistent results for larger gaps. False positives are noted and subtracted from the correct responses as follows: total score = (total number of correct responses − false positives) / number of trials in the list.
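The threshold rule described above can be sketched as follows; the data layout (a mapping from gap duration to the number of detected presentations out of six) is an illustrative assumption, not part of the GIN materials.

```python
# Illustrative sketch of the GIN threshold rule: shortest gap detected on at least
# four of six presentations, with consistent detection at all longer gap durations.

GAP_DURATIONS_MS = [2, 3, 5, 6, 8, 10, 12, 15, 20]

def gin_threshold(detections, criterion=4):
    """detections: {gap duration in ms: number of detected presentations (0-6)}."""
    for i, gap in enumerate(GAP_DURATIONS_MS):
        meets = detections.get(gap, 0) >= criterion
        longer_consistent = all(detections.get(g, 0) >= criterion
                                for g in GAP_DURATIONS_MS[i + 1:])
        if meets and longer_consistent:
            return gap
    return None  # no threshold reached within the tested range

# Example ear: 6 ms is the shortest duration meeting the 4-of-6 criterion consistently
print(gin_threshold({2: 0, 3: 1, 5: 3, 6: 5, 8: 6, 10: 6, 12: 6, 15: 6, 20: 6}))  # -> 6
```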

2.2.3. Digit Span

Digit Span is a test that evaluates both short-term memory and working memory and is administered binaurally. It consists of two subtests, each presenting an increasing number of digits with two trials per span length. In the first (forward) subtest, participants repeat the digits heard after each trial, starting with two digits. In the second (backward) subtest, participants repeat the digits heard in reverse order (e.g., “2-9-4-6” is heard and the correct answer is “6-4-9-2”). Testing is discontinued when a participant gives an incorrect answer on both trials of the same length or when all trials are exhausted. The test is scored by summing the number of correctly repeated trials per subtest and in total.
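A short sketch of the discontinue and scoring rule just described (two trials per digit-span length, stop after both trials of a length are failed); the data representation is an illustrative assumption.

```python
# Illustrative sketch of the Digit Span discontinue/scoring rule described above.
# trial_results: list of (span_length, correct) pairs in presentation order,
# with two trials administered per span length.

def digit_span_score(trial_results):
    score = 0
    failures_at_length = {}
    for length, correct in trial_results:
        if correct:
            score += 1
        else:
            failures_at_length[length] = failures_at_length.get(length, 0) + 1
            if failures_at_length[length] == 2:  # both trials of this length failed
                break
    return score

# Example: both 2- and 3-digit trials passed, one 4-digit trial passed, both 5-digit trials failed
trials = [(2, True), (2, True), (3, True), (3, True),
          (4, True), (4, False), (5, False), (5, False)]
print(digit_span_score(trials))  # -> 5
```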

2.2.4. Word Recognition – Rhythm Component (WRRC)

The Word Recognition–Rhythm Component test [57,58] evaluates the rhythm benefit in speech-in-noise perception. It uses three different lists of words in noise, with each word preceded by a sequence of four 1 kHz beats. There are three conditions: Rhythm (RH), Unsynchronized (UnSc) and Non-Rhythm (NR). In the RH condition the sequence is isochronous and synchronized with the following word; in the UnSc condition the sequence is isochronous but the word is not synchronized to it; in the NR condition the sequence is not rhythmic (i.e., non-isochronous). To avoid learning effects that might result in some kind of rhythm perception, several sequences were used in a cyclic order (i.e., first A, then B, then C, etc.). The test is scored by adding the number of items correctly identified per condition (for syllables, 16 maximum; for words, 32 maximum).
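To make the three precursor conditions concrete, the sketch below generates illustrative beat-onset times; the inter-beat interval and jitter values are assumptions for illustration and are not the values used in the published WRRC stimuli.

```python
# Illustrative beat-onset timings (in seconds) for the three WRRC precursor conditions.
# The 0.5 s inter-beat interval and the jitter range are assumptions, not WRRC values.
import random

def precursor_onsets(condition, n_beats=4, ibi=0.5, jitter=0.15):
    if condition in ("RH", "UnSc"):        # isochronous four-beat sequence
        return [i * ibi for i in range(n_beats)]
    if condition == "NR":                  # non-isochronous (jittered) sequence
        onsets, t = [], 0.0
        for _ in range(n_beats):
            onsets.append(t)
            t += ibi + random.uniform(-jitter, jitter)
        return onsets
    raise ValueError("condition must be RH, UnSc or NR")

# In RH the word onset would fall on the next beat of the isochronous grid;
# in UnSc the same grid is used but the word is deliberately misaligned with it.
print(precursor_onsets("RH"), precursor_onsets("NR"))
```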

2.2.5. Frequency Discrimination Limen

Frequency discrimination limen is assessed for three different frequency regions (500, 1000 and 2000 Hz). The frequency step varies from 2 Hz to 50 Hz, with changes occurring every second between a standard (S) pure tone and a roving (R) pure tone (i.e., S-R-S…, etc.). Roving pure tones are randomized (Table 3). Stimuli are presented binaurally. Participants respond by knocking their hand on the table. The minimum frequency difference (in Hz) that a participant perceives is recorded as the threshold.
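A minimal sketch of the standard–roving–standard alternation is given below; the one-second tone duration follows the description above, while the sampling rate and onset/offset ramps are illustrative assumptions.

```python
# Illustrative sketch of the S-R-S pure-tone alternation used in the DFL task.
# The 1 s tone duration follows the once-per-second changes described above;
# the 44.1 kHz sampling rate and 10 ms ramps are assumptions for illustration.
import numpy as np

def srs_sequence(standard_hz, delta_hz, fs=44100, dur=1.0, ramp=0.01):
    t = np.arange(int(fs * dur)) / fs
    n = int(fs * ramp)
    def tone(freq):
        y = np.sin(2 * np.pi * freq * t)
        env = np.ones_like(y)                 # linear onset/offset ramps to avoid clicks
        env[:n] = np.linspace(0.0, 1.0, n)
        env[-n:] = np.linspace(1.0, 0.0, n)
        return y * env
    return np.concatenate([tone(standard_hz), tone(standard_hz + delta_hz), tone(standard_hz)])

# e.g., a 2000 Hz standard with a 4 Hz roving increment
stimulus = srs_sequence(2000.0, 4.0)
```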

2.3. Statistical Analysis

All statistical analyses were performed with SPSS 27. Results were evaluated for normal distribution [114,115]. Depending on whether variables were normally or non-normally distributed, the tests used were one-way ANOVA with Bonferroni-corrected post hoc comparisons, or the Kruskal–Wallis and Mann–Whitney tests.
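For readers without SPSS, an equivalent workflow can be reproduced with open-source tools; the sketch below is a hedged Python analogue (the study itself used SPSS 27), with a Shapiro–Wilk check standing in for the normality assessment and with illustrative column and group names.

```python
# Hedged Python analogue of the analysis pipeline (the study itself used SPSS 27).
# Column names, group labels and the data file are illustrative assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("auditory_scores.csv")                 # hypothetical data file
groups = [g["WRRC_RH2"].to_numpy() for _, g in df.groupby("group")]

# Normality check per group, then parametric vs non-parametric group comparison
if all(stats.shapiro(g).pvalue > 0.05 for g in groups):
    stat, p = stats.f_oneway(*groups)                   # one-way between-subjects ANOVA
else:
    stat, p = stats.kruskal(*groups)                    # Kruskal-Wallis H test
print(stat, p)

# Musicians vs non-musicians on a non-normal measure: Mann-Whitney U test
musicians = df.loc[df["group"] != "non-musician", "DigitB"]
controls = df.loc[df["group"] == "non-musician", "DigitB"]
u, p = stats.mannwhitneyu(musicians, controls, alternative="two-sided")
```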

3. Results

3.1. Word Recognition Rhythm Component

A one-way between-subjects ANOVA was conducted to compare the effect of music experience, or its absence, on WRRC_RH2 for classical musicians, Byzantine musicians, percussionists and non-musicians. This variable was normally distributed.
A significant effect on WRRC_RH2 recognition for the four groups [F(3, 44) = 4.180, p = .011] was found. Post hoc comparisons using the Bonferroni correction indicated that the mean score of classical musicians (M = 12.42, SD = 1.56) was significantly better than that of byzantine musicians (M = 9.83, SD = 2.20) (Figure 1, Table 4).
A Mann-Whitney U test indicated that WRRC_RH1 scores were greater for 36 musicians (Mdn = 27.15) than for 12 non-musicians (Mdn = 16.54), U = 120.50, p = .019.
A two-sample t-test was also performed to compare musicians (M = 13.56, SD = 1.297) to non-musicians (M = 12.58, SD = 1.379) on WRRC_UnSc1. Musicians showed better word recognition [t(46) = 2.214, p = .032, r = .34], 95% CI [0.0963, 1.8637].
For the Speech in Babble test (SinB_RE and SinB_LE, for the right and left ear respectively), an ANOVA was conducted to compare the three groups of musicians and the non-musicians. Results did not reveal a statistically significant difference among the four groups for SinB_RE [F(3, 44) = .672, p = .574] or for SinB_LE [F(3, 44) = .599, p = .619] (Table 5).

3.2. Gaps In Noise

For the Gaps in Noise test (GIN_RE and GIN_LE, for the right and left ear respectively), an ANOVA was conducted to compare the four groups. Results did not reveal a statistically significant difference for GIN_RE [F(3, 44) = .516, p = .673] or for GIN_LE [F(3, 44) = .248, p = .863] (Table 6).

3.3. Digit Span

A Mann-Whitney U test was conducted to determine whether there is a difference in Digit Span Backwards (DigitB) scores between musicians (Mdn = 8.00, SD = 2.28) and non-musicians (Mdn = 6.50, SD = 2.02). The results indicate a significant difference between groups [U = 123.00, p = .025]. Overall, we conclude that there is a difference in working memory (DigitB) scores between musicians and non-musicians, as shown in Figure 2. However, no statistically significant difference in DigitB was revealed among the four groups of participants [H(3) = 5.49, p = .13].
For the Digit Span Forward (DigitF) test, there was no statistically significant difference between the musical specialization groups [F(3,44) = .709, p = .552], nor between musicians (M = 10.44, SD = 2.049) and non-musicians (M = 10.00, SD = 2.132; t(46) = .644, p = .522).

3.4. Frequency Discrimination Limen

An ANOVA was conducted for the Frequency Discrimination Limen at 500 Hz (Freq_d_500) and did not reveal a statistically significant difference between groups (p = .678). As the criteria for normal distribution were not satisfied, the Kruskal–Wallis test was administered for the Frequency Discrimination Limen at 1000 Hz (Freq_d_1000) and 2000 Hz (Freq_d_2000). No statistically significant difference was revealed for Freq_d_1000 [H(3) = 3.74, p = .29] among the four groups of participants. Comparisons of frequency discrimination across groups found that the three groups of musicians had lower thresholds (better discrimination) at a statistically significant level [H(3) = 11.28, p = .010] for the 2000 Hz frequency region compared to non-musicians, as shown in Figure 3. Byzantine musicians achieved the best (lowest) threshold (M = 3.17, SD = 2.44, p = .002), followed by classical musicians (M = 3.67, SD = 2.38, p = .017) and then percussionists (M = 4.42, SD = 3.45, p = .015), all compared to the non-musicians (M = 6.50, SD = 3.09).
A Mann-Whitney test for musicians’ and non-musicians’ groups indicated significantly better frequency discrimination [U = 82.50, p = .001] (see Figure 4) for musicians in general (M = 3.75, SD = 2.77) compared to non-musicians (M = 6.50, SD = 3.09).

4. Discussion

The primary aim of this study was to assess auditory processing among musicians of three different styles: Byzantine music, Western classical music (melodic instruments), and percussion (tuned and untuned). Any documented differences might provide insight into individualized rehabilitation through specific instrument- or style-based musical education and training. A secondary aim was to compare musicians with non-musicians in the Greek population and verify the musicians’ advantage in auditory skills supported by previous research [20,51,66,116,117].
The present study shows an advantage for Western classical musicians, compared to Byzantine musicians, in recognizing the second syllable of words in noise while making good use of the rhythm effect. This result is novel, as there are no other studies researching auditory processing in Byzantine chanters. It provides information on different levels of auditory processing improvement resulting from music education, depending on the style of music. The clinical implications of this result favor specifically tailored auditory training as a rehabilitation tool, as opposed to the one-size-fits-all approach that is usual in commercially available auditory training software.
Musicians' better performance in various auditory processing components was verified. They were better at word-in-noise recognition in all three conditions (rhythmic, unsynchronized, and non-rhythmic) of the WRRC test, with the Western classical musicians performing best of the three groups in the non-rhythmic component. The results of the present study indicate that the enhanced speech-in-noise perception in the WRRC test is most probably not due to a rhythm advantage but to the musicians' ability to avoid being distracted by asynchronous stimuli.
Byzantine chanters were found to have the best Frequency Discrimination Threshold for 2000 Hz compared to the other two groups of musicians. This result is novel, and it may be due to their different tuning system compared to Western music, in which each semitone corresponds to 6 moria subdivisions. Byzantine chanters also had better working memory on the backward digit span subtest compared to the other musicians.
However, for speech recognition in the SinB test, as well as for temporal resolution in the GIN test, there was no significant difference among the musicians of the three different styles and the non-musicians. The non-musician control group displayed exceptionally good results, comparable to musicians’ outcomes in other studies [60].
Results on the WRRC test indicate that musicians can perceive speech in noise in any condition, thus not supporting a specific rhythm effect. They appear to be better at an untrained auditory processing component regardless of the presence of rhythm. However, the advantage of classical musicians over the Byzantine musicians in this study for second-syllable discrimination in noise indicates that there is a rhythm-effect benefit for this group of musicians. The enhanced frequency discrimination in musicians revealed in the present study is in accordance with recent research highlighting a correlation between frequency precision and the Goldsmiths Musical Sophistication Index that is specific to the auditory domain and unrelated to vision or amplitude modulation [6]. The 2000 Hz-specific frequency discrimination advantage shown by Byzantine chanters in our study should be further investigated in a larger sample. The working memory advantage documented in the more difficult part of the digit span test (the backward subtest) is expected when comparing musicians with non-musicians.
Among the limitations of the present study is the absence of data on the educational profiles of participants. Secondly, although there was no statistically significant difference in mean age, a slight age difference among the four groups could be the reason for the absence of differentiation among the three musician groups, or between musicians and non-musicians, in speech-in-noise comprehension, according to previous research [119,120]. Our subjects were also not required to have pure-tone thresholds better than 20 dB on the audiometric test or to fill in a questionnaire assessing the impact of hearing impairment [121], whereas normal hearing was necessary in Varnet et al. [122], as musicians are more likely to experience hearing problems [121,122,123]. Interestingly, of the three musicians with abnormal pure-tone thresholds at one or two high frequencies, only one had an average pure-tone threshold above 20 dB, the limit generally considered normal, and exceeded it by less than 2 dB HL. As the average pure-tone threshold across the audiometrically tested frequencies is used to determine whether presentation-level adjustments are needed for auditory processing evaluation, no intensity adjustment was required when administering the auditory processing test battery to any of the participants in this study.
The fact that musical practice takes many forms, and that it is still unknown which specific elements of musical experience or expertise drive speech perception advantages, may be a contributing factor to the mixed experimental outcomes of musician versus non-musician comparisons. Even within categories such as classical or jazz performance, there is great diversity in instruction methods (e.g., learning to play from a score vs. learning to play by ear) that may influence the development of specific aspects of musical competence, such as rhythm perception and auditory memory [51].
Moreover, differences in beat alignment have been observed in relation to the time spent on musical practice [124]. Studies in musicians have shown that the more years a person is intensively engaged in music, the more areas of the brain are involved in perceiving, analyzing, and recording it, and the more neural networks develop to convey the language of music to wider areas of the brain and make it “musically driven”. Music contributes to the development of many skills and to the activation of many brain centers, which are associated with cognitive functions [99].
Slater et al. [125] attempted to estimate which specific rhythmic skills are associated with speech perception in noise, and whether these relationships extend to measures of rhythm production as well as perception. In that research, percussionists outperformed non-musicians not only in speech-in-noise perception but also on sequence- and beat-based drumming tasks. Speech-in-noise perception was correlated with the two sequence-based tasks (drumming to metrical and jittered sequences) [125]. Percussionists and singers did not differ in their performance on the musical competence test (rhythm or melody subtests), speech-in-noise perception (words or sentences) or auditory working memory [51]. Cognitive factors such as memory unquestionably intervene in the relationship between musical skill and hearing speech in noise [13]. The ability to perceive words in noise (WIN) did not relate to either rhythmic or melodic competence, nor to working memory competence [51]. Interestingly, recent studies substantiated enhanced pitch perception and melodic discrimination in musicians, yet did not detect any advantage for speech-in-noise comprehension [51,56].
Although the present study cannot speak to the precise effects of music training in distinct musical styles, our cross-sectional findings provide a basis for further investigation into the potential of music training to reinforce auditory processing skills and the building blocks of communication. In parallel, music therapy is the clinical application of music to treat disease in individuals who can benefit from music and thereby improve their quality of life. It has been known since the Byzantine era, when many of the hospitals of Constantinople applied music therapy to neurological patients [125]. Music therapy is a pleasant and painless therapeutic method, usually practiced by specialized therapists who have the appropriate knowledge and experience [126]. Therefore, we could potentially suggest that, for (C)APD cases, music training or music therapy, in addition to auditory training, could be a pleasant and effective way to sharpen auditory skills.
Further research could seek differences in the Frequency Discrimination Limen between Byzantine chanters and musicians of other genres, as well as their relation to speech-in-noise comprehension. Likewise, studies applying music training for auditory processing, with defined age limits and hearing sensitivity criteria, could supplement recommendations on the best approach to training individuals with auditory processing disorders.

5. Conclusion

The study reveals that different auditory processing elements are enhanced as a result of training in different musical styles and instruments. Neuroplasticity appears to be specific while also extending to non-trained elements of auditory processing, such as speech-in-noise perception. This sets the basis for individualized auditory training in individuals with APD.

Author Contributions

K.M., G.P., V.M.I. and C.S. conceived and designed the study. K.M. collected the data and wrote the first draft of the manuscript. K.M. performed the statistical analysis. G.P., V.M.I. and C.S. critically revised the manuscript at all stages of writing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Aristotle University of Thessaloniki Ethics and Bioethics Committee (protocol number: 6.613).

Data Availability Statement

The data presented in this study are available on request from the first author.

Acknowledgments

The study and data collection took place at the Clinical Psychoacoustics Lab of the Medical School of Aristotle University of Thessaloniki. The author is grateful to all participants in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations:

APD: auditory processing disorder, SNR: signal-to-noise ratio, dB: decibel, RE: right ear, LE: left ear, BM: Byzantine music, BC: Byzantine chant

References

  1. Bamiou, D.-E. Aetiology and Clinical Presentations of Auditory Processing Disorders - a Review. Arch Dis Child 2001, 85, 361–365. [Google Scholar] [CrossRef]
  2. Chermak, G. D.; Lee, J. Comparison of Children’s Performance on Four Tests of Temporal Resolution. J Am Acad Audiol 2005, 16, 554–563. [Google Scholar] [CrossRef]
  3. Rudner, M.; Rönnberg, J.; Lunner, T. Working Memory Supports Listening in Noise for Persons with Hearing Impairment. J Am Acad Audiol 2011, 22, 156–167. [Google Scholar] [CrossRef]
  4. Iliadou, V.; Bamiou, D. E.; Sidiras, C.; Moschopoulos, N. P.; Tsolaki, M.; Nimatoudis, I.; Chermak, G. D. The Use of the Gaps-in-Noise Test as an Index of the Enhanced Left Temporal Cortical Thinning Associated with the Transition between Mild Cognitive Impairment and Alzheimer’s Disease. J Am Acad Audiol 2017, 28, 463–471. [Google Scholar] [CrossRef]
  5. Schlaug, G.; Norton, A.; Overy, K.; Winner, E. Effects of Music Training on the Child’s Brain and Cognitive Development. Ann N Y Acad Sci 2005, 1060, 219–230. [Google Scholar] [CrossRef]
  6. Lad, M.; Billig, A. J.; Kumar, S.; Griffiths, T. D. A Specific Relationship between Musical Sophistication and Auditory Working Memory. Sci Rep 2022, 12. [Google Scholar] [CrossRef]
  7. Kraus, N.; Chandrasekaran, B. Music Training for the Development of Auditory Skills. Nat Rev Neurosci 2010, 11, 599–605. [Google Scholar] [CrossRef] [PubMed]
  8. Kraus, N.; Skoe, E.; Parbery-Clark, A.; Ashley, R. Experience-Induced Malleability in Neural Encoding of Pitch, Timbre, and Timing: Implications for Language and Music. In Annals of the New York Academy of Sciences; Blackwell Publishing Inc., 2009; Volume 1169, pp. 543–557. [Google Scholar] [CrossRef]
  9. Koelsch, S.; Schröger, E.; Tervaniemi, M. Superior Pre-Attentive Auditory Processing in Musicians. Neuroreport 1999, 10, 1309–1313. [Google Scholar] [CrossRef] [PubMed]
  10. Strait, D. L.; Kraus, N. Biological Impact of Auditory Expertise across the Life Span: Musicians as a Model of Auditory Learning. Hear Res 2014, 308, 109–121. [Google Scholar] [CrossRef] [PubMed]
  11. Tervaniemi, M.; Just, V.; Koelsch, S.; Widmann, A.; Schroger, E. Pitch Discrimination Accuracy in Musicians vs Non-musicians: An Event-Related Potential and Behavioral Study. Exp Brain Res 2005, 161, 1–10. [Google Scholar] [CrossRef] [PubMed]
  12. Forgeard, M.; Winner, E.; Norton, A.; Schlaug, G. Practicing a Musical Instrument in Childhood Is Associated with Enhanced Verbal Ability and Nonverbal Reasoning. PLoS One 2008, 3. [Google Scholar] [CrossRef] [PubMed]
  13. Kraus, N.; Strait, D. L.; Parbery-Clark, A. Cognitive Factors Shape Brain Networks for Auditory Skills: Spotlight on Auditory Working Memory. Ann N Y Acad Sci 2012, 1252, 100–107. [Google Scholar] [CrossRef] [PubMed]
  14. Slater, J.; Skoe, E.; Strait, D. L.; O’Connell, S.; Thompson, E.; Kraus, N. Music Training Improves Speech-in-Noise Perception: Longitudinal Evidence from a Community-Based Music Program. Behavioural Brain Research 2015, 291, 244–252. [Google Scholar] [CrossRef]
  15. Tzounopoulos, T.; Kraus, N. Learning to Encode Timing: Mechanisms of Plasticity in the Auditory Brainstem. Neuron 2009. [Google Scholar] [CrossRef] [PubMed]
  16. Margulis, E. H.; Mlsna, L. M.; Uppunda, A. K.; Parrish, T. B.; Wong, P. C. M. Selective Neurophysiologic Responses to Music in Instrumentalists with Different Listening Biographies. Hum Brain Mapp 2009, 30, 267–275. [Google Scholar] [CrossRef]
  17. Pantev, C.; Roberts, L. E.; Schulz, M.; Engelien, A.; Ross, B. Timbre-Specific Enhancement of Auditory Cortical Representations in Musicians. Neuroreport 2001, 12, 169–174. [Google Scholar] [CrossRef]
  18. Strait, D. L.; Chan, K.; Ashley, R.; Kraus, N. Specialization among the Specialized: Auditory Brainstem Function Is Tuned in to Timbre. Cortex 2012, 48, 360–362. [Google Scholar] [CrossRef]
  19. Besson, M.; Chobert, J.; Marie, C. Transfer of Training between Music and Speech: Common Processing, Attention, and Memory. Front Psychol 2011, 2. [Google Scholar] [CrossRef]
  20. Parbery-Clark, A.; Skoe, E.; Lam, C.; Kraus, N. Musician Enhancement for Speech-In-Noise. Ear Hear 2009, 30, 653–661. [Google Scholar] [CrossRef]
  21. Patel, A. D. Why Would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis. Front Psychol 2011, 2. [Google Scholar] [CrossRef]
  22. Patel, A. D.; Iversen, J. R. The Linguistic Benefits of Musical Abilities. Trends Cogn Sci 2007, 11, 369–372. [Google Scholar] [CrossRef] [PubMed]
  23. Strait, D. L.; Parbery-Clark, A.; Hittner, E.; Kraus, N. Musical Training during Early Childhood Enhances the Neural Encoding of Speech in Noise. Brain Lang 2012, 123, 191–201. [Google Scholar] [CrossRef] [PubMed]
  24. Strait, D. L.; Kraus, N.; Skoe, E.; Ashley, R. Musical Experience and Neural Efficiency - Effects of Training on Subcortical Processing of Vocal Expressions of Emotion. European Journal of Neuroscience 2009, 29, 661–668. [Google Scholar] [CrossRef] [PubMed]
  25. Patel, A. D. Can Nonlinguistic Musical Training Change the Way the Brain Processes Speech? The Expanded OPERA Hypothesis. Hear Res 2014, 308, 98–108. [Google Scholar] [CrossRef] [PubMed]
  26. Elbert, T.; Pantev, C.; Wienbruch, C.; Rockstroh, B.; Taub, E. Increased Cortical Representation of the Fingers of the Left Hand in String Players. Science (1979) 1995, 270, 305–307. [Google Scholar] [CrossRef] [PubMed]
  27. Vuust, P.; Brattico, E.; Seppänen, M.; Näätänen, R.; Tervaniemi, M. Practiced Musical Style Shapes Auditory Skills. Ann N Y Acad Sci 2012, 1252, 139–146. [Google Scholar] [CrossRef] [PubMed]
  28. Vuust, P.; Brattico, E.; Seppänen, M.; Näätänen, R.; Tervaniemi, M. The Sound of Music: Differentiating Musicians Using a Fast, Musical Multi-Feature Mismatch Negativity Paradigm. Neuropsychologia 2012, 50, 1432–1443. [Google Scholar] [CrossRef]
  29. Medwetsky, L. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention. Lang Speech Hear Serv Sch 2011, 42, 286–296. [Google Scholar] [CrossRef]
  30. American Speech-Language-Hearing Association. Central auditory processing: Current status of research and implications for clinical practice. American Journal of Audiology 1996, 5, 41–54. [CrossRef]
  31. American Speech-Language-Hearing Association. (Central) Auditory Processing Disorders—The Role of the Audiologist [Position Statement]; 2005. [CrossRef]
  32. Monteiro, A. R. M.; Nascimento, M. F.; Soares, D. C.; Ferreira, I. M. D. D. C. Temporal Resolution Abilities in Violinist Musicians and Non-Musicians; 2010. [Google Scholar]
  33. Samelli, A. G.; Schochat, E. The Gaps-in-Noise Test: Gap Detection Thresholds in Normal-Hearing Young Adults. Int J Audiol 2008, 47, 238–245. [Google Scholar] [CrossRef] [PubMed]
  34. Phillips, D. P. Central Auditory System and Central Auditory Processing Disorders: Some Conceptual Issues; 2002; Volume 23. [Google Scholar]
  35. Chermak, G. D.; Musiek, F. E. Central Auditory Processing Disorders: New Perspectives; Singular Pub Group: San Diego, 1997. [Google Scholar]
  36. Gelfand, S. A. Hearing : An Introduction to Psychological and Physiological Acoustics; Informa Healthcare, 2010. [Google Scholar]
  37. Griffiths, T. D.; Warren, J. D. The Planum Temporale as a Computational Hub. Trends Neurosci 2002, 25, 348–353. [Google Scholar] [CrossRef] [PubMed]
  38. Hautus, M. J.; Setchell, G. J.; Waldie, K. E.; Kirk, I. J. Age-Related Improvements in Auditory Temporal Resolution in Reading-Impaired Children. Dyslexia 2003, 9, 37–45. [Google Scholar] [CrossRef] [PubMed]
  39. Walker, M. M.; Shinn, J. B.; Cranford, J. L.; Givens, G. D.; Holbert, D. Auditory Temporal Processing Performance of Young Adults With Reading Disorders. Journal of Speech, Language, and Hearing Research 2002, 45, 598–605. [Google Scholar] [CrossRef] [PubMed]
  40. Rance, G.; McKay, C.; Grayden, D. Perceptual Characterization of Children with Auditory Neuropathy. Ear Hear 2004, 25, 34–46. [Google Scholar] [CrossRef] [PubMed]
  41. Fingelkurts, A. A.; Fingelkurts, A. A. Timing in Cognition and EEG Brain Dynamics: Discreteness versus Continuity. Cogn Process 2006, 7, 135–162. [Google Scholar] [CrossRef] [PubMed]
  42. Bao, Y.; Szymaszek, A.; Wang, X.; Oron, A.; Pöppel, E.; Szelag, E. Temporal Order Perception of Auditory Stimuli Is Selectively Modified by Tonal and Non-Tonal Language Environments. Cognition 2013, 129, 579–585. [Google Scholar] [CrossRef] [PubMed]
  43. Grube, M.; Kumar, S.; Cooper, F. E.; Turton, S.; Griffiths, T. D. Auditory Sequence Analysis and Phonological Skill. Proceedings of the Royal Society B: Biological Sciences 2012, 279, 4496–4504. [Google Scholar] [CrossRef]
  44. Grube, M.; Cooper, F. E.; Griffiths, T. D. Auditory Temporal-Regularity Processing Correlates with Language and Literacy Skill in Early Adulthood. Cogn Neurosci 2013, 4, (3–4). [Google Scholar] [CrossRef]
  45. Iliadou, V.; Ptok, M.; Grech, H.; Pedersen, E. R.; Brechmann, A.; Deggouj, N.; Kiese-Himmel, C.; Sliwinska-Kowalska, M.; Nickisch, A.; Demanez, L.; Veuillet, E.; Thai-Van, H.; Sirimanna, T.; Callimachou, M.; Santarelli, R.; Kuske, S.; Barajas, J.; Hedjever, M.; Konukseven, O.; Veraguth, D.; Mattsson, T. S.; Martins, J. H.; Bamiou, D. E. A European Perspective on Auditory Processing Disorder-Current Knowledge and Future Research Focus. Front Neurol 2017, 8. [Google Scholar] [CrossRef]
  46. Musiek, F. E.; Shinn, J.; Chermak, G. D.; Bamiou, D.-E. Perspectives on the Pure-Tone Audiogram. J Am Acad Audiol 2017, 28, 655–671. [Google Scholar] [CrossRef]
  47. World report on hearing. Geneva: World Health Organization; 2021. Licence: CC BY-NC-SA 3.0 IGO.
  48. Musiek, F. E.; Baran, J. A.; Bellis, T. J.; Chermak, G. D.; Hall, J. W., III; Keith, R. W.; Medwetsky, L.; Loftus West, K.; Young, M.; Nagle, S. American Academy of Audiology Clinical Practice Guidelines for the Diagnosis, Treatment and Management of Children and Adults with Central Auditory Processing Disorder; Task Force Report; 2010. [Google Scholar]
  49. Musiek, F. E.; Shinn, J. B.; Jirsa, R.; Bamiou, D.-E.; Baran, J. A.; Zaidan, E. GIN (Gaps-In-Noise) Test Performance in Subjects with Confirmed Central Auditory Nervous System Involvement; 2005. [Google Scholar]
  50. Gilley, P. M.; Sharma, M.; Purdy, S. C. Oscillatory Decoupling Differentiates Auditory Encoding Deficits in Children with Listening Problems. Clinical Neurophysiology 2016, 127, 1618–1628. [Google Scholar] [CrossRef]
  51. Slater, J.; Kraus, N. The Role of Rhythm in Perceiving Speech in Noise: A Comparison of Percussionists, Vocalists and Non-Musicians. Cogn Process 2016, 17, 79–87. [Google Scholar] [CrossRef] [PubMed]
  52. Coffey, E. B. J.; Mogilever, N. B.; Zatorre, R. J. Speech-in-Noise Perception in Musicians: A Review. Hear Res 2017, 352, 49–69. [Google Scholar] [CrossRef] [PubMed]
  53. Hennessy, S.; Mack, W. J.; Habibi, A. Speech-in-noise Perception in Musicians and Non-musicians: A Multi-level Meta-Analysis. Hear Res 2022, 416, 108442. [Google Scholar] [CrossRef] [PubMed]
  54. Boebinger, D.; Evans, S.; Rosen, S.; Lima, C. F.; Manly, T.; Scott, S. K. Musicians and Non-Musicians Are Equally Adept at Perceiving Masked Speech. J Acoust Soc Am 2015, 137, 378–387. [Google Scholar] [CrossRef] [PubMed]
  55. Fuller, C. D.; Galvin, J. J.; Maat, B.; Free, R. H.; Başkent, D. The Musician Effect: Does It Persist under Degraded Pitch Conditions of Cochlear Implant Simulations? Front Neurosci 2014. [Google Scholar] [CrossRef]
  56. Ruggles, D. R.; Freyman, R. L.; Oxenham, A. J. Influence of Musical Training on Understanding Voiced and Whispered Speech in Noise. PLoS One 2014, 9. [Google Scholar] [CrossRef] [PubMed]
  57. Sidiras, C.; Iliadou, V.; Nimatoudis, I.; Reichenbach, T.; Bamiou, D. E. Spoken Word Recognition Enhancement Due to Preceding Synchronized Beats Compared to Unsynchronized or Unrhythmic Beats. Front Neurosci 2017, 11. [Google Scholar] [CrossRef]
  58. Sidiras, C.; Iliadou, V. V.; Nimatoudis, I.; Bamiou, D. E. Absence of Rhythm Benefit on Speech in Noise Recognition in Children Diagnosed With Auditory Processing Disorder. Front Neurosci 2020, 14. [Google Scholar] [CrossRef]
  59. Plack, C. J.; Viemeister, N. F. Suppression and the Dynamic Range of Hearing. J Acoust Soc Am 1993, 93, 976–982. [Google Scholar] [CrossRef]
  60. Iliadou, V.; Bamiou, D. E.; Chermak, G. D.; Nimatoudis, I. Comparison of Two Tests of Auditory Temporal Resolution in Children with Central Auditory Processing Disorder, Adults with Psychosis, and Adult Professional Musicians. Int J Audiol 2014, 53, 507–513. [Google Scholar] [CrossRef] [PubMed]
  61. Chermak, G. D.; Musiek, F. E. Central Auditory Processing Disorders: New Perspectives; Singular Pub Group: San Diego, 1997. [Google Scholar]
  62. Elangovan, S.; Stuart, A. Natural Boundaries in Gap Detection Are Related to Categorical Perception of Stop Consonants. Ear Hear 2008, 29, 761–774. [Google Scholar] [CrossRef] [PubMed]
  63. Keith, R. Random Gap Detection Test; Auditec: St. Louis, MO, 2000.
  64. Rammsayer, T. H.; Buttkus, F.; Altenmüller, E. Musicians Do Better than Non-musicians in Both Auditory and Visual Timing Tasks. Music Percept 2012, 30(1), 85–96. [Google Scholar] [CrossRef]
  65. Donai, J. J.; Jennings, M. B. Gaps-in-Noise Detection and Gender Identification from Noise-Vocoded Vowel Segments: Comparing Performance of Active Musicians to Non-Musicians. J Acoust Soc Am 2016, 139, EL128–EL134. [Google Scholar] [CrossRef] [PubMed]
  66. Kumar, P.; Sanju, H.; Nikhil, J. Temporal Resolution and Active Auditory Discrimination Skill in Vocal Musicians. Int Arch Otorhinolaryngol 2015, 20, 310–314. [Google Scholar] [CrossRef]
  67. Rammsayer, T.; Altenmüller, E. Temporal Information Processing in Musicians and Non-musicians. Music Percept 2006, 24, 37–48. [Google Scholar] [CrossRef]
  68. Ryn Junior, F. Van; Lüders, D.; Casali, R. L.; Amaral, M. I. R. do. Temporal Auditory Processing in People Exposed to Musical Instrument Practice. Codas 2022, 34. [Google Scholar] [CrossRef]
  69. Sangamanatha, V. A.; Bhat, J.; Srivastava, M. Temporal Resolution in Individuals with and without Musical Training; 2012. https://www.researchgate.net/publication/230822432.
  70. Tervaniemi, M.; Janhunen, L.; Kruck, S.; Putkinen, V.; Huotilainen, M. Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features. Front Psychol 2016, 6. [Google Scholar] [CrossRef] [PubMed]
  71. Nascimento, F.; Monteiro, R.; Soares, C.; Ferreira, M. Temporal Sequencing Abilities in Musicians Violinists and Non-Musicians. Arquivos Internacionais de Otorrinolaringologia 2014, 14, 217–224. [Google Scholar] [CrossRef]
  72. Brandler, S.; Rammsayer, T. H. Differences in Mental Abilities between Musicians and Non-Musicians. Psychol Music 2003, 31, 123–138. [Google Scholar] [CrossRef]
  73. Chan, A. S.; Ho, Y.-C.; Cheung, M.-C. Music Training Improves Verbal Memory. Nature 1998, 396, 128–128. [Google Scholar] [CrossRef]
  74. Franklin, M. S.; Sledge Moore, K.; Yip, C.-Y.; Jonides, J.; Rattray, K.; Moher, J. The Effects of Musical Training on Verbal Memory. Psychol Music 2008, 36, 353–365. [Google Scholar] [CrossRef]
  75. George, E. M.; Coch, D. Music Training and Working Memory: An ERP Study. Neuropsychologia 2011, 49, 1083–1094. [Google Scholar] [CrossRef] [PubMed]
  76. Hallam, S.; Himonides, E. The Power of Music; Open Book Publishers: Cambridge, UK, 2022. [Google Scholar] [CrossRef]
  77. Hansen, M.; Wallentin, M.; Vuust, P. Working Memory and Musical Competence of Musicians and Non-Musicians. Psychol Music 2013, 41, 779–793. [Google Scholar] [CrossRef]
  78. Jakobson, L. S.; Lewycky, S. T.; Kilgour, A. R.; Stoesz, B. M. Memory for Verbal and Visual Material in Highly Trained Musicians. Music Percept 2008, 26, 41–55. [Google Scholar] [CrossRef]
  79. Lee, Y.; Lu, M.; Ko, H. Effects of Skill Training on Working Memory Capacity. Learn Instr 2007, 17, 336–344. [Google Scholar] [CrossRef]
  80. Pallesen, K. J.; Brattico, E.; Bailey, C. J.; Korvenoja, A.; Koivisto, J.; Gjedde, A.; Carlson, S. Cognitive Control in Auditory Working Memory Is Enhanced in Musicians. PLoS One 2010, 5. [Google Scholar] [CrossRef] [PubMed]
  81. Parbery-Clark, A.; Strait, D. L.; Anderson, S.; Hittner, E.; Kraus, N. Musical Experience and the Aging Auditory System: Implications for Cognitive Abilities and Hearing Speech in Noise. PLoS One 2011, 6. [Google Scholar] [CrossRef]
  82. Talamini, F.; Carretti, B.; Grassi, M. The Working Memory of Musicians and Non-musicians. Music Perception: An Interdisciplinary Journal 2016, 34, 183–191. [Google Scholar] [CrossRef]
  83. Talamini, F.; Altoè, G.; Carretti, B.; Grassi, M. Musicians Have Better Memory than Non-musicians: A Meta-Analysis. PLoS One 2017, 12. [Google Scholar] [CrossRef]
  84. Taylor, A. C.; Dewhurst, S. A. Investigating the Influence of Music Training on Verbal Memory. Psychol Music 2017, 45, 814–820. [Google Scholar] [CrossRef]
  85. Vasuki, P. R. M.; Sharma, M.; Demuth, K.; Arciuli, J. Musicians’ Edge: A Comparison of Auditory Processing, Cognitive Abilities and Statistical Learning. Hear Res 2016, 342, 112–123. [Google Scholar] [CrossRef] [PubMed]
  86. Wallentin, M.; Nielsen, A. H.; Friis-Olivarius, M.; Vuust, C.; Vuust, P. The Musical Ear Test, a New Reliable Test for Measuring Musical Competence. Learn Individ Differ 2010, 20, 188–196. [Google Scholar] [CrossRef]
  87. Zuk, J.; Benjamin, C.; Kenyon, A.; Gaab, N. Behavioral and Neural Correlates of Executive Functioning in Musicians and Non-Musicians. PLoS One 2014, 9. [Google Scholar] [CrossRef] [PubMed]
  88. Belin, P. Voice Processing in Human and Non-Human Primates. In Philosophical Transactions of the Royal Society B: Biological Sciences; Royal Society, 2006. [Google Scholar] [CrossRef]
  89. Bianchi, F.; Santurette, S.; Wendt, D.; Dau, T. Pitch Discrimination in Musicians and Non-Musicians: Effects of Harmonic Resolvability and Processing Effort. J Assoc Res Otolaryngol 2016, 17, 69–79. [Google Scholar] [CrossRef] [PubMed]
  90. Inabinet, D.; De La Cruz, J.; Cha, J.; Ng, K.; Musacchia, G. Diotic and Dichotic Mechanisms of Discrimination Threshold in Musicians and Non-Musicians. Brain Sci 79, 79. [Google Scholar] [CrossRef] [PubMed]
  91. Magne, C.; Schön, D.; Besson, M. Musician Children Detect Pitch Violations in Both Music and Language Better than Nonmusician Children: Behavioral and Electrophysiological Approaches. J Cogn Neurosci. 2006, 18(2), 199–211. [Google Scholar] [CrossRef]
  92. Micheyl, C.; Delhommeau, K.; Perrot, X.; Oxenham, A. J. Influence of Musical and Psychoacoustical Training on Pitch Discrimination. Hear Res 2006, 219, 36–47. [Google Scholar] [CrossRef]
  93. Musacchia, G.; Sams, M.; Skoe, E.; Kraus, N. Musicians Have Enhanced Subcortical Auditory and Audiovisual Processing of Speech and Music. Proc Natl Acad Sci USA 2007, 104. [Google Scholar] [CrossRef]
  94. Toh, X. R.; Tan, S. H.; Wong, G.; Lau, F.; Wong, F. C. K. Enduring Musician Advantage among Former Musicians in Prosodic Pitch Perception. Sci Rep 2023, 13(1), 2657. [Google Scholar] [CrossRef]
  95. Kishon-Rabin, L.; Amir, O.; Vexler, Y.; Zaltz, Y. Pitch Discrimination: Are Professional Musicians Better than Non-Musicians? J Basic Clin Physiol Pharmacol 2001, 12. [Google Scholar] [CrossRef] [PubMed]
  96. Tervaniemi, M.; Huotilainen, M.; Brattico, E. Melodic Multi-Feature Paradigm Reveals Auditory Profiles in Music-Sound Encoding. Front Hum Neurosci 2014, 8. [Google Scholar] [CrossRef] [PubMed]
  97. Delviniotis, D.; Kouroupetroglou, G.; Theodoridis, S. Acoustic Analysis of Musical Intervals in Modern Byzantine Chant Scales. J Acoust Soc Am 2008, 124(4), EL262–EL269. [Google Scholar] [CrossRef] [PubMed]
  98. Wellesz, E. A History of Byzantine Music and Hymnography; Clarendon Press: Oxford, 1961. [Google Scholar]
  99. Baloyianis, St. Psaltic Art and the Brain: The Philosophy of the Byzantine Music from the Perspectives of the Neurosciences. In The Psaltic Art as an Autonomous Science: Scientific Branches – Related Scientific Fields – Interdisciplinary Collaborations and Interaction, Volos, 29 June–3 July 2014; 2015. https://speech.di.uoa.gr/IMC2014/.
  100. Delviniotis, D. S. New Method of Byzantine Music (BM) Intervals’ Measuring and Its Application in the Fourth Mode. A New Approach of the Music Intervals’ Definition. MODUS-MODI_MODALITY International Musicological Conference; 6-10 September 2017. [CrossRef]
  101. Patriarchal Music Committee. Στοιχειώδης διδασκαλία της Εκκλησιαστικής Μουσικής - εκπονηθείσα επί τη βάσει του ψαλτηρίου [Elementary Teaching of Ecclesiastical Music – elaborated on the base of the psalter]; Κωνσταντινούπολις, 1881.
  102. Kypourgos, N. Μερικές Παρατηρήσεις Πάνω Στα Διαστήματα Της Ελληνικής Και Aνατολικής Μουσικής [Some Observations on the Intervals of Greek and Eastern Music]. Μουσικολογία 1985, 2. [Google Scholar]
  103. Sundberg, J. The Acoustics of the Singing Voice. Sci Am 1977, 236, 82–91. [Google Scholar] [CrossRef] [PubMed]
  104. Iliadou, V.; Bamiou, D.-E.; Kaprinis, S.; Kandylis, D.; Kaprinis, G. Auditory Processing Disorders in Children Suspected of Learning Disabilities—A Need for Screening? Int J Pediatr Otorhinolaryngol 2009, 73(7), 1029–1034. [Google Scholar] [CrossRef]
  105. Sidiras, C.; Iliadou, V.; Chermak, G. D.; Nimatoudis, I. Assessment of Functional Hearing in Greek-Speaking Children Diagnosed with Central Auditory Processing Disorder. J Am Acad Audiol 2016, 27(5), 395–405. [Google Scholar] [CrossRef]
  106. Shinn, J. B.; Chermak, G. D.; Musiek, F. E. GIN (Gaps-In-Noise) Performance in the Pediatric Population. J Am Acad Audiol 2009, 20(04), 229–238. [Google Scholar] [CrossRef]
  107. Wechsler, D. The Wechsler adult intelligence scale-III; Psychological Corporation: San Antonio, TX, 1997. [Google Scholar]
  108. Flagge, A. G.; Neeley, M. E.; Davis, T. M.; Henbest, V. S. A Preliminary Exploration of Pitch Discrimination, Temporal Sequencing, and Prosodic Awareness Skills of Children Who Participate in Different School-Based Music Curricula. Brain Sci 2021, 11(8), 982. [Google Scholar] [CrossRef] [PubMed]
  109. Sun, Y.; Lu, X.; Ho, H. T.; Thompson, W. F. Pitch Discrimination Associated with Phonological Awareness: Evidence from Congenital Amusia. Sci Rep 2017, 7. [Google Scholar] [CrossRef] [PubMed]
  110. Nelson, D. A.; Stanton, M. E.; Freyman, R. L. A General Equation Describing Frequency Discrimination as a Function of Frequency and Sensation Level. J Acoust Soc Am. 1983, 73(6), 2117–23. [Google Scholar] [CrossRef] [PubMed]
  111. Loutridis, S. Aκουστική: αρχές και εφαρμογές [Acoustics: Principles and Applications]; Tziola: Thessaloniki, 2015. [Google Scholar]
  112. Iliadou, V.; Fourakis, M.; Vakalos, A.; Hawks, J. W.; Kaprinis, G. Bi-Syllabic, Modern Greek Word Lists for Use in Word Recognition Tests. Int J Audiol 2006, 45, 74–82. [Google Scholar] [CrossRef] [PubMed]
  113. Iliadou, V. (Vivian); Apalla, K.; Kaprinis, S.; Nimatoudis, I.; Kaprinis, G.; Iacovides, A. Is Central Auditory Processing Disorder Present in Psychosis? Am J Audiol 2013, 22, 201–208. [Google Scholar] [CrossRef] [PubMed]
  114. Cramer, D.; Howitt, D. The SAGE Dictionary of Statistics; SAGE Publications, Ltd: 1 Oliver’s Yard, 55 City Road, London England EC1Y 1SP United Kingdom, 2004. [Google Scholar] [CrossRef]
  115. Doane, D. P.; Seward, L. E. Measuring Skewness: A Forgotten Statistic? Journal of Statistics Education 2011, 19(2). [Google Scholar] [CrossRef]
  116. Du, Y.; Zatorre, R. J. Musical Training Sharpens and Bonds Ears and Tongue to Hear Speech Better. Proceedings of the National Academy of Sciences 2017, 114(51), 13579–13584. [Google Scholar] [CrossRef]
  117. Li, X.; Zatorre, R. J.; Du, Y. The Microstructural Plasticity of the Arcuate Fasciculus Undergirds Improved Speech in Noise Perception in Musicians. Cerebral Cortex 2021, 31, 3975–3985. [Google Scholar] [CrossRef]
  118. Snell, K. B.; Frisina, D. R. Relationships among Age-Related Differences in Gap Detection and Word Recognition. J Acoust Soc Am 2000, 107, 1615–1626. [Google Scholar] [CrossRef]
  119. Snell, K. B.; Mapes, F. M.; Hickman, E. D.; Frisina, D. R. Word Recognition in Competing Babble and the Effects of Age, Temporal Processing, and Absolute Sensitivity. J Acoust Soc Am 2002, 112, 720–727. [Google Scholar] [CrossRef]
  120. Vardonikolaki, A.; Pavlopoulos, V.; Pastiadis, K.; Markatos, N.; Papathanasiou, I.; Papadelis, G.; Logiadis, M.; Bibas, A. Musicians’ Hearing Handicap Index: A New Questionnaire to Assess the Impact of Hearing Impairment in Musicians and Other Music Professionals. JSLHR 2020, 63, 4219–4237. [Google Scholar] [CrossRef]
  121. Varnet, L.; Wang, T.; Peter, C.; Meunier, F.; Hoen, M. How Musical Expertise Shapes Speech Perception: Evidence from Auditory Classification Images. Sci Rep 2015, 5. [Google Scholar] [CrossRef]
  122. Vardonikolaki, A.; Kikidis, D.; Iliadou, E.; Markatos, N.; Pastiadis, K.; Bibas, A. Audiological Findings in Professionals Exposed to Music and Their Relation with Tinnitus; 2021; pp 327–353. [CrossRef]
  123. Spiech, C.; Endestad, T.; Laeng, B.; Danielsen, A.; Haghish, E. F. Beat Alignment Ability Is Associated with Formal Musical Training Not Current Music Playing. Front Psychol 2023, 14. [Google Scholar] [CrossRef] [PubMed]
  124. Slater, J.; Kraus, N.; Woodruff Carr, K.; Tierney, A.; Azem, A.; Ashley, R. Speech-in-Noise Perception Is Linked to Rhythm Production Skills in Adult Percussionists and Non-Musicians. Lang Cogn Neurosci 2018, 33, 710–717. [Google Scholar] [CrossRef] [PubMed]
  125. Baloyianis, St. Aι νευροεπιστήμαι εις το Βυζάντιον [The neurosciences in Byzantium]. ΕΓΚΕΦAΛOΣ 2012, 49, 34–46. http://www.encephalos.gr/pdf/49-1-04g.pdf.
  126. Garred, R. Music as Therapy: A Dialogical Perspective; Barcelona Publishers, 2006.
Figure 1. Boxplot of WRRC_RH2 by Specialization. Mean score of second-syllable recognition in the rhythmic condition (RH2), showing that classical musicians are better than Byzantine chanters at using the rhythm cue to perceive words in noise (specifically the second syllable of a word).
Figure 2. Boxplot of DigitB for Musicians and Non-musicians. Mean score of Digit Span Backwards, according to musical engagement.
Figure 3. Boxplot of DFL_2000. Mean score (in Hz) of Frequency Discrimination Limen at 2000Hz for musicians’ and non-musicians’ groups.
Figure 4. Boxplot of DFL_2000. Mean score (in Hz) of Frequency Discrimination Limen at 2000Hz for Musicians and Non-Musicians.
Table 1. Mean (SD) of participants’ age in years. ANOVA did not reveal a statistically significant difference in the average age of the four participant groups [F(3, 44) = .926, p = .436]; see the analysis sketch after the table.
Group  Age in years, mean (SD)
Byzantine chanters 39.17 (13.361)
Classical musicians 37.92 (11.866)
Percussionists 32.75 (10.635)
Non-musicians 33.25 (10.678)
Total 35.77 (11.661)
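For transparency, the following Python sketch illustrates how a one-way ANOVA of age across the four groups could be computed. It is a minimal illustration only: the individual ages below are invented placeholders (the study data are not reproduced here), so the resulting F and p values will differ from those reported in Table 1.

from scipy import stats

# Illustrative placeholder ages (12 participants per group); NOT the study data.
ages = {
    "Byzantine chanters": [25, 31, 38, 44, 52, 60, 29, 35, 41, 47, 33, 35],
    "Classical musicians": [24, 30, 37, 43, 51, 58, 28, 34, 40, 46, 32, 32],
    "Percussionists": [22, 27, 31, 36, 41, 48, 25, 29, 33, 38, 30, 33],
    "Non-musicians": [23, 28, 32, 37, 42, 49, 26, 30, 34, 39, 29, 30],
}

# One-way ANOVA across the four groups (between-groups df = 3, within-groups df = N - 4).
f_stat, p_value = stats.f_oneway(*ages.values())
n_total = sum(len(group) for group in ages.values())
print(f"F(3, {n_total - 4}) = {f_stat:.3f}, p = {p_value:.3f}")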
Table 2. Pure-tone thresholds for the three participants with mild high-frequency hearing loss. Thresholds at the other frequencies tested were normal and are therefore not shown.
Frequency/ear 4 kHz LE 8 kHz LE Average hearing threshold LE 4 kHz RE 8 kHz RE Average hearing threshold RE
Participant 1 40 dB 40 dB <20 dB 45 dB 40 dB >20 dB (21.7 dB)
Participant 2 30 dB 20 dB <20 dB 30 dB 20 dB <20 dB
Participant 3 45 dB 0 dB <20 dB 45 dB 20 dB <20 dB
LE = left ear; RE = right ear.
Table 3. Example of the Frequency Discrimination Limen procedure for the three parts. Each S–R pair gives the standard tone (S) and the comparison tone (R) presented against it; all values are in Hz. A schematic scoring sketch follows the table.
S R S R S R S R S R
500 Hz (part1) 500 530 500 510 500 504 500 520 500 501
1000 Hz (part2) 1000 1002 1000 1050 1000 1020 1000 1004 1000 1003
2000 Hz (part3) 2000 2005 2000 2020 2000 2003 2000 2010 2000 2001
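As a schematic illustration only (not the study’s exact adaptive rule or scoring), the Python sketch below pairs each standard tone with the comparison tones of Table 3 and takes the smallest frequency difference still reported as “different” as a rough limen estimate; the listener responses are invented for the example.

from collections import defaultdict

# (standard_Hz, comparison_Hz, reported_as_different) -- responses invented for illustration.
trials = [
    (500, 530, True), (500, 510, True), (500, 504, False), (500, 520, True), (500, 501, False),
    (1000, 1002, False), (1000, 1050, True), (1000, 1020, True), (1000, 1004, False), (1000, 1003, False),
    (2000, 2005, False), (2000, 2020, True), (2000, 2003, False), (2000, 2010, True), (2000, 2001, False),
]

# Collect the detected frequency differences per standard tone.
detected = defaultdict(list)
for standard, comparison, heard_difference in trials:
    if heard_difference:
        detected[standard].append(abs(comparison - standard))

# Report the smallest detected difference as a rough DFL estimate per standard frequency.
for standard in sorted(detected):
    print(f"Approximate DFL at {standard} Hz: {min(detected[standard])} Hz")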
Table 4. Mean scores (SDs in parentheses) for each group on the 1st (RH1) and 2nd (RH2) syllable of WRRC in the rhythmic condition.
RH1 RH2
Classical musicians 13.33 (1.30) 12.42 (1.56)
Byzantine musicians 13.17 (1.11) 9.83 (2.20)
Percussionists 13.17 (1.33) 10.92 (1.37)
Non-musicians 12.25 (.96) 11.33 (1.96)
Table 5. Mean (SD) scores for Speech in Babble for the Right and Left ear, respectively.
SinB_RE SinB_LE
Classical musicians -.183 (.3664) -1.233 (.2309)
Byzantine chanters -.083 (.5357) -1.208 (.1975)
Percussionists -.300 (.3766) -1.142 (.2811)
Non-musicians -.283 (.3950) -1.117 (.2823)
Table 6. Mean (SD) scores in msec for the GIN test.
GIN_RE GIN_LE
Classical musicians 5.33 (1.43) 5.67 (1.23)
Byzantine chanters 5.50 (1.31) 5.83 (1.46)
Percussionists 5.50 (1.67) 5.58 (1.31)
Non-musicians 4.92 (.66) 5.42 (.66)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.