ARTICLE | doi:10.20944/preprints202104.0234.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: sonification apps; auditory displays; torpedo level; spirit level; tools; accessibility; auditory feedback; auditory user interface
Online: 8 April 2021 (11:25:41 CEST)
This paper presents Tiltification, a multimodal spirit level application for smartphones. The non-profit app was produced by students in the master project “Sonification Apps” in winter term 2020/21 at the University of Bremen. In the app, psychoacoustic sonification gives feedback on the device’s rotation angles in two plane dimensions, allowing users to level furniture or take perfectly horizontal photos. Tiltification supplements the market of spirit level apps with the unique feature of auditory information processing. This offers additional benefit over a physical spirit level and greater accessibility for visually and cognitively impaired people. We argue that distributing sonification apps through mainstream channels contributes to establishing sonification in the market and to making it better known to users outside the scientific domain. We hope that the auditory display community will support us by using and recommending the app and by providing valuable feedback on the app’s functionality and design, and on our communication, advertisement and distribution strategy.
ARTICLE | doi:10.20944/preprints201911.0388.v1
Subject: Public Health And Healthcare, Public, Environmental And Occupational Health Keywords: social noise; auditory, non-auditory noise effects; personal music players; university students
Online: 30 November 2019 (10:07:18 CET)
Purpose: This study aimed to quantify the effects of social noise (personal music players (PMP), high-intensity noise exposure events) and road traffic noise exposure in a sample of Slovak university students living and studying in Bratislava. Methods: 1,003 university students (306 males and 697 females, average age 23.13±2) were enrolled in the study; 347 lived in a student housing facility exposed to road traffic noise (LAeq = 67.6 dB) and 656 in the control facility (LAeq = 53.4 dB). Respondents completed a validated ICBEN 5-grade “Noise annoyance questionnaire”. Exposure to PMP was objectified by converting the subjective evaluation of volume setting and listening duration. In cooperation with an ENT specialist, we arranged audiometric examinations for a pilot sample of 41 volunteers. Results: Of the 1,003 students, 794 (79.16%) reported using a PMP in the course of the last week, for an average of 285 minutes. There was a significant difference in PMP use between the exposed (85.59%) and the control group (75.76%) (p=0.01). Among PMP users, 30.7% exceeded the LAV (lower action value for industry, LAeq,8h = 80 dB). Audiometry testing of the pilot sample of volunteers (n=41) indicated a hearing threshold shift at higher frequencies in 22% of subjects. Conclusions: The results of this study of young, healthy individuals show the importance of exposure to environmental noise from different sources (transportation, neighborhood, construction, entertainment facilities, etc.) as well as social noise, and the need for prevention and intervention.
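The abstract does not state the exact conversion the authors used to objectify PMP exposure; a minimal sketch of the standard 8-hour energy-equivalence normalization (the basis of the LAeq,8h = 80 dB action value), with an assumed listening level, might look like this:

```python
import math

def laeq_8h(level_db: float, duration_min: float) -> float:
    """Normalize an A-weighted listening level measured over
    `duration_min` minutes to an 8-hour (480-min) equivalent:
    LAeq,8h = LAeq + 10 * log10(T / 480)."""
    return level_db + 10 * math.log10(duration_min / 480.0)

# Hypothetical PMP user: the study's average of 285 min/day of listening,
# at an assumed (not reported) listening level of 85 dB(A).
daily_dose = laeq_8h(85.0, 285.0)   # ≈ 82.7 dB
exceeds_lav = daily_dose >= 80.0    # lower action value; True in this case
```

Under this rule, the study's average listening time already exceeds the 80 dB action value whenever the listening level itself is above roughly 82 dB(A).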
REVIEW | doi:10.20944/preprints202207.0132.v1
Subject: Social Sciences, Behavior Sciences Keywords: developmental stuttering; attention; auditory feedback
Online: 8 July 2022 (05:24:28 CEST)
It has long been known that many people who stutter are immediately fluent in certain conditions, for instance when they speak in unison with others, in sync with the clicking of a metronome, or when they hear themselves speak in an altered manner. Understanding why stuttering is reduced or even eliminated in such conditions is desirable because it may help explain why stuttering occurs in normal speaking conditions. However, empirical findings in this area appear conflicting and confusing, especially with regard to the role of auditory feedback. This article gives an overview of the variety and diversity of fluency-enhancing conditions and of the theories proposed to explain their effect. These theories are evaluated in the light of recent empirical findings. A new hypothesis is proposed, based on findings showing that speech processing is limited without attention to the auditory channel. It is assumed that fluency-enhancing conditions draw the speaker’s attention to the auditory channel and thereby improve the processing of auditory feedback and its use in speech control. Implications of this account for a causal theory of stuttering and for the treatment of the disorder are discussed.
ARTICLE | doi:10.20944/preprints202206.0204.v1
Subject: Social Sciences, Cognitive Science Keywords: sonification evaluation, auditory display evaluation, visualization
Online: 14 June 2022 (11:10:46 CEST)
Comparing sonification with visualization is like comparing apples and oranges. While visualizations are ubiquitous to the public and have established names, principles, application areas, and sophisticated designs, sonifications tend to be unique, self-made and completely new to users. In this study we developed a rudimentary visualization closely related to the principle of the sonification designs that we want to evaluate. In addition, we implemented a prototypical sonification that uses the most common mapping principles. Experimental results show that participants perform similarly well using the rudimentary visualization and the prototypical sonification, much better than chance but significantly worse than using our new sonification design. We therefore argue that both rudimentary visualizations and prototypical sonifications can serve as suitable benchmarks against which to evaluate new sonification designs.
ARTICLE | doi:10.20944/preprints202310.1214.v1
Subject: Engineering, Bioengineering Keywords: finite element model; energy absorbance; auditory system
Online: 19 October 2023 (10:06:08 CEST)
There are different ways to analyse energy absorbance (EA) in the human auditory system. In previous research, we developed a complete finite element model (FEM) of the human auditory system, in which the external auditory canal (EAC), middle ear, and inner ear (spiral cochlea, vestibule, and semicircular canals) were modelled based on human temporal bone histological sections. Multiple acoustic, structural and fluid-coupled analyses were conducted with the FEM to perform harmonic analyses in the 0.1–10 kHz range. Once the FEM had been validated against published experimental data, its numerical results were used to calculate the EA or energy reflected (ER) by the tympanic membrane; EA is also measured in clinical audiology tests, where it is used as a diagnostic parameter. A mathematical approach was developed to calculate the EA and ER, with numerical and experimental results showing adequate correlation up to 1 kHz. Another published FEM had adapted its boundary conditions to replicate experimental results. Here, we recalculated those numerical results by applying the natural boundary conditions of human hearing and found that they almost totally agreed with our FEM. This boundary problem is frequent and problematic in experimental hearing test protocols: the more invasive they are, the more the results are affected. One of the main objectives of using FEMs is to explore how the experimental test conditions influence the results. Further work will be required to uncover the relationship between the middle ear structure and EA and to clarify how best to use FEMs. Moreover, the FEM boundary conditions must be made more representative in future work to ensure their adequate interpretation.
Subject: Engineering, Electrical And Electronic Engineering Keywords: dentary bone conduction; photoelectric conversion; auditory ossicles
Online: 5 July 2020 (06:36:57 CEST)
Using the headphone jack of a mobile phone, the proposed device connects mobile music playback through a customized red laser pointer cascaded to batteries and the 3.5-mm stereo plug. The red laser pointer flashes according to the frequency of the music currently playing on the mobile phone. The self-made laser pointer, whose wavelength is 630–650 nm and maximum output is 5 mW, lights up when the smartphone’s music starts playing, at a flash frequency matching the music frequency. The frequency signal of the light received by a solar panel is converted to an electrical analog signal, and the analog current signal is amplified through the energy conversion panel and then output to a direct-current motor. The motor shaft does not rotate under a small current, but only vibrates slightly according to the magnitude of the current’s analog frequency. By gripping the motor shaft with their teeth, users can transmit audio to the auditory ossicles (i.e., malleus, incus, and stapes) through the dentary bones. After receiving a music signal, the auditory ossicles enable people with congenital or acquired hearing loss to access external audio.
CASE REPORT | doi:10.20944/preprints202011.0710.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: 22q13.3 duplication; Auditory steady state response, ASSR; SHANK3; biomarker; auditory event-related potential, ERP; autism spectrum disorders; intellectual disabilities
Online: 30 November 2020 (08:33:26 CET)
SHANK3 encodes a scaffold protein involved in postsynaptic receptor density in glutamatergic synapses, including those in parvalbumin-positive (PV+) inhibitory neurons – the key players in the generation of sensory gamma oscillations, such as the 40-Hz auditory steady-state response (ASSR). Here we describe the clinical and neurophysiological phenotype of a 15-year-old girl (SH01) with a microduplication of 16,389 bp in 22q13.33 affecting the SHANK3 gene, in comparison to typically developing (TD) children (n=32). EEG was recorded during binaural presentation of 40-Hz click trains lasting 500 ms, with inter-trial intervals of 500–800 ms. SH01 was diagnosed with mild mental retardation and learning disabilities (F70.88), had problems with reading and writing, and had a smaller vocabulary than TD peers. Her clinical phenotype generally resembled that of previously described patients with 22q13.33 microduplication. SH01 had mild autistic symptoms, but below the threshold for an ASD diagnosis. No seizures or MRI abnormalities were reported. While SH01 had a relatively preserved auditory event-related potential (ERP) with slightly attenuated P1, her 40-Hz ASSR was totally absent, deviating significantly from the TD children’s ASSR. The absence of the 40-Hz ASSR in a patient with a microduplication affecting the SHANK3 gene indicates deficient temporal resolution of the auditory system, which might underlie language problems and may represent a neurophysiological biomarker of SHANK3 abnormalities.
ARTICLE | doi:10.20944/preprints202305.1817.v1
Subject: Medicine And Pharmacology, Otolaryngology Keywords: hearing; auditory processing; cognition; music; byzantine; percussion; rhythm
Online: 26 May 2023 (02:35:33 CEST)
Better auditory processing in musicians has been observed in previous research. As musicians differ in their practice methods and performance environments, we aimed to assess auditory perception in Greek musicians with respect to their musical specialization. If there are differences, this may provide a basis for better shaping auditory training in individuals with auditory processing disorder. The auditory tests administered were speech in noise (Speech in Babble), with and without rhythmic advantage (Word Recognition—Rhythm Component), short-term and working memory (Digit Span – Forward and Backwards), temporal resolution (Gaps In Noise) and detection of the frequency discrimination threshold (DFL). Groups consisted of classical musicians, Byzantine chanters, percussionists, and non-musicians (12 participants/group). Statistical analysis revealed significant differences in: (i) word recognition in noise with a precursor synchronized pulse, between classical and Byzantine musicians, (ii) frequency discrimination in the 2000 Hz region, better in Byzantine musicians than in non-musicians, and (iii) working memory, an advantage detected in musicians. Considering all the above, we conclude that musicians have superior auditory perception regardless of musical specialization. Musical training enhances elements of auditory processing and may be used as an additional rehabilitation tool during auditory training, focusing on specific types of music for specific auditory processing deficits.
ARTICLE | doi:10.20944/preprints202106.0292.v1
Subject: Social Sciences, Psychology Keywords: sonification; gamification; auditory display; smartphone apps; video games
Online: 10 June 2021 (13:21:22 CEST)
As sonification is supposed to communicate information to users, experimental evaluation of the subjective appropriateness and effectiveness of the sonification design is often desired and sometimes indispensable. Experiments in the laboratory are typically restricted to short-term usage by a small sample size under unnatural conditions. We introduce the multi-platform CURAT Sonification Game that allows us to evaluate our sonification design by a large population during long-term usage. Gamification is used to motivate users to interact with the sonification regularly and conscientiously over a long period of time. In this paper we present the sonification game and some initial analyses of the gathered data. Furthermore, we hope to reach more volunteers to play the CURAT Sonification Game and help us evaluate and optimize our psychoacoustic sonification design and give us valuable feedback on the game and recommendations for future developments.
ARTICLE | doi:10.20944/preprints202106.0103.v1
Subject: Social Sciences, Media Studies Keywords: COVID-19; Clubhouse; Social Media; Auditory Learning; Technology
Online: 3 June 2021 (11:35:30 CEST)
Clubhouse is an audio app that allows users to host rooms on a diverse range of topics, from Artificial Intelligence to Philosophy. Along with its educational and serene approach, it is known for its popularity among celebrities, including Elon Musk, Mark Zuckerberg, and the CEO of Shopify, and for its elusive, invite-only and iOS-only route to gaining access. Waiting lists are available for those who do not obtain an invite, but to speed up the process, various sellers on eBay, Reddit, Twitter, etc., charge from $10 to $200 for invites. This research paper covers the phenomenon of Clubhouse and the emergence of audio-only rooms, along with a hypothesis as to why Clubhouse and other apps of a similar kind are experiencing a harsh downfall despite their seemingly successful business model.
ARTICLE | doi:10.20944/preprints202011.0442.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: auditory; deafness; acoustic trauma; hair cells; antioxidant; otoprotection
Online: 17 November 2020 (09:40:53 CET)
Noise induces oxidative stress in the cochlea, followed by sensory cell death and hearing loss. The proof of principle that injections of antioxidant vitamins and Mg2+ prevent noise-induced hearing loss (NIHL) has been established. However, the effectiveness of oral administration remains controversial, and the mechanisms of otoprotection are unclear. Using auditory evoked potentials, quantitative PCR and immunocytochemistry, we explored the effects of oral administration of vitamins A, C, E and Mg2+ (ACEMg) on auditory function and sensory cell survival following NIHL in rats. Oral ACEMg reduced auditory threshold shifts after NIHL. Improved auditory function correlated with increased survival of sensory outer hair cells. In parallel, oral ACEMg modulated the expression timeline of antioxidant enzymes in the cochlea after NIHL: expression of glutathione peroxidase-1 and catalase was increased at 1 and 10 days, respectively. Also, pro-apoptotic caspase-3 and Bax levels were diminished in ACEMg-treated rats at 10 and 30 days, respectively, following noise overstimulation, whereas at day 10 after noise exposure the levels of anti-apoptotic Bcl-2 were significantly increased. Therefore, oral ACEMg improves auditory function by limiting sensory hair cell death in the auditory receptor following NIHL. Regulation of the expression of antioxidant enzymes and apoptosis-related proteins in cochlear structures is involved in this otoprotective mechanism.
ARTICLE | doi:10.20944/preprints201807.0106.v1
Subject: Social Sciences, Cognitive Science Keywords: auditory-visual speech perception; bipolar disorder; speech perception
Online: 6 July 2018 (05:21:19 CEST)
The focus of this study was to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to non-disordered individuals, and whether auditory-visual speech integration differs between the manic and depressive episodes of bipolar disorder. It was hypothesized that the bipolar groups’ auditory-visual speech integration would be less robust than the control group’s. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more than their depressive-phase counterparts. To examine these hypotheses, the McGurk effect paradigm was used with typical auditory-visual (AV) speech as well as auditory-only (AO) and visual-only (VO) stimuli. Results showed that the disordered and non-disordered groups did not differ on auditory-visual (AV) integration or auditory-only (AO) speech perception, but did differ on visual-only (VO) stimuli. The results are interpreted as paving the way for further research in which behavioural and physiological data are collected simultaneously. This will allow us to understand the full dynamics of how auditory and visual speech information (the latter relatively impoverished in bipolar disorder) are actually integrated in people with bipolar disorder.
REVIEW | doi:10.20944/preprints202304.0164.v1
Subject: Social Sciences, Cognitive Science Keywords: Synesthesia; Auditory Tactile Synesthesia; Dopamine; Creativity; Savant; Autism; tDCs
Online: 10 April 2023 (10:11:22 CEST)
This review paper explores auditory tactile (AT) synesthesia, a rare neurological condition where sounds evoke tactile sensations. The paper provides a historical overview of the condition and discusses its epidemiology, with a prevalence of less than 1% of the general population. The neurological basis of AT synesthesia is explored, including the role of cross-modal processing and hyperconnectivity within the brain. The paper also describes the phenomenology of the condition, including the range of tactile sensations that can be experienced in response to different sounds. The occurrence of AT synesthesia in the present-day world is discussed, including its relationship to music and art. Various hypotheses surrounding the development and maintenance of AT synesthesia are reviewed, focusing on genetic and environmental factors. The implications for clinical practice are explored, including potential benefits for individuals with sensory processing disorders. Finally, the paper concludes with a discussion of future directions for research in this field, including the need to explore further the underlying neural mechanisms of AT synesthesia and potential therapeutic interventions.
ARTICLE | doi:10.20944/preprints202301.0202.v1
Subject: Medicine And Pharmacology, Clinical Medicine Keywords: Alpha synchronisation; Working memory; Auditory processing; Modulation detection threshold
Online: 12 January 2023 (01:48:00 CET)
Background: Although hearing aids adequately compensate for hearing loss, a substantial proportion of the population leave their hearing difficulties untreated. Even though this is a well-known clinical issue, the optimal way to address it during the hearing rehabilitation process is still unclear. Purpose: The present study aims to characterise behavioural and neurophysiological auditory and cognitive processing skills in experienced hearing aid users versus those with normal hearing, to provide clinicians with the evidence required to adequately manage the expectations of their clients and thus indirectly reinforce hearing-aid adoption within the adult population with hearing loss. Research design: Behavioural tests included auditory, cognitive, and speech-in-noise tasks; neurophysiological testing included cortical auditory evoked potentials evoked by a /da/ stimulus presented at 65 dB SPL. The tests were selected based on previous literature supporting specific speech-understanding skills. Study sample: Ten participants (7 female, 21–68 years) with bilateral, mild-moderate to moderately-severe sensorineural hearing loss (HL) and 10 with clinically normal hearing (NH; 8 female, 19–62 years) participated in the study. Data collection and analysis: Behavioural data were analysed using multivariate analysis of variance (MANOVA); neurophysiological analyses used independent t-tests, time-frequency analysis and inter-trial phase coherence. Results: The NH and HL groups presented similar scores in all the behavioural tasks. Time-frequency analysis revealed a statistically significant reduction in alpha (8–12 Hz) synchronisation at the centro-frontal electrodes in the HL group – a brain activity pattern that has been associated with listening effort, inhibition and selective attention.
Conclusions and significance: The results support the conclusion that hearing aids are effective in compensating for the audibility of their users, enabling them to perform at levels similar to their normal-hearing peers. However, the reduced alpha synchronisation observed in the HL population indicates that adequate audibility does not extend to improved neural responses. Future studies need to investigate induced activity in speech-understanding paradigms to explore auditory processing differences at the cortical level. Although the sample size is small, the findings have the potential to support clinicians in adequately managing their clients’ expectations regarding the benefits of hearing aid technologies.
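As an illustration of one of the neurophysiological measures named in the abstract, inter-trial phase coherence (ITPC) is conventionally defined as the magnitude of the mean unit phase vector across trials; a minimal NumPy sketch (assuming phase angles have already been extracted, e.g. by a wavelet transform) is:

```python
import numpy as np

def itpc(phases: np.ndarray) -> np.ndarray:
    """Inter-trial phase coherence over trials (axis 0).
    `phases` holds phase angles in radians with shape (n_trials, n_times);
    the result is 1 for perfect phase locking across trials and tends
    toward 0 when phase is random across trials."""
    return np.abs(np.exp(1j * phases).mean(axis=0))

# 40 identical trials are perfectly phase-locked (ITPC == 1 everywhere),
# while 40 trials with uniformly random phase give ITPC near 0.
locked = np.tile(np.linspace(0.0, np.pi, 100), (40, 1))
random_ph = np.random.default_rng(0).uniform(-np.pi, np.pi, (40, 100))
```

Reduced synchronisation of the kind reported for the alpha band corresponds to lower values of this measure across trials at the relevant frequency.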
ARTICLE | doi:10.20944/preprints202307.0368.v1
Subject: Public Health And Healthcare, Health Policy And Services Keywords: Hearing loss; neonatal hearing screening; rescreening; otoacoustic emissions; auditory potentials.
Online: 6 July 2023 (04:06:24 CEST)
Second-level hospitals face peculiarities that hinder the implementation of the hearing rescreening protocol, and these are not uncommon in other settings. This study analyzes the hearing rescreening process in this kind of hospital. A total of 1130 individuals were included. In this cohort, 61.07% were newborns in the hospital who failed their first otoemission test after birth (n=679) or could not have the test performed (n=11) and were then referred to the outpatient clinic. The remaining 38.93% were individuals born in another hospital whose first test was conducted in the outpatient clinic (n=440). A high number of rescreenings were performed outside the recommended time frame, mainly in children referred from another hospital. There was a high rate of loss to follow-up, especially with otolaryngologist referrals. Neonatal hearing screening in second-level hospitals is difficult because of staffing and time constraints. This results in longer than recommended turnaround times and interferes with the timely detection of hearing loss, which is particularly serious in outpatients. Referral of children with impaired screening results to out-of-town centers leads to unacceptable loss to follow-up. Legislative support for all these rescreening issues is necessary. We discuss these findings and propose some solutions.
ARTICLE | doi:10.20944/preprints202305.0359.v1
Subject: Business, Economics And Management, Marketing Keywords: Visual Signal; Auditory Signal; Interaction Effect; Purchase Behavior; AISAS Model
Online: 5 May 2023 (10:59:16 CEST)
This study, based on the AISAS model, explores the impact of the interaction effect between visual and auditory signals on consumer purchase behavior. Using experimental methods, 120 participants were randomly assigned to four different visual and auditory signal combinations, and their purchase intentions and actual purchase behavior were measured. The results show that the interaction effect between visual and auditory signals has a significant impact on both purchase intentions and actual purchase behavior, and there is a significant positive relationship. Specifically, when visual and auditory signals are mutually consistent, consumers have the highest purchase intentions and actual purchase behavior; when both visual and auditory signals are absent, consumers have the lowest purchase intentions and actual purchase behavior; when either the visual or auditory signal is missing, consumers' purchase intentions and actual purchase behavior are in between the two extremes. This study provides a new perspective for understanding consumers' decision-making processes in multi-sensory environments and offers valuable insights for the development of marketing strategies.
ARTICLE | doi:10.20944/preprints202207.0098.v1
Subject: Medicine And Pharmacology, Otolaryngology Keywords: tinnitus; normal hearing; Evoked potentials; auditory; brain stem; otoacoustic emission
Online: 6 July 2022 (13:54:44 CEST)
In patients with unilateral tinnitus with normal hearing, several studies have compared the ipsilateral and contralateral ears; however, few studies have investigated its relationship with the duration of tinnitus. We compared the auditory brainstem response and otoacoustic emission parameters between ipsilateral and contralateral ears in adults with unilateral tinnitus and normal hearing. This retrospective review included 84 patients with unilateral tinnitus and normal hearing who underwent auditory brainstem response and otoacoustic emission; they were categorized according to the duration of tinnitus. The latencies and amplitudes of waves I, III, and V, and V/I ratio of both ears in auditory brainstem response, and the results of distortion-product otoacoustic emission and transient evoked otoacoustic emission were examined. The auditory brainstem response parameters, distortion-product otoacoustic emission parameters, and transient evoked otoacoustic emission parameters between the ipsilateral and contralateral ears along the duration of tinnitus were analyzed. Moreover, the failure rates of both distortion-product otoacoustic emission and transient evoked otoacoustic emission between the ears along with the duration and the effects of the variables on the amplitude and latency of each wave were examined. Although there was little significant difference between the ipsilateral and contralateral ears, laterality seemed to have an effect on wave I latency in the multiple linear regression analysis. The distortion-product otoacoustic emission failure rate of the ipsilateral ear was higher than that of the contralateral ear in all patients. However, there was no remarkable difference between the ears in the distortion-product otoacoustic emission and transient evoked otoacoustic emission parameters throughout the duration. 
We found that outer hair cells and the distal portion of the cochlear nerve are possible pathologic lesions in tinnitus with normal hearing and cochlear synaptopathy could be suspected. Further studies, including those on inner hair cells and higher central cortex, are needed.
REVIEW | doi:10.20944/preprints202307.0327.v1
Subject: Medicine And Pharmacology, Neuroscience And Neurology Keywords: Brainstem auditory evoked potentials; microvascular decompression; hemifacial spasm; vestibulocochlear nerve damage
Online: 6 July 2023 (03:41:17 CEST)
Brainstem auditory evoked potentials (BAEPs) testing is very important when microvascular decompression (MVD) is performed for hemifacial spasm (HFS). The reason is that the vestibulocochlear nerve lies right next to the facial nerve, so it may be affected by surgical manipulation. BAEPs test methods for detecting vestibulocochlear nerve damage during surgery have been developed considerably and are of great help intraoperatively. In most HFS patients with a normal vestibulocochlear nerve, the degree of nerve damage caused by surgery is reflected very well in the BAEPs waveforms. Therefore, real-time testing is the best way to minimize damage to the vestibulocochlear nerve. The purpose of this study was to review the most recently published BAEPs tests used in MVD surgery and to examine in detail the relationship between vestibulocochlear nerve damage and the BAEPs waveforms.
ARTICLE | doi:10.20944/preprints201808.0522.v1
Subject: Social Sciences, Cognitive Science Keywords: speech-to-song illusion, auditory illusion, perception, pace, emotion, language tonality
Online: 30 August 2018 (10:37:13 CEST)
The speech-to-song illusion is an auditory illusion in which repetition of part of a sentence shifts listeners’ perception from speech-like to song-like. This study examines how pace, emotion, and language tonality affect the experience of the speech-to-song illusion. It uses a between-subject (pace: fast, normal, vs. slow) and within-subject (emotion: positive, negative, vs. neutral; language tonality: tonal vs. non-tonal language) design. Sixty Hong Kong college students were randomly assigned to one of the three pace conditions. They listened to 12 audio stimuli, each with repetitions of a short excerpt, and rated on a five-point Likert scale whether the presented phrase sounded like speech or a song. Paired-sample t-tests and repeated-measures ANOVAs were used to analyze the data. The findings reveal that a faster speech pace strengthens the speech-to-song illusion, whereas neither emotion nor language tonality shows a statistically significant influence. This study suggests that the perception of sound lies on a continuum, and it furthers the understanding of song production, in which speech can turn into music through repetitive phrases played at a relatively fast pace.
ARTICLE | doi:10.20944/preprints201703.0199.v1
Subject: Arts And Humanities, Music Keywords: auditory arts; psychiatry; heavy metal music; mental disorder; bipolar; education; awareness
Online: 27 March 2017 (10:45:03 CEST)
1) Background: Bipolar or manic-depressive disorder is a severe mental disease that frequently faces social stigma. Educational and thinking models are needed to increase people’s awareness and understanding of the disorder, and the arts have the potential to achieve this goal. 2) Methods: This paper builds on the recent use of heavy metal music as a thinking and education model. It emphasizes the artistic component of heavy metal and its potential to characterize the symptomatology during the episodes of (hypo)mania and depression and the recurrence of these episodes. Heavy metal music has diversified into subgenres that become allegorical to both the symptoms of episodes and the recurrence of bipolar cycles. 3) Results: Examples of songs are given that mirror distinct facets of the disorder. 4) Conclusion: Although the links drawn between art (music) and science (psychiatry) are inherently subjective, such connections might be used to trigger a learning process, facilitate judgment and decision-making, and induce affective reactions and memory formation in the listener. The approach may facilitate collaborative efforts and serve healthcare professionals and educators as a communication tool to aid the public’s comprehension of the disease and an associated social paradox: on one hand, bipolar disorder incurs substantial costs to society; on the other, it benefits from the creative artistic and scientific endeavors of bipolar individuals, from which cultural and political gains may ensue.
ARTICLE | doi:10.20944/preprints202309.1005.v1
Subject: Arts And Humanities, Other Keywords: metaverse environment; auditory presence; AUX evaluation; evaluation questionnaire design; principal component analysis
Online: 15 September 2023 (05:33:24 CEST)
This study aims to develop an auditory experience evaluation questionnaire to improve the sense of presence in metaverse environments and to derive evaluation components covering auditory presence and auditory user experience (AUX) through a survey. After a survey of 232 participants, five evaluation components were extracted from the auditory presence and AUX evaluation factors through principal component analysis (PCA) and reliability analysis (RA): 'realistic auditory background', 'acoustic aesthetics', 'consideration of acoustic control and accessibility', 'auditory utility and minimalist design', and 'auditory consistency'. Although AUX evaluation factors such as 'ease of access to sound control' are of limited use for improving presence directly, negative presence factors such as 'distraction due to sound' can be mitigated through the AUX evaluation factors. We therefore judge that presence in metaverse environments can be improved by enhancing auditory presence and AUX according to the five evaluation components derived in this study. This study can serve as a basis for developing an auditory experience evaluation questionnaire for metaverse platforms, creating sound design guidelines, and identifying sound development priorities.
CASE REPORT | doi:10.20944/preprints202306.0714.v1
Subject: Public Health And Healthcare, Public Health And Health Services Keywords: adenoideocystic carcinoma; tumours of the external auditory meatus; medical liability; forensic sciences
Online: 9 June 2023 (11:59:10 CEST)
Ceruminous gland tumours are rare neoplasms of the external auditory canal (EAC). Although considered a slow-growing carcinoma, this tumour has a high rate of perineural invasion and metastasis. For this reason, it must be promptly diagnosed and then treated with aggressive surgery combined with postoperative radiation. The present case report concerns an adenoid cystic carcinoma arising in the external auditory canal of a 46-year-old female patient who complained of hypoacusis and pain. The profile of medical liability rests on the delayed diagnosis, by which time the tumour had progressed to invade both the surrounding bone and vascular structures.
ARTICLE | doi:10.20944/preprints202305.0003.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Categorical emotion recognition; Auditory signal processing; Modulation-filtered cochleagram; Multi-level attention
Online: 1 May 2023 (03:06:38 CEST)
Speech emotion recognition is a critical component of natural human-robot interaction. The modulation-filtered cochleagram is a feature based on auditory modulation perception that contains a multi-dimensional spectral-temporal modulation representation. In this study, we propose an emotion recognition framework that uses a multi-level attention network to recognize emotions from the modulation-filtered cochleagram. Channel-level and spatial-level attention modules capture emotional saliency maps of the channel and spatial feature representations from the 3D convolution feature maps, respectively. A temporal-level attention module then captures significant emotional regions from the concatenated feature sequence of the emotional saliency maps. Our experiments on the IEMOCAP dataset demonstrate that the modulation-filtered cochleagram significantly improves categorical emotion prediction performance compared to the other evaluated features. Moreover, our framework achieves an unweighted accuracy of 71% in categorical emotion recognition, outperforming several existing approaches. In summary, our study demonstrates the effectiveness of the modulation-filtered cochleagram in speech emotion recognition, and our multi-level attention framework provides a promising direction for future research in this field.
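As a rough sketch of what a channel-level attention module does, the following squeeze-and-excitation-style function rescales each channel of a (channels × time × frequency) feature map by a learned gate; the shapes, bottleneck size, and random weights are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(fmap, w1, w2):
    """Channel-level attention over a (C, T, F) feature map:
    squeeze by global average pooling, excite with a small
    bottleneck MLP, then rescale each channel by its gate.
    A generic SE-style sketch, not the paper's exact module."""
    squeezed = fmap.mean(axis=(1, 2))             # (C,) per-channel summary
    hidden = np.maximum(squeezed @ w1, 0.0)       # ReLU bottleneck, (R,)
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid gates in (0, 1), (C,)
    return fmap * gates[:, None, None]            # rescaled feature map

C, T, F, R = 16, 10, 8, 4          # channels, time, frequency, bottleneck
fmap = rng.standard_normal((C, T, F))
w1 = rng.standard_normal((C, R)) * 0.1
w2 = rng.standard_normal((R, C)) * 0.1
out = channel_attention(fmap, w1, w2)
```

Because the gates are sigmoid outputs, attention can only attenuate channels here, which is the usual saliency-weighting behavior; a trained network would learn which channels to keep.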
ARTICLE | doi:10.20944/preprints202109.0187.v1
Subject: Medicine And Pharmacology, Otolaryngology Keywords: Smoking; DPOAEs; Contralateral suppression; OAE; Non-smokers; auditory system; ill-effects on efferent system
Online: 10 September 2021 (11:29:00 CEST)
The present study compared the contralateral suppression and the amplitude of distortion product otoacoustic emissions (DPOAEs) between smokers and non-smokers to determine the influence of smoking. Thirty smokers and thirty non-smokers aged 18-40 years with normal hearing sensitivity were considered for the study. For both groups, DPOAEs were measured, and efferent auditory system functioning was assessed by presenting white noise of 50 dB HL to the contralateral ear while recording the DPOAEs. There was no significant effect of age on DPOAE amplitude in either group. However, there were significant differences in DPOAE amplitude between smokers and non-smokers: both the amount of suppression and the DPOAE amplitude were reduced in smokers compared to non-smokers. The study found no significant correlation between the amount of smoking and the amount of suppression. However, there were significant correlations between the amount of smoking and DPOAE amplitude at low and mid frequencies. Therefore, the present study highlights the increased risk of damage to the efferent auditory system and the ill effects of smoking on it.
Subject: Medicine And Pharmacology, Neuroscience And Neurology Keywords: hearing loss; aging; hyperactivity; excitability; loss of inhibition; neurophysiology; auditory perception; neural plasticity; speech processing
Online: 15 April 2021 (13:34:54 CEST)
Many aging adults experience some form of hearing problem that may arise from damage to the auditory periphery. However, it has become increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity, describe the auditory perceptual difficulties that may result from it, and outline open conceptual and methodological questions related to its study. We suggest that hyperactivity alters all aspects of hearing, including spectral, temporal, and spatial hearing, and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating it in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
REVIEW | doi:10.20944/preprints202311.1906.v1
Subject: Biology And Life Sciences, Neuroscience And Neurology Keywords: Arousal threshold; NREM sleep; REM sleep; auditory system; visual system; olfactory system; pain; OFF periods
Online: 29 November 2023 (15:27:57 CET)
When we are asleep we lose the ability to respond promptly to external stimuli, and yet we spend many hours every day in this inherently risky behavioral state. This simple fact strongly suggests that sleep must serve essential functions that rely on the brain going offline, on a daily basis and for long periods of time. If these functions did not require partial sensory disconnection, it would be difficult to explain why they are not performed during waking. Paradoxically, despite its central role in defining sleep and what sleep does, sensory disconnection during sleep remains a mystery. We have a limited understanding of how it is implemented along the sensory pathways; we do not know whether the same mechanisms apply to all sensory modalities, nor to what extent these mechanisms are shared between non-rapid eye movement (NREM) sleep and REM sleep. The main goal of this contribution is to review some knowns and unknowns about sensory disconnection during sleep as a first step toward filling this gap.
ARTICLE | doi:10.20944/preprints202303.0517.v1
Subject: Biology And Life Sciences, Behavioral Sciences Keywords: Autism spectrum disorder; Auditory stream segregation; Hearing assistive technology; Speech-in-noise perception; Tonal language speakers
Online: 30 March 2023 (02:52:15 CEST)
Purpose: Hearing assistive technology (HAT) has been shown to be a viable solution to the speech-in-noise perception (SPIN) issue in children with autism spectrum disorder (ASD); however, little is known about its efficacy in tonal language speakers. This study compared sentence-level SPIN performance between Chinese children with ASD and neurotypical (NT) children and evaluated HAT use in improving SPIN performance and easing SPIN difficulty. Methods: Children with ASD (n=26) and NT children (n=19) aged 6-12 performed two adaptive tests in steady-state noise and three fixed-level tests in quiet and steady-state noise with and without using HAT. Speech recognition thresholds (SRT) and accuracy rates were assessed using adaptive and fixed-level tests, respectively. Parents or teachers of the ASD group completed a questionnaire regarding children’s listening difficulty under six circumstances before and after a ten-day trial period of HAT use. Results: Although the two groups of children had comparable SRTs, the ASD group showed a significantly lower SPIN accuracy rate than the NT group. Also, a significant impact of noise was found in the ASD group’s accuracy rate, but not in the NT group’s. There was a general improvement in the ASD group’s SPIN performance with HAT and a decrease in their listening difficulty ratings across all conditions after the device trial. Conclusion: The findings indicated inadequate SPIN in the ASD group using a relatively sensitive measure to gauge SPIN performance among children. The markedly increased accuracy rate in noise during HAT-on sessions for the ASD group confirmed the feasibility of HAT for improving SPIN performance in controlled laboratory settings, and the reduced post-use ratings of listening difficulty further confirmed the benefits of HAT use in daily scenarios.
ARTICLE | doi:10.20944/preprints202208.0277.v1
Subject: Medicine And Pharmacology, Pediatrics, Perinatology And Child Health Keywords: Hearing loss; conductive; sensorineural; outer ear; middle ear; inner ear; SNHL; Cochlear; auditory; physical examination; history
Online: 16 August 2022 (04:04:24 CEST)
Hearing loss in infancy leads to preventable delays in speech, language, and cognitive development [1, 2]. Sensorineural hearing loss (SNHL) is caused by damage to the inner ear, chiefly the cochlea, with or without involvement of the auditory nerve (cranial nerve VIII). The ear comprises three anatomic areas: the outer ear, composed of the auricle and external auditory canal; the middle ear, which includes the tympanic membrane, ossicles, and middle ear space; and the inner ear, composed of the cochlea, semicircular canals, and internal auditory canals. The unique anatomical shape of the auricle catches incoming sound waves and directs them down the external auditory canal. Hearing risk assessment should be part of all health visits, while regular hearing screening is performed for all children from 4 to 21 years [1, 2]. Assessment of hearing loss includes history, physical examination, and specific hearing assessment tests.
ARTICLE | doi:10.20944/preprints202211.0046.v1
Subject: Biology And Life Sciences, Animal Science, Veterinary Science And Zoology Keywords: zebrafish; classical conditioning; operant-conditioning; software; auditory discrimination; learning; spatial working memory; decision making; reward; vision; hearing
Online: 2 November 2022 (06:08:45 CET)
Directed movement towards a target requires spatial working memory, including processing of sensory inputs and motivational drive. In a stimulus-driven, operant conditioning paradigm designed to train zebrafish, we present a pulse of light via LEDs and/or sounds via an underwater transducer. A webcam placed below a glass tank records fish swimming behavior. During operant conditioning, a fish must interrupt an infrared beam at one location to obtain a small food reward at the same or a different location. A timing-gated interrupt activates robotic-arm and feeder stepper motors via custom software controlling a microprocessor (Arduino). “Ardulink”, a JAVA facility, implements Arduino-computer communication protocols. In this way, full automation of stimulus-conditioned directional swimming is achieved. Precise multiday scheduling of training, including the timing, location, and intensity of stimulus parameters, as well as feeder control, is accomplished via a user-friendly interface. Our training paradigm permits tracking of learning by monitoring turning, location, response times, and directional swimming of individual fish. This facilitates comparison of performance within and across a cohort of animals. We demonstrate the ability to train and test zebrafish using visual and auditory stimuli. Current methods for associative conditioning often involve human intervention, which is labor intensive, stressful to animals, and introduces noise into the data. Our relatively simple yet flexible paradigm requires only a simple apparatus and minimal human intervention. Our scheduling and control software and apparatus (NemoTrainer) can be used to screen neurologic drugs and to test the effects of CRISPR-based and optogenetic modification of neural circuits on sensation, locomotion, learning, and memory.
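The timing-gated interrupt logic described above can be sketched as a small state machine: a beam-break event dispenses food only if it falls inside the scheduled response window and outside a refractory period. The class and parameter names here are hypothetical and do not reflect NemoTrainer's actual API.

```python
from dataclasses import dataclass

@dataclass
class GatedFeeder:
    """Timing-gated reward logic (times in seconds)."""
    gate_open: float            # start of the response window
    gate_close: float           # end of the response window
    refractory: float = 2.0     # minimum gap between rewards
    _last_reward: float = float("-inf")

    def beam_break(self, t: float) -> bool:
        """Return True (step the feeder motor) only for a break
        inside the gate and past the refractory period."""
        in_gate = self.gate_open <= t <= self.gate_close
        rested = (t - self._last_reward) >= self.refractory
        if in_gate and rested:
            self._last_reward = t
            return True
        return False            # ignore out-of-window or repeated breaks

feeder = GatedFeeder(gate_open=5.0, gate_close=15.0)
events = [2.0, 6.0, 6.5, 9.0, 20.0]       # simulated beam-break times
rewards = [feeder.beam_break(t) for t in events]
```

Gating out early, late, and rapid-repeat beam breaks is what lets the apparatus score only stimulus-conditioned responses without human intervention.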
ARTICLE | doi:10.20944/preprints202307.0834.v1
Subject: Engineering, Mechanical Engineering Keywords: emotion recognition; auditory stimulation; EEG signals; convolutional neural network (CNN); long short-term memory (LSTM); brain-computer interface (BCI)
Online: 13 July 2023 (04:53:51 CEST)
Emotions play a vital role in understanding human behavior and interpersonal relationships. The ability to recognize emotions through electroencephalogram (EEG) signals offers an alternative to traditional methods such as questionnaires, enabling the identification of emotional states in a non-intrusive manner. Automatic emotion recognition holds great potential, eliminating the need for clinical examinations or physical visits and thereby contributing significantly to the advancement of brain-computer interface (BCI) technology. However, one of the key challenges lies in effectively selecting and extracting relevant features from the EEG signal to establish meaningful distinctions between different emotional states; the process of feature selection is often time-consuming and demanding. In this research, we propose an approach for automatically identifying three emotional states (positive, negative, and neutral) from EEG signals recorded under auditory stimulation. Our method applies the raw EEG signal directly to a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) architecture, bypassing the conventional feature extraction and selection steps, a significant departure from the existing literature. The proposed network architecture comprises ten convolutional layers, followed by three LSTM layers and two fully connected layers. Through extensive simulations and evaluations on 12 active channels, our algorithm demonstrates exceptional performance, achieving accuracies of 97.42% and 95.23% for the binary classification of negative and positive emotions, and Cohen's kappa coefficients of 0.96 and 0.93 for the three-class classification (negative, neutral, and positive), respectively. These promising results highlight the efficacy of our methodology and its potential implications for advancing emotion recognition using EEG signals.
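The way a raw EEG window shrinks on its way to the LSTM layers can be illustrated with the standard convolution output-length formula; the kernel size, padding, and pooling choices below are assumptions for the sketch, not the paper's reported hyperparameters.

```python
def conv_out(length, kernel, stride=1, padding=0):
    """Standard 1-D convolution output-length formula."""
    return (length + 2 * padding - kernel) // stride + 1

def shape_through_stack(length, n_conv, kernel=3, pool=2):
    """Trace the temporal dimension through a stack of
    'same' convolutions each followed by a /2 max-pool,
    a rough sketch of how a raw EEG window is compressed
    before reaching recurrent layers."""
    for _ in range(n_conv):
        length = conv_out(length, kernel, padding=1)  # 'same' conv keeps length
        length = length // pool                       # pooling halves it
    return length

# A hypothetical 1280-sample EEG window through 10 conv+pool stages:
final = shape_through_stack(1280, n_conv=10)
```

Each pooling stage halves the temporal dimension, so ten stages compress 1280 samples down to a single step, which is why deep stacks like this drastically shorten the sequence the LSTM layers must model.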
ARTICLE | doi:10.20944/preprints202205.0167.v1
Subject: Physical Sciences, Acoustics Keywords: sonification evaluation; psychoacoustics; just noticeable difference; difference limen; discrimination threshold; comparison of sonification designs; maximum likelihood procedure; auditory display
Online: 12 May 2022 (09:50:51 CEST)
The sonification of data to communicate information to a user is a relatively new approach that established itself around the 1990s. To date, many researchers design their individual sonifications from scratch; there are no standards for sonification design and evaluation, though researchers and practitioners have formulated several requirements and established several methods. There is wide consensus that psychoacoustics could play an important role in the sonification design and evaluation phases, but this requires an adaptation of psychoacoustic methods to the signal types and requirements of sonification. In this method paper we present PAMPAS, a PsychoAcoustical Method for the Perceptual Analysis of multidimensional Sonification. A well-defined, well-established, efficient, reliable, and replicable just-noticeable-difference experiment using the Maximum Likelihood Procedure serves as the basis for achieving linearity of parameter mapping during the sonification design stage and for identifying and quantifying perceptual effects during the sonification evaluation stage, namely the perceptual resolution, hysteresis effects, and perceptual interferences. The experiment results are universal scores from a standardized data space and a standardized procedure. These scores can serve to compare multiple sonification designs of a single researcher, or even across different research groups. The method can supplement other sonification design and evaluation methods from a perceptual viewpoint.
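A maximum-likelihood threshold estimate of the kind such a procedure relies on can be sketched as follows: given a trial history, pick the candidate threshold whose psychometric function best explains the observed responses. The function shape, slope, guess rate, and trial data are generic assumptions, not PAMPAS itself.

```python
import math

def logistic(x, threshold, slope=1.0, guess=0.33, lapse=0.02):
    """Psychometric function: probability of a correct response at
    stimulus difference x (guess rate for a 3-alternative task)."""
    core = 1.0 / (1.0 + math.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * core

def ml_threshold(trials, candidates):
    """Maximum-likelihood JND estimate: the candidate threshold
    under which the (stimulus, correct?) history is most probable."""
    def log_lik(th):
        ll = 0.0
        for x, correct in trials:
            p = logistic(x, th)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    return max(candidates, key=log_lik)

# Simulated trials: differences above ~2 units are reliably heard
trials = [(0.5, False), (1.0, False), (1.5, False),
          (2.5, True), (3.0, True), (4.0, True), (5.0, True)]
best = ml_threshold(trials, [i * 0.25 for i in range(1, 20)])
```

In an adaptive version, the next stimulus would be placed at the current best estimate after every trial, which is what makes the Maximum Likelihood Procedure efficient compared with fixed staircases.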
ARTICLE | doi:10.20944/preprints201808.0523.v1
Subject: Social Sciences, Cognitive Science Keywords: frequency difference limens; blindfold; visual cues; auditory-visual synesthesia; gliding frequencies; perceptual limit; common resource theory; multiple resource model
Online: 30 August 2018 (10:40:28 CEST)
How perceptual limits can be overcome has long been examined by psychologists. This study investigated whether visual cues, blindfolding, visual-auditory synesthetic experience, and music training could facilitate a smaller frequency difference limen (FDL) in a gliding frequency discrimination test. It was hoped that auditory limits could be overcome through visual facilitation, visual deprivation, involuntary cross-modal sensory experience, or music practice. Ninety university students with no visual or auditory impairment were recruited for this one-between (blindfold/visual cue), one-within (control/experimental session) design. A MATLAB program tested their FDL with an alternative forced-choice task (gliding upwards/gliding downwards/no change), and two questionnaires (Vividness of Mental Imagery Questionnaire & Projector-Associator Test) were used to assess their tendency toward synesthesia. Participants with music training showed a significantly smaller FDL; in contrast, being blindfolded, being provided with visual cues, or having prior synesthetic experience did not significantly reduce the FDL. However, the results showed a trend toward reduced FDLs under blindfolding, indicating that visual deprivation might slightly expand the limits of auditory perception. Overall, the current study suggests that inter-sensory perception can be enhanced through training but not through reallocating cognitive resources to certain modalities. Future studies are recommended to verify the effects of music practice on other perceptual limits.
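A gliding stimulus of the kind used in such a task can be synthesized by integrating the instantaneous frequency sample by sample; the sample rate, duration, and labeling rule below are illustrative assumptions, not the study's MATLAB implementation.

```python
import math

def glide(f_start, f_end, dur=0.5, sr=8000):
    """Synthesize a linear frequency glide by accumulating phase
    from the instantaneous frequency (illustrative parameters)."""
    n = int(dur * sr)
    samples, phase = [], 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n   # linear frequency sweep
        phase += 2.0 * math.pi * f / sr           # integrate to phase
        samples.append(math.sin(phase))
    return samples

def correct_label(f_start, f_end, fdl):
    """Ground-truth answer for one forced-choice trial: 'up',
    'down', or 'same' when the sweep is below the limen."""
    if abs(f_end - f_start) < fdl:
        return "same"
    return "up" if f_end > f_start else "down"

tone = glide(440.0, 460.0)   # a 20 Hz upward glide over 0.5 s
```

Accumulating phase rather than computing sin(2*pi*f(t)*t) directly avoids the well-known chirp artifact in which the perceived sweep extent is doubled.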
ARTICLE | doi:10.20944/preprints202308.1727.v1
Subject: Medicine And Pharmacology, Otolaryngology Keywords: third window effect; otosclerosis; osteogenesis imperfecta; cavitating otosclerosis; cavitating osteogenesis imperfecta; pseudo-CHL; internal auditory canal diverticulum; double ring effect; cochlear implant
Online: 25 August 2023 (04:57:24 CEST)
Several pathologies can change the anatomy of the otic capsule and distort the bone density of the bony structures of the inner ear, and otosclerosis is one of the most frequent. Similar behavior has been shown in patients affected by osteogenesis imperfecta (OI), a genetic disease caused by a mutation in the genes coding for type I (pro)collagen. In particular, both otosclerosis and OI can lead to bone resorption, creating pericochlear cavitations in contact with the internal auditory canal (IAC). In this regard, we have collected 5 cases presenting this characteristic, whose audiological data and clinical history were analyzed. This feature can be regarded as a potential cause of a third-window effect, because it causes an energy loss during the transmission of sound waves from the oval window (OW) away from the basilar membrane.
REVIEW | doi:10.20944/preprints202308.0104.v1
Subject: Medicine And Pharmacology, Neuroscience And Neurology Keywords: blink reflex; brainstem auditory evoked potentials; brainstem reflexes; Chiari type 1 malformation; electromyography; evoked potentials; intraoperative monitoring; motor evoked potentials; somatosensory evoked potentials; syringomyelia.
Online: 2 August 2023 (10:08:37 CEST)
Chiari malformation type 1 (CM1) includes a series of congenital anomalies that share ectopia of the cerebellar tonsils below the foramen magnum, in some cases associated with syringomyelia or hydrocephalus. CM1 can cause dysfunction of the brainstem, spinal cord, and cranial nerves. This functional alteration of the nervous system can be detected by various modalities of neurophysiological tests, such as brainstem auditory evoked potentials, somatosensory evoked potentials, motor evoked potentials, electromyography and nerve conduction studies of the cranial nerves and spinal roots, as well as brainstem reflexes. The main goal of this study is to review the findings of multimodal neurophysiological examinations in published studies of patients with CM1 and their indication in the diagnosis, treatment, and follow-up of these patients, as well as their utility in intraoperative monitoring.