ARTICLE | doi:10.20944/preprints202104.0234.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: sonification apps; auditory displays; torpedo level; spirit level; tools; accessibility; auditory feedback; auditory user interface
Online: 8 April 2021 (11:25:41 CEST)
This paper presents Tiltification, a multimodal spirit level application for smartphones. The non-profit app was produced by students in the master project “Sonification Apps” in the winter term 2020/21 at the University of Bremen. In the app, psychoacoustic sonification is used to give feedback on the device’s rotation angles in two plane dimensions, allowing users to level furniture or take perfectly horizontal photos. Tiltification supplements the market of spirit level apps with the unique feature of auditory information processing. This provides additional benefit in comparison to a physical spirit level and greater accessibility for visually and cognitively impaired people. We argue that distributing sonification apps through mainstream channels contributes to establishing sonification in the market and making it better known to users outside the scientific domain. We hope that the auditory display community will support us by using and recommending the app and by providing valuable feedback on the app’s functionality and design, and on our communication, advertisement, and distribution strategy.
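The abstract does not specify Tiltification’s actual psychoacoustic mapping, but the idea of sonifying two rotation angles can be sketched as follows. This is a minimal, hypothetical mapping (the anchor frequency, angle range, and parameter choices are assumptions, not the app’s design): tilt magnitude drives frequency deviation from an anchor tone, and roll direction drives stereo panning, so a perfectly level device yields the centered anchor tone.

```python
def tilt_to_sound(pitch_deg, roll_deg, max_deg=45.0):
    """Map two rotation angles to illustrative audio parameters.

    Hypothetical mapping (not Tiltification's actual design):
    - overall tilt magnitude -> frequency deviation from a 440 Hz anchor
    - roll direction -> stereo pan (-1 = full left, +1 = full right)
    A level device (0, 0) yields the anchor tone, centered.
    """
    clamp = lambda a: max(-max_deg, min(max_deg, a))  # limit to range
    pitch, roll = clamp(pitch_deg), clamp(roll_deg)
    # normalized tilt magnitude: 0 = level, 1 = maximally tilted
    tilt = min(1.0, (pitch ** 2 + roll ** 2) ** 0.5 / max_deg)
    frequency_hz = 440.0 * (2.0 ** tilt)  # up to one octave above the anchor
    pan = roll / max_deg                  # left/right follows the roll sign
    return frequency_hz, pan
```

For example, `tilt_to_sound(0.0, 0.0)` returns the centered 440 Hz anchor, while a full 45° roll returns the octave above, panned hard right.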
ARTICLE | doi:10.20944/preprints201911.0388.v1
Subject: Medicine & Pharmacology, Other Keywords: social noise; auditory, non-auditory noise effects; personal music players; university students
Online: 30 November 2019 (10:07:18 CET)
Purpose: The study aimed to quantify the effects of social noise (personal music players (PMP), high-intensity noise exposure events) and road traffic noise exposure in a sample of Slovak university students living and studying in Bratislava. Methods: 1,003 university students (306 males and 697 females, average age 23.13±2) were enrolled in the study; 347 lived in a student housing facility exposed to road traffic noise (LAeq = 67.6 dB) and 656 in the control facility (LAeq = 53.4 dB). Respondents completed a validated ICBEN 5-grade scale “Noise annoyance questionnaire”. Exposure to PMP was objectified by converting the subjective evaluation of the volume setting and listening duration. With the cooperation of an ENT specialist, we arranged audiometric examinations of a pilot sample of 41 volunteers. Results: Of the total sample of 1,003 students, 794 (79.16%) reported using a PMP in the course of the last week, for an average of 285 minutes. There was a significant difference in PMP use between the exposed (85.59%) and the control group (75.76%) (p=0.01). Among PMP users, 30.7% exceeded the LAV (lower action value for industry, LAeq,8h = 80 dB). Audiometry testing of the pilot sample of volunteers (n=41) indicated a hearing threshold shift at higher frequencies in 22% of subjects. Conclusions: The results of this study of young healthy individuals showed the importance of exposure to environmental noise from different sources (transportation, neighborhood, construction, entertainment facilities, etc.) as well as social noise, and the need for prevention and intervention.
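The LAV comparison above relies on normalizing a listening session to an 8-hour equivalent continuous level. A minimal sketch of that normalization, assuming the standard equal-energy principle (the study's exact conversion of volume settings to dB levels is not given in the abstract):

```python
import math

def laeq_8h(level_db, minutes):
    """Normalize a listening session to an 8-hour equivalent continuous
    level (LAeq,8h) via the equal-energy principle: halving the exposure
    time lowers the equivalent level by about 3 dB."""
    return level_db + 10.0 * math.log10(minutes / 480.0)  # 480 min = 8 h

def exceeds_lav(level_db, minutes, lav_db=80.0):
    """Check a session against the lower action value used in the study
    (industrial LAeq,8h = 80 dB)."""
    return laeq_8h(level_db, minutes) >= lav_db
```

Under these assumptions, listening at 83 dB for the study's average of 285 minutes yields an LAeq,8h of roughly 80.7 dB and thus exceeds the LAV, while 75 dB for the same duration does not.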
REVIEW | doi:10.20944/preprints202207.0132.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: developmental stuttering; attention; auditory feedback
Online: 8 July 2022 (05:24:28 CEST)
It has long been known that many people who stutter are immediately fluent in certain conditions, for instance, when they speak in unison with others, in sync with the clicking of a metronome, or when they hear themselves speak in an altered manner. Understanding why stuttering is reduced or even eliminated in such conditions is desirable because it may help explain why stuttering occurs in normal speaking conditions. However, empirical findings in this area appear conflicting and confusing, especially with regard to the role of auditory feedback. This article gives an overview of the variety and diversity of fluency-enhancing conditions and of the theories proposed to explain their effect. These theories are evaluated in the light of recent empirical findings. A new hypothesis is proposed, based on findings showing that speech processing is limited without attention to the auditory channel. It is assumed that fluency-enhancing conditions draw the speaker’s attention to the auditory channel and thereby improve the processing of auditory feedback and its use in speech control. Implications of this account for a causal theory of stuttering and for the treatment of the disorder are discussed.
ARTICLE | doi:10.20944/preprints202206.0204.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: sonification evaluation, auditory display evaluation, visualization
Online: 14 June 2022 (11:10:46 CEST)
Comparing sonification with visualization is like comparing apples and oranges. While visualizations are ubiquitous to the public and have established names, principles, application areas, and sophisticated designs, sonifications tend to be unique, self-made, and completely new to users. In this study we developed a rudimentary visualization that is closely related to the principle of the sonification designs that we want to evaluate. In addition, we implemented a prototypical sonification that uses the most common mapping principles. Experiment results show that participants perform similarly well using the rudimentary visualization and the prototypical sonification: much better than chance but significantly worse than with our new sonification design. We therefore argue that both rudimentary visualizations and prototypical sonifications can serve as suitable benchmarks against which to evaluate new sonification designs.
Subject: Engineering, Electrical & Electronic Engineering Keywords: dentary bone conduction; photoelectric conversion; auditory ossicles
Online: 5 July 2020 (06:36:57 CEST)
Using the headphone jack of a mobile phone, the proposed device connects mobile music playback to a customized red laser pointer that is cascaded to batteries and the 3.5-mm stereo plug. The red laser pointer flashes according to the frequency of the music currently playing on the mobile phone. The self-made laser pointer has a wavelength of 630–650 nm and a maximum output of 5 mW; it lights up when the smartphone’s music starts playing, flashing at a frequency matching the music signal. The light signal received by a solar panel is converted to an analog electrical signal, which is amplified through the energy conversion panel and then output to a direct-current motor. The motor shaft does not rotate under this small current; rather, it only vibrates slightly according to the analog frequency of the current. By gripping the motor shaft with their teeth, users can transmit audio to the auditory ossicles (i.e., malleus, incus, and stapes) through the dentary bones. After receiving a music signal, the auditory ossicles enable people with congenital or acquired hearing loss to access external audio.
CASE REPORT | doi:10.20944/preprints202011.0710.v1
Subject: Medicine & Pharmacology, Allergology Keywords: 22q13.3 duplication; Auditory steady state response, ASSR; SHANK3; biomarker; auditory event-related potential, ERP; autism spectrum disorders; intellectual disabilities
Online: 30 November 2020 (08:33:26 CET)
SHANK3 encodes a scaffold protein involved in postsynaptic receptor density in glutamatergic synapses, including those in parvalbumin (PV)+ inhibitory neurons – the key players in the generation of sensory gamma oscillations, such as the 40-Hz auditory steady-state response (ASSR). Here we describe the clinical and neurophysiological phenotype of a 15-year-old girl (SH01) with a microduplication of 16,389 bp in 22q13.33, affecting the SHANK3 gene, in comparison to typically developing (TD) children (n=32). EEG was recorded during binaural presentation of 40-Hz click trains lasting 500 ms with inter-trial intervals of 500–800 ms. SH01 was diagnosed with mild mental retardation and learning disabilities (F70.88) and had problems with reading and writing, as well as a smaller vocabulary than TD peers. Her clinical phenotype generally resembled that of previously described patients with 22q13.33 microduplication. SH01 had mild autistic symptoms, but below the threshold for an ASD diagnosis. No seizures or MRI abnormalities were reported. While SH01 had a relatively preserved auditory event-related potential (ERP) with a slightly attenuated P1, her 40-Hz ASSR was totally absent, deviating significantly from the TD children’s ASSR. The absence of the 40-Hz ASSR in a patient with a microduplication affecting the SHANK3 gene indicates a deficient temporal resolution of the auditory system, which might underlie language problems and represent a neurophysiological biomarker of SHANK3 abnormalities.
ARTICLE | doi:10.20944/preprints202106.0292.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: sonification; gamification; auditory display; smartphone apps; video games
Online: 10 June 2021 (13:21:22 CEST)
As sonification is supposed to communicate information to users, experimental evaluation of the subjective appropriateness and effectiveness of a sonification design is often desired and sometimes indispensable. Experiments in the laboratory are typically restricted to short-term usage by a small sample under unnatural conditions. We introduce the multi-platform CURAT Sonification Game, which allows us to evaluate our sonification design with a large population during long-term usage. Gamification is used to motivate users to interact with the sonification regularly and conscientiously over a long period of time. In this paper we present the sonification game and some initial analyses of the gathered data. Furthermore, we hope to reach more volunteers who will play the CURAT Sonification Game, help us evaluate and optimize our psychoacoustic sonification design, and give us valuable feedback on the game as well as recommendations for future developments.
ARTICLE | doi:10.20944/preprints202106.0103.v1
Online: 3 June 2021 (11:35:30 CEST)
Clubhouse is an audio-based app that allows users to host rooms on a diverse range of topics, from artificial intelligence to philosophy. Along with its educational and serene approach, it is known for its popularity amongst celebrities, including Elon Musk, Mark Zuckerberg, and the CEO of Shopify, and for the elusive, iOS-only invite required to gain access to Clubhouse. Waiting lists are available for those without an invite, but to speed up the process, various sellers on eBay, Reddit, Twitter, etc., charge from $10 to $200 for invites. This research paper covers the phenomenon of Clubhouse and the emergence of audio-only rooms, along with a hypothesis of why Clubhouse and other apps of a similar kind are experiencing a harsh downfall despite their seemingly successful business model.
ARTICLE | doi:10.20944/preprints202011.0442.v1
Subject: Medicine & Pharmacology, Allergology Keywords: auditory; deafness; acoustic trauma; hair cells; antioxidant; otoprotection
Online: 17 November 2020 (09:40:53 CET)
Noise induces oxidative stress in the cochlea followed by sensory cell death and hearing loss. The proof of principle that injections of antioxidant vitamins and Mg2+ prevent noise-induced hearing loss (NIHL) has been established. However, the effectiveness of oral administration remains controversial and the otoprotective mechanisms unclear. Using auditory evoked potentials, quantitative PCR, and immunocytochemistry, we explored the effects of oral administration of vitamins A, C, E and Mg2+ (ACEMg) on auditory function and sensory cell survival following NIHL in rats. Oral ACEMg reduced auditory threshold shifts after NIHL. Improved auditory function correlated with increased survival of sensory outer hair cells. In parallel, oral ACEMg modulated the expression timeline of antioxidant enzymes in the cochlea after NIHL: there was increased expression of glutathione peroxidase-1 and catalase at 1 and 10 days, respectively. Also, pro-apoptotic Caspase-3 and Bax levels were diminished in ACEMg-treated rats at 10 and 30 days, respectively, following noise overstimulation, whereas at day 10 after noise exposure the levels of anti-apoptotic Bcl-2 were significantly increased. Therefore, oral ACEMg improves auditory function by limiting sensory hair cell death in the auditory receptor following NIHL. Regulation of the expression of antioxidant enzymes and apoptosis-related proteins in cochlear structures is involved in this otoprotective mechanism.
ARTICLE | doi:10.20944/preprints201807.0106.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: auditory-visual speech perception; bipolar disorder; speech perception
Online: 6 July 2018 (05:21:19 CEST)
The focus of this study was to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to non-disordered individuals, and whether auditory-visual speech integration differs between the manic and depressive episodes of bipolar disorder. It was hypothesized that the bipolar groups’ auditory-visual speech integration would be less robust than the control group’s. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more than their depressive-phase counterparts. To examine these hypotheses, the McGurk effect paradigm was used with auditory-visual (AV) speech, auditory-only (AO) speech, and visual-only (VO) stimuli. Results showed that the disordered and non-disordered groups did not differ on AV integration or AO speech perception, but did differ on VO stimuli. The results are interpreted as paving the way for further research in which both behavioural and physiological data are collected simultaneously. This will allow us to understand the full dynamics of how auditory and visual speech information (the latter relatively impoverished in bipolar disorder) are integrated in people with bipolar disorder.
ARTICLE | doi:10.20944/preprints202207.0098.v1
Subject: Medicine & Pharmacology, Other Keywords: tinnitus; normal hearing; Evoked potentials; auditory; brain stem; otoacoustic emission
Online: 6 July 2022 (13:54:44 CEST)
In patients with unilateral tinnitus and normal hearing, several studies have compared the ipsilateral and contralateral ears; however, few studies have investigated the relationship with the duration of tinnitus. We compared auditory brainstem response and otoacoustic emission parameters between the ipsilateral and contralateral ears in adults with unilateral tinnitus and normal hearing. This retrospective review included 84 patients with unilateral tinnitus and normal hearing who underwent auditory brainstem response and otoacoustic emission testing; they were categorized according to the duration of tinnitus. The latencies and amplitudes of waves I, III, and V and the V/I ratio of both ears in the auditory brainstem response, and the results of distortion-product otoacoustic emission and transient evoked otoacoustic emission, were examined. The auditory brainstem response, distortion-product otoacoustic emission, and transient evoked otoacoustic emission parameters of the ipsilateral and contralateral ears were analyzed across the duration of tinnitus. Moreover, the failure rates of both distortion-product and transient evoked otoacoustic emissions in the two ears across the duration, and the effects of the variables on the amplitude and latency of each wave, were examined. Although there were few significant differences between the ipsilateral and contralateral ears, laterality appeared to affect wave I latency in the multiple linear regression analysis. The distortion-product otoacoustic emission failure rate of the ipsilateral ear was higher than that of the contralateral ear in all patients. However, there was no remarkable difference between the ears in the distortion-product and transient evoked otoacoustic emission parameters throughout the duration. We found that the outer hair cells and the distal portion of the cochlear nerve are possible sites of pathology in tinnitus with normal hearing, and cochlear synaptopathy could be suspected. Further studies, including those on inner hair cells and the higher central cortex, are needed.
ARTICLE | doi:10.20944/preprints201808.0522.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: speech-to-song illusion, auditory illusion, perception, pace, emotion, language tonality
Online: 30 August 2018 (10:37:13 CEST)
The speech-to-song illusion is an auditory illusion in which the repetition of part of a sentence shifts people’s perception from speech-like to song-like. This study examined how pace, emotion, and language tonality affect people’s experience of the speech-to-song illusion. It used a between-subjects (pace: fast, normal, vs. slow) and within-subjects (emotion: positive, negative, vs. neutral; language tonality: tonal vs. non-tonal language) design. Sixty Hong Kong college students were randomly assigned to one of the three pace conditions. They listened to 12 audio stimuli, each with repetitions of a short excerpt, and rated on a five-point Likert scale whether the presented phrase sounded like speech or like a song. Paired-sample t-tests and repeated-measures ANOVAs were used to analyze the data. The findings reveal that a faster speech pace strengthens the tendency toward the speech-to-song illusion, whereas neither emotion nor language tonality showed a statistically significant influence. This study suggests that the perception of sound lies on a continuum, and it furthers the understanding of song production, in which speech can turn into music through repeated phrases played at a relatively fast pace.
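The abstract names paired-sample t-tests as the analysis method for the matched ratings. A minimal sketch of the underlying statistic, assuming matched rating lists per participant (this is an illustration of the test, not the authors' analysis code):

```python
import math

def paired_t(x, y):
    """Paired-sample t statistic for two matched rating lists
    (e.g., song-likeness ratings in two within-subject conditions)."""
    assert len(x) == len(y) and len(x) > 1
    d = [a - b for a, b in zip(x, y)]       # per-participant differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)        # t with n-1 degrees of freedom
```

The resulting t value would then be compared against a t distribution with n-1 degrees of freedom; in practice a statistics library handles the p-value lookup.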
ARTICLE | doi:10.20944/preprints201703.0199.v1
Subject: Arts & Humanities, Music Studies Keywords: auditory arts; psychiatry; heavy metal music; mental disorder; bipolar; education; awareness
Online: 27 March 2017 (10:45:03 CEST)
1) Background: Bipolar or manic-depressive disorder is a malign mental disease that frequently faces social stigma. Educational and thinking models are needed to increase people’s awareness and understanding of the disorder. The arts have potential to achieve this goal. 2) Methods: This paper builds on the recent use of heavy metal music as a thinking and education model. It emphasizes the artistic component of heavy metal and its potential to characterize the symptomatology during the episodes of (hypo)mania and depression and the recurrence of these episodes. Heavy metal music has diversified into subgenres that become allegorical to both the symptoms of episodes and the recurrence of bipolar cycles. 3) Results: Examples of songs are given that mirror distinct facets of the disorder. 4) Conclusion: Although the links drawn between art (music) and science (psychiatry) are inherently subjective, such connections might be used to trigger a learning process, facilitate judgment and decision-making, and induce affective reactions and memory formation in the listener. The approach may facilitate collaborative efforts and serve healthcare professionals and educators as a communication tool to aid the public’s comprehension of the disease and an associated social paradox: On one hand, bipolar disorder incurs substantial costs to society. On the other hand, it benefits from the creative artistic and scientific endeavors of bipolar individuals from which cultural and political gains may ensue.
ARTICLE | doi:10.20944/preprints202109.0187.v1
Subject: Medicine & Pharmacology, Other Keywords: Smoking; DPOAEs; Contralateral suppression; OAE; Non-smokers; auditory system; ill-effects on efferent system
Online: 10 September 2021 (11:29:00 CEST)
The present study compared contralateral suppression and the amplitude of distortion product otoacoustic emissions (DPOAEs) between smokers and non-smokers to determine the influence of smoking. Thirty smokers and thirty non-smokers within the age range of 18–40 years with normal hearing sensitivity were considered for the study. For both groups, DPOAEs were measured, and efferent auditory system functioning was assessed by presenting white noise of 50 dB HL to the contralateral ear while recording the DPOAEs. There was no significant effect of age on the amplitude of DPOAEs in either group. However, there were significant differences in the amplitude of DPOAEs between smokers and non-smokers: both the amount of suppression and the DPOAE amplitude were reduced in smokers compared to non-smokers. The study found no significant correlation between the amount of smoking and the amount of suppression. However, there were significant correlations between the amount of smoking and DPOAE amplitude at low and mid frequencies. Therefore, the present study highlights the increased risk of damage to the efferent auditory system and the ill effects of smoking on it.
Keywords: hearing loss; aging; hyperactivity; excitability; loss of inhibition; neurophysiology; auditory perception; neural plasticity; speech processing
Online: 15 April 2021 (13:34:54 CEST)
Many aging adults experience some form of hearing problem that may arise from auditory peripheral damage. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity, describe auditory perceptual difficulties that may result from hyperactivity, and outline open conceptual and methodological questions related to its study. We suggest that hyperactivity alters all aspects of hearing – spectral, temporal, and spatial – and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating it in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
ARTICLE | doi:10.20944/preprints202208.0277.v1
Subject: Medicine & Pharmacology, Pediatrics Keywords: Hearing loss; conductive; sensorineural; outer ear; middle ear; inner ear; SNHL; Cochlear; auditory; physical examination; history
Online: 16 August 2022 (04:04:24 CEST)
Hearing loss in infancy leads to preventable speech, language, and cognitive developmental delay [1, 2]. Sensorineural hearing loss (SNHL) is caused by problems of the inner ear, such as the cochlea, with or without involvement of the auditory nerve (cranial nerve VIII). The ear comprises three anatomic areas: the outer ear, composed of the auricle and external auditory canal; the middle ear, which includes the tympanic membrane, ossicles, and middle ear space; and the inner ear, composed of the cochlea, semicircular canals, and internal auditory canal. The unique anatomical shape of the auricle catches incoming sound waves and sends them down the external auditory canal. Hearing risk assessment should be part of all health visits, and regular hearing screening checks should be done for all children from 4 to 21 years [1, 2]. Assessment of hearing loss includes history, physical examination, and specific hearing assessment tests.
ARTICLE | doi:10.20944/preprints202205.0167.v1
Subject: Physical Sciences, Acoustics Keywords: sonification evaluation; psychoacoustics; just noticeable difference; difference limen; discrimination threshold; comparison of sonification designs; maximum likelihood procedure; auditory display
Online: 12 May 2022 (09:50:51 CEST)
The sonification of data to communicate information to a user is a relatively new approach that established itself around the 1990s. To date, many researchers design their individual sonifications from scratch; there are no standards in sonification design and evaluation, although researchers and practitioners have formulated several requirements and established several methods. There is wide consensus that psychoacoustics could play an important role in the sonification design and evaluation phases, but this requires an adaptation of psychoacoustic methods to the signal types and requirements of sonification. In this methods paper we present PAMPAS, a PsychoAcoustical Method for the Perceptual Analysis of multidimensional Sonification. A well-defined, well-established, efficient, reliable, and replicable just-noticeable-difference experiment using the Maximum Likelihood Procedure serves as the basis to achieve linearity of parameter mapping during the sonification design stage and to identify and quantify perceptual effects during the sonification evaluation stage, namely the perceptual resolution, hysteresis effects, and perceptual interferences. The experiment yields universal scores from a standardized data space and a standardized procedure. These scores can serve to compare multiple sonification designs of a single researcher, or even designs from different research groups. The method can supplement other sonification design and evaluation methods from a perceptual viewpoint.
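The Maximum Likelihood Procedure mentioned above can be sketched in a simplified form: a fixed set of candidate psychometric functions is maintained, each trial updates their likelihoods given the listener's response, and the next stimulus is placed at the "sweet point" of the currently most likely function. This is a generic illustration of the procedure, not the PAMPAS implementation; the fixed logistic slope, the yes/no response model, and the 0.75 target probability are assumptions.

```python
import math

def mlp_threshold(observer, stimulus_levels, candidate_thresholds,
                  n_trials=60, slope=8.0, target_p=0.75):
    """Estimate a detection threshold with a simplified Maximum
    Likelihood Procedure (MLP).

    observer(level) -> bool: True if the difference was detected.
    After every trial, the log-likelihood of each candidate logistic
    psychometric function is updated; the next stimulus is placed at
    the level where the currently most likely function predicts a
    detection probability of `target_p` (the "sweet point")."""

    def p_detect(level, threshold):
        # logistic psychometric function with a fixed slope
        return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

    loglik = {thr: 0.0 for thr in candidate_thresholds}
    level = max(stimulus_levels)  # start with an easy, clearly audible trial
    for _ in range(n_trials):
        detected = observer(level)
        for thr in loglik:
            p = min(max(p_detect(level, thr), 1e-6), 1.0 - 1e-6)
            loglik[thr] += math.log(p if detected else 1.0 - p)
        best = max(loglik, key=loglik.get)
        # next stimulus: the level closest to the sweet point of the best curve
        level = min(stimulus_levels,
                    key=lambda s: abs(p_detect(s, best) - target_p))
    return max(loglik, key=loglik.get)
```

Running this against a simulated observer whose true threshold is known typically recovers the threshold within a few candidate steps after a few dozen trials, which is the efficiency property that motivates MLP-based experiments.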
ARTICLE | doi:10.20944/preprints201808.0523.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: frequency difference limens; blindfold; visual cues; auditory-visual synesthesia; gliding frequencies; perceptual limit, common resource theory; multiple resource model
Online: 30 August 2018 (10:40:28 CEST)
How perceptual limits can be overcome has long been examined by psychologists. This study investigated whether visual cues, blindfolding, visual-auditory synesthetic experience, and music training could facilitate a smaller frequency difference limen (FDL) in a gliding-frequency discrimination test. It was hoped that auditory limits could be overcome through visual facilitation, visual deprivation, involuntary cross-modal sensory experience, or music practice. Ninety university students with no visual or auditory impairment were recruited for this study, which used one between-subjects factor (blindfold vs. visual cue) and one within-subjects factor (control vs. experimental session). A MATLAB program was prepared to test their FDL with an alternative forced-choice task (gliding upwards / gliding downwards / no change), and two questionnaires (the Vividness of Mental Imagery Questionnaire and the Projector-Associator Test) were used to assess their tendency to synesthesia. Participants with music training showed a significantly smaller FDL; on the other hand, being blindfolded, being provided with visual cues, or having prior synesthetic experience did not significantly reduce the FDL. However, the results showed a trend of reduced FDLs under blindfolding, indicating that visual deprivation might slightly expand the limits of auditory perception. Overall, the current study suggests that inter-sensory perception can be enhanced through training but not through reallocating cognitive resources to certain modalities. Future studies are recommended to verify the effects of music practice on other perceptual limits.
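The discrimination task above presents gliding-frequency stimuli. A minimal sketch of how such a stimulus can be synthesized — a linear frequency glide built by integrating instantaneous frequency into phase so the sweep stays click-free (the sample rate, duration, and linear glide shape are assumptions; the study's MATLAB program is not described in detail):

```python
import math

def gliding_tone(f_start_hz, f_end_hz, duration_s=1.0, sr=44100):
    """Synthesize a linear frequency glide as a list of samples in [-1, 1].
    Phase is accumulated from the instantaneous frequency, which avoids
    discontinuities that would be audible as clicks."""
    n = int(duration_s * sr)
    samples = []
    phase = 0.0
    for i in range(n):
        f = f_start_hz + (f_end_hz - f_start_hz) * i / n  # linear glide
        phase += 2.0 * math.pi * f / sr                   # integrate frequency
        samples.append(math.sin(phase))
    return samples
```

In an FDL task, the difference between `f_start_hz` and `f_end_hz` would be varied adaptively until the listener can no longer report the glide direction above chance.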