ARTICLE | doi:10.20944/preprints201608.0046.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: visual symmetry; affine projection; fractals; visual sensation; aesthetics; preference
Online: 5 August 2016 (05:15:32 CEST)
Evolution and geometry generate complexity in similar ways. Evolution drives natural selection while geometry may capture the logic of this selection and express it visually, in terms of specific generic properties representing some kind of advantage. Geometry is ideally suited for expressing the logic of evolutionary selection for symmetry, which is found in the shape curves of vein systems and other natural objects such as leaves, cell membranes, or tunnel systems built by ants. The topology and geometry of symmetry are controlled by numerical parameters, which act in analogy with a biological organism's DNA. The introductory part of this paper reviews findings from experiments illustrating the critical role of two-dimensional design parameters and shape symmetry for visual or tactile shape sensation, and for perception-based decision making in populations of experts and non-experts. Thereafter, results from a pilot study on the effects of fractal symmetry, referred to herein as the symmetry of things in a thing, on aesthetic judgments and visual preference are presented. In a first experiment (psychophysical scaling procedure), non-expert observers had to rate (on a scale from 0 to 10) the perceived beauty of a random series of 2D fractal trees with varying degrees of fractal symmetry. In a second experiment (two-alternative forced choice procedure), they had to express their preference for one of two shapes from the series. The shape pairs were presented successively in random order. Results show that the smallest possible fractal deviation from "symmetry of things in a thing" significantly reduces the perceived attractiveness of such shapes. The potential of future studies in which different levels of complexity of fractal patterns are weighed against different degrees of symmetry is pointed out in the conclusion.
ARTICLE | doi:10.20944/preprints201704.0088.v1
Subject: Keywords: hierarchical video quality assessment; human visual systems; primate visual cortex; full reference
Online: 14 April 2017 (11:52:44 CEST)
Video quality assessment (VQA) plays an important role in video applications for quality evaluation and resource allocation. It aims to evaluate video quality consistently with human perception. In this letter, a hierarchical gradient similarity based VQA metric is proposed, inspired by the structure of the primate visual cortex, in which visual information is processed through sequential visual areas. These areas are modeled with corresponding measures to evaluate the overall perceptual quality. Experimental results on the LIVE database show that the proposed VQA metric significantly outperforms state-of-the-art VQA metrics.
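The letter's abstract does not include implementation details; as a rough, hedged sketch of the kind of frame-level gradient-similarity measure it describes (the finite-difference gradients and the stabilizing constant `c` are assumptions, not the authors' method), one could write:

```python
import numpy as np

def gradient_magnitude(img):
    # Central finite-difference gradients; Sobel-like operators are more
    # common in practice, this is a minimal stand-in.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_similarity(ref, dist, c=0.0026):
    # Pixel-wise similarity of gradient magnitudes between a reference and a
    # distorted frame, pooled by the mean; c avoids division by zero and its
    # value here is an illustrative assumption.
    g_ref = gradient_magnitude(ref)
    g_dist = gradient_magnitude(dist)
    sim = (2 * g_ref * g_dist + c) / (g_ref ** 2 + g_dist ** 2 + c)
    return sim.mean()
```

A hierarchical metric of the sort the letter proposes would combine several such area-specific measures; this sketch shows only one gradient-based stage. For identical frames the similarity map is 1 everywhere, so the pooled score is 1.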
ARTICLE | doi:10.20944/preprints202002.0191.v1
Online: 14 February 2020 (09:24:03 CET)
There is a lack of research based on in-depth theoretical and scientific knowledge to understand the visually impaired, and there has been little effort to apply early-intervention strategies to minimize the risks these people might encounter during development. This study used semi-structured interviews with eight persons with visual impairments who had various experiences with resiliency. Three resilience processes based on life experiences were identified: 1) Experience and Adaptation: “self-awareness of disability” and “adaptation to disability and the environment”; 2) Facing the Circumstances: “the exposure to concealment and abuse,” “the suppression of potential,” “denial and abandonment by family,” “poverty and disability,” “exchange and self-regulation,” and “social integration”; and 3) Positive Reinforcement: “self-disclosure and jump-starting life,” “maintenance of positive thinking,” and “socioeconomic independence.” These findings expand the understanding of the factors common to the resilience process experienced by individuals with visual impairment and highlight the importance of psychological support, family, education, and social support.
ARTICLE | doi:10.20944/preprints202108.0569.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: visual short-term memory; repetitive transcranial magnetic stimulation; visual memory precision; serial memory effects
Online: 31 August 2021 (11:43:33 CEST)
We investigated the role of the human middle temporal complex (hMT+) in the memory encoding and storage of a sequence of four coherently moving random-dot kinematograms (RDKs) by applying repetitive transcranial magnetic stimulation (rTMS) during an early or late phase of the retention interval. In a second experiment, we also tested whether disrupting the functional integrity of hMT+ during the early phase impaired the precision of the encoded motion directions. Overall, results showed that both recognition accuracy and precision were worse in middle serial positions, suggesting the occurrence of primacy and recency effects. We found that rTMS delivered during the early (but not the late) phase of the retention interval impaired not only recognition of the RDKs but also the precision of the retained motion direction. However, such impairment occurred only for RDKs presented in middle positions of the sequence, where performance was already closer to chance level. Altogether these findings suggest an involvement of hMT+ in the memory encoding of visual motion direction. Given that both sequence position and rTMS modulated not only recognition but also the precision of the stored information, these findings support a model of visual short-term memory with a variable resolution of each stored item, consistent with the amount of memory resources assigned to it, in which such item-specific memory resolution is supported by the functional integrity of area hMT+.
ARTICLE | doi:10.20944/preprints202003.0018.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: visual-inertial integrated navigation system (VINS); visual odometry; autonomous driving; adaptive tuning; urban canyons
Online: 2 March 2020 (00:38:17 CET)
Visual-inertial integrated navigation systems (VINS) have been extensively studied over the past decades to provide accurate and low-cost positioning solutions for autonomous systems. Satisfactory performance can be obtained in an ideal scenario with sufficient static environment features. However, deep urban areas usually contain numerous dynamic objects, and these moving objects can severely distort the feature tracking process, which is fatal to feature-based VINS. A well-known method to mitigate the effects of dynamic objects is to detect vehicles using deep neural networks and remove the features belonging to the surrounding vehicles. However, excessive exclusion of features can severely distort the geometry of the feature distribution, leaving limited visual measurements. Instead of directly eliminating the features from dynamic objects, this paper proposes to adapt the visual measurement model based on the quality of feature tracking to improve the performance of VINS. Firstly, a self-tuning covariance estimation approach is proposed to model the uncertainty of each feature measurement by integrating two parts: 1) the geometry of the feature distribution (GFD), and 2) the quality of feature tracking. Secondly, an adaptive M-estimator is proposed to correct the measurement residual model and further mitigate the impact of outlier measurements, such as dynamic features. Unlike a conventional M-estimator, the proposed method effectively alleviates the reliance on excessive parameterization of the M-estimator. Experiments were conducted in a typical urban area of Hong Kong with numerous dynamic objects, and the results show that the proposed method effectively mitigates the effects of dynamic objects and achieves improved VINS accuracy compared with the conventional method.
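The paper's adaptive M-estimator is not specified in the abstract; as a minimal sketch of the conventional Huber-style robust weighting that such estimators build on (the threshold `k` and the function names are illustrative assumptions), down-weighting large residuals from likely dynamic features could look like:

```python
def huber_weight(residual, sigma, k=1.345):
    # Robust weight for a visual measurement residual: inliers keep full
    # weight, large (likely dynamic-feature) residuals are down-weighted.
    # k = 1.345 is the standard Huber tuning constant, assumed here.
    r = abs(residual) / sigma  # residual normalized by its estimated std
    return 1.0 if r <= k else k / r

def weighted_residuals(residuals, sigmas, k=1.345):
    # Re-weight each feature residual by its (self-tuned) uncertainty sigma,
    # so outliers contribute less to the optimization.
    return [huber_weight(r, s, k) * r for r, s in zip(residuals, sigmas)]
```

The paper's contribution is precisely to adapt the weighting (via the self-tuned covariance and tracking quality) rather than rely on a fixed hand-tuned `k`; this sketch shows only the baseline mechanism being improved upon.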
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: fraud audit; process mining; visual analytics
Online: 2 March 2021 (09:19:01 CET)
Among the knowledge areas in which process mining has had an impact, the audit domain is particularly striking. Traditionally, audits seek evidence in a data sample that allows auditors to make inferences about a population. Mistakes are often made when generalizing the results, and anomalies therefore remain in the unprocessed sets. However, there are some efforts to address these limitations using process mining-based approaches for fraud detection. To the best of our knowledge, no fraud audit method exists that combines process mining techniques and visual analytics to identify relevant patterns. This paper presents a fraud audit approach based on the combination of process mining techniques and visual analytics. The main advantages are: (i) a method is included that guides the use of the visual capabilities of process mining to detect fraud data patterns during an audit; (ii) the approach can be generalized to any business domain; (iii) well-known process mining techniques are used (Dotted Chart, Trace Alignment, Fuzzy Miner…). The techniques were selected by a group of experts and were extended to enable filtering for contextual analysis, to handle levels of process abstraction, and to facilitate implementation in the area of fraud audits. Based on the proposed approach, we developed a software solution that is currently being used in the financial sector as well as in the telecommunications and hospitality sectors. Finally, for demonstration purposes, we present a real hotel management use case in which we detected suspected fraud behaviors, thus validating the effectiveness of the approach.
CASE REPORT | doi:10.20944/preprints202011.0329.v1
Subject: Medicine & Pharmacology, Allergology Keywords: Two-photon microperimetry; cataracts; visual sensitivity
Online: 12 November 2020 (08:41:43 CET)
Purpose: The accuracy of conventional visual function tests, which emit visible light, decreases in patients with corneal scars, cataracts, and vitreous hemorrhages. In contrast, infrared (IR) light exhibits greater tissue penetrance than visible light and is less susceptible to optical opacities. We therefore compared visual function results obtained using conventional visual function tests and infrared 2-photon microperimetry (2PM-IR) in a subject with a brunescent nuclear sclerotic and posterior subcapsular cataract before and after cataract surgery. Methods: Visual function testing using the cone contrast threshold (CCT) test, conventional microperimetry (cMP), visible light microperimetry from a novel device (2PM-Vis), and 2PM-IR was performed before and after cataract surgery. Results: Cone contrast threshold testing improved for the S-cone, M-cone, and L-cone by 111, 14, and 30, respectively. Retinal sensitivity assessed using cMP, 2PM-Vis, and 2PM-IR improved by 18 dB, 17.4 dB, and 3.4 dB, respectively. Conclusions and Importance: 2PM-IR, unlike conventional visual function tests, showed minimal variability in retinal sensitivity before and after surgery. Thus, IR visual stimulation introduces a paradigm shift for measuring visual function in the retina and posterior visual pathways by circumventing optical media opacities.
ARTICLE | doi:10.20944/preprints201807.0243.v1
Subject: Biology, Forestry Keywords: Habitat types, visual differences, landscape characteristics.
Online: 13 July 2018 (17:07:05 CEST)
The unique qualities of areas with natural landscape features help provide sustainability. Moreover, their different vegetation covers and ecosystems contribute to the preservation of their visual attraction. In recent years, the demand for natural areas has not only been seen at a recreational level, but has also become associated with the conservation and sustainability of those areas. Although the concept of sustainability is expressed from an ecological point of view, studies indicate that the visual aspect is also an important component. Thus, in this study, a visual quality assessment was carried out which considered both objective and subjective evaluations of different habitat types. Efteni lake-wetland and Melen Ağzı dunes (Düzce), Anzer, Ayder, and Çat Düzü highlands (Rize), and Sultanmurat and Taşli highlands (Trabzon) were selected as the study areas. A visual quality analysis was conducted with a total of 43 participants (23 students, 16 local inhabitants, and four lecturers) in order to establish their preferences in areas with different landscape characteristics. To determine the visual qualities of these areas, a total of 24 photographs showing typical images representing each habitat type (three photographs for each) were employed. Taking perceptual parameters into consideration, the assessment of visual quality was made according to the points given to each photo by the participants. Consequently, differences in visual quality were found to be influenced by the demographic status of the participants, differences in habitat types, recreational trends, and the conservation status of the habitats.
REVIEW | doi:10.20944/preprints202106.0549.v1
Subject: Medicine & Pharmacology, Allergology Keywords: cholinesterase; acetylcholine; visual function; ocular surface; retina
Online: 22 June 2021 (14:28:38 CEST)
The visual system is regulated by the nervous system through neurotransmitters, which play an important role in visual and ocular functions. One of those neurotransmitters is acetylcholine, a key molecule that performs a diversity of biological functions. On the other hand, acetylcholinesterase, the enzyme responsible for the hydrolysis of acetylcholine, is implicated in cholinergic function. However, several studies have shown that in addition to its enzymatic function, acetylcholinesterase exerts non-catalytic functions. In recent years, the importance of evaluating all possible functions of the acetylcholine-acetylcholinesterase system has become evident. Moreover, there is evidence suggesting that cholinesterase activity in the eye can regulate biological events in structures of both the anterior and posterior segments of the eye, and therefore in the visual information that is processed in the visual cortex. Hence, the evaluation of cholinesterase activity could be a possible marker of alterations in cholinergic activity not only in ocular disease but also in systemic diseases.
ARTICLE | doi:10.20944/preprints202001.0340.v1
Subject: Engineering, Mechanical Engineering Keywords: thermal drilling; material; visual evaluation; macrostructure; microstructure
Online: 28 January 2020 (10:52:21 CET)
This contribution deals with the joining of various types of materials by thermal drilling. In many branches of industry, including the automotive industry, joining operations are required for the service, repair, substitution, or protection of workpieces and components made of various materials. Equally important roles are played by the joint material used and by product preparation for assembly and disassembly operations. By using new friction-based hybrid joining technologies, we can shorten production time, enable automation of operations, increase the quality of joints, reduce economic expenses, and protect the environment. In this paper, the authors investigate the effect of friction drilling on the tested material, aluminium alloy AlMgSi, which was used for material testing. The created joints were evaluated visually and by microscopy methods. The defects of the tested joints were also documented and described. This contribution was made in cooperation with the Technical University of Kosice and U. S. Steel Kosice, s.r.o.
ARTICLE | doi:10.20944/preprints202101.0347.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: interocular suppression; consciousness; color vision; visual search; attentional templates; early visual system; awareness; continuous flash suppression; binocular rivalry
Online: 18 January 2021 (14:32:29 CET)
Color can direct visual attention to specific locations through bottom-up and top-down mechanisms. Using Continuous Flash Suppression (CFS) as a way to investigate the factors that gate access to consciousness, the current study investigated whether color also directly affects the timing of conscious perception. Low or high spatial frequency (SF) gratings with different orientations were shown as targets to the non-dominant eye of human participants. CFS patterns were presented at a rate of 10 Hz to the dominant eye to delay conscious perception of the targets, and participants were asked to report the target’s orientation as soon as they could see it. With low-SF targets, two types of color-based effects became evident. First, when the targets and the CFS patterns had different colors, the targets entered consciousness faster than in trials where the targets and CFS patterns had the same color. Second, when participants searched for a specific target color, targets that matched these search settings entered consciousness faster than in conditions where the target color was irrelevant and could vary from trial to trial. Thus, the current study demonstrates that color is a central feature of human perception and leads to faster conscious perception of visual stimuli through both bottom-up and top-down attentional mechanisms.
ARTICLE | doi:10.20944/preprints201808.0523.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: frequency difference limens; blindfold; visual cues; auditory-visual synesthesia; gliding frequencies; perceptual limit; common resource theory; multiple resource model
Online: 30 August 2018 (10:40:28 CEST)
How perceptual limits can be overcome has long been examined by psychologists. This study investigated whether visual cues, blindfolding, visual-auditory synesthetic experience, and music training could facilitate a smaller frequency difference limen (FDL) in a gliding frequency discrimination test. It was hoped that auditory limits could be overcome through visual facilitation, visual deprivation, involuntary cross-modal sensory experience, or music practice. Ninety university students, with no visual or auditory impairment, were recruited for this study with one between-subjects factor (blindfold/visual cue) and one within-subjects factor (control/experimental session). A MATLAB program was prepared to test their FDL with an alternative forced-choice task (gliding upwards/gliding downwards/no change), and two questionnaires (Vividness of Mental Imagery Questionnaire & Projector-Associator Test) were used to assess their tendency to synesthesia. Participants with music training showed a significantly smaller FDL; on the other hand, being blindfolded, being provided with visual cues, or having prior synesthetic experience could not significantly reduce the FDL. However, the results showed a trend of reduced FDLs through blindfolding. This indicated that visual deprivation might slightly expand the limits of auditory perception. Overall, the current study suggests that inter-sensory perception can be enhanced through training but not through reallocating cognitive resources to certain modalities. Future studies are recommended to verify the effects of music practice on other perceptual limits.
ARTICLE | doi:10.20944/preprints201808.0462.v1
Subject: Medicine & Pharmacology, Other Keywords: Occupational therapy; visual perceptual skills; Test of Visual Perceptual Skills-3 (TVPS-3); Human Immunodeficiency Virus (HIV)
Online: 27 August 2018 (13:36:15 CEST)
Introduction: Visual perceptual skills are essential for independent participation in self-care tasks and in educational, work, and leisure activities. The effect of HIV on visual perceptual skills is not well understood among children in low-resource settings like Zimbabwe. Methods: A cross-sectional comparative study was done with 30 children living with HIV and 30 children living without HIV residing in the Harare urban area. The TVPS-3 was used to assess their visual perceptual skills. SPSS version 22, STATISTICA 13, and Microsoft 2016 were used for data analysis. Results: Both groups of children had mean percentile ranks below 50 on their TVPS-3 scores. Children without HIV generally performed better than those with HIV, but the difference was not statistically significant in most cases. In univariate analysis, only performance on Spatial Relations significantly differed between the two groups. Both groups had their lowest scores in basic visual perceptual skills. Age and school grade were the independent predictors of the children’s performance in the study. Conclusion: There is a need for occupational therapy services in public primary schools and in pediatric opportunistic infections clinics in hospitals, as part of the health team that caters to children with visual perceptual challenges.
BRIEF REPORT | doi:10.20944/preprints202209.0366.v1
Subject: Engineering, Other Keywords: Vibration Detection; Progress; Power Plant; Bibliometrics; Visual Analysis
Online: 23 September 2022 (09:24:21 CEST)
After many years of development, the technology of analyzing the working condition of power units based on vibration signals has reached relatively stable application, but the accuracy and the degree of automation and intelligence of fault diagnosis are still inadequate due to the limitations of the current development of key technologies. With the development of big data and artificial intelligence technology, the involvement of new technologies will be an important boost to this field. To support subsequent research, bibliometrics is used at the macro level as a tool to trace the development of the technology in this field; at the micro level, the classical and key literature is studied to grasp the state of development at the technical level and to prepare for the selection of entry points for subsequent in-depth innovation.
ARTICLE | doi:10.20944/preprints202106.0441.v1
Subject: Medicine & Pharmacology, Allergology Keywords: cognition; visual memory; reaction time; alcohol; Bipolar disorder
Online: 16 June 2021 (11:37:43 CEST)
The purpose of this study was to explore the association of cognition with hazardous drinking and alcohol related disorder in persons with bipolar disorder (BD). The study population included 1,268 persons from Finland with bipolar disorder. Alcohol use was assessed through hazardous drinking and alcohol related disorder, including alcohol use disorder (AUD). Hazardous drinking was screened with the AUDIT-C (Alcohol Use Disorders Identification Test for Consumption) screening tool. Alcohol related disorder diagnoses were obtained from the national registrar data. Participants performed two computerized tasks from the Cambridge automated neuropsychological test battery (CANTAB) on a tablet computer: the 5-choice serial reaction time task, or reaction time (RT) test, and the Paired Associative Learning (PAL) test. The association between the RT test and alcohol use was analyzed with log-linear regression, and e^β with 95% confidence intervals (CI) are reported. The PAL first trial memory score was analyzed with linear regression, and β with 95% CI are reported. PAL total errors adjusted was analyzed with logistic regression, and odds ratios (OR) with 95% CI are reported. After adjustment for age, education, and housing status, hazardous drinking was associated with lower median and less variable RT in females, while AUD was associated with poorer PAL test performance in terms of the total errors adjusted scores in females. Our findings of positive associations between alcohol use and cognition in persons with bipolar disorder are unique.
ARTICLE | doi:10.20944/preprints201809.0509.v1
Subject: Engineering, Other Keywords: visual-inertial odometry; UAV navigation; sensor fusion; optimization
Online: 26 September 2018 (13:23:48 CEST)
Visual inertial odometry (VIO) has recently received much attention for efficient and accurate ego-motion estimation of unmanned aerial vehicles (UAVs). Recent studies have shown that optimization-based algorithms typically achieve high accuracy when given a sufficient amount of information, but occasionally suffer from divergence when solving highly non-linear problems. Further, their performance depends significantly on the accuracy of the initialization of inertial measurement unit (IMU) parameters. In this paper, we propose a novel VIO algorithm for estimating the motion state of UAVs with high accuracy. The main technical contributions are the fusion of visual information and pre-integrated inertial measurements in a joint optimization framework, and the stable initialization of scale and gravity using relative pose constraints. To account for the ambiguity and uncertainty of VIO initialization, a local scale parameter is adopted in the online optimization. Quantitative comparisons with state-of-the-art algorithms on the EuRoC dataset verify the efficacy and accuracy of the proposed method.
ARTICLE | doi:10.20944/preprints201807.0126.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: image enhancement; cuckoo optimization; entropy and visual factor
Online: 9 July 2018 (05:07:27 CEST)
The notion of image enhancement is to improve the perceptibility of the information contained in an image. In the present research, a novel technique for the enhancement of image quality is propounded using a fuzzy logic technique with a cuckoo optimization algorithm. First, the image is transformed from the RGB domain to the HSV domain, keeping the color information within the image intact. The image is then categorized into three regions, underexposed, overexposed, and mixed, on the basis of two threshold values. For the fuzzification of the under- and overexposed areas, the degree of membership is defined by a Gaussian membership function, while the mixed area is fuzzified by a parametric sigmoid function. Key parameters such as visual factors and fuzzy contrast provide a quantitative analysis of the image. An objective function involving entropy and a visual factor is framed and optimized by the new evolutionary cuckoo optimization algorithm. The results obtained after simulation with the cuckoo optimization algorithm are compared with bacterial foraging and ant colony optimization based image enhancement, and the proposed approach is found to perform better.
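The abstract names the two membership functions but not their parameters; as a minimal sketch of Gaussian and parametric sigmoid fuzzification of a pixel intensity (all parameter values below are illustrative assumptions, not the paper's), the two functions could be written as:

```python
import math

def gaussian_membership(x, mean, sigma):
    # Degree of membership for the under- or overexposed region: highest at
    # the region's characteristic intensity `mean`, falling off with `sigma`.
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def sigmoid_membership(x, a, c):
    # Parametric sigmoid for the mixed region: `a` controls the slope of the
    # transition and `c` its center on the normalized intensity axis.
    return 1.0 / (1.0 + math.exp(-a * (x - c)))
```

In the scheme the abstract describes, each normalized pixel intensity `x` would be assigned to a region by the two thresholds and then fuzzified with the corresponding function before the contrast transform and entropy-based optimization are applied.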
ARTICLE | doi:10.20944/preprints201807.0106.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: auditory-visual speech perception; bipolar disorder; speech perception
Online: 6 July 2018 (05:21:19 CEST)
The focus of this study was to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to non-disordered individuals, and whether there were any differences in auditory-visual speech integration between the manic and depressive episodes in bipolar disorder patients. It was hypothesized that the bipolar groups’ auditory-visual speech integration would be less robust than the control group’s. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more than their depressive-phase counterparts. To examine these hypotheses, the McGurk effect paradigm was used with typical auditory-visual (AV) speech as well as auditory-only (AO) and visual-only (VO) stimuli. Results showed that the disordered and non-disordered groups did not differ on auditory-visual (AV) integration or auditory-only (AO) speech perception, but did differ on visual-only (VO) stimuli. The results are interpreted to pave the way for further research in which both behavioural and physiological data are collected simultaneously. This will allow us to understand the full dynamics of how auditory and visual speech information (the latter relatively impoverished in bipolar disorder) are integrated in people with bipolar disorder.
ARTICLE | doi:10.20944/preprints202101.0292.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: crime; hotspots; Space-Time clustering; New York; Visual analytics
Online: 15 January 2021 (12:47:45 CET)
Pattern recognition has long been regarded as playing a key role in crime prevention and reduction. Crime analysts and policy makers can formulate effective strategies and allocate resources with reference to the spatial and temporal patterns of crime. In order to combat and prevent severe crime in New York City (NYC), this study analyzed felony crime data of NYC from the previous 5 years (2015-2020) and discovered criminal hotspot patterns and temporal patterns using open criminal complaint data provided by the New York Police Department (NYPD). This study adopts a human-computer interactive approach to draw patterns from crime data, in which computations and visualization are performed with Python libraries, while a human informs the choice of visualization methods, computational parameters, and the direction of this exploratory analysis. Density-based clustering algorithms, grid thematic mapping, and density heatmaps are applied to identify hotspots and demonstrate their associations with spatial features. Timeline analysis of the moments of crime occurrence reveals the seasonality of when crimes are mostly committed, while aoristic analysis shows the hours of the day when crime is mostly committed, taking its timespan into account. Lastly, 3D visualization improved recognition of the displacement of hotspots over time and suggested long-term hotspots in NYC. This informs strategic plans for police deployment.
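The study's Python workflow is not included in the abstract; as a minimal sketch of the grid thematic mapping step it mentions (function names, cell size, and hotspot threshold are all assumptions for illustration), incidents can be binned into square cells and high-count cells flagged:

```python
from collections import Counter

def grid_counts(points, cell_size):
    # Bin (x, y) incident coordinates into square cells; each cell is keyed
    # by its integer grid index, and its count is the number of incidents.
    counts = Counter()
    for x, y in points:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return counts

def hotspots(points, cell_size, threshold):
    # Cells whose incident count meets the threshold are candidate hotspots
    # for the thematic map.
    counts = grid_counts(points, cell_size)
    return {cell for cell, n in counts.items() if n >= threshold}
```

The density-based clustering the study also applies (e.g. DBSCAN-style algorithms) would group raw points directly instead of binning them; the grid approach above trades spatial precision for speed and a straightforward choropleth-style rendering.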
ARTICLE | doi:10.20944/preprints202010.0009.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: visual search; vision loss; incidental learning; macular degeneration; fovea
Online: 1 October 2020 (09:12:00 CEST)
Foveal vision loss has been shown to reduce efficient visual search guidance due to contextual cueing by incidentally learned contexts. However, previous studies used artificial (T among L-shape) search paradigms that prevent the memorization of a target in a semantically meaningful scene. Here, we investigated contextual cueing in real-life scenes that allow explicit memory of target locations in semantically rich scenes. In contrast to the contextual cueing deficits in artificial scenes, contextual cueing in patients with age-related macular degeneration (AMD) did not differ from age-matched normal-sighted controls. We discuss this in the context of visuospatial working memory demands, for which eye-movement control in the presence of central vision loss and memory-guided search may compete. Memory-guided search in semantically rich scenes may depend less on visuospatial working memory than search in abstract displays, potentially explaining intact contextual cueing in the former but not the latter. In a practical sense, our findings may indicate that patients with AMD are less deficient than expected from previous lab experiments. This shows the usefulness of realistic stimuli in experimental clinical research.
ARTICLE | doi:10.20944/preprints202002.0093.v1
Subject: Medicine & Pharmacology, Anesthesiology Keywords: Ketamine; Paravertebral block; Posterolateral thoracotomy; Thoracotomy; Visual analog scale
Online: 7 February 2020 (09:28:16 CET)
Severe postoperative pain affects most patients after thoracotomy and is a risk factor for post-thoracotomy pain syndrome (PTPS). This randomized controlled trial compared preemptively administered ketamine versus paravertebral block (PVB) versus control in patients undergoing posterolateral thoracotomy. The primary outcome was acute pain intensity on the visual analog scale (VAS) on the first postoperative day. Secondary outcomes included morphine consumption, patient satisfaction, and PTPS assessment with Neuropathic Pain Syndrome Inventory (NPSI). Acute pain intensity was significantly lower with PVB compared to other groups at four out of six time points. Patients in the PVB group used significantly less morphine via a patient-controlled analgesia pump than participants in other groups. Moreover, patients were more satisfied with postoperative pain management after PVB. PVB, but not ketamine, decreased PTPS intensity at 1, 3, and 6 months after posterolateral thoracotomy. Acute pain intensity at hour 8 and PTPS intensity at month 3 correlated positively with PTPS at month 6. Bodyweight was negatively associated with chronic pain at month 6. Thus, PVB but not preemptively administered ketamine decreases both acute and chronic pain intensity following posterolateral thoracotomies. The trial was prospectively registered at the Australian New Zealand Clinical Trial Registry (https://www.anzctr.org.au/; ACTRN12616000900415; 07 July 2016).
ARTICLE | doi:10.20944/preprints202104.0770.v1
Subject: Social Sciences, Library & Information Science Keywords: Wikipedia, knowledge equity, Wikimedia, open culture, visual arts, cultural bias
Online: 29 April 2021 (09:16:07 CEST)
We explore gaps in Wikipedia's coverage of the visual arts by comparing the representation of 100 artists and 100 artworks from the Western canon against corresponding sets of notable artists and artworks from non-Western cultures. We measure the coverage of these two sets of topics across Wikipedia as a whole and for its individual language versions. We also compare the coverage for Wikimedia Commons and Wikidata, sister-projects of Wikipedia that host digital media and structured data. We show that all these platforms strongly favour the Western canon, giving many times more coverage to Western art. We highlight specific examples of differing coverage of visual art inside and outside the Western canon. We find that European language versions of Wikipedia are generally more "Western" in their coverage and Asian languages more "global", with interesting exceptions. We suggest how both Wikipedia and the wider cultural sector can address this gap in content and thus give Wikipedia a truly global perspective on the visual arts.
ARTICLE | doi:10.20944/preprints202104.0460.v1
Subject: Medicine & Pharmacology, Allergology Keywords: Cognition; Visual memory; Reaction time; Alcohol; schizophrenia and schizoaffective disorder.
Online: 19 April 2021 (11:26:29 CEST)
The purpose of this study was to explore the association of cognition with hazardous drinking, binge drinking, and alcohol use disorder in schizophrenia and schizoaffective disorder. Cognitive deficits are common in schizophrenia, and alcohol use might be associated with additional cognitive impairment in these patients. The study population included 3,362 patients with schizophrenia or schizoaffective disorder in Finland. Hazardous drinking was screened with the AUDIT-C (Alcohol Use Disorders Identification Test for Consumption) screening tool, and binge drinking was also obtained from the AUDIT-C. Alcohol use disorder (AUD) diagnoses were obtained from national register data. Participants performed two computerized tasks from the Cambridge Neuropsychological Test Automated Battery (CANTAB) on a tablet computer: the 5-choice serial reaction time task (5-CSRTT), i.e., the reaction time (RT) test, and the Paired Associates Learning (PAL) test. Associations of alcohol use with the RT and PAL tests were analyzed with log-linear regression and logistic regression, respectively. After adjustment for age, education, and age at first psychotic episode, hazardous drinking in females was associated with lower median RT. Compared to never binge drinkers, male and female participants drinking six or more doses of alcohol monthly or less often had lower median RT. In the PAL test, both the first trial memory score (FTMS) and the total errors adjusted score (TEAS) indicated better performance in males drinking six or more doses weekly or more often, and in females drinking six or more doses monthly or less often. A higher PAL TEAS was associated with AUD in females. In sum, some positive associations between alcohol use and cognition were found in male and female patients with schizophrenia or schizoaffective disorder who reported hazardous drinking and binge drinking.
ARTICLE | doi:10.20944/preprints202104.0191.v1
Subject: Life Sciences, Biochemistry Keywords: retinal pigmented epithelium, exocyst complex component 5, photoreceptor, visual function.
Online: 7 April 2021 (11:15:10 CEST)
To characterize the mechanisms by which the highly conserved exocyst trafficking complex regulates eye physiology in zebrafish and mice, we focused on exoc5 (aka sec10), a central exocyst component. We analyzed both exoc5 zebrafish mutants and retinal pigmented epithelium (RPE)-specific Exoc5 knockout mice. Exoc5 is present both in the non-pigmented epithelium of the ciliary body and in the RPE. In this study, we set out to establish an animal model to study the mechanisms underlying the ocular phenotype and to establish whether loss of visual function is induced by postnatal RPE Exoc5 deficiency. Exoc5-/- zebrafish showed smaller eyes, with a decreased number of melanocytes in the RPE and shorter photoreceptor outer segments. At 3.5 days post fertilization, loss of rod and cone opsins was observed in zebrafish Tg:exoc5 mutants. Mice with postnatal RPE-specific loss of Exoc5 showed retinal thinning associated with compromised visual function and loss of visual photoreceptor pigments. This retinal phenotype in Exoc5-/- mice was present at 20 weeks and was more severe at 27 weeks, indicating a progressive disease phenotype. We previously showed that the exocyst is necessary for photoreceptor ciliogenesis and retinal development. Here, we report that exoc5 mutant zebrafish and mice with RPE-specific genetic ablation of Exoc5 develop abnormal RPE pigmentation, resulting in retinal cell dystrophy and loss of visual pigments associated with compromised vision. As RPE cells are “downstream” of photoreceptor cells in the visual process, these data suggest exocyst-mediated retrograde communication and dependence between the RPE and photoreceptors.
ARTICLE | doi:10.20944/preprints202103.0744.v1
Subject: Medicine & Pharmacology, Ophthalmology Keywords: glaucoma; independent prescribing; optometrist; visual fields; intraocular pressure; shared care
Online: 30 March 2021 (13:52:26 CEST)
Aim: To report 3-year outcomes of a community shared care scheme run by specialised independent prescribing (IP) optometrists for stable glaucoma and ocular hypertension (OHT) patients in West Kent, England. Purpose: Shared care schemes for glaucoma exist to alleviate the burden on Hospital Eye Service (HES) glaucoma clinics. We studied the effectiveness of community care by highly trained and qualified IP optometrists in terms of disease stability and referral rate into the HES. Methods: Retrospective longitudinal review of 200 eyes with stable early- to moderate-stage glaucoma and OHT followed up in two specialist optometry practices. Outcome measures included visual field mean deviation (VFMD), intraocular pressure (IOP), changes to treatment, and referral rate into the HES. Inclusion criteria covered all patients with OHT and glaucoma (open angle and primary angle closure) referred for community follow-up; incomplete data sets were excluded. Results: Mean age was 71 years (range 28-93 years) with an equal male-to-female ratio; n = 159 at year 3. Both outcomes showed no significant change from baseline at the 12- or 24-month time points. However, a significant change from baseline at 36 months was observed for both outcomes: a mean reduction of 0.7 mmHg in IOP and a mean reduction of 0.3 dB in VFMD. There was a statistically significant change in the number of drops used at 36 months (p = 0.001). Eleven patients had a change in medication within 3 years. One patient was referred back to the HES for uncontrolled IOP and consideration of trabeculectomy. Conclusion: Community follow-up of stable cases of glaucoma and OHT by highly qualified IP optometrists was safe, with disease stability maintained and few referrals back to the HES.
Subject: Engineering, Biomedical & Chemical Engineering Keywords: visual cortical prosthesis; brain-machine interface; electrical stimulation; prosthetic vision
Online: 23 March 2021 (10:42:30 CET)
Electrical stimulation of the visual cortices has the potential to restore vision to blind individuals. Until now, the results of visual cortical prosthetics have been limited, as no prosthesis has restored fully functional vision, but the field has shown renewed interest in recent years thanks to wireless and other technological advances. However, several scientific and technical challenges must still be addressed before these new devices achieve their expected therapeutic benefit. One of the main challenges is the electrical stimulation of the brain itself. In this review, we analyze the results in electrode-based visual cortical prosthetics from the electrical point of view. We first briefly describe what is known about the electrode-tissue interface and the safety of electrical stimulation. We then focus on the psychophysics of prosthetic vision and the state of the art on the interplay between electrical stimulation of the visual cortex and phosphene perception. Lastly, we discuss the challenges and perspectives of visual cortex electrical stimulation and electrode array design for developing the next generation of implantable cortical visual prostheses.
REVIEW | doi:10.20944/preprints202103.0218.v1
Subject: Engineering, Automotive Engineering Keywords: Image accessibility; touchscreen; nonvisual feedback; blind; visual impairment; systematic review
Online: 8 March 2021 (13:41:50 CET)
A number of studies have been conducted to improve the accessibility of images on touchscreen devices for screen reader users. In this study, we conducted a systematic review of 33 papers to gain a holistic understanding of existing approaches and to suggest a research road map based on identified gaps. As a result, we identified the types of images, visual information, input devices, and feedback modalities that have been studied for improving image accessibility on touchscreen devices. Findings also revealed that little work has studied how to automate the generation of image-related information, and that screen reader users play important roles during evaluation but not during the design process. We then introduce two of our recent studies on the accessibility of artwork and comics, AccessArt and AccessComics, respectively. Based on the identified key challenges, we suggest a research agenda for improving image accessibility for screen reader users.
REVIEW | doi:10.20944/preprints202102.0548.v1
Subject: Biology, Anatomy & Morphology Keywords: Huanglongbing, Candidatus Liberibacter, Asian citrus psyllid, blotchy mottle, visual symptoms
Online: 24 February 2021 (11:45:39 CET)
Citrus greening, which is caused mainly by bacteria, is one of the most severe citrus diseases, affecting all citrus cultivars and forcing the deliberate removal of trees worldwide. This infectious disease is not spread by wind, rain, or contact with contaminated personnel. The primary vector that spreads the disease while feeding on citrus leaves is the Asian citrus psyllid (ACP), a minuscule insect. Management of citrus greening is also very costly, as no effective technique has been developed to cure the disease other than removing all infected plants from healthy ones to limit dissemination of the pathogen. Identifying citrus greening is also difficult, as its symptoms resemble those of other citrus diseases and of nutrient deficiency. Asymmetrical blotchy mottling patterns on leaves are the main symptom used to detect the disease. Here we discuss visual signs of citrus greening, which will ultimately help grassroots-level farmers identify and contain the disease before it drastically impacts citrus plants. To distinguish citrus greening from nutrient deficiency, we also discuss the pen test method of determining whether symptoms are symmetrical or asymmetrical across the mid-vein.
ARTICLE | doi:10.20944/preprints202102.0016.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: development; adolescents; perceptual inhibition; joint visual search task; executive function
Online: 1 February 2021 (11:38:03 CET)
Recent studies suggest that developmental curves in adolescence related to the development of executive functions could follow a non-linear trajectory with progressions and retrogressions. Therefore, the present study analyzes the pattern of development of Perceptual Inhibition (PI) across all stages of adolescence (early, middle, and late) in intervals of one year. To this aim, we worked with a sample of 275 participants between 10 and 25 years of age, who performed a joint visual search task (to measure PI). We fitted ex-Gaussian functions to the probability distributions of the mean response time across the sample and performed an analysis of covariance (ANCOVA). The results showed that the 10- to 13-year-old groups performed similarly in the task and differed from the 14- to 19-year-old participants. We found significant differences between the oldest group and all the other groups. We discuss the important changes that can be observed in relation to the non-linear developmental trajectory that PI would show during adolescence.
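The ex-Gaussian fitting step mentioned in this abstract can be sketched with SciPy's `exponnorm` distribution (the sum of a Gaussian and an exponential). The response times and parameter values below are simulated for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

# Simulate response times (ms) from an ex-Gaussian distribution:
# Gaussian component (mu, sigma) plus exponential tail (tau).
rng = np.random.default_rng(0)
mu, sigma, tau = 450.0, 60.0, 120.0          # hypothetical values
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# SciPy parameterizes exponnorm with K = tau / sigma,
# loc = mu, scale = sigma; fit() does maximum-likelihood estimation.
K, loc, scale = stats.exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(mu_hat, sigma_hat, tau_hat)
```

With enough trials per participant, the recovered mu, sigma, and tau can then be compared across age groups, e.g. with an ANCOVA as in the abstract.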
ARTICLE | doi:10.20944/preprints201908.0228.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: EEG; luminance; brightness; IAPS; STFT; feature extraction; visual processing; emotion
Online: 22 August 2019 (03:43:25 CEST)
The aim of this study was to examine the effect of brightness, a perceptual property of visual stimuli, on brain responses obtained during visual processing of those stimuli. For this purpose, brain responses to changes in brightness were explored comparatively using different emotional images (pleasant, unpleasant, and neutral) with different luminance levels. Electroencephalography recordings from 12 electrode sites in 31 healthy participants were used. Power spectra obtained from the analysis of the recordings using the short-time Fourier transform were analyzed, and a statistical analysis was performed on features extracted from these power spectra. Statistical findings obtained from the electrophysiological data were compared with those obtained from behavioral data. The results showed that the brightness of visual stimuli affected the power of brain responses depending on frequency, time, and location. According to the statistically verified findings, the distinctive effect of brightness occurred in the parietal and occipital regions for all three types of stimuli. Accordingly, an increase in the brightness of pleasant and neutral images increased the average power of responses in the parietal and occipital regions, whereas an increase in the brightness of unpleasant images decreased the average power of responses in these regions. However, an increase in brightness for all three types of stimuli reduced the average power of frontal and central region responses (except for the 100-300 ms time window for unpleasant stimuli). The statistical results obtained for unpleasant images were in accordance with the behavioral data. The results also revealed that the brightness of visual stimuli could be represented by changes in the activity power of the cortex.
The main contribution of this research was to comprehensively examine the brightness effect on brain activity for images with different emotional content and different frequency bands at different time windows of visual processing across different brain regions. The findings emphasize that the brightness of visual stimuli should be treated as an important parameter in studies using emotional image techniques such as image classification, emotion evaluation, and neuromarketing.
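A minimal illustration of the kind of STFT-based power feature extraction described above, run on a synthetic one-channel signal rather than the study's EEG recordings; the sampling rate, window length, and band limits are assumptions for the sketch.

```python
import numpy as np
from scipy import signal

# Synthetic "EEG": a 10 Hz oscillation plus noise, 4 s at 250 Hz.
fs = 250                                   # sampling rate (assumed)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Short-time Fourier transform with 1 s windows, then power per bin.
f, times, Zxx = signal.stft(eeg, fs=fs, nperseg=fs)
power = np.abs(Zxx) ** 2

# Average power in the alpha band (8-13 Hz) per time window:
alpha = (f >= 8) & (f <= 13)
alpha_power = power[alpha].mean(axis=0)
print(alpha_power.shape)
```

Features like `alpha_power`, computed per electrode, band, and time window, are the sort of quantities the study's statistical comparisons could operate on.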
ARTICLE | doi:10.20944/preprints201804.0313.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: visual question answering; cross-modal multistep fusion network; attention mechanism
Online: 24 April 2018 (09:09:45 CEST)
Visual question answering (VQA) is receiving increasing attention from researchers in both the computer vision and natural language processing fields. There are two key components in the VQA task: feature extraction and multi-modal fusion. For feature extraction, we introduce a novel co-attention scheme combining Sentence-guide Word Attention (SWA) and Question-guide Image Attention (QIA) in a unified framework. Specifically, the textual attention SWA relies on the semantics of the whole question sentence to calculate the contributions of different question words to the text representation. For multi-modal fusion, we propose a “Cross-modal Multistep Fusion (CMF)” network to generate multistep features and achieve multiple interactions between the two modalities, rather than focusing on modeling complex interactions between two modalities as most current feature fusion methods do. To avoid a linear increase in computational cost, we share the parameters for each step in the CMF. Extensive experiments demonstrate that the proposed method achieves competitive or better performance than the state of the art.
ARTICLE | doi:10.20944/preprints202107.0385.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Visual Question Generation; Visual Question Answering; Variational Autoencoders; Radiology Images; Domain Knowledge; UMLS; Data Augmentation; Computer Vision; Natural Language Processing; Artificial Intelligence; Medical Domain.
Online: 16 July 2021 (16:18:56 CEST)
Visual Question Generation (VQG) from images is a rising research topic in both fields of natural language processing and computer vision. Although there are some recent efforts towards generating questions from images in the open domain, the VQG task in the medical domain has not been well-studied so far due to the lack of labeled data. In this paper, we introduce a goal-driven VQG approach for radiology images called VQGRaD that generates questions targeting specific image aspects such as modality and abnormality. In particular, we study generating natural language questions based on the visual content of the image and on additional information such as the image caption and the question category. VQGRaD encodes the dense vectors of different inputs into two latent spaces, which allows generating, for a specific question category, relevant questions about the images, with or without their captions. We also explore the impact of domain knowledge incorporation (e.g., medical entities and semantic types) and data augmentation techniques on visual question generation in the medical domain. Experiments performed on the VQA-RAD dataset of clinical visual questions showed that VQGRaD achieves 61.86% BLEU score and outperforms strong baselines. We also performed a blinded human evaluation of the grammaticality, fluency, and relevance of the generated questions. The human evaluation demonstrated the better quality of VQGRaD outputs and showed that incorporating medical entities improves the quality of the generated questions. Using the test data and evaluation process of the ImageCLEF 2020 VQA-Med challenge, we found that relying on the proposed data augmentation technique to generate new training samples by applying different kinds of transformations, can mitigate the lack of data, avoid overfitting, and bring a substantial improvement in medical VQG.
ARTICLE | doi:10.20944/preprints202203.0252.v1
Subject: Mathematics & Computer Science, Other Keywords: Audio-Visual Technologies; Blended Learning; Pedagogy; Virtual Learning Environments; Virtual Reality
Online: 17 March 2022 (11:05:27 CET)
The Covid-19 pandemic caused a shift in teaching practice towards blended learning for many Higher Education institutions. This led to the rapid adoption of certain digital technologies within existing teaching structures as a means to meet student access needs and facilitate learning. Integration of these technologies caused numerous challenges for practitioners and often provided mixed results. This paper is an attempt to summarise and extend pre-Covid pedagogical research to leverage digital immersive technologies for blended teaching in the post-pandemic era. Focus is given towards the evolution of Virtual Learning Environments through elements of immersive audio-visual technologies, which are shown to be effective when coupled in a blended approach. It is both a review of these methodologies and a case study of the I-Ulysses: Virtual Learning Environment as a point of comparison for evaluating the review.
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Data Visualization; Visual Analytics; Natural Language Processing; Dark Data; Pattern Recognition
Online: 28 October 2020 (07:47:26 CET)
Over the years, there has been a significant rise in the world's scientific knowledge. However, most of it lacks structure and is often termed dark data. Both humans and expert systems continually face difficulty in analyzing and comprehending such overwhelming amounts of information, which is crucial for solving several real-world problems. Information and data visualization techniques offer a promising solution for exploring such data by allowing quick comprehension of information, discovery of emerging trends, and identification of relationships and patterns. In this tutorial, we utilize the rich PubMed corpus, comprising more than 30 million citations from the biomedical literature, to visually explore and understand its underlying key insights using various information visualization techniques. With this study, we aim to diminish the limitations of human cognition and perception in handling and examining such large volumes of data by speeding up decision making and pattern recognition, enabling decision-makers to fully understand data insights and make informed decisions.
ARTICLE | doi:10.20944/preprints201909.0227.v1
Subject: Keywords: non-visual effect; metamerism; light-emitting diodes (LEDs); lighting; spectral optimization
Online: 19 September 2019 (15:43:06 CEST)
The growing awareness of the biological effects of artificial light on humans has stimulated ample research. It is now widely accepted that asynchrony between artificial and natural light-dark cycles can elicit severe detrimental health effects. New research has been devoted to lighting solutions that dynamically change their color to mimic the spectral changes of daylight and to account for human needs. However, in some situations the visual properties of light must be preserved: for example, professional TV video editors doing color correction in post-production and shift workers must work under standardized lighting conditions. We have investigated the possibility of tuning circadian effects using white lights that are spectrally different but have similar color coordinates and thus appear as a similar white tone. Our simulation results indicate that it is possible to modulate circadian light effects by combining LEDs for neutral white (4000 K), a widely used white tone for indoor lighting in Europe. The results also show that solutions combining single-color LEDs do not, however, meet the quality criteria from the visual point of view, because their color rendering ability decays to unacceptably low levels. Combining narrowband LEDs with a broadband white LED improves the color rendering quality, and we show how far circadian light effects can be tuned according to common theoretical models. The aim is to reflect daylight conditions with artificial lighting, with high melatonin suppression in the morning and low melatonin suppression in the evening. Accordingly, we show the maximum and minimum circadian effect possible with the same set of LEDs.
ARTICLE | doi:10.20944/preprints201806.0211.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: metacontrast; attention; exogenous attention; endogenous attention; visual masking; masking attention interactions
Online: 13 June 2018 (11:06:02 CEST)
To efficiently use its finite resources, the visual system selects for further processing only a subset of the rich sensory information. Visual masking and spatial attention control the information transfer from visual sensory-memory to visual short-term memory. There is still a debate whether these two processes operate independently or interact, with empirical evidence supporting both arguments. However, recent studies pointed out that earlier studies showing significant interactions between common-onset masking and attention suffered from ceiling and/or floor effects. Our review of previous studies reporting metacontrast-attention interactions revealed similar artifacts. Therefore, we investigated metacontrast-attention interactions by using an experimental paradigm in which ceiling/floor effects were avoided. We also examined whether metacontrast masking is differently influenced by endogenous and exogenous attention. We analyzed mean absolute-magnitude of response-errors and their statistical distribution. Our results support the hypothesis that metacontrast and endogenous/exogenous attention are largely independent with negligible likelihood for interactions. Moreover, statistical modeling of the distribution of response-errors suggests weak interactions modulating the probability of “guessing” behavior for some observers in both types of attention. Nevertheless, our data suggest that any joint effect of attention and metacontrast can be adequately explained by their independent and additive contributions.
ARTICLE | doi:10.20944/preprints201704.0030.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Automatic localization; human visual mechanism; superpixel contrast feature; ultrasound breast tumor.
Online: 5 April 2017 (15:50:40 CEST)
The human visual mechanism (HVM) can quickly localize the most salient object in a scene, a property that has been widely applied to natural image segmentation. In ultrasound (US) breast images, the tumor is more salient than background areas because of its higher contrast. In this paper, we develop a novel automatic localization method based on the HVM for automatic segmentation of US breast tumors. First, the input image is smoothed by convolution with a linearly separable Gaussian filter and then subsampled into a 9-layer Gaussian pyramid. Intensity, blackness ratio, and superpixel contrast features are then combined to compute a saliency map, in which the Winner-Take-All algorithm is used to localize the most salient region, marked with a circle on the localized target. Finally, the circle is taken as the initial contour of the CV (Chan-Vese) level set to complete the extraction of the breast tumor. The localization method was tested on 400 US breast images, among which 378 images had higher saliency in the tumor than in background areas and were localized successfully, a high accuracy of 92.00%. The HVM localization method can be used to localize tumors; combined with it, the CV level set achieves fully automatic segmentation of US breast tumors. By combining intensity, blackness ratio, and superpixel contrast features, the proposed localization method successfully avoids interference caused by background areas with low echo and high intensity. Multi-object localization of US breast images can be considered in future work.
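The smooth-and-subsample pyramid in the first step can be sketched as follows. This is a generic Gaussian pyramid under assumed settings (filter sigma, level count, synthetic input), not the paper's exact configuration.

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(image, levels=6, sigma=1.0):
    """Build a Gaussian pyramid: smooth with a (separable) Gaussian
    filter, then subsample by a factor of 2 at each level."""
    pyramid = [image]
    for _ in range(levels - 1):
        smoothed = ndimage.gaussian_filter(pyramid[-1], sigma)
        pyramid.append(smoothed[::2, ::2])   # keep every other row/column
    return pyramid

# Synthetic stand-in for a grayscale ultrasound image.
img = np.random.default_rng(2).random((512, 512))
pyr = gaussian_pyramid(img, levels=6)
print([p.shape for p in pyr])
```

Saliency features such as intensity contrast are then computed across pyramid levels; a 9-layer pyramid as in the paper simply uses `levels=9` on a sufficiently large input.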
ARTICLE | doi:10.20944/preprints202209.0127.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Image defogging; visual enhancement evaluation; edge detection; deep neural networks; autonomous systems
Online: 8 September 2022 (15:37:03 CEST)
Fog, haze, and smoke are common atmospheric phenomena that dramatically compromise the overall visibility of a scene, critically affecting features such as illumination, contrast, and contour detection of objects. The decrease in visibility compromises the performance of computer vision algorithms such as pattern recognition and segmentation, some of them highly relevant to decision-making in the security and autonomous vehicle industries. Several dehazing methods have been proposed; however, to the best of our knowledge, all proposed metrics either compare the defogged image to its ground truth to evaluate defogging algorithms or need to estimate parameters through physical models. This hinders progress in the field, as obtaining proper ground truth images is costly and time-consuming, and physical parameters depend greatly on scene conditions. This paper tackles the issue by proposing a contour-based metric for image defogging evaluation that does not need a ground truth image. The proposed metric requires only the original hazy RGB image and the RGB image after the defogging procedure. A comparison of the proposed metric with metrics currently used in the NTIRE 2018 defogging challenge demonstrates its effectiveness in a general setting, showing results comparable to conventional metrics.
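The general idea of a reference-free, contour-based score can be illustrated with a toy sketch: compare edge maps of the hazy input and its defogged output and count newly visible contour pixels. The gradient operator, threshold, and test images below are assumptions for illustration, not the paper's actual metric.

```python
import numpy as np
from scipy import ndimage

def edge_gain(hazy, defogged, thresh=0.1):
    """Toy reference-free score: fraction of edge pixels in the
    defogged image that were not visible in the hazy input."""
    def edges(img):
        gx = ndimage.sobel(img, axis=1)      # horizontal gradient
        gy = ndimage.sobel(img, axis=0)      # vertical gradient
        return np.hypot(gx, gy) > thresh     # binary edge map
    e_hazy, e_defog = edges(hazy), edges(defogged)
    new_edges = e_defog & ~e_hazy
    return new_edges.sum() / max(e_defog.sum(), 1)

# Synthetic check: a low-contrast ("hazy") ramp vs a high-contrast one.
hazy = np.tile(np.linspace(0.45, 0.55, 64), (64, 1))
clear = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(edge_gain(hazy, clear))
```

Scoring only the two observed images, with no ground truth, is what makes such a metric usable on real captured scenes where a haze-free reference cannot be obtained.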
ARTICLE | doi:10.20944/preprints202208.0282.v2
Subject: Engineering, Civil Engineering Keywords: Computer Program; User manual; Visual Basic; Solid Slab; simply supported
Online: 29 August 2022 (04:57:59 CEST)
This study's main target was to analyze and design rectangular, edge-supported, two-way solid slabs using ES EN 1992-1-1:2015. Slab design is often carried out either manually or with design and analysis software. However, some software cannot accommodate a given country's standard codes. For example, in Ethiopia, analysis and design of two-way solid slabs is currently done using readily available Excel sheet templates. Working this way can cause problems: a structure analyzed with SAP, SAFE, or other international software that applies international codes such as the Eurocodes rather than ES EN 1992, and then designed with an Excel sheet, can produce unsafe and uneconomical analysis and design results. In this paper, the slab is analyzed and designed based on the chosen concrete grade, reinforcement bar diameter, and steel grade, with calculations such as load, moment, shear, and deflection checks performed using the moment coefficient method for analysis and design, and Microsoft Visual Basic 2010 for coding. All input and output values are expressed in International System (SI) units. Manual calculation is time-consuming and error-prone, whereas this computer program increases computational accuracy and saves time. The procedure followed was to perform the manual calculation first and then run SADSE2021; the two sets of results were 99.9% identical. A disadvantage of this method is that it cannot produce detailed drawings.
ARTICLE | doi:10.20944/preprints202107.0624.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: knowledge representation; electronic health records; health information systems; content identification; visual interface
Online: 28 July 2021 (10:42:49 CEST)
Medical records contain many terms that are difficult to process. Our aim in this study is to allow visual exploration of the information in medical databases whose texts present a large number of syntactic variations and abbreviations, through an interface that facilitates content identification, navigation, and information retrieval. We propose the use of multi-term tag clouds as content representation tools and as assistants for browsing and querying tasks. The tag cloud generation is achieved through a novel mathematical method that keeps related terms grouped together within the tags. To evaluate this proposal, we used a database with 24,481 records. Twenty-three expert users in the medical field completed a survey to evaluate the properties of the generated tag clouds, and we obtained a precision of 0.990, a recall of 0.870, and an F1-score of 0.904 in the evaluation of the tag cloud as an information retrieval tool. The main contribution of this approach is that we automatically generate a visual interface over the text capable of capturing the semantics of the information and facilitating access to medical records.
ARTICLE | doi:10.20944/preprints202010.0455.v1
Subject: Engineering, Automotive Engineering Keywords: KINECT; industrial robot; vision system; RobotStudio; Visual Studio; gesture control; voice control
Online: 22 October 2020 (09:57:07 CEST)
The paper presents the possibility of using the KINECT v2 module to control an industrial robot by means of gestures and voice commands. It describes the elements of creating software for off-line and on-line robot control. The application for the KINECT module was developed in C# in the Visual Studio environment, while the industrial robot control program was developed in the RAPID language in the RobotStudio environment. Developing a two-threaded application in RAPID made it possible to separate two independent tasks for the IRB120 robot. The main task of the robot is performed in thread no. 1 (responsible for movement). The simultaneously running thread no. 2 ensures continuous communication with the KINECT system and provides information about gesture and voice commands in real time without any interference in thread no. 1. This solution allows the robot to work in industrial conditions without the communication task negatively affecting the robot's cycle times. Thanks to the development of a digital twin of the real robot station, tests of proper application functioning were conducted in off-line mode (without using a real robot). The obtained results were then verified on-line (on the real test station). Tests of gesture recognition were carried out: the robot recognized all programmed gestures. The recognition and execution of voice commands was also tested. A difference in task completion time between the actual and virtual stations was observed; the average difference was 0.67 s. Finally, the impact of interference on the recognition of voice commands was examined: with a 10 dB difference between the command and the noise, voice command recognition reached 91.43%. The developed computer programs have a modular structure, which enables easy adaptation to process requirements.
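The two-thread split described above can be illustrated in miniature (Python here, standing in for the RAPID implementation; the command names are invented). Thread no. 2 only forwards commands into a queue, so the motion thread consumes them without being blocked by communication:

```python
import queue
import threading

def command_listener(commands, out_q):
    """Thread no. 2: receives gesture/voice commands and forwards them."""
    for cmd in commands:
        out_q.put(cmd)
    out_q.put(None)  # sentinel: no more commands

def motion_loop(in_q, executed):
    """Thread no. 1: performs the robot's movement task for each command."""
    while True:
        cmd = in_q.get()
        if cmd is None:
            break
        executed.append(cmd)  # a real robot would execute the motion here

q = queue.Queue()
done = []
t2 = threading.Thread(target=command_listener, args=(["home", "pick", "place"], q))
t1 = threading.Thread(target=motion_loop, args=(q, done))
t1.start(); t2.start()
t2.join(); t1.join()
```

The queue decouples the two tasks, which is the property the abstract credits for keeping cycle times unaffected by communication.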
ARTICLE | doi:10.20944/preprints202009.0740.v1
Subject: Life Sciences, Biochemistry Keywords: balance training; real-time visual feedback; smart wearable devices; center of pressure
Online: 30 September 2020 (11:00:33 CEST)
This study aims to explore the effect of real-time visual feedback (VF) of the center of pressure (COP), provided by intelligent insoles, on balance training in one-leg stance (OLS) and tandem stance (TS) postures. Thirty healthy female college students were randomly assigned to a visual feedback balance training group (VFT), a non-visual feedback balance training group (NVFT), and a control group (CG). The balance training included OLS, tandem stance with the dominant leg behind (TSDL), and tandem stance with the non-dominant leg behind (TSNDL). Training lasted 4 weeks, with 30-minute sessions at one-day intervals. There was a significant Group*Time interaction effect for the COP parameters in OLS (p<0.05), but not in TS (p>0.05). For the main effects of the COP parameters, there was a significant difference across Time (p<0.05). COP displacement, velocity, radius, and area in the VFT group significantly decreased after training (p<0.05). Therefore, visual feedback from intelligent auxiliary equipment during balance training can enhance the benefit of training. The use of smart wearable devices in OLS balance training may improve the integration of visual and physical balance abilities.
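The COP displacement and velocity outcomes compared above can be computed from a sampled COP trace roughly as follows (a simplified sketch; the sampling rate and coordinates below are invented):

```python
import math

def cop_path_and_velocity(samples, fs):
    """Total COP path length (displacement) and mean velocity.

    samples: list of (x, y) COP coordinates in cm; fs: sampling rate in Hz.
    """
    path = sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))
    duration = (len(samples) - 1) / fs
    return path, path / duration

# Invented 3-sample trace at 2 Hz
path, vel = cop_path_and_velocity([(0, 0), (3, 4), (3, 4)], fs=2)
```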
REVIEW | doi:10.20944/preprints202003.0020.v1
Subject: Medicine & Pharmacology, Anesthesiology Keywords: visual patient; patient monitoring; avatar-based technology; situation awareness; user-centered design
Online: 2 March 2020 (00:56:03 CET)
Visual Patient technology is a situation awareness–oriented visualization technology that translates numerical and waveform patient monitoring data into a new user-centered visual language. Vital sign values are converted into colors, shapes, and rhythmic movements—a language humans can easily perceive and interpret—on a patient avatar model in real time. In this review, we summarize the current state of the research on the Visual Patient, including the technology, its history, and its scientific context. We also provide a summary of our primary research and a brief overview of research work on similar user-centered visualizations in medicine. In several computer-based studies under various experimental conditions, Visual Patient transferred more information per unit time, increased perceived diagnostic certainty, and lowered perceived workload. Eye tracking showed the technology worked because of the way it synthesizes and transforms vital sign information into new and logical forms corresponding to the real phenomena. The technology could be particularly useful for improving situation awareness in settings with high cognitive demand or when users must make quick decisions. This comprehensive review of Visual Patient research is the foundation for an evaluation of the technology in clinical applications, starting with a high-fidelity simulation study in early 2020.
ARTICLE | doi:10.20944/preprints201804.0040.v1
Subject: Medicine & Pharmacology, Ophthalmology Keywords: Methanol exposure; toxic effects; subcontractor manufacturing; dispatched workers; visual defect; neurobehavioral function
Online: 3 April 2018 (16:11:15 CEST)
An outbreak of occupational methanol poisoning occurred in 2016 in small-scale third-tier subcontractor factories of a large-scale smartphone manufacturer in the Republic of Korea. To investigate the working environment and the health effects of methanol exposure among co-workers of the poisoning cases, we performed a cross-sectional study on 155 workers at the five aluminum CNC cutting factories. Air and urinary methanol concentrations were measured by gas chromatography, and the health examination included symptoms, ophthalmological examinations, and neurobehavioral tests. Multiple logistic regression analyses controlled for age and sex were conducted to reveal associations between employment duration and symptoms. Air concentrations of methanol in factories A and E ranged from 228.5 to 2220.0 ppm. Mean urinary methanol concentrations of the workers in each factory ranged from 3.5 mg/L up to 91.2 mg/L. The odds ratios for symptoms of deteriorating vision and CNS effects increased with employment duration, after adjusting for age and sex. Four cases of optic nerve injury and two cases of decreased neurobehavioral function were found among co-workers of the victims. This study showed that methanol exposure under poor environmental control not only produces eye and CNS symptoms but also affects neurobehavioral function and the optic nerve.
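The reported odds ratios come from multiple logistic regression adjusted for age and sex; underlying such estimates is the crude odds ratio, the cross-product ratio of a 2x2 exposure-symptom table (the counts below are invented for illustration):

```python
def crude_odds_ratio(a, b, c, d):
    """Cross-product odds ratio for a 2x2 table:
    a = exposed with symptom,   b = exposed without symptom,
    c = unexposed with symptom, d = unexposed without symptom.
    """
    return (a * d) / (b * c)

odds = crude_odds_ratio(10, 5, 4, 8)  # invented counts
```

The adjusted odds ratios in the study additionally condition on age and sex, which a simple cross-product cannot do.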
ARTICLE | doi:10.20944/preprints201710.0093.v1
Subject: Engineering, Automotive Engineering Keywords: SAR image; Visual attention model; Texture Saliency; Feature map; Focus of attention
Online: 13 October 2017 (17:08:14 CEST)
Target detection in synthetic aperture radar (SAR) remote sensing images is a fundamental but challenging problem in satellite image analysis; it plays an important role in a wide range of applications and has received significant attention in recent years. The human visual system detects visual saliency extraordinarily quickly and reliably, yet computational modeling of SAR image scenes remains a challenge. This paper analyzes the defects and shortcomings of traditional visual models when applied to SAR images and then proposes a visual attention model designed for SAR images. The model follows the basic framework of the classical ITTI model but selects and extracts texture and other features that describe SAR images better. We propose a new algorithm for computing the local texture saliency of the input image, from which the model constructs the corresponding feature saliency maps. Next, a new feature-fusion mechanism replaces the linear additive mechanism of classical models to obtain the overall saliency map. Finally, taking into account the gray-scale characteristics of the focus of attention (FOA) in the saliency maps of all features, the model chooses the best saliency representation. Through a multi-scale competition strategy, filtering and threshold segmentation of the saliency maps select the salient regions accurately, completing visual saliency detection in SAR images. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the visual models. The results show that our model outperforms classical visual models: it reduces the false alarms caused by speckle noise, and its detection speed is greatly improved, by 25% to 45%.
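The paper's local texture saliency algorithm is not detailed in the abstract, but a simple stand-in conveys the idea: score each pixel by the intensity variance in its local window, so homogeneous regions score low (a pure-Python toy; the window radius and test image are invented, not the authors' method):

```python
def local_variance_saliency(img, r=1):
    """Toy texture-saliency map: per-pixel intensity variance in a
    (2r+1) x (2r+1) window, clipped at the image borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[ii][jj]
                      for ii in range(max(0, i - r), min(h, i + r + 1))
                      for jj in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(window) / len(window)
            out[i][j] = sum((v - mean) ** 2 for v in window) / len(window)
    return out
```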
REVIEW | doi:10.20944/preprints202010.0388.v1
Subject: Engineering, Automotive Engineering Keywords: Autism Spectrum Disorder; activity analysis; automated detection; repetitive behavior; abnormal gait; visual saliency
Online: 19 October 2020 (14:49:24 CEST)
Autism Spectrum Disorder (ASD) is a neuro-developmental disorder that limits social interactions, cognitive skills, and abilities. Since ASD can last throughout an affected person's entire life cycle, diagnosis at early onset can yield a significant positive impact. The current medical diagnostic systems (e.g., DSM-5/ICD-10) are somewhat subjective and rely purely on behavioral observation of symptoms; hence, some individuals go misdiagnosed or are diagnosed late. Therefore, researchers have focused on developing data-driven automated diagnosis systems with less screening time, low cost, and improved accuracy while significantly reducing professional intervention. Human Activity Analysis (HAA) is considered one of the most promising niches in computer vision research. This paper analyzes its potential for the automated detection of autism by tracking characteristics exclusive to autistic individuals, such as repetitive behavior, atypical walking style, and unusual visual saliency. This review provides a detailed inspection of HAA-based autism detection literature published from 2011 onwards, depicting core approaches, challenges, probable solutions, available resources, and scope for future exploration in this arena. According to our study, deep learning outperforms machine learning in ASD detection, with classification accuracies of 76% to 95% on different datasets comprising video, image, or skeleton data recorded while participants performed a large number of actions. Machine learning, however, provides satisfactory results on datasets with a small number of action classes, with accuracies ranging from 60% to 93% across numerous studies. We hope this extensive review will provide a comprehensive guideline for researchers in this field.
REVIEW | doi:10.20944/preprints201912.0179.v1
Subject: Life Sciences, Biophysics Keywords: globular set; category theory; multidimensional; visual recognition; drug-resistant epilepsy; transcranial magnetic stimulation.
Online: 13 December 2019 (10:37:03 CET)
Once a wheat sheaf has been sealed and tied up, its packed-down straws display the same orientation and zero divergence. This observation brings us to the mathematical notion of a presheaf, i.e., a topological structure in which diverging functions are locally superimposed. We show how the concepts of presheaves and the correlated globular sets, borrowed from category theory and algebraic topology, allow a well-founded mathematical approach to otherwise elusive activities of the brain. The mathematical assessment of brain functions in terms of presheaves: a) explains why spontaneous random spikes synchronize; b) leads to the counterintuitive notion of antidromic effects in neuronal spikes: when an entrained oscillation propagates from A to B, changes in B lead to changes in A. We provide testable predictions: a) we suggest the proper locations of transcranial magnetic stimulation coils to improve the clinical outcomes of drug-resistant epilepsy; b) we advocate that axonal stimulation by external sources backpropagates and alters the neuronal electric oscillatory frequency. Further, we describe how the hierarchical information transmission inside globular sets provides fresh insights concerning different issues at various coarse-grained scales, such as object persistence, memory reinforcement in spite of random noise, and Bayesian inferential circuits.
Subject: Engineering, Marine Engineering Keywords: unmanned surface vehicles; optical visual perception; image stabilization; defogging; target detection; target tracking
Online: 24 November 2019 (16:54:46 CET)
Unmanned surface vehicles have the advantages of maneuverability, concealment, a wide activity area, and low cost of use, and therefore have broad application prospects. This has made unmanned surface vehicles a research hotspot at home and abroad, and sensing technology is the basis on which unmanned surface vehicles perform their tasks. Perception technology based on optical vision has the advantages of convenient application, relatively low cost, easy data acquisition, and a large amount of information, and has been widely studied by scholars at home and abroad. This paper discusses the research on optical vision for unmanned surface vehicles from five aspects: first, water-surface image preprocessing, mainly including image stabilization and defogging enhancement; second, water boundary detection; third, optical vision target detection; fourth, surface target tracking methods; and finally, a summary and forecast of optical vision research for unmanned surface vehicles.
Subject: Engineering, Biomedical & Chemical Engineering Keywords: steady-state visual evoked potential; brain-computer interface; direction; eccentricity; canonical correlation analysis
Online: 15 October 2019 (12:21:12 CEST)
The feasibility of a steady-state visual evoked potential (SSVEP) brain-computer interface (BCI) with a single flicker stimulus for multiple-target decoding has been demonstrated in a number of recent studies. Single-flicker BCIs have mainly employed direction information for encoding the targets, i.e., different targets are placed at different spatial directions relative to the flicker stimulus. The present study explored whether visual eccentricity information can also be used to encode targets, with the aim of increasing the number of targets in single-flicker BCIs. A total of 16 targets was encoded, placed at eight spatial directions and two eccentricities (2.5° and 5°) relative to a 12 Hz flicker stimulus. Whereas distinct SSVEP topographies were elicited when participants gazed at targets in different directions, targets at different eccentricities were mainly represented by different signal-to-noise ratios (SNRs). Using a canonical correlation analysis-based classification algorithm, simultaneous decoding of both direction and eccentricity information was achieved, with an average offline 16-class accuracy of 66.8±16.4% over 12 participants and a best individual accuracy of 90.0%. Our results demonstrate a single-flicker BCI with a substantially increased number of targets, moving towards practical applications.
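The CCA-based decoding generalizes a simple idea: correlate the recorded signal with sinusoidal reference signals at the flicker frequency. A single-channel toy version is sketched below (the sampling rate and the noiseless signal are invented; real CCA jointly handles multiple EEG channels and harmonics):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

fs, f = 240, 12  # invented sampling rate; 12 Hz flicker as in the study
t = [n / fs for n in range(fs)]  # 1 s of samples
ref = [math.sin(2 * math.pi * f * x) for x in t]      # reference at 12 Hz
sig = [0.8 * math.sin(2 * math.pi * f * x) for x in t]  # noiseless toy "SSVEP"
score = pearson(sig, ref)
```

A high correlation with the 12 Hz reference flags an SSVEP response; eccentricity would then be read from the response's SNR, as described above.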
REVIEW | doi:10.20944/preprints202005.0348.v2
Subject: Behavioral Sciences, Applied Psychology Keywords: emotion; visual thalamus; initial evaluation; lateral geniculate nucleus; thalamic reticular nucleus; pulvinar; superior colliculus
Online: 11 May 2021 (10:32:30 CEST)
Current proposals on the temporal sequence of processing of emotional visual stimuli are partially incompatible with growing empirical data. In the majority of them, the initial evaluation structures (IES) postulated to be in charge of the earliest detection of emotional stimuli (i.e., stimuli salient for the individual) are high-order structures (i.e., structures receiving visual inputs after several synapses). Thus, their response latencies cannot account for the first visual cortex response to emotional stimuli (peaking at 80 ms in humans). Additionally, these proposed structures lack the necessary infrastructure to locally analyze the visual features of the stimulus (shape, color, motion, etc.) that define it as emotional. In particular, the amygdala has been defended as the cornerstone IES in humans as well, and cortical areas such as the ventral prefrontal cortex or the insula have also been proposed to intervene in this initial evaluation process. The present review describes several first-order brain structures (i.e., those receiving visual inputs after one synapse), and complementary second-order structures (two synapses), that fulfill both prerequisites: response latencies compatible with the observed activity at the visual cortex, and the necessary architecture to rudimentarily analyze relevant features of the visual stimulation in situ. The visual thalamus, and particularly the lateral geniculate nucleus (LGN), a first-order thalamic nucleus that actively processes visual information, is a good candidate to be the core IES, with the complementary action of the thalamic reticular nucleus (TRN). This LGN-TRN tandem could be supported, also in an ascending, initial evaluation phase, by the pulvinar, a second-order thalamic structure, and by first-order extra-thalamic nuclei (the superior colliculus and certain nuclei of the pretectum and the accessory optic system). In sum, the visual thalamus, scarcely studied in relation to emotional processing, is a serious candidate for the missing link in early emotional evaluation and, in any case, is worth exploring in future research.
Subject: Engineering, Electrical & Electronic Engineering Keywords: automated visual inspection; convolutional neural network; deep learning; pattern classification; semiconductor inspection; wafer map
Online: 7 April 2020 (11:30:38 CEST)
This article presents an automated vision-based algorithm for the die-scale inspection of wafer images captured using scanning acoustic tomography (SAT). This algorithm can find defective and abnormal die-scale patterns, and produce a wafer map to visualize the distribution of defects and anomalies on the wafer. The main procedures include standard template extraction, die detection through template matching, pattern candidate prediction through clustering, and pattern classification through deep learning. To conduct the template matching, we first introduce a two-step method to obtain a standard template from the original SAT image. Subsequently, a majority of the die patterns are detected through template matching. Thereafter, the columns and rows arranged from the detected dies are predicted using a clustering method; thus, an initial wafer map is produced. This map is composed of detected die patterns and predicted pattern candidates. In the final phase of the proposed algorithm, we implement a deep learning-based model to determine defective and abnormal patterns in the wafer map. The experimental results verified the effectiveness and efficiency of our proposed algorithm. In conclusion, the proposed method performs well in identifying defective and abnormal die patterns, and produces a wafer map that presents important information for solving wafer fabrication issues.
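The template-matching step can be sketched with normalized cross-correlation over a toy grayscale image (pure Python; the image and template values are invented, and a production system would use an optimized implementation over real SAT images):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length flattened patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def match_template(img, tpl):
    """Slide tpl over img and return the (row, col) with the highest NCC."""
    th, tw = len(tpl), len(tpl[0])
    flat_tpl = [v for row in tpl for v in row]
    best_score, best_pos = -2.0, (0, 0)
    for i in range(len(img) - th + 1):
        for j in range(len(img[0]) - tw + 1):
            patch = [img[i + r][j + c] for r in range(th) for c in range(tw)]
            score = ncc(patch, flat_tpl)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

# Invented 4x4 "wafer image" with the 2x2 die template embedded at row 1, col 2
img = [[0, 0, 0, 0],
       [0, 0, 9, 5],
       [0, 0, 7, 8],
       [0, 0, 0, 0]]
tpl = [[9, 5], [7, 8]]
```

In the algorithm above, each detected position would become a die entry in the initial wafer map before the clustering and classification stages.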
ARTICLE | doi:10.20944/preprints202002.0187.v1
Subject: Biology, Plant Sciences Keywords: dioecious; DNA quality; flower type; sample preservation method; sex genotype; sex phenotype; visual assay
Online: 14 February 2020 (04:22:20 CET)
Methods for high-quality DNA extraction and knowledge of sex expression and flowering time are essential for applying genomic-assisted breeding and improving the success of hybridization in Guinea yam. A dioecious or monoecious pattern of flowering, and sometimes non-flowering, is a common phenomenon within and between Dioscorea species. Flowering in yam plants raised from botanical seeds often takes an extended period, mostly until the first clonal generation after propagation from tubers. The prolonged testing required to identify plant sex and flowering intensity in yam breeding often poses a challenge to reducing the breeding cycle and applying genomic selection. This study assessed sample preservation methods for DNA quality during extraction and the potential of a DNA marker to diagnose plant sex at the early seedling stage in white Guinea yam. The sex predicted at the seedling stage was further validated against the visually scored sex phenotype at the flowering stage. DNA extracted from leaf samples preserved with liquid nitrogen, silica gel, dry ice, and oven drying was of similar quality, with higher molecular weight than samples stored in ethanol solution. Sex diagnosis with the DNA marker (sp16) identified a higher proportion of ZW genotypes (female or monoecious phenotypes) than ZZ genotypes (male phenotype) in the studied materials, with 74% prediction accuracy. The results of this study provide valuable insights into suitable sample preservation methods for quality DNA extraction and the potential of the DNA marker sp16 to predict sex in white Guinea yam.
ARTICLE | doi:10.20944/preprints201904.0084.v1
Subject: Biology, Ecology Keywords: abundance; detection; diamondback terrapin; Malaclemys terrapin; monitoring; N-mixture; salt marsh; visual head count
Online: 8 April 2019 (10:55:00 CEST)
Generating a range-wide population status of the diamondback terrapin (Malaclemys terrapin spp.) is challenging due to a combination of species ecology and behavior and limitations associated with traditional sampling methods. Visual counting of emergent heads offers an efficient, non-invasive, and promising method for generating abundance estimates of terrapin populations across broader spatial scales and can be used to explain spatial variation in population size. We conducted repeated visual head count surveys at 38 predetermined sites along the shoreline of Wellfleet Bay in Wellfleet, Massachusetts. We analyzed the count data using a hierarchical modeling framework designed specifically for repeated count data: the so-called N-mixture model. This approach allows simultaneous modeling of imperfect detection to generate estimates of true terrapin abundance. We found detection probability was lowest when skies were overcast and when wind speed was highest. Site-specific abundance varied, but abundance estimates were, on average, higher in unexposed sites than in exposed sites. We demonstrate the utility of pairing visual head counts and N-mixture models as an efficient method for estimating terrapin abundance and show how the approach can be used to identify environmental factors that influence detectability and distribution.
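The N-mixture likelihood for one site marginalizes a Poisson abundance over repeated binomial counts; a minimal sketch follows (the parameter values, survey counts, and truncation bound n_max below are invented, not the study's estimates):

```python
import math

def nmixture_loglik(counts, lam, p, n_max=50):
    """Log-likelihood of repeated counts y_t at one site under the N-mixture
    model: true abundance N ~ Poisson(lam), each count y_t ~ Binomial(N, p),
    with the infinite sum over N truncated at n_max."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        pois = math.exp(-lam) * lam ** n / math.factorial(n)
        binom = 1.0
        for y in counts:
            binom *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        total += pois * binom
    return math.log(total)

ll = nmixture_loglik([3, 2, 4], lam=6.0, p=0.5)  # invented head counts
```

Maximizing this likelihood over lam and p (per site, with covariates on both) is what separates true abundance from imperfect detection in the analysis above.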
ARTICLE | doi:10.20944/preprints201804.0213.v2
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: Temporal-order judgments; modeling; theory of visual attention; TVA; range of indecision; encoding reset
Online: 8 June 2018 (16:12:09 CEST)
Humans are incapable of judging the temporal order of visual events at brief temporal separations with perfect accuracy. Their performance, which is of much interest in visual cognition and attention research, can be measured with the temporal-order judgment task, which typically produces S-shaped psychometric functions. Occasionally, researchers have reported plateaus within these functions, and some theories predict such deviation from the basic S shape. However, the centers of the psychometric functions result from the weakest performance at the most difficult presentations and therefore fluctuate strongly, leaving the existence and exact shapes of plateaus unclear. This study set out to investigate whether plateaus disappear if the data accuracy is enhanced, or if we are "stuck on a plateau", or rather with it. For this purpose, highly accurate data were assessed by model-based analysis. The existence of plateaus is confidently confirmed and two plausible mechanisms derived from very different models are presented. Neither model, however, performs well in the presence of a strong attention manipulation, and model comparison remains unclear on the question of which model describes the data best. Nevertheless, the present study includes the highest accuracy in visual TOJ data and the most explicit models of plateaus in TOJ studied so far.
ARTICLE | doi:10.20944/preprints201802.0152.v1
Subject: Medicine & Pharmacology, Ophthalmology Keywords: glaucoma; lamina cribrosa; optic nerve head; optical coherence tomography; corneal hysteresis; visual field; trabeculectomy
Online: 24 February 2018 (11:06:05 CET)
Purpose: To investigate the relationship of lamina cribrosa displacement to corneal biomechanical properties and visual function after mitomycin C-augmented trabeculectomy. Methods: Eighty-one primary open-angle glaucoma eyes were imaged before and after trabeculectomy using enhanced-depth spectral-domain optical coherence tomography (SDOCT). Corneal biomechanical properties were measured with the Ocular Response Analyser before the surgery. The anterior lamina cribrosa (LC) was marked at several points in each of six radial scans to evaluate LC displacement in response to intraocular pressure (IOP) reduction. A Humphrey visual field test (HVF) was performed before the surgery as well as three and six months postoperatively. Results: Factors associated with a deeper baseline anterior lamina cribrosa depth (ALD) were cup-disc ratio (P=0.04), baseline IOP (P=0.01), corneal hysteresis (P=0.001), and corneal resistance factor (P=0.001). After the surgery, the position of the LC became more anterior (negative), more posterior (positive), or remained unchanged. The mean LC displacement was -42 μm (P=0.001) and was positively correlated with the magnitude of IOP reduction (regression coefficient: 0.251, P=0.02), and negatively correlated with age (regression coefficient: -0.224, P=0.04) as well as baseline cup-disc ratio (regression coefficient: -0.212, P=0.05). Eyes with a larger negative LC displacement were more likely to experience an HVF improvement of more than 3 dB gain in mean deviation (P=0.002). Conclusion: A lower SDOCT cup-disc ratio, younger age, and a larger IOP reduction were correlated with a larger negative LC displacement and improving HVF. Corneal biomechanics did not predict LC displacement.
ARTICLE | doi:10.20944/preprints201609.0126.v2
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Brain-computer interface (BCI); visual motion perception; neurotechnology application; EEG; realtime brain signal decoding
Online: 4 October 2016 (14:43:48 CEST)
The paper presents a study of two novel visual motion onset stimulus-based brain-computer interfaces (vmoBCI). Two settings are compared, with motion patterns afferent and efferent relative to the computer screen center. Online vmoBCI experiments are conducted in an oddball event-related potential (ERP) paradigm that allows decoding of "aha-responses" in EEG brainwaves. A comparison of stepwise linear discriminant analysis (swLDA) classification accuracies is then discussed for two inter-stimulus-interval (ISI) settings of 700 ms and 150 ms in two online vmoBCI applications with six- and eight-command settings. The research hypothesis of non-significant differences in classification accuracy across the two ISI settings of 700 ms and 150 ms, as well as across various numbers of ERP response averaging scenarios, is confirmed. The visual motion patterns efferent with respect to the display center allowed faster interfacing, and they are therefore recommended as more suitable for visual BCIs that require no eye movements.
ARTICLE | doi:10.20944/preprints202110.0282.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: science education; science literacy; scientific literacy; visual scientific literacy; machine learning; neurocognition; fNIRS; science assessment
Online: 19 October 2021 (15:38:19 CEST)
The primary barrier to understanding visual and abstract information in STEM fields is representational competence: the ability to generate, transform, analyze, and explain representations. The relationship between foundational visual literacy and domain-specific science literacy is known; however, how science literacy functions in science learning is still not well understood, despite investigation across many fields. To support the improvement of students' representational competence and promote learning in science, the identification of visualization skills is necessary. This project details the development of an artificial neural network (ANN) capable of measuring and modeling visual science literacy (VSL) via neurological measurements using functional near-infrared spectroscopy (fNIRS). The developed model can classify levels of visual scientific literacy, allowing educators and curriculum designers to create more targeted and immersive classroom resources, such as virtual reality, to enhance the fundamental visual tools in science.
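As a cartoon of the classification task, a single logistic neuron can be trained on invented two-channel "fNIRS activation" vectors (everything below is hypothetical; the project's actual model is a full ANN trained on real fNIRS measurements):

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=500):
    """Fit a single logistic neuron by gradient descent (toy stand-in for the ANN)."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss with respect to z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Binary VSL-level prediction from the trained weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical two-channel activation patterns for "low" (0) vs "high" (1) VSL
xs = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
ys = [0, 0, 1, 1]
w, b = train_logistic(xs, ys)
```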
ARTICLE | doi:10.20944/preprints202008.0293.v1
Subject: Earth Sciences, Geoinformatics Keywords: ICESat-2; photon-counting lidar; photon labeling; visualization; ATL03; ATL08; visual interpretation; solar-induced noise
Online: 13 August 2020 (08:08:40 CEST)
NASA’s ICESat-2 space-borne photon-counting lidar mission is providing global elevation measurements that will significantly benefit a variety of bio-geoscience research applications. Given the novelty of the elevation data and the derived data products from the ICESat-2 mission, the research community needs software tools that facilitate photon-level analyses to support product validation and the development of new analysis methods. Here, we describe PhotonLabeler, a free graphical user interface (GUI) for manual labeling and visualization of ICESat-2 Geolocated Photon data (ATL03). Developed in MATLAB, the GUI facilitates the reading and display of ATL03 Hierarchical Data Format (HDF) files and the manual labeling of individual photons into target classes of choice using a number of point selection tools, and enables saving of the labeled data in ASCII format. Other capabilities include saving and loading labeling sessions to manage labeling tasks over time. We expect labeled data generated with the application to serve two main purposes: first, as ground truth for validating various ICESat-2 products, especially for study sites around the world that lack existing reference datasets such as airborne lidar; second, as training and validation data in the development of new algorithms for generating various ICESat-2 data products. We demonstrate the first use case through a validation case study for the land and vegetation product (ATL08), which provides canopy and terrain height estimates, over two sites: for the first site, located in northwestern Zambia, we used ATL03 data acquired at night, and for the second site, in Texas, US, we used ATL03 data acquired during the day. The PhotonLabeler application is freely available as a compiled MATLAB binary to enable free access and utilization by interested researchers.
ARTICLE | doi:10.20944/preprints202001.0374.v1
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: mild traumatic brain injury; mTBI; concussion; cognitive; sensorimotor; visual; postural balance; methylation; 5-mC%; blood
Online: 31 January 2020 (04:28:21 CET)
People who suffer a mild traumatic brain injury (mTBI) have heterogeneous symptoms and disease trajectories, which make it difficult to precisely diagnose and assess complications long term. This study identified and compared deficits in cognitive, psychosocial, and visual functions and in balance performance between college students with and without histories of mTBI. The global DNA methylation ratio (5-mC%) in blood was also compared as a peripheral epigenetic marker. Twenty-five volunteers participated in this pilot study, including 11 mTBI cases (27.3% female; mean age 28.7 years, SD=5.92) and 14 healthy controls (64.3% female; mean age 22.0, SD=4.13). All participants were assessed for cognitive function (NIH Toolbox: executive function, memory, and processing speed), psychological function (PROMIS: depression, anxiety, and sleep disturbances), visual function (King-Devick and binocular accommodative tests), postural balance performance (force plate), and blood 5-mC% (global methylation) levels. Students with mTBI reported significantly poorer episodic memory, more severe anxiety, and more sleep disturbance problems. They also had higher blood 5-mC% levels (all p's<.05). No significant differences were found in visual function or postural balance. These findings validate changes in cognitive and psychosocial measures and in global DNA methylation long after mTBI.
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: video surveillance; visual layer attack; electrical network frequency (ENF) signal; false frame injection (FFI) attack
Online: 1 April 2019 (09:50:05 CEST)
Over the past few years, the importance of video surveillance in securing national critical infrastructure has significantly increased, with applications that include detecting failures and anomalies. Accompanying this proliferation of video is a growing number of attacks against surveillance systems. Among these, false frame injection (FFI) attacks, which replay video frames from a previous recording to mask the live feed, have the highest impact. While many attempts have been made to detect injected frames using features from the video feeds, video analysis is computationally too intensive to be deployed on-site for real-time false frame detection. In this paper, we investigate the feasibility of FFI attacks on compromised surveillance systems at the edge and propose an effective technique to detect injected false video and audio frames by monitoring the surveillance feed for embedded Electrical Network Frequency (ENF) signals. The ENF operates at a nominal frequency of 60 Hz or 50 Hz depending on geographical location and maintains a stable value across an entire power grid interconnection, with minor fluctuations. These fluctuations are embedded in the video/audio recordings of surveillance systems connected to the power grid, and the time-varying nature of the ENF component serves as a forensic signature for authenticating the surveillance feed. The paper covers ENF signal collection from a power grid to create a reference database, and ENF extraction from recordings using the conventional short-time Fourier transform and spectrum detection, for robust ENF analysis in the presence of noise and interference across different harmonics. The experimental results demonstrate the effectiveness of ENF-based detection of the abnormalities caused by FFI attacks.
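The extraction step, estimating the ENF trace as the per-window spectral peak near the nominal grid frequency, can be sketched on synthetic data; this is a minimal illustration of the short-time-Fourier idea, not the paper's implementation, and all parameter values are illustrative:

```python
import numpy as np

def extract_enf(signal, fs, win_s=2.0, nominal=60.0, band=1.0):
    # For each non-overlapping window, take the FFT magnitude peak inside a
    # narrow band around the nominal grid frequency; the sequence of peak
    # frequencies over time is the estimated ENF trace.
    n = int(win_s * fs)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    sel = (freqs >= nominal - band) & (freqs <= nominal + band)
    trace = []
    for start in range(0, len(signal) - n + 1, n):
        spec = np.abs(np.fft.rfft(signal[start:start + n] * np.hanning(n)))
        trace.append(freqs[sel][np.argmax(spec[sel])])
    return np.array(trace)

fs = 1000
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
rec = np.sin(2 * np.pi * 60.1 * t) + 0.5 * rng.normal(size=t.size)  # noisy mains hum
trace = extract_enf(rec, fs)
print(np.abs(trace - 60.0).max() <= 0.5)  # True: peaks stay near the 60 Hz nominal
```

Matching such a trace against a reference database recorded from the grid is what lets an injected (replayed) segment be flagged: its ENF fluctuations will not align with the grid's at the claimed time.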
ARTICLE | doi:10.20944/preprints201807.0063.v1
Subject: Earth Sciences, Space Science Keywords: regional group interaction; similar hotspot flow patterns; spatial interaction; visual analytics; Geo-Information-Tupo; GIS
Online: 4 July 2018 (09:26:18 CEST)
The interaction between different regions is normally reflected in the form of flows. For example, flows of people and of information between different regions can reveal the structure of city networks, as well as how cities function and connect to each other. As big data has become increasingly available, it is much easier to acquire flow data for various types of individuals, and mining regional interaction patterns from such individual flow data at the aggregate level is currently an active research topic. So far, research on spatial interaction methods has focused on point-to-point and area-to-area interaction patterns; few scholars have studied hotspot interaction patterns between two regional groups with predefined neighborhood relationships. In this paper, a method for identifying similar hotspot interaction patterns between two regional groups is proposed, and Geo-Information-Tupu methods are applied to visualize the interaction patterns. As an empirical example, we apply the method to China’s air traffic flow data to find and analyze hotspot interaction patterns between regional groups with adjoining relationships across China. Our results indicate that this method is efficient in identifying hotspot interaction flow patterns between regional groups. Moreover, it can be applied to any flow-space analysis aimed at uncovering regional group hotspot interaction patterns.
COMMUNICATION | doi:10.20944/preprints202207.0333.v1
Subject: Medicine & Pharmacology, Other Keywords: residual dizziness; labyrintholithiasis; cupulolithiasis; otoconia; BPPV; liberating maneuvers; utricle; VEMPs; subjective visual vertical (VVS); bucket test
Online: 22 July 2022 (08:19:45 CEST)
Residual dizziness after a liberating maneuver is often reported by patients as a more disabling set of symptoms than the positional vertigo itself, and appears to affect more than half of subjects with labyrintholithiasis. The authors examine the hypothesis that residual dizziness in subjects with labyrintholithiasis originates from the otoconia.
Subject: Engineering, Biomedical & Chemical Engineering Keywords: brain-computer Interface; cognitive aging; steady-state visual evoked potential, neural network; detection accuracy; band power
Online: 13 May 2019 (08:32:23 CEST)
Cognitive deterioration caused by illness or aging often begins before symptoms arise, and its timely diagnosis is crucial to reducing its medical, personal, and societal impacts. Brain-Computer Interfaces (BCIs) stimulate and analyze key cerebral rhythms, enabling reliable cognitive assessment that can accelerate diagnosis. The BCI system presented here analyzes Steady-State Visual Evoked Potentials (SSVEPs) elicited in subjects of varying age to detect cognitive aging, predict its magnitude, and identify its relationship with SSVEP features (band power and frequency detection accuracy), which were hypothesized to indicate cognitive decline due to aging. Rectangular stimuli flickering at theta, alpha, and beta frequencies were presented to subjects, and frontal and occipital EEG responses were recorded. These were processed to calculate frequency detection accuracy and SSVEP band power for each subject, and a neural network was trained on these features to predict cognitive age. The results revealed potential cognitive deterioration through age-related variations in SSVEP features: frequency detection accuracy declined after the 20-40 age group, from an average of 96.64% to 69.23%, while band power declined across all age groups. SSVEPs generated at theta and alpha frequencies, especially 7.5 Hz, were the best indicators of cognitive deterioration. The presented system can thus serve as an effective diagnostic tool for age-related cognitive decline.
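Band power, one of the two SSVEP features used above, can be computed directly from the periodogram of the EEG trace. A minimal sketch with a simulated 7.5 Hz response follows; the sampling rate, amplitudes, and noise level are illustrative, not the study's values:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    # Total periodogram power of x in the band [f_lo, f_hi) Hz.
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    sel = (freqs >= f_lo) & (freqs < f_hi)
    return psd[sel].sum()

fs = 250                       # a common EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)  # 4 s epoch
rng = np.random.default_rng(1)
# Occipital-channel model: a 7.5 Hz SSVEP response buried in background EEG noise.
eeg = 2.0 * np.sin(2 * np.pi * 7.5 * t) + rng.normal(size=t.size)

theta = band_power(eeg, fs, 4, 8)    # theta band contains the 7.5 Hz response
beta  = band_power(eeg, fs, 13, 30)
print(theta > beta)  # True: power concentrates in the stimulation band
```

Frequency detection accuracy, the second feature, would then score how often the spectral peak matches the flicker frequency actually presented.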
ARTICLE | doi:10.20944/preprints201804.0093.v1
Subject: Arts & Humanities, Architecture And Design Keywords: community building; quality of life; built form typology; front-yard; physical accessibility; visual permeability; human behaviour
Online: 8 April 2018 (11:29:30 CEST)
The residential built form, including open space, provides the physical environment for social interaction. Understanding urban open space, including semi-public and public domains, through the lens of physical accessibility and visual permeability can potentially facilitate the building of a sense of community, contributing to a better quality of life. Using an inner-city suburb in Perth, Western Australia as a case study, this research explores the importance of physical accessibility patterns and visual permeability for socialising in semi-public and public domains, such as the front yard and the residential street. It argues that maintaining a balance in the public-private interrelationship of inner-city residential neighbourhoods is important for creating and maintaining a sense of community.
ARTICLE | doi:10.20944/preprints202012.0222.v1
Subject: Medicine & Pharmacology, Allergology Keywords: event-related potentials; visual evoked potentials; component P300; brain-computer interface; speller; oddball paradigm; categorization of images.
Online: 9 December 2020 (11:56:41 CET)
This study examined the sensory processes of the “human-computer interaction” model during the classification of visual images with an incomplete set of features, based on analysis of the early, middle, late, and slow components of event-related potentials (ERPs). Twenty-six healthy male subjects aged 20-22 years were investigated. ERPs were recorded at 19 monopolar sites according to the 10/20 system, and discriminant and factor analyses were applied. The N450 component proved the most specialized indicator of the perception of unrecognizable (oddball) visual images. The amplitude of the ultra-late components N750 and N900 was also higher when the oddball image was presented, regardless of the location of the registration points. The results are discussed in light of the application of the P300 wave in brain-computer interface systems, as well as its peculiarities in brain pathology. A promising direction for the development of P300-based “Brain Computer Interface” (BCI) systems is to increase the throughput of information flows. To extend the application of P300 ERPs to multiple modalities, the underlying physiological mechanisms and brain responses for each sensory system and mental function must be carefully examined.
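The ERP components analyzed here are obtained by averaging many epochs time-locked to the stimulus, which cancels background EEG noise and leaves the evoked response. A minimal simulation of why an oddball component stands out after averaging; latencies, amplitudes, and trial counts are illustrative, not the study's values:

```python
import numpy as np

fs = 250
t = np.arange(-0.2, 1.0, 1.0 / fs)   # epoch: -200 ms to 1000 ms around stimulus
rng = np.random.default_rng(2)

def make_trials(n, latency, amp):
    # Each trial: a Gaussian-shaped component at `latency` s plus EEG-like noise.
    comp = amp * np.exp(-((t - latency) ** 2) / (2 * 0.05 ** 2))
    return comp + rng.normal(scale=2.0, size=(n, t.size))

oddball  = make_trials(40, 0.45, 6.0)    # rarer stimuli, larger ~450 ms component
standard = make_trials(160, 0.45, 1.0)   # frequent stimuli, smaller component

# Averaging across trials cancels the noise and leaves the ERP waveform.
erp_odd, erp_std = oddball.mean(axis=0), standard.mean(axis=0)
win = (t >= 0.4) & (t <= 0.5)            # N450-like measurement window
print(erp_odd[win].max() > erp_std[win].max())  # oddball component is larger
```

The same window-and-peak logic applies to the later N750 and N900 components discussed in the abstract, with the measurement window shifted accordingly.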
ARTICLE | doi:10.20944/preprints202008.0707.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: Anxiety; Audio-Visual stimulation; COVID-19; Environmental enrichment; Forest environments; Forest therapy; Lockdown; Mental health; Stress; Quarantine
Online: 31 August 2020 (05:20:50 CEST)
The prolonged lockdown imposed to contain the COVID-19 pandemic prevented many people from direct contact with nature and greenspaces, raising alarm about a possible worsening of mental health. This study investigates the effectiveness of a simple and affordable remedy for improving psychological well-being, based on the audio-visual stimuli of a short computer video showing forest environments, with an urban video as a control. Randomly selected participants were assigned the forest or the urban video to watch and listen to early in the morning, and completed questionnaires: the State-Trait Anxiety Inventory (STAI) Form Y, collected at baseline and at the end of the study, and Part II of the Sheehan Patient Rated Anxiety Scale (SPRAS), collected every day immediately before and after watching the video. The virtual exposure to forest environments proved effective in reducing perceived anxiety levels in people confined by lockdown to limited spaces and environmental deprivation. Although significant, the effects were observed only in the short term, highlighting the limitation of virtual experiences. The reported effects might also serve as a benchmark for disentangling the determinants of the health effects of real forest experiences, for example the inhalation of biogenic volatile organic compounds (BVOCs).
ARTICLE | doi:10.20944/preprints202002.0086.v1
Subject: Engineering, Civil Engineering Keywords: Buildings; earthquake safety assessment; extreme events; urban sustainability; seismic assessment; rapid visual screening; reinforced concrete buildings
Online: 6 February 2020 (10:50:33 CET)
Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute substantially to sustainable urban development through identification of, and insight into, optimum materials and structures. Because the vulnerability of structures depends mainly on their structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings that is suited to medium- to high-seismicity cases. The paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 11 May 2003 earthquake. The results demonstrate that RVS methods are a vital tool for preliminary damage estimation. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and more practical estimates, respectively.
ARTICLE | doi:10.20944/preprints201904.0013.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: cancer nodules detection; phantom; stiffness analysis; ultrasound analysis; visual analysis; automatic robotic platform; remote support for pathologists
Online: 1 April 2019 (13:26:13 CEST)
This study presents a platform for ex-vivo detection of cancer nodules, addressing the automation of medical diagnoses in surgery and the associated histological analyses. The proposed approach takes advantage of the tendency of cancer to alter the mechanical and acoustical properties of tissues through changes in stiffness and density. A force sensor and an ultrasound probe were combined to detect such alterations during force-regulated indentations. To explore specimens regardless of their orientation and shape, the scanned area of the test sample was defined using shape recognition, applying optical background subtraction to images captured by a camera. The motorized platform was validated using seven phantom tissues simulating the mechanical and acoustical properties of ex-vivo diseased tissues, including the stiffer nodules that can be encountered in pathological conditions during histological analyses. Results demonstrated the platform’s ability to automatically explore the phantom and identify the inclusions. Overall, the system correctly identified up to 90.3% of the inclusions by combining stiffness and ultrasound measurements, paving the way toward robotic palpation during intraoperative examinations.
Subject: Biology, Physiology Keywords: vitamin A transporters; all-trans retinol; retinyl esters; LRAT; STRA6; RBPR2; RBP4; retinol-binding proteins; photoreceptors; visual function
Online: 11 October 2021 (14:15:27 CEST)
Vitamins are essential compounds obtained through diet that are necessary for normal development and function in an organism. One of the most important vitamins for human physiology is vitamin A, a group of retinoid compounds and carotenoids that generally function as mediators of cell growth, differentiation, immunity, and embryonic development, as well as serving as a key component of the phototransduction cycle in the vertebrate retina. For humans, vitamin A is obtained through the diet, where provitamin A carotenoids such as β-carotene, or preformed vitamin A such as retinyl esters, are absorbed via the small intestine and converted into all-trans retinol within the intestinal enterocytes. Specifically, once absorbed, carotenoids are cleaved by carotenoid cleavage oxygenases (CCOs), such as BCO1, to produce all-trans retinal, which is subsequently converted into all-trans retinol. CRBP2-bound retinol is then converted into retinyl esters (REs) by the enzyme lecithin retinol acyltransferase (LRAT) in the endoplasmic reticulum, packaged into chylomicrons, and sent into the bloodstream for storage in hepatic stellate cells in the liver or for functional use in peripheral tissues such as the retina. All-trans retinol also travels through the bloodstream bound to retinol binding protein 4 (RBP4), entering cells with the assistance of the transmembrane transporters stimulated by retinoic acid 6 (STRA6) in peripheral tissues or retinol binding protein receptor 2 (RBPR2) in systemic tissues (e.g. in the retina and the liver, respectively). Much is known about the intake, metabolism, storage, and function of vitamin A compounds, especially with regard to their impact on eye development and visual function in the retinoid cycle.
However, there is much to learn about the role of vitamin A as a transcription factor in development and cell growth, as well as how peripheral cells signal hepatocytes to secrete all-trans retinol into the blood for peripheral cell use. This article reviews the literature on the major known pathways of vitamin A intake from dietary sources into hepatocytes and vitamin A excretion by hepatocytes, as well as vitamin A usage within the retinoid cycle in the RPE and retina, to provide insight into future directions for the study of novel membrane transporters for vitamin A in retinal cell physiology and visual function.
ARTICLE | doi:10.20944/preprints202104.0532.v3
Subject: Behavioral Sciences, Applied Psychology Keywords: Locus Coeruleus; Reserve; Brain Age; Visual Attention; Alzheimer’s Disease; Mild Cognitive Impairment; normal Aging; Neuroimaging; Voxel Based Morphometry
Online: 21 June 2021 (11:41:40 CEST)
The noradrenergic theory of Cognitive Reserve (Robertson, 2013-2014) postulates that upregulation of the Locus Coeruleus - Noradrenergic System (LC-NA) originating in the brainstem might facilitate cortical networks involved in attention, and that protracted activation of this system throughout the lifespan may enhance cognitive stimulation, contributing to Reserve. To test this theory, a study was conducted on a sample of 686 participants (395 controls, 156 Mild Cognitive Impairment, 135 Alzheimer’s Disease) investigating the relationship between LC volume, attentional performance, and a biological index of brain maintenance (BrainPAD, an objective measure that compares an individual’s structural brain health, reflected by their voxel-wise grey matter density, to the state typically expected at that individual’s age). Further analyses were carried out on Reserve indices including education and occupational attainment. Volumetric variation across groups was also explored, along with gender differences. Control analyses on the serotonergic (5-HT), dopaminergic (DA), and cholinergic (ACh) systems were contrasted with the noradrenergic (NA) hypothesis, and antithetic relationships were also tested across the neuromodulatory subcortical systems. Results supported by Bayesian modelling showed that LC volume disproportionately predicted higher attentional performance as well as biological brain maintenance across the three groups. These findings lend support to the role of the noradrenergic system as a key mediator underpinning the neuropsychology of Reserve, and they suggest that early prevention strategies focused on the noradrenergic system (e.g. cognitive-attentive training, physical exercise, pharmacological and dietary interventions) may yield important clinical benefits in mitigating cognitive impairment with age and disease.
ARTICLE | doi:10.20944/preprints202101.0313.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: symmetry; shape; local color; self-organized visual map; quantization error; SOM-QE; choice response time; human decision; uncertainty
Online: 18 January 2021 (10:18:03 CET)
Symmetry in biological and physical systems is a product of self-organization driven by evolutionary processes, or of mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents of large image databases. Despite significant achievements of artificial intelligence in the recognition and classification of regular patterns, uncertainty remains a major challenge when data are ambiguous. In this study, we present an artificial neural network that detects symmetry uncertainty states in human observers. To this end, we exploit a metric in the output of a biologically inspired Self-Organizing Map, the Quantization Error (SOM-QE). As shown here, shape pairs with perfect geometric mirror symmetry but a non-homogeneous appearance, caused by local variations in hue, saturation, or lightness within and/or across the shapes of a given pair, produce longer choice response times (RT) for ‘yes’ responses to symmetry. These data are consistently mirrored by variations in the SOM-QE from unsupervised neural network analysis of the same stimulus images. The neural network metric is thus capable of detecting and scaling human symmetry uncertainty in response to patterns. This capacity is tightly linked to the metric’s proven selectivity to local contrast and color variations in large and highly complex image data.
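The SOM-QE metric itself is simple to state: after training, it is the mean distance between the input vectors and their best-matching units in the map's codebook. A minimal sketch with a fixed toy codebook follows; the study's maps are trained on full stimulus images, so everything below (codebook, cluster sizes, jitter levels) is illustrative only:

```python
import numpy as np

def som_qe(data, codebook):
    # SOM quantization error: mean Euclidean distance from each input vector
    # to its best-matching unit (BMU) in the map's codebook.
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(3)
codebook = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # toy 2-unit "map" (RGB)
# Homogeneous pattern: pixel vectors cluster tightly around the prototypes.
uniform = np.vstack([codebook[0] + 0.01 * rng.normal(size=(50, 3)),
                     codebook[1] + 0.01 * rng.normal(size=(50, 3))])
# Locally varied pattern: the same clusters with hue/lightness-like jitter.
varied  = np.vstack([codebook[0] + 0.20 * rng.normal(size=(50, 3)),
                     codebook[1] + 0.20 * rng.normal(size=(50, 3))])
print(som_qe(varied, codebook) > som_qe(uniform, codebook))  # True
```

The larger QE for the locally varied input mirrors, in miniature, how the metric scales with the local color variations that lengthen observers' 'yes' response times.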
ARTICLE | doi:10.20944/preprints201903.0244.v1
Subject: Life Sciences, Other Keywords: vertebrate retina, mouse, zebrafish, two-photon microscopy, biosensor, activity probes, visual stimulus-evoked activity, laser-evoked retinal activity
Online: 26 March 2019 (14:01:49 CET)
Two-photon imaging of light stimulus-evoked neuronal activity has been used to study all neuron classes in the vertebrate retina, from the photoreceptors to the retinal ganglion cells. Clearly, the ability to study retinal circuits down to the level of single synapses or zoomed out at the level of complete populations of neurons, has been a major asset in our understanding of this beautiful circuit. In this chapter, we discuss the possibilities and pitfalls of using an all-optical approach in this highly light-sensitive part of the brain.
ARTICLE | doi:10.20944/preprints202001.0319.v1
Subject: Arts & Humanities, Architecture And Design Keywords: monocular depth cues; luminance contrast; colour; visual arts; image plane; human perception; brain; 3D structure; figure-ground; Gestalt Theory
Online: 27 January 2020 (01:54:27 CET)
Victor Vasarely’s (1906-1997) important legacy to the study of human perception is brought to the forefront and discussed. A large part of his impressive work conveys the appearance of striking three-dimensional shapes and structures in a large-scale pictorial plane. Current perception science explains such effects by invoking brain mechanisms for the processing of monocular (2D) depth cues. In this study, we illustrate and explain the local effects of 2D color and contrast cues on perceptual organization in terms of figure-ground assignments, i.e. which local surfaces are likely to be seen as “nearer” or “bigger” in the image plane. Paired configurations are embedded in a larger, structurally ambivalent pictorial context inspired by some of Vasarely’s creations. The figure-ground effects these configurations produce reveal a significant correlation between perceptual solutions for “nearer” and “bigger” when no other monocular depth cues are given in the image. Consistent with previous findings on similar, albeit simpler, visual displays, a specific color may compete with luminance contrast in resolving the planar ambiguity of a complex pattern context. Vasarely intuitively understood, and successfully exploited, this kind of subtle context effect in his art, well before empirical investigations set out to study and explain its genesis in terms of information processing by the visual brain.
REVIEW | doi:10.20944/preprints202104.0177.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Hydraulic resistance; Colebrook flow friction; Lambert W-function; Excel Macro Programming; Visual Basic for Applications (VBA); User Defined Functions (UDFs)
Online: 6 April 2021 (13:27:55 CEST)
This review paper presents user-developed Excel functions for highly precise approximations of Colebrook’s pipe flow friction equation. All codes shown are implemented as User Defined Functions (UDFs) written in Visual Basic for Applications (VBA), the programming language embedded in the MS Excel spreadsheet solver. The accuracy of the friction factor computed using the nine most accurate explicit approximations to date is compared with a sufficiently accurate solution obtained through an iterative scheme, which gives satisfactory results after a sufficient number of iterations. Codes are given for the presented approximations, for the iterative scheme used, and for the Colebrook equation expressed through the Lambert W-function (including its cognate Wright ω-function). The developed code for the principal branch of the Lambert W-function has additional, more general application to problems from various branches of engineering and physics. The approach in this review paper automates computational processes and speeds up manual tasks.
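The review's codes are in VBA, but the underlying iterative scheme is language-independent. A sketch of the standard fixed-point iteration on the Colebrook equation, written here in Python for compactness (the starting value and iteration count are arbitrary choices; this is not one of the paper's UDFs):

```python
import math

def colebrook(re, rel_rough, iters=20):
    # Colebrook: 1/sqrt(f) = -2 log10( (eps/D)/3.7 + 2.51/(Re sqrt(f)) ).
    # Iterate on x = 1/sqrt(f):  x <- -2 log10( rel_rough/3.7 + 2.51 x / Re ).
    x = 6.0  # any positive start; the iteration converges in a few steps
    for _ in range(iters):
        x = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
    return 1.0 / (x * x)  # Darcy friction factor f

f = colebrook(re=1e5, rel_rough=1e-4)
print(round(f, 4))  # ≈ 0.0185 for Re = 1e5, eps/D = 1e-4
```

The explicit approximations reviewed in the paper trade this loop for a closed-form expression (e.g. via the Lambert W-function), which is what makes them attractive inside spreadsheet cells.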
ARTICLE | doi:10.20944/preprints202105.0401.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Deep neural networks; Disentangled representations; Attention mechanisms; Generative models; Density estimation; Out-of-distribution generalization; Numerical cognition; Visual perception; Cognitive modeling
Online: 18 May 2021 (09:50:01 CEST)
One of the most rapidly advancing areas of deep learning research aims at creating models that learn to disentangle the latent factors of variation in a data distribution. However, modeling joint probability mass functions is usually prohibitive, which motivates the use of conditional models that assume some information is given as input. In the domain of numerical cognition, deep learning architectures have demonstrated that approximate numerosity representations can emerge in multi-layer networks that build latent representations of a set of images with a varying number of items. However, existing models have focused on tasks that require conditionally estimating numerosity information from a given image. Here we focus on a set of much more challenging tasks, which require conditionally generating synthetic images containing a given number of items. We show that attention-based architectures operating at the pixel level can learn to produce well-formed images approximately containing a specific number of items, even when the target numerosity was not present in the training distribution.
ARTICLE | doi:10.20944/preprints201910.0247.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: visual contrast; perceived relative object depth; 2D images; sound frequency; two alternative forced-choice; response times; high-probability decision; readiness to respond; probability summation
Online: 22 October 2019 (03:34:45 CEST)
Piéron’s and Chocholle’s seminal psychophysical work predicts that human response time to visual contrast and/or sound frequency decreases as contrast intensity or sound frequency increases. The goal of this study is to bring to the fore the ability of individuals to use visual contrast intensity and sound frequency in combination for faster perceptual decisions of relative depth (“nearer”) in planar (2D) object configurations, on the basis of physical variations in luminance contrast. Computer-controlled images with two abstract patterns of varying contrast intensity, one on the left and one on the right, preceded or not by a pure tone of varying frequency, were shown to healthy young humans in controlled experimental sequences. Their task (two-alternative forced choice) was to decide as quickly as possible which of the two patterns, the left or the right one, appeared to “stand out as if it were nearer” in terms of apparent (subjective) visual depth. The results show that combining varying relative visual contrast with sounds of varying frequency produced an additive facilitation effect on choice response times: a stronger visual contrast combined with a higher sound frequency produced shorter forced-choice response times. This new effect is predicted by cross-modal audio-visual probability summation.
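Both ingredients of the prediction can be sketched numerically: the Piéron-type speed-up with stimulus intensity, and race-model (probability summation) facilitation when two modalities are combined. All parameter values below are illustrative, not fits to the study's data:

```python
import numpy as np

def pieron_rt(intensity, t0=0.25, k=0.12, beta=0.5):
    # Piéron's law: RT = t0 + k * I**(-beta); response time falls as intensity rises.
    return t0 + k * intensity ** -beta

contrast = np.array([0.1, 0.2, 0.4, 0.8])   # relative luminance contrast levels
rt = pieron_rt(contrast)
print(np.all(np.diff(rt) < 0))  # True: stronger contrast -> faster responses

# Probability summation as a race model: the response is triggered by whichever
# channel (visual or auditory) finishes first, so bimodal trials are faster on
# average than unimodal ones even without any neural interaction.
rng = np.random.default_rng(7)
visual = 0.2 + rng.exponential(0.3, 20000)   # unimodal finishing times (s)
audio  = 0.2 + rng.exponential(0.3, 20000)
bimodal = np.minimum(visual, audio)
print(bimodal.mean() < visual.mean())  # True
```

In this framing, the additive effect reported in the abstract corresponds to each modality independently shifting its channel's finishing-time distribution, with the minimum operation combining the two shifts.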
ARTICLE | doi:10.20944/preprints202107.0200.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: image quality assessment; image quality metrics; NR-IQAs; D-IQA; OCR accuracy; OCR prediction; OCR improvements; visual aids; visually impaired; reading aids; document images; text-based images
Online: 8 July 2021 (13:21:49 CEST)
For Visually Impaired People (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in Optical Character Recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera into text, which can then be read aloud. However, all of these reading aids suffer from a key issue: the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function, no small task for VIPs. In this work, a Sound-Emitting Document Image Quality Assessment metric (SEDIQA) is proposed, which lets the user hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also includes testing of OCR performance against image degradations to identify the most significant contributors to accuracy reduction. The proposed No-Reference Image Quality Assessor (NR-IQA) is validated alongside established NR-IQAs, and this work includes insights into the performance of these NR-IQAs on document images.
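SEDIQA's actual metric is not reproduced here, but a common no-reference proxy for document-image quality is local sharpness, e.g. the variance of the Laplacian, which drops when a capture is defocused. A self-contained sketch of that idea; the metric choice, image sizes, and blur model are assumptions for illustration only:

```python
import numpy as np

def laplacian_variance(img):
    # Blur/focus proxy used by many NR-IQA baselines: variance of the
    # 5-point discrete Laplacian over the image interior.
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(5)
sharp = (rng.random((64, 64)) > 0.5).astype(float)   # crisp "text-like" edges
# Blur with a simple 3x3 box filter to mimic a defocused camera capture.
k = np.ones((3, 3)) / 9.0
pad = np.pad(sharp, 1, mode="edge")
blurred = sum(pad[i:i+64, j:j+64] * k[i, j] for i in range(3) for j in range(3))
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```

Mapping such a scalar score to pitch or volume is what would make the quality audible, so the user can sweep the camera until the emitted sound indicates a capture worth sending to OCR.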
ARTICLE | doi:10.20944/preprints201803.0021.v2
Subject: Earth Sciences, Geoinformatics Keywords: map processing; retrospective landscape analysis; visual data mining; image retrieval; low-level image descriptors; color moments; t-distributed stochastic neighbor embedding; USGS topographic maps; Sanborn fire insurance maps
Online: 17 April 2018 (09:23:37 CEST)
Historical maps constitute unique sources of retrospective geographic information. Recently, several map archives containing map series covering large spatial and temporal extents have been systematically scanned and made available to the public. The geographic information contained in such data archives allows extending geospatial analysis retrospectively beyond the era of digital cartography. However, given the large data volumes of such archives and the low graphical quality of older map sheets, the processes to extract geographic information need to be automated to the highest degree possible. In order to understand the salient characteristics, data quality variation, and potential challenges in large-scale information extraction tasks, preparatory analytical steps are required to efficiently assess spatio-temporal coverage, approximate map content, and spatial accuracy of such georeferenced map archives across different cartographic scales. Such preparatory steps are often neglected or ignored in the map processing literature but represent highly critical phases that lay the foundation for any subsequent computational analysis and recognition. In this contribution we demonstrate how such preparatory analyses can be conducted using classical analytical and cartographic techniques as well as visual-analytical data mining tools originating from machine learning and data science, exemplified for the United States Geological Survey topographic map and Sanborn fire insurance map archives.
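One of the low-level descriptors named in the keywords, color moments, reduces each scanned map sheet to a short feature vector that can then feed visual-analytical mining (e.g. a t-SNE embedding of an archive). A minimal sketch; the moment set (mean, standard deviation, skewness per channel) is the common convention, while the image here is random stand-in data:

```python
import numpy as np

def color_moments(img):
    # Per-channel mean, standard deviation, and skewness of pixel values:
    # 3 channels x 3 moments = a 9-number descriptor per image.
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].ravel().astype(float)
        mu, sd = ch.mean(), ch.std()
        skew = ((ch - mu) ** 3).mean() / (sd ** 3 + 1e-12)
        feats.extend([mu, sd, skew])
    return np.array(feats)

rng = np.random.default_rng(6)
map_sheet = rng.integers(0, 256, size=(32, 32, 3))  # stand-in for a scanned sheet
print(color_moments(map_sheet).shape)  # (9,)
```

Because the descriptor is cheap to compute at scale, it suits exactly the preparatory assessment phase the abstract argues for: spotting outlier sheets, scan-quality variation, and content clusters before any heavyweight recognition is attempted.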
ARTICLE | doi:10.20944/preprints202204.0314.v1
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: Emergency Use Authorization; endemic; false omission; false omission rate; home testing; point-of-care testing (POCT); positive predictive value geometric mean-squared; prevalence boundary; recursive protocol; tier; visual logistics
Online: 30 April 2022 (08:42:08 CEST)
Goals: To use visual logistics to interpret COVID-19 molecular and rapid antigen test (RAgT) performance, determine prevalence boundaries beyond which risk exceeds expectations, and evaluate the benefits of recursive testing along home, community, and emergency spatial care paths. Methods: Mathematica and open-access software were used to graph relationships, compare performance patterns, and perform recursive computations. Results: Tiered sensitivity/specificity comprise: T1) 90%/95%; T2) 95%/97.5%; and T3) 100%/≥99%, respectively. In emergency medicine, median RAgT performance peaks at 13.2% prevalence, then falls below T1, generating risky prevalence boundaries. RAgTs in pediatric ERs/EDs parallel this pattern, with asymptomatic performance worse than symptomatic. In communities, RAgTs display large uncertainty, with a median prevalence boundary of 14.8% for 1 in 20 missed diagnoses; at prevalence >33.3-36.9%, they risk 10% false omissions for symptomatic subjects. Recursive testing improves home RAgT performance. Home molecular tests elevate performance above T1 but lack adequate validation. Conclusions: Widespread RAgT availability encourages self-testing. Asymptomatic RAgT and PCR-based saliva testing present the highest chance of missed diagnoses. Testing at home twice, once just before mingling, and molecular-based self-testing help avoid false omissions. Community and ER/ED RAgTs can identify contagiousness at low prevalence (<22%). Real-world trials of performance, cost-effectiveness, and public health impact could identify home molecular diagnostics as the optimal diagnostic portal.
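The false omission rate and prevalence boundary behind these results follow directly from Bayes' rule: the false omission rate is the probability of disease given a negative test, and the prevalence boundary is the prevalence at which that probability reaches a chosen risk ceiling. A sketch using the Tier 1 values quoted above (90% sensitivity, 95% specificity); this reproduces the arithmetic only, not the paper's Mathematica analysis:

```python
def false_omission_rate(sens, spec, p):
    # FOR = P(disease | negative test), by Bayes' rule at prevalence p.
    return (1 - sens) * p / ((1 - sens) * p + spec * (1 - p))

def prevalence_boundary(sens, spec, max_for):
    # Prevalence at which FOR reaches max_for (e.g. 1/20 missed diagnoses):
    # solve FOR(p) = max_for for p via the odds form.
    odds = max_for * spec / ((1 - sens) * (1 - max_for))
    return odds / (1 + odds)

# Tier 1 performance: 90% sensitivity, 95% specificity, 1-in-20 risk ceiling.
pb = prevalence_boundary(0.90, 0.95, 1 / 20)
print(round(pb, 3))  # 0.333: above ~33% prevalence, >1 in 20 negatives are missed cases
```

Lower sensitivities, typical of real-world RAgT performance, pull this boundary sharply downward, which is why the community boundary reported above sits near 14.8% rather than 33%.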