ARTICLE | doi:10.20944/preprints202302.0160.v2
Subject: Environmental And Earth Sciences, Oceanography Keywords: Developmental dyslexia; attentional dyslexia; Hebrew; migrations between words; phonological output buffer; orthographic-visual analyzer; reading
Online: 20 March 2023 (04:08:07 CET)
Abstract: Letter migrations between words in reading aloud (e.g., reading "cane love" as "lane love" or "lane cove") are known to result from a deficit in orthographic-visual analysis and characterize attentional dyslexia. In spontaneous speech, individuals with an impairment in the phonological output buffer may show migrations of phonemes between words. The purpose of this study was to examine whether migrations between words in reading aloud can also result from a deficit in the phonological output buffer, to explore the characteristics of migrations resulting from orthographic-input and from phonological-output deficits, and to examine methods to distinguish these two sources. Using reading-aloud tasks of 92-182 word pairs, we identified 18 adults and adolescents with developmental dyslexia who made between-word letter migrations in reading aloud, significantly more than age-matched controls (372 adults and 26 7th-graders). To distinguish between orthographic-input and phonological-output sources for these migrations, we administered a test assessing orthographic input without spoken output (written semantic decision on 140 migratable word pairs) and a test of repetition of 36 auditorily presented migratable word pairs, assessing spoken output without orthographic input (plus word-span tests). These tests indicated that the migrations of ten of the participants with dyslexia resulted from an orthographic-input deficit, whereas for the other eight participants, migrations resulted from a phonological-output deficit. We identified several differences between the two types of between-word errors: first, whereas the individuals with attentional dyslexia made omissions of a letter that appeared in the same position in the two words, the phonological output buffer group did not make such omissions.
In addition, the groups differed in the origin of migration: orthographic-input migrations involve letters that are orthographically adjacent, whereas phonological-output migrations involve phonemes that have just been spoken or that are prepared together in the phonological buffer for production. This was manifested in the finding that migrations from the line below and from two lines above occurred only in the orthographic-input deficit group, and that migrations originated from a word vertically close to the target in the orthographic-input group but from a word that had just been spoken (placed diagonally to the target) in the phonological-output group. This study thus indicates that between-word migrations in reading aloud can result not only from attentional dyslexia, but also from a phonological output buffer deficit, and offers ways to distinguish between the two.
ARTICLE | doi:10.20944/preprints201809.0178.v1
Subject: Social Sciences, Language And Linguistics Keywords: orthographic lexicography; European Dictionary Portal; metalexicography
Online: 10 September 2018 (15:43:17 CEST)
This short paper raises and answers a question related to orthographic lexicography in general and its relevance to efforts to build contemporary dictionary portals. As orthographic dictionaries have not yet been researched as a specialized lexicographic variety, part of their metalexicographic description is presented for those European languages that have online normative orthographic dictionaries. The metalexicographic elements analyzed were chosen from the perspective of casual and professional users and online dictionary visitors. Despite the fact that this is a specific kind of dictionary, and that European orthographic tradition and practice are quite heterogeneous, the belief that the European Dictionary Portal should also include available online orthographic dictionaries is defended. Such inclusion could contribute to an awareness of the importance of orthography for online dictionary users, even in those languages whose written form greatly corresponds to the spoken form.
ARTICLE | doi:10.20944/preprints201807.0564.v1
Subject: Social Sciences, Language And Linguistics Keywords: orthographic literacy; questionnaire research; Croatian orthography; (de)standardisation
Online: 30 July 2018 (08:05:51 CEST)
This paper discusses the impact of orthographic manuals on the state of literacy, i.e., the relationship between orthographic literacy and orthographic standardisation. The established hypothesis claims that frequent changes of orthographic rules during the pupils’ primary and secondary education do not have any considerable impact on their orthographic habits. In other words, the quantity of orthographic mistakes observed over a longer period, under changed orthographic rules, would not show significant oscillations in spelling. In order to confirm the hypothesis, a questionnaire was conducted encompassing 41 tests among 526 students of a technical study programme during four consecutive academic years, based on whose results a writing uniformity index and a six-class categorisation of orthographic controversies are established. The Croatian language was selected for observation due to multiple orthographic changes over the last 30 years in three major orthographic points: the writing of the covered r, the writing of d and t in front of c and č in the declension of words ending in -tak, -tac, -dak and -dac, and the issue of compound or separate spelling of the negation particle and the auxiliary biti (to be). Moreover, the paper methodologically and quantitatively establishes criteria according to which the second established hypothesis, on evolutionary orthographic literacy, can be confirmed. The conclusions are expected to contribute to a better understanding of orthographic planning and the application of orthographic norms in schools.
ARTICLE | doi:10.20944/preprints202307.1749.v1
Subject: Biology And Life Sciences, Behavioral Sciences Keywords: Visual span; visual attention; Crowding; Reading speed
Online: 26 July 2023 (10:14:39 CEST)
The visual span refers to the number of letters readers can identify in a single fixation without using linguistic skills. Proponents of the visual span hypothesis postulate an influence of early visual processing on reading speed. Given the slowness of reading Arabic texts, the present work aims to study the development of the visual span and its effects on reading speed in the Arabic-speaking context. Thirty-four subjects participated in the study. The trigram task and the rapid serial visual presentation (RSVP) paradigm were used to estimate visual span size and reading speed. In line with our initial assumptions, the results showed a significant effect of grade level on reading speed (F(2,31) = 30.93, p < 0.001) and on visual span size (F(2,31) = 20.57, p < 0.001). In good alignment with previous work, our results show that visual span size could explain around 40% of the reading speed variability. Interestingly, our analyses revealed a narrowing of visual span size in our Arabic sample. The results of Study 2 suggest that the poor performance in the trigram task is due to poor visual attention capacities in our Arabic readers.
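The grade-level effects above come from one-way ANOVAs over three grade groups and 34 subjects in total, which is where the (2, 31) degrees of freedom arise. A minimal sketch with entirely hypothetical words-per-minute data (only the group structure mirrors the study):

```python
# Sketch of the one-way ANOVA behind the reported F(2, 31) statistics.
# The reading-speed values are hypothetical; only the degrees of freedom
# (3 groups, 34 subjects) mirror the study design.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical scores for three grade levels (11 + 11 + 12 = 34 subjects).
grade_a = [38, 41, 35, 40, 37, 39, 36, 42, 38, 40, 37]
grade_b = [55, 58, 52, 60, 54, 57, 53, 59, 56, 55, 58]
grade_c = [72, 75, 70, 78, 71, 74, 73, 76, 72, 75, 74, 73]

f_stat, df_b, df_w = one_way_anova([grade_a, grade_b, grade_c])
print(f"F({df_b},{df_w}) = {f_stat:.2f}")
```

A large F here, as in the study, indicates that between-grade variance dominates within-grade variance.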
REVIEW | doi:10.20944/preprints202306.1596.v1
Subject: Social Sciences, Psychology Keywords: Hemianopia; Real-world conditions; Visual rehabilitation; Visual field defects
Online: 22 June 2023 (10:57:30 CEST)
Hemianopia poses significant challenges and requires effective rehabilitation strategies. Traditional visual restoration methods have focused on low-level vision therapies in controlled environments. This paper proposes the integration of natural and ecologically valid environments, e.g., virtual reality (VR), three-dimensional (3D) settings, and cognitive interactions for visual rehabilitation. We review various studies that employed common practices in laboratory or controlled settings. We also discuss the disadvantages of traditional techniques and advocate for a comprehensive and ecological framework in visual rehabilitation. Instead of correcting visual inputs, we emphasize training the visual system to adapt and restore functionality in real-world contexts. By combining real-world environments and higher-level vision approaches, we can enhance visual recovery, improve daily functioning, and restore the quality of life for individuals with visual field defects. Moreover, we stress the importance of incorporating natural environments, VR, 3D settings, and cognitive interactions to maximize the effectiveness of visual rehabilitation and empower patients to regain their visual abilities in real-world scenarios. Continued research and development in this field are crucial to refine and expand the application of these innovative techniques, ultimately enhancing the lives of individuals affected by visual field defects.
ARTICLE | doi:10.20944/preprints202306.1553.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: Information Visualization; Visual Variables; Evaluation; Occlusion; Overlap; Visual Perception
Online: 21 June 2023 (12:47:10 CEST)
The overlap of visual items in data visualization techniques is a known problem, aggravated by data volume and limited visual space. Several methods have been applied to mitigate occlusion in data visualizations, such as Random Jitter, Transparency, Layout Reconfiguration, Focus+Context Techniques, etc. This paper presents a comparative study on reading visual variable values under partial overlap. The study focuses on categorical data representations, varying the percentage of partial overlap and the number of distinct values for each visual variable: Hue, Lightness, Saturation, Shape, Text, Orientation, and Texture. A computational application generates random scenarios with a unique visual pattern target for location tasks. Each scenario presented the visual items in a grid layout with 160 elements (10 × 16); each visual variable had from 3 to 5 distinct encoded values, and the partial overlap percentages applied, represented by a gray square in the center of each grid element, were 0% (control), 50%, 60%, and 70%. As in the preliminary tests, the tests conducted in this study involved 48 participants organized into four groups, with 126 tasks per participant, and the application captured the response and time for each task performed. The result analysis indicates that the Hue, Lightness, and Shape visual variables are robust to high percentages of occlusion and to a gradual increase in encoded values. The Text visual variable shows promising results for accuracy, though its resolution time was somewhat higher than that of the variables just mentioned. In contrast, the Texture visual variable presented lower accuracy at high levels of occlusion and with more distinct encoded values. Finally, the Orientation and Saturation visual variables showed the highest error rates and the worst performance during the tests.
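The scenario generation described above can be sketched as follows; the function and parameter names are hypothetical, not taken from the study's application, but the grid dimensions (10 × 16 = 160 elements) and the unique-target constraint follow the description:

```python
import random

# Hypothetical sketch of the study's scenario generator: a 10 x 16 grid of
# visual items encoded with 3-5 distinct values of one visual variable,
# with exactly one uniquely encoded target to locate.

def make_scenario(rows=10, cols=16, n_values=3, seed=0):
    rng = random.Random(seed)
    distractor_values = list(range(n_values - 1))   # encodings shared by distractors
    target_value = n_values - 1                     # encoding unique to the target
    grid = [[rng.choice(distractor_values) for _ in range(cols)]
            for _ in range(rows)]
    r, c = rng.randrange(rows), rng.randrange(cols)
    grid[r][c] = target_value                       # place the single target
    return grid, (r, c)

grid, target_pos = make_scenario()
print("target at", target_pos)
```

In the study each such scenario would additionally be rendered with a gray occluder square covering 0-70% of every grid element.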
ARTICLE | doi:10.20944/preprints201608.0046.v1
Subject: Social Sciences, Cognitive Science Keywords: visual symmetry; affine projection; fractals; visual sensation; aesthetics; preference
Online: 5 August 2016 (05:15:32 CEST)
Evolution and geometry generate complexity in similar ways. Evolution drives natural selection while geometry may capture the logic of this selection and express it visually, in terms of specific generic properties representing some kind of advantage. Geometry is ideally suited for expressing the logic of evolutionary selection for symmetry, which is found in the shape curves of vein systems and other natural objects such as leaves, cell membranes, or tunnel systems built by ants. The topology and geometry of symmetry is controlled by numerical parameters, which act in analogy with a biological organism's DNA. The introductory part of this paper reviews findings from experiments illustrating the critical role of two-dimensional design parameters and shape symmetry for visual or tactile shape sensation, and for perception-based decision making in populations of experts and non-experts. Thereafter, results from a pilot study on the effects of fractal symmetry, referred to herein as the symmetry of things in a thing, on aesthetic judgments and visual preference are presented. In a first experiment (psychophysical scaling procedure), non-expert observers had to rate (scale from 0 to 10) the perceived beauty of a random series of 2D fractal trees with varying degrees of fractal symmetry. In a second experiment (two-alternative forced choice procedure), they had to express their preference for one of two shapes from the series. The shape pairs were presented successively in random order. Results show that the smallest possible fractal deviation from "symmetry of things in a thing" significantly reduces the perceived attractiveness of such shapes. The potential of future studies where different levels of complexity of fractal patterns are weighed against different degrees of symmetry is pointed out in the conclusion.
ARTICLE | doi:10.20944/preprints202301.0082.v2
Subject: Social Sciences, Education Keywords: at-risk readers; elementary reading; reading remediation; orthographic mapping; reading fluency; reading comprehension; accelerated learning
Online: 6 January 2023 (09:39:03 CET)
Reading proficiency is requisite in our read-to-learn educational system, yet two-thirds of American students are not proficient readers. Assuring educational equity means supporting all learners with multiple-component reading interventions that individually scaffold students while remediating weak literacy skills and providing intensive, sustainable intervention early. This study (N = 855) measured the efficacy of two different multiple-component reading programs for students in grades three, four, and five. Grade levels of students were assigned to either the treatment intervention or the typical practice condition, and all students were pre- and post-tested using EasyCBM Reading Benchmarks. Students scoring at/below the 30th percentile on either benchmark were also assessed with the WRMT-3 Passage Reading Comprehension and Oral Reading Fluency measures. Students in the treatment condition received Readable English, and students in the typical practice condition continued to receive Amplify CKLA during their regular ELA time, for 45-60 hours. Students receiving Readable English significantly outperformed students in the typical practice condition on measures of oral reading fluency, reading rate, accuracy, and passage comprehension. Raw scores, growth scale values, and grade equivalents are reported, and implications for practice are discussed. In a school year fraught with pandemic instructional interruptions and learning loss, elementary students in the intervention condition averaged a year’s worth of growth in reading fluency and nine months of growth in reading comprehension, compared to three and five months of fluency and comprehension growth, respectively, in the typical practice condition. Students in the Readable English condition experienced meaningful gains in reading rate and accuracy that will yield compounding word-reading volume dividends for students able to read text faster and more accurately going forward.
This study adds to accumulating evidence that multiple component reading programs designed to reinforce fluency skills also support reading comprehension gains for all students.
ARTICLE | doi:10.20944/preprints201704.0088.v1
Subject: Engineering, Other Keywords: hierarchical video quality assessment; human visual systems; primate visual cortex; full reference
Online: 14 April 2017 (11:52:44 CEST)
Video quality assessment (VQA) plays an important role in video applications for quality evaluation and resource allocation. It aims to evaluate the video quality consistent with the human perception. In this letter, a hierarchical gradient similarity based VQA metric is proposed inspired by the structure of the primate visual cortex, in which visual information is processed through sequential visual areas. These areas are modeled with the corresponding measures to evaluate the overall perceptual quality. Experimental results on the LIVE database show that the proposed VQA metric significantly outperforms the state-of-the-art VQA metrics.
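As a rough illustration of the kind of gradient-similarity measure many full-reference metrics build on (the paper's exact hierarchical formulation is not reproduced here), a local similarity between reference and distorted gradient magnitudes g1 and g2 can take the SSIM-like form s = (2·g1·g2 + c) / (g1² + g2² + c), which is 1 for identical gradients and decays with distortion:

```python
# Hedged sketch: a generic local gradient-similarity term of the kind used
# by full-reference quality metrics. The constant c stabilizes the ratio
# when both gradients are near zero; its value here is illustrative.

def gradient_similarity(g1, g2, c=1e-4):
    return (2.0 * g1 * g2 + c) / (g1 * g1 + g2 * g2 + c)

print(gradient_similarity(0.8, 0.8))  # identical gradients -> ~1.0
print(gradient_similarity(0.8, 0.1))  # distorted gradient  -> much lower
```

A full metric would pool such local scores over the frame and across the hierarchy of modeled visual areas.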
ARTICLE | doi:10.20944/preprints202211.0134.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Deep Learning; Visual-Language Reasoning; Visual Commonsense Generation; Video-grounded Dialogue; VisualCOMET; AVSD
Online: 8 November 2022 (02:01:28 CET)
“A Picture is worth a thousand words”. Given an image, humans are able to deduce various cause-and-effect captions of past, current, and future events beyond the image. The task of visual commonsense generation aims at generating three cause-and-effect captions (1) what needed to happen before, (2) what is the current intent, and (3) what will happen after for a given image. However, such a task is challenging for machines owing to two limitations: existing approaches (1) directly utilize conventional vision-language transformers to learn relationships between input modalities, and (2) ignore relations among target cause-and-effect captions but consider each caption independently. We propose Cause-and-Effect BART (CE-BART) which is based on (1) Structured Graph Reasoner that captures intra- and inter-modality relationships among visual and textual representations, and (2) Cause-and-Effect Generator that generates cause-and-effect captions by considering the causal relations among inferences. We demonstrate the validity of CE-BART on VisualCOMET and AVSD benchmarks. CE-BART achieves SOTA performances on both benchmarks, while extensive ablation study and qualitative analysis demonstrate the performance gain and improved interpretability.
ARTICLE | doi:10.20944/preprints202002.0191.v1
Subject: Medicine And Pharmacology, Ophthalmology Keywords: experience; resilience; process; visual impairments
Online: 14 February 2020 (09:24:03 CET)
There is a lack of research based on in-depth theoretical and scientific knowledge to understand the visually impaired, and there has been little effort in the application of strategies for early intervention to minimize the risks these people might encounter during development. This study used semi-structured interviews with eight persons with visual impairments who had various experiences with resiliency. Three resilience processes based on life experiences were identified: 1) Experience and Adaptation: “self-awareness of disability” and “adaptation to disability and the environment”; 2) Facing the Circumstances: “the exposure to concealment and abuse,” “the suppression of potential,” “denial and abandonment by family,” “poverty and disability,” “exchange and self-regulation,” and “social integration” themes; and 3) Positive Reinforcement: “self-disclosure and jump-starting life,” “maintenance of positive thinking,” and “socioeconomic independence.” These findings expand the understanding of the factors common to the resilience process experienced by individuals with visual impairment and highlight the importance of psychological support, family, education, and social support.
ARTICLE | doi:10.20944/preprints202306.0061.v1
Subject: Medicine And Pharmacology, Ophthalmology Keywords: neovascular age-related macular degeneration; visual outcomes; anti-VEGF; visual impairment; population-based cohort
Online: 1 June 2023 (08:10:15 CEST)
Neovascular age-related macular degeneration (nAMD) leads to visual impairment if not treated promptly. Intravitreal anti-VEGF drugs have revolutionized nAMD treatment over the past two decades. We evaluated visual outcomes of anti-VEGF treatment in nAMD in a real-life, population-based cohort study. Data included parameters for age, sex, age at diagnosis, laterality, chronicity, symptoms, visual outcomes, lens status, and history of intravitreal injections. A total of 1088 eyes (827 patients) with nAMD were included. Visual acuity was stable or improved in 984 eyes (90%) after an average 36±25 months of follow-up. Bevacizumab was the first-line drug in 1083 (99.5%) eyes. Vision improved by ≥15 ETDRS letters in 377 (35%) eyes and by >5 ETDRS letters in 309 (28%) eyes, and was stable (±5 ETDRS letters) in 298 (27%) eyes after anti-VEGF treatment. A loss of 5 to <15 ETDRS letters was noted in 44 (4%) eyes and of ≥15 ETDRS letters in 60 (6%) eyes. At the diagnosis of nAMD, 110 of 827 patients (13%) fulfilled the criteria of visual impairment, whereas 179 patients (22%) were visually impaired after the follow-up. Improvement or stabilization of vision was noted in 90% of the anti-VEGF-treated eyes with nAMD. Anti-VEGF agents are thus crucial in diminishing nAMD-related visual impairment.
ARTICLE | doi:10.20944/preprints202108.0569.v1
Subject: Social Sciences, Cognitive Science Keywords: visual short-term memory; repetitive transcranial magnetic stimulation; visual memory precision; serial memory effects
Online: 31 August 2021 (11:43:33 CEST)
We investigated the role of the human middle temporal complex (hMT+) in the memory encoding and storage of a sequence of four coherently moving random-dot kinematograms (RDKs) by applying repetitive transcranial magnetic stimulation (rTMS) during an early or late phase of the retention interval. Moreover, in a second experiment we also tested whether disrupting the functional integrity of hMT+ during the early phase impaired the precision of the encoded motion directions. Overall, results showed that both recognition accuracy and precision were worse in middle serial positions, suggesting the occurrence of primacy and recency effects. We found that rTMS delivered during the early (but not the late) phase of the retention interval impaired not only recognition of the RDKs but also the precision of the retained motion direction. However, such impairment occurred only for RDKs presented in middle positions of the sequence, where performance was already closer to chance level. Altogether, these findings suggest an involvement of hMT+ in the memory encoding of visual motion direction. Given that both sequence position and rTMS modulated not only recognition but also the precision of the stored information, these findings support a model of visual short-term memory with a variable resolution for each stored item, consistent with the assigned amount of memory resources, in which item-specific memory resolution is supported by the functional integrity of area hMT+.
ARTICLE | doi:10.20944/preprints202003.0018.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: visual-inertial integrated navigation system (VINS); visual odometry; autonomous driving; adaptive tuning; urban canyons
Online: 2 March 2020 (00:38:17 CET)
Visual-inertial integrated navigation systems (VINS) have been extensively studied over the past decades to provide accurate and low-cost positioning solutions for autonomous systems. Satisfactory performance can be obtained in an ideal scenario with sufficient, static environment features. However, there are usually numerous dynamic objects in deep urban areas, and these moving objects can severely distort the feature tracking process, which is fatal to feature-based VINS. A well-known method for mitigating the effects of dynamic objects is to detect vehicles using deep neural networks and remove the features belonging to the surrounding vehicles. However, excessive exclusion of features can severely distort the geometry of the feature distribution, leading to limited visual measurements. Instead of directly eliminating the features from dynamic objects, this paper proposes to adapt the visual measurement model based on the quality of feature tracking to improve the performance of VINS. Firstly, a self-tuning covariance estimation approach is proposed to model the uncertainty of each feature measurement by integrating two parts: 1) the geometry of the feature distribution (GFD), and 2) the quality of feature tracking. Secondly, an adaptive M-estimator is proposed to correct the measurement residual model to further mitigate the impact of outlier measurements, such as dynamic features. Unlike the conventional M-estimator, the proposed method effectively alleviates the reliance on excessive parameterization of the M-estimator. Experiments were conducted in a typical urban area of Hong Kong with numerous dynamic objects, and the results show that the proposed method effectively mitigates the effects of dynamic objects and improves the accuracy of VINS compared with the conventional method.
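The M-estimator idea can be illustrated with a generic Huber-style weighting function (a hedged sketch, not the paper's adaptive formulation): large feature-tracking residuals, e.g. from points on moving vehicles, are softly down-weighted rather than discarded outright, preserving the geometry of the feature distribution.

```python
# Illustrative sketch (not the paper's exact model): a Huber-style
# M-estimator weight that down-weights outlier residuals from dynamic
# features instead of excluding those features entirely.

def huber_weight(residual, k=1.345):
    """Weight in (0, 1]: 1 for small residuals, decaying as k/|r| beyond k."""
    r = abs(residual)
    return 1.0 if r <= k else k / r

print(huber_weight(0.5))  # small residual (static feature): full weight
print(huber_weight(5.0))  # large residual (dynamic feature): weight ~ k/|r|
```

The paper's contribution, per the abstract, is to tune such weighting adaptively rather than rely on a fixed hand-set threshold like the k used here.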
REVIEW | doi:10.20944/preprints202306.0360.v1
Subject: Biology And Life Sciences, Neuroscience And Neurology Keywords: eye movements; visual system; mouse vision
Online: 6 June 2023 (02:44:55 CEST)
The mouse visual system recently became the most popular model to study the cellular and circuit mechanisms of sensory processing. However, the importance of eye movements in mice only started to be appreciated recently. Eye movements provide a basis for active sensing and deliver insights into various brain functions and dysfunctions. A plethora of knowledge on the central control of eye movements and their role in perception and behaviour arose from work on primates. However, an overview of the known eye movement types in mice and a comparison to primates is missing. Here, we review the eye movement types described to date in mice and compare them to those observed in primates. We discuss the central neuronal mechanisms for their generation and control. Furthermore, we review the mounting literature on eye movements in mice during head-fixed and freely moving behaviours. Finally, we highlight gaps in our understanding and suggest future directions for research.
ARTICLE | doi:10.20944/preprints202211.0333.v1
Subject: Engineering, Control And Systems Engineering Keywords: Visual SLAM; Indoor positioning; Mini-drone
Online: 17 November 2022 (09:59:43 CET)
Mini-drones can be used for a variety of tasks, such as weather monitoring, package delivery, search and rescue, and recreation. Their uses are mostly restricted to outdoor locations with access to the Global Positioning System (GPS) and/or similar systems, since their usefulness, safety, and performance substantially rely on ubiquitously accurate positioning and navigation. Indoor localization is getting better thanks to technologies like Visual Simultaneous Localization and Mapping (V-SLAM). However, more advancements are still required for mini-drone navigation applications with greater safety standards. In this research, a novel method for enhancing indoor mini-drone localization performance is proposed. By merging ORB-SLAM2 (Oriented FAST and Rotated BRIEF SLAM), Semi-Direct Monocular Visual Odometry (SVO), and an Adaptive Complementary Filter (ACF), the suggested strategy improves on existing V-SLAM approaches. The findings demonstrate that, when compared to other widely used indoor localization algorithms, the suggested methodology performs better at estimating location under various conditions (low light, low texture, and dynamic environments).
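The fusion step can be sketched as a basic complementary filter blending the two odometry estimates; the gain below is fixed and hypothetical, whereas the paper's ACF adapts it to the current conditions (the adaptation law is not reproduced here):

```python
# Hedged sketch of complementary filtering between two position sources.
# The alpha gain is a fixed illustrative value; an adaptive filter would
# vary it, e.g. according to tracking quality in each pipeline.

def complementary_fuse(pos_a, pos_b, alpha):
    """Blend two 2-D position estimates; alpha weights source A."""
    return tuple(alpha * a + (1.0 - alpha) * b for a, b in zip(pos_a, pos_b))

orb_slam_pos = (1.00, 2.00)   # hypothetical ORB-SLAM2 estimate (metres)
svo_pos = (1.10, 1.90)        # hypothetical SVO estimate (metres)
fused = complementary_fuse(orb_slam_pos, svo_pos, alpha=0.7)
print(fused)
```

The same blending form applies per axis in 3-D; only the gain schedule distinguishes an adaptive filter from this fixed-gain sketch.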
Subject: Computer Science And Mathematics, Information Systems Keywords: fraud audit; process mining; visual analytics
Online: 2 March 2021 (09:19:01 CET)
Among the knowledge areas in which process mining has had an impact, the audit domain is particularly striking. Traditionally, audits seek evidence in a data sample that allows auditors to make inferences about a population. Mistakes are usually committed when generalizing the results, and anomalies therefore remain in unprocessed sets. However, there are some efforts to address these limitations using process mining-based approaches for fraud detection. To the best of our knowledge, no fraud audit method exists that combines process mining techniques and visual analytics to identify relevant patterns. This paper presents a fraud audit approach based on the combination of process mining techniques and visual analytics. The main advantages are: (i) a method is included that guides the use of the visual capabilities of process mining to detect fraud data patterns during an audit; (ii) the approach can be generalized to any business domain; (iii) well-known process mining techniques are used (Dotted Chart, Trace Alignment, Fuzzy Miner…). The techniques were selected by a group of experts and were extended to enable filtering for contextual analysis, to handle levels of process abstraction, and to facilitate implementation in the area of fraud audits. Based on the proposed approach, we developed a software solution that is currently being used in the financial sector as well as in the telecommunications and hospitality sectors. Finally, for demonstration purposes, we present a real hotel management use case in which we detected suspected fraud behaviors, thus validating the effectiveness of the approach.
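A minimal sketch of the pre-processing such an approach typically relies on, with an entirely hypothetical hotel event log: group cases into trace variants and flag rare ones for closer visual inspection (e.g., with the Dotted Chart or Trace Alignment views mentioned above):

```python
from collections import Counter

# Hypothetical event log: (case id, ordered activity trace). The activity
# names are illustrative, not taken from the paper's use case.
event_log = [
    ("case1", ["check-in", "charge", "check-out"]),
    ("case2", ["check-in", "charge", "check-out"]),
    ("case3", ["check-in", "charge", "check-out"]),
    ("case4", ["check-in", "void-charge", "check-out"]),  # unusual path
]

# Count how often each trace variant occurs across cases.
variants = Counter(tuple(trace) for _, trace in event_log)

# Variants seen only once are candidates for fraud inspection.
rare = [variant for variant, count in variants.items() if count == 1]
print(rare)
```

In a real audit, rarity alone is not evidence of fraud; it only prioritizes which behaviors the visual analytics should surface for the auditor.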
CASE REPORT | doi:10.20944/preprints202011.0329.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: Two-photon microperimetry; cataracts; visual sensitivity
Online: 12 November 2020 (08:41:43 CET)
Purpose: The accuracy of conventional visual function tests, which emit visible light, decreases in patients with corneal scars, cataracts, and vitreous hemorrhages. In contrast, infrared (IR) light exhibits greater tissue penetrance than visible light and is less susceptible to optical opacities. We therefore compared visual function results obtained using conventional visual function tests and infrared 2-photon microperimetry (2PM-IR) in a subject with a brunescent nuclear sclerotic and posterior subcapsular cataract before and after cataract surgery. Methods: Visual function testing using the cone contrast threshold (CCT) test, conventional microperimetry (cMP), visible light microperimetry from a novel device (2PM-Vis), and 2PM-IR were performed before and after cataract surgery. Results: Cone contrast threshold testing improved for the S-cone, M-cone, and L-cone by 111, 14, and 30. Retinal sensitivity assessed using cMP, 2PM-Vis, and 2PM-IR improved by 18 dB, 17.4 dB, and 3.4 dB, respectively. Conclusions and Importance: 2PM-IR, unlike conventional visual function tests, showed minimal variability in retinal sensitivity before and after surgery. Thus, IR visual stimulation introduces a paradigm shift for measuring visual function in the retina and posterior visual pathways by circumventing optical media opacities.
ARTICLE | doi:10.20944/preprints201807.0243.v1
Subject: Biology And Life Sciences, Forestry Keywords: Habitat types; visual differences; landscape characteristics
Online: 13 July 2018 (17:07:05 CEST)
The unique qualities of areas with natural landscape features help provide sustainability. Moreover, their different vegetation covers and ecosystems contribute to the preservation of their visual attraction. In recent years, the demand for natural areas has not only been seen at a recreational level, but has also become associated with the conservation and sustainability of those areas. Although the concept of sustainability is expressed from an ecological point of view, studies indicate that the visual aspect is also an important component. Thus, in this study, a visual quality assessment was carried out which considered both objective and subjective evaluations of different habitat types. Efteni lake-wetland and Melen Ağzı dunes (Düzce), Anzer, Ayder, and Çat Düzü highlands (Rize), and Sultanmurat and Taşli highlands (Trabzon) were selected as the study areas. A visual quality analysis was conducted with a total of 43 participants (23 students, 16 local inhabitants and four lecturers) in order to establish their preferences in areas with different landscape characteristics. For the determination of the visual qualifications of these areas, a total of 24 photographs showing typical images representing each habitat type (three photographs for each) were employed. Taking perceptual parameters into consideration, assessment of visual quality was made according to the points given to each photo by the participants. Consequently, differences in visual quality were found to be influenced by the demographic status of the participants, differences in habitat types, recreational trends and the conservation status of the habitats.
ARTICLE | doi:10.20944/preprints202309.1618.v1
Subject: Medicine And Pharmacology, Neuroscience And Neurology Keywords: Parkinson's disease; visual hallucinations; cerebellum; functional connectivity
Online: 25 September 2023 (13:03:15 CEST)
Recent studies have discovered that functional connections are impaired in patients with Parkinson's disease (PD) accompanied by hallucinations(PD-H), even at the preclinical stage. The cerebellum has been implicated as playing a role in cognitive processes. However, the functional connectivity (FC) between cognitive sub-regions of the cerebellum of PD patients with hallucinations needs to be further clarified. Resting-state functional magnetic resonance imaging (rs-fMRI) data from three groups (17 PD-H, 13 no hallucinations accompanying the (PD-NH), and 26 healthy controls (HC) ) was collected in this study to explore the role of cerebellar FC changes in the cognitive performance. Additionally, we define cerebellar FC as a training feature for classifying all subjects using Support Vector Machine(SVM). We found that PD-H patients' cerebellum(Vermis_4_5 ) FC had increased within the left side of the precuneus(PCUN) compared to HC, cerebellum(Vermis_1_2) had increased within bilateral opercular part of the inferior frontal gyrus(IFGoprec) and triangular part of the inferior frontal gyrus(IFCtriang), left side of postcentral gyrus(PoCG), Inferior parietal lobe(IPL), and PCUNs compared to PD-NH. In the training results of machine learning, cerebellar FC has also been proven to be an effective biomarker feature, with a recognition rate of over 90% for PD-H. These findings indicate that cortico-cerebellar FC in PD-H and PD-NH patients were significantly disrupted with different distributions. The proposed pipeline offers a promising low-cost alternative for the diagnosis of preclinical PD-H and can be useful for other degenerative brain disorders.
ARTICLE | doi:10.20944/preprints202309.1107.v1
Subject: Biology And Life Sciences, Neuroscience And Neurology Keywords: 5-HT; visual cortex; E/I balance
Online: 18 September 2023 (14:02:27 CEST)
Serotonergic neurons constitute one of the main systems of neuromodulators, whose diffuse projections regulate the functions of the cerebral cortex. Serotonin (5-HT) is known to play a crucial role in the differential modulation of cortical activity related to behavioral contexts. Certain aspects of the 5-HT signaling framework hint at its potential role as a modulator of activity-dependent synaptic changes within the critical period of the primary visual cortex (V1). Cells of the serotonergic system are among the first neurons to differentiate and operate. During postnatal development, ramifications from raphe nuclei become massively distributed in the visual cortical area, remarkably increasing the availability of 5-HT for the regulation of excitatory and inhibitory synaptic activity. A substantial amount of evidence has demonstrated that synaptic plasticity at pyramidal neurons of the superficial layers of V1 critically depends on a fine regulation of the balance between excitation and inhibition (E/I). Therefore, 5-HT could play an important role in controlling this balance, providing the appropriate excitability conditions that favor synaptic modifications. In order to explore this possibility, the present work used in vitro intracellular electrophysiological recording techniques to study the effects of 5-HT on the E/I balance of V1 layer 2/3 neurons, during the critical period. Serotonergic modulation of the E/I balance has been analyzed on spontaneous activity, evoked synaptic responses, and long-term depression (LTD). Our results pointed out that the predominant action of 5-HT implies a reduction of the E/I balance. 5-HT promoted LTD at excitatory synapses while blocking LTD at inhibitory synaptic sites, thus shifting the Hebbian alterations of synaptic strength towards lower levels of E/I balance.
ARTICLE | doi:10.20944/preprints202308.1390.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Programming; Visual Execution Environment; Java; Visualization; Contextualization
Online: 21 August 2023 (09:56:45 CEST)
Students in their first year of computer science (CS1) at universities typically struggle to grasp programming concepts. This paper covers research using a Java programming language-guided visual execution environment (VEE) to teach CS1 students about programming concepts. The topics covered include input and output, conditionals, loops, functions, arrays, recursion, and files, all of which are covered in an introductory programming course. The VEE walks beginner programmers through the fundamentals of programming, utilizing visual metaphors to explain and direct interactive Java tasks. The primary goal of this study is to determine whether a cohort of 105 CS1 students from four different groups who are enrolled in two universities—one in Madrid, Spain, and the other in Galway, Ireland—can advance their programming abilities under the guidance of the VEE. Second, does the improvement vary depending on the programming concept? The findings demonstrate that students' programming knowledge has greatly increased. This improvement is significant for all programming concepts, while it is more pronounced for some topics than others, such as operators, conditionals, and loops. Additionally, it demonstrates that students had little prior understanding of files and recursion. The most well-known concept to them was the sequence concept.
ARTICLE | doi:10.20944/preprints202304.0926.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: artificial intelligence; image analysis; visual language model
Online: 25 April 2023 (10:47:52 CEST)
Recent advancements in Natural Language Processing (NLP), particularly in Large Language Models (LLMs), associated with deep learning-based computer vision techniques, have shown substantial potential for automating a variety of tasks. One notable model is Visual ChatGPT, which combines ChatGPT’s LLM capabilities with visual computation to enable effective image analysis. The model’s ability to process images based on textual inputs can revolutionize diverse fields. However, its application in the remote sensing domain remains unexplored. This is the first paper to examine the potential of Visual ChatGPT, a cutting-edge LLM founded on the GPT architecture, to tackle the aspects of image processing related to the remote sensing domain. Among its current capabilities, Visual ChatGPT can generate textual descriptions of images, perform canny edge and straight line detection, and conduct image segmentation. These offer valuable insights into image content and facilitate the interpretation and extraction of information. By exploring the applicability of these techniques within publicly available datasets of satellite images, we demonstrate the current model’s limitations in dealing with remote sensing images, highlighting its challenges and future prospects. Although still in early development, we believe that the combination of LLMs and visual models holds a significant potential to transform remote sensing image processing, creating accessible and practical application opportunities in the field.
REVIEW | doi:10.20944/preprints202209.0432.v1
Subject: Biology And Life Sciences, Biology And Biotechnology Keywords: eye evolution; opsin; photoreceptor; phototransduction; visual cycle
Online: 28 September 2022 (05:12:35 CEST)
Understanding the molecular underpinnings of the evolution of complex (multi-part) systems is a fundamental topic in biology. One unanswered question is the extent to which similar or different genes and regulatory interactions underlie similar complex systems across species. Animal eyes and phototransduction (light detection) are outstanding systems to investigate this question because some of the genetics underlying these traits are well-characterized in model organisms. However, comparative studies using non-model organisms are also necessary to understand the diversity and evolution of these traits. Here, we compare the characteristics of photoreceptor cells, opsins, and phototransduction cascades in diverse taxa, with particular focus on cnidarians. In contrast to the common theme of deep homology, whereby similar traits develop mainly using homologous genes, comparisons of visual systems - especially in non-model organisms - are beginning to highlight a “deep diversity” of underlying components, illustrating how variation can underlie similar complex systems across taxa. Although using candidate genes from model organisms across diversity was a good starting point to understand the evolution of complex systems, unbiased genome-wide comparisons and subsequent functional validation will be necessary to uncover unique genes that comprise complex systems of non-model groups to better understand biodiversity and its evolution.
REVIEW | doi:10.20944/preprints202106.0549.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: cholinesterase; acetylcholine; visual function; ocular surface; retina
Online: 22 June 2021 (14:28:38 CEST)
The visual system is regulated by the nervous through neurotransmitters, which play an important role in visual and ocular functions. One of those neurotransmitters is acetylcholine, a key molecule that plays a diversity of biological functions. On the other hand, acetylcholinesterase, the enzyme responsible for the hydrolysis of acetylcholine, is implicated in cholinergic function. However, several studies showed that in addition to their enzymatic functions, Acetylcholinesterase exerts non-catalytic functions. In recent years, the importance of evaluating all possible functions of acetylcholine-acetylcholinesterase has been evidenced. Nevertheless, there is evidence that suggests cholinesterase activity in the eye can regulate some biological events both in structures of the anterior and posterior segment of the eye and therefore in the visual information that is processed in the visual cortex. Hence, the evaluation of cholinesterase activity could be a possible marker of alterations in cholinergic activity not only in ocular disease but also in systemic diseases.
ARTICLE | doi:10.20944/preprints202001.0340.v1
Subject: Engineering, Mechanical Engineering Keywords: thermal drilling; material; visual evaluation; macrostructure; microstructure
Online: 28 January 2020 (10:52:21 CET)
The contribution deals with the joining of various types of materials by technology of thermal drilling. In various branches of industries, also in the automotive industry must be joining operations, service, repairing, substitution or protection workpieces, components with various types of materials. Equally, the important role as joint, is also used material, and a product preparation by assembly and disassembly operations. By utilization of new friction hybrid joining technologies we can shortage the production time, provide automation in operations, increase the quality of joints, spare of economical expenses and also we can protect the environment. In this paper authors have investigated the effect of friction drilling on the tested material, aluminium alloy AlMgSi, which was used for material testing. The created joints were evaluated visually and by microscopy methods. The errors of tested joining were documented and described, too. This contribution was made with cooperation of Technical University of Kosice and with U. S. Steel Kosice, s.r.o.
ARTICLE | doi:10.20944/preprints202101.0347.v1
Subject: Social Sciences, Psychology Keywords: interocular suppression; consciousness; color vision; visual search; attentional templates; early visual system; awareness; continuous flash suppression; binocoular rivalry
Online: 18 January 2021 (14:32:29 CET)
Color can direct visual attention to specific locations through bottom-up and top-down mechanisms. Using Continuous Flash Suppression (CFS) as way to investigate the factors that gate access to consciousness, the current study investigated whether color also directly affected the timing of conscious perception. Low or high spatial frequency (SF) gratings with different orientations were shown as targets to the non-dominant eye of human participants. CFS patterns were presented at a rate of 10Hz to the dominant eye to delay conscious perception of the targets, and participants had the task to report the target’s orientation as soon as they could see it. With low-SF targets, two types of color-based effects became evident. First, when the targets and the CFS patterns had different colors, the targets entered consciousness faster than in trials where the targets and CFS patterns had the same color. Second, when participants searched for a specific target color, targets that matched these search settings entered consciousness faster compared to conditions where the target color was irrelevant and could vary from trial to trial. Thus, the current study demonstrates that color is a central feature of human perception and leads to faster conscious perception of visual stimuli through bottom-up and top-down attentional mechanisms.
ARTICLE | doi:10.20944/preprints201808.0523.v1
Subject: Social Sciences, Cognitive Science Keywords: frequency difference limens; blindfold; visual cues; auditory-visual synesthesia; gliding frequencies; perceptual limit, common resource theory; multiple resource model
Online: 30 August 2018 (10:40:28 CEST)
How perceptual limits can be overcome has long been examined by psychologists. This study investigated whether visual cues, blindfolding, visual-auditory synesthetic experience and music training could facilitate a smaller frequency difference limen (FDL) in a gliding frequency discrimination test. It was hoped that the auditory limits could be overcome through visual facilitation, visual deprivation, involuntary cross-modal sensory experience or music practice. Ninety university students, with no visual or auditory impairment, were recruited for this one-between (blindfold/visual cue) and one-within (control/experimental session) designed study. A MATLAB program was prepared to test their FDL by an alternative forced-choice task (gliding upwards/gliding downwards/no change) and two questionnaires (Vividness of Mental Imagery Questionnaire & Projector-Associator Test) were used to assess their tendency to synesthesia. Participants with music training showed a significantly smaller FDL; on the other hand, being blindfolded, being provided with visual cues or having synesthetic experience before could not significantly reduce the FDL. However, the result showed a trend of reduced FDLs through blindfolding. This indicated that visual deprivation might slightly expand the limits in auditory perception. Overall, current study suggests that the inter-sensory perception can be enhanced through training but not though reallocating cognitive resources to certain modalities. Future studies are recommended to verify the effects of music practice on other perceptual limits.
ARTICLE | doi:10.20944/preprints201808.0462.v1
Subject: Public Health And Healthcare, Public, Environmental And Occupational Health Keywords: Key words: Occupational therapy, visual perceptual skills, Test of Visual Perceptual Skills-3 (TVPS-3), Human Immunodeficiency Virus (HIV).
Online: 27 August 2018 (13:36:15 CEST)
Abstract Introduction: Visual perceptual skills are essential for independent participation in self-care tasks, educational, work and leisure time activities. The effect of HIV on the visual perceptual skills is not well understood among children in low resource settings like Zimbabwe. Methods: A cross sectional comparative study was done with 30 children living with HIV and 30 children living without HIV residing in Harare urban area. The TVPS-3 was used to assess their visual perceptual skills. SPSS version 22, STATISTICA 13 and Microsoft 2016 were used for data analysis. Results: Both groups of children had mean percentile ranks below 50 on their TVPS-3 scores. Children without HIV generally performed better than those with HIV but the difference was not statistically significant in most cases. Through univariate analysis, only performance on Spatial Relations significantly differed between the two groups. Both groups had lowest scores in Basic visual perceptual skills. Age and school grade were the independent predictors of the children’s performances in the study. Conclusion: There is need for Occupational therapy services in public primary schools and in the pediatric Opportunistic Infections clinics in hospitals to be part of the health team which caters for children with visual perceptual challenges.
ARTICLE | doi:10.20944/preprints202308.1419.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Visual SLAM; repaired mask; dynamic environments; ORB-SLAM2
Online: 21 August 2023 (08:54:41 CEST)
The traditional Simultaneous Localization and Mapping (SLAM) systems are based on the 1 strong static assumption, and their performance will degrade significantly due to the presence of 2 dynamic objects located in dynamic environments. To decrease the effects brought by the dynamic 3 objects, based on ORB-SLAM2 system framework, a novel dynamic semantic SLAM system called 4 MOR-SLAM is presented using mask repair method, which can accurately detect dynamic objects 5 and realize high-precision positioning and tracking of the system in dynamic indoor environments. 6 First, an instance segmentation module is added to the front end of ORB-SLAM2 to distinguish 7 dynamic and static objects in the environment and obtains a preliminary mask. Next, to overcome the 8 under-segmentation problem in instance segmentation, a new mask inpainting model is proposed to 9 ensure that the integrity of object masks, which repairs large objects and small objects in the image 10 with depth value fusion method and morphological method respectively. Then, a reliable basic matrix 11 can be obtained based on the above repaired mask. Finally, the potential dynamic feature points 12 in the environment are detected and removed through the reliable basic matrix, and the remaining 13 static feature points are input into the tracking module of the system to realize the high-precision 14 positioning and tracking in dynamic environments. The experiments on the public TUM dataset 15 show that, compared with ORB-SLAM2, the MOR-SLAM improves the absolute trajectory accuracy 16 by 95.55%. In addition, compared with DynaSLAM and DS-SLAM on the high-dynamic sequences 17 (fr3/w/rpy and fr3/w/static), the MOR-SLAM improves the absolute trajectory accuracy by 15.20% 18 and 59.71%, respectively
ARTICLE | doi:10.20944/preprints202302.0453.v1
Subject: Engineering, Civil Engineering Keywords: visual behaviour; bicycle simulator; eye tracking; cyclist safety
Online: 27 February 2023 (07:17:39 CET)
Cyclists are one of the main categories of road users particularly exposed to accident risk. The increasing use of this ecological means of transport requires a specific assessment of cyclist safety in terms of traffic flow and human factors. In this study particular visual tracking tool has been used in order to highlight not only the main critical points of the infrastructure, where a high level of distraction is recorded but also the various interactions with different road users (pedestrians, vehicles, buses, wheelchairs, cyclists). In order to confirm the critical points of the infrastructure and the trend of workload, a similar circuit was reproduced in a bicycle simulator, which also allowed a meaningful comparison of cycling behaviour. The cycling performance was also evaluated both from an objective point of view, with the count of frames related to each category of visualization, and a subjective one, through the questionnaires. The results show the crossing as a critical point because of only 4/3% fixation for both simulated and real tests in order to confirm the significance of the comparison between the two experiments. The high attention rate resulting from frame-by-frame analysis also points to a clear difference in the perception of users, who feel with a low workload.
ARTICLE | doi:10.20944/preprints202212.0451.v1
Subject: Engineering, Civil Engineering Keywords: ADAS; Driving behavior; Cyclists; Road safety; Visual behavior
Online: 23 December 2022 (07:58:49 CET)
The possibility of using some warnings inside modern vehicles should be an aid to the driving activity. However, the information transferred to users is not always received in the expected way due to the variability and complexity of the road environment. This study, therefore, aims to identify a procedure that allows to ascertain whether drivers receive the data in an appropriate way even during particular manoeuvres, such as passing cyclists on a winding road or, on the contrary, if they represent an unnecessary overload. To answer this question, an experimentation in a simulated environment was set up with which the drivers’ visual behaviour was recorded in presence and absence of a driving aid device (On- Board Unit, OBU). The results show that, in some situation, the information provided by the OBU helps to maintain a more virtuous driving behaviour but, in the most complex ones, drivers acquire information from a smaller number of sources, excluding the aid devices present inside the cockpit. This procedure is useful for ADAS designers in order to refine these instruments but also for road managers who can improve safety by inserting appropriate signs or speed limits.
ARTICLE | doi:10.20944/preprints202209.0366.v2
Subject: Computer Science And Mathematics, Computer Science Keywords: Vibration Detection; Progress; Power Plant; Bibliometrics; Visual Analysis
Online: 17 October 2022 (10:50:11 CEST)
After long years of development, the technology of analyzing the working condition of power units based on vibration signals has had relatively stable applications, but the accuracy and the degree of automation and intelligence for fault diagnosis are still inadequate due to the limitations of the current development of key technologies. With the development of big data and artificial intelligence technology, the involvement of new technologies will be an important boost to the development of this field. To support the subsequent research, bibliometrics is used as a tool to sort out the development of the technology in this field at the macro level; at the micro level, the classical and key literature is studied to grasp the development status at the technical level and prepare for the selection of entry points to continue in-depth innovation afterwards.
ARTICLE | doi:10.20944/preprints202106.0441.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: cognition; visual memory; reaction time; alcohol; Bipolar disorder
Online: 16 June 2021 (11:37:43 CEST)
The purpose of this study was to explore the association of cognition with hazardous drinking and alcohol related disorder in persons with bipolar disorder (BD). The study population included 1,268 persons from Finland with bipolar disorder. Alcohol use was assessed through hazardous drinking and alcohol related disorder including alcohol use disorder (AUD). Hazardous drinking was screened with the AUDIT-C (Alcohol Use Disorders Identification Test for Consumption) screening tool. Alcohol related disorder diagnoses were obtained from the national registrar data. Participants performed two computerized tasks from the Cambridge automated neuropsychological test battery (CANTAB) on tablet computer: the 5-choice serial reaction time task, or, reaction time (RT) test and the Paired Associative Learning (PAL) test. Association between RT-test and alcohol use was analyzed with log-linear regression, and eβ with 95% confidence intervals (CI) are reported. PAL first trial memory score was analyzed with linear regression, and β with 95% CI are reported. PAL total errors adjusted was analyzed with logistic regression and odds ratios (OR) with 95% CI are reported. After adjustment for age, education and housing status, hazardous drinking was associated with lower median and less variable RT in females while AUD was associated with a poorer PAL test performance in terms of the total errors adjusted scores in females. Our findings of positive associations between alcohol use and cognition in persons with bipolar disorder are unique.
ARTICLE | doi:10.20944/preprints201809.0509.v1
Subject: Engineering, Control And Systems Engineering Keywords: visual-inertial odometry; UAV navigation; sensor fusion; optimization
Online: 26 September 2018 (13:23:48 CEST)
Visual inertial odometry (VIO) has recently received much attention for efficient and accurate ego-motion estimation of unmanned aerial vehicle systems (UAVs). Recent studies have shown that optimization-based algorithms achieve typically high accuracy when given enough amount of information, but occasionally suffer from divergence when solving highly non-linear problems. Further, their performance significantly depends on the accuracy of the initialization of inertial measurement unit (IMU) parameters. In this paper, we propose a novel VIO algorithm of estimating the motional state of UAVs with high accuracy. The main technical contributions are the fusion of visual information and pre-integrated inertial measurements in a joint optimization framework, and the stable initialization of scale and gravity using relative pose constraints. To count for ambiguity and uncertainty of VIO initialization, a local scale parameter is adopted in the online optimization. Quantitative comparisons with the state-of-the-art algorithms on the EuRoC dataset verify the efficacy and accuracy of the proposed method.
ARTICLE | doi:10.20944/preprints201807.0126.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: image enhancement; cuckoo optimization; entropy and visual factor
Online: 9 July 2018 (05:07:27 CEST)
The notion of enhancement of the image is to ameliorate the perceptibility of information contained in an image. In the present research, a novel technique for the enhancement of image quality is propounded using fuzzy logic technique with a cuckoo optimization algorithm. Generally, the image is transformed from RGB domain to HSV domain keeping the color information intact within the image. The image has been categorized into three regions: underexposed, overexposed and mixed region on the basis of two threshold values. For the fuzzification of under and overexposed area the degree of membership is defined by the Gaussian membership, while the mixed area is fuzzified by parametric sigmoid function. The key parameters like visual factors and fuzzy contrast provide the quantitative analysis of an image. An objective function is framed which involves entropy and visual factor has been optimized by a new evolutionary cuckoo optimization algorithm. The results procured after simulation by the cuckoo optimization algorithm are compared with Bacterial foraging algorithm and ant colony optimization based image enhancement and this approach is found to be improved.
ARTICLE | doi:10.20944/preprints201807.0106.v1
Subject: Social Sciences, Cognitive Science Keywords: auditory-visual speech perception; bipolar disorder; speech perception
Online: 6 July 2018 (05:21:19 CEST)
The focus of this study was to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to non-disordered individuals and whether there were any differences in auditory and visual speech integration in the manic and depressive episodes in bipolar disorder patients. It was hypothesized that bipolar groups’ auditory-visual speech integration would be less robust than the control group. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more than their depressive phase counterparts. To examine these, the McGurk effect paradigm was used with typical auditory-visual speech (AV) as well as auditory-only (AO) speech perception on visual-only (VO) stimuli. Results. Results showed that the disordered and non-disordered groups did not differ on auditory-visual speech (AV) integration and auditory-only (AO) speech perception but on visual-only (VO) stimuli. The results are interpreted to pave the way for further research whereby both behavioural and physiological data are collected simultaneously. This will allow us understand the full dynamics of how, actually, the auditory and visual (relatively impoverished in bipolar disorder) speech information are integrated in people with bipolar disorder.
ARTICLE | doi:10.20944/preprints202309.1409.v1
Subject: Medicine And Pharmacology, Ophthalmology Keywords: uveitis-associated cataract; postoperative complications; phacoemulsification; uveitis; visual prognosis
Online: 21 September 2023 (04:56:43 CEST)
Background and Objectives: Uveitis, a prevalent eye disorder characterized by inflammatory processes, often leads to cataract formation and significant visual impairment. This study aimed to evaluate preoperative conditions and postoperative outcomes following cataract surgery in uveitis patients. Materials and Methods: A retrospective study was conducted at the University Hospital Center Rebro Zagreb, Croatia, involving uveitis patients who underwent cataract surgery between 2013 and 2022. Eligible patients had uveitic cataracts affecting visual acuity or posterior segment visualization in a "quiet eye" and were disease-inactive for at least three months. Patients with certain preexisting ocular conditions were excluded. Data collected included patient demographics, uveitis type, preoperative therapy, preexisting lesions, and postoperative outcomes such as visual acuity, intraocular pressure, central macular thickness, and complications. Statistical analysis was performed to identify risk factors associated with complications. Results: The study included 105 patients. After cataract surgery, there was a significant improvement in visual acuity at various time points, with 90% of eyes showing improvement. Intraocular pressure decreased over time. Central macular thickness increased at three months post-surgery but remained stable thereafter. Early and late complications were observed in 52.4% and 63.8% of eyes, respectively. The most common complications were posterior capsular opacification, macular edema, and epiretinal membrane. Factors associated with complications varied between early and late stages but included age, age at onset of uveitis, and uveitis type. Conclusion: Cataract surgery in uveitis patients presents challenges but can lead to significant visual improvement. This study highlights the importance of careful patient selection, preoperative and postoperative inflammation management, and precise surgical techniques. 
Although complications were common, they were manageable, and most patients experienced improved vision, emphasizing the benefits of cataract surgery in this population. However, future investigations should address the study's limitations and further refine perioperative strategies.
REVIEW | doi:10.20944/preprints202308.1420.v1
Subject: Medicine And Pharmacology, Dentistry And Oral Surgery Keywords: pain perception; discomfort; orthodontic appliances; visual analogue scale (VAS)
Online: 21 August 2023 (07:08:46 CEST)
Pain is a complex multidimensional feeling combined with sensorial and emotional features. The majority of patients undergoing orthodontic treatment reported various degrees of pain, which is perceived as widely variable between individuals, even when the stimulus is the same. Orthodontic pain is considered the main cause of poor-quality outcomes, patients' dissatisfaction, and lack of collaboration up to the interruption of the therapy. A deep understanding of pain and how it influences the patient’s daily life is fundamental to establishing proper therapeutic procedures and obtaining the correct collaboration. Because of its multifaced and subjective nature, pain is a difficult dimension to measure. The use of questionnaires and their relative rating scales is actually considered the gold standard for pain assessment. Depending on the patient’s age and cognitive abilities is possible to choose the most appropriate instrument for self-reported pain records. Although several scales have been proposed and a lot of them are applied, it remains uncertain which of these tools represents the standard and performs the most precise, universal, and predictable task. This review aims to give an overview of the aspects which describe pain, specifically the pain experienced during orthodontic treatment, the main tool to assess self-perceived pain in a better and more efficient way, the different indications for each of them, and their correlated advantages or disadvantages.
ARTICLE | doi:10.20944/preprints202303.0308.v1
Subject: Medicine And Pharmacology, Orthopedics And Sports Medicine Keywords: Anterior cruciate ligament; Rehabilitation; differential learning; Visual-motor therapy
Online: 16 March 2023 (13:59:56 CET)
Variability during practice is widely accepted to be advantageous for motor learning and is therefore a valuable strategy to effectively reduce high-risk landing mechanics and prevent primary anterior cruciate ligament (ACL) injury. Few attempts have examined the specific effects of variable training in athletes who have undergone ACL reconstruction, and it is still unclear to what extent variations in different sensory areas lead to different effects. Accordingly, we compared the effects of versatile movement variations (DL) with variations of movements with emphasis on disrupting visual information (VMT) in athletes who had undergone ACL reconstruction. Forty-five interceptive sports athletes after ACL reconstruction were randomly allocated to a DL group (n=15), VMT group (n=15), or control group (n=15). The primary outcome was functional performance (Triple Hop Test). The secondary outcomes included dynamic balance (Star Excursion Balance Test (SEBT)), biomechanics during a single-leg drop-landing task (hip flexion (HF), knee flexion (KF), ankle dorsiflexion (AD), knee valgus (KV), and vertical ground reaction force (VGRF)), and kinesiophobia (Tampa Scale of Kinesiophobia (TSK)), assessed before and after the 8 weeks of interventions. Data were analyzed by means of 3×2 repeated-measures ANOVA followed by post hoc comparison (Bonferroni) at the significance level of p≤0.05. Significant group × time interaction effects, main effects of time, and main effects of group were found for the triple hop test, all eight directions of the SEBT, HF, KF, AD, KV, VGRF, and TSK. There was no significant main effect of group for HF or the triple hop test. Also, significant differences in the triple hop test, seven directions of the SEBT, HF, KF, KV, VGRF, and TSK were found between the control group and the DL and VMT groups. Between-group differences in AD and the medial direction of the SEBT were not significant.
Also, there were no significant differences between the VMT and control groups in the triple hop test and HF variables. Both motor learning (DL and VMT) programs improved outcomes in patients after ACL reconstruction. The findings suggest that DL and VMT training programs lead to comparable improvements in rehabilitation.
ARTICLE | doi:10.20944/preprints202101.0292.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: crime; hotspots; Space-Time clustering; New York; Visual analytics
Online: 15 January 2021 (12:47:45 CET)
Pattern recognition has long been regarded as playing a key role in crime prevention and reduction. Crime analysts and policy makers can formulate effective strategies and allocate resources with reference to the spatial and temporal patterns of crime. In order to combat and prevent severe crime in New York City (NYC), this study analyzed NYC felony crime data for the previous 5 years (2015-2020) and discovered criminal hotspot patterns and temporal patterns using open criminal complaint data provided by the New York Police Department (NYPD). This study adopts a human-computer interactive approach to draw patterns from crime data, in which computations and visualization are performed by Python libraries while humans inform the choice of visualization methods, computational parameters, and the direction of this exploratory analysis. Density-based clustering algorithms, grid thematic mapping, and density heatmaps are applied to identify hotspots and demonstrate their associations with spatial features. Timeline analysis of the moments of crime occurrence demonstrates seasonality in when crimes are mostly committed, while aoristic analysis shows the hours of the day when crime is mostly committed, considering its timespan. Lastly, 3D visualization improved recognition of the displacement of hotspots over time and suggested long-term hotspots in NYC. These findings inform strategic plans for police deployment.
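As an illustration of the density-based clustering this abstract mentions (e.g., DBSCAN), the sketch below applies scikit-learn's DBSCAN to synthetic complaint coordinates; the eps and min_samples values, and the toy data, are assumptions for illustration, not the study's parameters or data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic complaint coordinates (lat, lon): two dense pockets plus sparse noise.
rng = np.random.default_rng(0)
hotspot_a = rng.normal([40.75, -73.99], 0.001, size=(100, 2))
hotspot_b = rng.normal([40.68, -73.94], 0.001, size=(100, 2))
noise = rng.uniform([40.5, -74.25], [40.9, -73.7], size=(30, 2))
points = np.vstack([hotspot_a, hotspot_b, noise])

# Assumed radius: eps ~ 0.005 deg (~500 m); a hotspot needs >= 20 complaints.
labels = DBSCAN(eps=0.005, min_samples=20).fit_predict(points)
n_hotspots = len(set(labels) - {-1})  # label -1 marks noise points
print(n_hotspots)
```

With the two synthetic pockets above, DBSCAN recovers two hotspots and leaves the scattered points unlabeled, which is the behavior that makes density-based methods attractive for crime-point data with no preset cluster count.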
ARTICLE | doi:10.20944/preprints202010.0009.v1
Subject: Social Sciences, Psychology Keywords: visual search; vision loss; incidental learning; macular degeneration; fovea
Online: 1 October 2020 (09:12:00 CEST)
Foveal vision loss has been shown to reduce efficient visual search guidance due to contextual cueing by incidentally learned contexts. However, previous studies used artificial (T among L-shapes) search paradigms that prevent the memorization of a target in a semantically meaningful scene. Here, we investigated contextual cueing in real-life scenes that allow explicit memory of target locations in semantically rich scenes. In contrast to the contextual cueing deficits in artificial scenes, contextual cueing in patients with age-related macular degeneration (AMD) did not differ from age-matched normal-sighted controls. We discuss this in the context of visuospatial working memory demands, for which both eye-movement control in the presence of central vision loss and memory-guided search may compete. Memory-guided search in semantically rich scenes may depend less on visuospatial working memory than search in abstract displays, potentially explaining intact contextual cueing in the former but not the latter. In a practical sense, our findings may indicate that patients with AMD are less deficient than expected from previous lab experiments. This shows the usefulness of realistic stimuli in experimental clinical research.
ARTICLE | doi:10.20944/preprints202002.0093.v1
Subject: Medicine And Pharmacology, Anesthesiology And Pain Medicine Keywords: Ketamine; Paravertebral block; Posterolateral thoracotomy; Thoracotomy; Visual analog scale
Online: 7 February 2020 (09:28:16 CET)
Severe postoperative pain affects most patients after thoracotomy and is a risk factor for post-thoracotomy pain syndrome (PTPS). This randomized controlled trial compared preemptively administered ketamine versus paravertebral block (PVB) versus control in patients undergoing posterolateral thoracotomy. The primary outcome was acute pain intensity on the visual analog scale (VAS) on the first postoperative day. Secondary outcomes included morphine consumption, patient satisfaction, and PTPS assessment with Neuropathic Pain Syndrome Inventory (NPSI). Acute pain intensity was significantly lower with PVB compared to other groups at four out of six time points. Patients in the PVB group used significantly less morphine via a patient-controlled analgesia pump than participants in other groups. Moreover, patients were more satisfied with postoperative pain management after PVB. PVB, but not ketamine, decreased PTPS intensity at 1, 3, and 6 months after posterolateral thoracotomy. Acute pain intensity at hour 8 and PTPS intensity at month 3 correlated positively with PTPS at month 6. Bodyweight was negatively associated with chronic pain at month 6. Thus, PVB but not preemptively administered ketamine decreases both acute and chronic pain intensity following posterolateral thoracotomies. The trial was prospectively registered at the Australian New Zealand Clinical Trial Registry (https://www.anzctr.org.au/; ACTRN12616000900415; 07 July 2016).
ARTICLE | doi:10.20944/preprints202308.0306.v1
Subject: Public Health And Healthcare, Public Health And Health Services Keywords: aging; Down syndrome; quality of life; refractive error; visual impairments
Online: 3 August 2023 (10:27:55 CEST)
People with Down syndrome have more visual problems than the general population. They experience premature aging, so an accelerated worsening of visual function would also be expected. A prospective observational study including visual acuity, refractive error, accommodation, and binocular and colour vision was performed on young adults with (n=69) and without (n=65) Down syndrome and on a senior group (n=55) without Down syndrome. Results showed significant differences in visual acuity between groups (p<0.001), and acuity could be improved with a new prescription in 40% of the participants with Down syndrome. Regarding the accommodative state, no significant differences were found between the groups of young people. Concerning binocular vision, strabismus was observed in 64.7% of the group with Down syndrome (p<0.001). Visual abnormalities are significant in young adults with Down syndrome and are different from those of older people without Down syndrome; some of them can be addressed by providing the optimal prescription as well as regular eye examinations.
COMMUNICATION | doi:10.20944/preprints202307.2035.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: audio-visual archiving; ChatGPT; cultural heritage digital ephemera; publications ethics
Online: 31 July 2023 (04:15:50 CEST)
The recent public release of the generative AI language model ChatGPT has captured the public imagination and has resulted in a rapid uptake and widespread experimentation by the general public and academia alike. The number of academic publications focusing on the capabilities as well as practical and ethical implications of generative AI has been growing exponentially. One of the concerns with this unprecedented growth in scholarship related to generative AI, in particular ChatGPT, is that in most cases the raw data, that is, the text of the original ‘conversations,’ have not been made available to the audience of the papers and thus cannot be drawn on to assess the veracity of the arguments made and the conclusions drawn therefrom. This paper provides a protocol for the documentation and archiving of these raw data.
ARTICLE | doi:10.20944/preprints202305.0359.v1
Subject: Business, Economics And Management, Marketing Keywords: Visual Signal; Auditory Signal; Interaction Effect; Purchase Behavior; AISAS Model
Online: 5 May 2023 (10:59:16 CEST)
This study, based on the AISAS model, explores the impact of the interaction effect between visual and auditory signals on consumer purchase behavior. Using experimental methods, 120 participants were randomly assigned to four different visual and auditory signal combinations, and their purchase intentions and actual purchase behavior were measured. The results show that the interaction effect between visual and auditory signals has a significant impact on both purchase intentions and actual purchase behavior, and there is a significant positive relationship. Specifically, when visual and auditory signals are mutually consistent, consumers have the highest purchase intentions and actual purchase behavior; when both visual and auditory signals are absent, consumers have the lowest purchase intentions and actual purchase behavior; when either the visual or auditory signal is missing, consumers' purchase intentions and actual purchase behavior are in between the two extremes. This study provides a new perspective for understanding consumers' decision-making processes in multi-sensory environments and offers valuable insights for the development of marketing strategies.
DATA DESCRIPTOR | doi:10.20944/preprints202212.0118.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: Lip reading; Visual speech recognition; Turkish dataset; Face parts detection
Online: 7 December 2022 (06:50:33 CET)
The presented dataset was obtained from daily Turkish words and phrases pronounced by various people in videos posted on YouTube. The purpose of collecting the dataset is to enable detection of the spoken word by recognizing patterns or classifying lip movements with supervised, unsupervised, and semi-supervised machine learning algorithms. Most lip-reading datasets consist of people recorded on camera with fixed backgrounds under identical conditions, but the dataset presented here consists of images compatible with machine learning models developed for real-life challenges. It contains a total of 2335 instances taken from TV series, movies, vlogs, and song clips on YouTube. The images in the dataset vary due to factors such as the way people say words, accent, speaking rate, gender, and age. Furthermore, the instances in the dataset consist of videos with different angles, shadows, resolutions, and brightness that were not created manually. The most important feature of our lip-reading dataset is that it contributes to the pool of non-synthetic Turkish datasets, which lacks wide variety. With this dataset, machine learning studies can be carried out in many areas, such as the defense industry and social life.
ARTICLE | doi:10.20944/preprints202210.0355.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Auto encoder; surface defects; abnormal defects; visual inspection; unsupervised defect
Online: 24 October 2022 (07:56:04 CEST)
Currently, most deep learning methods cannot solve the problem of the scarcity of industrial product defect samples and the significant differences in their characteristics. This paper proposes an unsupervised defect detection algorithm based on a reconstruction network, which is trained using only a large number of easily obtained defect-free samples. The network includes two parts: image reconstruction and surface defect area detection. The reconstruction network is designed as a fully convolutional autoencoder with a lightweight structure. Only a small number of normal samples are used for training, so that the reconstruction network generates a defect-free reconstructed image. A function combining structural loss and L1 loss is proposed as the loss function of the reconstruction network to solve the problem of poor detection of irregular texture surface defects. Further, the residual between the reconstructed image and the image under test is used as the possible region of the defect, and conventional image operations can then localize the defect. The proposed unsupervised defect detection algorithm is evaluated on multiple defect image sample sets. Compared with other similar algorithms, the results show that the unsupervised defect detection algorithm of the reconstruction network has strong robustness and accuracy.
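As a sketch of the kind of combined loss this abstract describes, the snippet below mixes a simplified global structural-similarity term with a pixel-wise L1 term; the single-window SSIM formulation and the weighting alpha are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global (single-window) SSIM between two images with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def reconstruction_loss(original, reconstructed, alpha=0.85):
    # alpha is an assumed weighting between the structural and L1 terms
    l1 = np.abs(original - reconstructed).mean()
    return alpha * (1.0 - ssim_global(original, reconstructed)) + (1 - alpha) * l1

img = np.random.default_rng(1).random((64, 64))
identical = reconstruction_loss(img, img)               # perfect reconstruction
shifted = reconstruction_loss(img, np.clip(img + 0.2, 0, 1))  # degraded one
print(identical, shifted)
```

The loss is zero for a perfect reconstruction and grows as structure or pixel values diverge, which is why thresholding the per-pixel residual of a network trained only on normal samples can flag defect regions.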
ARTICLE | doi:10.20944/preprints202209.0490.v1
Subject: Medicine And Pharmacology, Orthopedics And Sports Medicine Keywords: rehabilitation; shoulder; electromyography feedback; visual biofeedback; assistive robot; musculoskeletal disorder
Online: 30 September 2022 (15:04:05 CEST)
While shoulder injuries represent the musculoskeletal disorders (MSDs) most frequently encountered in physical therapy, there is no consensus on their management. In an attempt to provide standardized and personalized treatment, a robotic-assisted device combined with EMG biofeedback specifically dedicated to shoulder MSDs has been developed. The aim of this study was to determine the efficacy of an 8-week rehabilitation program (≈3 sessions a week) using a robotic-assisted device combined with EMG biofeedback (RA-EMG group) in comparison with a conventional program (CONV group) in patients presenting with shoulder MSDs. This is a retrospective cohort study including data from 2010 to 2013 on patients initially involved in a physical rehabilitation program in a private clinic in Chicoutimi (Canada) for shoulder MSDs. Shoulder flexion strength and range of motion were collected before and after the rehabilitation program. Forty-four patients participated in a conventional program using dumbbells (CONV group), while 72 completed a program on a robot-assisted device with EMG and visual biofeedback (RA-EMG group); both programs consisted of 2 sets of 20 repetitions at 60% of maximal capacity. Results showed that the RA-EMG group had significantly greater benefits than the CONV group for shoulder flexion strength (+103.1% vs 67%, p = 0.016) and range of motion (+14.4% vs 6.1%, p = 0.046). The current retrospective cohort study showed that a specific and tailored rehabilitation program maintaining constant effort through automatic adjustment of the level of resistance was able to potentiate shoulder flexion strength and range of motion after an 8-week rehabilitation period in comparison with a conventional approach in patients with shoulder MSDs. This study provides new insight into shoulder MSD rehabilitation, and future research should determine the added potential of this approach for abduction and external rotation with a randomized controlled design.
ARTICLE | doi:10.20944/preprints202104.0770.v1
Subject: Social Sciences, Library And Information Sciences Keywords: Wikipedia, knowledge equity, Wikimedia, open culture, visual arts, cultural bias
Online: 29 April 2021 (09:16:07 CEST)
We explore gaps in Wikipedia's coverage of the visual arts by comparing the representation of 100 artists and 100 artworks from the Western canon against corresponding sets of notable artists and artworks from non-Western cultures. We measure the coverage of these two sets of topics across Wikipedia as a whole and for its individual language versions. We also compare the coverage for Wikimedia Commons and Wikidata, sister-projects of Wikipedia that host digital media and structured data. We show that all these platforms strongly favour the Western canon, giving many times more coverage to Western art. We highlight specific examples of differing coverage of visual art inside and outside the Western canon. We find that European language versions of Wikipedia are generally more "Western" in their coverage and Asian languages more "global", with interesting exceptions. We suggest how both Wikipedia and the wider cultural sector can address this gap in content and thus give Wikipedia a truly global perspective on the visual arts.
ARTICLE | doi:10.20944/preprints202104.0460.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: Cognition; Visual memory; Reaction time; Alcohol; schizophrenia and schizoaffective disorder.
Online: 19 April 2021 (11:26:29 CEST)
The purpose of the study was to explore the association of cognition with hazardous drinking, binge drinking, and alcohol use disorder in schizophrenia and schizoaffective disorder. Cognitive deficits are common in schizophrenia, and alcohol might be associated with additional cognitive impairment in schizophrenia patients. The study population included 3362 schizophrenia and schizoaffective disorder patients in Finland. Hazardous drinking was screened with the AUDIT-C (Alcohol Use Disorders Identification Test for Consumption) screening tool, and binge drinking was obtained from the AUDIT-C. Alcohol use disorder (AUD) diagnoses were obtained from national registry data. Participants performed two computerized tasks from the Cambridge automated neuropsychological test battery (CANTAB) on a tablet computer: the 5-choice serial reaction time task (5-CSRTT), i.e., the reaction time (RT) test, and the Paired Associative Learning (PAL) test. The association of alcohol use with the RT test and the PAL test was analyzed with log-linear regression and logistic regression, respectively. After adjustment for age, education, and age at first psychotic episode, hazardous drinking in females was associated with lower median RT. Compared to never binge drinkers, male and female participants drinking 6 or more doses of alcohol monthly or less often had lower median RT. In the PAL test, both the first trial memory score (FTMS) and the total errors adjusted score (TEAS) indicated better performance in males drinking 6 or more doses of alcohol weekly or more often and in females drinking 6 or more doses monthly or less often. A higher PAL TEAS was associated with AUD in females. In summary, some positive associations between alcohol and cognition were found in male and female schizophrenia and schizoaffective disorder patients with hazardous drinking and binge drinking.
ARTICLE | doi:10.20944/preprints202104.0191.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: retinal pigmented epithelium, exocyst complex component 5, photoreceptor, visual function.
Online: 7 April 2021 (11:15:10 CEST)
To characterize the mechanisms by which the highly conserved exocyst trafficking complex regulates eye physiology in zebrafish and mice, we focused on exoc5 (aka sec10), a central exocyst component. We analyzed both exoc5 zebrafish mutants and retinal pigmented epithelium (RPE)-specific Exoc5 knockout mice. Exoc5 is present both in the non-pigmented epithelium of the ciliary body and in the RPE. In this study we set out to establish an animal model to study the mechanisms underlying the ocular phenotype and to establish whether loss of visual function is induced by postnatal RPE Exoc5 deficiency. Exoc5-/- zebrafish showed smaller eyes, with a decreased number of melanocytes in the RPE and shorter photoreceptor outer segments. At 3.5 days post fertilization, loss of rod and cone opsins was observed in zebrafish Tg:exoc5 mutants. Mice with postnatal RPE-specific loss of Exoc5 showed retinal thinning associated with compromised visual function and loss of visual photoreceptor pigments. This retinal phenotype in Exoc5-/- mice was present at 20 weeks and was more severe at 27 weeks, indicating a progressive disease phenotype. We previously showed that the exocyst is necessary for photoreceptor ciliogenesis and retinal development. Here, we report that exoc5 mutant zebrafish and mice with RPE-specific genetic ablation of Exoc5 develop abnormal RPE pigmentation, resulting in retinal cell dystrophy and loss of visual pigments associated with compromised vision. As RPE cells are “downstream” of photoreceptor cells in the visual process, these data suggest exocyst-mediated retrograde communication and dependence between the RPE and photoreceptors.
ARTICLE | doi:10.20944/preprints202103.0744.v1
Subject: Medicine And Pharmacology, Ophthalmology Keywords: glaucoma; independent prescribing; optometrist; visual fields; intraocular pressure; shared care
Online: 30 March 2021 (13:52:26 CEST)
Aim: To report 3-year outcomes of a community shared care scheme run by specialised independent prescribing (IP) optometrists for stable glaucoma and ocular hypertension (OHT) patients in West Kent, England. Purpose: Shared care schemes for glaucoma exist to alleviate the burden on Hospital Eye Service (HES) glaucoma clinics. We studied the effectiveness of community care by highly trained and qualified IP optometrists in terms of disease stability and referral rate into the HES. Methods: Retrospective longitudinal review of 200 eyes with stable early- to moderate-stage glaucoma and OHT followed up in two specialist optometry practices. Outcome measures included visual field mean deviation (VFMD), intraocular pressure (IOP), changes to treatment, and referral rate into the HES. Inclusion criteria included all patients with OHT and glaucoma (open angle and primary angle closure) referred for community follow-up. Incomplete data sets were excluded. Results: Mean age was 71 years (range 28-93 years) with an equal male:female ratio; n=159 at year 3. Both outcomes showed no significant change from baseline at the 12- or 24-month time points. However, a significant change from baseline at 36 months was observed for both outcomes: a mean reduction of 0.7 mmHg in IOP and a mean reduction of 0.3 dB in VFMD. There was a statistically significant change in the number of drops used at 36 months (p=0.001). Eleven patients had a change in medication within 3 years. One patient was referred back to the HES for uncontrolled IOP and consideration of trabeculectomy. Conclusion: Community follow-up of stable cases of glaucoma and OHT by highly qualified IP optometrists was safe, with disease stability maintained and few referrals back to the HES.
Subject: Medicine And Pharmacology, Ophthalmology Keywords: visual cortical prosthesis; brain-machine interface; electrical stimulation; prosthetic vision
Online: 23 March 2021 (10:42:30 CET)
Electrical stimulation of the visual cortices has the potential to restore vision to blind individuals. Until now, the results of visual cortical prosthetics have been limited, as no prosthesis has restored fully working vision, but the field has shown renewed interest in recent years thanks to wireless and technological advances. However, several scientific and technical challenges remain open in order to achieve the therapeutic benefit expected from these new devices. One of the main challenges is the electrical stimulation of the brain itself. In this review, we analyze the results in electrode-based visual cortical prosthetics from the electrical point of view. We first briefly describe what is known about the electrode-tissue interface and the safety of electrical stimulation. Then we focus on the psychophysics of prosthetic vision and the state of the art on the interplay between electrical stimulation of the visual cortex and phosphene perception. Lastly, we discuss the challenges and perspectives of visual cortex electrical stimulation and electrode array design for developing the new generation of implantable cortical visual prostheses.
REVIEW | doi:10.20944/preprints202103.0218.v1
Subject: Engineering, Automotive Engineering Keywords: Image accessibility; touchscreen; nonvisual feedback; blind; visual impairment; systematic review
Online: 8 March 2021 (13:41:50 CET)
A number of studies have been conducted to improve the accessibility of images on touchscreen devices for screen reader users. In this study, we conducted a systematic review of 33 papers to get a holistic understanding of existing approaches and to suggest a research road map given the identified gaps. As a result, we identified the types of images, visual information, input devices, and feedback modalities that have been studied for improving image accessibility using touchscreen devices. The findings also revealed that little work has studied how to automate the generation of image-related information, and that screen reader users play important roles during the evaluation but not the design process. We then introduce two of our recent studies on the accessibility of artwork and comics, AccessArt and AccessComics, respectively. Based on the identified key challenges, we suggest a research agenda for improving image accessibility for screen reader users.
REVIEW | doi:10.20944/preprints202102.0548.v1
Subject: Biology And Life Sciences, Anatomy And Physiology Keywords: Huanglongbing, Candidatus Liberibacter, Asian citrus psyllid, blotchy mottle, visual symptoms
Online: 24 February 2021 (11:45:39 CET)
Citrus greening, which is caused mainly by bacteria, is one of the severe citrus diseases affecting all citrus cultivars and causing the deliberate removal of trees worldwide. This infectious disease cannot be spread by wind, rain, or contact with contaminated personnel. The primary vector that spreads this disease by feeding on citrus leaves is the Asian citrus psyllid (ACP), a minuscule insect. The management of citrus greening is also very costly, as no effective technique has been developed to cure this disease other than removing all infected plants from healthy ones to eliminate dissemination of the pathogen. Identifying citrus greening is also very difficult, as the symptoms are similar to those of other citrus diseases and nutrient deficiencies. Asymmetrical blotchy mottling patterns on leaves are the main symptom used to detect this disease. Here we discuss some visual signs of citrus greening, which will ultimately help farmers at the grassroots level to identify and prevent this disease before it drastically impacts citrus plants. To determine whether a tree is affected by citrus greening or by a lack of nutrients, we also discuss the pen test method of determining whether the symptoms are symmetrical or asymmetrical across the mid-vein.
ARTICLE | doi:10.20944/preprints202102.0016.v1
Subject: Social Sciences, Psychology Keywords: development; adolescents; perceptual inhibition; joint visual search task; executive function
Online: 1 February 2021 (11:38:03 CET)
Recent studies suggest that developmental curves in adolescence, related to the development of executive functions, could be fitted to a non-linear trajectory of development with progressions and retrogressions. Therefore, the present study proposes to analyze the pattern of development of Perceptual Inhibition (PI), considering all stages of adolescence (early, middle, and late) in intervals of one year. To this aim, we worked with a sample of 275 participants between 10 and 25 years of age, who performed a joint visual search task (to measure PI). We fitted exGaussian functions to the probability distributions of the mean response time across the sample and performed a covariance analysis (ANCOVA). The results showed that the 10- to 13-year-old groups performed similarly in the task and differed from the 14- to 19-year-old participants. We found significant differences between the oldest group and all the other groups. We discuss the important changes that can be observed in relation to the non-linear trajectory of development that PI would show during adolescence.
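The exGaussian fitting this abstract mentions can be sketched with SciPy's exponentially modified normal distribution (`exponnorm`); the response-time parameters below are synthetic, not the study's data.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
# Toy response times (seconds): Gaussian component plus an exponential tail,
# the classic exGaussian shape of reaction-time distributions.
rts = rng.normal(0.45, 0.05, 2000) + rng.exponential(0.12, 2000)

# Fit the exGaussian (exponentially modified normal) by maximum likelihood.
# scipy parameterizes it as (K, loc, scale) with tau = K * scale.
K, mu, sigma = exponnorm.fit(rts)
tau = K * sigma  # mean of the exponential (tail) component
print(round(mu, 2), round(tau, 2))
```

Recovering mu (Gaussian mean), sigma (Gaussian spread), and tau (tail) separately is what makes the exGaussian useful here: group differences can show up in the tail without shifting the whole distribution.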
ARTICLE | doi:10.20944/preprints201908.0228.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: EEG; luminance; brightness; IAPS; STFT; feature extraction; visual processing; emotion
Online: 22 August 2019 (03:43:25 CEST)
The aim of this study was to examine the effect of brightness, a perceptual property of visual stimuli, on brain responses obtained during visual processing of these stimuli. For this purpose, responses of the brain to changes in brightness were explored comparatively using different emotional images (pleasant, unpleasant, and neutral) with different luminance levels. Electroencephalography recordings from 12 electrode sites of 31 healthy participants were used. The power spectra obtained from analysis of the recordings using the short-time Fourier transform were analyzed, and a statistical analysis was performed on features extracted from these power spectra. Statistical findings obtained from the electrophysiological data were compared with those obtained from the behavioral data. The results showed that the brightness of visual stimuli affected the power of brain responses depending on frequency, time, and location. According to the statistically verified findings, the distinctive effect of brightness occurred in the parietal and occipital regions for all three types of stimuli. Accordingly, an increase in the brightness of pleasant and neutral images increased the average power of responses in the parietal and occipital regions, whereas an increase in the brightness of unpleasant images decreased the average power of responses in these regions. However, an increase in brightness for all three types of stimuli reduced the average power of frontal and central region responses (except for the 100-300 ms time window for unpleasant stimuli). The statistical results obtained for unpleasant images were found to be in accordance with the behavioral data. The results also revealed that the brightness of visual stimuli could be represented by changes in the activity power of the brain cortex.
The main contribution of this research is a comprehensive examination of the brightness effect on brain activity for images with different emotional content, across different frequency bands, at different time windows of visual processing, and for different brain regions. The findings emphasize that the brightness of visual stimuli should be viewed as an important parameter in studies using emotional image techniques such as image classification, emotion evaluation, and neuromarketing.
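A minimal sketch of extracting short-time Fourier transform power features from a single EEG-like channel, using SciPy's `stft`; the sampling rate and toy 10 Hz signal are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import stft

fs = 250  # assumed EEG sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
# Toy single-channel signal: 10 Hz alpha-band oscillation plus noise
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Short-time Fourier transform: 1 s windows with 50% overlap
f, times, Z = stft(signal, fs=fs, nperseg=fs, noverlap=fs // 2)
power = np.abs(Z) ** 2  # time-frequency power spectrum

# Example feature: average alpha-band (8-13 Hz) power per window
alpha = power[(f >= 8) & (f <= 13)].mean(axis=0)
peak_freq = f[power.mean(axis=1).argmax()]
print(peak_freq)  # dominant frequency, expected near 10 Hz
```

Band-limited power per time window, like `alpha` above, is exactly the kind of feature on which region-by-condition statistics can then be run.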
ARTICLE | doi:10.20944/preprints201804.0313.v1
Subject: Computer Science And Mathematics, Computer Networks And Communications Keywords: visual question answering; cross-modal multistep fusion network; attention mechanism
Online: 24 April 2018 (09:09:45 CEST)
Visual question answering (VQA) is receiving increasing attention from researchers in both the computer vision and natural language processing fields. There are two key components in the VQA task: feature extraction and multi-modal fusion. For feature extraction, we introduce a novel co-attention scheme by combining Sentence-guided Word Attention (SWA) and Question-guided Image Attention (QIA) in a unified framework. To be specific, the textual attention SWA relies on the semantics of the whole question sentence to calculate the contributions of different question words to the text representation. For multi-modal fusion, we propose a “Cross-modal Multistep Fusion (CMF)” network to generate multistep features and achieve multiple interactions between the two modalities, rather than focusing on modeling complex interactions between the two modalities as most current feature fusion methods do. To avoid a linear increase in computational cost, we share the parameters for each step in the CMF. Extensive experiments demonstrate that the proposed method can achieve competitive or better performance than the state of the art.
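A minimal numpy sketch of the parameter-sharing idea behind multistep fusion: one set of fusion weights is reused at every step, so model size does not grow with the number of steps. All names, dimensions, and the tanh update are hypothetical, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # assumed shared embedding size

# One shared set of fusion parameters, reused at every step
# (small weights keep tanh out of saturation in this toy example)
W_v = rng.normal(size=(d, d)) * 0.3
W_q = rng.normal(size=(d, d)) * 0.3

def fuse_step(v, q, h):
    """One fusion step: mix image (v) and question (q) features into state h."""
    return np.tanh(W_v @ v + W_q @ q + h)

v = rng.normal(size=d)  # image feature (e.g., after image attention)
q = rng.normal(size=d)  # question feature (e.g., after word attention)

h = np.zeros(d)
steps = []
for _ in range(3):      # multistep fusion; parameters shared across steps
    h = fuse_step(v, q, h)
    steps.append(h)
fused = np.concatenate(steps)  # multistep features fed to the answer classifier
print(fused.shape)
```

Because `W_v` and `W_q` are the same at every step, adding more fusion steps adds interactions but no parameters, which is the cost argument the abstract makes.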
ARTICLE | doi:10.20944/preprints202107.0385.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Visual Question Generation; Visual Question Answering; Variational Autoencoders; Radiology Images; Domain Knowledge; UMLS; Data Augmentation; Computer Vision; Natural Language Processing; Artificial Intelligence; Medical Domain.
Online: 16 July 2021 (16:18:56 CEST)
Visual Question Generation (VQG) from images is a rising research topic in both natural language processing and computer vision. Although there have been some recent efforts toward generating questions from images in the open domain, the VQG task in the medical domain has not been well studied so far due to the lack of labeled data. In this paper, we introduce a goal-driven VQG approach for radiology images, called VQGRaD, that generates questions targeting specific image aspects such as modality and abnormality. In particular, we study generating natural language questions based on the visual content of the image and on additional information such as the image caption and the question category. VQGRaD encodes the dense vectors of different inputs into two latent spaces, which allows generating, for a specific question category, relevant questions about the images, with or without their captions. We also explore the impact of domain knowledge incorporation (e.g., medical entities and semantic types) and data augmentation techniques on visual question generation in the medical domain. Experiments performed on the VQA-RAD dataset of clinical visual questions showed that VQGRaD achieves a 61.86% BLEU score and outperforms strong baselines. We also performed a blinded human evaluation of the grammaticality, fluency, and relevance of the generated questions; it demonstrated the better quality of VQGRaD outputs and showed that incorporating medical entities improves the quality of the generated questions. Using the test data and evaluation process of the ImageCLEF 2020 VQA-Med challenge, we found that the proposed data augmentation technique, which generates new training samples by applying different kinds of transformations, can mitigate the lack of data, avoid overfitting, and bring a substantial improvement in medical VQG.
ARTICLE | doi:10.20944/preprints202309.1994.v1
Subject: Aerospace Engineering, Engineering Keywords: GNSS; Visual Inertial Odometry; Failure Modes; GRU-aided ESKF; Complex Environments
Online: 28 September 2023 (10:28:57 CEST)
To enhance system reliability and mitigate the vulnerabilities of Global Navigation Satellite Systems (GNSS), it is common to fuse an Inertial Measurement Unit (IMU) and visual sensors with the GNSS receiver in navigation system design, providing absolute-position compensation and reducing data gaps. To address shortcomings of the traditional Kalman Filter (KF), such as sensor error, imperfect nonlinear system models, and KF estimation error, a GRU-aided ESKF architecture is proposed to enhance positioning performance. This study conducts a Failure Mode and Effect Analysis (FMEA) to identify and prioritize potential faults in the urban environment, facilitating the design of an improved fault-tolerant system architecture. The identified primary fault events are data association errors and navigation environment errors under feature-mismatch conditions, especially in the presence of multiple failure modes. A hybrid federated navigation system architecture is employed in which a Gated Recurrent Unit (GRU) predicts state increments for updating the state vector in the measurement step of the Error State Kalman Filter (ESKF). The proposed algorithm's performance is evaluated in a MATLAB simulation environment under multiple visually degraded conditions. Comparative results provide evidence that the GRU-aided ESKF outperforms the standard ESKF and state-of-the-art solutions such as VINS-Mono, End-to-End VIO, and Self-Supervised VIO, exhibiting higher efficiency in complex environments.
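A minimal sketch of the measurement step into which such a GRU-predicted state increment could be injected: the GRU itself is stubbed out as a `learned_increment` argument (here zero, which reduces the step to the standard Kalman update), and the two-state example is purely illustrative, not the paper's navigation model:

```python
import numpy as np

def measurement_update(x, P, z, H, R, learned_increment):
    """One Kalman-style measurement step; `learned_increment` stands in
    for a GRU-predicted state increment applied during the update."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y + learned_increment # learned increment added to the correction
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x_new, P_new

# Toy 2-state example (position, velocity) with a position-only measurement;
# a zero increment makes this the standard update
x = np.array([0.0, 1.0]); P = np.eye(2)
H = np.array([[1.0, 0.0]]); R = np.array([[0.1]])
x_new, P_new = measurement_update(x, P, np.array([0.5]), H, R, np.zeros(2))
```

In the proposed architecture the increment would come from a GRU trained on past innovations, which is what lets the filter compensate for model imperfections during visually degraded conditions.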
ARTICLE | doi:10.20944/preprints202307.1922.v1
Subject: Medicine And Pharmacology, Ophthalmology Keywords: enhanced monofocal intraocular lens; open-angle glaucoma; cataract surgery; visual outcomes
Online: 28 July 2023 (11:17:54 CEST)
This study aimed to compare the efficacy and safety of enhanced and standard monofocal intraocular lenses (IOLs) in eyes with early glaucoma. Patients with concurrent cataracts and open-angle glaucoma (OAG) were enrolled and underwent cataract surgery with IOL implantation. Manifest refraction; monocular uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), uncorrected intermediate visual acuity (UIVA), and uncorrected near visual acuity (UNVA); visual field (VF); contrast sensitivity (CS); defocus curves; and questionnaires were assessed three months postoperatively. Thirty-four and 38 patients received enhanced and standard monofocal IOLs, respectively. The enhanced monofocal IOL provided better UIVA than the standard monofocal IOL (p = 0.003) but similar UDVA, CDVA, and UNVA. The enhanced monofocal IOL had more consistent defocus curves than the standard monofocal IOL, especially at -1 (p = 0.042) and -1.5 (p = 0.026) diopters. The enhanced monofocal IOL provided better satisfaction (p = 0.019) and lower spectacle dependence (p = 0.004) than the standard monofocal IOL for intermediate vision, with similar VF and CS. In conclusion, the enhanced monofocal IOL is recommended for patients with OAG because it provides better intermediate vision, higher satisfaction, and lower dependence on spectacles than the standard monofocal IOL, without worsening other visual outcomes.
ARTICLE | doi:10.20944/preprints202306.2268.v1
Subject: Engineering, Mechanical Engineering Keywords: Functional Connectivity; EEG Signals; Temporal patterns; Visual events; Discrimination; Brain regions
Online: 3 July 2023 (10:43:29 CEST)
Interacting with a dynamic environment requires the brain to continuously generate and update expectations regarding forthcoming events and their corresponding sensory and motor responses. This study explores the interconnectivity underlying time perception across predictable and unpredictable conditions. The data were acquired from EEG signals sourced from an existing database of an experiment conducted on a group of healthy participants. The individuals were subjected to two distinct conditions, predictable and unpredictable, encompassing various time delays. Functional connectivity between brain regions was estimated using the phase lag index, which was employed to discern disparities in time perception between the two conditions. A comprehensive comparison of the two conditions across different delays demonstrated noteworthy variations, particularly in the gamma, beta, and theta frequency bands. The differences between delays were more pronounced in the predictable condition. Subsequently, an in-depth analysis was conducted to scrutinize the dissimilarities between the conditions within each delay; significant differences were observed across all delays. In the unpredictable condition, an increase in connectivity was detected within the alpha band during the 400-ms delay, specifically between occipital and temporal regions. Moreover, the mean connectivity in the unpredictable condition surpassed that of the predictable condition. In the delta band, distinct connectivity patterns were observed across different delays, involving connections between central and frontal regions; specifically, heightened connectivity between central and prefrontal regions was noted during the 83-ms delay. Notably, the right hemisphere of the prefrontal cortex played a vital role in time perception.
Furthermore, a decline in connectivity across the delta, theta, and beta bands was observed in both conditions during the longest delay (800 ms) compared to the other delays.
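The phase lag index used above can be illustrated with a minimal implementation: it quantifies the asymmetry of the distribution of instantaneous phase differences between two signals, here obtained via the Hilbert transform. The 10 Hz sinusoids and sampling rate are illustrative stand-ins for real EEG channels, not the study's data:

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """Asymmetry of the instantaneous phase-difference distribution:
    0 = no consistent lag, 1 = perfectly consistent lag."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return abs(np.mean(np.sign(np.sin(phase_diff))))

# Illustrative signals: y lags x by a constant quarter cycle at 10 Hz
fs = 250
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2)
pli = phase_lag_index(x, y)   # close to 1 for a consistent lag
```

A consistent non-zero lag drives the index toward 1, while zero-lag coupling (as produced by volume conduction) drives it toward 0, which is why this measure is popular for EEG connectivity.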
ARTICLE | doi:10.20944/preprints202306.2158.v1
Subject: Social Sciences, Transportation Keywords: rural road landscape; landscape character; landscape visual quality; rural tourism experience
Online: 30 June 2023 (02:43:44 CEST)
The rural road landscape is crucial in forming the landscape character (LC) of rural areas. As a platform for portraying the rural landscape, rural roads demonstrate an area's unique natural and cultural characteristics to visitors. However, with the continuous development of rural areas, the rural LC has been severely impacted, degrading visitors' visual experience. To help preserve and protect the rural landscape, this study assesses the visual quality of rural road landscapes based on public preference and heatmap analysis. The results indicated that most participants preferred rural landscapes with open horizontal views, represented by agricultural areas such as paddy fields. Different paddy field characters, depending on planting stage, can also positively affect the visual quality of rural road landscapes. The study also revealed that rural LCs with roadside settlements, commercial structures, mixed agricultural crops, and vegetation received low preference ratings; these characters negatively impact the visual quality of the rural road landscape. These findings provide significant insight for planners and decision-makers in protecting and preserving essential rural road landscapes for the rural tourism experience.
ARTICLE | doi:10.20944/preprints202306.0959.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: visual grounding; human-computer interaction; intelligent systems; user experience; interaction design
Online: 13 June 2023 (16:29:23 CEST)
In recent years, computer vision technology has developed rapidly, intelligent hardware has become widespread, and demand for intelligent products with human-computer interaction has grown. Visual grounding technology can help machines and humans identify and locate objects, thereby promoting human-computer interaction and intelligent manufacturing. At the same time, human-computer interaction is constantly evolving and improving, becoming increasingly intelligent, humane, and efficient. This paper proposes a new VG model and designs a language verification module that uses language information as the main signal to increase the model's interactivity. Additionally, we propose combining visual grounding with human-computer interaction, aiming to explore the research status, development trends, practical applications, and optimization directions of both technologies, providing references and guidance for relevant researchers and promoting the development and application of visual grounding and human-computer interaction technology.
ARTICLE | doi:10.20944/preprints202203.0252.v1
Subject: Computer Science And Mathematics, Mathematical And Computational Biology Keywords: Audio-Visual Technologies; Blended Learning; Pedagogy; Virtual Learning Environments; Virtual Reality
Online: 17 March 2022 (11:05:27 CET)
The Covid-19 pandemic caused a shift in teaching practice towards blended learning for many Higher Education institutions. This led to the rapid adoption of certain digital technologies within existing teaching structures as a means to meet student access needs and facilitate learning. Integration of these technologies caused numerous challenges for practitioners and often provided mixed results. This paper is an attempt to summarise and extend pre-Covid pedagogical research to leverage digital immersive technologies for blended teaching in the post-pandemic era. Focus is given towards the evolution of Virtual Learning Environments through elements of immersive audio-visual technologies, which are shown to be effective when coupled in a blended approach. It is both a review of these methodologies and a case study of the I-Ulysses: Virtual Learning Environment as a point of comparison for evaluating the review.
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Data Visualization; Visual Analytics; Natural Language Processing; Dark Data; Pattern Recognition
Online: 28 October 2020 (07:47:26 CET)
Over the years, there has been a significant rise in the world's scientific knowledge. However, most of it lacks structure and is often termed Dark Data. Both humans and expert systems have continually faced difficulty in analyzing and comprehending such overwhelming amounts of information, which is crucial to solving several real-world problems. Information and data visualization techniques offer a promising way to explore such data by allowing quick comprehension of information, discovery of emerging trends, and identification of relationships and patterns. In this tutorial, we utilize the rich corpus of PubMed, comprising more than 30 million citations from the biomedical literature, to visually explore and understand its key insights using various information visualization techniques. With this study, we aim to diminish the limitations of human cognition and perception in handling and examining such large volumes of data by speeding up decision making and pattern recognition, enabling decision-makers to fully understand data insights and make informed decisions.
ARTICLE | doi:10.20944/preprints201909.0227.v1
Subject: Physical Sciences, Optics And Photonics Keywords: non-visual effect; metamerism; light-emitting diodes (LEDs); lighting; spectral optimization
Online: 19 September 2019 (15:43:06 CEST)
The growing awareness of the biological effects of artificial light on humans has stimulated ample research. It is now widely accepted that asynchrony between artificial and natural light-dark cycles can elicit severe detrimental health effects. New research has been devoted to lighting solutions that dynamically change their color to mimic the spectral changes of daylight and to account for human needs. However, in some situations the visual properties of light must be preserved: for example, professional TV video editors and shift workers must work under standardized lighting conditions to do color correction in post-production. We have investigated the possibility of tuning circadian effects using white lights that are spectrally different but nonetheless have similar color coordinates and thus appear as a similar white tone. Our simulation results indicate that it is possible to modulate circadian light effects by combining LEDs for neutral white (4000 K), a widely used white tone for indoor lighting in Europe. The results also show that solutions combining single-color LEDs do not, however, meet the quality criteria from the visual point of view, because their color rendering ability decays to unacceptably low levels. Combining narrowband LEDs with a broadband white LED improves the color rendering quality, and we show how far circadian light effects can be tuned according to common theoretical models. The aim is to reflect daylight situations with artificial lighting, that is, high melatonin suppression in the morning and low melatonin suppression in the evening. As a consequence, we show the maximum and minimum circadian effects possible with the same set of LEDs.
ARTICLE | doi:10.20944/preprints201806.0211.v1
Subject: Social Sciences, Cognitive Science Keywords: metacontrast; attention; exogenous attention; endogenous attention; visual masking; masking attention interactions
Online: 13 June 2018 (11:06:02 CEST)
To use its finite resources efficiently, the visual system selects only a subset of the rich sensory information for further processing. Visual masking and spatial attention control the information transfer from visual sensory memory to visual short-term memory. There is still debate about whether these two processes operate independently or interact, with empirical evidence supporting both positions. However, recent studies pointed out that earlier studies showing significant interactions between common-onset masking and attention suffered from ceiling and/or floor effects. Our review of previous studies reporting metacontrast-attention interactions revealed similar artifacts. Therefore, we investigated metacontrast-attention interactions using an experimental paradigm in which ceiling/floor effects were avoided. We also examined whether metacontrast masking is differently influenced by endogenous and exogenous attention. We analyzed the mean absolute magnitude of response errors and their statistical distribution. Our results support the hypothesis that metacontrast and endogenous/exogenous attention are largely independent, with negligible likelihood of interactions. Moreover, statistical modeling of the distribution of response errors suggests weak interactions modulating the probability of "guessing" behavior for some observers under both types of attention. Nevertheless, our data suggest that any joint effect of attention and metacontrast can be adequately explained by their independent and additive contributions.
ARTICLE | doi:10.20944/preprints201704.0030.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: Automatic localization; human visual mechanism; superpixel contrast feature; ultrasound breast tumor.
Online: 5 April 2017 (15:50:40 CEST)
The human visual mechanism (HVM) can quickly localize the most salient object in a scene, a property widely applied in natural image segmentation. In ultrasound (US) breast images, the tumor is more salient than the background because of its higher contrast. In this paper, we develop a novel automatic localization method based on the HVM for automatic segmentation of US breast tumors. First, the input image is smoothed by convolution with a linearly separable Gaussian filter and subsampled into a 9-layer Gaussian pyramid. Intensity, blackness ratio, and superpixel contrast features are then combined to compute a saliency map, on which a Winner-Take-All algorithm localizes the most salient region, marking the localized target with a circle. Finally, the circle is taken as the initial contour of a CV level set to extract the breast tumor. The localization method was tested on 400 US breast images; in 378 of them the tumor region was more salient than the background and localization succeeded, yielding a high accuracy of 92.00%. Combined with this HVM localization method, the CV level set achieves fully automatic segmentation of US breast tumors. By combining intensity, blackness ratio, and superpixel contrast features, the proposed localization method avoids the interference caused by background areas with low echo and high intensity. Multi-object localization in US breast images will be considered in future work.
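A toy sketch of the Gaussian-pyramid plus Winner-Take-All idea: blur and subsample to build a small pyramid, take a center-surround difference as a crude saliency map, and pick the maximal location. This uses only the intensity channel and four levels; the paper's method also uses blackness ratio and superpixel contrast features and a 9-layer pyramid:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def localize_most_salient(image, levels=4):
    """Center-surround saliency from a small Gaussian pyramid,
    then Winner-Take-All (argmax) on the saliency map."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma=1.0)
        pyramid.append(blurred[::2, ::2])            # subsample by 2
    coarse = pyramid[-1]
    for _ in range(levels - 1):                      # naive block upsampling back
        coarse = np.kron(coarse, np.ones((2, 2)))
    coarse = coarse[:image.shape[0], :image.shape[1]]
    saliency = np.abs(pyramid[0] - coarse)           # center-surround contrast
    return np.unravel_index(np.argmax(saliency), saliency.shape)

img = np.zeros((64, 64))
img[20:28, 30:38] = 1.0          # a bright, tumor-like blob
row, col = localize_most_salient(img)
```

The winning coordinate lands inside the high-contrast blob, which is the seed the paper then hands to the CV level set as an initial contour.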
ARTICLE | doi:10.20944/preprints202304.1074.v2
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: data analysis; computer vision algorithms; visual data; natural language processing; scientific research
Online: 2 May 2023 (04:13:23 CEST)
The abundance of information in academic articles, reports, and studies can make it challenging for researchers to gain insights from the existing literature. To address this issue, there is a growing demand for tools that help researchers parse and analyze large volumes of data effectively. One such tool is DataDiscoveryLab, a software system that uses computer vision algorithms and NLP techniques to parse academic articles into text and figures, creating three separate databases. These databases allow researchers to quickly identify articles relevant to their research questions, gain a deeper understanding of the research presented, and analyze visual data. The integration of article mining and computer vision in DataDiscoveryLab provides researchers with a powerful tool for navigating the vast amount of scientific literature available today. As we will discuss in later papers, the purpose of these databases is to create a bridge between researchers' data and a practically unlimited body of scientific publications; in this article, we discuss how we plan to do that and our efforts to integrate deep learning models. Unlike existing AI models, DataDiscoveryLab can combine them, aiming to be the first generative AI in academia that encompasses every part of the natural sciences.
ARTICLE | doi:10.20944/preprints202304.1242.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: visual intelligence; object detection; image processing; action recognition; autonomous vehicles; machine learning
Online: 30 April 2023 (02:50:07 CEST)
In the context of Shared Autonomous Vehicles, the need to monitor the environment inside the car will be crucial. This article focuses on the application of deep learning algorithms to detect objects, namely lost/forgotten items (to inform the passengers) and aggressive items (to monitor whether violent actions may arise between passengers). For object detection, public datasets (COCO and TAO) were used to train state-of-the-art algorithms such as YOLOv5. For violent action detection, the MoLa InCar dataset was used to train state-of-the-art algorithms such as I3D, R(2+1)D, SlowFast, TSN, and TSM. Finally, an embedded automotive solution was used to demonstrate both methods running in real time.
ARTICLE | doi:10.20944/preprints202301.0403.v1
Subject: Social Sciences, Cognitive Science Keywords: visual perception; emotion; emoji; emoticon; sex differences; anger; fear; emotional communication; texting
Online: 23 January 2023 (08:43:06 CET)
Emojis are colorful ideograms resembling stylized faces, commonly used for expressing emotions in instant messaging, in social network sites, and in email communication. Notwithstanding their increasing and pervasive use in electronic communication, their psychological properties and communicative efficacy have received little investigation. Here we presented 112 different human facial expressions and emojis (expressing neutrality, joy, surprise, sadness, anger, fear, and disgust) to a group of 96 female and male university students, who were asked to recognize their emotional meaning. Both analysis of variance and Wilcoxon tests showed that men were significantly better than women at recognizing emojis (especially negative ones), while women were better than men at recognizing human facial expressions. Quite interestingly, men were better at recognizing emojis than human facial expressions per se. These findings are in line with recent evidence suggesting that men may be more competent in, and inclined toward, using emojis to express their emotions in messaging (especially sarcasm, teasing, and love) than previously thought. Finally, the data indicate that emojis are less ambiguous than facial expressions (except for the neutral and surprise emotions), possibly because of their limited number of fine-grained details and the lack of morphological features conveying facial identity.
ARTICLE | doi:10.20944/preprints202209.0127.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: Image defogging; visual enhancement evaluation; edge detection; deep neural networks; autonomous systems
Online: 8 September 2022 (15:37:03 CEST)
Fog, haze, and smoke are common atmospheric phenomena that dramatically compromise the overall visibility of a scene, critically affecting features such as illumination, contrast, and the contour detection of objects. The decrease in visibility compromises the performance of computer vision algorithms such as pattern recognition and segmentation, some of them highly relevant for decision-making in the security and autonomous vehicle industries. Several dehazing methods have been proposed; however, to the best of our knowledge, all existing metrics either compare the defogged image to its ground truth or need to estimate parameters through physical models. This hinders progress in the field, as obtaining proper ground truth images is costly and time-consuming, and physical parameters depend greatly on scene conditions. This paper tackles this issue by proposing a contour-based metric for image defogging evaluation that does not need a ground truth image; it requires only the original hazy RGB image and the RGB image after the defogging procedure. A comparison of the proposed metric with the metrics used in the NTIRE 2018 defogging challenge shows comparable results to conventional metrics, proving its effectiveness in general situations.
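A minimal, hypothetical illustration of a ground-truth-free, contour-based evaluation (a simplification, not the paper's actual metric): compare Sobel edge maps of the hazy and defogged images and report the ratio of detected contour pixels. The threshold and the synthetic images are toy choices:

```python
import numpy as np
from scipy.ndimage import sobel

def contour_gain(hazy, defogged, threshold=1.0):
    """Ratio of contour pixels in the defogged vs. hazy image,
    using thresholded Sobel gradient magnitude as the edge map."""
    def edge_count(img):
        grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
        return int((grad > threshold).sum())
    return edge_count(defogged) / max(edge_count(hazy), 1)

# Synthetic example: a step edge, and a low-contrast "hazy" version of it
clear = np.zeros((32, 32))
clear[:, 16:] = 1.0
hazy = 0.2 * clear + 0.4           # uniform haze flattens the contrast
gain = contour_gain(hazy, clear)   # > 1: defogging recovered contours
```

A score above 1 indicates that defogging exposed contours that the haze had suppressed below the detection threshold, which is the intuition behind evaluating dehazing without a ground truth image.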
ARTICLE | doi:10.20944/preprints202208.0282.v2
Subject: Engineering, Civil Engineering Keywords: Computer Program; User Manual; Visual Basic; Solid Slab; Simply Supported
Online: 29 August 2022 (04:57:59 CEST)
The main target of this study was to analyze and design two-way solid slabs supported on rectangular edges using ES EN 1992-1-1:2015. Slab design is often carried out either manually or with design and analysis software, but some software does not support certain countries' standard codes. For example, in Ethiopia, analysis and design of two-way solid slabs is currently done using readily available Excel sheet templates. This approach has several problems: a structure analyzed by SAP, SAFE, or other international software that uses international codes (for instance, Eurocodes) rather than ES EN 1992, and then designed with an Excel sheet, can produce failure-prone and uneconomical analysis and design results. In this paper, the slab is analyzed and designed based on the chosen concrete grade, reinforcement bar diameter, and steel grade, with calculations such as load, moment, shear, and deflection checks performed using the moment coefficient method, and Microsoft Visual Basic 2010 used for coding. All input and output values use International Standard units. Manual calculation is time-consuming and often error-prone, whereas this computer program increases computational accuracy and saves time. The procedure followed was to perform the manual calculation first and then run SADSE2021; the two results were 99.9% identical. A disadvantage of this method is that it cannot produce the detailed drawing.
ARTICLE | doi:10.20944/preprints202107.0624.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: knowledge representation; electronic health records; health information systems; content identification; visual interface
Online: 28 July 2021 (10:42:49 CEST)
Medical records contain many terms that are difficult to process. Our aim in this study is to allow visual exploration of the information in medical databases whose texts present a large number of syntactic variations and abbreviations, through an interface that facilitates content identification, navigation, and information retrieval. We propose the use of multi-term tag clouds as content representation tools and as assistants for browsing and querying tasks. The tag cloud generation is achieved through a novel mathematical method that allows related terms to remain grouped together within the tags. To evaluate this proposal, we used a database with 24,481 records. Twenty-three expert users in the medical field completed a survey evaluating the properties of the generated tag clouds, and we obtained a precision of 0.990, a recall of 0.870, and an F1-score of 0.904 in the evaluation of the tag cloud as an information retrieval tool. The main contribution of this approach is that we automatically generate a visual interface over the text capable of capturing the semantics of the information and facilitating access to medical records.
ARTICLE | doi:10.20944/preprints202010.0455.v1
Subject: Engineering, Automotive Engineering Keywords: KINECT; industrial robot; vision system; RobotStudio; Visual Studio; gesture control; voice control
Online: 22 October 2020 (09:57:07 CEST)
The paper presents the possibility of using the KINECT v2 module to control an industrial robot by means of gestures and voice commands. It describes the elements of creating software for off-line and on-line robot control. The application for the KINECT module was developed in C# in the Visual Studio environment, while the industrial robot control program was developed in the RAPID language in the RobotStudio environment. Developing a two-threaded application in RAPID allowed two independent tasks to be separated for the IRB120 robot. The main task of the robot is performed in thread no. 1 (responsible for movement). The simultaneously running thread no. 2 ensures continuous communication with the KINECT system and provides information about gesture and voice commands in real time without any interference with thread no. 1. The applied solution allows the robot to work in industrial conditions without the communication task negatively affecting the robot's cycle times. Thanks to the development of a digital twin of the real robot station, tests of proper application functioning were conducted in off-line mode (without a real robot); the obtained results were then verified on-line (on the real test station). Tests of gesture recognition correctness were carried out, and the robot recognized all programmed gestures. Recognition and execution of voice commands were also tested; a difference in task completion time between the real and virtual stations was noticed, with an average difference of 0.67 s. The last test examined the impact of interference on the recognition of voice commands: with a 10 dB difference between the command and the noise, voice command recognition reached 91.43%. The developed computer programs have a modular structure, which enables easy adaptation to process requirements.
ARTICLE | doi:10.20944/preprints202009.0740.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: balance training; real-time visual feedback; smart wearable devices; center of pressure
Online: 30 September 2020 (11:00:33 CEST)
This study aims to explore the effect of real-time visual feedback (VF) of the center of pressure (COP), provided by intelligent insoles, on balance training in one-leg stance (OLS) and tandem stance (TS) postures. Thirty healthy female college students were randomly assigned to a visual feedback balance training group (VFT), a non-visual feedback balance training group (NVFT), and a control group (CG). The balance training included OLS, tandem stance with the dominant leg behind (TSDL), and tandem stance with the non-dominant leg behind (TSNDL). The training lasted 4 weeks, with 30-minute sessions at one-day intervals. For OLS, there was a significant Groups*Times interaction effect on the COP parameters (p < 0.05); for TS, there was no significant interaction effect (p > 0.05), but the main effect of Times on the COP parameters was significant (p < 0.05). COP displacement, velocity, radius, and area in the VFT group significantly decreased after training (p < 0.05). Therefore, visual feedback from intelligent auxiliary equipment during balance training can enhance the benefit of training, and the use of smart wearable devices in OLS balance training may improve visual and physical balance integration ability.
REVIEW | doi:10.20944/preprints202003.0020.v1
Subject: Medicine And Pharmacology, Anesthesiology And Pain Medicine Keywords: visual patient; patient monitoring; avatar-based technology; situation awareness; user-centered design
Online: 2 March 2020 (00:56:03 CET)
Visual Patient technology is a situation awareness–oriented visualization technology that translates numerical and waveform patient monitoring data into a new user-centered visual language. Vital sign values are converted into colors, shapes, and rhythmic movements—a language humans can easily perceive and interpret—on a patient avatar model in real time. In this review, we summarize the current state of the research on the Visual Patient, including the technology, its history, and its scientific context. We also provide a summary of our primary research and a brief overview of research work on similar user-centered visualizations in medicine. In several computer-based studies under various experimental conditions, Visual Patient transferred more information per unit time, increased perceived diagnostic certainty, and lowered perceived workload. Eye tracking showed the technology worked because of the way it synthesizes and transforms vital sign information into new and logical forms corresponding to the real phenomena. The technology could be particularly useful for improving situation awareness in settings with high cognitive demand or when users must make quick decisions. This comprehensive review of Visual Patient research is the foundation for an evaluation of the technology in clinical applications, starting with a high-fidelity simulation study in early 2020.
ARTICLE | doi:10.20944/preprints201804.0040.v1
Subject: Medicine And Pharmacology, Ophthalmology Keywords: Methanol exposure; toxic effects; subcontractor manufacturing; dispatched workers; visual defect; neurobehavioral function
Online: 3 April 2018 (16:11:15 CEST)
An outbreak of occupational methanol poisoning occurred in small-scale third-tier supplier factories of a large smartphone manufacturer in the Republic of Korea in 2016. To investigate the working environment and the health effects of methanol exposure among co-workers of the poisoning cases, we performed a cross-sectional study on 155 workers at five aluminum CNC-cutting factories. Air and urinary methanol concentrations were measured by gas chromatography, and the health examination included symptoms, ophthalmological examinations, and neurobehavioral tests. Multiple logistic regression analyses controlling for age and sex were conducted to reveal the association of employment duration with symptoms. Air concentrations of methanol in factories A and E ranged from 228.5 to 2220.0 ppm. Mean urinary methanol concentrations of the workers at each factory ranged from 3.5 mg/L up to 91.2 mg/L. The odds ratios for deteriorating vision and CNS symptoms increased with employment duration after adjusting for age and sex. Four cases of optic nerve injury and two cases of decreased neurobehavioral function were found among the co-workers of the victims. This study showed that methanol exposure under poor environmental control not only produced eye and CNS symptoms but also affected neurobehavioral function and the optic nerve.
ARTICLE | doi:10.20944/preprints201710.0093.v1
Subject: Engineering, Automotive Engineering Keywords: SAR image; Visual attention model; Texture Saliency; Feature map; Focus of attention
Online: 13 October 2017 (17:08:14 CEST)
Target detection in synthetic aperture radar (SAR) remote sensing images is a fundamental but challenging problem in satellite image analysis; it plays an important role in a wide range of applications and has received significant attention in recent years. The human visual system, by contrast, detects visual saliency extraordinarily quickly and reliably, yet computational modeling of SAR image scenes remains a challenge. This paper analyzes the defects and shortcomings of traditional visual models when applied to SAR images and then proposes a visual attention model designed for SAR images. The model draws on the basic framework of the classical ITTI model and selects and extracts texture features, among others, that describe SAR images better. We propose a new algorithm for computing the local texture saliency of the input image, from which the model constructs the corresponding feature saliency maps. Next, a new feature-fusion mechanism replaces the linear additive mechanism of classical models to obtain the overall saliency map. Finally, taking into account the gray-scale characteristics of the focus of attention (FOA) in the saliency maps of all features, the model chooses the best saliency representation. Through a multi-scale competition strategy, filtering and threshold segmentation of the saliency maps are used to select the salient regions accurately, completing visual saliency detection in SAR images. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the visual models. The results show that our model outperforms classical visual models: it reduces false alarms caused by speckle noise, and its detection speed is greatly improved, by 25% to 45%.
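For context, the classical linear-additive fusion that the paper replaces can be sketched with an ITTI-style normalization operator N(.): maps with a single dominant peak are promoted before summation. This toy version (lists as 2-D maps, a simplified global weighting; the paper's own fusion mechanism is not reproduced here) illustrates the baseline behavior:

```python
def itti_normalize(fmap):
    # Simplified N(.) operator: scale the map to [0, 1], then weight it by
    # (global max - mean of the remaining values)^2, so a map with one
    # dominant peak keeps its peak while a map with many equal peaks is
    # suppressed. (The full ITTI operator averages local maxima instead.)
    vals = [v for row in fmap for v in row]
    lo, hi = min(vals), max(vals)
    scaled = [[(v - lo) / (hi - lo) if hi > lo else 0.0 for v in row]
              for row in fmap]
    svals = [v for row in scaled for v in row]
    m = max(svals)
    mbar = (sum(svals) - m) / (len(svals) - 1)
    w = (m - mbar) ** 2
    return [[w * v for v in row] for row in scaled]

def fuse_linear(maps):
    # Classical linear-additive fusion of the normalized feature maps.
    fused = [[0.0] * len(maps[0][0]) for _ in maps[0]]
    for fmap in maps:
        for r, row in enumerate(itti_normalize(fmap)):
            for c, v in enumerate(row):
                fused[r][c] += v
    return fused

peaked = itti_normalize([[0, 0], [0, 10]])   # one dominant peak: kept
multi = itti_normalize([[10, 0], [0, 10]])   # two equal peaks: suppressed
```

The single-peak map retains its full weight, while the multi-peak map is attenuated before summation; this is the additive baseline against which the paper's new fusion mechanism is compared.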
ARTICLE | doi:10.20944/preprints202307.1841.v1
Subject: Engineering, Architecture, Building And Construction Keywords: Historical timber; conservation; visual survey; mechanical properties; stiffness; non-destructive testing; Silver fir.
Online: 27 July 2023 (09:30:32 CEST)
In ancient buildings, timber members may need specific on-site interventions, including reinforcement or repair, sometimes by inserting reinforcing material into grooves routed in the original sound wood. The number, position, and dimensions of the grooves, and the strengthening materials, may differ according to the desired increase in bending stiffness and strength. The Modulus of Elasticity (MOE) of each beam is of key importance, but the incisions can affect it beyond the mere reduction due to the removed wood. In this study, 12 old beams were accurately measured and their static and dynamic MOEs calculated before and after groove processing, to simulate the operations of typical reinforcing interventions. One groove was routed along the length of each beam and deepened progressively in three steps. The tests showed how the MOE is affected by groove depth, decreasing by up to one third. The paper shows that the weakening effect of grooves can be assessed on-site with the dynamic MOE and roughly predicted by means of a visual survey. The need for grooves when strengthening beams must be carefully evaluated because of the change in the actual mechanical properties, net of the lost wood.
ARTICLE | doi:10.20944/preprints202306.1669.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Livestock monitoring; Open source UAV; Depth sorting; Kalman filter; Optical flow; Visual servo
Online: 23 June 2023 (11:53:25 CEST)
Drone-based livestock monitoring in high-altitude, cold regions is a challenging and meaningful task. The purpose of AI is to execute automated tasks and to solve practical problems in real applications by combining software with a hardware carrier to create integrated advanced devices; only in this way can the maximum value of AI be realized. In this paper, a real-time tracking system with dynamic target-tracking ability is proposed. It is developed on a tracking-by-detection architecture, using the YOLOv7 and DeepSORT algorithms for target detection and tracking, respectively. To address the existing problems of the DeepSORT algorithm, two optimizations are made: (1) optical flow is used to compensate the Kalman filter, improving prediction accuracy; and (2) a low-confidence trajectory filtering method is adopted to reduce the influence of unreliable detections on target tracking. In addition, a visual servo controller for the UAV is designed to enable the automated tracking task. Finally, the system is tested using Tibetan yaks living on the Tibetan Plateau as tracking targets, and the results demonstrate the real-time multiple-tracking ability and the effective visual servo behavior of the proposed system.
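The second optimization, low-confidence trajectory filtering, can be sketched as follows (a hypothetical illustration; the thresholds and the exact criterion used in the paper are assumptions here):

```python
def filter_low_confidence_tracks(tracks, min_mean_conf=0.5, min_hits=3):
    # Drop trajectories whose detections are unreliable: too few matched
    # detections, or a low mean detection confidence. Both thresholds are
    # illustrative, not the paper's values.
    kept = []
    for track in tracks:
        confs = track["confidences"]
        if len(confs) >= min_hits and sum(confs) / len(confs) >= min_mean_conf:
            kept.append(track["id"])
    return kept

tracks = [
    {"id": 1, "confidences": [0.9, 0.8, 0.85]},  # reliable yak track: kept
    {"id": 2, "confidences": [0.3, 0.2, 0.4]},   # low confidence: dropped
    {"id": 3, "confidences": [0.9]},             # too few hits: dropped
]
print(filter_low_confidence_tracks(tracks))  # [1]
```

In a full pipeline, only the surviving tracks would feed the Kalman predict/update cycle, which is what reduces the influence of unreliable detections.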
ARTICLE | doi:10.20944/preprints202306.0459.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: car price prediction; visual features; cross-market analysis; feature analysis; deep neural networks
Online: 6 June 2023 (12:11:52 CEST)
The used car market has a high global economic importance, with more than 35 million cars sold yearly. Accurately predicting prices is a crucial task for both buyers and sellers to facilitate informed decisions in terms of opportunities or potential problems. Although various Machine Learning techniques have been applied to create robust prediction models, a comprehensive approach has yet to be studied. This research introduces two datasets from different markets, one with over 300,000 entries from Germany to serve as a training base for deep prediction models and a second dataset from Romania containing more than 15,000 car quotes used mainly to observe local traits. As such, we include extensive cross-market analyses by comparing the emerging Romanian market versus one of the world’s largest and most developed car markets, Germany. Our study uses several neural network architectures that capture complex relationships between car model features, individual add-ons, and visual features to predict used car prices accurately. Our models achieved a high R2 score exceeding 0.95 on both datasets, indicating their effectiveness in estimating used car prices. Moreover, we experimented with advanced convolutional architectures to predict car prices based solely on visual features extracted from car images. This approach exhibited transfer-learning capabilities, leading to improved prediction accuracy, especially since the Romanian training dataset is limited. Our experiments highlight the most important factors influencing the price, while our findings have practical implications for buyers and sellers in assessing the value of vehicles. At the same time, the insights gained from this study enable informed decision-making and provide valuable guidance in the used car market.
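As a point of reference for the R2 metric the study reports, a minimal single-feature baseline (illustrative code with made-up mileage/price data, far simpler than the paper's deep models) shows how the score is computed:

```python
def fit_line(xs, ys):
    # Ordinary least squares for a single feature (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope  # intercept, slope

def r2_score(xs, ys, intercept, slope):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    my = sum(ys) / len(ys)
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: price (EUR) falls with mileage (in 10,000 km units).
mileage = [1, 2, 3, 4, 5]
price = [19000, 17100, 15000, 13050, 11000]
a, b = fit_line(mileage, price)
print(round(r2_score(mileage, price, a, b), 4))
```

An R2 above 0.95, as the paper reports, means the model explains over 95% of the variance in price; the toy data here is nearly linear, so the score is close to 1.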
REVIEW | doi:10.20944/preprints202010.0388.v1
Subject: Engineering, Automotive Engineering Keywords: Autism Spectrum Disorder; activity analysis; automated detection; repetitive behavior; abnormal gait; visual saliency
Online: 19 October 2020 (14:49:24 CEST)
Autism Spectrum Disorder (ASD) is a neuro-developmental disorder that limits social interactions, cognitive skills, and abilities. Since ASD can last throughout an affected person's entire life cycle, diagnosis at early onset can yield a significant positive impact. The current medical diagnostic systems (e.g., DSM-5/ICD-10) are somewhat subjective, relying purely on behavioral observation of symptoms, and hence some individuals go misdiagnosed or are diagnosed late. Therefore, researchers have focused on developing data-driven automated diagnosis systems with less screening time, low cost, and improved accuracy while significantly reducing professional intervention. Human Activity Analysis (HAA) is considered one of the most promising niches in computer vision research. This paper analyzes its potential in the automated detection of autism by tracking characteristics distinctive of autistic individuals, such as repetitive behavior, atypical walking style, and unusual visual saliency. The review provides a detailed inspection of HAA-based autism detection literature published from 2011 onwards, depicting core approaches, challenges, probable solutions, available resources, and scope for future exploration in this arena. According to our study, deep learning outperforms machine learning in ASD detection, with classification accuracies of 76% to 95% on different datasets comprising video, image, or skeleton data recorded while participants performed a large number of actions. However, machine learning provides satisfactory results on datasets with a small number of action classes, with accuracies ranging from 60% to 93% across numerous studies. We hope this extensive review will provide a comprehensive guideline for researchers in this field.
REVIEW | doi:10.20944/preprints201912.0179.v1
Subject: Biology And Life Sciences, Biophysics Keywords: globular set; category theory; multidimensional; visual recognition; drug-resistant epilepsy; transcranial magnetic stimulation.
Online: 13 December 2019 (10:37:03 CET)
Once a wheat sheaf has been sealed and tied up, its packed-down straws display the same orientation and zero divergence. This observation brings us to the mathematical notion of a presheaf, i.e., a topological structure in which diverging functions are locally superimposed. We show how the concepts of presheaves and the correlated globular sets, borrowed from category theory and algebraic topology, allow a well-founded mathematical approach to otherwise elusive activities of the brain. The mathematical assessment of brain functions in terms of presheaves: a) explains why spontaneous random spikes synchronize; b) leads to the counterintuitive idea of antidromic effects in neuronal spikes: when an entrained oscillation propagates from A to B, changes in B lead to changes in A. We provide testable predictions: a) we suggest the proper locations of transcranial magnetic stimulation coils to improve the clinical outcomes of drug-resistant epilepsy; b) we advocate that axonal stimulation by external sources backpropagates and alters the neuronal electric oscillatory frequency. Further, we describe how hierarchical information transmission inside globular sets provides fresh insights into different issues at various coarse-grained scales, such as object persistence, memory reinforcement in spite of random noise, and Bayesian inferential circuits.
Subject: Engineering, Marine Engineering Keywords: unmanned surface vehicles; optical visual perception; image stabilization; defogging; target detection; target tracking
Online: 24 November 2019 (16:54:46 CET)
Unmanned surface vehicles have the advantages of maneuverability, concealment, a wide area of activity, and low cost of use; they therefore have broad application prospects. This makes unmanned surface vehicles a research hotspot at home and abroad, and sensing technology is the basis on which they perform their tasks. Perception technology based on optical vision has the advantages of convenient application, relatively low cost, easy data acquisition, and a large amount of information, and has been widely studied by scholars at home and abroad. This paper discusses the research on optical vision for unmanned surface vehicles from five aspects: first, water-surface image preprocessing, mainly including image stabilization and defogging enhancement; second, water-boundary detection; third, optical-vision target detection; fourth, surface target tracking methods; finally, the optical vision research on unmanned surface vehicles is summarized and future directions are forecast.
Subject: Computer Science And Mathematics, Other Keywords: steady-state visual evoked potential; brain-computer interface; direction; eccentricity; canonical correlation analysis
Online: 15 October 2019 (12:21:12 CEST)
The feasibility of a steady-state visual evoked potential (SSVEP) brain-computer interface (BCI) with a single flicker stimulus for multiple-target decoding has been demonstrated in a number of recent studies. Single-flicker BCIs have mainly employed direction information for encoding the targets, i.e., different targets are placed at different spatial directions relative to the flicker stimulus. The present study explored whether visual eccentricity information can also be used to encode targets, for the purpose of increasing the number of targets in single-flicker BCIs. A total of 16 targets were encoded, placed at eight spatial directions and two eccentricities (2.5° and 5°) relative to a 12 Hz flicker stimulus. Whereas distinct SSVEP topographies were elicited when participants gazed at targets in different directions, targets at different eccentricities were mainly represented by different signal-to-noise ratios (SNRs). Using a canonical correlation analysis-based classification algorithm, simultaneous decoding of both direction and eccentricity information was achieved, with an average offline 16-class accuracy of 66.8±16.4% across 12 participants and a best individual accuracy of 90.0%. Our results demonstrate a single-flicker BCI with a substantially increased target number, a step towards practical applications.
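A stripped-down illustration of the frequency-detection principle behind SSVEP decoding (real CCA-based classification uses multi-channel EEG and canonical correlations with sine/cosine reference sets, which this toy single-channel correlation only approximates):

```python
import math

def detect_frequency(signal, fs, candidates):
    # Correlate the signal with sine and cosine references at each candidate
    # flicker frequency and return the best-matching one. A toy single-channel
    # stand-in for CCA-based SSVEP classification.
    n = len(signal)
    best_f, best_score = None, -1.0
    for f in candidates:
        s = sum(signal[i] * math.sin(2 * math.pi * f * i / fs) for i in range(n))
        c = sum(signal[i] * math.cos(2 * math.pi * f * i / fs) for i in range(n))
        score = math.hypot(s, c)  # phase-insensitive match strength
        if score > best_score:
            best_f, best_score = f, score
    return best_f

fs = 250                                                        # assumed sampling rate (Hz)
sig = [math.sin(2 * math.pi * 12 * i / fs) for i in range(fs)]  # 1 s of a 12 Hz response
print(detect_frequency(sig, fs, [8, 10, 12, 15]))  # 12
```

Full CCA extends this idea by finding the spatial filter over EEG channels that maximizes correlation with the reference set, which is what allows the direction and eccentricity information to be decoded jointly.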
ARTICLE | doi:10.20944/preprints202309.0605.v1
Subject: Medicine And Pharmacology, Clinical Medicine Keywords: Visual Patient Avatar; patient monitoring; situation awareness; human factors; user-centered design; user perception
Online: 8 September 2023 (16:15:59 CEST)
Visual Patient Avatar ICU is an innovative approach to patient monitoring that enhances the user's situation awareness in intensive care settings. It dynamically displays the patient's current vital signs through changes in color, shape, and animation. The technology can also indicate patient-inserted devices, such as arterial lines, central lines, and urinary catheters, along with their insertion locations. We conducted an international, multi-center study using a sequential qualitative-quantitative design to evaluate the perception of Visual Patient Avatar ICU among physicians and nurses. Twenty-five nurses and twenty-five physicians from the ICU participated in the structured interviews, and forty of them completed the online survey. Overall, ICU professionals expressed a positive outlook on Visual Patient Avatar ICU, describing it as a simple and intuitive tool that improved information retention and facilitated problem identification. However, a subset of participants expressed concerns about potential information overload and a sense of incompleteness due to missing exact numerical values. These findings provide valuable insights into user perceptions of Visual Patient Avatar ICU and encourage further technology development before clinical implementation.
ARTICLE | doi:10.20944/preprints202308.2188.v1
Subject: Social Sciences, Psychology Keywords: traumatic brain injury; social cognition; emotion recognition; eye tracking; fixation; visual processing; dynamic stimuli
Online: 31 August 2023 (13:16:58 CEST)
Emotion recognition and social inference impairments are well documented after traumatic brain injury (TBI), yet the mechanisms underpinning them are not fully understood. We examined dynamic emotion recognition, social inference abilities, and eye fixation patterns in adults with and without TBI. Eighteen individuals with TBI and 18 matched non-TBI participants were recruited and underwent all three components of The Awareness of Social Inference Test (TASIT). The TBI group was less accurate in identifying emotions than the non-TBI group. Individuals with TBI also scored lower when distinguishing sincere from sarcastic conversations, but scored similarly to those without TBI on the lie vignettes. Finally, those with TBI also had difficulty understanding the actor's intentions, feelings, and beliefs compared with participants without TBI. No group differences were found in eye fixation patterns, and there were no associations between fixations and behavioural accuracy scores. This conflicts with previous studies and might be related to an important distinction between static and dynamic stimuli. Visual strategies appeared goal- and stimulus-driven, with attention distributed to the most diagnostic area of the face for each emotion. These findings suggest that low-level visual deficits may not be modulating emotion recognition and social inference disturbances post-TBI.
ARTICLE | doi:10.20944/preprints202304.0486.v1
Subject: Engineering, Chemical Engineering Keywords: Ultrasound irradiation; Liquid protrusion; Beads fountain/column/jet; Mist emergence; Acoustic conditions; Visual analysis
Online: 18 April 2023 (04:56:45 CEST)
The process of ultrasonic atomization involves a series of dynamic/topological deformations, though not always, of the free surface of a bulk liquid (initially) below the air. This study focuses on such dynamic interfacial alterations realized by changing acousto-related operating conditions, including the ultrasound excitation frequency, the acoustic strength or input power density, and the presence/absence of a "stabilizing" nozzle. High-speed, high-resolution imaging made it possible to qualitatively identify four representative transitions/demarcations: 1) the onset of a protrusion on the otherwise flat free surface; 2) the appearance of undulation along the growing protuberance; 3) the triggering of an emanating beads fountain out of this foundation-like region; and 4) the induction of droplet bursting and/or mist spreading. Quantitatively, the two-parameter specifications (degree as well as onset) of the periodicity in the protrusion-surface and beads-fountain oscillations were examined over wide ranges of driving/excitation frequency (0.43–3.0 MHz) and input power density (0.5–10 W/cm2) applied to the flat-surface ultrasound transducer, with or without the mounted nozzle. The resulting time sequences of images, processed over the extended operating ranges, confirm the wave-associated nature of the fountain structure, in particular of the recurring beads, i.e., their size "scalability" to the ultrasound wavelength predictable from the traveling-wave relationship. The thresholds in acoustic conditions for each of the four transition states of the fountain structure have been identified, notably the onset of a plausible "bifurcation" in the chain-bead diameter below a critical excitation frequency.
REVIEW | doi:10.20944/preprints202005.0348.v2
Subject: Social Sciences, Psychology Keywords: emotion; visual thalamus; initial evaluation; lateral geniculate nucleus; thalamic reticular nucleus; pulvinar; superior colliculus
Online: 11 May 2021 (10:32:30 CEST)
Current proposals on the temporal sequence of processing of emotional visual stimuli are partially incompatible with growing empirical data. In most of them, the initial evaluation structures (IES) postulated to be in charge of the earliest detection of emotional stimuli (i.e., those salient for the individual) are high-order structures (i.e., those receiving visual inputs after several synapses). Thus, their response latency cannot account for the first visual cortex response to emotional stimuli (peaking at 80 ms in humans). Additionally, these proposed structures lack the necessary infrastructure to locally analyze the visual features of the stimulus (shape, color, motion, etc.) that define it as emotional. In particular, the amygdala is defended as the cornerstone IES in humans as well, and cortical areas such as the ventral prefrontal cortex or the insula have also been proposed to intervene in this initial evaluation process. The present review describes several first-order brain structures (i.e., receiving visual inputs after one synapse), and second-order structures (two synapses) that may complement the former, that fulfill both prerequisites: response latencies compatible with the observed activity at the visual cortex, and the necessary architecture to rudimentarily analyze in situ the relevant features of the visual stimulation. The visual thalamus, and particularly the lateral geniculate nucleus (LGN), a first-order thalamic nucleus that actively processes visual information, is a good candidate to be the core IES, with the complementary action of the thalamic reticular nucleus (TRN). This LGN-TRN tandem could be supported, also in an ascending, initial evaluation phase, by the pulvinar, a second-order thalamic structure, and by first-order extra-thalamic nuclei (the superior colliculus and certain nuclei of the pretectum and the accessory optic system).
In sum, the visual thalamus, scarcely studied in relation to emotional processing, is a serious candidate to be the missing link in early emotional evaluation and, in any case, is worth exploring in future research.
Subject: Engineering, Electrical And Electronic Engineering Keywords: automated visual inspection; convolutional neural network; deep learning; pattern classification; semiconductor inspection; wafer map
Online: 7 April 2020 (11:30:38 CEST)
This article presents an automated vision-based algorithm for the die-scale inspection of wafer images captured using scanning acoustic tomography (SAT). The algorithm finds defective and abnormal die-scale patterns and produces a wafer map to visualize the distribution of defects and anomalies on the wafer. The main procedures are standard template extraction, die detection through template matching, pattern-candidate prediction through clustering, and pattern classification through deep learning. To conduct the template matching, we first introduce a two-step method to obtain a standard template from the original SAT image. Subsequently, the majority of the die patterns are detected through template matching. Thereafter, the columns and rows formed by the detected dies are predicted using a clustering method, producing an initial wafer map composed of detected die patterns and predicted pattern candidates. In the final phase of the algorithm, we implement a deep learning-based model to determine the defective and abnormal patterns in the wafer map. The experimental results verified the effectiveness and efficiency of the proposed algorithm. In conclusion, the proposed method performs well in identifying defective and abnormal die patterns, and produces a wafer map that presents important information for solving wafer fabrication issues.
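The template-matching step can be sketched minimally as a sum-of-squared-differences search (an illustrative toy; the article's actual similarity measure and SAT image handling are not reproduced here):

```python
def match_template(image, template):
    # Slide the template over the image and return the top-left (row, col)
    # of the best match, i.e., the position with the minimum sum of squared
    # differences (SSD) between template and image patch.
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos

# A die-like bright 2x2 block embedded at row 1, col 2 of a toy "wafer image".
image = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0],
]
print(match_template(image, [[9, 9], [9, 9]]))  # (1, 2)
```

Repeating this search with the extracted standard template over the whole wafer image yields the set of detected die positions that the clustering step then organizes into rows and columns.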
ARTICLE | doi:10.20944/preprints202002.0187.v1
Subject: Biology And Life Sciences, Plant Sciences Keywords: dioecious; DNA quality; flower type; sample preservation method; sex genotype; sex phenotype; visual assay
Online: 14 February 2020 (04:22:20 CET)
Methods for high-quality DNA extraction and knowledge of sex expression and flowering time are essential for applying genomic-assisted breeding and improving hybridization success in Guinea yam. A dioecious or monoecious pattern of flowering, and sometimes non-flowering, is a common phenomenon within and between Dioscorea species. Flowering in yam plants raised from botanical seeds often takes an extended period, mostly until the first clonal generation after propagation from tubers. The prolonged testing required to identify plant sex and flowering intensity in yam breeding often poses a challenge to reducing the breeding cycle and applying genomic selection. This study assessed sample preservation methods for DNA quality during extraction and the potential of a DNA marker to diagnose plant sex at the early seedling stage in white Guinea yam. The predicted sex at the seedling stage was further validated against the visual score of the sex phenotype at the flowering stage. DNA extracted from leaf samples preserved in liquid nitrogen, silica gel, dry ice, and by oven drying was similar in quality, with a higher molecular weight than samples stored in ethanol solution. Yam plant sex diagnosis with the DNA marker (sp16) identified a higher proportion of ZW genotypes (female or monoecious phenotypes) than ZZ genotypes (male phenotype) in the studied materials, with 74% prediction accuracy. The results provide valuable insights into suitable sample preservation methods for quality DNA extraction and the potential of the DNA marker sp16 to predict sex in white Guinea yam.
ARTICLE | doi:10.20944/preprints201904.0084.v1
Subject: Biology And Life Sciences, Ecology, Evolution, Behavior And Systematics Keywords: abundance; detection; diamondback terrapin; Malaclemys terrapin; monitoring; N-mixture; salt marsh; visual head count
Online: 8 April 2019 (10:55:00 CEST)
Generating a range-wide population status for the diamondback terrapin (Malaclemys terrapin spp.) is challenging due to a combination of species ecology and behavior and the limitations associated with traditional sampling methods. Visual counting of emergent heads offers an efficient, non-invasive, and promising method for generating abundance estimates of terrapin populations across broader spatial scales, and it can be used to explain spatial variation in population size. We conducted repeated visual head-count surveys at 38 predetermined sites along the shoreline of Wellfleet Bay in Wellfleet, Massachusetts. We analyzed the count data using a hierarchical modeling framework designed specifically for repeated count data: the so-called N-mixture model. This approach allows simultaneous modeling of imperfect detection to generate estimates of true terrapin abundance. We found that detection probability was lowest when skies were overcast and when wind speed was highest. Site-specific abundance varied, but abundance estimates were, on average, higher at unexposed sites than at exposed sites. We demonstrate the utility of pairing visual head counts with N-mixture models as an efficient method for estimating terrapin abundance and show how the approach can be used to identify environmental factors that influence detectability and distribution.
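The core of the N-mixture approach, marginalizing the latent true abundance out of repeated counts, can be sketched in a few lines (an illustrative single-site likelihood; the study's models additionally include covariates such as sky condition and wind speed):

```python
import math

def nmixture_loglik(counts, lam, p, n_max=60):
    # Log-likelihood of repeated head counts at one site under a binomial
    # N-mixture model: true abundance N ~ Poisson(lam), and each survey
    # detects each of the N terrapins independently with probability p.
    # The latent N is summed out up to n_max.
    lik = 0.0
    for n in range(max(counts), n_max + 1):
        prior = math.exp(-lam) * lam ** n / math.factorial(n)  # Pr(N = n)
        cond = 1.0
        for y in counts:  # Pr(counts | N = n), surveys independent given N
            cond *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        lik += prior * cond
    return math.log(lik)

counts = [4, 5, 3]  # three repeated visual head counts at one site
good = nmixture_loglik(counts, lam=8.0, p=0.5)  # plausible parameters
bad = nmixture_loglik(counts, lam=2.0, p=0.1)   # implausible parameters
```

Maximizing this likelihood over lam and p (across all sites) is what lets the method separate true abundance from imperfect detection.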
ARTICLE | doi:10.20944/preprints201804.0213.v2
Subject: Social Sciences, Cognitive Science Keywords: Temporal-order judgments; modeling; theory of visual attention; TVA; range of indecision; encoding reset
Online: 8 June 2018 (16:12:09 CEST)
Humans are incapable of judging the temporal order of visual events at brief temporal separations with perfect accuracy. Their performance, which is of much interest in visual cognition and attention research, can be measured with the temporal-order judgment (TOJ) task, which typically produces S-shaped psychometric functions. Occasionally, researchers have reported plateaus within these functions, and some theories predict such deviations from the basic S shape. However, the centers of the psychometric functions reflect the weakest performance, at the most difficult presentations, and therefore fluctuate strongly, leaving the existence and exact shapes of plateaus unclear. This study set out to investigate whether plateaus disappear if data accuracy is enhanced, or if we are "stuck on a plateau", or rather with it. For this purpose, highly accurate data were assessed by model-based analysis. The existence of plateaus is confidently confirmed, and two plausible mechanisms derived from very different models are presented. Neither model, however, performs well in the presence of a strong attention manipulation, and model comparison remains unclear on which of the models describes the data best. Nevertheless, the present study includes the highest accuracy in visual TOJ data and the most explicit models of plateaus in TOJ studied so far.
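One plausible plateau mechanism, a central "range of indecision" within which the observer can only guess, can be sketched as follows (an illustrative parameterization, not the authors' fitted models):

```python
import math

def toj_psychometric(soa, sigma=30.0, delta=20.0):
    # Probability of a "probe first" judgment as a function of SOA (ms).
    # Within the range of indecision (|SOA| < delta), order information is
    # unavailable and the observer guesses, so P = 0.5; outside it, the
    # probability follows a logistic in the exceedance |SOA| - delta.
    # Parameters sigma and delta are illustrative.
    if abs(soa) < delta:
        return 0.5
    p = 1.0 / (1.0 + math.exp(-(abs(soa) - delta) / sigma))
    return p if soa > 0 else 1.0 - p

# Sampling the curve shows an S shape with a flat central segment.
curve = [round(toj_psychometric(s), 3) for s in range(-60, 61, 20)]
```

Plotting `curve` against SOA gives an S-shaped function whose center is flattened into a plateau of width 2·delta, the kind of deviation from the basic S shape discussed above.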
ARTICLE | doi:10.20944/preprints201802.0152.v1
Subject: Medicine And Pharmacology, Ophthalmology Keywords: glaucoma; lamina cribrosa; optic nerve head; optical coherence tomography; corneal hysteresis; visual field; trabeculectomy
Online: 24 February 2018 (11:06:05 CET)
Purpose: To investigate the relationship of lamina cribrosa displacement to corneal biomechanical properties and visual function after mitomycin C-augmented trabeculectomy. Method: Eighty-one eyes with primary open-angle glaucoma were imaged before and after trabeculectomy using enhanced-depth spectral-domain optical coherence tomography (SDOCT). Corneal biomechanical properties were measured with the Ocular Response Analyser before the surgery. The anterior lamina cribrosa (LC) was marked at several points in each of six radial scans to evaluate LC displacement in response to intraocular pressure (IOP) reduction. A Humphrey visual field test (HVF) was performed before the surgery as well as three and six months postoperatively. Results: Factors associated with a deeper baseline anterior lamina cribrosa depth (ALD) were cup-disc ratio (P = 0.04), baseline IOP (P = 0.01), corneal hysteresis (P = 0.001), and corneal resistance factor (P = 0.001). After the surgery, the position of the LC became more anterior (negative), more posterior (positive), or remained unchanged. The mean LC displacement was -42 μm (P = 0.001); it was positively correlated with the magnitude of IOP reduction (regression coefficient: 0.251, P = 0.02), and negatively correlated with age (regression coefficient: -0.224, P = 0.04) as well as baseline cup-disc ratio (regression coefficient: -0.212, P = 0.05). Eyes with a larger negative LC displacement were more likely to experience an HVF improvement of more than 3 dB gain in mean deviation (P = 0.002). Conclusion: A lower SDOCT cup-disc ratio, younger age, and a larger IOP reduction were correlated with a larger negative LC displacement and improving HVF. Corneal biomechanics did not predict LC displacement.
ARTICLE | doi:10.20944/preprints201609.0126.v2
Subject: Computer Science And Mathematics, Computer Science Keywords: Brain-computer interface (BCI); visual motion perception; neurotechnology application; EEG; realtime brain signal decoding
Online: 4 October 2016 (14:43:48 CEST)
The paper presents a study of two novel visual-motion-onset stimulus-based brain–computer interfaces (vmoBCI). Two settings are compared, with motion patterns afferent versus efferent relative to the computer screen center. Online vmoBCI experiments are conducted in an oddball event-related potential (ERP) paradigm allowing for “aha-response” decoding in EEG brainwaves. A subsequent stepwise linear discriminant analysis (swLDA) classification accuracy comparison is discussed, based on two inter-stimulus-interval (ISI) settings of 700 and 150 ms in two online vmoBCI applications with six- and eight-command settings. A research hypothesis of non-significant differences in classification accuracy is confirmed for the two ISI settings of 700 ms and 150 ms, as well as for various numbers of ERP response averaging scenarios. The visual motion patterns efferent with respect to the display center allowed for faster interfacing, and they are therefore recommended as more suitable for visual BCIs that require no eye movements.
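The role of ERP response averaging in such oddball paradigms can be illustrated with a toy simulation (all signal shapes, amplitudes, and noise levels below are invented for illustration and are not the authors' data or pipeline; the swLDA classifier itself is omitted):

```python
import math
import random

random.seed(0)

def simulate_epoch(is_target, n_samples=128, noise_sd=2.0):
    """One post-stimulus EEG epoch (arbitrary units): target epochs carry
    a P300-like Gaussian bump near 300 ms. Values are illustrative only."""
    epoch = []
    for i in range(n_samples):
        t = 0.8 * i / (n_samples - 1)            # seconds after stimulus onset
        bump = math.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) if is_target else 0.0
        epoch.append(bump + random.gauss(0.0, noise_sd))
    return epoch

def averaged_erp(is_target, n_trials):
    """Averaging n_trials epochs shrinks the noise by about sqrt(n_trials),
    which is why ERP-based BCIs trade interfacing speed for averaging."""
    epochs = [simulate_epoch(is_target) for _ in range(n_trials)]
    return [sum(col) / n_trials for col in zip(*epochs)]
```

With enough averaged trials, the target bump stands out against the non-target baseline, and a classifier such as swLDA can then separate the two response classes.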
REVIEW | doi:10.20944/preprints202307.0542.v1
Subject: Biology And Life Sciences, Other Keywords: eye health; visual impairment; age-related macular degeneration; glaucoma; retinitis pigmentosa; diabetic retinopathy; therapeutic strategies
Online: 10 July 2023 (08:29:38 CEST)
Visual impairment and blindness are a growing public health problem, as they reduce the quality of life of millions of people. The management and treatment of these diseases represent a scientific and therapeutic challenge, since the different cellular and molecular actors involved in their pathophysiology are still being identified. The components of the visual system, particularly the retinal cells, are extremely sensitive to genetic or metabolic alterations, and immune cells activated by insults contribute to biological events that culminate in vision loss and irreversible blindness. Several ocular diseases are linked to retinal cell loss, and diseases such as retinitis pigmentosa, age-related macular degeneration, glaucoma, and diabetic retinopathy are characterized by pathophysiological hallmarks that offer opportunities to study and develop novel treatments for retinal cell degeneration. Here, we present a compilation of revisited information on retinal degeneration, including pathophysiological and molecular features, biochemical hallmarks, and possible directions for novel treatments, intended to serve as a guide for innovative research. Expanding knowledge of the mechanistic bases of the pathobiology of eye diseases, including information on the complex interactions of genetic predisposition, chronic inflammation, and environmental and aging-related factors, will allow the identification of new therapeutic strategies.
ARTICLE | doi:10.20944/preprints202306.1313.v2
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Visual Signals; Stereovision; Image Sampling; Feature Extraction; Incremental Learning; Match-Maker; Cognition; Recognition; Possibility Function.
Online: 19 June 2023 (07:47:08 CEST)
Visual signals are the most important source for robots, vehicles, or machines to achieve human-like intelligence. Human beings depend heavily on binocular vision to understand the dynamically changing world. Similarly, intelligent robots or machines must have the innate capability of perceiving knowledge from visual signals. To date, one of the biggest challenges faced by intelligent robots or machines is matching in stereovision. In this paper, we present the details of a new principle toward achieving a robust matching solution that leverages the integration of a top-down image sampling strategy, hybrid feature extraction, and an RCE neural network for incremental learning (i.e., cognition) as well as a robust match-maker (i.e., recognition). A preliminary version of the proposed solution has been implemented and tested with data from the Maritime RobotX Challenge (www.robotx.org). The contribution of this paper is to attract more research interest and effort toward this new direction, which may eventually lead to the development of robust solutions expected by future stereovision systems in intelligent robots, vehicles, and machines.
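As a rough sketch of the incremental-learning component, a minimal Restricted Coulomb Energy (RCE) classifier can be written as follows (class and parameter names, initial radii, and the distance metric are assumptions for illustration, not the paper's implementation):

```python
import math

class RCENetwork:
    """Minimal RCE classifier: labeled prototypes with shrinkable radii,
    grown incrementally as new samples arrive. Illustrative sketch only."""
    def __init__(self, r_init=1.0, r_min=1e-3):
        self.prototypes = []              # list of (center, radius, label)
        self.r_init, self.r_min = r_init, r_min

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def train_one(self, x, label):
        covered = False
        for i, (c, r, lab) in enumerate(self.prototypes):
            d = self._dist(x, c)
            if d < r:
                if lab == label:
                    covered = True        # correctly covered, nothing to do
                else:                     # conflict: shrink the intruding sphere
                    self.prototypes[i] = (c, max(d, self.r_min), lab)
        if not covered:                   # commit a new prototype for x
            self.prototypes.append((x, self.r_init, label))

    def predict(self, x):
        votes = {lab for c, r, lab in self.prototypes if self._dist(x, c) < r}
        return votes.pop() if len(votes) == 1 else None   # None = unknown/ambiguous
```

Because prototypes are only added or shrunk, never retrained from scratch, such a network can keep learning new visual patterns incrementally, which is the property the cognition stage relies on.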