ARTICLE | doi:10.20944/preprints202007.0072.v1
Online: 5 July 2020 (12:32:00 CEST)
Remote sensing has been an important tool for disaster monitoring and disaster-scope extraction, especially for analyzing temporal and spatial disaster patterns in large-scale, long time series. To find a rapid and effective method for monitoring disasters over a wide range, this study, based on the Google Earth Engine cloud platform, used MODIS vegetation index products at 250-m spatial resolution, synthesized over 16 days, for the years 2005–2019, and proposes three disaster monitoring and scope extraction models: the normalized difference vegetation index (NDVI) median time standardization model (RNDVI_TM(i)), the NDVI median phenology standardization model (RNDVI_AM(i)(j)), and the NDVI median spatiotemporal standardization model (RNDVI_ZM(i)(j)). The optimal disaster extraction threshold for each model in different time phases was determined by Otsu's method, and the extraction results were verified with medium-resolution images and ground-measured data of the same or quasi-same period. Finally, the disaster scope of cultivated land in Heilongjiang Province from 2010 to 2019 was extracted, and the temporal and spatial pattern of disasters was analyzed based on meteorological data. The three models show high disaster monitoring and range extraction capability, with verification accuracies of 97.46% for RNDVI_TM(i), 96.90% for RNDVI_AM(i)(j), and 96.67% for RNDVI_ZM(i)(j). The spatial and temporal distribution of disasters is consistent with the disasters of the insured plots and with meteorological data across the whole province. Meanwhile, different monitoring and extraction methods suit different disasters; wind and insect disasters, in particular, often need a 16-day delay before observation. Each model also has different sensitivity and applicability to different disasters.
Compared with other methods, this method is fast and convenient, which allows it to be used for large-scale agricultural disaster monitoring, and it is easily applied to other research areas. The research provides a new approach for large-scale agricultural disaster monitoring.
ARTICLE | doi:10.20944/preprints202110.0015.v1
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: Social media; Community; Facebook; Twitter; Google; Information; Interaction
Online: 1 October 2021 (12:03:09 CEST)
Background: Caregivers often use the internet to access information related to stroke care to improve preparedness, thereby reducing uncertainty and enhancing the quality of care. Method: Social media communities used by caregivers of people affected by stroke were identified using popular keywords searched for using Google. Communities were filtered based on their ability to provide support to caregivers. Data from the included communities were extracted and analysed to determine the content and level of interaction. Results: There was a significant rise in the use of social media by caregivers of people affected by stroke. The most popular social media communities were charitable and governmental organizations with the highest user interaction – this was for topics related to stroke prevention, signs and symptoms, and caregiver self-care delivered through video-based resources. Conclusion: Findings show the ability of social media to support stroke caregiver needs and practices that should be considered to increase their interaction and support.
TECHNICAL NOTE | doi:10.20944/preprints202211.0241.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Google Earth Engine; R coding; GIS; Restoration; Decision-Making
Online: 14 November 2022 (06:29:30 CET)
Land degradation and climate change are among the main threats to the sustainability of ecosystems worldwide. The restoration of degraded landscapes is therefore essential to maintain ecosystem functionality, especially in areas with greater social, economic, and environmental vulnerability. Nevertheless, policy-makers are frequently challenged when deciding where to prioritize restoration actions, which usually involves dealing with multiple, complex needs under a chronically short budget. If these decisions are not based on proper data and processes, restoration implementation can easily fail. To help decision-makers make informed decisions on where to implement restoration activities, we developed a semiautomatic geospatial platform that prioritizes areas for restoration based on ecological, social, and economic variables. The platform integrates R coding, Google Earth Engine cloud computing, and GIS visualization services to generate an interactive geospatial decision-making tool for restoration. Here, we present a prototype version called "RePlant alpha", which was tested with data from the Central Zone of Chile. This exercise proved that integrating R and GEE is feasible, and that the analysis, with at least six indicators and for a specific region, can be run even from a personal computer. Therefore, using a virtual machine in the cloud with a large number of indicators over large areas is both possible and practical.
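The prioritization logic such a platform implements — combining normalized ecological, social, and economic indicator layers into a single priority surface — can be sketched in a few lines. The sketch below uses Python and NumPy rather than the authors' R/GEE stack, and the indicator names, weights, and top-10% cutoff are illustrative assumptions, not values from RePlant alpha.

```python
import numpy as np

# Hypothetical example: three normalized indicator rasters (0-1) for a small
# region; the names and weights are illustrative, not RePlant alpha's values.
rng = np.random.default_rng(0)
shape = (50, 50)
erosion_risk  = rng.random(shape)   # ecological indicator
poverty_index = rng.random(shape)   # social indicator
land_cost     = rng.random(shape)   # economic indicator (lower is better)

weights = {"erosion": 0.4, "poverty": 0.35, "cost": 0.25}

# Weighted overlay: a higher score means higher restoration priority.
priority = (weights["erosion"] * erosion_risk
            + weights["poverty"] * poverty_index
            + weights["cost"] * (1.0 - land_cost))  # invert cost so cheap = good

# Flag, say, the top 10% of cells as candidate restoration areas.
threshold = np.quantile(priority, 0.9)
candidates = priority >= threshold
print(candidates.sum())  # roughly 10% of the 2500 cells
```

In the real platform each layer would be a raster exported from GEE; the overlay and thresholding steps are the same idea at scale.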
ARTICLE | doi:10.20944/preprints202207.0462.v1
Subject: Social Sciences, Finance Keywords: Machine Learning; Random Forest; Google Trends; Predictability; Banks; Greece
Online: 29 July 2022 (13:07:42 CEST)
Background/Objectives: Accurate prediction of stock prices is an extremely challenging task because of factors such as political conditions, the global economy, unexpected events, market anomalies, and relevant companies' features. In this work, the random forest has been used to forecast the prices of the four major Greek systemic banks. Methods/Analysis: We make use of a set of financial variables based on intraday data: (i) open stock price, (ii) high stock price, (iii) low stock price, and (iv) close stock price of a particular Greek systemic bank. Results/Findings: The variables used here are crucial in predicting systemic banks' stock closing prices and provide a better prediction of the next day's closing price of the bank series. Novelty/Improvement: To our knowledge, this is the first study that employs machine learning techniques on Greek systemic banks.
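As a rough illustration of the setup described — a random forest fed with a bank's open/high/low/close series to predict the next day's close — here is a minimal sketch on synthetic data. The series, the chronological split, and the hyperparameters are assumptions; the study's actual data and tuning are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for one bank's daily series; the real inputs would be
# the four intraday variables named in the abstract (open/high/low/close).
rng = np.random.default_rng(42)
n = 500
close = 10 + np.convolve(rng.normal(0, 1, n + 9), np.ones(10) / 10, mode="valid")
high  = close + rng.uniform(0.0, 0.2, n)
low   = close - rng.uniform(0.0, 0.2, n)
open_ = close + rng.normal(0, 0.05, n)

# Features: today's open/high/low/close; target: tomorrow's close.
X = np.column_stack([open_, high, low, close])[:-1]
y = close[1:]

split = int(0.8 * len(X))                     # chronological hold-out
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = float(np.mean(np.abs(pred - y[split:])))
print(f"test MAE: {mae:.3f}")
```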
ARTICLE | doi:10.20944/preprints202104.0146.v1
Subject: Earth Sciences, Oceanography Keywords: Salt Marshes; Google Earth Engine; SVM; Distribution; China’s coast
Online: 5 April 2021 (14:28:19 CEST)
Based on the Google Earth Engine (GEE) cloud platform, this study selected Landsat 5/8 and Sentinel-2 remote sensing images and used the Support Vector Machine (SVM) classification method to classify 35 years of intertidal salt marshes in China, verifying the classification results against field surveys. Finally, combining various driving factors, the causes and patterns of change in salt marsh species and area were discussed and analyzed. The main results are as follows. The main salt marsh plant types in China include Phragmites australis, Spartina alterniflora, Suaeda salsa, Scirpus mariqueter, Tamarix chinensis, Cyperus malaccensis, and Sesuvium portulacastrum. The classification gives salt marsh areas of 166,999.32 ha in 1985, 172,893.87 ha in 1990, 174,952.29 ha in 1995, 125,567.51 ha in 2000, 93,257.97 ha in 2005, 102,539.04 ha in 2010, 96,302.92 ha in 2015, and 115,722.75 ha in 2019. The main driving factors of salt marsh change from 1985 to 2015 are reclamation, mudflat aquaculture, climate change, coastal erosion, invasion of alien species, and natural competition and succession among salt marsh species. The results can be used to quantitatively analyze salt marsh carbon storage in space and time, and provide data support for the protection of salt marsh wetlands, the restoration of ecological functions, and the implementation of "carbon neutrality".
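A minimal sketch of the SVM classification step, using scikit-learn on made-up spectral samples for two marsh classes. The real study classifies Landsat/Sentinel-2 pixels inside GEE; the band values and class means below are illustrative assumptions only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical three-band spectral samples (e.g. red, NIR, SWIR) for two
# salt-marsh classes; the means/spreads are invented for illustration.
rng = np.random.default_rng(1)
n = 300
spartina   = rng.normal(loc=[0.05, 0.30, 0.45], scale=0.03, size=(n, 3))
phragmites = rng.normal(loc=[0.08, 0.40, 0.35], scale=0.03, size=(n, 3))

X = np.vstack([spartina, phragmites])
y = np.array([0] * n + [1] * n)     # 0 = Spartina, 1 = Phragmites

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.2f}")
```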
Subject: Earth Sciences, Geoinformatics Keywords: Google Earth Engine; MODIS; disaster monitoring; remote sensing index
Online: 21 July 2020 (03:12:22 CEST)
Remote sensing has been used as an important tool for disaster monitoring and disaster scope extraction, especially for the analysis of spatial and temporal disaster patterns of large-scale and long-duration series. Based on the Google Earth Engine cloud platform, this study used MODIS vegetation index products with 250-m spatial resolution synthesized over 16 days from the period 2005–2019 to develop a rapid and effective method for monitoring disasters across a wide spatiotemporal range. Three types of disaster monitoring and scope extraction models are proposed: the normalized difference vegetation index (NDVI) median time standardization model (RNDVI_TM(i)), the NDVI median phenology standardization model (RNDVI_AM(i)(j)), and the NDVI median spatiotemporal standardization model (RNDVI_ZM(i)(j)). The optimal disaster extraction threshold for each model in different time phases was determined using Otsu's method, and the extraction results were verified by medium-resolution images and ground-measured data of the same or quasi-same period. Finally, the disaster scope of cultivated land in Heilongjiang Province from 2010–2019 was extracted, and the spatial and temporal patterns of the disasters were analyzed based on meteorological data. This analysis revealed that the three aforementioned models exhibited high disaster monitoring and range extraction capabilities, with verification accuracies of 97.46%, 96.90%, and 96.67% for RNDVI_TM(i), RNDVI_AM(i)(j), and RNDVI_ZM(i)(j), respectively. The spatial and temporal disaster distributions were found to be consistent with the disasters of the insured plots and the meteorological data across the entire province. Moreover, different monitoring and extraction methods were used for different disasters, among which wind hazard and insect disasters often required a delay of 16 days prior to observation. Each model also displayed different sensitivities and applicability to different disasters.
Compared with other techniques, the proposed method is fast and easy to implement. This new approach can be applied to numerous types of disaster monitoring as well as large-scale agricultural disaster monitoring and can easily be applied to other research areas. This study presents a novel method for large-scale agricultural disaster monitoring.
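A sketch of the core idea behind one of the models: compare each pixel's NDVI with its multi-year median for the same phase, and let Otsu's method choose the damage threshold. The exact RNDVI_TM(i) formula is not given in the abstract, so the anomaly form `(NDVI − median) / median` and all sample values below are assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the cut that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # weight of the lower class per cut
    mu = np.cumsum(p * centers)       # cumulative mean per cut
    mu_t = mu[-1]                     # global mean
    with np.errstate(invalid="ignore", divide="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

# Hypothetical pixels: NDVI in one 16-day phase vs. the multi-year median
# for the same phase; 15% of pixels simulate disaster-induced NDVI loss.
rng = np.random.default_rng(7)
median_ndvi = rng.uniform(0.5, 0.8, 10_000)
ndvi = median_ndvi + rng.normal(0, 0.02, 10_000)
ndvi[:1500] -= 0.3                    # simulated damaged pixels

rndvi_tm = (ndvi - median_ndvi) / median_ndvi   # assumed anomaly form
t = otsu_threshold(rndvi_tm)
damaged = rndvi_tm < t
print(f"threshold {t:.3f}, damaged fraction {damaged.mean():.3f}")
```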
ARTICLE | doi:10.20944/preprints201910.0275.v1
Subject: Earth Sciences, Geoinformatics Keywords: Landsat; Sentinel 2; harmonization; crop monitoring; Google Earth Engine
Online: 24 October 2019 (06:02:04 CEST)
Proper satellite-based crop monitoring applications at the farm level often require near-daily imagery at medium to high spatial resolution. Synthesizing the ongoing satellite missions of ESA (Sentinel-2) and NASA (Landsat 7/8) provides this unprecedented opportunity at a global scale; nonetheless, it is rarely implemented because these procedures are data-demanding and computationally intensive. This study developed a complete processing stream in the Google Earth Engine cloud platform to generate harmonized surface reflectance images from the Landsat 7/8 and Sentinel-2 missions. The harmonized images were generated for two agriculture schemes, in Bekaa (Lebanon) and Ninh Thuan (Vietnam), during 2018–2019. We evaluated the performance of several pre-processing steps needed for the harmonization, including image co-registration, BRDF correction, topographic correction, and band adjustment. The study found that the misregistration between Landsat 8 and Sentinel-2 images varied from 10 m in Ninh Thuan, Vietnam to 32 m in Bekaa, Lebanon, and, if untreated, greatly degraded the quality of the harmonized dataset. Analysis of a pair of overlapping L8–S2 images over the Bekaa region showed that, after harmonization, all band-to-band spatial correlations improved greatly, from (0.57, 0.64, 0.67, 0.75, 0.76, 0.75, 0.79) to (0.87, 0.91, 0.92, 0.94, 0.97, 0.97, 0.96) in the blue, green, red, NIR, SWIR1, SWIR2, and NDVI bands, respectively. We demonstrated that the dense observations of the harmonized dataset can be very helpful for characterizing cropland in highly dynamic areas. We detected unimodal, bimodal, and trimodal shapes in the temporal NDVI patterns (likely cycles of paddy rice) in Ninh Thuan province, only during 2018. We fitted the temporal signatures of the NDVI time series using harmonic (Fourier) analysis.
Derived phase (angle from the starting point to the cycle's peak) and amplitude (the cycle's height) were combined with max-NDVI to generate an R-G-B image. This image highlighted croplands as colored pixels (high phase and amplitude) and other land types as grey/dark pixels (low phase/amplitude). The generated harmonized datasets, containing surface reflectance images (blue, green, red, NIR, SWIR1, SWIR2, and NDVI bands at 30 m) over the two study sites, are provided for public use and testing.
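The harmonic fitting step can be illustrated compactly: fit a first-order Fourier model to the NDVI series by least squares and recover phase and amplitude from the cosine/sine coefficients. The synthetic series and parameter values below are assumptions for illustration.

```python
import numpy as np

# Synthetic single-season NDVI series sampled every ~16 days over one year.
rng = np.random.default_rng(3)
T = 365.0
t = np.arange(0, 365, 16, dtype=float)
true_amp, true_phase = 0.35, 1.2            # assumed ground-truth cycle
ndvi = 0.45 + true_amp * np.cos(2 * np.pi * t / T - true_phase)
ndvi += rng.normal(0, 0.02, t.size)         # observation noise

# Design matrix for NDVI(t) = a0 + a1*cos(wt) + b1*sin(wt).
A = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / T),
                     np.sin(2 * np.pi * t / T)])
a0, a1, b1 = np.linalg.lstsq(A, ndvi, rcond=None)[0]

amplitude = np.hypot(a1, b1)        # the cycle's height
phase = np.arctan2(b1, a1)          # angle to the cycle's peak
print(round(amplitude, 2), round(phase, 2))
```

In the paper's workflow, phase, amplitude, and max-NDVI become the R, G, and B channels of a composite that makes cropland pixels stand out.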
ARTICLE | doi:10.20944/preprints202202.0023.v1
Subject: Social Sciences, Economics Keywords: consumer behavior; Google Trends; wind energy; public interest; environmental marketing.
Online: 1 February 2022 (21:39:15 CET)
Public interest in renewable and clean energy plays an important part in shaping consumer behavior and policy on these topics. A "big data" approach to assessing public interest in various topics uses freely available search frequency data for the Google search engine, through the Google Trends service. Search frequency data can be used to assess public opinion on a variety of topics, such as medicine, climate change and environmental concerns, finance, and economics. A study of public interest in wind energy topics is reported here. Six Google search keywords ("Wind power", "Wind energy", "Offshore wind", "Wind farm", "Wind turbine", and "Wind generator") were investigated over the 2004–2020 time range. All keywords except "Offshore wind" show a steady decrease from a 2008–2010 maximum until 2015, followed by a period of limited change from 2015 to 2020. Interest in offshore wind topics follows a similar trend but increases in frequency starting in 2015 and reaches a maximum in 2018. Overall, the Google Trends data show a decrease in public interest in most wind energy topics, with the exception of "Offshore wind", for English-speaking users over the 2004–2020 time range.
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Online Social Media prediction, Covid-19 prediction, Twitter, Google Trends
Online: 3 June 2021 (11:37:56 CEST)
As the coronavirus disease 2019 (COVID-19) continues to rage worldwide, the United States has become the most affected country, with more than 34.1 million total confirmed cases as of June 1, 2021. In this work, we investigate correlations between online social media and Internet search activity for the COVID-19 pandemic across the 50 U.S. states. By collecting state-level daily trends from both Twitter and Google Trends, we observe a high but state-dependent lag correlation with the number of daily confirmed cases. We further find that the predictive accuracy, measured by the correlation coefficient, is positively correlated with a state's demographics, air traffic volume, and GDP. Most importantly, we show that a state's early infection rate is negatively correlated with the lag to the previous peak in Internet searches and tweets about COVID-19, indicating that earlier collective awareness on Twitter/Google correlates with a lower infection rate. Lastly, we demonstrate that correlations between online social media and search trends are sensitive to time, mainly due to shifts in public attention.
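The lag-correlation analysis can be sketched as follows: shift the search/tweet series against the case series and keep the lag with the highest Pearson correlation. The synthetic series and the 9-day lag below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Smoothed synthetic "search trend" and a case series that lags it by 9 days.
rng = np.random.default_rng(5)
days = 120
trend = np.convolve(rng.random(days + 10), np.ones(7) / 7, mode="valid")[:days]
cases = np.roll(trend, 9) + rng.normal(0, 0.01, days)  # cases trail the trend

def lag_corr(x, y, k):
    """Pearson correlation of x against y shifted forward by k days (k >= 0)."""
    return np.corrcoef(x[:len(x) - k], y[k:])[0, 1]

lags = range(0, 21)
corrs = [lag_corr(trend, cases, k) for k in lags]
best = int(np.argmax(corrs))
print(f"best lag: {best} days, r = {corrs[best]:.2f}")
```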
ARTICLE | doi:10.20944/preprints202004.0338.v1
Subject: Social Sciences, Education Studies Keywords: active learning; web-based quiz; Google Forms; reviewing habits; smartphone
Online: 19 April 2020 (07:59:23 CEST)
Active participation of students is paramount not only for their learning experiences but also for their academic performance. Therefore, various methods have been developed and proven to help students achieve active learning. However, several shortcomings in these methods have been noted, such as increasing students’ sense of burden and discomfort, which eventually prevents them from benefiting fully. This study aimed to determine the efficiency of a low-load web-based review quiz, built by the researchers on Google Forms, in enhancing students’ reviewing habits and active class participation. Participants were 53 first-year dental hygiene students in a 10-class microbiology course. After each class, all students were given the web-based quiz to prepare for a paper-based review test, which assessed learning of the content covered in the previous classes. We analyzed the correlations between frequency of participation in the web-based quiz and the average scores of the weekly review tests or the final examination scores. Voluntary participation in the web-based quiz positively correlated with students’ short-term and long-term learning outcomes. Through this web-based quiz during the first year of the dental hygiene program, students can develop the “self-learning attitude” needed to pass the national examination.
ARTICLE | doi:10.20944/preprints201807.0076.v1
Subject: Earth Sciences, Geoinformatics Keywords: flood; disaster prevention; emergency response; decision making; Google Earth Engine
Online: 4 July 2018 (15:33:44 CEST)
This paper reports the efforts made and experiences gained in developing the Flood Prevention and Emergency Response System (FPERS) powered by Google Earth Engine, with focus on its applications at the three stages of floods. At the post-flood stage, FPERS integrates various remote sensing imageries, including Formosat-2 optical imagery, to detect and monitor barrier lakes, synthetic aperture radar imagery to derive an inundation map, and high-spatial-resolution photographs taken by unmanned aerial vehicles to evaluate damage to river channels and structures. At the pre-flood stage, a huge amount of geospatial data are integrated in FPERS and are categorized as typhoon forecast and archive, disaster prevention and warning, disaster events and analysis, or basic data and layers. At the during-flood stage, three strategies are implemented to facilitate the access of the real-time data: presenting the key information, making a sound recommendation, and supporting the decision-making. The example of Typhoon Soudelor in August of 2015 is used to demonstrate how FPERS was employed to support the work of flood prevention and emergency response from 2013 to 2016. The capability of switching among different topographic models and the flexibility of managing and searching data through a geospatial database are also explained, and suggestions are made for future works.
ARTICLE | doi:10.20944/preprints201807.0040.v1
Subject: Engineering, Civil Engineering Keywords: Google Earth Engine; EEFlux; METRIC; evapotranspiration; Landsat; water resources management
Online: 3 July 2018 (11:51:31 CEST)
Reliable evapotranspiration (ET) estimation is a key factor for water resources planning, attaining sustainable water resources use, irrigation water management, and water regulation. During the past few decades, researchers have developed a variety of remote sensing techniques to estimate ET. The Earth Engine Evapotranspiration Flux (EEFlux) application uses Landsat imagery archives on the Google Earth Engine platform to calculate daily evapotranspiration at the local field scale (30 m). Automatically calibrated for each Landsat image, the EEFlux application design is based on the widely vetted Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model and produces ET estimation maps for any Landsat 5, 7, or 8 scene in a matter of seconds. In this research we evaluate the consistency and accuracy of EEFlux products produced when standard US and global assets are used. Processed METRIC products for 58 scenes distributed around the western and central United States were used as the baseline for comparison. The goal of this paper is to compare the results from EEFlux with the standard METRIC applications to illustrate the utility of the EEFlux products as they currently stand. Given that EEFlux is derived from METRIC, differences are expected due to differing calibration methods (automatic versus manual) and differing input datasets. The products compared include the fraction of reference ET (ETrF), actual ET (ETa), and the surface energy balance components net radiation (Rn), ground heat flux (G), and sensible heat flux (H), as well as surface temperature (Ts), albedo, and NDVI. The comparisons show that the intermediate products Ts, albedo, and NDVI, as well as Rn, have similar values and behavior for both EEFlux and METRIC. Larger differences were found for H and G.
Despite the more significant differences in H and G, results show that EEFlux is able to calculate ETrF and ETa values comparable to the values from trained expert METRIC users for agricultural areas. For non-agricultural areas such as semi-arid rangeland and forests, the automated EEFlux calibration algorithm needs to be improved in order to be able to reproduce ETrF and ETa that is similar to the manually calibrated METRIC products.
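The basic METRIC/EEFlux bookkeeping — recovering actual ET from the fraction of reference ET — reduces to a per-pixel multiplication. The pixel values and daily reference ET below are illustrative only, not EEFlux outputs.

```python
import numpy as np

# ETrF is the per-pixel fraction of reference ET; actual ET is recovered as
# ETa = ETrF * ETr. The 2x2 "image" and reference value are made up.
etrf = np.array([[0.1, 0.6],
                 [0.9, 0.3]])   # fraction of reference ET from METRIC/EEFlux
etr_daily = 7.2                  # mm/day reference ET from weather data

eta = etrf * etr_daily           # mm/day actual ET per pixel
print(eta)
```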
ARTICLE | doi:10.20944/preprints201805.0274.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: artificial intelligence; semantic web; natural language; Google cloud speech; SPARQL
Online: 21 May 2018 (12:38:00 CEST)
The main restriction of the Semantic Web is the difficulty of the SPARQL language, which is necessary to extract information from a knowledge representation, also known as an ontology. To make the Semantic Web accessible to people who do not know SPARQL, friendlier interfaces are essential, and natural language is a good alternative. This paper presents the implementation of a friendly prototype interface to query and retrieve, by voice, information from a website built with Semantic Web tools, so that end users avoid the complicated SPARQL language. The interface recognizes a spoken query and converts it into text, processes the text through a Java program to identify keywords, generates a SPARQL query, extracts the information from the website, and reads it aloud to the user. In our work, the Google Cloud Speech API performs speech-to-text conversion, and text-to-speech conversion is performed with SVOX Pico. As results, we measured three variables: the success rate of queries, query response time, and a usability survey; their values allowed the evaluation of our prototype. Finally, the proposed interface provides a new approach to the problem, using the cloud as a service and reducing barriers to the Semantic Web for people without technical knowledge of Semantic Web technologies.
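The keyword-to-SPARQL step in such a pipeline can be sketched as a simple template fill. The prefix, ontology terms, and keyword map below are made up for illustration and are not the paper's actual mappings.

```python
# Toy version of the pipeline's middle step: turn keywords recognized in a
# spoken question into a SPARQL query string. All names here are invented.
def build_sparql(subject_kw: str, property_kw: str) -> str:
    # Map natural-language keywords to ontology properties (illustrative).
    prop_map = {"phone": "ex:phoneNumber", "email": "ex:email"}
    return (
        "PREFIX ex: <http://example.org/ontology#>\n"
        "SELECT ?value WHERE {\n"
        f'  ?s ex:name "{subject_kw}" ;\n'
        f"     {prop_map[property_kw]} ?value .\n"
        "}"
    )

query = build_sparql("Alice", "email")
print(query)
```

In the full system this string would be sent to a SPARQL endpoint, and the bound `?value` read back to the user via text-to-speech.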
ARTICLE | doi:10.20944/preprints202012.0701.v1
Subject: Medicine & Pharmacology, Nutrition Keywords: COVID-19; eating behavior; diet; food concern; Google Trends; health behavior.
Online: 28 December 2020 (12:55:42 CET)
The COVID-19 pandemic and its restrictive measures have presented serious, unprecedented challenges to human eating behavior. Google Search has become a valuable resource for examining, predicting, and estimating human online interests and behavior, which are in some way linked to real-world concerns. This study aimed to investigate the features and evaluate the impacts of the COVID-19 pandemic and its lockdowns on worldwide consumer interest in eating behavior and its related factors. Google Trends relative search volumes (RSV) of distinct keywords related to eating behavior were obtained for a timeframe spanning before and during the COVID-19 pandemic, from January 1, 2018 to December 13, 2020. During the global lockdown from March 11, 2020 to June 30, 2020, RSV curves exhibited short-term fluctuations of interest in multiple keywords related to eating behavior and its related factors, such as food purchasing, food security, food poisoning, panic buying, stocking up, health awareness, and mental illness. Spearman’s correlation analysis showed a strong correlation between daily confirmed cases and the examined keywords.
Univariate repeated-measures ANOVA followed by Bonferroni post-hoc tests revealed that during the year of the COVID-19 pandemic, people worldwide paid more attention to keywords concerning: (1) environmental and economic factors (unemployment: +269%, food shortage: +180%, food bank: +50%); (2) health and food-safety concerns (immunity: +138%, vitamin C: +90%, vitamin D: +55%, zinc: +47%, food storage containers: +40%, food packaging: +31%); (3) food choices and interests (local meat: +84%, frozen food: +67%, CSA: +65%, flour: +66%, bread: +53%, soybean oil: +45%, local fruit: +43%, canned tomato: +42%, refrigerated food: +41%, canned meat: +39%, pancake: +37%, cookie: +29%, butter: +29%, canned fish: +29%, liquor: +20%); (4) social and individual factors (take-out: +128%, delivery: +53%); (5) lifestyle factors (stationary bicycle: +110%, dumbbell: +89%, yoga mat: +84%, treadmill: +65%, grocery store: +51%); and (6) psychological factors (isolation: +113%). The COVID-19 pandemic and its lockdowns have had far-reaching effects on global concern with many factors related to human eating behavior. Swift action is needed to strengthen the resilience of the food supply chain, support and adapt to new normal behaviors, and mitigate the profound negative changes, especially for high-risk and vulnerable groups and food-insecure regions.
ARTICLE | doi:10.20944/preprints202008.0487.v1
Subject: Social Sciences, Geography Keywords: Twitter; data reliability; risk communication; data mining; Google Cloud Vision API
Online: 22 August 2020 (02:32:40 CEST)
While Twitter has been touted to provide up-to-date information about hazard events, the reliability of tweets is still a concern. Our previous publication extracted relevant tweets containing information about the 2013 Colorado flood event and its impacts. Using the relevant tweets, this research further examined the reliability (accuracy and trueness) of the tweets by examining the text and image content and comparing them to other publicly available data sources. Both manual identification of text information and automated (Google Cloud Vision API) extraction of images were implemented to balance accurate information verification and efficient processing time. The results showed that both the text and images contained useful information about damaged/flooded roads/street networks. This information will help emergency response coordination efforts and informed allocation of resources when enough tweets contain geocoordinates or locations/venue names. This research will help identify reliable crowdsourced risk information to enable near-real time emergency response through better use of crowdsourced risk communication platforms.
ARTICLE | doi:10.20944/preprints202212.0471.v1
Subject: Mathematics & Computer Science, Other Keywords: Deep learning; Convolutional Neural Networks; LSTM; MediaPipe; Google Cloud; Object detection; Classification
Online: 26 December 2022 (04:10:07 CET)
Median, an American Sign Language (ASL) interpretation software, is a web application capable of interpreting American Sign Language in real time, using an internet connection and a primary web camera, and covering basic phrases and letters. Extensive use of deep learning and neural networks, specifically convolutional neural networks, enables Median to interpret video inputs and generate accurate results displayed directly to the user in text format. The ultimate goal for Median is to act as a bridge between hearing people and members of the deaf community, allowing deaf people to communicate with non-signing people using American Sign Language. Furthermore, Median has been designed to benefit people who lack access to a human ASL translator, as its website format allows it to be accessed anywhere at any time, giving it greater availability than human interpreters. Median is designed to be a very versatile program with great potential for growth and expansion.
ARTICLE | doi:10.20944/preprints202210.0026.v1
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: online health information; digital literacy; e-Health; e-Health solutions; Dr. Google
Online: 5 October 2022 (03:55:03 CEST)
Investment in digital e-Health services is a priority in the development of global health care systems. While people increasingly use the Web for health information, it is not entirely clear what physicians’ attitudes are towards digital transformation and the acceptance of new technologies in healthcare. The aim of this cross-sectional survey study was to investigate physicians’ self-assessed digital skills and their opinions on patients obtaining health knowledge online, as well as to characterize physicians’ attitudes towards e-Health solutions. Principal Component Analysis (PCA) was performed to extract variables from a self-designed questionnaire, and a cross-sectional analysis compared descriptive statistics and correlations for dependent variables using one-way ANOVA (F-test). A total of 307 physicians participated in the study; most reported using the internet several times a day (66.8%). Most participants (70.4%) were familiar with new technologies and rated their e-Health literacy as high, although 84.0% reported a need for additional training in this field, and 75.9% saw a need to introduce more subjects shaping digital skills in medical studies. 53.4% of physicians perceived Internet-sourced information as sometimes reliable, and in general assessed the effects of its use by their patients negatively (41.7%). Digital skills increased significantly with frequency of internet use (F = 13.167; p = 0.0001) and decreased with physicians’ age and the need for training. Those who claimed that patients often experienced health benefits from online health information showed higher digital skills (-1.06). Physicians most often recommended that their patients obtain laboratory test results online (32.2%) and arrange medical appointments via the Internet (27.0%).
As physicians’ digital skills deteriorated, their recommendation of e-Health solutions decreased (r = 0.413), as did their assessment of e-Health solutions for the patient (r = 0.449). Physicians perceive digitization as a sign of the times and frequently use its tools in daily practice. Their evaluation of the “Dr. Google” phenomenon and online health is directly related to their own e-Health literacy skills, but practical training is still needed to deal with the digital revolution.
ARTICLE | doi:10.20944/preprints202207.0071.v1
Subject: Earth Sciences, Geoinformatics Keywords: Urban Mapping; Impervious Surface Area; Google Earth Engine; GISAI; Spectral Index; Landsat
Online: 5 July 2022 (10:07:01 CEST)
Impervious surface area (ISA) is a crucial indicator for quantitative urban studies. It is also important for land use/land cover classification, groundwater recharge, sustainable development, urban heat island effects, and more. Spectral ISA mapping suffers from mixed-pixel problems, especially with bare soil. This study aims to develop an ISA index for spatiotemporal urban mapping from common multispectral bands that reduces the soil signature better than previous studies. We propose a global impervious surface area index (GISAI) that enhances ISA mapping accuracy using the temporal dimension of the remote sensing (RS) dataset. Bare-soil spectral reflectance fluctuates more than that of urban ISA; therefore, the study uses minimum composites of earlier urban indices to compile a minimum soil signature, which is further improved by removing water, increasing the contrast between bare soil and urban ISA, and reducing bright bare-soil areas. The study maps the ISA of 12 megacities using the annual RS image collection for 2021. GISAI reduced the bare-soil signature and achieved an overall accuracy of 87.29%, an F1-score of 0.84, and a Kappa coefficient of 0.75. However, it has some limitations with grey bare soil and barren hilly areas. By limiting the bare-soil signature, GISAI broadens the scope of inter-urban studies globally and lengthens potential urban time-series analyses using common multispectral bands.
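The minimum-composite idea — bare soil fluctuates across the year while built-up surfaces stay consistently high, so a per-pixel annual minimum suppresses soil — can be sketched with NumPy. The index values and the 0.5 threshold below are illustrative assumptions, not GISAI's actual formulation.

```python
import numpy as np

# Synthetic stack of some urban index: (time, rows, cols), e.g. 23 16-day
# composites over one year. Values are invented for illustration.
rng = np.random.default_rng(11)
t, h, w = 23, 40, 40
stack = rng.uniform(0.2, 0.9, (t, h, w))                 # fluctuating bare soil
stack[:, :10, :10] = rng.uniform(0.7, 0.9, (t, 10, 10))  # stable urban patch

min_comp = stack.min(axis=0)     # per-pixel annual minimum composite
urban = min_comp > 0.5           # assumed, illustrative ISA threshold
print(urban[:10, :10].mean(), urban[10:, :].mean())
```

Because soil dips low at some point in the year while the urban patch never does, the minimum composite keeps the urban block bright and drives soil pixels below the threshold.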
ARTICLE | doi:10.20944/preprints202203.0064.v1
Subject: Behavioral Sciences, Other Keywords: Computer vision; Google Street View; Built Environment; Walkability; Micro-scale; Deep learning
Online: 3 March 2022 (13:49:08 CET)
The study purpose was to train and validate a deep-learning approach to detect micro-scale streetscape features related to pedestrian physical activity. This work innovates by combining computer vision techniques with Google Street View (GSV) images to overcome impediments to conducting audits (e.g., time, safety, and expert labor cost). The EfficientNetB5 architecture was used to build deep-learning models for eight micro-scale features guided by the Microscale Audit of Pedestrian Streetscapes-Mini tool: sidewalks, sidewalk buffers, curb cuts, zebra and line crosswalks, walk signals, bike symbols, and streetlights. We used a train-correct loop, whereby models were trained on a training dataset, evaluated using a separate validation dataset, and trained further until acceptable performance metrics were achieved. Further, we used the trained models to audit participant (N=512) neighborhoods in the WalkIT Arizona trial. Correlations were explored between micro-scale features and GIS-measured and participant-reported macro-scale walkability. Classifier precision, recall, and overall accuracy were all >84%. The total micro-scale score was associated with overall macro-scale walkability (r=0.300, p<.001). Positive associations were found between model-detected and self-reported sidewalks (r=0.41, p<.001) and sidewalk buffers (r=0.26, p<.001). The computer vision model results suggest an alternative to trained human raters, allowing audits of hundreds or thousands of neighborhoods for population surveillance or hypothesis testing.
ARTICLE | doi:10.20944/preprints202106.0473.v1
Subject: Biology, Anatomy & Morphology Keywords: bleaching; coral reef; environmental stress; Google Earth Engine; monitoring; remote sensing; satellite
Online: 18 June 2021 (10:43:50 CEST)
Coral reefs are critical ecosystems globally for marine fauna and biodiversity and through the services they provide to humanity. However, they are significantly threatened by anthropogenic stressors, such as climate change. By combining 9 environmental variables with ecological and health-based thresholds obtained from the available literature, we develop, using fuzzy logic (discontinuous functions), a Coral Reef Stress Exposure Index (CRSEI) for remotely monitoring coral reef exposure to environmental stressors. Our approach capitalises on the abundance of satellite Earth Observation (EO) data readily available in the Google Earth Engine (GEE) cloud-based geospatial processing platform. CRSEI values from 3157 distinct reefs were generated and mapped across 12 important coral reef ecosystem regions. Quantitative analyses indicated that the index detected significant temporal differences in stress and was, therefore, able to capture historic change at a global scale. We also applied the CRSEI to three case-study reef ecosystems previously well monitored for stress and disturbance using other methods. Principal component analysis (PCA) indicated that depth, current, sea surface temperature (SST) and SST anomaly accounted for the greatest contribution to the variance in stress in these three regions. The CRSEI corroborated temporal and spatial differences in stress exposure from known disturbances within these reference regions, in addition to identifying the potential drivers of inter- and intra-region differences in stress, namely depth, degree heating weeks and SST anomaly. We discuss how the index can be further improved in future with site-specific thresholds for each stress variable, and the incorporation of additional variables not currently available in GEE.
This index provides an open access tool, built around a free and powerful processing platform, that has broad potential to assist in the regular monitoring of our increasingly imperilled coral reef ecosystems, and, in particular, those that are remote or inaccessible.
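The threshold-based fuzzy scoring described above can be illustrated with a minimal ramp membership function. The variable thresholds and the equal-weight mean below are hypothetical, not the CRSEI's actual parameters:

```python
import numpy as np

def stress_membership(x, low, high):
    """Map a variable to a 0-1 stress score: 0 below `low`,
    1 above `high`, linear in between (a simple fuzzy ramp)."""
    return float(np.clip((x - low) / (high - low), 0.0, 1.0))

# Hypothetical thresholds for two stressors (not CRSEI's actual values).
sst_stress = stress_membership(29.5, low=28.0, high=31.0)   # sea surface temp, deg C
dhw_stress = stress_membership(6.0, low=0.0, high=8.0)      # degree heating weeks

# Aggregate per-reef stress as the mean membership across variables.
crsei_like = (sst_stress + dhw_stress) / 2
```

With literature-derived `low`/`high` thresholds per variable, the same pattern scales to the nine variables the index combines.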
ARTICLE | doi:10.20944/preprints202105.0389.v1
Subject: Earth Sciences, Atmospheric Science Keywords: forest degradation; NDFI index; multitemporal analysis; Continuous Degradation Detection; Google Earth Engine
Online: 17 May 2021 (13:30:45 CEST)
The goal of this study was to analyze forest degradation in the Reserve for San Rafael National Park, Paraguay, during the period 2005-2019. This Reserve is one of the most important forest remnants of the Upper Paraná Atlantic Forest Ecoregion. A multitemporal analysis of degradation caused by three disturbances (forest fires, a twister, and illicit crops) was carried out using the Continuous Degradation Detection (CODED) algorithm, considering per-pixel variations in NDFI index values before, during, and after every registered disturbance. The phenomenon with the greatest impact in terms of magnitude of degradation was the forest fires of 2005, which also made that year the one with the highest degradation values; second were the illicit crops established up to the first semester of 2019, and last the twister that occurred in 2017. Our findings demonstrate that the CODED algorithm can detect multi-temporal degradation events in a Subtropical Broadleaf Forest, and that the post-disturbance regeneration process tends to begin immediately after every disturbance. The response in terms of degradation-regeneration is highly variable, depending on the nature and severity of each disturbance and the vegetation recovery dynamics.
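For reference, the NDFI used by CODED is conventionally computed from spectral-unmixing fractions; the fraction values below are illustrative, and this generic sketch is not the CODED implementation itself:

```python
def ndfi(gv, npv, soil, shade):
    """Normalized Difference Fraction Index from unmixing fractions:
    green vegetation (gv), non-photosynthetic vegetation (npv),
    soil, and shade."""
    gv_shade = gv / (1.0 - shade)          # shade-normalized green vegetation
    return (gv_shade - (npv + soil)) / (gv_shade + npv + soil)

# Intact forest: high GV, little NPV/soil -> NDFI near 1.
intact = ndfi(gv=0.6, npv=0.02, soil=0.02, shade=0.3)
# Degraded pixel: more exposed soil and dead vegetation -> NDFI drops.
degraded = ndfi(gv=0.3, npv=0.2, soil=0.2, shade=0.2)
```

The before/during/after comparison the study describes amounts to tracking this per-pixel value through the disturbance date.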
ARTICLE | doi:10.20944/preprints202008.0042.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Reputation; Android; application; sentiment analysis; reviews; security service; NLP; Google Play; polarity
Online: 2 August 2020 (15:49:51 CEST)
To keep its business reliable, Google strives to ensure the quality of apps on the store. One crucial aspect of quality is security, which is pursued through Google Play Protect and anti-malware solutions. However, these are not fully effective, since they rely on application features and application execution threads. Google provides additional elements that enable consumers to evaluate applications collectively, sharing their experiences via reviews or expressing their satisfaction through ratings. The latter is more informal and hides the details behind a rating, whereas the former is textually expressive but requires further processing to understand the opinions behind it. The literature lacks approaches that mine reviews through sentiment analysis to extract information useful for improving the security aspects of provided applications. This work goes in that direction and, in a fine-grained way, investigates reviews in terms of confidentiality, integrity, availability, and authentication (CIAA). Assuming that reviews are reliable and not fake, the proposed approach determines review polarities based on CIAA-related keywords. We rely on the popular Naive Bayes classifier to classify reviews into positive, negative, and neutral sentiment. We then provide an aggregation model that fuses the different polarities to obtain global and per-CIAA-service application reputations. Quantitative experiments were conducted on 13 applications, including e-banking, live messaging, and anti-malware apps, with a total of 1,050 security-related reviews and 7,835,322 functionality-related reviews. Results show that 23% of the applications (3 apps) have a reputation greater than 0.5, with an accent on integrity, authentication, and availability, while the remaining 77% score under 0.5. Developers should therefore invest more effort in security while writing code, particularly to improve confidentiality reputation.
Results also show that applications with a good functionality-related reputation generally have a bad security-related reputation. Even if the number of security reviews is low, this does not mean that security is not a consumer preoccupation; rather, developers spend much more time testing whether applications work without errors, even if they include possible security vulnerabilities. A quantitative comparison against well-known rating systems reveals the effectiveness and robustness of CIAA-RepDroid in rating apps in terms of security. CIAA-RepDroid can be combined with existing rating solutions to recommend to developers the exact CIAA aspects to improve within their source code.
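The polarity-fusion step can be sketched as a simple counting model. The half-weight for neutral reviews and the example labels are assumptions for illustration, not the paper's exact aggregation model:

```python
from collections import Counter

def reputation(polarities):
    """Aggregate review polarities ('pos', 'neg', 'neu') into a
    [0, 1] reputation score; neutral reviews carry half weight."""
    c = Counter(polarities)
    scored = c["pos"] + c["neg"] + c["neu"]
    if scored == 0:
        return 0.5  # no evidence either way
    return (c["pos"] + 0.5 * c["neu"]) / scored

# Hypothetical per-CIAA-service review polarities for one app.
ciaa_reviews = {
    "confidentiality": ["neg", "neg", "pos"],
    "integrity": ["pos", "pos", "neu"],
}
ciaa_reputation = {svc: reputation(p) for svc, p in ciaa_reviews.items()}
# Global security reputation: mean over the CIAA services.
global_reputation = sum(ciaa_reputation.values()) / len(ciaa_reputation)
```

A classifier such as Naive Bayes would supply the per-review polarity labels; the fusion layer then rolls them up per CIAA service and globally.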
ARTICLE | doi:10.20944/preprints202003.0332.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: reputation; Android; application; sentiment analysis; comments; security service; NLP; Google Play; polarity
Online: 23 March 2020 (04:22:23 CET)
Comments are exploited by product vendors to measure consumer satisfaction. With the advent of Natural Language Processing (NLP), comments on Google Play can be processed to extract knowledge about applications, such as their reputation. Existing proposals in that direction are either informal or interested merely in functionality. In contrast, this work aims to determine the reputation of Android applications in terms of confidentiality, integrity, availability, and authentication (CIAA). It proposes a model for assessing app reputation that relies on sentiment analysis and text analysis of comments. Assuming that comments are reliable, we collect Google Play applications whose comments include security keywords. An in-depth keyword analysis based on Naive Bayes classification provides the polarity of each comment, from which the reputation of the whole application is evaluated. Experiments on real applications, with from dozens to billions of comments, reveal that developers fail to make sufficient efforts to guarantee CIAA services. A fine-grained analysis shows that applications without a good overall security reputation can still be well reputed for specific CIAA services. Results also show that applications with negative security polarities generally display positive functional polarities. This suggests that security checking should include careful comment analysis to improve the security of applications.
ARTICLE | doi:10.20944/preprints201902.0123.v1
Subject: Medicine & Pharmacology, Dentistry Keywords: Medical Illiteracy, Public Awareness, Periodontal Diseases, Global Burden of Disease, Google Trends
Online: 13 February 2019 (15:54:04 CET)
Background: The progression of periodontal diseases at the Portuguese national level and its public awareness are of great interest, mainly due to the high burden of periodontitis. Objectives: To evaluate the progression of periodontal disease prevalence in Portugal and the corresponding public awareness between 2004 and 2017, using data from the Global Burden of Disease (GBD), the Directorate-General of Health (DGH) and Google® Trends (GT). Methods: For the period 2004-2017, Portuguese national data on periodontal disease prevalence were searched in the Institute for Health Metrics and Evaluation of the GBD and the DGH; for public awareness, the GT comparison tool was used for trends between the Portuguese words for “Periodontitis”, “Gingivitis”, “Gums” and “Periodontal disease”. Results: For the period 2004-2017, the overall prevalence of periodontitis increased slightly from 11.3% to 11.7%. During that period the GT search term “Gums” (“Gengivas”) was the most relevant; it increased steadily over time while the search term “Periodontal disease” (“Doença periodontal”) decreased, with these search trends being significantly correlated.
ARTICLE | doi:10.20944/preprints202003.0249.v1
Subject: Mathematics & Computer Science, Other Keywords: machine learning; preprocessing; semantic analysis; text mining; TF/IDF; scraping; Google Play Store
Online: 11 August 2020 (08:14:10 CEST)
It is quite clear that almost everybody around the world uses Android apps: half of the population of this planet is engaged with messaging, social media, gaming, and browsers. This online marketplace provides free and paid access to users, who are encouraged to download countless applications belonging to predefined categories on the Google Play Store. In this research paper, we scraped thousands of user reviews and app ratings: 506,259 reviews of 148 apps from 14 categories of the Google Play Store. We then checked the semantics of the reviews to determine whether they are positive, negative, or neutral. We evaluated the results using different machine learning algorithms, namely Naïve Bayes, Random Forest, and Logistic Regression. We calculated Term Frequency (TF) and Inverse Document Frequency (IDF) weights, compared the statistical results of the algorithms in terms of accuracy, precision, recall, and F1, and visualized these results as bar charts. The analysis of each algorithm was performed one by one and the results compared. We found that Logistic Regression is the best algorithm for review analysis of the Google Play Store, achieving the highest precision, accuracy, recall, and F1 on this dataset after preprocessing.
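The TF/IDF weighting mentioned above can be sketched in pure Python. The smoothed-IDF variant used here is an assumption, since the abstract does not give the exact formula:

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency x inverse document frequency for a list of
    tokenized reviews. Uses tf = count / doc_length and a smoothed
    idf = ln((1 + N) / (1 + df)) + 1."""
    n = len(docs)
    # Document frequency: in how many reviews each term appears.
    df = Counter(t for doc in docs for t in set(doc))
    out = []
    for doc in docs:
        counts = Counter(doc)
        out.append({t: (c / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
                    for t, c in counts.items()})
    return out

reviews = [["great", "app"], ["great", "great", "ui"], ["bad", "app"]]
weights = tfidf(reviews)
```

These per-review weight vectors are what a classifier such as Logistic Regression would be trained on.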
TECHNICAL NOTE | doi:10.20944/preprints202003.0038.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Okavango Delta; inundation maps; inundation extent; Landsat; Google Earth Engine; automated time series
Online: 3 March 2020 (11:25:49 CET)
Accurate inundation maps for flooded wetlands and rivers are a critical resource for their management and conservation. In this paper we automate a method (thresholding of the short-wave infrared band) for classifying inundation, using Landsat imagery and Google Earth Engine. We demonstrate the method in the Okavango Delta, northern Botswana, a complex case study due to the spectral overlap between inundated areas covered with aquatic vegetation and dryland vegetation classes on satellite imagery. Inundation classifications in the Okavango Delta have predominantly been implemented on broad spatial resolution images. We present the longest time series to date (1990-2019) of inundation maps at high spatial resolution (30 m) for the Okavango Delta. We validated the maps using image-based and in situ accuracy assessments, with accuracy ranging from 91.5 to 98.1%. Use of Landsat imagery resulted in consistently lower estimates of inundation extent than previous studies, likely due to the increased number of mixed pixels that occur when using broad spatial resolution imagery, which can lead to overestimations of the size of inundated areas. We provide the inundation maps and Google Earth Engine code for public use.
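An automatic choice of the SWIR threshold can be sketched with Otsu's method, one standard option; the abstract does not state which threshold rule is used, so this is illustrative:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold that maximizes between-class
    variance of the histogram (here, of SWIR reflectance)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    cum = np.cumsum(hist)
    w0 = cum / total                      # weight of class 0 per cut
    w1 = 1.0 - w0
    mu = np.cumsum(hist * centers)
    mu0 = mu / np.maximum(cum, 1)         # mean of class 0
    mu1 = (mu[-1] - mu) / np.maximum(total - cum, 1)  # mean of class 1
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic bimodal SWIR sample: water is dark, dry land is bright.
rng = np.random.default_rng(1)
swir = np.concatenate([rng.normal(0.05, 0.01, 500),   # inundated
                       rng.normal(0.30, 0.03, 500)])  # dry land
t = otsu_threshold(swir)
water_mask = swir < t
```

Because water absorbs strongly in the SWIR, pixels below the automatically chosen threshold are classified as inundated.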
ARTICLE | doi:10.20944/preprints201911.0218.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Landsat; Google Earth; water index; unsupervised image classification; supervised image classification; Kappa coefficient
Online: 19 November 2019 (03:10:17 CET)
To address three important issues in the extraction of water features from Landsat imagery, namely the selection of water indexes, the selection of classification algorithms for image classification, and the collection of ground truth data for accuracy assessment, this study applied four sets (ultra-blue, blue, green, and red light based) of water indexes (NDWI, MNDWI, MNDWI2, AWEIns, and AWEIs) combined with three types of image classification methods (zero-water-index threshold, Otsu, and kNN) to 24 selected lakes across the globe to extract water features from Landsat-8 OLI imagery. The 1440 (4 × 5 × 3 × 24) image classification results were compared, by computing Kappa coefficients, with water features extracted from high-resolution Google Earth images with the same (or ±1 day) acquisition dates. Results show the kNN method is better than the Otsu method, and the Otsu method is better than the zero-water-index threshold method. If computational cost is not an issue, the kNN method combined with the ultra-blue-light-based AWEIns is the best method for extracting water features from Landsat imagery, as it produced the highest Kappa coefficients; if computational cost is taken into account, the Otsu method is a good choice. AWEIns and AWEIs are better than NDWI, MNDWI, and MNDWI2. AWEIns works better than AWEIs under the Otsu method, and the average rank of image classification accuracy from high to low is the ultra-blue, blue, green, and red light-based AWEIns.
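The indices compared above can be evaluated per pixel from reflectance bands. The reflectance values below are illustrative, and the coefficients follow the commonly published green-band formulas (McFeeters NDWI, Xu MNDWI, Feyisa AWEI), which should be checked against the paper's exact band variants:

```python
def water_indices(blue, green, nir, swir1, swir2):
    """Standard water indices from Landsat reflectance bands."""
    ndwi = (green - nir) / (green + nir)            # NDWI (McFeeters)
    mndwi = (green - swir1) / (green + swir1)       # MNDWI (Xu)
    awei_nsh = 4 * (green - swir1) - (0.25 * nir + 2.75 * swir2)
    awei_sh = blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2
    return {"NDWI": ndwi, "MNDWI": mndwi,
            "AWEInsh": awei_nsh, "AWEIsh": awei_sh}

# Typical reflectances: water absorbs strongly in the NIR and SWIR.
water = water_indices(blue=0.06, green=0.08, nir=0.03, swir1=0.02, swir2=0.01)
land = water_indices(blue=0.08, green=0.10, nir=0.30, swir1=0.25, swir2=0.20)
```

All four indices come out positive for the water pixel and negative for the land pixel, which is why a zero (or Otsu-chosen) threshold on the index separates the two classes.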
ARTICLE | doi:10.20944/preprints201811.0260.v2
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: evidence-based dentistry; public health dentistry; google trends; real-time analytics; predictive analytics
Online: 16 November 2018 (10:34:04 CET)
BACKGROUND Epidemiological sciences have been evolving at an exponential rate, paralleled only by the comparable growth within the discipline of data science. Digital epidemiological studies have played a vital role in medical science analytics over the past few decades. To date, there are no published attempts at deploying real-time analytics in connection with the disciplines of Dentistry or Medicine. AIMS AND OBJECTIVES We deployed a real-time statistical analysis of topics in Dental Anatomy and Dental Pathology represented by the maxillary sinus, the posterior maxillary teeth, and related oral pathology. The purpose is to infer the digital epidemiology from a continuous stream of raw data retrieved from the Google Trends database. MATERIALS AND METHODS Statistical analysis was carried out via Microsoft Excel 2016 and SPSS version 24. The Google Trends database was used to retrieve data for digital epidemiology. Real-time analytics and statistical inference were based on a script written in the Python high-level programming language. A systematic review of the literature was carried out via the PubMed-NCBI, Cochrane Library, and Elsevier databases. RESULTS The comprehensive review of the literature, based on specific keyword searches, yielded 491,813 published studies, distributed as 488,884 (PubMed-NCBI), 1,611 (Cochrane Library), and 1,318 (Elsevier). However, no single study attempted real-time analytics. Nevertheless, we succeeded in achieving an automated real-time stream of data accompanied by statistical inference based on data extrapolated from Google Trends. CONCLUSION Real-time analytics can be of considerable impact when implemented in the biological and life sciences, as they will tremendously reduce the resources required for research.
Predictive analytics, based on artificial neural networks and machine learning algorithms, can be the next step to be deployed in continuation of the real-time systems to prognosticate changes in the temporal trends and the digital epidemiology of phenomena of interest.
ARTICLE | doi:10.20944/preprints201809.0522.v1
Subject: Earth Sciences, Environmental Sciences Keywords: drought; NDVI; ENSO; wavelet; time series analysis; Hluhluwe-iMfolozi Park; Google Earth Engine
Online: 26 September 2018 (15:53:40 CEST)
ARTICLE | doi:10.20944/preprints202007.0646.v1
Subject: Keywords: Machine Learning; Natural Language Processing; Text Mining; Semantic Analysis; Scraping; Google Play Store; Rating
Online: 26 July 2020 (17:11:09 CEST)
The Google Play Store allows users to download mobile applications (apps), and users are guided by an app's rating and reviews. Recent studies show that user preferences, user opinions for improvement, user sentiment about particular features, and detailed descriptions of experiences are very useful to application developers. However, the volume of application reviews is very large and difficult to process manually, and the star rating applies to the whole application, so a developer cannot analyze a single feature. In this research, we scraped 282,231 user reviews through different data scraping techniques and applied text classification to them. We applied different algorithms, computed their precision, accuracy, F1 score, and recall, and from the evaluated results identified the best algorithm.
ARTICLE | doi:10.20944/preprints201808.0154.v2
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: deep learning; multiple instance learning; weakly supervised learning; demography; socioeconomic analysis; google street view
Online: 24 October 2018 (08:53:26 CEST)
(1) Background: Evidence-based policymaking requires data about the local population's socioeconomic status (SES) at a detailed geographical level; however, such information is often not available, or is too expensive to acquire. Researchers have proposed solutions to estimate SES indicators by analyzing Google Street View images, but these methods are also resource-intensive, since they require large volumes of manually labeled training data. (2) Methods: We propose a methodology for automatically computing surrogate variables of SES indicators using street images of parked cars and deep multiple instance learning. Our approach does not require any manually created labels, apart from data already available from statistical authorities, while the entire pipeline for image acquisition, parked car detection, car classification, and surrogate variable computation is fully automated. The proposed surrogate variables are then used in linear regression models to estimate the target SES indicators. (3) Results: We implement and evaluate a model based on the proposed surrogate variable in 30 municipalities of varying SES in Greece. Our model has $R^2=0.76$ and a correlation coefficient of $0.874$ with the true unemployment rate, while it achieves a mean absolute percentage error of $0.089$ and mean absolute error of $1.87$ on a held-out test set. Similar results are also obtained for other socioeconomic indicators, related to education level and occupational prestige. (4) Conclusions: The proposed methodology can be used to estimate SES indicators at the local level automatically, using images of parked cars detected via Google Street View, without the need for any manual labeling effort.
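The final regression step, fitting a surrogate variable against a target SES indicator and reporting R², can be sketched with NumPy. The numbers below are synthetic, not the Greek municipality data:

```python
import numpy as np

# Hypothetical surrogate (e.g., a car-derived score per area)
# against a target SES indicator (e.g., unemployment rate, %).
surrogate = np.array([1.2, 1.8, 2.1, 2.9, 3.4, 4.0])
target = np.array([18.0, 15.5, 14.2, 11.0, 9.1, 7.2])

# Ordinary least squares fit of a line, then R^2 from residuals.
slope, intercept = np.polyfit(surrogate, target, 1)
predicted = slope * surrogate + intercept
ss_res = np.sum((target - predicted) ** 2)
ss_tot = np.sum((target - target.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

A strong linear relationship between surrogate and indicator (here a clearly negative slope) is what makes the surrogate usable as an automatic SES estimator.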
ARTICLE | doi:10.20944/preprints201901.0302.v1
Subject: Earth Sciences, Geoinformatics Keywords: interoperability; digital elevation model; Google Sketchup; geographical information systems-science; free and open source software
Online: 30 January 2019 (05:28:53 CET)
Data creation is often the only way for researchers to produce basic geospatial information for the pursuit of more complex tasks and procedures, such as those that lead to the production of new data for studies concerning river basins, slope morphodynamics, applied geomorphology and geology, urban and territorial planning, and detailed studies in, for example, architecture and civil engineering. This exercise results from a reflection on how specific data processing tasks executed in Google Sketchup (Pro version, 2018) can be used in a context of interoperability with Geographical Information Systems (GIS) software. The focus is on the production of contour lines and Digital Elevation Models (DEM) using an innovative sequence of tasks and procedures in both environments (GS and GIS). It starts in the Google Sketchup (GS) graphic interface with the selection of a satellite image of the study area, which can be anywhere on the Earth's surface; subsequent processing steps lead to the production of elevation data at the selected scale and equidistance. This new data must be exported to GIS software in vector formats such as Autodesk Design Web format (DWG) or Autodesk Drawing Exchange format (DXF). In this essay, open-source GIS software (gvSIG and QGIS) was used. Correcting the original SHP by removing the “data noise” that resulted from DXF file conversion permits the author to create new, clean vector data in SHP format and, at a later stage, generate DEM data. This means that new elevation data becomes available using simple but intuitive and interoperable procedures and techniques, which configures a costless workflow.
ARTICLE | doi:10.20944/preprints202110.0202.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Inland saline wetland; lake; ecosystem; biodiversity; human interventions; Google Earth Engine; Normalized Difference Water Index; Restoration
Online: 13 October 2021 (13:09:59 CEST)
Globally, saline lakes, which account for 23% of all lakes by area and 44% by volume, might desiccate by 2025 due to agricultural diversion, illegal encroachment, pollution, and invasive species. India’s largest saline lake, Sambhar, is currently shrinking at a rate of 4.23% due to illegal saltpan encroachment. This research article aims to identify the trend of migratory birds and the monthly wetland status. Bird surveys were conducted in 2019, 2020, and 2021 and combined with literature data from 1994, 2003, and 2013 to analyze visiting trends, feeding habits, the migratory-to-resident ratio, and ecological diversity indices. The Normalized Difference Water Index (NDWI) was scripted in Google Earth Engine. Results show that the lake has been suitable for 97 species. The highest NDWI value over the whole study period was 0.71 in 2021 and the lowest 0.008 in 2019, indicating strong fluctuation. The decreasing trend of migratory birds, coupled with the decreasing water level, indicates a precarious outlook for the lake's existence; if the causal factors are not checked, it might completely desiccate by 2059 according to its future prediction. Certain steps that might help conservation are suggested. At the least, the cost of restoration might exceed the revenue generated.
ARTICLE | doi:10.20944/preprints202301.0231.v1
Subject: Earth Sciences, Geology Keywords: NDVI; SAR; change detection; Norway; Sentinel-1; Sentinel-2; deep learning; U-Net; CCDC; Google Earth Engine
Online: 13 January 2023 (02:00:25 CET)
Landslide risk mitigation is limited by data scarcity, which could be improved by continuous landslide detection systems. To investigate which image types and machine learning (ML) models are most useful for landslide detection in a Norwegian setting, we compared the performance of five ML models on the Jølster case study (30 July 2019) in Western Norway. These included three globally pre-trained models: i) the Continuous Change Detection and Classification (CCDC) algorithm, ii) a combined k-means clustering and Random Forest classification model, and iii) a convolutional neural network (CNN); and two locally trained models: iv) Classification and Regression Trees and v) a U-Net CNN model. Input images included Sentinel-1, Sentinel-2, a digital elevation model (DEM), and slope. The globally trained models performed poorly in shadowed areas and were all outperformed by the locally trained models. A maximum Matthews correlation coefficient (MCC) score of 89% was achieved with model v, using combined Sentinel-1 and -2 images as input. This is one of the first attempts to apply deep learning to detect landslides with both Sentinel-1 and -2 images. Using Sentinel-1 images only, the locally trained deep-learning model significantly outperformed the conventional ML model. These findings contribute towards developing a national continuous monitoring system for landslides.
ARTICLE | doi:10.20944/preprints202008.0053.v1
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: Google Trend; Particulate Matter; National Ambient Air Quality Monitoring Information System; Chronic obstructive pulmonary disease; Big Data
Online: 2 August 2020 (18:29:51 CEST)
Depending on the characteristics of an industrial area, air pollution may directly cause cancer, motivating toxicity evaluation for the human body, risk assessment, and health impact assessment. Environmental data were collected from August 2018 to 31 January 2019, including the average, minimum, and maximum values of the air pollution data. According to global trend data obtained using Big Data, high blood pressure ranks 33rd in the world, and myocardial infarction among the environmental diseases is confirmed to be lower than in Korea. Diseases occurring in the Jeolla province industrial complex were identified as representative, considering the characteristics of the country, and air pollutants are considered to be causes of allergic diseases in Korea. PM10 was found to be higher than in the control area (28.8804348, 31.7065217, and 32.8532609 µg/m³). The mean concentrations of PM2.5 in the middle- and high-exposure areas were lower than those of the control areas, but were highest in the intermediate-exposure areas, at 16.5978261, 16.1086957, and 17.1847826 µg/m³, respectively. The major variables of environmental exposure in Yeosu were confirmed to be correlated with high blood pressure, chronic obstructive pulmonary disease (COPD), bronchitis, cerebrovascular disease, diabetes, thyroid disease, sinus infection, anemia, and pneumonia.
CONCEPT PAPER | doi:10.20944/preprints201909.0016.v1
Subject: Earth Sciences, Geoinformatics Keywords: land cover; classification Spatial and temporal Analysis; forest cover; Google Earth Engine (GEE); MODIS; Landsat; NOAA AVHRR
Online: 2 September 2019 (04:51:15 CEST)
TECHNICAL NOTE | doi:10.20944/preprints202208.0484.v1
Subject: Earth Sciences, Oceanography Keywords: remote sensing; ocean color; Google Earth Engine; MODIS/Aqua, SGLI/GCOM-C, swath reprojection, Earth Engine data ingestion
Online: 29 August 2022 (10:09:58 CEST)
Data from ocean color (OC) remote sensing are considered a cost-effective tool for the study of biogeochemical processes globally. Satellite-derived chlorophyll, for instance, is considered an Essential Climate Variable, since it is helpful in detecting climate change impacts. Google Earth Engine (GEE) is a planetary-scale tool for remote sensing data analysis. Combined with OC data, such tools allow water quality monitoring at an unprecedented spatial and temporal scale. Although OC data have been routinely collected at medium (~1 km) and, more recently, at high (~250 m) spatial resolution, only coarse-resolution (≥4 km) data are available in GEE, making them unattractive for applications in coastal regions. Data reprojection is needed before OC data can be made readily available in GEE. In this paper, we introduce a simple but practical procedure to reproject and ingest OC data into GEE. The procedure is applicable to OC swath (Level-2) data and is easily adaptable to higher-level products. The results showed consistent distributions between swath and reprojected data, building confidence in the introduced framework. The study aims to start a discussion on making high-resolution OC data readily available in GEE.
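The reprojection idea, mapping irregular swath pixels onto a regular grid, can be sketched as simple drop-in-the-bucket binning. The grid, coordinates, and averaging rule here are illustrative, not the paper's actual ingestion procedure:

```python
import numpy as np

def grid_swath(lat, lon, value, lat_edges, lon_edges):
    """Average swath pixels falling into each regular grid cell
    (a simple drop-in-the-bucket reprojection)."""
    rows = np.digitize(lat, lat_edges) - 1
    cols = np.digitize(lon, lon_edges) - 1
    ny, nx = len(lat_edges) - 1, len(lon_edges) - 1
    ok = (rows >= 0) & (rows < ny) & (cols >= 0) & (cols < nx)
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (rows[ok], cols[ok]), value[ok])
    np.add.at(count, (rows[ok], cols[ok]), 1)
    # Cells with no swath samples stay NaN.
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Three chlorophyll samples; two land in the same 1-degree cell.
lat = np.array([10.2, 10.4, 11.5])
lon = np.array([120.1, 120.3, 121.7])
chl = np.array([0.5, 0.7, 1.0])
grid = grid_swath(lat, lon, chl, np.arange(10, 13), np.arange(120, 123))
```

Once the Level-2 values sit on a regular grid, the result can be written as a georeferenced raster and ingested as a GEE asset.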
ARTICLE | doi:10.20944/preprints202004.0073.v2
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: SARS-CoV-2; COVID-19; SEIR modeling; Italy; stochastic modeling; swarm intelligence; Google COVID 19 Community Mobility Reports
Online: 5 May 2020 (16:10:48 CEST)
We applied a generalized SEIR epidemiological model to the recent SARS-CoV-2 outbreak in the world, with a focus on Italy and its Lombardia, Piemonte, and Veneto regions. We focus on the application of a stochastic approach to fitting the model's numerous parameters using a Particle Swarm Optimization (PSO) solver, to improve the reliability of predictions in the medium term (30 days). We analyze the official data and the predicted evolution of the epidemic in the Italian regions, and we compare the results with data and predictions for Spain and South Korea. We link the model equations to changes in people’s mobility, with reference to Google’s COVID-19 Community Mobility Reports. We discuss the effectiveness of policies taken by different regions and countries and their impact on past and future infection scenarios.
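The deterministic core of an SEIR model can be sketched with simple Euler integration. The parameters below are illustrative, not the PSO-fitted values, and the paper's stochastic generalization and mobility coupling are omitted:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One Euler step of the classic SEIR equations with a
    normalized population (s + e + i + r = 1)."""
    ds = -beta * s * i                 # susceptible -> exposed
    de = beta * s * i - sigma * e      # exposed -> infectious
    di = sigma * e - gamma * i         # infectious -> removed
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# Illustrative parameters: transmission, incubation, and recovery rates.
beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10
state = (0.999, 0.0, 0.001, 0.0)   # s, e, i, r
for _ in range(30):                 # 30-day horizon, as in the study
    state = seir_step(*state, beta, sigma, gamma)
```

In the fitting stage described above, a PSO solver would search this parameter space (beta, sigma, gamma, plus any extra compartments) to minimize the misfit against the official case data.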
ARTICLE | doi:10.20944/preprints202001.0023.v1
Subject: Earth Sciences, Geophysics Keywords: Land Use Land Cover (LULC); Land Surface Temperature (LST); Google Earth Engine (GEE); relationship; remote sensing indices; MODIS; global
Online: 3 January 2020 (05:03:05 CET)
Land Surface Temperature (LST) and Land Use Land Cover (LULC) are principal aspects of climate and environmental studies. The objective of the study is to assess the spatial relationship between LST and remote sensing LULC indices at the global and continental scale. Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua daytime LST and eight MODIS LULC indices for 2018 were prepared and processed using the Earth Engine Code Editor. R-squared and significance values of the relationship at randomly selected points were computed in the R program. The research observed that the relationship between the examined indices and LST is significant at the 0.001 level. The Normalized Difference Water Index (NDWI) and the Normalized Difference Snow Index (DSI) are the dominant drivers of LST in the world, Asia, and North America. In Australia and Africa, the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) are the dominant drivers of LST. Albedo and the Normalized Difference Soil Index (NDSI) are dominant in Central America. In South America and Europe, the dominant driver of LST is NDWI. The relationship between albedo and LST is moderately inverse on a global scale. The observed relationship between LST and the examined vegetation indices is positive in Europe and North America, while inverse in Australia and Africa. All observed relationships between the Normalized Difference Built-up Index (NDBI) and LST are positive. The association observed between NDSI and LST is positive in Australia, Africa, and Central America.
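The index-versus-LST regression described above can be sketched as follows; the synthetic reflectances and the assumed inverse NDVI–LST relation are for illustration only (the study itself used MODIS products in the Earth Engine Code Editor and computed statistics in R).

```python
import numpy as np
from scipy.stats import linregress

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Synthetic sample points standing in for randomly selected MODIS pixels.
rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.2, 500)
nir = rng.uniform(0.3, 0.6, 500)
idx = ndvi(nir, red)

# Assumed inverse relation: higher vegetation cover, cooler daytime LST (K).
lst = 310.0 - 25.0 * idx + rng.normal(0.0, 1.0, 500)

fit = linregress(idx, lst)          # slope, intercept, r, p, stderr
r_squared = fit.rvalue ** 2         # strength of the LST-index relationship
```

The same pattern (compute index, regress against LST, inspect R² and the p-value against the 0.001 level) would be repeated per index and per continent.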
ARTICLE | doi:10.20944/preprints201901.0083.v1
Subject: Earth Sciences, Geoinformatics Keywords: earthquake; anomaly detection; Google Earth Engine; outliers; interquartile range (IQR); multiparameter; brightness temperature (BT); latent heat flux (LE); land surface temperature; wind speed
Online: 9 January 2019 (11:59:14 CET)
One of the most destructive natural disasters is the earthquake, which poses enormous risks to humankind. The objective of the current study was to determine spatiotemporal anomalies in remote sensing multiparameters (i.e., land surface temperature (LST), air temperature, specific humidity, precipitation, and wind speed) for many earthquake samples that occurred during 2018 around the world. In this research, 11 earthquakes (M > 6.0) were studied (4 samples on land under clear-sky conditions, 3 samples on land under cloudy conditions, and 4 marine earthquakes). The interquartile range (IQR) and mean ± 2σ methods were utilized to improve the detection of anomalous differences. As a result, based on the IQR method, a negative anomaly before the event was detected during the daytime in Mexico and during the nighttime in Afghanistan. In addition, a negative outlier of brightness temperature (BT) was detected in Alaska before, during, and after the event. In contrast, based on both IQR and mean ± 2σ, a positive anomaly in precipitation was detected before and after the event in all investigated examples. According to mean ± 2σ, negative anomalies in LST, specific humidity, sea surface temperature (SST_100), and wind were detected in most examined earthquake samples. In contrast, a positive SST_0 anomaly was observed in Fiji and Honduras after the earthquake. Our results suggest that, for forecasting marine earthquakes, a prior negative anomaly in wind speed and SST_100 can be merged. For on-land cloudy-sky earthquakes, the merged anomaly parameters could be a negative prior anomaly in BT and skin temperature together with a positive anomaly in precipitation. For on-land clear-sky earthquakes, the prior anomalies in air temperature, specific humidity, and LST are usually negative.
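The two outlier rules named above (IQR and mean ± 2σ) can be sketched as follows; the sample temperature series is synthetic, and the 1.5×IQR fence is the conventional choice, assumed here rather than taken from the paper.

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def two_sigma_outliers(x):
    """Flag values more than two standard deviations from the mean."""
    return np.abs(x - x.mean()) > 2 * x.std()

# Synthetic pre-event LST-like series (K) with one negative anomaly.
series = np.array([300.1, 300.4, 299.8, 300.2, 293.0, 300.0, 300.3])
```

Applied per parameter (LST, BT, specific humidity, wind speed, ...) and per time window around each event, both rules mark the same dip here; on real data the two methods can disagree, which is why the study reports them separately.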