ARTICLE | doi:10.20944/preprints201702.0074.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: network; systems; cloud computing; data centre; performance; software-defined; virtual machine; scheduling; admission control; application-aware;
Online: 20 February 2017 (04:56:24 CET)
Cloud computing refers to applications delivered as services over the Internet. Cloud systems employ policies that are inherently dynamic in nature and that depend on temporal conditions defined in terms of external events, such as the measurement of bandwidth, use of hosts, intrusion detection or specific time events. In this paper, we investigate an optimized resource management scheme named v-Mapper. The basic premise of v-Mapper is to exploit application-awareness concepts using software-defined networking (SDN) features. This paper makes three key contributions to the field: (1) We propose a virtual machine (VM) placement scheme that can effectively mitigate VM placement issues for data-intensive applications; (2) We propose a validation scheme that ensures a service is admitted only if sufficient resources are available for its execution; and (3) We present a scheduling policy that aims to eliminate network load constraints. An evaluation was carried out with various benchmarks and demonstrated that v-Mapper outperforms other state-of-the-art approaches in terms of average task completion time, service delay time and bandwidth utilization. Given the growing importance of supporting large-scale data processing and analysis in datacentres, the v-Mapper system has the potential to make a positive impact in improving datacentre performance in the future.
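The admission step described in contribution (2) — accept a service only when sufficient resources remain — can be sketched as follows. This is an illustrative sketch only; the resource names, capacities, and the `admit` helper are our assumptions, not v-Mapper's actual interface.

```python
# Illustrative admission-control sketch: admit a service only if every
# resource it requests fits within the remaining cluster capacity.
# All names and numbers are hypothetical, not taken from v-Mapper itself.

def admit(request, capacity, allocated):
    """Return True and reserve resources if the request fits, else False."""
    for res, amount in request.items():
        if allocated.get(res, 0) + amount > capacity.get(res, 0):
            return False  # insufficient resources: reject the service
    for res, amount in request.items():
        allocated[res] = allocated.get(res, 0) + amount  # reserve resources
    return True

capacity = {"cpu": 16, "mem_gb": 64, "bw_mbps": 1000}
allocated = {"cpu": 12, "mem_gb": 40, "bw_mbps": 600}

print(admit({"cpu": 2, "mem_gb": 8, "bw_mbps": 200}, capacity, allocated))  # fits: True
print(admit({"cpu": 8, "mem_gb": 8, "bw_mbps": 100}, capacity, allocated))  # cpu exceeded: False
```

Rejected services are simply not scheduled, so downstream scheduling never sees workloads the cluster cannot carry.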
Subject: Engineering, General Engineering Keywords: cloud computing; cloud broker; PuLSAR; IaaS; PaaS; SaaS; QoS; SOS
Online: 3 April 2020 (15:31:23 CEST)
Abstract—As the complexity of cloud services increases day by day, the role of cloud brokers gains more importance. We address this issue by discussing a preference-based cloud service recommender that supports a multi-criteria decision-making (MCDM) approach. The implementation and specification details are discussed in a unified way. SC3 is a tool that comprises the quality-assurance capabilities of a cloud service broker (CSB) and also strengthens the flexibility of cloud services. The concerns of the definition procedure and the implementation procedure are separated by the Service Completeness Compliance Checker (SC3). Nowadays, companies use cloud computing for its economic benefits and for market competitiveness, which increases the demand for cloud computing. To evaluate the critical success factors of cloud computing, we focus on the plan-do-check-act strategy.
ARTICLE | doi:10.20944/preprints202005.0325.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Mobile Cloud Computing; Cloud Computing; Security and Computing Power
Online: 20 May 2020 (10:17:50 CEST)
Presently, smartphones support a large range of applications, many of which require high computing power. This presents a problem because smartphones offer limited computing power, storage, and energy. Fortunately, cloud computing (CC) is rapidly becoming known as a state-of-the-art technology in the computing world. CC allows users to draw on virtually unlimited dynamic resources when necessary. Mobile Cloud Computing (MCC) is the integration of the cloud computing concept into the mobile environment, which removes barriers to the performance of mobile devices. The demand for mobile cloud applications and mobile user services is strong, which gives MCC great business and research opportunities. However, security problems are a major obstacle to the adoption of mobile cloud computing. This study presents definitions of cloud computing, mobile computing (MC), and mobile cloud computing, including the architecture and applications of MCC. Furthermore, it discusses the challenges and opportunities faced in MCC.
REVIEW | doi:10.20944/preprints202109.0413.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Cloud Client (CC); Cloud computing; Cloud Service Provider (CSP); Security; Service Level Agreement (SLA); Privacy-Preserving Model (PPM); Third-party auditor (TPA)
Online: 23 September 2021 (15:55:08 CEST)
Cloud computing has become a prominent technology due to its important utility service; this service concentrates on outsourcing data for organizations and individual consumers. Cloud computing has considerably changed the manner in which individuals and organizations store, retrieve, and organize their personal information. Despite the manifest development in cloud computing, there are still some concerns regarding the level of security and other issues related to adopting cloud computing that prevent users from fully trusting this useful technology. Hence, for the sake of reinforcing the trust between Cloud Clients (CC) and Cloud Service Providers (CSP), as well as safeguarding the CC's data in the cloud, several security paradigms of cloud computing based on a Third-Party Auditor (TPA) have been introduced. The TPA, as a trusted party, is responsible for checking the integrity of the CC's data and all the critical information associated with it. However, the TPA could become an adversary and aim to degrade the privacy of the CC's data by playing a malicious role. In this paper, we present the state of the art of cloud computing privacy-preserving models (PPM) based on a TPA. Three TPA factors of paramount significance are discussed: TPA involvement, security requirements, and security threats caused by vulnerabilities. Moreover, TPA-based privacy-preserving models are comprehensively analyzed and categorized into different classes, with an emphasis on their dynamicity. Finally, we discuss the limitations of the models and present our recommendations for their improvement.
REVIEW | doi:10.20944/preprints201912.0138.v1
Subject: Keywords: Quality of Service; Cloud Computing; Virtualization; Data Accessibility; Challenges
Online: 10 December 2019 (15:45:46 CET)
Quality of Service (QoS) plays a significant role in the provision of resources within service-oriented distributed systems. Cloud computing creates new QoS challenges and opportunities for improvement through the concept of virtualization. Cloud computing is currently an emerging technology in every field of data storage and resource distribution over the network. Given this emerging technology, quality performance measures need to be upgraded with respect to ease of data accessibility, price, resource use, restoration, response time, and a number of other constraints. This paper highlights the research gap in providing a solution that achieves Quality of Service in cloud computing. We also review the issues and challenges that arise in guaranteeing quality in cloud computing.
REVIEW | doi:10.20944/preprints201903.0096.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: IIoT; cloud computing; fog computing; reliability; architecture.
Online: 7 March 2019 (12:44:18 CET)
Reliability is essential in industrial networks. In addition, most of the data from nodes of the industrial Internet of Things (IIoT) are generated in real time; thus, those data are mainly used for time-sensitive applications. Furthermore, device failures should be considered when modeling reliable fog computing for IIoT. In this paper, we provide fundamental aspects of modeling reliable fog computing for IIoT. First, existing models of fog computing are compared. Then, the most feasible communication type for achieving a reliable system is determined from model analysis. Interaction modes are elaborated to study the advantages and drawbacks when communication is deployed in fog computing for IIoT, and challenges and solutions for reliable fog computing are discussed.
ARTICLE | doi:10.20944/preprints202109.0407.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: cloud computing; cloud resource management; task scheduling; ecosystem; geometric mean; symbiotic organisms search algorithm; convergence speed
Online: 23 September 2021 (12:31:06 CEST)
The symbiotic organisms search (SOS) algorithm is a relatively recent bio-inspired algorithm from the swarm intelligence field for solving numerical optimization problems. It optimizes applications by simulating the symbiotic relationships among distinct species in an ecosystem. This paper proposes a modified SOS-based scheduling algorithm for efficiently mapping heterogeneous tasks to cloud resources of different capacities. The significant contribution of this technique is a simplified representation of the algorithm's mutualism process, which uses equity as a measure of the relationship characteristics or efficiency of species in the current ecosystem in moving to the next generation. These relational characteristics are achieved by replacing the original mutual vector, which measures the mutual characteristics with an arithmetic mean, by a geometric mean that enhances the survival advantage of two distinct species. The modified symbiotic organisms search algorithm (G_SOS) aims to minimize task execution time (makespan), response time, degree of imbalance, and cost, and to improve the convergence speed toward an optimal solution in an IaaS cloud. The performance of the proposed technique was evaluated using a CloudSim toolkit simulator, and the solutions were found to be better than those of the standard SOS technique and PSO.
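The geometric-mean modification described above can be illustrated with a minimal sketch of the SOS mutualism phase. The function names, benefit-factor handling, and toy vectors are our illustrative assumptions, not the paper's exact formulation; the only point shown is swapping the arithmetic-mean mutual vector for a geometric mean (which assumes a positive search space).

```python
import math
import random

# Sketch of the SOS mutualism phase, contrasting the standard arithmetic-mean
# mutual vector with the geometric-mean variant used by G_SOS.
# All names and the positive toy search space are illustrative assumptions.

def mutualism(x_i, x_j, x_best, mean="geometric", rng=random.Random(0)):
    """Return an updated candidate for organism x_i after interacting with x_j."""
    new_i = []
    for a, b, best in zip(x_i, x_j, x_best):
        if mean == "geometric":
            mv = math.sqrt(a * b)      # G_SOS: geometric-mean mutual vector
        else:
            mv = (a + b) / 2.0         # standard SOS: arithmetic mean
        bf = rng.choice([1, 2])        # benefit factor, as in standard SOS
        new_i.append(a + rng.random() * (best - mv * bf))
    return new_i

x_i, x_j, x_best = [4.0, 9.0], [1.0, 4.0], [2.0, 3.0]
print(mutualism(x_i, x_j, x_best, mean="geometric"))
print(mutualism(x_i, x_j, x_best, mean="arithmetic"))
```

For positive values the geometric mean never exceeds the arithmetic mean, so the geometric-mean mutual vector biases the update slightly differently, which is where the claimed convergence-speed gain comes from.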
REVIEW | doi:10.20944/preprints201903.0063.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: 5G wireless, Distributed Cloud, Internet of Things
Online: 5 March 2019 (12:15:16 CET)
This article provides an overview of the Internet of Things (IoT), 5G communication systems, and distributed clouds. The basic concepts and benefits are briefly presented, along with current standardization activities. In a nutshell, the research focuses on relating the Internet of Things, 5G, and distributed cloud computing.
ARTICLE | doi:10.20944/preprints201912.0079.v1
Subject: Mathematics & Computer Science, Other Keywords: mobile computing; cloud computing; security; virtualisation; privacy; authentication; storage
Online: 6 December 2019 (10:49:43 CET)
Mobile Cloud Computing (MCC) is a recent technology used by various users worldwide. In 2015, more than 240 million users used mobile cloud computing, generating a profit of $5.2 billion for service providers. MCC is a combination of mobile computing and cloud computing. This combination raises various challenges, such as network access, elasticity, management, availability, security, and privacy. Here, security issues are considered, because the security issues of both mobile computing and cloud computing are important: data security, virtualization security, partitioning security, mobile cloud application security, and mobile device security. This paper gives a detailed study of security issues in mobile cloud computing and their prevention measures.
ARTICLE | doi:10.20944/preprints202112.0068.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Data security; data handling; access control; unauthorized access; cloud computing
Online: 6 December 2021 (12:15:56 CET)
Nowadays, cloud computing is one of the most important and rapidly growing paradigms, extending its capabilities and applications to various areas of life. Cloud computing systems face many security issues, such as scalability, integrity, confidentiality, and unauthorized access. An illegitimate intruder may gain access to a sensitive cloud computing system and use the data for inappropriate purposes, which may lead to business losses or system damage. This paper proposes a hybrid unauthorized data handling (HUDH) scheme for big data in cloud computing. HUDH aims to restrict illegitimate users from accessing the cloud and to provide data security. The proposed HUDH consists of three steps: data encryption, data access control, and intrusion detection. HUDH involves three algorithms: Advanced Encryption Standard (AES) for encryption, Attribute-Based Access Control (ABAC) for data access control, and Hybrid Intrusion Detection (HID) for unauthorized-access detection. The proposed scheme is implemented using Python and Java. Testing results demonstrate that HUDH can delegate computation overhead to powerful cloud servers, and that user confidentiality, access privilege, and user secret-key accountability can be attained with more than 97% accuracy.
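Of the three HUDH algorithms, the access-control step lends itself to a short sketch. The attributes, rules, and `allowed` helper below are hypothetical examples of attribute-based access control (ABAC) in general, not HUDH's actual policy:

```python
# Minimal attribute-based access control (ABAC) sketch: a request is granted
# only if the subject's attributes satisfy every condition of some policy rule.
# The attributes and rules are hypothetical examples, not HUDH's real policy.

POLICY = [
    {"role": "analyst", "department": "finance", "clearance": "high"},
    {"role": "admin"},
]

def allowed(subject, policy=POLICY):
    """Grant access if any rule's conditions are all met by the subject."""
    return any(all(subject.get(k) == v for k, v in rule.items()) for rule in policy)

print(allowed({"role": "admin", "department": "it"}))                             # True
print(allowed({"role": "analyst", "department": "finance", "clearance": "low"}))  # False
```

In a full scheme of this kind, a request passing the ABAC check would then be served the AES-decrypted data, while failed checks would feed the intrusion-detection layer.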
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Amazon Web Services Academy; cloud computing; employability
Online: 24 November 2019 (16:18:08 CET)
The continuous increase in tuition fees in university education in many countries requires justification by university authorities through improved learning resources that can guarantee employment opportunities for students through hands-on industry training. This paper describes the design of the curriculum of a cloud computing module, in collaboration with the Amazon Web Services (AWS) Academy, to include industry-based practical hands-on labs that improve the employability of Internet of Things (IoT) engineering students. The study introduces industry best practices and hands-on labs in cloud computing and found that students' understanding of and learning experience in the subject increase when they are exposed to real-life applications. It blends academic theories in cloud computing with their applications as found in industry, so that students gain both the theoretical and the practical skills that will prepare them for careers in cloud computing. To achieve this, cloud computing lecturers were trained by the AWS Academy as a prerequisite, ensuring that the tutors themselves acquired hands-on proficiency in cloud computing technologies. The study finds that students tend to be more engaged and learn better when academic theories and concepts are combined with real-world applications as found in industry.
ARTICLE | doi:10.20944/preprints201903.0145.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Fog Computing, Cloud Computing, Security and Privacy, Edge Computing, Internet of Things
Online: 13 March 2019 (11:07:19 CET)
The development of the Internet of Things (IoT) has triggered a wave of interconnection and intercommunication among an enormous number of everyday things. This has caused an exceptional surge of heterogeneous information, known as an information explosion. Until now, cloud computing has served as a proficient way to process and store these data. Still, it has become clear that cloud computing alone cannot solve pressing issues such as the expanding demands of real-time or latency-sensitive applications and the limits on network transfer speed. Consequently, another computing platform, called fog computing, has been advanced as a supplement to the cloud arrangement. Fog computing spreads cloud administration and services to the edge of the network, bringing processing, communication, caching, and storage capacity closer to edge devices and end users; in the process, it aims at enhancing versatility, low latency, bandwidth, and safety and protection. This paper takes an extensive and wide-ranging view of fog computing, covering several aspects. At the outset, the many-layered structural design of fog computing and its attributes are outlined. After that, key technologies such as communication, computing, caching and storage, resource administration, naming, security, and privacy protection are delineated, showing how they back up and facilitate installations and various applications. Then, numerous applications such as augmented reality (AR), healthcare, gaming and brain-machine interfaces, vehicular computing, and smart scenarios are highlighted to explain the fog computing application milieu. Following that, it is shown how, despite being a feature-rich platform, fog computing is dogged by its susceptibility to several security, privacy, and safety concerns, which stem from its widely distributed and open architecture. Finally, some suggestions are advanced to address some of the safety challenges discussed, so as to propel the further growth of fog computing.
REVIEW | doi:10.20944/preprints202207.0190.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: cloud computing; data storage; users; service provider; software; hardware
Online: 13 July 2022 (04:52:59 CEST)
The popularity of cloud computing is growing owing to its large data storage capacity and high computation power. It provides online, on-demand, scalable application solutions; removes hardware and software barriers for non-specialists; rapidly integrates and deploys desired and necessary facilities; and supports quick upgrades and addition of features. Users benefit from selecting the appropriate cloud computing platform for their projects. This paper provides a comprehensive overview of the services offered to users by the most common cloud computing service providers, and could be used as a reference when selecting the best service provider based on the requirements of a project.
ARTICLE | doi:10.20944/preprints201807.0404.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: cloud computing; reliability; load balancing; Sufferage; task dispatching
Online: 22 July 2018 (11:43:32 CEST)
Due to the rapid development and popularity of the Internet, cloud computing has become an indispensable application service. However, how to assign various tasks to the appropriate service nodes is an important issue, and an efficient scheduling algorithm is necessary to enhance system performance. Therefore, a Three-Layer Cloud Dispatching (TLCD) architecture is proposed to enhance the performance of task scheduling. In the first layer, tasks are distinguished into different types by their characteristics. Subsequently, a Cluster Selection Algorithm is proposed to dispatch each task to the appropriate service cluster in the second layer. In the third layer, a new scheduling algorithm dispatches the task to a suitable server within a server cluster to improve dispatching efficiency. The TLCD architecture obtains better task completion times than previous works, and our algorithms achieve load balancing and reliability in cloud computing networks.
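The three layers can be sketched as a toy dispatcher: classify the task, select the cluster for that type, then pick a server within it. The task types, cluster table, and least-loaded selection rule are our illustrative assumptions, not the paper's exact algorithms:

```python
# Illustrative three-layer dispatch sketch in the spirit of TLCD:
# (1) classify the task, (2) pick the cluster serving that type,
# (3) pick the least-loaded server in that cluster.
# Task types, clusters, and the min-load rule are hypothetical assumptions.

CLUSTERS = {
    "cpu_bound": {"c1": 0.7, "c2": 0.3},   # server -> current load
    "io_bound":  {"s1": 0.5, "s2": 0.9},
}

def classify(task):
    """Layer 1: crude task typing by dominant resource demand."""
    return "cpu_bound" if task.get("flops", 0) > task.get("io_mb", 0) else "io_bound"

def dispatch(task):
    cluster = CLUSTERS[classify(task)]      # layer 2: cluster selection
    return min(cluster, key=cluster.get)    # layer 3: least-loaded server

print(dispatch({"flops": 100, "io_mb": 5}))   # cpu-bound -> least-loaded "c2"
print(dispatch({"flops": 1, "io_mb": 50}))    # io-bound  -> least-loaded "s1"
```

Keeping the load table per cluster is what lets the third layer balance load locally without inspecting every server in the system.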
ARTICLE | doi:10.20944/preprints202109.0329.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: BOINC; cloud computing; economic uncertainty; framework; grid compu-ting; optimization; resource pooling; Small and Medium-Sized Enterprise; technology acceptance model; virtualization
Online: 20 September 2021 (11:49:41 CEST)
Market turbulence and fiscal investment pressures have altered IT infrastructure performance as businesses pursue the adoption of expensive new technologies. Yet few studies have examined how a shareware solution can help small and medium-sized enterprises (SMEs) improve efficiency and sustainability. This study combines PLS-SEM with a dual primary data collection approach to move beyond shallow perceptions and outlooks of the activities necessary for competitive innovation. The unified model was applied to sampled respondents and analyzed using an ordinal regression relationship, which generated robust associations supporting acceptance of the hypotheses. The adoption of the BOINC shareware mesh network for unified processing designs was employed to improve yields, promising financial viability through coordinated interworking, thereby improving group accomplishment and establishing greater esteem. This paper showcases a flexible internal IT infrastructure under economic uncertainty, along with a framework advancing Exostructure as a Service. The associated theoretical and practical implications are discussed.
REVIEW | doi:10.20944/preprints202102.0048.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: autonomous driving systems; computer vision; neural networks; feature extraction; segmentation; assisted driving; cloud computing; parallelization
Online: 1 February 2021 (14:50:20 CET)
Autonomous driving systems are increasingly becoming a necessary trend in building the smart cities of the future. Numerous proposals have been presented in recent years to tackle particular aspects of the working pipeline towards creating a functional end-to-end system, such as object detection, tracking, path planning, and sentiment or intent detection. Nevertheless, few efforts have been made to systematically compile all of these systems into a single proposal that effectively considers the real challenges these systems will face on the road, such as real-time computation and hardware capabilities. This paper reviews various techniques towards proposing our own end-to-end autonomous vehicle system, considering the latest state of the art in computer vision, DSs, path planning, and parallelization.
ARTICLE | doi:10.20944/preprints201809.0442.v1
Subject: Earth Sciences, Atmospheric Science Keywords: MISR; cloud volume; cloud geometry; cloud shape; cloud boundary; cloud volume reconstruction.
Online: 22 September 2018 (23:00:20 CEST)
Abstract: Characterization of the 3-D structure of clouds is needed for a more complete understanding of the Earth's radiative and latent heat fluxes. Here we develop and explore a “ray casting” algorithm, applied to the Multi-angle Imaging SpectroRadiometer (MISR) on board the Terra satellite, to reconstruct 3-D cloud volumes for observed clouds. The ray casting algorithm is first applied to geometrically simple synthetic clouds to show that, under the assumption of perfect, clear-conservative cloud masks, the reconstruction method yields an overestimation whose magnitude depends on the cloud geometry and on the resolution of the reconstruction grid relative to the image pixel resolution. The method is then applied to two selected MISR scenes, fully accounting for MISR's viewing geometry in reconstructions over the Earth's ellipsoidal surface. The MISR Radiometric Camera-by-camera Cloud Masks at 1.1 km resolution and custom cloud masks at 275 m resolution, independently derived from the MISR RGB channels, are used as input cloud masks. A wind correction method, termed “cloud spreading”, is devised and applied to the cloud masks to offset potential cloud movement over the short time intervals (around 7 minutes at maximum) between the cameras. The MISR cloud top height product is used as a constraint to reduce overestimation at the cloud top. The reconstruction results show significant uncertainty when the wind correction is applied, and more refined structures when the input cloud mask has a higher resolution. Recommendations for improving the presented cloud volume reconstructions, as well as for future passive remote sensing satellite missions, are discussed.
ARTICLE | doi:10.20944/preprints202109.0180.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: point cloud registration; template point cloud; multiple partial point cloud; deep learning
Online: 10 September 2021 (10:26:10 CEST)
With the advancement of photoelectric technology and computer image processing technology, visual measurement methods based on point clouds are gradually being applied to the 3D measurement of large workpieces. Point cloud registration is a key step in 3D measurement, and its accuracy directly affects the accuracy of the measurements. In this study, we designed MPCR-Net, a novel network for multiple partial point cloud registration. First, an ideal point cloud was extracted from the CAD model of the workpiece and used as the global template. Next, a deep neural network was used to search for the corresponding point groups between each partial point cloud and the global template point cloud. Then, the rigid-body transformation matrix was learned from these corresponding point groups to realize the registration of each partial point cloud. Finally, the iterative closest point algorithm was used to optimize the registration results and obtain a final point cloud model of the workpiece. We conducted point cloud registration experiments on untrained models and actual workpieces, and by comparing the results with existing point cloud registration methods, we verified that MPCR-Net improves the accuracy and robustness of 3D point cloud registration.
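The registration step itself — recovering a rigid transformation from corresponding point groups — has a closed-form least-squares solution, shown here in 2D pure Python for illustration (MPCR-Net finds the correspondences with a deep network, which is not reproduced here):

```python
import math

# Closed-form rigid alignment of corresponding 2D point pairs: the step that
# follows correspondence search in registration pipelines. This 2D sketch
# illustrates the idea; it is not the paper's network or its 3D matrices.

def rigid_align(src, dst):
    """Return (theta, tx, ty) mapping src points onto dst in least squares."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_cos = s_sin = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy        # centered source point
        bx, by = dx - cdx, dy - cdy        # centered target point
        s_cos += ax * bx + ay * by         # dot-product term
        s_sin += ax * by - ay * bx         # cross-product term
    theta = math.atan2(s_sin, s_cos)       # optimal rotation angle
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Recover a known 90-degree rotation plus translation (1, 2):
src = [(0, 0), (1, 0), (0, 1)]
dst = [(1, 2), (1, 3), (0, 2)]   # src rotated by 90 deg, then shifted by (1, 2)
theta, tx, ty = rigid_align(src, dst)
print(round(math.degrees(theta)), round(tx, 6), round(ty, 6))  # 90 1.0 2.0
```

ICP, used in the paper's final refinement step, simply alternates this alignment with re-estimating the correspondences until convergence.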
Subject: Earth Sciences, Atmospheric Science Keywords: warm cloud-precipitation; cloud radar; ceilometer; disdrometer; South China
Online: 23 October 2019 (03:35:10 CEST)
Warm cloud-precipitation plays a vital role in the hydrological cycle, weather, and climate. Comprehensive observation and study of warm cloud-precipitation can advance our understanding of its internal physical processes and provide valuable information for developing numerical models. This paper focuses on the characteristics of warm cloud-precipitation in South China during the pre-flood season, using datasets observed with a Ka-band cloud radar, a laser ceilometer, and a disdrometer. Eighteen kinds of quantities from these three instruments were used to precisely elucidate the distribution, diurnal variation, vertical structure, and physical properties of warm cloud-precipitation. The results showed that the occurrence of aloft cloud-precipitation decreased with height, and most of the hydrometeors were distributed below 2 km. During the observation period, the ground rainfall mainly came from light precipitation; however, short, sharp showers contributed the majority of the rain amount. Most of the cloud layers were single-layer, with base heights below 2.2 km, thicknesses under 2.1 km, and top heights within 0.6-4.2 km. Warm cloud-precipitation exhibited certain diurnal variations, with a rising trend of cloud base heights in the afternoon and at midnight. During 0230-1100, 1200-1800, and 2100-2300, convection was relatively active, with higher cloud tops, thicker clouds, and more frequent rainfall. Separate statistics for cloud and precipitation indicated that they had different vertical structures and physical properties, exhibiting different value ranges and changes of radar reflectivity, vertical air motion, particle size, number concentration, liquid water, and rain rate at different height levels. The particle size distributions of cloud and precipitation were both exponential. Radar-derived raindrop size distributions were highly consistent with ground measurements when the reflectivity of precipitation was within 10-20 dBZ; for other reflectivity regimes, however, instrument sensitivity, sampling height, attenuation, and non-precipitating weak targets can affect the comparison.
ARTICLE | doi:10.20944/preprints201707.0060.v1
Subject: Earth Sciences, Atmospheric Science Keywords: vertical air velocity; millimeter-wave cloud radar; convective cloud; Tibetan Plateau
Online: 21 July 2017 (04:58:56 CEST)
In the summertime, convection occurs frequently over the Tibetan Plateau (TP) because of the large dynamic and thermal effects of the landmass. Measurements of vertical air velocity in convective cloud are useful for advancing our understanding of the dynamic and microphysical mechanisms of clouds and can be used to improve the parameterization of current numerical models. This paper presents a technique for retrieving high-resolution vertical air velocity in convective cloud over the TP using Doppler spectra from a vertically pointing Ka-band cloud radar. The method is based on the development of a “small-particle-traced” idea, the necessary data processing, and the use of the radar's three operating modes. Spectral broadening corrections, uncertainty estimation, and result merging are used to ensure accurate results. Qualitative analysis of two typical convective cases shows that the retrievals are reliable and agree with the expected results inferred from other radar measurements. A quantitative retrieval of vertical air motion from a ground-based optical disdrometer is used to preliminarily validate the radar-derived results. The comparison illustrates that while the data trends from the two retrieval methods are similar, with the updrafts and downdrafts coinciding, the cloud radar has a much higher resolution and can reveal small-scale variation in vertical air motion.
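The “small-particle-traced” idea rests on the observation that the smallest detectable particles fall slowly enough to act as tracers of the air itself, so the small-particle edge of the Doppler spectrum approximates the vertical air velocity. A toy sketch follows; the sign convention, noise threshold, and spectrum values are our assumptions, not the paper's processing:

```python
# Sketch of the "small-particle-traced" idea: the smallest cloud droplets fall
# negligibly slowly, so the edge of the Doppler spectrum on the small-particle
# side approximates the vertical air velocity. The sign convention (negative =
# upward), noise threshold, and toy spectrum are illustrative assumptions.

def air_velocity(velocity_bins, power, noise_level):
    """Return the velocity of the first spectral bin above the noise floor,
    scanning from the small-particle (slow-falling) end of the spectrum."""
    for v, p in zip(velocity_bins, power):
        if p > noise_level:
            return v
    return None  # no signal above the noise floor

# Velocity bins from -2 m/s (upward) to +6 m/s (downward), toy power spectrum:
bins = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
power = [0.1, 0.1, 0.4, 2.0, 5.0, 3.0, 1.0, 0.3, 0.1]
print(air_velocity(bins, power, noise_level=0.2))  # edge bin above noise
```

The paper's spectral broadening corrections exist precisely because turbulence smears this edge, biasing such a raw edge estimate.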
ARTICLE | doi:10.20944/preprints201807.0276.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: cloud computing; service-oriented architecture; SOA; cloud-native; serverless; microservice; container; unikernel; distributed cloud; P2P; service-to-service; service-mesh
Online: 16 July 2018 (10:57:39 CEST)
This paper presents a review of cloud application architectures and their evolution. It reports observations made during the course of a research project that tackled the problem of transferring cloud applications between different cloud infrastructures. As a side effect, we learned a great deal about the commonalities and differences among many different cloud applications, which might be of value for cloud software engineers and architects. Throughout the project we analyzed industrial cloud standards, performed systematic mapping studies of research papers on cloud-native applications, carried out action research in cloud engineering projects, modeled a cloud application reference model, and performed software and domain-specific language engineering activities. Two major (and sometimes overlooked) trends can be identified. First, cloud computing and its related application architecture evolution can be seen as a steady process of optimizing resource utilization. Second, these resource utilization improvements have resulted, over time, in an architectural evolution in how cloud applications are built and deployed. A shift from monolithic service-oriented architectures (SOA), via independently deployable microservices, towards so-called serverless architectures is observable. Serverless architectures in particular are more decentralized and distributed, and make more intentional use of independently provided services. In other words, a decentralizing trend in cloud application architectures is observable, one that recalls the decentralized architectures known from former peer-to-peer approaches. That is astonishing because, with the rise of cloud computing (and its centralized service-provisioning concept), research interest in peer-to-peer approaches (and their decentralizing philosophy) decreased. But this seems to be changing: cloud computing could be headed into a future of more decentralized and more meshed services.
ARTICLE | doi:10.20944/preprints202104.0074.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Cloud continuum; fog computing; edge computing; fog-to-cloud; Internet of Things (IoT)
Online: 2 April 2021 (14:38:23 CEST)
The wide adoption of the recently coined fog and edge computing paradigms alongside conventional cloud computing creates a novel scenario, known as the cloud continuum, where services may benefit from the overall set of resources to optimize their execution. To operate successfully, such a cloud continuum scenario demands novel management strategies, enabling a coordinated and efficient management of the entire set of resources, from the edge up to the cloud, designed in particular to address key edge characteristics, such as mobility, heterogeneity, and volatility. The design of such a management framework poses many research challenges and has already prompted many initiatives worldwide at different levels. In this paper we present the results of one of these experiences, driven by an EU H2020 project, focusing on the lessons learnt from a real deployment of the proposed management solution in three different industrial scenarios. We believe that this description may help readers understand the benefits brought by holistic cloud continuum management, and may also help other initiatives in their design and development processes.
ARTICLE | doi:10.20944/preprints201812.0345.v1
Subject: Mathematics & Computer Science, Other Keywords: Cloud Storage Forensics, Cloud Application Artifacts, Data Remnants, Data Carving, Digital Forensic Investigations
Online: 3 January 2019 (12:17:11 CET)
The research proposed in this paper focuses on gathering evidence from devices running the Windows 10 operating system in order to discover and collect artifacts left by cloud storage applications that suggest their use even after the deletion of the Google client application. We show where and what type of data remnants can be found using our analysis, which can serve as evidence in digital forensic investigations.
ARTICLE | doi:10.20944/preprints201804.0367.v1
Subject: Earth Sciences, Atmospheric Science Keywords: effective cloud albedo; solar surface irradiance; optical flow; cloud motion vectors; renewable energies
Online: 28 April 2018 (11:32:20 CEST)
The increasing use of renewable energies as a source of electricity has led to a fundamental transition of the power supply system. The integration of fluctuating, weather-dependent energy sources into the grid already has a major impact on its load flows. As a result, interest in forecasting wind and solar radiation with sufficient accuracy over short time horizons has grown. In this study, the short-term forecast of the effective cloud albedo based on optical flow estimation methods is investigated. The optical flow method utilized here is TV-L1 from the open-source library OpenCV. This method uses a multi-scale approach to capture cloud motions on various spatial scales. After the clouds are displaced, the solar surface radiation is calculated with SPECMAGIC NOW, which computes the global irradiation spectrally resolved from satellite imagery. Due to the high temporal and spatial resolution of satellite measurements, the effective cloud albedo and thus solar radiation can be forecasted from 5 minutes up to 4 hours with a resolution of 0.05°. In the following, a brief description of the method for the short-term forecast of the effective cloud albedo is given. Subsequently, evaluation results are presented and discussed. Finally, an outlook on further developments is given.
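The forecasting step described above advects the current cloud field along the motion vectors estimated by optical flow. The minimal sketch below shows only the advection idea on a toy grid with a single uniform, integer motion vector; the field values, grid size and vector are illustrative assumptions, not values from the study (which uses per-pixel TV-L1 flow fields).

```python
# Sketch of the advection step used in optical-flow-based nowcasting:
# each pixel of the effective-cloud-albedo field is displaced along a
# motion vector for every forecast step.

def advect(field, dx, dy, fill=0.0):
    """Shift a 2D field by integer pixel offsets (dx, dy)."""
    rows, cols = len(field), len(field[0])
    out = [[fill] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            ty, tx = y + dy, x + dx
            if 0 <= ty < rows and 0 <= tx < cols:
                out[ty][tx] = field[y][x]
    return out

# A single bright "cloud" pixel moving one cell east per step.
albedo = [[0.0] * 4 for _ in range(4)]
albedo[1][1] = 0.8

forecast = albedo
for _step in range(2):            # two forecast steps
    forecast = advect(forecast, dx=1, dy=0)

print(forecast[1][3])  # the cloud pixel has moved two cells east
```

In the real system the displaced albedo field would then be fed to the irradiance calculation; here the sketch stops at the displacement.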
ARTICLE | doi:10.20944/preprints202208.0426.v1
Online: 25 August 2022 (07:22:48 CEST)
With the ever-increasing popularity of unmanned aerial vehicles and other platforms providing dense point clouds, filters for the identification of ground points in such dense clouds are needed. Many filters have been proposed and are widely used, usually based on the determination of an original surface approximation and subsequent identification of points within a predefined distance from that surface. We present a new filter, the multi-view and shift rasterization (MVSR) algorithm, based on a different principle, i.e., on the identification of just the lowest points in individual grid cells, shifting the grid along both planar axes and subsequently tilting the entire grid. The principle is presented in detail and compared both visually and numerically to other commonly used ground filters (PMF, SMRF, CSF, ATIN) on three sites with different ruggedness and vegetation density. Visually, the MVSR filter showed the smoothest and thinnest ground profiles, with ATIN the only filter performing comparably. The same was confirmed when comparing ground filtered by other filters with the MVSR-based surface. The goodness of fit with the original cloud is demonstrated by the root mean square deviations (RMSD) of the points from the original cloud found below the MVSR-generated surface (ranging, depending on site, between 0.6 and 2.5 cm). The MVSR filter performed outstandingly at all sites, identifying the ground points with great accuracy while filtering out the maximum of vegetation/above-ground points. The filter dilutes the cloud somewhat; in such dense point clouds, however, this can be perceived as a benefit rather than a disadvantage.
ARTICLE | doi:10.20944/preprints202206.0300.v1
Online: 22 June 2022 (03:37:45 CEST)
With the ever-increasing popularity of unmanned aerial vehicles and other platforms providing dense point clouds, universal filters for accurate identification of ground points in such dense clouds are needed. Many filters have been proposed and are widely used, usually based on the determination of an original surface approximation and subsequent identification of points within a predefined distance from that surface. In this paper, we present a new filter. This multi-view and shift rasterization (MVSR) algorithm is based on an entirely different principle, i.e., on the identification of just the lowest points in individual grid cells, shifting the grid along both planar axes and subsequent tilting of the entire grid – after each of these steps, one lowest point per cell is detected. The principle is presented in detail and compared both visually and numerically to other commonly used ground filters (PMF, SMRF, CSF, ATIN) on three sites with different ruggedness and vegetation density. Visually, the MVSR filter showed the smoothest and thinnest ground profiles, with ATIN the only filter performing comparably (although the profiles were somewhat thicker and not as complete as the MVSR-acquired ground). The same was confirmed when comparing ground filtered by other filters with the MVSR-based surface. The goodness of fit with the original cloud is demonstrated by the root mean square deviations (RMSD) of the points from the original cloud found below the MVSR-generated surface (ranging, depending on site, between 0.6 and 2.5 cm). ATIN again performed closest to MVSR, with RMSDs of ground-filtered points found above the MVSR-based surface at individual sites ranging between 4.5 and 7.4 cm. The remaining filters performed comparably in the simplest flat area but poorly in rugged and much-vegetated sites, with RMSDs above the MVSR surface at such sites ranging from 21 to 95 cm.
In conclusion, the novel filter presented in this paper performed outstandingly at all sites, identifying the ground points with great accuracy while filtering out the maximum of vegetation/above-ground points. The filter dilutes the cloud somewhat; in such dense point clouds, however, this can be perceived as a benefit rather than a disadvantage.
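The core of the MVSR principle described in the two abstracts above can be sketched in a few lines: keep only the lowest point per grid cell, then shift the grid by half a cell along both planar axes and repeat, taking the union of the kept points as ground candidates. The cell size, the half-cell shift pattern and the toy points are illustrative assumptions, and the grid-tilting step of the full method is omitted here.

```python
# Minimal sketch of the MVSR lowest-point-per-cell idea with grid shifts.

def lowest_per_cell(points, cell, ox=0.0, oy=0.0):
    """Return the lowest point (by z) in each (shifted) grid cell."""
    best = {}
    for x, y, z in points:
        key = (int((x - ox) // cell), int((y - oy) // cell))
        if key not in best or z < best[key][2]:
            best[key] = (x, y, z)
    return set(best.values())

def mvsr_candidates(points, cell):
    half = cell / 2.0
    ground = set()
    # one pass per grid position: original plus half-cell shifts
    for ox, oy in [(0, 0), (half, 0), (0, half), (half, half)]:
        ground |= lowest_per_cell(points, cell, ox, oy)
    return ground

points = [
    (0.2, 0.2, 0.05),    # ground
    (0.25, 0.25, 1.90),  # vegetation directly above it
    (1.3, 0.4, 0.10),    # ground in a neighbouring cell
]
ground = mvsr_candidates(points, cell=1.0)
print(sorted(ground))    # the vegetation point is never the cell minimum
```

Note how the shifts matter: a point near a cell border can hide a true ground point in one grid position but not in the shifted ones, which is exactly what the multi-view idea exploits.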
ARTICLE | doi:10.20944/preprints201808.0164.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Cloud robotic; Security policy; DDoS
Online: 8 August 2018 (08:54:39 CEST)
Cloud robots are becoming popular, and their security is important. However, research on cloud robot safety is scarce. This work develops a security policy to defend cloud robots against DDoS attacks. In this policy, complex but accurate calculation models are deployed on the cloud, while simple but efficient calculation models are deployed on the robot. In the cloud, there are a master server and a standby server. The master server transfers the parameters of the complex but accurate models to the standby server periodically, and the master executes the start-stop backup policy. Specifically, this work proposes and proves an algorithm to dynamically adjust the interval of parameter transfer. Based on a PDRA feature of Netflow, when a DDoS attack occurs, the master server sends warning signals to the robot and the standby server. The robot runs local models to avoid stopping work until it is connected to the standby server. The standby server then provides service to the robot until the master server recovers. Finally, this work implemented a gesture recognition cloud robot based on a convolutional neural network, a hidden Markov model and the PDRA feature of Netflow to verify the policy. Experiments show that the security policy to defend cloud robots against DDoS attacks is effective.
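The failover sequence in the abstract above (master serves the robot, a DDoS warning triggers local-model fallback, then the standby takes over until the master recovers) can be sketched as a tiny state machine. The state and event names are illustrative assumptions; the paper's parameter-transfer interval algorithm and Netflow detection are not reproduced here.

```python
# Hedged sketch of the robot-side failover logic described in the abstract.

class RobotFailover:
    def __init__(self):
        self.target = "master"       # where inference currently runs

    def on_ddos_warning(self):
        # master warns robot and standby; robot keeps working locally
        self.target = "local"

    def on_standby_ready(self):
        # standby already holds recent parameters (periodic transfer)
        if self.target == "local":
            self.target = "standby"

    def on_master_recovered(self):
        self.target = "master"

r = RobotFailover()
r.on_ddos_warning()
print(r.target)          # local model keeps the robot working
r.on_standby_ready()
print(r.target)          # standby serves until the master recovers
r.on_master_recovered()
print(r.target)
```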
ARTICLE | doi:10.20944/preprints202209.0284.v1
Online: 20 September 2022 (02:44:31 CEST)
Changes in tire groove depth have a major impact on tire performance, and the use of excessively worn tires compromises driving safety. Tire groove depth detection has become one of the annual inspection items for automobiles, but research on the related detection technology still lags behind. Based on the principle of monocular vision ranging (MVR), image processing technology and cloud platform technology, this paper develops a tire groove depth detection system that realizes non-destructive detection of tire groove depth. In addition, the system uses the cloud platform to store the test results and builds a multi-level data management system, allowing car owners to keep track of tire wear status and its historical changes, which is of great significance for ensuring driving safety.
ARTICLE | doi:10.20944/preprints202105.0230.v1
Online: 11 May 2021 (10:31:35 CEST)
Cloud computing is no longer an emerging topic, and virtualization is the indispensable technology underpinning it. Scientists never stop exploring the possibilities of this technology, investigating countless experiments and applications to enhance the quality of virtual services. However, constructing an isolated virtual machine per application does not spare the technology from excessive data volumes or long processing times. Containers were created to address these problems by distributing applications without initiating an entire virtual machine. Docker, an important player in this space, is an open-source member of the container family. The management tool for Docker containers, SwarmKit, does not take into account heterogeneities in either virtualized containers or physical nodes: the nodes in a cluster differ in configuration, resource availability, and their most contended resource. Furthermore, the requirements initiated by different services change all the time; the demand might be CPU-intensive (e.g., clustering services) or memory-intensive (e.g., web services), or the complete opposite. In this paper, we focus on exploring Docker container clusters and design DRAPS, a resource-aware placement scheme, to improve system performance in a heterogeneous cluster.
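A resource-aware placement decision in the spirit described above can be sketched as: identify a service's dominant resource, then place its container on the node with the most of that resource still free, among nodes that can fit the whole request. This is only an illustration of the idea; the node capacities, demand values and scoring rule are assumptions, not DRAPS's actual algorithm.

```python
# Sketch of dominant-resource-aware container placement.

def dominant_resource(demand):
    """Resource with the largest share of the request (e.g. cpu vs mem)."""
    return max(demand, key=demand.get)

def place(nodes, demand):
    res = dominant_resource(demand)
    # only nodes that can fit the whole request are candidates
    fits = {n: free for n, free in nodes.items()
            if all(free[r] >= demand[r] for r in demand)}
    if not fits:
        return None                  # no node can host the container
    return max(fits, key=lambda n: fits[n][res])

nodes = {
    "node-a": {"cpu": 0.9, "mem": 0.2},   # fractions of capacity free
    "node-b": {"cpu": 0.3, "mem": 0.8},
}
web_service = {"cpu": 0.1, "mem": 0.5}    # memory-intensive service
print(place(nodes, web_service))          # → node-b
```

A heterogeneity-oblivious scheduler might spread by container count alone and land the memory-hungry service on node-a, which cannot actually fit it.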
ARTICLE | doi:10.20944/preprints201803.0248.v1
Subject: Earth Sciences, Atmospheric Science Keywords: stratocumulus; cloud physical characteristics; eastern China
Online: 29 March 2018 (09:03:22 CEST)
Stratocumulus (Sc) is the most common cloud type in China. Sc clouds may or may not be accompanied by various types of precipitation that are representative of different macro- and microphysical characteristics. The finely resolved CloudSat data products are used in this study to quantitatively investigate the macro- and microphysical characteristics of precipitating and non-precipitating Sc (PS and NPS, respectively) clouds over Eastern China (EC). Based on statistical information extracted from the CloudSat data, Sc clouds are highly likely to occur, either alone or in association with liquid precipitation or drizzle, over 25.65% of EC. The cloud bases of NPS clouds are higher than those of PS clouds, although the latter display higher cloud top heights and greater thicknesses. The spatial distributions of microphysical characteristics differ between PS and NPS clouds. The magnitudes of microphysical characteristics in NPS clouds are relatively small and decrease with height, whereas the magnitudes of microphysical characteristics in PS clouds are relatively large and peak in response to certain circulation patterns and over certain terrain. The variations in microphysical characteristics in Sc clouds with height and contoured frequency by altitude diagrams (CFADs) of radar reflectivity may indicate that different microphysical processes operate in PS and NPS clouds. In NPS clouds, hydrometeor particles accumulate by coalescence as they rise; once the particles are too large to be supported by updrafts, the cloud droplets form raindrops. In PS clouds, raindrops increase continuously in size via collision-coalescence processes as they fall. The levels between 2.5 and 3.0 km represent the space where particles grow most rapidly. Particles are affected by updrafts and accumulate at levels between 2.5 and 1.0 km as height decreases.
ARTICLE | doi:10.20944/preprints201802.0103.v1
Online: 15 February 2018 (16:49:55 CET)
An effective on-board cloud detection method for small satellites would greatly improve downlink data transmission efficiency and reduce memory cost. In this paper, an ensemble method combining a lightweight U-Net with wavelet image compression is proposed and evaluated. The red, green, blue and infrared waveband images from the Landsat-8 dataset are used for training and testing to estimate the performance of the proposed method. The LeGall-5/3 wavelet transform is applied to the dataset to accelerate the neural network and improve the feasibility of on-board implementation. The experimental results show that the overall accuracy of the proposed model reaches 97.45% while utilizing only four bands. Tests on the low coefficients of the compressed dataset show that the overall accuracy of the proposed method remains higher than 95%, while its inference speed is accelerated to 0.055 seconds per million pixels and the maximum memory cost is reduced to 2 MB. By taking advantage of the mature image compression systems in small satellites, the proposed method makes on-board cloud detection based on deep learning a realistic possibility.
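The LeGall-5/3 transform named above is a reversible integer wavelet, which is what makes it attractive for on-board compression pipelines. Below is a self-contained sketch of one 1-D level of its standard lifting form (images apply it along rows and then columns); the sample signal is an illustrative assumption, and symmetric boundary extension is used. Note the large detail coefficient where the signal jumps.

```python
# 1-D integer LeGall-5/3 lifting transform (one level, even length >= 4).

def _mirror(i, n):
    # symmetric boundary extension: index -1 maps to 1, n maps to n - 2
    return -i if i < 0 else (2 * n - 2 - i if i >= n else i)

def legall53_forward(x):
    n = len(x)
    half = n // 2
    # predict step: detail = odd sample minus mean of even neighbours
    d = [x[2 * i + 1] - ((x[2 * i] + x[_mirror(2 * i + 2, n)]) >> 1)
         for i in range(half)]
    # update step: approximation = even sample plus rounded detail mean
    s = [x[2 * i] + ((d[_mirror(i - 1, half)] + d[i] + 2) >> 2)
         for i in range(half)]
    return s, d

def legall53_inverse(s, d):
    half = len(s)
    n = 2 * half
    x = [0] * n
    for i in range(half):            # undo the update step
        x[2 * i] = s[i] - ((d[_mirror(i - 1, half)] + d[i] + 2) >> 2)
    for i in range(half):            # undo the predict step
        x[2 * i + 1] = d[i] + ((x[2 * i] + x[_mirror(2 * i + 2, n)]) >> 1)
    return x

x = [10, 12, 14, 200, 20, 22, 24, 26]
s, d = legall53_forward(x)
print(legall53_inverse(s, d) == x)   # perfect integer reconstruction
```

Because the lifting steps are inverted exactly, the transform is lossless on integers, so compressing in the wavelet domain loses nothing before quantization.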
ARTICLE | doi:10.20944/preprints202109.0112.v1
Subject: Engineering, Marine Engineering Keywords: 3D point Cloud Classification, 3D point Cloud Shape Completion,Auto-Encoders, Contrastive Learning, Self-Supervised Learning
Online: 6 September 2021 (18:00:28 CEST)
In this paper, we present the idea of self-supervised learning for the shape completion and classification of point clouds. Most 3D shape completion pipelines utilize autoencoders to extract features from point clouds for downstream tasks such as classification, segmentation, detection, and other related applications. Our idea is to add contrastive learning to autoencoders to learn both global and local feature representations of point clouds. We use a combination of triplet loss and Chamfer distance to learn global and local feature representations. To evaluate the performance of the embeddings for classification, we utilize the PointNet classifier. We also extend the number of evaluation classes from 4 to 10 to show the generalization ability of the learned features. Based on our results, the embedding generated by the contrastive autoencoder enhances shape completion and improves point cloud classification performance from 84.2% to 84.9%, achieving state-of-the-art results with 10 classes.
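The two losses named in the abstract can be sketched in plain Python: Chamfer distance measures reconstruction quality between point sets (the local signal), while a triplet margin loss pushes embeddings of same-class shapes together (the global signal). The toy clouds, the embedding distances and the margin below are illustrative assumptions.

```python
# Plain-Python sketch of Chamfer distance and a triplet margin loss.

def chamfer(a, b):
    """Symmetric averaged nearest-neighbour squared distance."""
    def sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    ab = sum(min(sq(p, q) for q in b) for p in a) / len(a)
    ba = sum(min(sq(q, p) for p in a) for q in b) / len(b)
    return ab + ba

def triplet_loss(d_anchor_pos, d_anchor_neg, margin=1.0):
    """Zero once the positive is closer than the negative by >= margin."""
    return max(0.0, d_anchor_pos - d_anchor_neg + margin)

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
recon = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0)]
print(round(chamfer(cloud, recon), 3))   # → 0.01
print(triplet_loss(0.2, 2.0))            # → 0.0 (already well separated)
```

In the paper's setting the two terms would be combined into one training objective; real implementations compute Chamfer distance on GPU over thousands of points rather than with nested Python loops.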
ARTICLE | doi:10.20944/preprints201910.0155.v1
Subject: Engineering, Control & Systems Engineering Keywords: robot cloud; cognition as a service; cognitive robots; sentential cognitive system; cloud service; human–robot interaction
Online: 14 October 2019 (06:26:52 CEST)
Cloud robotics is becoming an alternative for supporting advanced services on robots with low computing power as network technology advances. Recently, fog robotics has gained attention since this approach has the merit of relieving the latency and security issues of conventional cloud robotics. In this paper, a Function-as-a-Service-based Fog Robotics (FaaS-FR) model for cognitive robots is proposed. The model distributes the cognitive functions according to computational power, latency and security between a public robot cloud and a fog robot server. In an experiment with a Raspberry Pi as the edge device, the proposed FaaS-FR model shows efficient and practical performance in the proper distribution of the computational work of the cognitive system.
ARTICLE | doi:10.20944/preprints201808.0235.v1
Subject: Engineering, Control & Systems Engineering Keywords: Control as a Service; Cloud Computing; Cloud Manufacturing; Additive Manufacturing; Smart Manufacturing; Industry 4.0; Internet of Things
Online: 13 August 2018 (17:04:45 CEST)
Control as a Service (CaaS) is an emerging paradigm where the low-level control of a device is moved from a local controller to the Cloud and provided to the device as an on-demand service. Among its many benefits, CaaS gives the device access to advanced control algorithms which may not be executable on a local controller due to computational limitations. As a step toward 3D printer CaaS, this paper demonstrates the control of a 3D printer by streaming low-level stepper motor commands (as opposed to high-level G-codes) directly from the Cloud to the printer. The printer is located at the University of Michigan, Ann Arbor, while its stepper motor commands are calculated using an advanced motion control algorithm running on Google Cloud computers in South Carolina and Australia. The stepper motor commands are sent over the Internet using the user datagram protocol (UDP) and buffered to mitigate transmission delays; checks are included to ensure the accuracy and completeness of the transmitted data. All but one of the parts printed using the cloud-based controller in both locations were hitch-free (i.e., printed with no pauses due to excessive transmission delays). Moreover, using the cloud-based controller, the parts printed up to 54% faster than with a standard local controller, without loss of accuracy.
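The receive-side buffering and completeness checks described above can be sketched with sequence-numbered packets: the printer pre-fills a buffer before starting and refuses to execute across a gap. No real sockets are used here, and the buffer depth, packet format and names are illustrative assumptions rather than the paper's protocol.

```python
# Sketch of delay-tolerant, gap-checking command buffering for a UDP stream.
import heapq

class CommandBuffer:
    def __init__(self, start_threshold=3):
        self.heap = []               # (seq, command), ordered by seq
        self.next_seq = 0            # next command due for execution
        self.start_threshold = start_threshold

    def receive(self, seq, command):
        heapq.heappush(self.heap, (seq, command))

    def ready(self):
        # start only once enough commands are buffered (delay mitigation)
        return len(self.heap) >= self.start_threshold

    def pop_next(self):
        """Return the next in-order command, or None on a gap/underrun."""
        if self.heap and self.heap[0][0] == self.next_seq:
            self.next_seq += 1
            return heapq.heappop(self.heap)[1]
        return None                  # missing packet: pause, don't guess

buf = CommandBuffer()
for seq in (2, 0, 1):                # UDP packets may arrive out of order
    buf.receive(seq, f"step-{seq}")
print(buf.ready())                   # → True
print([buf.pop_next() for _ in range(3)])
```

Pausing on a gap rather than guessing is what preserves print accuracy when the network misbehaves; a deeper start threshold trades start-up latency for tolerance to longer delays.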
ARTICLE | doi:10.20944/preprints202204.0138.v1
Subject: Mathematics & Computer Science, Other Keywords: API; clickstream; cloud applications; process mining; scripting
Online: 15 April 2022 (07:37:06 CEST)
Background: Process mining (PM) exploits event logs to obtain meaningful information about the processes that produced them. As the number of applications developed on cloud infrastructures increases, it becomes important to study and discover their underlying processes. However, many current PM technologies face challenges in dealing with the complex and large event logs of cloud applications, especially when the logs have little structure (e.g., clickstreams). Methods: Using Design Science Research, this paper introduces a new method, called Cloud Pattern API – Process Mining (CPA-PM), that enables discovering and analyzing cloud-based application processes using PM in a way that addresses many of these challenges. CPA-PM exploits a new application programming interface (API), with an R implementation, for creating repeatable scripts that preprocess event logs collected from such applications. Results: Applying CPA-PM to a case with real and evolving event logs, related to the trial process of a Software-as-a-Service cloud application, led to useful analyses and insights, with reusable scripts. Conclusion: CPA-PM helps produce executable scripts for filtering event logs from clickstream and cloud-based applications, where the scripts can be used in pipelines while minimizing the need for error-prone and time-consuming manual filtering.
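The kind of repeatable preprocessing script the abstract describes can be illustrated as: drop low-value clickstream noise, group events by case, and order each case's events into a trace ready for process mining. The paper's API is implemented in R; this Python stand-in, with assumed event and field names, only conveys the shape of such a script.

```python
# Illustrative event-log preprocessing: noise filtering + trace building.
from collections import defaultdict

NOISE = {"mousemove", "scroll"}          # clickstream events to drop

def to_traces(events):
    """Group events by case id, drop noise, order by timestamp."""
    cases = defaultdict(list)
    for e in events:
        if e["activity"] not in NOISE:
            cases[e["case"]].append(e)
    return {c: [e["activity"] for e in sorted(es, key=lambda e: e["ts"])]
            for c, es in cases.items()}

log = [
    {"case": "u1", "ts": 2, "activity": "start_trial"},
    {"case": "u1", "ts": 1, "activity": "visit_pricing"},
    {"case": "u1", "ts": 3, "activity": "scroll"},
    {"case": "u2", "ts": 1, "activity": "visit_pricing"},
]
print(to_traces(log))
```

Because the script is code rather than manual filtering, it can be re-run unchanged each time the evolving log is re-exported, which is the repeatability CPA-PM aims for.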
ARTICLE | doi:10.20944/preprints201909.0201.v1
Subject: Earth Sciences, Environmental Sciences Keywords: freeze–thaw erosion; cloud model; ahp; tibet
Online: 18 September 2019 (08:04:38 CEST)
Traditionally, studies on freeze–thaw erosion have used the analytic hierarchy process (AHP) to calculate the weights of evaluation factors; however, this method cannot accurately depict the fuzziness and randomness of the problem. To overcome this disadvantage, the present study proposes an improved AHP method based on the cloud model to evaluate the impact factors of freeze–thaw erosion. To establish an improved evaluation method for freeze–thaw erosion in Tibet, the following six factors were selected: annual temperature range, average annual precipitation, slope, aspect, vegetation coverage, and topographic relief. The traditional AHP and the cloud model were combined to determine the weights of the impact factors, and a consistency check was performed. The comprehensive evaluation index model was used to evaluate the intensity of freeze–thaw erosion in Tibet. The results show that freeze–thaw erosion is extensive, stretching over approximately 66.1% of Tibet. The problem is most serious in Ngari Prefecture and Nagqu. However, mild and moderate erosion, accounting for 37.1% and 25.0% of the total freeze–thaw erosion respectively, are the most widely distributed. The evaluation results for freeze–thaw erosion were confirmed to be consistent with the actual situation. In brief, this study provides a new approach to quantitatively evaluate the conditions of freeze–thaw erosion in Tibet.
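The classical AHP step the study builds on (and then extends with the cloud model, which is not reproduced here) derives factor weights from a pairwise-comparison matrix and checks consistency. The sketch below uses the common geometric-mean approximation of the principal eigenvector and Saaty's consistency ratio; the 3-factor matrix is an illustrative assumption.

```python
# Sketch of AHP weighting with a consistency check (geometric-mean form).
from math import prod

def ahp_weights(A):
    n = len(A)
    # geometric-mean approximation of the principal eigenvector
    gm = [prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    w = [g / total for g in gm]
    # lambda_max and Saaty's consistency ratio CR = CI / RI
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}   # Saaty's random indices
    return w, ci / RI[n]

# Perfectly consistent comparisons: factor 1 is twice factor 2, etc.
A = [[1, 2, 4],
     [1 / 2, 1, 2],
     [1 / 4, 1 / 2, 1]]
w, cr = ahp_weights(A)
print([round(x, 3) for x in w])   # → [0.571, 0.286, 0.143]
print(round(cr, 6))               # → 0.0 (judgements are consistent)
```

A CR below 0.1 is conventionally taken as acceptable; the cloud-model extension in the paper replaces the crisp comparison values with cloud-model parameters to capture fuzziness and randomness.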
ARTICLE | doi:10.20944/preprints201811.0270.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: cloud systems; AI; cognitive; programming languages; algorithms
Online: 12 November 2018 (08:49:16 CET)
The web ecosystem is rapidly evolving with changing business and functional models. Cloud platforms are available in SaaS, PaaS and IaaS models designed around commoditized Linux-based servers. An expected 10 billion users will be online, accessing the web and its various content, and the industry has seen a convergence around IP-based technology. Additionally, Linux-based designs allow for system-wide profiling of application characteristics. The customer in this work is an OEM who provides Linux-based servers for telecom solutions; the end customer develops business applications on the server. Customers are interested in a latency profiling mechanism that helps them understand how an application behaves at run time. The latency profiler must find the code paths that make an application block on I/O and other synchronization primitives. This allows the customer to understand performance bottlenecks and tune system and application parameters.
ARTICLE | doi:10.20944/preprints202208.0427.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: cloud-native; observability; cloud computing; logging; structured logging; logs; metrics; traces; distributed tracing; log aggregation; log forwarding; log consolidation
Online: 25 August 2022 (07:32:18 CEST)
Background: Cloud-native software systems often have a much more decentralized structure, with many independently deployable and (horizontally) scalable components, making it more complicated to create a shared and consolidated picture of the overall system state. Today, observability is often understood as a triad of collecting and processing metrics, distributed tracing data, and logs. The result is often a complex observability system composed of three stovepipes whose data is difficult to correlate. Objective: This study analyzes whether these three historically emerged observability stovepipes of logs, metrics and distributed traces could be handled in a more integrated way, with a more straightforward instrumentation approach. Method: This study applied an action research methodology used mainly in industry-academia collaboration and common in software engineering. The research design utilized iterative action research cycles, including one long-term use case. Results: This study presents a unified logging library for Python and a unified logging architecture that uses the structured logging approach. The evaluation shows that several thousand events per minute are easily processable. Conclusion: The results indicate that a unification of the current observability triad is possible without the necessity of developing utterly new toolchains.
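The structured-logging idea behind such a unification can be sketched with Python's standard logging module: every event becomes one self-describing JSON line, so metric values and trace context travel in the same record as the log message and can be correlated downstream. This is not the study's library; the field names (`trace_id`, `latency_ms`) are illustrative assumptions.

```python
# Minimal structured (JSON-line) logging sketch using the stdlib.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        event = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # extra fields carry metrics and tracing context alongside the log
        event.update(getattr(record, "fields", {}))
        return json.dumps(event, sort_keys=True)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed",
         extra={"fields": {"trace_id": "abc123", "latency_ms": 17}})
```

Because each line is machine-parsable JSON, a single forwarding pipeline can aggregate what would otherwise be three separate stovepipes.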
COMMUNICATION | doi:10.20944/preprints202301.0335.v2
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Cloud Computing; Data Protection; Secure Communication; Middleware; Protocols
Online: 30 January 2023 (09:24:01 CET)
In recent years, cloud computing and big data have been considered among the most attractive areas revolutionizing the IT world. The cloud computing paradigm allows running proprietary or difficult-to-port applications outside their original software environment on one or more virtual hardware platforms. We therefore develop techniques that make it possible to secure communication between the communicating cloud entities. These techniques must take several factors into account, because the data transmitted in this type of environment is proprietary and of significant size, and conventional data security techniques are not suitable for today's cloud usage. Hence, the main research goal of this thesis is to define an adaptable architecture and propose a scalable system that supports cloud services. We define feasible security solutions dedicated to the cloud computing context in order to robustly protect data stored in the cloud, focusing more precisely on NoSQL databases. We also intend to propose a secure solution based on the blockchain, which has powerful features such as decentralization, autonomy, security, reliability, and transparency.
ARTICLE | doi:10.20944/preprints202109.0275.v1
Online: 16 September 2021 (11:02:38 CEST)
Molecular Dynamics (MD) simulations model the motion of molecules in atomistic detail and aid in drug design. While simulations of large systems may require several days to complete, analysis of the terabytes of data generated in the process can also be time-consuming. Recent studies have captured exciting and dramatic drug-receptor interactions under cell-like complex conditions. Such advances make simulations of biomolecular interactions more realistic, insightful, and informative, and have the potential to make drug design more realistic. However, currently available resources and techniques do not provide, in reasonable time, a comprehensive understanding of the events seen in simulations. We demonstrate that a big data approach results in significant speedups and provides rapid insights into the simulations performed. Advancing this improvement, we propose a scalable, self-tuning, and responsive framework based on cloud infrastructure to accomplish the best possible MD studies within the given priorities and available resources.
ARTICLE | doi:10.20944/preprints202011.0410.v1
Online: 16 November 2020 (10:39:44 CET)
DevOps is an emerging practice in the software development life cycle. The name DevOps indicates that it is an integration of the development and operations teams, and it is followed to integrate the various stages of the development cycle. DevOps is an extended version of the existing Agile method. It aims at continuous integration, continuous delivery, continuous improvement, faster feedback and security. This paper reviews the building blocks of DevOps, the challenges in adopting it, models to improve DevOps practices, and future work on DevOps.
ARTICLE | doi:10.20944/preprints202011.0020.v1
Online: 2 November 2020 (10:38:31 CET)
The explosion of data has transformed the world, since much more information is available for collection and analysis than ever before. To extract valuable information from data in different dimensions, various deep learning models have been developed in past years. Although these models have demonstrated a strong capability for improving products and services in various applications, training them is still a time-consuming and resource-intensive process. Presently, the cloud, one of the most powerful computing infrastructures, is used for such training. However, managing cloud computing resources so that training is performed efficiently still challenges current techniques. For example, general resource scheduling approaches, such as spread-priority and balanced-resource schedulers, do not work well with deep learning workloads. The resource allocation problem on a cluster can be divided into two subproblems: (1) local resource optimization: improving the resource configuration of a single machine; and (2) global resource optimization: improving cluster-wide resource allocation. In this thesis, we propose two novel container schedulers, FlowCon and SpeCon, designed to address these two subproblems respectively, and specifically to optimize the performance of short-lived deep learning applications in the cloud. FlowCon focuses on the resource configuration of a single node in a cluster; we show that it efficiently improves deep learning task completion time and resource utilization, and reduces the completion time of a specific job by up to 42.06% without sacrificing the overall system time. SpeCon targets cluster-wide resource configuration, speculatively migrating slow-growing models to release resources for fast-growing ones. Based on our experiments, SpeCon improves makespan by up to 24.7% compared to current approaches.
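The speculative idea attributed to SpeCon above (free resources held by jobs whose learning has plateaued) can be sketched as a growth-rate test over each job's recent accuracy history. The window size, threshold and toy accuracy curves are illustrative assumptions, not SpeCon's actual policy.

```python
# Sketch of growth-rate-based selection of migration victims.

def growth_rate(accuracy_history, window=3):
    """Average per-epoch accuracy gain over the recent window."""
    recent = accuracy_history[-window:]
    return (recent[-1] - recent[0]) / (len(recent) - 1)

def pick_migration_victims(jobs, threshold=0.01):
    """Jobs whose recent learning progress falls below the threshold."""
    return [name for name, hist in jobs.items()
            if growth_rate(hist) < threshold]

jobs = {
    "resnet":  [0.60, 0.70, 0.78],    # still improving quickly
    "old-run": [0.88, 0.881, 0.882],  # converged, slow-growing
}
print(pick_migration_victims(jobs))   # → ['old-run']
```

A scheduler would then migrate the flagged job to a less contended node, releasing its resources for the fast-growing one; the speculation is that the plateaued job loses little by running slower.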
ARTICLE | doi:10.20944/preprints202011.0017.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Container Scheduling, Resource Management, Deep Learning, Cloud Computing
Online: 2 November 2020 (10:31:01 CET)
The advent of deep learning has completely reshaped our world. Our daily life is now filled with well-known applications that adopt deep learning techniques, such as self-driving cars and face recognition, and robotics has developed further technologies that share the same principles, such as hand pose recognition and fingerprint recognition. Image recognition technology requires huge databases and various learning algorithms, such as convolutional and recurrent neural networks, which in turn require substantial computational power in the form of CPUs and GPUs. Thus, clients cannot be satisfied with the computational resources of a local machine, and cloud resource platforms have emerged to fill this gap. Docker containers play a significant role in the next generation of microservices-based applications. However, they cannot by themselves guarantee quality of service. From the clients' perspective, they have to balance budget and quality of experience (e.g., response time). The budget depends on the individual business owner, and the required Quality of Experience (QoE) depends on the usage scenarios of different applications; for instance, an autonomous vehicle requires real-time response, but unlocking a smartphone can tolerate delays. Plenty of ongoing projects have developed user-oriented resource allocation optimizations to improve quality of service. Considering users' specifications, including accelerating the training process and specifying the quality of experience, this thesis proposes two differentiated container schedulers for deep learning applications: TRADL and DQoES.
ARTICLE | doi:10.3390/sci2020022
Online: 2 April 2020 (00:00:00 CEST)
On 25 May 2018, Article 17 of the General Data Protection Regulation (GDPR), the Right to Erasure ('Right to be Forgotten'), came into force, making it vital for organisations to identify, locate and delete all Personally Identifiable Information (PII) where a valid request is received from a data subject to erase their PII and the contractual period has expired. This must be done without undue delay, and the organisation must be able to demonstrate that reasonable measures were taken. Failure to comply may incur significant fines, not to mention damage to reputation. Many organisations do not understand their data, and the complexity of a hybrid cloud infrastructure means they do not have the resources to undertake this task. The variety of available tools is quite often unsuitable, as they involve restructuring so that there is one centralised data repository. This research aims to demonstrate that compliance with GDPR's Article 17 Right to Erasure ('Right to be Forgotten') is achievable in a hybrid cloud environment by following a list of recommendations. While 100% retrieval, 100% of the time, will not be possible, we show that small organisations running an ad-hoc hybrid cloud environment can demonstrate that reasonable measures were taken to be Right to Erasure ('Right to be Forgotten') compliant.
REVIEW | doi:10.20944/preprints202001.0378.v1
Subject: Life Sciences, Other Keywords: workflows; containers; cloud computing; Kubernetes; big data; reproducibility
Online: 31 January 2020 (05:15:01 CET)
Containers are gaining popularity in life science research as they provide a solution for encompassing dependencies of provisioned tools, simplify software installations for end users and offer a form of isolation between processes. Scientific workflows are ideal for chaining containers into data analysis pipelines to aid in creating reproducible analyses. In this manuscript we review a number of approaches to using containers as implemented in the workflow tools Nextflow, Galaxy, Pachyderm, Argo, Kubeflow, Luigi and SciPipe, when deployed in cloud environments. A particular focus is placed on each workflow tool’s interaction with the Kubernetes container orchestration framework.
ARTICLE | doi:10.20944/preprints201907.0058.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: laser scanning; point cloud; tree modelling; precision forestry
Online: 3 July 2019 (09:38:08 CEST)
Laser scanning is an effective tool for acquiring the geometric attributes of trees and vegetation, laying a solid foundation for 3-dimensional tree modelling. Existing studies on tree modelling from laser scanning data are vast. Nevertheless, some works do not ensure sufficient modelling accuracy, while others are mainly rule-based and therefore depend heavily on user input. In this paper, we propose a novel method to accurately and automatically reconstruct tree branches from laser scans. We first extract an initial tree skeleton from the input tree point cloud, then simplify the skeleton by iteratively removing redundant components. A global optimization approach is performed to fit a sequence of cylinders approximating the geometry of the tree branches. Experiments on various types of trees from different data sources demonstrate the effectiveness and robustness of our method. The resulting tree models can be further applied to the precise estimation of tree attributes, urban landscape visualization, etc.
TECHNICAL NOTE | doi:10.20944/preprints202102.0304.v1
Subject: Engineering, Automotive Engineering Keywords: Field Information Modeling (FIM)™; point cloud to BIM; point cloud vs. BIM; n-D information modeling; digital engineering and construction
Online: 12 February 2021 (13:19:06 CET)
This study presents established methods, along with new algorithmic developments, to automate point cloud processing within the Field Information Modeling (FIM)™ framework. More specifically, given an n-D designed information model and the point cloud’s spatial uncertainty, the problem of automatically assigning the point clouds to their corresponding elements within the designed model is considered. The methods address two classes of field conditions, namely (i) negligible construction errors and (ii) existing construction errors. Emphasis is placed on describing and defining the assumptions in each method and on shedding light on some of their potentials and limitations in practical settings. Considering the shortcomings of current point cloud processing frameworks, three new, generic algorithms were developed to help solve point-cloud-to-model assignment in field conditions with both negligible and existing (or suspected) construction errors. The effectiveness of the new methods was demonstrated on real-world point clouds acquired from construction projects, with promising results.
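The core assignment problem can be sketched as nearest-element matching gated by the point cloud's spatial uncertainty. This is a toy stand-in for the study's algorithms, with element centroids, `sigma`, and the gating factor `k` all assumed for illustration.

```python
import math

def assign_points(points, elements, sigma, k=2.0):
    """Assign each 3D point to the nearest designed-model element, but
    only if the distance is within k*sigma (the point cloud's spatial
    uncertainty); otherwise flag it (None) as a potential construction
    error or unmodelled geometry."""
    assignment = {}
    for i, p in enumerate(points):
        name, dist = min(
            ((e, math.dist(p, c)) for e, c in elements.items()),
            key=lambda t: t[1])
        assignment[i] = name if dist <= k * sigma else None
    return assignment
```

Points left unassigned are exactly the ones worth inspecting when construction errors are suspected.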
ARTICLE | doi:10.20944/preprints202211.0250.v1
Subject: Earth Sciences, Environmental Sciences Keywords: snow remote sensing; cloud screening; atmospheric correction; radiative transfer
Online: 14 November 2022 (09:38:42 CET)
We present an update of the Snow and Ice (SICE) property retrieval algorithm initially proposed by Kokhanovsky et al. (2019). The algorithm is based on spectral measurements from the Ocean and Land Color Instrument (OLCI) onboard the Sentinel-3 satellites, combined with the asymptotic radiative transfer theory valid for weakly absorbing turbid media. The main improvements include the introduction of a new atmospheric correction, the retrieval of snow impurity load and properties, retrievals for partially snow-covered ground, and various thresholds for assessing retrieval quality. The algorithm is available as Python and Fortran packages at https://github.com/GEUS-SICE/pySICE. The technique can be applied to various optical sensors (satellite and ground-based) operating in the visible and near-infrared regions of the electromagnetic spectrum.
ARTICLE | doi:10.20944/preprints202203.0141.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: rural areas; urban areas; cloud server; monitoring; road monitoring
Online: 10 March 2022 (07:47:15 CET)
People everywhere strive for conveniences that demand less hard work, and smart systems are being developed and deployed around the world to make life easier. In Pakistan, a third-world country, however, better outcomes are hindered by the lack of proper transit from one area to another. Roads are often not in optimal condition, so people face many problems while travelling. In urban areas, people cannot reach their destinations in time; in some cases they damage their vehicles due to cracks in the roads, and in medical emergencies patients often die in transit from the rescue point to the hospital. Similarly, in rural areas farmers face countless problems while bringing their seasonal yield to market. Given the severe harm caused by poor-quality roads, our project proposes a solution for both rural and urban populations by monitoring road conditions. The sensors in the system calculate values and send them to a cloud server, the Blynk platform, where the data are stored. The data can also be provided to the government in the future for timely road maintenance, improving citizens' lifestyles. It is expected that the problems of both urban and rural populations will be solved to a great extent.
ARTICLE | doi:10.20944/preprints202111.0548.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Failure Prediction; Fault-tolerance; Cloud Computing; Artificial Intelligence; Reliability
Online: 29 November 2021 (15:39:23 CET)
Identifying and anticipating potential failures in the cloud is an effective method for increasing cloud reliability and enabling proactive failure management. Many studies have been conducted to predict potential failures, but none have combined SMART (Self-Monitoring, Analysis, and Reporting Technology) hard drive metrics with other system metrics such as CPU utilisation. We therefore propose a combined-metrics approach to failure prediction based on Artificial Intelligence to improve reliability. We tested our approach on data from over 100 cloud servers with four AI algorithms: Random Forest, Gradient Boosting, Long Short-Term Memory, and Gated Recurrent Unit. Our experimental results show the benefits of combining metrics, outperforming the state of the art.
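The "combined metrics" idea reduces to joining SMART drive metrics with system metrics on server and timestamp to build one feature row per observation, which any of the listed classifiers can then consume. The keying scheme and metric names below are illustrative assumptions, not the paper's dataset schema.

```python
def combine_metrics(smart, system):
    """Join SMART drive metrics with system metrics (e.g. CPU
    utilisation) keyed by (server_id, timestamp), producing one merged
    feature row per observation present in both sources."""
    rows = []
    for key, smart_feats in smart.items():
        if key in system:
            rows.append({**smart_feats, **system[key]})
    return rows
```

Observations missing either source are dropped here; in practice one might instead impute, which is a modelling choice the sketch leaves out.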
COMMUNICATION | doi:10.20944/preprints202110.0147.v1
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: gravity; entropic force; Jeans instability; isothermal self-gravitating cloud.
Online: 8 October 2021 (16:42:50 CEST)
An entropic origin of gravity is revisited. An isothermal self-gravitating cloud, treated as an ideal gas, is analyzed. Gravitational attraction within the isothermal cloud in equilibrium is balanced by the pressure, which is of a purely entropic nature. The notion of the Jeans entropy of the cloud, corresponding to the entropy of the self-gravitating cloud in mechanical and thermal equilibrium, is introduced. The balance of gravitational compression and entropic repulsion yields a scaling relation hinting at the entropic origin of the gravitational force. The analysis of the Jeans instability enables the elimination of the “holographic screen” or “holographic principle” otherwise necessary for grounding the entropic origin of gravity.
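For context, the standard Jeans criterion underlying this compression-versus-repulsion balance (textbook form, not taken from the paper itself) is:

```latex
% Jeans length for an isothermal cloud of density \rho, with
% isothermal sound speed c_s = \sqrt{k_B T / (\mu m_H)}:
\lambda_J = \sqrt{\frac{\pi c_s^2}{G \rho}}
% Perturbations with wavelength \lambda > \lambda_J are
% gravitationally unstable and collapse; shorter-wavelength
% perturbations are stabilized by (entropic) pressure.
```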
ARTICLE | doi:10.20944/preprints202105.0594.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: docker swarm; leader election; privilege escalation; defense evasion; cloud
Online: 25 May 2021 (08:57:28 CEST)
With the advent of microservice-based software architectures, an increasing number of modern cloud environments and enterprises use operating-system-level virtualization, often referred to as containers. Docker Swarm is one of the most popular container orchestration infrastructures, providing high availability and fault tolerance. Occasionally discovered container escape vulnerabilities allow adversaries to execute code on the host operating system and operate within the cloud infrastructure. We show that Docker Swarm is currently not secured against misbehaving manager nodes and allows a high-impact, high-probability privilege escalation attack that we refer to as leadership hijacking. Cloud lateral movement and defense evasion payloads allow an adversary to leverage Docker Swarm functionality to control each and every host in the underlying cluster. We demonstrate an end-to-end attack in which an adversary with access to an application running on the cluster achieves full control of the cluster. To reduce the probability of a successful high-impact attack, container orchestration infrastructures must reduce the trust level of participating nodes and, in particular, incorporate adversary-immune leader election algorithms.
ARTICLE | doi:10.20944/preprints202102.0279.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: distributed sensors; Sagnac interferometer; microring sensors; electron cloud sensors
Online: 11 February 2021 (11:05:52 CET)
A micro Sagnac interferometer integration is proposed for electron cloud distributed sensors. The Sagnac interferometer consists of four microring probes integrated into a Sagnac loop. Each microring probe is embedded with silver bars to form the plasmonic wave oscillation. At the center microrings, electrons are trapped and oscillated by the whispering gallery modes (WGMs), where plasmonic antennas are established and applied for wireless fidelity (WiFi) and light fidelity (LiFi) transmissions for distributed sensors. The antenna gains are 2.59 dB, 0.93 dB, 1.75 dB and 1.16 dB, respectively, for the four antennas formed at the center microrings. Polarized light with a wavelength of 1.50 µm is fed into the interferometer input and is polarized randomly into the upstream and downstream directions. The polarization components can be obtained by space-time modulation control. By controlling the electron cloud spin orientation, the space-time projection can be applied, and ultra-high measurement resolution can be obtained in terms of fast switching time (change in phase). In operation, the applied stimuli are substituted by changes in input source power: a variation in light input power causes a change in electron cloud density. Similarly, the electron cloud can be excited by a microscopic medium, allowing the device to be employed as a microscopic sensor. The WGM sensors have sensitivities of 1.35 µm⁻², 0.90 µm⁻², 0.97 µm⁻² and 0.81 µm⁻², respectively. The WGMs behave as a four-point probe for the electron cloud distributed sensors, where electron cloud sensitivities of 2.31, 2.27, 2.22 and 2.38 prad s⁻¹ mm³ (electrons)⁻¹, respectively, are obtained.
ARTICLE | doi:10.20944/preprints202010.0577.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Cloud Computing; Health Systems; Security; Privacy; Data Protection; GDPR
Online: 28 October 2020 (10:00:55 CET)
Cloud-based healthcare systems around the world currently face several challenges, the most important of which is ensuring security and privacy, in other words, the confidentiality, integrity and availability of data. Although the main provisions for data security and privacy were present in the former legal framework for the protection of personal data, the General Data Protection Regulation (GDPR) introduces new concepts and new requirements. In this paper, we present the main changes and key challenges of the GDPR, show how the Cloud-based Security Policy methodology we proposed previously could be modified to be compliant with the GDPR, and discuss how Cloud environments can assist developers in building secure and GDPR-compliant Cloud-based health systems. The primary aim of this paper is to facilitate Cloud Providers in comprehending the framework of the new GDPR, and secondly to identify security measures and security policy rules for the protection of sensitive data in a Cloud-based health system, following our risk-based Security Policy Methodology that assesses the associated security risks and takes into account different requirements from patients, hospitals, and various other professional and organizational actors.
ARTICLE | doi:10.20944/preprints202007.0461.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: cloud; storage; Mobile Adhoc Networks; services; bluetooth; mobile devices
Online: 20 July 2020 (08:43:00 CEST)
The smartphone has become one of the most revolutionary devices in the history of computing. With its wide variety of applications, the scope of the smartphone has broadened significantly in the past few years to include almost every aspect of our daily lives. However, due to limited on-board resources such as CPU, storage, network bandwidth and battery power, smartphones and the mobile networks serving them bring new challenges not encountered in traditional computing and networking environments. This dissertation focuses on improving the network architecture and enhancing current applications on smartphones. It investigates two directions covering three representative categories of mobile services. - In the first direction, the dissertation develops new communication models for smartphone ad-hoc networks to achieve efficient communication in close proximity. It is motivated by the fact that smartphone ad-hoc networks can improve current location-based services and propel new applications; moreover, the new communication models provide complementary alternatives to traditional infrastructure-based wireless networks. - In the second direction, the dissertation focuses on improving the other two categories of services: cloud storage and real-time video streaming for mobile devices. For cloud storage, we introduce a cloud-assisted approach providing a set of advanced file operations, such as encryption, decryption and compression, on smartphones; furthermore, by utilizing the on-board Near Field Communication (NFC) module, we develop an algorithm to securely share files between mobile devices. For real-time video streaming, we propose approaches that identify a user's status by analyzing accelerometer data and then dynamically adjust the buffering mechanism to save network bandwidth on smartphones.
ARTICLE | doi:10.20944/preprints201902.0088.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: side-channel cache attacks; cache misses; AES; cloud computing
Online: 11 February 2019 (10:40:08 CET)
In recent years, CPU caches have revealed themselves as one of the most powerful sources of information leakage. This leakage affects any implementation whose memory accesses, to data or instructions, depend on sensitive information such as private keys. In most cases, side-channel cache attacks do not require any specific permission and just need access to a shared cache. This fact, combined with the spread of cloud computing, where the infrastructure is shared between different customers, has made these attacks quite popular. In this paper, we present a novel approach to exploit the information obtained from the CPU cache. First, we introduce a non-access attack that provides a 97% reduction in the number of encryptions required to obtain a 128-bit AES key. Next, this attack is adapted and extended in what we call the encryption-by-decryption cache attack, or EBD, to obtain a 256-bit AES key. When EBD is applied to AES-256, we are able to obtain the 256 bits of the key with fewer than 10,000 encryptions. These results make EBD, to the best of our knowledge, the first practical attack on AES-256, and also demonstrate that AES-256 is only about 3 times more complex to attack than AES-128 via cache attacks. In both cases the target is the AES T-table-based implementation, and we also demonstrate that our approach works in a cross-VM scenario.
ARTICLE | doi:10.20944/preprints201810.0354.v1
Subject: Earth Sciences, Geoinformatics Keywords: open LiDAR; terrestrial images; building reconstruction; point cloud registration
Online: 16 October 2018 (11:20:43 CEST)
Recent advances in open data initiatives allow free access to a vast amount of open LiDAR data in many cities. However, most of these open LiDAR data over cities are acquired by airborne scanning, where the points on façades are sparse or even completely missing due to the viewpoint and object occlusions in the urban environment. Integrating other sources of data, such as ground images, to complete the missing parts is an effective and practical solution. This paper presents an approach for improving the coverage of open LiDAR data on building façades by using point clouds generated from ground images. A coarse-to-fine strategy is proposed to fuse these two different sources of data. First, the façade point cloud generated from terrestrial images is initially geolocated by matching the SfM camera positions to their GPS meta-information. Next, an improved Coherent Point Drift algorithm with normal consistency is proposed to accurately align building façades to the open LiDAR data. The significance of the work resides in the use of 2D overlapping points on the outlines of buildings, instead of the limited 3D overlap between the two point clouds, and in achieving a reliable and precise registration under possibly incomplete coverage and ambiguous correspondence. Experiments show that the proposed approach can significantly improve the façade details of buildings in open LiDAR data and improve registration accuracy from errors of up to 10 meters to less than half a meter compared with classic registration methods.
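To give a flavour of rigid point-set registration, here is a closed-form 2D alignment (rotation + translation) for already-paired points. It is a deliberately simpler stand-in for Coherent Point Drift, which additionally handles unknown correspondences and outliers; every name here is an assumption of the sketch.

```python
import math

def rigid_align_2d(source, target):
    """Least-squares 2D rigid registration for paired points: recover
    the rotation and translation mapping `source` onto `target`."""
    n = len(source)
    scx = sum(x for x, _ in source) / n
    scy = sum(y for _, y in source) / n
    tcx = sum(x for x, _ in target) / n
    tcy = sum(y for _, y in target) / n
    # Optimal rotation angle from cross- and dot-products of the
    # centred correspondences (2D analogue of the Kabsch algorithm).
    num = sum((x - scx) * (ty - tcy) - (y - scy) * (tx - tcx)
              for (x, y), (tx, ty) in zip(source, target))
    den = sum((x - scx) * (tx - tcx) + (y - scy) * (ty - tcy)
              for (x, y), (tx, ty) in zip(source, target))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Rotate about the source centroid, then move to the target centroid.
    return [(c * (x - scx) - s * (y - scy) + tcx,
             s * (x - scx) + c * (y - scy) + tcy) for x, y in source]
```

CPD replaces the known pairing with soft, probabilistic correspondences, which is what makes it robust to the incomplete coverage the abstract mentions.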
ARTICLE | doi:10.20944/preprints201802.0003.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: LiDAR; smooth contour line; break line; point cloud; forests
Online: 1 February 2018 (03:37:44 CET)
A methodology for generating both accurate and smooth contour lines from Light Detection and Ranging (LiDAR) point clouds is proposed in this paper. To improve the accuracy of contour lines in forested areas, constrained triangulation networks with break lines are constructed to generate the contour lines. For break line extraction, a bi-threshold edge detection method is used to extract complete and reliable break lines. A point cloud elevation adjustment constrained by the break lines, together with an interpolator that considers the contour interval, is proposed to improve the smoothness of the contour lines. The proposed interpolator also avoids contour line intersections during interpolation. Statistical parameters and a shape index are then used to quantitatively evaluate the accuracy and smoothness of the resultant contour lines, filling a gap in the theory of contour line evaluation. The experiments show that high-quality contours, in terms of smoothness and accuracy, can be generated from LiDAR point clouds.
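The elementary step in TIN-based contouring is interpolating where a contour level crosses a triangle edge; a minimal sketch (not the paper's constrained interpolator) is:

```python
def contour_crossing(p1, z1, p2, z2, level):
    """Linearly interpolate where a contour level crosses the edge
    between two points with elevations z1 and z2; returns None when
    the level does not cross the edge."""
    if (z1 - level) * (z2 - level) > 0 or z1 == z2:
        return None
    t = (level - z1) / (z2 - z1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

Chaining such crossings across the triangulation yields raw contour segments; the break-line constraints and smoothing the paper proposes then act on top of this.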
ARTICLE | doi:10.20944/preprints201710.0051.v1
Online: 9 October 2017 (12:40:34 CEST)
The primary attraction of IaaS is the provision of elastic resources on demand. It is therefore imperative that IaaS users have an effective methodology for learning what resources they require, how many, and for how long. However, the heterogeneity of resources, the diverse resource demands of different cloud applications and the variation in application-user behaviors pose a big challenge to IaaS users. In this paper, we propose a unified resource demand forecasting model suited to different applications, various resources and diverse time-varying workload patterns. With this model, taking parameterized applications, resources and workload scenarios as input, the corresponding resource demands during any time interval can be deduced as output. The experiments configure concrete functions and parameters to aid understanding of the model.
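One way to read "parameterized workload scenarios in, demand over any interval out" is a demand function with named parameters, evaluated over the interval of interest. The baseline/trend/periodic decomposition below is an illustrative assumption, not the paper's concrete functions.

```python
import math

def demand(t, base, trend, amp, period):
    """Parameterized time-varying resource demand: a baseline, a linear
    growth trend, and a periodic (e.g. diurnal) workload component."""
    return base + trend * t + amp * math.sin(2 * math.pi * t / period)

def peak_demand(t0, t1, step=0.25, **params):
    """Peak demand over [t0, t1] on a sampling grid: a simple way to
    size how many instances to reserve for that interval."""
    ts = [t0 + i * step for i in range(int((t1 - t0) / step) + 1)]
    return max(demand(t, **params) for t in ts)
```

Different applications or resources are then just different parameter sets passed to the same model.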
ARTICLE | doi:10.20944/preprints201708.0089.v1
Subject: Earth Sciences, Geoinformatics Keywords: WebGIS; Landscape; heritage; personalized learning; the cloud; distance learning
Online: 25 August 2017 (18:39:41 CEST)
The value of landscape, as part of collective heritage, can be captured with GIS thanks to the multilayer approach to spatial configuration. Proficiency in geospatial technologies for collecting, processing, analyzing, interpreting, visualizing and communicating geographic information is increasing among undergraduate and graduate students, and in particular among those training to become geography teachers in secondary education. This training can be carried out through personalized learning and distance learning methodologies. Personalized GIS education aims to integrate students and enhance their understanding of landscape. Some teaching experiences are shown in which the opportunities offered by WebGIS are described, through quantitative tools and techniques that enable this mode of learning and improve its effectiveness. The results of this research show that, through geospatial technologies, students learn landscape as a diversity of elements, but also the complexity of the physical and human factors involved. Several conclusions are highlighted: i) the contribution of geospatial training to education for sustainable development; ii) spatial analysis as a means of acquiring skills regarding measures for landscape conservation; iii) expanding and applying acquired knowledge to other geographic spaces and different landscapes.
ARTICLE | doi:10.20944/preprints202212.0274.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Tonga Volcano; Volcanic cloud; COSMIC-2 RO; stratospheric water vapor
Online: 15 December 2022 (07:39:24 CET)
The Hunga Tonga-Hunga Ha’apai underwater volcano (20.57°S, 175.38°W) violently erupted on 15 January 2022. The volcanic plume evolution during its initial stages is delineated using Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC)-2 radio occultation (RO) measurements. The bending angle (BA) anomaly over the Tonga volcanic plume (within 200 km of the eruption center) at 5:17 UTC on 15 January shows a prominent peak at higher stratospheric heights. The top of the BA anomaly revealed that a negative-to-positive change occurred at ~38 km, indicating the first height at which the RO line-of-sight encounters the volcanic plume. The BA anomaly further revealed an increase of ~50% at ~36 km, confirming that the volcanic plume reached above ~36 km. The evolution of BA perturbations within 24 hours after the initial explosion is also discussed. From RO profiles collocated with the volcanic plume, we find a clear descent of the peak altitude of the BA perturbation from ~36 km to ~30 km within 24 hours after the initial eruption. The results of this study provide insights that advance our understanding of volcanic cloud dynamics and their implementation in volcanic plume modeling.
ARTICLE | doi:10.20944/preprints202005.0174.v1
Keywords: cloud computations; task timing; genetic algorithm; response time; virtual machine
Online: 10 May 2020 (16:15:14 CEST)
Cloud computing is based on computer networks such as the Internet and presents a new pattern for providing, consuming and delivering services such as infrastructure, software and other resources over the network. Inappropriate timing when assigning loads to the virtual machines in the computational space can unbalance the system. One of the challenging planning problems in cloud data centers is considering both the assignment and the migration (transfer) of virtual machines, with the ability to reconfigure them, alongside the integrated features of the hosting physical machines. In this article, we introduce an integrated, dynamic scheduling algorithm based on the genetic algorithm. The suggested method was evaluated on these factors with different inputs, and was implemented using the Java programming language and cloud-SME simulation. The results show that the execution time and the response time were improved by 12 and 1 percent, respectively.
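A genetic algorithm for VM load assignment encodes a candidate schedule as a task-to-VM genome and evolves it toward a lower makespan. This minimal sketch (tournament selection, one-point crossover, random-reset mutation) is a generic GA, not the paper's specific operators; all parameters are illustrative.

```python
import random

def makespan(genome, tasks, vm_speeds):
    """Finish time of the busiest VM under a task->VM assignment."""
    loads = [0.0] * len(vm_speeds)
    for length, vm in zip(tasks, genome):
        loads[vm] += length / vm_speeds[vm]
    return max(loads)

def ga_schedule(tasks, vm_speeds, pop=30, gens=60, seed=1):
    """Minimal genetic algorithm for task-to-VM scheduling, tracking
    the best genome seen across all generations."""
    rng = random.Random(seed)
    n, m = len(tasks), len(vm_speeds)
    fit = lambda g: makespan(g, tasks, vm_speeds)
    population = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    best = min(population, key=fit)
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            p1 = min(rng.sample(population, 2), key=fit)  # tournament
            p2 = min(rng.sample(population, 2), key=fit)
            cut = rng.randrange(1, n)                     # crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                        # mutation
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(child)
        population = nxt
        best = min([best] + population, key=fit)
    return best
```

For four equal tasks on two equal VMs, the GA should converge on the balanced 2-2 split (makespan 8) rather than 3-1 or 4-0.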
ARTICLE | doi:10.20944/preprints201905.0174.v2
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: cloud computing; big data; fog computing; software-defined; networking; network management; resource management; topology.
Online: 26 February 2020 (15:34:25 CET)
Cloud infrastructure provides computing services where computing resources can be adjusted on demand. However, the adoption of cloud infrastructures brings concerns such as reliance on the service provider network, reliability and compliance with service level agreements (SLAs). Software-defined networking (SDN) is a networking concept that suggests segregating a network’s data plane from its control plane, which improves network flexibility and manageability. In this paper, we present an SDN-enabled resource-aware topology framework. The proposed framework employs SLA compliance, a Path Computation Element (PCE) and fair load sharing to achieve better topology features. We also present an evaluation showcasing the potential of our framework.
ARTICLE | doi:10.20944/preprints201905.0004.v1
Subject: Earth Sciences, Environmental Sciences Keywords: waveform decomposition; Hyper Point Cloud; deconvolution; waveform voxel; composite waveform
Online: 3 May 2019 (14:15:41 CEST)
A wealth of Full Waveform (FW) LiDAR data is available to the public from different sources, which is poised to boost the extensive application of FW LiDAR data. However, we lack a handy, open-source tool that potential users can employ for processing and analyzing FW LiDAR data. To this end, we introduce waveformlidar, an R package dedicated to FW LiDAR processing, analysis and visualization. Specifically, this package provides several commonly used waveform processing methods, such as Gaussian, adaptive Gaussian and Weibull decompositions, and deconvolution approaches (Gold and Richardson-Lucy (RL)) with user-customized settings. In addition, we developed functions to derive commonly used waveform metrics for characterizing vegetation structure. Moreover, a new way to directly visualize FW LiDAR data is developed by converting waveforms into points to form the Hyper Point Cloud (HPC), which can be easily adopted and subsequently analyzed with existing discrete-return LiDAR processing tools such as LAStools and FUSION. Basic explorations of the HPC, such as 3D voxelization and conversion from original waveforms to composite waveforms, are also available in this package. All of these functions were developed for small-footprint FW LiDAR data, but they can be easily adapted to large-footprint FW LiDAR data such as Geoscience Laser Altimeter System (GLAS) and Global Ecosystem Dynamics Investigation (GEDI) data. It is anticipated that these functions will facilitate the widespread use of FW LiDAR and be beneficial for better estimating biomass and characterizing vegetation structure at various scales. The package and code examples can be found at https://github.com/tankwin08/waveformlidar.
ARTICLE | doi:10.20944/preprints201904.0106.v1
Subject: Engineering, Other Keywords: cloud computing; security patterns; privacy patterns; software and system architecture
Online: 9 April 2019 (11:46:02 CEST)
Requirements for cloud services include security and privacy. Although many security patterns, privacy patterns and items of non-pattern-based knowledge have been reported, knowing which pattern or combination of patterns to use in a specific scenario is challenging due to the sheer volume of options and the layered cloud stack. To deal with security and privacy in cloud services, this study proposes the Cloud Security and Privacy Metamodel (CSPM). CSPM classifies and supports existing security and privacy patterns in a consistent manner. In addition, CSPM is used to develop a security- and privacy-aware process for developing cloud systems. The effectiveness and practicality of CSPM are demonstrated via several case studies.
ARTICLE | doi:10.20944/preprints201807.0563.v1
Subject: Social Sciences, Microeconomics And Decision Sciences Keywords: technological innovation; cloud computing; compound binomial options; investment risk; uncertainty
Online: 30 July 2018 (07:40:39 CEST)
The purpose of this paper is to evaluate the timing of innovative investment across technology product life cycles using a compound binomial option with management flexibility. Business cycle changes in the macroeconomy affect consumer purchasing power, and the focus is on how to evaluate the optimal investment strategy and the project value. The approach is applied to three product stages (production innovation, manufacturing innovation, and operation innovation), each factored with different risks, to build a technology innovation strategy model. One aim of this study is the option premium of the best strategic timing for each innovation stage. In the compound binomial option framework, manufacturing innovation is considered only after the execution of production innovation; likewise, operation innovation is considered only after the execution of manufacturing innovation. This paper then constructs a dynamic sequential investment decision model, assesses the feasibility of an investment strategy, and determines the appropriate project value and option premium for each stage under possible changes in Gross Domestic Product (GDP). By investigating product life cycle innovation investment with the compound binomial options method, the paper provides more flexible strategic decisions than other trend forecasting criteria.
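The building block that compound (option-on-option) models nest is the plain binomial option price. A standard Cox-Ross-Rubinstein sketch for a European call (textbook method, not the paper's multi-stage model; the numbers are illustrative) is:

```python
import math

def crr_call(S, K, r, sigma, T, steps):
    """Cox-Ross-Rubinstein binomial price of a European call on spot S,
    strike K, rate r, volatility sigma, maturity T (years)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))     # up factor
    d = 1 / u                               # down factor
    p = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then backward induction to time zero.
    values = [max(S * u**j * d**(steps - j) - K, 0.0)
              for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

A compound option then uses such a lattice twice: the underlying of the outer stage (e.g. manufacturing innovation) is the inner stage's option value rather than the spot price.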
ARTICLE | doi:10.20944/preprints201805.0274.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: artificial intelligence; semantic web; natural language; Google cloud speech; SPARQL
Online: 21 May 2018 (12:38:00 CEST)
The main restriction of the Semantic Web is the difficulty of the SPARQL language, which is necessary to extract information from a knowledge representation, also known as an ontology. To make the Semantic Web accessible to people who do not know SPARQL, friendlier interfaces are essential, and natural language is a good alternative. This paper shows the implementation of a friendly prototype interface to query and retrieve, by voice, information from websites built with Semantic Web tools, so that end users avoid the complicated SPARQL language. To achieve this, the interface recognizes a spoken query and converts it into text, processes the text through a Java program to identify keywords, generates a SPARQL query, extracts the information from the website and reads it aloud to the user. In our work, the Google Cloud Speech API performs Speech-to-Text conversion, and Text-to-Speech conversion is done with SVOX Pico. As results, we measured three variables: the success rate of queries, the query response time and a usability survey, whose values allow the evaluation of our prototype. Finally, the proposed interface provides a new approach to the problem, using the Cloud as a Service and reducing barriers to the Semantic Web for people without technical knowledge of Semantic Web technologies.
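The keywords-to-SPARQL step can be sketched as simple template filling. The class/property mapping and the `ex:` namespace below are hypothetical; the paper's Java program would use its own ontology vocabulary.

```python
def build_sparql(keywords, prefix="http://example.org/onto#"):
    """Turn two recognized keywords (a class name and a property name)
    into a templated SPARQL SELECT query over a hypothetical ontology."""
    cls, prop = keywords[0].capitalize(), keywords[1].lower()
    return (f"PREFIX ex: <{prefix}>\n"
            f"SELECT ?s ?o WHERE {{\n"
            f"  ?s a ex:{cls} .\n"
            f"  ?s ex:{prop} ?o .\n"
            f"}}")
```

For example, the recognized keywords "professor" and "teaches" yield a query for all professors and what they teach; real systems add synonym resolution against the ontology's labels.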
ARTICLE | doi:10.20944/preprints201611.0010.v1
Subject: Earth Sciences, Atmospheric Science Keywords: millimeter-wavelength cloud radar; attenuation correction; dual-radar; data fusion
Online: 1 November 2016 (10:05:18 CET)
In order to correct attenuated millimeter-wavelength (Ka-band) radar data and address the problem of instability, an attenuation correction methodology (attenuation correction with variation trend constraint; VTC) was developed. Using synchronous observation conditions and multi-band radars, the VTC method adopts the variation trends of reflectivity in X-band radar data captured with wavelet transform as a constraint to adjust reflectivity factors of millimeter-wavelength radar. The correction was evaluated by comparing reflectivities obtained by millimeter-wavelength cloud radar and X-band weather radar. Experiments showed that attenuation was a major contributory factor in the different reflectivities of the two radars when relatively intense echoes exist, and the attenuation correction developed in this study significantly improved data quality for millimeter-wavelength radar. Reflectivity differences between the two radars were reduced and reflectivity correlations were enhanced. Errors caused by attenuation were eliminated, while variation details in the reflectivity factors were retained. The VTC method is superior to the bin-by-bin method in terms of correction amplitude and can be used for attenuation correction of shorter wavelength radar assisted by longer wavelength radar data.
ARTICLE | doi:10.20944/preprints202208.0477.v1
Subject: Earth Sciences, Geoinformatics Keywords: Merapi Volcano; Indonesia; Natural Hazards; Disaster Risk and Point-cloud technology
Online: 29 August 2022 (08:34:39 CEST)
A spatial approach based on deformation measurement of the volcanic dome and crater rim is key to evaluating the activity of a volcano such as Merapi, where the associated disaster risk regularly takes lives. Within this framework, this study aims to detect localized deformation and change in the summit area that occurred concomitantly with the reported dome growth and explosions. The methodology was based on two data sets: a LiDAR-based dataset from 2012 and a UAV dataset from 2014. The results show that during the period 2012-2014 the crater walls stood 100 m to 120 m above the crater floor at their maximum (north to east-south-east sector), while the west and north sectors present a topographic range of 40 to 80 m. During this period, the crater rim around the dome remained generally stable (no large collapse). The opening of a new vent on the surface of the dome displaced an equivalent volume of 2.04 × 10^4 m³, corresponding to a maximum of -9 m (±0.9 m) vertically. We conclude that during 2012-2014, when the dome of Merapi experienced phreatic or phreatomagmatic explosions, the topography around the dome rose. This rise does not seem to be related to large wall collapses, and it is likely that modifications in the subsurface triggered those changes.
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: containers; virtual machines; cloud; COVID-19; serverless; analytics; software defined infrastructure
Online: 19 February 2021 (11:31:42 CET)
The XPRIZE Foundation designs and operates multi-million-dollar, global competitions to incentivize the development of technological breakthroughs that accelerate humanity toward a better future. To combat the COVID-19 pandemic, the Foundation coordinated with several organizations to make available data sets about different facets of the disease and to provide the computational resources needed to analyze those data sets. This paper is a case study of the requirements, design, and implementation of the XPRIZE Data Collaborative, a cloud-based infrastructure that enables the XPRIZE to meet its COVID-19 mission and host future data-centric competitions. We examine how a Cloud Native Application can use an unexpected variety of Cloud technologies, ranging from containers and serverless computing to older ones such as Virtual Machines. We also document the effects that the pandemic had on application development in the Cloud. We include our experiences of having users successfully exercise the Data Collaborative, detailing the challenges encountered and areas for improvement and future work.
ARTICLE | doi:10.20944/preprints202011.0537.v1
Subject: Engineering, Civil Engineering Keywords: BIM model; point cloud; tunnel engineering; data fusion; cross-section analysis
Online: 20 November 2020 (11:09:19 CET)
This paper introduces a method for tunnel point cloud and BIM model integration and cross-section monitoring, providing information to analyse tunnel cross-sections and surrounding rock deformation, and support for tunnel maintenance and reconstruction. Three types of data are processed for the integration: the laser-scanning point cloud, the BIM tunnel model, and the terrain model from oblique photogrammetry. An adaptive BIM modelling scheme is proposed for tunnels with non-standard (special-shaped) structures. Precise spatial registration of the data sets is conducted by applying the singular value decomposition (SVD) algorithm to calculate the transformation parameters from the point cloud model to the BIM model. Since the tunnel central line is differentiable to high order, a cross-section calculation method based on the tangent vector is proposed to obtain the cross-sectional profile of the tunnel at any mileage. The proposed method has been verified by applying it to a tunnel reconstruction project. The experimental results show that the tunnel point cloud and the BIM model were highly coincident after the integration. The developed program can effectively extract the cross-section of the tunnel at any mileage and correctly express the spatial relationship between the BIM tunnel, the tunnel point cloud, and the external mountainous terrain.
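The SVD-based registration step can be sketched as the classic Kabsch solution for corresponding point pairs; how the correspondences are chosen in the paper's pipeline is not detailed here, so the sketch assumes they are given:

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Estimate rotation R and translation t aligning src to dst in the
    least-squares sense (Kabsch/SVD), as used to register a point cloud
    to a BIM model. src, dst: (N, 3) arrays of corresponding points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # fix an improper (reflective) solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given the recovered `(R, t)`, every point `p` of the scan maps into the BIM frame as `R @ p + t`.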
ARTICLE | doi:10.20944/preprints202008.0487.v1
Subject: Social Sciences, Geography Keywords: Twitter; data reliability; risk communication; data mining; Google Cloud Vision API
Online: 22 August 2020 (02:32:40 CEST)
While Twitter has been touted as providing up-to-date information about hazard events, the reliability of tweets is still a concern. Our previous publication extracted relevant tweets containing information about the 2013 Colorado flood event and its impacts. Using those relevant tweets, this research further examined the reliability (accuracy and trueness) of the tweets by examining the text and image content and comparing them to other publicly available data sources. Both manual identification of text information and automated (Google Cloud Vision API) extraction of images were implemented to balance accurate information verification against efficient processing time. The results showed that both the text and the images contained useful information about damaged/flooded roads and street networks. This information will help emergency response coordination efforts and inform the allocation of resources when enough tweets contain geocoordinates or location/venue names. This research will help identify reliable crowdsourced risk information to enable near-real-time emergency response through better use of crowdsourced risk communication platforms.
ARTICLE | doi:10.20944/preprints202005.0384.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: 5G Wireless Technology; Artificial Intelligence; Blockchain; Cloud Computing; Cyber-Physical System
Online: 24 May 2020 (16:10:26 CEST)
The landscape of centralized cloud computing is now changing to distributed and decentralized clouds, with promising impacts on energy consumption, resource availability, resilience, and customer experience. This research highlights the impacts of emerging IT trends, namely 5G wireless technology, blockchain, and industrial Artificial Intelligence (AI), on the development and realization of the next generation of cloud computing. The integration of these technologies into cyber-physical system and cloud manufacturing paradigms is explained, and a unified edge-fog-cloud architecture is proposed for successful implementation in manufacturing systems.
ARTICLE | doi:10.20944/preprints202003.0150.v1
Subject: Engineering, General Engineering Keywords: CFD; numerical optimization; CAD parametrization; cloud-based; design space exploration; SSIM
Online: 9 March 2020 (09:50:23 CET)
In this manuscript, an automated framework dedicated to design space exploration and design optimization studies is presented. The framework integrates a set of numerical simulation, computer-aided design, numerical optimization, and data analytics tools using scripting capabilities. The tools used are open-source and freeware, and can be deployed on any platform. The main feature of the proposed methodology is the use of a cloud-based parametric computer-aided design application, which allows the user to change any parametric variable defined in the solid model. We demonstrate the capabilities and flexibility of the framework using computational fluid dynamics applications; however, the same workflow can be used with any numerical simulation tool (e.g., a structural solver or a spreadsheet) that is able to interact via a command line interface or using scripting languages. We conduct design space exploration and design optimization studies using quantitative and qualitative metrics, and to reduce the high computing times and computational resources intrinsic to these kinds of studies, concurrent simulations and surrogate-based optimization are used.
ARTICLE | doi:10.20944/preprints201804.0253.v1
Subject: Earth Sciences, Atmospheric Science Keywords: compact cloud discharges; narrow bipolar pulses; propagation effects; finitely conducting ground
Online: 19 April 2018 (11:39:20 CEST)
Propagation effects on the Narrow Bipolar Pulses (NBPs) or the radiation fields generated by compact cloud discharges as they propagate over finitely conducting ground are presented. The results are obtained using a sample of NBPs recorded with high time resolution from close thunderstorms in Sri Lanka. The results show that the peak amplitude and the temporal features such as the Full Width at Half Maximum (FWHM), zero crossing time and the time derivative of NBPs can be significantly distorted by propagation effects. For this reason the study of peak amplitudes and temporal features of NBPs and the remote sensing of current parameters of compact cloud discharges should be conducted using NBPs recorded under conditions where the propagation effects are minimal.
ARTICLE | doi:10.20944/preprints202212.0471.v1
Subject: Mathematics & Computer Science, Other Keywords: Deep learning; Convolutional Neural Networks; LSTM; MediaPipe; Google Cloud; Object detection; Classification
Online: 26 December 2022 (04:10:07 CET)
Median, the American Sign Language (ASL) Interpretation Software, is a web application capable of interpreting American Sign Language in real time, using an internet connection and a primary web camera, and covering basic phrases and letters. Extensive use of Deep Learning and Neural Networks, specifically Convolutional Neural Networks, enables Median to interpret video inputs and generate accurate results displayed directly to the user in text format. The ultimate goal for Median is to act as a bridge between hearing people and members of the deaf community, allowing deaf people to communicate with non-signing people using American Sign Language. Furthermore, Median has been designed to benefit people who lack access to a human ASL translator: its format as a website makes it accessible anywhere at any time, giving it greater availability than human interpreters. Median is designed to be a versatile program with great potential for growth and expansion.
ARTICLE | doi:10.20944/preprints202111.0410.v1
Subject: Engineering, Other Keywords: Data compression; data hiding; psnr; mse; virtual data; public cloud; quantization error
Online: 22 November 2021 (15:17:12 CET)
Nowadays, information security is a challenge, especially when data are transmitted or shared in public clouds. Many researchers have proposed techniques that fail to provide data integrity, security, authentication, or to address other issues related to sensitive data. The techniques most commonly used to protect data during transmission on a public cloud are cryptography, steganography, and compression. The proposed scheme suggests an entirely new approach to data security on the public cloud: it makes secret data completely invisible behind a carrier object, and the hiding cannot be detected through image performance parameters such as PSNR, MSE, and entropy. Detailed results are explained in the results section of the paper. The proposed technique performs better than existing techniques as a security mechanism on a public cloud. Its primary focus is to minimize the integrity loss of public storage data caused by unrestricted access rights of users. Improving the reusability of the carrier even after data concealment is a challenging task, and it is achieved through the suggested approach.
ARTICLE | doi:10.20944/preprints202107.0429.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Blockchain; Multi-Factor Authentication; Access Control; Internet of Vehicles; Cloud-Enabled Systems
Online: 20 July 2021 (09:27:29 CEST)
Continuous and emerging advances in Information and Communication Technology (ICT) have enabled IoT-to-Cloud applications to be induced by data pipelines coupled with Edge Intelligence-based architectures. Advanced vehicular networks greatly benefit from these architectures due to the implicit functionalities focused on realizing the Internet-of-Vehicles (IoV) vision. However, IoV is susceptible to attacks, where adversaries can easily exploit existing vulnerabilities, and several attacks may succeed due to inadequate or weak authentication techniques. Hence, there is a timely need to harden the authentication process through cutting-edge access control mechanisms. This paper proposes a Blockchain-based Multi-Factor authentication model that uses an embedded Digital Signature (MFBC_eDS) for vehicular clouds and Cloud-enabled IoV. Our proposed MFBC_eDS model consists of a scheme that integrates the Security Assertion Mark-up Language (SAML) with Single Sign-On (SSO) capabilities for a connected Edge-to-Cloud ecosystem. MFBC_eDS draws an essential comparison with the baseline authentication scheme suggested by Karla and Sood. Building on the foundations of Karla and Sood's scheme, an embedded Probabilistic Polynomial-Time Algorithm (ePPTA) and an additional hash function for the Pi generated during Karla and Sood's authentication are proposed and discussed. The preliminary analysis shows that the approach is better suited to counter major adversarial attacks in an IoV-centered environment, based on the Dolev-Yao adversarial model, while satisfying aspects of the CIA triad.
ARTICLE | doi:10.20944/preprints202105.0564.v1
Subject: Engineering, Automotive Engineering Keywords: Heritage-BIM; Stone paved road; Procedural Modeling; VPL scripting; CAD; Point Cloud.
Online: 24 May 2021 (11:04:51 CEST)
The transition from Building Information Modelling (BIM) to Heritage Building Information Modelling (H-BIM) aims to achieve adequate knowledge of the artefact to be preserved, progressively replacing the traditional methods of restoration and structural reinforcement projects with new tools for managing both existing information and new interventions. The aim of this paper is to show the application of the H-BIM method to a stone-paved road located in the Archaeological Site of Pompeii. In detail, starting from a laser-scanner-based survey, supplemented by control points georeferenced with a total station, point clouds were handled by means of several BIM-based tools to perform the road design process, from the digital elevation model (DEM) to the corridor representation. Subsequently, a visual programming application based on the Python language was adopted to update the corridor information through the object property set. As a preliminary result, a tool complete with graphical and non-graphical information is proposed for use in conservation, maintenance and restoration projects.
ARTICLE | doi:10.20944/preprints202101.0331.v1
Subject: Engineering, Construction Keywords: 3D Laser Scanners 1; Point-cloud Data 2; Reality Capture; BIM; Refurbishment
Online: 18 January 2021 (12:23:24 CET)
The urgent need to improve performance in the construction industry has led to the adoption of many innovative technologies. 3D laser scanners are amongst the leading technologies being used to capture and process asset or construction project data for use in various applications. Due to its nascent nature, many questions are still unanswered about 3D laser scanning, which in turn contributes to the slow adoption of the technology. These include the role of 3D laser scanners in capturing and processing raw construction project data: How accurate is 3D laser scanner or point cloud data? How does laser scanning fit with other wider emerging technologies such as Building Information Modelling (BIM)? This study adopts a proof-of-concept approach which, in addition to answering these questions, illustrates the application of the technology in practice. The study finds that the quality of the data, commonly referred to as point cloud data, is still a major issue, as it depends on the distance between the target object and the 3D laser scanner's station. Additionally, the quality of the data is still very dependent on data file sizes and the computational power of the processing machine. Lastly, the connection between laser scanning and BIM approaches is still weak, as what can be done with a point cloud data model in a BIM environment is still very limited. These findings reinforce existing views on the use of 3D laser scanners in capturing and processing construction project data.
ARTICLE | doi:10.20944/preprints202011.0650.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Internet of Thing (IoT); Hierarchical Routing; Overlap clustering; Fog computing; Cloud computing
Online: 25 November 2020 (15:16:33 CET)
Given the great growth of IoT networks and the need for highly reliable networks, and considering that IoT devices are severely limited in memory, processing power and battery, highly efficient routing is needed to guarantee a long network lifetime. This paper therefore proposes an energy-efficient routing method based on overlapping clustering inspired by Grey theory. Overlap clustering means that some Things collect data that must be sent to two or more Fog nodes for processing. In the proposed method, the best node is selected as cluster head based on factors such as remaining energy, distance, link expiration time, and signal power for receiving data from Things at the Fog nodes. In the next step, the Fog nodes' processed data are sent to the server hierarchically along a symmetric tree. The main issue is thus a routing method for sending data to the Cloud that does not focus only on energy but also considers other factors such as delay and network lifetime. Simulation results show that HR-IoT reduces the average end-to-end delay by more than 17.2% and 23.1%, decreases the response time by more than 20.1% and 25.78%, and increases the packet delivery rate by more than 23.1% and 28.78% and the lifetime by more than 25.1% and 28.78% compared with the EECRP and ERGID approaches.
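The cluster-head selection can be sketched as a weighted score over the factors listed above. The weights, and the assumption that every factor is pre-normalized to [0, 1] (with distance inverted so that closer is better), are illustrative choices, not taken from the paper:

```python
def cluster_head_score(node, weights=(0.4, 0.2, 0.2, 0.2)):
    """Score a candidate cluster head from the paper's factors: remaining
    energy, distance to the Fog node, link expiration time, signal power.
    All factor values are assumed normalized to [0, 1]."""
    w_e, w_d, w_l, w_s = weights
    return (w_e * node["energy"]              # more residual energy is better
            + w_d * (1.0 - node["distance"])  # shorter distance is better
            + w_l * node["link_time"]         # longer link expiration is better
            + w_s * node["signal"])           # stronger signal is better

def select_cluster_head(nodes):
    """Pick the highest-scoring node of a cluster as its head."""
    return max(nodes, key=cluster_head_score)

nodes = [
    {"id": 1, "energy": 0.9, "distance": 0.2, "link_time": 0.8, "signal": 0.7},
    {"id": 2, "energy": 0.3, "distance": 0.5, "link_time": 0.4, "signal": 0.5},
]
head = select_cluster_head(nodes)
```

In an overlap-clustering setting, a Thing covered by two Fog nodes would simply run this selection once per Fog node it reports to.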
ARTICLE | doi:10.20944/preprints202011.0321.v1
Subject: Keywords: MPS FESTO workstation; production management; cloud computing; process management; Android; iOS; RFID
Online: 10 November 2020 (15:09:37 CET)
Industry 4.0 is present in smart and digital manufacturing, helping manufacturing companies improve productivity and reduce delivery times and related costs. The objective of this work is to demonstrate, through three integrated MPS Festo stations (Distribution, Pick & Place and Sorting) using Internet of Things and Google Analytics technologies, the benefits of remote performance monitoring. This objective is achieved by implementing the monitoring system at the three MPS Festo stations. The data obtained through the integration of the Festo stations and their respective sensors are processed and analyzed in a cloud infrastructure, so that the main metrics are visualized and transmitted on a panel. This monitoring system improves the perception of process performance, as the main performance metrics, such as productivity, cycle time and parts produced, are displayed. The cloud infrastructure allows remote viewing and monitoring of the system.
ARTICLE | doi:10.20944/preprints202002.0269.v1
Subject: Mathematics & Computer Science, Analysis Keywords: IIoT; Platform Selection; Multi criteria analysis; MCDA; AHP; PROMETHEE-II; Cloud; Methodology
Online: 19 February 2020 (04:02:12 CET)
Industry 4.0 is having a great impact on smart-manufacturing efforts. It is not a single product but is composed of several technologies, one of them being the Industrial Internet of Things (IIoT). Currently, several companies offer widely varied implementation options, which poses a new challenge to companies that want to implement IoT in their processes. This challenge calls for multi-criteria analysis to make a repeatable and justified decision, requiring a set of alternatives and criteria. This paper proposes a new methodology and comprehensive criteria to help organizations make an informed decision by applying multi-criteria analysis. We suggest a new, original use of PROMETHEE-II, with a full example from weight calculation up to IoT platform selection, showing that this methodology is an effective approach for other organizations interested in selecting an IoT platform. The proposed criteria stand out from previous work by including not only technical aspects but also economic and social criteria, providing a full view of the problem analyzed. A case study was used to validate the proposed methodology.
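A minimal PROMETHEE-II ranking can be sketched as follows, here with the usual (strict) preference function and benefit-type criteria only; the paper's criteria, weights and preference functions are richer than this illustration, and the data below are made up:

```python
import numpy as np

def promethee_ii(scores, weights):
    """Rank alternatives with PROMETHEE-II using the usual criterion.

    scores: (n_alternatives, n_criteria) matrix, all criteria maximized;
    weights: criterion weights summing to 1. Returns net outranking flows
    (higher net flow = better rank)."""
    n = scores.shape[0]
    pref = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # usual criterion: preference 1 where strictly better, else 0
                better = (scores[i] > scores[j]).astype(float)
                pref[i, j] = np.dot(weights, better)
    phi_plus = pref.sum(axis=1) / (n - 1)    # positive outranking flow
    phi_minus = pref.sum(axis=0) / (n - 1)   # negative outranking flow
    return phi_plus - phi_minus              # net flow

# three hypothetical IoT platforms scored on three criteria
scores = np.array([[0.9, 0.8, 0.7],
                   [0.5, 0.6, 0.4],
                   [0.3, 0.2, 0.9]])
weights = np.array([0.5, 0.3, 0.2])
net_flows = promethee_ii(scores, weights)
```

The platform with the largest net flow is the recommended choice; in the paper, the weights themselves come from a prior calculation step rather than being set by hand.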
ARTICLE | doi:10.20944/preprints201912.0260.v1
Subject: Earth Sciences, Atmospheric Science Keywords: GEMS; UV; VIS; hyperspectral data; deep convective cloud; vicarious calibration; OMI; TROPOMI
Online: 19 December 2019 (13:14:55 CET)
As one of the geostationary constellations for environmental monitoring in the next decade, the Geostationary Environment Monitoring Spectrometer (GEMS) is designed to observe the Asia-Pacific region to provide information on atmospheric chemicals, aerosols and cloud properties. For continuous monitoring of the sensor performance after its launch in early 2020, we suggest deep convective clouds (DCCs) as a possible target for the vicarious calibration of GEMS, the first UV/VIS hyperspectral sensor onboard a geostationary satellite. The TROPOspheric Monitoring Instrument (TROPOMI) and the Ozone Monitoring Instrument (OMI) are used as proxies for GEMS, and a conventional DCC detection approach applying a thermal threshold test is used, based on collocations with the Advanced Himawari Imager (AHI) onboard the Himawari-8 geostationary satellite. DCCs are frequently detected over the GEMS observation area, on average over 200 pixels in a single observation scene. Considering the spatial resolution of GEMS (3.5 km × 7 km, similar to TROPOMI) and its temporal resolution (8 times a day), the availability of DCCs for vicarious calibration of GEMS is expected to be sufficient. Inspection of the DCC reflectivity spectra estimated from the OMI and TROPOMI data also shows a promising result: even though their observation geometries and sensor characteristics are quite different, the estimated DCC spectra agree quite well, within a known uncertainty range, with comparable spectral features. When the DCC detection is further improved by applying both visible and infrared tests, the variability of DCC reflectivity from the TROPOMI data is reduced by half, from 10% to 5%, mainly due to the efficient screening of cold thin cirrus by the visible test and of bright warm clouds by the infrared test. The precise DCC detection is also expected to contribute to the accurate characterization of the cloud reflectivity, which will be further investigated.
ARTICLE | doi:10.20944/preprints201709.0074.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: recommendation system; context awareness; location based services; mobile computing, cloud-based computing
Online: 18 September 2017 (08:54:04 CEST)
The ubiquity of mobile sensors (such as GPS, accelerometers and gyroscopes), together with increasing computational power, has enabled easier access to contextual information, which has proved its value in the next generation of recommender applications. The importance of contextual information has been recognized by researchers in many disciplines, such as ubiquitous and mobile computing, to filter query results and provide recommendations based on the user's status. A context-aware recommendation system (CoARS) provides a personalized service to each individual user, driven by his or her particular needs and interests, at any location and anytime; it therefore changes in real time as a user's circumstances change. CoARS is one of the major applications that has been refined over the years thanks to evolving geospatial techniques and big-data management practices. In this paper, a CoARS is designed and implemented that combines context information from smartphone sensors with user preferences to improve the efficiency and usability of recommendations. The proposed approach combines the user's context information (such as location, time, and transportation mode), personalized preferences (using the individual's past behavior), and item-based recommendations (such as an item's ranking and type) to filter the item list personally. The context-aware methodology is based on preprocessing and filtering of raw data, context extraction and context reasoning. This study examined the application of such a system to recommending a suitable restaurant, using both web-based and Android platforms. The implemented system uses CoARS techniques to provide beneficial and accurate recommendations to users, and its capabilities were evaluated successfully with a recommendation experiment and a usability test.
ARTICLE | doi:10.20944/preprints201612.0016.v1
Subject: Earth Sciences, Environmental Sciences Keywords: mobile mapping system; LiDAR point cloud; 2D-3D registration; panoramic sensor model
Online: 2 December 2016 (10:58:19 CET)
For multi-sensor integrated systems such as a mobile mapping system (MMS), data fusion at the sensor level, i.e., the 2D-3D registration between optical camera and LiDAR, is a prerequisite for higher-level fusion and further applications. This paper proposes a line-based registration method for panoramic images and LiDAR point clouds collected by an MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for the two camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, though the latter is more reliable.
ARTICLE | doi:10.20944/preprints201610.0004.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Vehicular Ad Hoc Network (VANET); Cloud Computing; privacy preservation; security and privacy
Online: 3 October 2016 (20:40:50 CEST)
This project designed a set of network security mechanisms for cloud applications in VANETs. At present, cloud computing is one of the government's development priorities for the industry. In this project we divide clouds into public, private, and hybrid clouds: vehicles or passengers can access road conditions or public transport information through the public cloud; public transportation can access current driving records, and users can access relevant enterprise information, on the private cloud; and a hybrid cloud is a combination of public and private clouds. Both cloud computing and VANETs require network and information security. At present, much research on VANET security focuses only on message communication and neglects information storage, while research on cloud computing security focuses only on information protection and neglects user privacy and anonymity. This project designed a set of network and information security mechanisms in line with the requirements of confidentiality, authentication, non-repudiation, conditional anonymity, and conditional untraceability. The project primarily needs to achieve the following: 1. an authentication mechanism for passenger and vehicle to verify each other's identity, with Single Sign-On; 2. vehicle or user privacy and anonymity, with the ability to replace the anonymous ID and related parameters of the vehicle or user; 3. a private communication mechanism that enables any vehicle or user to communicate privately; and 4. an information security encryption method that can encrypt information on a cloud server to prevent unauthorized access by internal personnel or hackers.
ARTICLE | doi:10.20944/preprints201609.0076.v1
Subject: Mathematics & Computer Science, Analysis Keywords: performance evaluation; cloud services; group decision making; multicriteria decision making; fuzzy sets
Online: 22 September 2016 (10:45:55 CEST)
This paper formulates the performance evaluation of cloud services as a multicriteria group decision making problem, and presents a fuzzy multicriteria group decision making method for evaluating the performance of cloud services. Interval-valued intuitionistic fuzzy numbers are used to model the inherent subjectiveness and imprecision of the performance evaluation process. An effective algorithm is developed based on the technique for order preference by similarity to ideal solution method and the Choquet integral operator for adequately solving the performance evaluation problem. An example is presented to demonstrate the applicability of the proposed fuzzy multicriteria group decision making method for solving the multicriteria group decision making problem in real world situations.
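As a crisp simplification of the method described above (the paper additionally models judgments with interval-valued intuitionistic fuzzy numbers and aggregates them with the Choquet integral), the TOPSIS core can be sketched as follows, with made-up data:

```python
import numpy as np

def topsis(matrix, weights):
    """Rank cloud services with crisp TOPSIS.

    matrix: (n_services, n_criteria) performance ratings, all criteria
    assumed benefit-type; weights: criterion weights. Returns closeness
    coefficients in [0, 1]; higher means closer to the ideal solution."""
    norm = matrix / np.linalg.norm(matrix, axis=0)    # vector normalization
    v = norm * weights                                # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)        # ideal / anti-ideal points
    d_pos = np.linalg.norm(v - ideal, axis=1)         # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)          # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                    # relative closeness

# three hypothetical cloud services rated on two benefit criteria
matrix = np.array([[0.9, 0.8],
                   [0.4, 0.3],
                   [0.6, 0.5]])
weights = np.array([0.6, 0.4])
closeness = topsis(matrix, weights)
```

In the group-decision setting of the paper, each expert's fuzzy ratings would first be aggregated (via the Choquet integral) into a single decision matrix before this ranking step.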
ARTICLE | doi:10.20944/preprints202201.0070.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Web app; Cloud computing; High Availability; High performance computing; Docker container; Horizontal Scaling
Online: 6 January 2022 (10:33:58 CET)
This study analyses some of the leading technologies for building and configuring IT infrastructures that provide services to users. For modern applications, guaranteeing service continuity even under very high computational load or network problems is essential. Our configuration aims mainly to be highly available (HA) and horizontally scalable, that is, able to increase the computational resources delivered when needed and to reduce them when they are no longer necessary. Various architectural possibilities are analysed, and the central schemes used to tackle problems of this type, including disaster recovery, are described. The benefits offered by virtualisation technologies are highlighted and combined with modern techniques for managing Docker containers, which are used to build the back-end of a sample infrastructure for a use case we have developed. In addition, an in-depth analysis is reported of the central autoscaling policies that can help manage high loads of requests from users to the services provided by the infrastructure. The results show an average response time of 21.7 milliseconds with a standard deviation of 76.3 milliseconds, indicating excellent responsiveness; some peaks are associated with high-stress events for the infrastructure, but even then the response time does not exceed 2 seconds. The results of the use case, studied over nine months, are presented and discussed. During the study period, we improved the back-end configuration and defined the main metrics to deploy the web application efficiently.
ARTICLE | doi:10.20944/preprints202105.0468.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: 3D reconstruction; ICP; Azure Kinect; RGB-D image processing; point cloud filtering; rapeseed
Online: 20 May 2021 (09:52:14 CEST)
3D reconstruction using an RGB-D camera offers a good balance between hardware cost, point cloud quality and automation. However, due to limitations of the sensor's structure and imaging principle, the acquired point cloud suffers from heavy noise and is difficult to register. This paper proposes a three-dimensional reconstruction method using the Azure Kinect to address these inherent problems. Color, depth and near-infrared images of the target are captured from six perspectives with the Azure Kinect sensor. The binarized 8-bit infrared image is multiplied with the general RGB-D image alignment result provided by Microsoft to remove ghost images and most of the background noise. To filter the floating-point and outlier noise of the point cloud, a neighborhood maximum filtering method is proposed that removes abrupt points in the depth map; the floating points are thus eliminated before the point cloud is generated, and a pass-through filter then removes the remaining outlier noise. To address the shortcomings of the classic ICP algorithm, an improved method is proposed: by continuously reducing the size of the down-sampling grid and the distance threshold between corresponding points, the point clouds of the views are registered in three successive passes until the complete color point cloud is obtained. Extensive experiments on rape plants show that the point cloud accuracy obtained by this method is 0.739 mm, a complete scan takes 338.4 seconds, and color fidelity is high. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy at a significantly higher reconstruction speed, with much lower hardware cost and easy automation of the scanning system. This research demonstrates a low-cost, high-precision 3D reconstruction technology with the potential to be widely used for non-destructive measurement of crop phenotypes.
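The depth-map pre-filtering step the abstract names can be sketched as below. The abstract does not give the filter's exact definition, so this is one plausible reading: a pixel that deviates sharply from its local neighbourhood (the reference here is the neighbourhood median, an assumption) is treated as a floating point and invalidated before the point cloud is generated. The window size and threshold are illustrative.

```python
import numpy as np

def neighborhood_filter(depth, win=3, thresh=30):
    """Suppress abrupt depth pixels ('floating points') by comparing each
    pixel against its local neighbourhood before point-cloud generation.
    A sketch of the idea only; the paper's exact rule is not specified."""
    h, w = depth.shape
    r = win // 2
    out = depth.copy()
    for y in range(h):
        for x in range(w):
            nb = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            if abs(int(depth[y, x]) - int(np.median(nb))) > thresh:
                out[y, x] = 0  # zero depth = invalid, dropped at unprojection
    return out

# A flat 100 mm depth patch with a single floating spike at the centre
depth = np.full((5, 5), 100, dtype=np.uint16)
depth[2, 2] = 500
cleaned = neighborhood_filter(depth)
```

Invalidated pixels (depth 0) are simply skipped when the depth map is unprojected into 3D points, so the spike never enters the cloud.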
ARTICLE | doi:10.20944/preprints202103.0283.v1
Subject: Social Sciences, Accounting Keywords: Cloud Manufacturing (CMfg); 3D Printing Device Resources; HPSO; Multi-objective Optimization; Baldwin effect
Online: 10 March 2021 (13:20:59 CET)
Focusing on service control factors, rapid changes in manufacturing environments, the difficulty of resource allocation evaluation, and resource optimization for 3D printing services (3DPSs) in cloud manufacturing environments, an indicator evaluation framework is proposed for the cloud 3D printing (C3DP) order task execution process, based on a Pareto optimal set algorithm that optimizes and evaluates remotely distributed 3D printing equipment resources. Combined with a multi-objective data normalization method, an optimization model for C3DP order execution based on the Pareto optimal set algorithm is constructed, exploiting the agents' dynamic autonomy and distributed processing. The model can perform functions such as automatic matching and optimization of candidate services, and it keeps the C3DP order task execution process dynamic and reliable. Finally, a case study is designed to test the applicability and effectiveness of the C3DP order task execution process based on the analytic hierarchy process and technique for order of preference by similarity to ideal solution (AHP-TOPSIS) optimal set algorithm and the Baldwin effect.
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Cloud manufacturing; Computer Numerical Control (CNC); Control as a Service; Cyber-physical system
Online: 28 May 2019 (10:25:13 CEST)
Cloud-based CNC (C-CNC) is an emerging paradigm of Industry 4.0 in which computer numerical control (CNC) functionalities are moved to the cloud and provided to manufacturing machines as a service. Among many benefits, C-CNC allows manufacturing machines to leverage advanced control algorithms running on cloud computers to boost their performance at low cost, without the need for major hardware upgrades. However, a fundamental challenge of C-CNC is how to guarantee the safety and reliability of machine control given variable Internet quality of service, especially on public Internet networks. We propose a three-tier redundant architecture to address this challenge. We then prototype tier one of the architecture on a 3D printer successfully controlled via C-CNC over public Internet connections, and discuss follow-on research opportunities.
ARTICLE | doi:10.20944/preprints202206.0155.v1
Subject: Social Sciences, Other Keywords: word segmentation; word cloud analysis; TF-IDF weight analysis; co-word analysis; network analysis
Online: 10 June 2022 (08:18:59 CEST)
A digital text abstract presents the essential information of an article, and by analyzing abstracts rigorously and mining them for knowledge we can identify research trends and value. This study therefore focuses on the abstracts of index journals in China and Taiwan from July 2010 to June 2020 (a total of 3,283 abstracts). Using concepts from text mining and natural language processing (NLP), it constructs processes such as text retrieval, word segmentation, word cloud analysis, TF-IDF weight analysis, co-word analysis, network analysis and trend analysis, and applies them to a large body of text data. The results show that research in China covers the fields of social sports and sports science, while research in Taiwan covers both the natural and social sciences. The network diagram highlights the richness of sports-related research fields in the two regions, but research on sports philosophy is comparatively rare. It is suggested that all disciplines/departments re-allocate resources more evenly, so as to achieve a balanced development trend and help open a new chapter in the sports academic field.
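The TF-IDF weighting step named in the pipeline above can be sketched as follows; the toy token lists stand in for segmented abstracts and are purely illustrative.

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for tokenised documents: term frequency scaled by
    the log inverse document frequency. Terms appearing in every document
    (here, 'sport') get weight zero, so distinctive terms stand out."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency per term
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

# Two toy 'abstracts' already segmented into words
weights = tfidf([["sport", "science", "sport"], ["sport", "philosophy"]])
```

In the study's setting the documents are the 3,283 segmented abstracts, and the highest-weighted terms per region feed the word cloud and co-word analyses.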
ARTICLE | doi:10.20944/preprints202111.0162.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Internet of Things; IoTivity; HEMS; HAN; Cloud; Backend-as-a-Service; RTOS; Contiki-OS
Online: 9 November 2021 (09:22:51 CET)
In developing countries today, population growth and the penetration of higher-standard-of-living appliances in homes have resulted in a rapidly increasing residential load. In South Africa, recent rolling blackouts and electricity price increases have only highlighted this reality, calling for sustainable measures to reduce overall consumption and peak load. The dawn of the smart grid concept, embedded systems and ICTs has paved the way to novel home energy management system (HEMS) designs. In this regard, the Internet of Things (IoT), an enabler for smart and efficient energy management systems, is receiving increasing attention for optimizing HEMS design and mitigating its deployment cost constraints. In this work, we propose an IoT platform for residential energy management applications focusing on interoperability, low cost, technology availability and scalability. We address the backend complexities of IoT Home Area Networks (HAN) using the OCF IoTivity-Lite middleware. To improve quality and servicing and to reduce cost and complexity, this work leverages open-source cloud technologies from Back4App as a Backend-as-a-Service (BaaS), providing consumers and utilities with a data communication platform. An experimental study illustrates time- and space-agnostic "mind-changing" energy feedback, Demand Response Management (DRM) and appliance operation control through a HEMS app on an Android smartphone.
ARTICLE | doi:10.20944/preprints202108.0078.v1
Subject: Biology, Anatomy & Morphology Keywords: biodiversity; insolation; biogeography; lidar; point-cloud; multi-spectral imagery; spatial prediction model; forest canopy
Online: 3 August 2021 (13:05:43 CEST)
Incident solar radiation (insolation) passing through the forest canopy to the ground surface is either absorbed or scattered. This phenomenon, known as radiation attenuation, is measured using the extinction coefficient (K). The amount of radiation at the ground surface of a given site is effectively controlled by the canopy's surface and structure, determining its suitability for plant species. Menhinick's and Simpson's biodiversity indexes were selected as spatially explicit response variables for the regression equation, using canopy structure metrics as predictors. Independent variables include modeled area solar radiation, LiDAR-derived canopy height, effective leaf area index data derived from multi-spectral imagery, and canopy strata metrics derived from LiDAR point-cloud data. The results support the hypotheses that (1) canopy surface and strata variability may be associated with understory species diversity due to habitat partitioning and radiation attenuation, and (2) such a model can predict both this relationship and biodiversity clustering. The study data yielded significant correlations between predictor and response variables and were used to produce a multiple-linear model comprising canopy relief, texture of heights, and vegetation density to predict understory plant diversity. When analyzed for spatial autocorrelation, the predicted biodiversity data exhibited non-random spatial continuity.
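The multiple-linear model described above amounts to an ordinary least-squares fit; a minimal sketch follows, with synthetic predictor values standing in for the study's canopy metrics (the variable names and numbers are illustrative, not the study's data).

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least-squares fit of a multiple-linear model, of the kind
    used to predict an understory diversity index from canopy relief,
    texture of heights and vegetation density."""
    A = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])  # intercept column
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef  # [intercept, slope_1, slope_2, ...]

# Synthetic predictors with a known relation y = 0.5 + 2*x1 - 1*x2
X = [[1, 2], [2, 1], [3, 4], [4, 3]]
y = [0.5 + 2 * a - b for a, b in X]
coef = fit_linear_model(X, y)
```

The spatial-autocorrelation check reported in the abstract would then be run on the model's predictions, not on this fitting step itself.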
ARTICLE | doi:10.20944/preprints202106.0544.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Snowfall Retrieval; Snow Water Equivalent; Cloud Liquid Water; Emissivity; Brightness Temperature; Passive Microwave; GPM
Online: 22 June 2021 (14:22:16 CEST)
Falling snow alters its own microwave signatures when it begins to accumulate on the ground, making snowfall retrieval challenging. This paper investigates the effects of snow-cover depth and cloud liquid water content on the microwave signatures of terrestrial snowfall using reanalysis data and multi-annual observations by the Global Precipitation Measurement (GPM) core satellite, with particular emphasis on the 89 and 166 GHz channels. It is found that over shallow snow cover (snow water equivalent (SWE) ≤ 100 kg m-2) and low values of cloud liquid water path (LWP 100–150 g m-2), the scattering signal of light snowfall (intensities ≤ 0.5 mm h−1) is detectable only at 166 GHz, while for higher snowfall rates the signal can also be detected at 89 GHz. However, when the SWE exceeds 200 kg m-2 and the LWP is greater than 100–150 g m-2, the emission from the increased liquid water content in snowing clouds becomes the only surrogate microwave signal of snowfall, and it is stronger at 89 GHz than at 166 GHz. The results also reveal that over high latitudes above 60°N, where the SWE is greater than 200 kg m-2 and the LWP is lower than 100–150 g m-2, the snowfall microwave signal cannot be detected by GPM without a priori data about SWE and LWP. Our findings provide quantitative insights for improving the retrieval of snowfall, in particular over snow-covered terrain.
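The detectability regimes quoted in the abstract can be summarised as a simple decision rule. This is only a paraphrase of the reported thresholds, not the retrieval algorithm; the abstract's 100–150 g m-2 LWP band is simplified here to a single 150 g m-2 cut-off, and intermediate regimes the abstract does not resolve are lumped together.

```python
def snowfall_signal_channels(swe, lwp, rate):
    """Which GPM high-frequency channels (GHz) carry a usable snowfall
    signal, per the abstract's thresholds. SWE in kg m-2, LWP in g m-2,
    snowfall rate in mm h-1."""
    if swe <= 100 and lwp <= 150:
        # Shallow snow cover: scattering signal, 166 GHz first
        return [166] if rate <= 0.5 else [89, 166]
    if swe > 200 and lwp > 150:
        # Deep snow, wet clouds: emission signal, stronger at 89 GHz
        return [89, 166]
    if swe > 200 and lwp <= 150:
        # Deep snow, dry clouds: undetectable without a priori SWE/LWP
        return []
    return [89, 166]  # intermediate regimes, not resolved by the abstract
```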
ARTICLE | doi:10.20944/preprints202009.0512.v1
Subject: Earth Sciences, Geoinformatics Keywords: OpenStreetMap; cities; slums; network analysis; remote sensing; human development; urban planning; GIS; cloud computing
Online: 22 September 2020 (08:58:35 CEST)
The recent growth of high-resolution spatial data, especially in developing urban environments, is enabling new approaches to civic activism, urban planning and the provision of services necessary for sustainable development. A special area of great potential and urgent need deals with urban expansion through informal settlements (slums). These neighborhoods are too often characterized by a lack of connections, both physical and socioeconomic, with detrimental effects to residents and their cities. Here, we show how a scalable computational approach based on the topological properties of digital maps can identify local infrastructural deficits and propose context-appropriate minimal solutions. We analyze 1 terabyte of OpenStreetMap (OSM) crowdsourced data to create worldwide indices of street block accessibility and local cadastral maps and propose infrastructure extensions with a focus on 120 Low and Middle Income Countries (LMIC) in the Global South. We illustrate how the lack of physical accessibility can be identified in detail, how the complexity and costs of solutions can be assessed and how detailed spatial proposals are generated. We discuss how these diagnostics and solutions provide a multiscalar set of new capabilities – from individual neighborhoods to global regions – that can coordinate local community knowledge with political agency, technical capability, and further research.
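The accessibility diagnostic described above is fundamentally topological: a parcel lacks physical access when no path connects it to the street network. A toy version of that reachability check is sketched below; the real analysis runs over terabytes of OSM data with planar-graph machinery, and the graph and node names here are made up for illustration.

```python
from collections import deque

def inaccessible_parcels(adjacency, street_nodes):
    """Flag parcels with no path to any street node in a block's
    parcel-adjacency graph: a toy proxy for the topological
    accessibility deficit the index measures."""
    reachable = set(street_nodes)
    queue = deque(street_nodes)
    while queue:  # breadth-first search outward from the street network
        u = queue.popleft()
        for v in adjacency.get(u, []):
            if v not in reachable:
                reachable.add(v)
                queue.append(v)
    return sorted(p for p in adjacency if p not in reachable)

# Parcel C has no edge reaching the street: it needs an access extension
block = {"street": ["A"], "A": ["street", "B"], "B": ["A"], "C": []}
landlocked = inaccessible_parcels(block, ["street"])
```

Proposing a minimal infrastructure extension then amounts to finding the cheapest set of new edges that empties this landlocked set, which is where the cost assessment mentioned in the abstract comes in.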
ARTICLE | doi:10.20944/preprints201806.0009.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Indoor Positioning Technology; Bluetooth 4.0; Manufacturing Private Cloud; Internet of Things
Online: 1 June 2018 (08:15:12 CEST)
To enhance industrial competitiveness and increase productivity, countries have strived to create smart factories by introducing technologies such as the Internet of Things, big data and artificial intelligence into production lines and building cyber-physical systems to promote manufacturing efficiency. For mission assignment, production line management and manufacturing field analysis, the location information of employees, machines and materials is essential, and it becomes even more important as manufacturing efficiency is pursued. A Bluetooth low energy (BLE) positioning system for manufacturing is developed in this research. A "tag tracking" mechanism is proposed and adopted: a Beacon captures the location information, and a BLE receiver receives the information broadcast by the Beacon. The position information from the BLE receiver is compared with the data in the database to calculate the location of the target, and the status of the target can also be obtained from the BLE receiver data. Compared with mobile-device-based approaches, this method reduces energy consumption and makes maintenance simple and easy; in real applications, the target need not be limited to humans. A "regional label positioning technology" is also investigated in this research, whose key factors are defining suitable zone locations, arranging BLE receiver locations, and the positioning analysis theory. The developed system is tested in real industrial applications, and the test results demonstrate the feasibility of this technology.
CONCEPT PAPER | doi:10.20944/preprints202111.0418.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: general theory of information; named set; knowledge structure; structural machine; autopoietic machine; multi-cloud infrastructure.
Online: 23 November 2021 (10:42:36 CET)
The General Theory of Information (GTI) tells us that information is represented, processed and communicated using physical structures. The physical universe is made up of structures combining matter and energy. According to GTI, "Information is related to knowledge as energy is related to matter." GTI also provides tools to deal with the transformation of information and knowledge. We present here the application of these tools to the design of digital autopoietic machines with higher efficiency, resiliency and scalability than information processing systems based on Turing machines. We discuss the utilization of these machines for building autopoietic and cognitive applications in a multi-cloud infrastructure.
ARTICLE | doi:10.20944/preprints202102.0463.v1
Subject: Earth Sciences, Environmental Sciences Keywords: atmospheric correction; cloud mask; water vapor content; spectral radiance; surface spectral albedo; aerosol optical thickness
Online: 22 February 2021 (12:01:13 CET)
In this work, we propose a simple and robust technique for the retrieval of underlying surface spectral albedo using spaceborne observations. It can be used to process both multispectral moderate-resolution satellite data and multi-zone high-spatial-resolution data. The technique works automatically for different types of land surfaces without relying on huge databases accumulated in advance. New procedures for cloud discrimination and for the retrieval of atmospheric water vapor content are presented. The key point of the proposed atmospheric correction technique is a single-wavelength method for determining the atmospheric aerosol optical thickness without reference to a specific type of underlying surface spectrum. Retrievals of spectral albedo for various land surfaces with the developed technique, performed using computer simulation and experimental data, demonstrate high retrieval accuracy.