ARTICLE | doi:10.20944/preprints202105.0230.v1
Online: 11 May 2021 (10:31:35 CEST)
Cloud computing is no longer an emerging topic, and virtualization is an indispensable technology for making cloud computing the next sign of the coming Internet revolution. Scientists never stop exploring the possibilities of this technology, investigating countless experiments and applications to enhance the quality of virtual services. However, the isolated construction of virtual machines does not spare the technology from unwanted data volumes or excessive processing time. Containers were created to address these problems by distributing applications without initiating an entire virtual machine. Docker, an important player in this field, is an open-source member of the container family. SwarmKit, the management tool for Docker containers, does not take heterogeneity into account, either in virtualized containers or in physical nodes. A cluster contains different nodes, and each node differs in configuration, resource availability, and so on. Furthermore, the requirements initiated by different services change all the time: the demand might be CPU-intensive (e.g., clustering services), memory-intensive (e.g., web services), or the complete opposite. In this paper, we focus on exploring the Docker container cluster and design DRAPS, a resource-aware placement scheme, to improve system performance in a heterogeneous cluster.
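The core idea of resource-aware placement can be sketched in a few lines: score each heterogeneous node by its free capacity of the service's dominant resource and place the container on the best-fitting feasible node. This is a hypothetical illustration under assumed data structures, not DRAPS's actual algorithm:

```python
def place(service, nodes):
    """Hypothetical resource-aware placement sketch.

    service: {'demand': {'cpu': ..., 'mem': ...}}
    nodes:   [{'name': ..., 'free': {'cpu': ..., 'mem': ...}}, ...]
    """
    # Identify the dominant resource of the request (CPU- or memory-intensive).
    dominant = max(service["demand"], key=service["demand"].get)
    # Keep only nodes whose free capacity can fit the whole demand.
    feasible = [n for n in nodes
                if all(n["free"][r] >= service["demand"][r]
                       for r in service["demand"])]
    if not feasible:
        return None
    # Prefer the node with the most headroom on the dominant resource.
    return max(feasible, key=lambda n: n["free"][dominant])["name"]
```

A CPU-heavy request would thus land on the node with the most spare CPU, while a memory-heavy one would go to the node with the most spare memory, which is exactly the heterogeneity a uniform scheduler ignores.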
ARTICLE | doi:10.20944/preprints202010.0534.v1
Online: 27 October 2020 (07:38:57 CET)
In the past decade, we have witnessed a dramatically increasing volume of data collected from varied sources. The explosion of data has transformed the world, as more information is available for collection and analysis than ever before. To maximize its utilization, various machine and deep learning models, e.g., CNNs and RNNs, have been developed to study data and extract valuable information from different perspectives. While data-driven applications improve countless products, training models for hyperparameter tuning is still time-consuming and resource-intensive. Cloud computing provides infrastructure support for training deep learning applications. Cloud service providers, such as Amazon Web Services, create isolated virtual environments (virtual machines and containers) for clients, who share physical resources, e.g., CPU and memory. On the cloud, resource management schemes are implemented to enable better sharing among users and boost system-wide performance. However, general scheduling approaches, such as spread-priority and balanced-resource schedulers, do not work well with deep learning workloads. In this project, we propose SpeCon, a novel container scheduler optimized for short-lived deep learning applications. Built on virtualized containers, such as Kubernetes and Docker, SpeCon analyzes the common characteristics of training processes. We design a suite of algorithms to monitor training progress and speculatively migrate slow-growing models to release resources for fast-growing ones. Extensive experiments demonstrate that SpeCon improves the completion time of an individual job by up to 41.5%, system-wide performance by 14.8%, and makespan by 24.7%.
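The speculative-migration decision described above can be illustrated with a minimal sketch: rank running jobs by their recent progress (e.g., loss decrease per epoch) and nominate the slowest-growing fraction for migration. The function name, inputs, and the fixed fraction are assumptions for illustration, not SpeCon's actual algorithms:

```python
def pick_migrations(progress, fraction=0.25):
    """Nominate slow-growing training jobs for speculative migration.

    progress: {job_id: recent improvement rate, e.g., loss decrease per epoch}.
    Returns the slowest-growing `fraction` of jobs (at least one, if any exist),
    whose resources could be released for fast-growing jobs.
    """
    ranked = sorted(progress, key=progress.get)  # slowest growth first
    k = max(1, int(len(ranked) * fraction)) if ranked else 0
    return ranked[:k]
```

In a real scheduler the migration itself would checkpoint the container and restart it on a less contended node; the sketch only shows the selection step.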
SHORT NOTE | doi:10.20944/preprints202010.0508.v1
Online: 26 October 2020 (09:55:52 CET)
In recent decades, we have witnessed a spectacular information explosion over the Internet. Millions of users consume the Internet through various services, such as mobile applications and online games. The service providers, on the back-end side, are supported by state-of-the-art infrastructures. To provide services at scale, virtualization is one of the emerging technologies used in data centers and cloud environments to improve the quality of services. In this project, we aim to develop a dynamic resource management scheme based on virtual containers. It collects runtime job progress from the running tasks and allocates resources dynamically to improve overall system performance.
ARTICLE | doi:10.20944/preprints202011.0020.v1
Online: 2 November 2020 (10:38:31 CET)
The explosion of data has transformed the world, since much more information is available for collection and analysis than ever before. To extract valuable information from the data in different dimensions, various deep learning models have been developed in the past years. Although these models have demonstrated a strong capability to improve products and services in various applications, training them is still a time-consuming and resource-intensive process. Presently, the cloud, one of the most powerful computing infrastructures, is used for this training. However, managing cloud computing resources and performing the training efficiently still challenges current techniques. For example, general resource scheduling approaches, such as spread-priority and balanced-resource schedulers, do not work well with deep learning workloads. Moreover, the resource allocation problem on a cluster can be divided into two subproblems: (1) local resource optimization, improving the resource configuration of a single machine; and (2) global resource optimization, improving cluster-wide resource allocation. In this thesis, we propose two novel container schedulers, FlowCon and SpeCon, designed to address these two subproblems respectively and specifically to optimize the performance of short-lived deep learning applications in the cloud. FlowCon focuses on the resource configuration of a single node in a cluster; we show that it efficiently improves the completion time and resource utilization of deep learning tasks, reducing the completion time of a specific job by up to 42.06% without sacrificing overall system time. SpeCon targets cluster-wide resource configuration, speculatively migrating slow-growing models to release resources for fast-growing ones. Based on our experiments, SpeCon improves makespan by up to 24.7% compared to current approaches.
ARTICLE | doi:10.20944/preprints202011.0017.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Container Scheduling, Resource Management, Deep Learning, Cloud Computing
Online: 2 November 2020 (10:31:01 CET)
The advent of deep learning has completely reshaped our world. Our daily life is now filled with well-known applications that adopt deep learning techniques, such as self-driving cars and face recognition. Furthermore, robotics has developed more forms of technology that share the same principles as face recognition, such as hand pose recognition and fingerprint recognition. Image recognition technology requires huge databases and various learning algorithms, such as convolutional neural networks and recurrent neural networks, which in turn require substantial computational power, such as CPUs and GPUs. Thus, the computational resources of a local machine often cannot satisfy clients, and cloud resource platforms have emerged to meet this demand. Docker containers play a significant role in next-generation microservices-based applications. However, they cannot by themselves guarantee the quality of service. From the clients' perspective, one has to balance the budget against the quality of experience (e.g., response time). The budget depends on the individual business owner, and the required Quality of Experience (QoE) depends on the usage scenarios of different applications: for instance, an autonomous vehicle requires real-time response, but unlocking a smartphone can tolerate delays. Many ongoing projects have developed user-oriented resource allocation optimizations to improve the quality of service. Considering users' specifications, including accelerating the training process and specifying the quality of experience, this thesis proposes two differentiated container schedulers for deep learning applications: TRADL and DQoES.
ARTICLE | doi:10.20944/preprints202012.0194.v1
Subject: Mathematics & Computer Science, Other Keywords: container terminal; simulation; simulation-based optimisation; meta-heuristic; horizontal transportation
Online: 8 December 2020 (09:59:50 CET)
At container terminals, many cargo handling processes are interconnected and take place in parallel. Within short time windows, many operational decisions need to be taken considering both time and equipment efficiency. During operation, many sources of disturbance exist, and they can unravel otherwise perfectly coordinated processes. Simulation-based optimization is an approach that considers such disturbance factors while optimizing a given objective. This study analyses simulation-based optimization as a procedure to simultaneously scale the amount of utilized equipment and to adjust the choice and tuning of operational policies. Four meta-heuristics, the Tree-structured Parzen Estimator, Bayesian Optimization, Simulated Annealing, and Random Search, guide the simulation-based optimization process. The results show that simulation-based optimization is suitable for identifying the amount of required equipment and well-performing policies. There is, however, no clear ranking of which meta-heuristic finds the best approximation of the optimum. The approximated optima suggest that pooling terminal trucks, as well as assigning yard blocks close to the quay crane, is preferable. With an increasing number of quay cranes, the number of optimal terminal trucks per quay crane decreases, as does the range of truck utilization within one experiment.
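The simplest of the four meta-heuristics, Random Search, illustrates the simulation-based optimization loop well: repeatedly sample a configuration (equipment count plus policy), evaluate it with a noisy simulation, and keep the best. The toy objective below is a stand-in assumption, not the study's terminal model:

```python
import random

def simulate(trucks, policy, seed):
    """Stand-in for a terminal simulation returning a noisy makespan.
    (Hypothetical objective: too few or too many trucks both hurt,
    and pooled trucks perform better, mimicking the study's findings.)
    """
    rng = random.Random(seed)
    base = 100 / trucks + 2 * trucks
    penalty = 0 if policy == "pooled" else 5
    return base + penalty + rng.uniform(0, 1)  # stochastic disturbance

def random_search(n_iters=50, seed=0):
    """Sample random (truck count, policy) configurations; keep the best."""
    rng = random.Random(seed)
    best = None
    for i in range(n_iters):
        cand = (rng.randint(1, 12), rng.choice(["pooled", "dedicated"]))
        cost = simulate(*cand, seed=i)
        if best is None or cost < best[0]:
            best = (cost, cand)
    return best[1]
```

The other meta-heuristics (TPE, Bayesian Optimization, Simulated Annealing) differ only in how the next candidate is chosen; the evaluate-and-keep-best loop is the same.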
ARTICLE | doi:10.20944/preprints202007.0362.v1
Subject: Engineering, Civil Engineering Keywords: Maritime transport; Automatic mooring system; Container vessel; TEU; CO2 emissions.
Online: 16 July 2020 (13:45:05 CEST)
Given the increase in greenhouse gas emissions produced by ships during navigation and maneuvering in port, a direct consequence of the growth of maritime traffic, the international community has developed a broad set of regulations to limit such emissions. The installation in commercial ports of automatic mooring systems based on vacuum suction cups (AMS), which considerably reduce the time required to carry out the mooring and unmooring maneuvers of ships, is a factor that significantly contributes to the decrease in polluting gas emissions in high-traffic commercial ports. The objective of the present work is to verify the influence of the use of AMS on the polluting gas emissions produced in facilities dedicated to container ship traffic. It examines the CO2 emissions of container ships that call at the only three container ports equipped with AMS: Salalah (Oman), Beirut (Lebanon), and Ngqura (South Africa). Between them, these three ports supported the transit of 6 million TEUs in 2017. The emissions calculation takes into account the time saved when performing mooring maneuvers with the new AMS system compared with maneuvers without it. Two different calculation methods are used, EPA and ENTEC, and their results are compared to obtain the reduction in emissions per TEU in these terminals during mooring maneuvers. The paper concludes with a discussion of the reductions in emissions obtained and the advantages of installing AMS in commercial ports located near population centres.
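Activity-based methodologies of the kind the EPA and ENTEC methods follow estimate emissions as engine power times load factor times operating time times an emission factor; the saving from AMS then comes directly from the shortened maneuvering time. The sketch below uses this general form with illustrative parameter values, which are assumptions and not the paper's figures:

```python
def maneuver_emissions_kg(power_kw, load_factor, hours, ef_g_per_kwh):
    """Activity-based emissions estimate, E = P * LF * T * EF,
    the general form underlying EPA/ENTEC-style methodologies.
    Returns kilograms of CO2 for one maneuver."""
    return power_kw * load_factor * hours * ef_g_per_kwh / 1000.0

def emissions_saved_kg(power_kw, load_factor, hours_saved, ef_g_per_kwh):
    """CO2 avoided by shortening the mooring maneuver by `hours_saved`."""
    return maneuver_emissions_kg(power_kw, load_factor, hours_saved,
                                 ef_g_per_kwh)
```

For example, an auxiliary engine load of 1000 kW at a 20% load factor, with 30 minutes saved per maneuver and an assumed factor of 600 g CO2/kWh, would avoid roughly 60 kg of CO2 per call; dividing by the TEUs handled yields the per-TEU reduction the paper reports.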
ARTICLE | doi:10.20944/preprints201911.0015.v1
Subject: Engineering, Civil Engineering Keywords: phase change materials (pcms); metals; container; latent heat storage; corrosion
Online: 3 November 2019 (15:06:53 CET)
Phase Change Materials (PCMs) are latent heat storage media with high potential for integration into building structures and technical systems. Their solid-liquid transition is commonly utilized for thermal energy storage in building applications, which means that some kind of encapsulation is necessary. This is often solved with metal containers, which also offer high thermal conductivity and resistance to mechanical damage, enhancing the performance of these so-called latent heat thermal energy storage (LHTES) systems. However, the selection of a suitable metal is rather challenging. It depends, among other things, on eliminating undesirable interactions between the storage medium and the surrounding metal. The heat storage medium must be reliably sealed in the metal container, especially when the storage system is integrated into systems such as domestic hot water storage tanks, where PCM leaks can negatively affect human health. The aim of this study was to evaluate the interaction between selected commercially available organic and inorganic PCMs and metals. The evaluation is based on the calculation of the corrosion rate and uses a gravimetric method to determine the weight variations of the metal samples. Results show that aluminium is the most suitable container material, with the lowest mass loss and only minimal visual changes on the surface after prolonged exposure to PCMs.
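The gravimetric corrosion rate mentioned above is conventionally computed from the measured mass loss, exposed area, exposure time, and metal density, in the ASTM G1 style CR = K·W/(D·A·T). The sketch assumes the common unit convention (mg, cm², hours, g/cm³, output in mm/year); the specific numbers in the test are illustrative, not the study's measurements:

```python
def corrosion_rate(mass_loss_mg, area_cm2, hours, density_g_cm3):
    """Gravimetric corrosion rate in mm/year.

    ASTM G1-style formula CR = 87.6 * W / (D * A * T) with
    W in mg, A in cm^2, T in hours, D in g/cm^3.
    """
    return 87.6 * mass_loss_mg / (density_g_cm3 * area_cm2 * hours)
```

A low rate combined with little visual surface change is what qualifies a metal (here, aluminium) as a suitable container material.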
ARTICLE | doi:10.20944/preprints202201.0070.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Web app; Cloud computing; High Availability; High performance computing; Docker container; Horizontal Scaling
Online: 6 January 2022 (10:33:58 CET)
This study analyses some of the leading technologies for the construction and configuration of IT infrastructures that provide services to users. For modern applications, guaranteeing service continuity even under very high computational load or network problems is essential. Our configuration has among its main objectives being highly available (HA) and horizontally scalable, that is, able to increase the computational resources delivered when needed and reduce them when they are no longer necessary. Various architectural possibilities are analysed, and the central schemes used to tackle problems of this type are also described in terms of disaster recovery. The benefits offered by virtualisation technologies are highlighted and brought together with modern techniques for managing Docker containers, which are used to build the back-end of a sample infrastructure for a use case we have developed. In addition, an in-depth analysis is reported on the central autoscaling policies that can help manage high loads of requests from users to the services provided by the infrastructure. The results show an average response time of 21.7 milliseconds with a standard deviation of 76.3 milliseconds, indicating excellent responsiveness. Some peaks are associated with high-stress events for the infrastructure, but even in these cases the response time does not exceed 2 seconds. The results of the considered use case, studied over nine months, are presented and discussed. During the study period, we improved the back-end configuration and defined the main metrics to deploy the web application efficiently.
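A common family of horizontal autoscaling policies of the kind analysed above is threshold-based: add a replica when average load crosses an upper bound, remove one when it falls below a lower bound, and clamp the count to a range that preserves high availability. The thresholds and bounds below are illustrative assumptions, not the study's configuration:

```python
def autoscale(replicas, cpu_avg, low=0.3, high=0.7, min_r=2, max_r=10):
    """Threshold-based horizontal scaling rule (illustrative sketch).

    Scale out when average CPU utilization exceeds `high`, scale in when
    it drops below `low`, and keep at least `min_r` replicas so the
    service stays highly available during node failures.
    """
    if cpu_avg > high and replicas < max_r:
        return replicas + 1
    if cpu_avg < low and replicas > min_r:
        return replicas - 1
    return replicas
```

Evaluating such a rule against recorded load traces is one way to tune the thresholds before deploying them on the live infrastructure.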
ARTICLE | doi:10.20944/preprints202005.0313.v1
Subject: Social Sciences, Business And Administrative Sciences Keywords: Vehicle Routing Problem; Delivery and Pickup; Time Windows; Left-over Cost; Reusable Container
Online: 19 May 2020 (09:27:52 CEST)
Much previous research has proposed frameworks and algorithms to optimize routes and reduce the total transportation cost, which accounts for over 70% of overall logistics cost. However, it is very hard to find cases in which these mathematical models or algorithms have been applied to practical business environments, especially daily operating logistics services such as convenience stores. Most previous research has focused on developing an optimal algorithm that can solve the mathematical problem within practical time while satisfying all constraints, such as delivery and pick-up capacity and time windows. For a daily pick-up and delivery service supporting several convenience stores, it is necessary to consider the unit transporting container as well as the demand, truck capacity, traveling distance, and traffic congestion. In particular, reusable transporting containers, i.e., trays, should be regarded as an important asset of the logistics center. However, if the mathematical model focuses only on satisfying delivery-related constraints and does not consider the cost of trays, empty trays are often left at the pick-up points when there is not enough space in the truck. In this research, we propose a mathematical model for optimizing pick-up and delivery plans that extends the general vehicle routing problem with simultaneous delivery and pickup and time windows by considering left-over cost. Numerical experiments show that the proposed model can reduce the total delivery cost. The proposed approach may be applied to various logistics businesses that use reusable transporting containers, such as shipping containers, refrigerating containers, trays, and pallets.
ARTICLE | doi:10.20944/preprints201909.0151.v1
Subject: Engineering, Civil Engineering Keywords: STS container crane; uplift and derailment; time history analysis; pushover analysis; fragility assessment
Online: 15 September 2019 (06:26:29 CEST)
While the container crane is an important part of daily port operations, it has received little attention compared with other infrastructure, such as buildings and bridges. Crane collapse due to an earthquake affects the operation of the port and indirectly impacts the economy. This study proposes fragility analyses for various damage levels of the container crane that allow the port owner and partners to better understand the seismic vulnerability presented by container cranes. A large number of nonlinear time history analyses were applied to a three-dimensional (3D) finite element model to quantify the vulnerability of the container crane while considering uplift and derailment behavior. The uncertainty in the demand on and capacity of the crane structure was also considered through random variables, i.e., the elastic modulus of members, the ground motion profile, and the intensity. The results, analyzed for the case of a Korean container crane, show that the probability of exceeding the first uplift, with or without derailment, appears before the crane reaches the structure's limit states. This means that under low seismic excitation, the crane might be derailed without any structural damage. However, once the crane reaches the minor damage state, it is always coupled with a certain probability of uplift with or without derailment. This study also proposes fragility curves developed for different structural periods to enable port stakeholders to assess the risk of their container cranes.
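Fragility curves of the kind proposed above are commonly expressed in lognormal form: the probability of exceeding a damage state given a ground motion intensity measure IM is Phi(ln(IM/theta)/beta), with median capacity theta and lognormal dispersion beta. The sketch below uses this standard form with illustrative parameters, which are assumptions and not the Korean crane's fitted values:

```python
from math import log, erf, sqrt

def fragility(im, theta, beta):
    """Lognormal fragility curve, the common form in seismic assessment.

    P(damage state exceeded | IM = im) = Phi(ln(im / theta) / beta),
    where theta is the median capacity (same units as im, e.g., PGA in g)
    and beta is the lognormal standard deviation.
    """
    z = log(im / theta) / beta
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF via erf
```

By construction, the exceedance probability is exactly 50% at the median intensity and rises toward 1 as the intensity grows, which is why a low uplift median relative to the structural limit states implies derailment can occur before any structural damage.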
REVIEW | doi:10.20944/preprints202008.0496.v1
Subject: Biology, Horticulture Keywords: Vaccinium corymbosum interspecific hybrids; high tunnel; greenhouse; plant factory; non-dormant; substrate; container; evergreen; high density
Online: 24 August 2020 (02:56:10 CEST)
Southern highbush blueberry plantations have expanded worldwide into non-traditional growing areas with elite cultivars and improved horticultural practices. This article presents a comprehensive review of current production systems that serve as alternatives to traditional open field production, such as production in protected environments, high-density plantings, evergreen production, and container-based production. We discuss the advantages and disadvantages of each system and compare them to open field production. In addition, potential solutions are provided for some of the disadvantages. We also highlight some of the gaps between academic studies and industry production, providing a guide for future academic research. All of these alternative systems have shown the potential to produce high yields of high-quality berries. Compared to field production, alternative systems require higher establishment investments and thus create an entry barrier for new producers. Nevertheless, given their advantages, alternative production systems have the potential to be profitable.
ARTICLE | doi:10.20944/preprints201807.0276.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: cloud computing; service-oriented architecture; SOA; cloud-native; serverless; microservice; container; unikernel; distributed cloud; P2P; service-to-service; service-mesh
Online: 16 July 2018 (10:57:39 CEST)
This paper presents a review of cloud application architectures and their evolution. It reports observations made during the course of a research project that tackled the problem of transferring cloud applications between different cloud infrastructures. As a side effect, we learned a lot about the commonalities and differences of many different cloud applications, which might be of value for cloud software engineers and architects. Throughout the research project we analyzed industrial cloud standards, performed systematic mapping studies of research papers on cloud-native applications, carried out action research activities in cloud engineering projects, modeled a cloud application reference model, and performed software and domain-specific language engineering activities. Two major (and sometimes overlooked) trends can be identified. First, cloud computing and its related application architecture evolution can be seen as a steady process of optimizing resource utilization in cloud computing. Second, these resource utilization improvements have resulted over time in an architectural evolution in how cloud applications are built and deployed. A shift from monolithic service-oriented architectures (SOA), via independently deployable microservices, towards so-called serverless architectures is observable. Serverless architectures in particular are more decentralized and distributed, and make more intentional use of independently provided services. In other words, a decentralizing trend in cloud application architectures is observable that emphasizes decentralized architectures known from former peer-to-peer based approaches. That is astonishing, because with the rise of cloud computing (and its centralized service provisioning concept) research interest in peer-to-peer based approaches (and their decentralizing philosophy) decreased. But this seems to be changing: cloud computing could head into a future of more decentralized and more meshed services.