Preprint
Article

An Efficient Placement Approach of Visual Sensors Under Observation Redundancy and Communication Connectivity Constraints

Submitted: 29 March 2024; Posted: 01 April 2024
Abstract
Visual Sensor Networks (VSNs) have been the focus of numerous studies, emphasizing the quality of the information they generate. The primary goal of these networks is to capture data from an area, extract the pertinent information from the visual scene, and facilitate communication. However, the presence of objects in the study area can impact the network's efficiency, linking the quality of acquired data directly to the positioning of the cameras. Strategically positioning multiple cameras in a potentially complex area, while ensuring effective communication between nodes, is a challenging issue in VSN applications. This paper introduces a specific method to enhance the deployment of visual sensors in indoor environments. The research addresses the multi-objective optimization of VSN deployment, integrating several criteria: maximizing overall visual coverage, maximizing the observation of zones of interest, maximizing observation redundancy of zones of interest, and ensuring communication connectivity for data transfers. To tackle this problem, we simulate the observed environment, incorporating cameras, obstacles, and zones of interest. Camera positioning is performed while considering visual constraints and the scene's characteristics, and the quality of a placement is evaluated using the defined objective functions. Initially, a comparative study is conducted against a state-of-the-art approach, integrating the first three objective functions. Subsequently, we incorporate the fourth objective function to ensure data transfers. The proposed method produces efficient camera network deployments in indoor areas, considering global coverage and observation redundancy. In contrast to the state of the art, these results are achieved in areas with visual obstacles while ensuring communication connectivity between the cameras.
Keywords: 
Subject: Computer Science and Mathematics  -   Other

1. Introduction

Sensor networks have been a subject of research for several decades. When these networks incorporate cameras, they are referred to as Visual Sensor Networks (VSNs). A VSN is composed of small visual sensor nodes, each including an image sensor and a communication module [1]. Data analysis can be centralized, requiring the transfer of data to a central processing unit, or distributed across the network. VSNs offer innovative solutions for various applications, providing information-rich descriptions of captured events. They find natural applications in video surveillance, facilitating the detection and tracking of people [2,3], as well as in diverse fields such as agriculture [4] and the military [5].
To assess a deployment's quality, it is imperative to evaluate predefined metrics aligned with the application's objectives. In network deployment studies, the typical objective is coverage, encompassing the overall study field, the presence of zones of interest (ZoI) in the scene, etc. Some applications consider a ZoI as a grouping of a set of target points. When targets are present in the scene, each target may require observation by multiple cameras to mitigate information loss. Incorporating redundant observations of targets can be crucial to mitigate particular acquisition uncertainties or, alternatively, to offer a multi-view perspective of the area.
Once the targets are observed, sensor nodes can communicate with each other and transmit the acquired data. This emphasizes that the positioning of cameras is crucial for acquiring relevant information for potential data analysis. Deploying visual sensors in a potentially obstructed scene while simultaneously meeting the aforementioned application objectives is not trivial. This paper deals with a multi-objective optimization problem for a VSN in an indoor environment, considering obstacles and integrating the notions of redundant coverage and communication. We propose an optimization approach that combines two interconnected modules: a visual coverage simulator based on the ray-tracing method, and an optimization module using a genetic algorithm. For an effective deployment of a VSN, the camera parameters (orientation angle, opening angle, position in the plane) must be adjusted. The position and orientation of the cameras are gradually refined through an optimization process to meet the application requirements while taking into account the characteristics of the study zone. This deployment must allow data to be acquired from the scene and evaluated while respecting the configuration of the complex scene.
The paper is organized as follows. Section 2 features some related work. The problem of deploying visual sensor nodes is formalized in Section 3. Then, Section 4 describes the proposed optimization platform and its main modules. Experiments and results are presented and discussed in Section 5. Finally, Section 6 presents the paper’s contributions and proposes future works.

2. Related Works

The efficiency of data collection in sensor networks depends on the sensors' positions in the study zone. The literature offers several sensor placement methods [6,7], which can be classified into two main families depending on whether the deployment is performed randomly or deterministically. Random deployment is used when the study zone is large-scale and virtually unknown [8]. Deterministic deployment, on the other hand, guarantees a homogeneous placement, for instance a grid of sensors covering the study zone and ensuring total supervision of the scene [9].
Visual sensors are among the most studied sensors in the state of the art [10,11], due to the wealth of information present in the observed scenes [12]. The placement of visual sensors, i.e., cameras [6], must promote satisfactory monitoring of the study zone and fulfill the objectives specific to the application domain. For this, it is necessary to consider the parameters of the scene (presence of obstacles, zones of interest, etc.) that affect the efficiency of the network. The presence of obstacles in the study zone is generally a constraint: obstacles degrade network efficiency, both in terms of data acquisition and transmission [13].
Considering the scene's complexity, efficient automatic camera deployments can be performed based on the objective criteria identified in the state of the art.
Coverage is one of the most significant criteria according to the state of the art [14]. Three kinds of coverage are considered [15]: zone coverage, which consists of maximizing the coverage of an entire zone [16]; region or target coverage, where the goal is to maximize the coverage of the different targets in the study zone [17]; and barrier coverage, which deals with barrier intrusion detection [18].
In the field of VSNs, we focus our study on the optimal or near-optimal placement of cameras deployed in an environment potentially containing obstacles, in order to optimize the coverage of the observed zone [17,19]. The studies of [20] and [21] propose a random placement of cameras with fixed angles of view. The coverage of the predefined zones is then optimized by modifying the position and orientation of the deployed cameras. However, this approach only accounts for the viewing angle covered by each camera. It is therefore necessary to devise solutions that place the cameras more efficiently and consider parameters such as position, orientation angle, aperture (opening angle), and range. The coverage optimization problem in visual sensor networks is therefore essential to obtain, or approach, an optimal deployment.
In some applications, the targets must be covered by several sensors. The aim is to implement and guarantee redundancy. In addition to maximizing coverage, this redundancy enhances network robustness by raising the level of fault tolerance in the event of hardware failure. In the literature, this type of deployment is known as K-coverage [22].
Many VSN applications address the notion of coverage redundancy, depending on the intended objectives. A point is considered redundantly observed in a VSN when it is observed by at least two cameras (K ≥ 2) [22]. The optimization of coverage redundancy depends on the application's constraints: redundancy can become an objective function to be maximized in target coverage applications, while other applications require minimizing it [23]. Moreover, redundant nodes can be used to easily replace defective nodes [24]; they are put on standby if no problem is detected, which increases network efficiency by extending sensor lifetime.
Alaei et al. propose a relevant redundancy coverage threshold rate [25]. Beyond this threshold, the redundant sensors are put on standby and later used to replace faulty ones. In doing so, they address the problem of maximum target coverage with a minimum number of sensors. Conversely, some works [26] rely on a given number of sensors to maximize the redundancy of target coverage and promote good data quality.
The notion of K-coverage [27,28] is also used to maximize and guarantee the quality of target acquisition by a given number of sensors. Full supervision of a set of points of interest can be ensured by guaranteeing coverage from several perspectives, depending on the dimensions of the zone. The authors of [29] and [30] propose this kind of approach with a random deployment of visual sensors in an obstacle-free zone, aiming to maximize target coverage redundancy through data acquisition from all perspectives. On the other hand, Costa et al. [31] address the target coverage problem by implementing coverage redundancy as an objective parameter in an evolutionary algorithm to cover all targets.
Some works, such as [32], integrate different constraints related to VSNs, including connectivity. A network is considered fully connected if, for any pair of nodes, there is a path that connects them. The connectivity of a network allows the exchange of data between its nodes. It is considered in many VSN applications [33], depending on the application domain. Tossa et al. show that connectivity is an objective criterion to be optimized in order to guarantee total network connectivity [34]. However, there are application cases for which total connectivity is not required; in such cases, the network may comprise several connected components.
To achieve good coverage redundancy, one of the most important issues is camera placement. Automatic camera positioning methods are mainly based on conventional optimization methods. These can be classified into two groups: exact methods (Integer Linear Programming, Monte Carlo, etc.), which find the optimal solution to the problem by evaluating all possible solutions [35], and approximate methods (evolutionary algorithms, simulated annealing, tabu search, gradient descent, etc.), which obtain a good solution in a relatively short time [36]. To explore the search space, these optimization methods can (i) rely on a single solution that is iteratively improved [37] or (ii) simultaneously process several solutions (a population) whose overall quality improves over the course of the iterations [38].
In [31], the authors propose an approach based on the Centralized Prioritized Greedy Algorithm (CPGA). This algorithm automatically determines the orientations of visual sensors in order to improve redundant vision of a set of known targets. Specifically, it takes into account the relevance of targets to the video surveillance context: relevant targets, corresponding to zones to be observed as a priority, are specified as the algorithm's input.
In a similar vein, Silva et al. [39] propose two methods. The first, ECPGA (Enhanced Centralized Prioritized Greedy Algorithm), is an extension of CPGA; it estimates the orientation of all cameras to improve redundancy. This method increases the number of possible camera orientation ranges in order to deliver better performance than CPGA (covering more targets). The second method, called RCMA (Redundancy Coverage Maximization Algorithm), optimizes two objective functions simultaneously: it seeks not only total coverage of the network's targets, but also maximum redundancy of their observation.
Rangel et al. [17,39] attempt to solve the redundant coverage optimization problem by proposing two algorithms that select the orientations of visual sensors with the aim of covering as many targets as possible. The objectives to be maximized are the coverage, the minimum redundancy, and the average redundancy, making this a multi-objective optimization problem. The authors present two methods for solving it. The first, known as lexicographic, lets priorities among the objectives be expressed a priori, before the optimization process begins. The second processes the various objective functions simultaneously; the priority among these functions is expressed a posteriori (at the end of the optimization process). The comparative study proposed in this work demonstrates that the a posteriori method achieves better results than the a priori one, but at a higher computational cost.
The aim of this study is to efficiently position cameras in an indoor environment, with potential obstacles and zones of interest, using evolutionary optimization methods. The optimization criteria are region coverage and target coverage, combined with redundancy and connectivity requirements.

3. Sensor Modeling and Deployment Assessment Metrics

This section describes the visual sensor model as well as the metrics used to evaluate the quality of a VSN deployment.

3.1. Sensor Modeling and Observation Process

We consider visual sensor nodes equipped with directional cameras. This means that each camera has a limited Field of View (FoV), which represents the zone in which objects can be observed. The size of the FoV is defined according to the camera's angle of view and its range (distance of view). This paper addresses a VSN placement problem in a two-dimensional plane: all the cameras are placed at the same altitude, sensors can only be moved horizontally, and tilt and zoom are not allowed.
As shown in Figure 1, each camera is described by the following parameters:
  • (x, y): the Cartesian coordinates of the camera
  • α: the angle of view
  • θ: the orientation angle
  • R: the range, or distance of view
We assume that α and R are physical characteristics of the cameras and are therefore treated as inputs; these two parameters are not optimized. Thus, the problem comes down to finding, for each camera, the best coordinates (x, y) and the best orientation angle θ.
To evaluate the coverage, we use the ray-tracing method, one of the most efficient in terms of realism [40], which allows the coverage area to be computed accurately [19]. Its principle is to generate an image in which each pixel is processed independently [41]. A ray is launched from the camera towards the plane, and at each intersection the nature and position of the corresponding pixel are determined [42]. This technique is used to determine whether a pixel is observed. Reflections and refractions are not addressed in this work.
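To make the observation test concrete, the following minimal sketch (our illustration, not the authors' implementation) checks whether a pixel is observed by one camera. It assumes the scene is discretized into a NumPy grid where obstacle cells are marked with 1, and it approximates one traced ray with a coarse ray march; all names are ours.

```python
import math
import numpy as np

# A camera is modeled by its parameters from Figure 1: position (x, y),
# orientation theta, angle of view alpha, and range R (assumed encoding).

def visible(cam, px, py, grid):
    """True if pixel (px, py) is inside the camera's FoV and the ray from
    the camera to the pixel crosses no obstacle cell (grid[row, col] == 1)."""
    x, y, theta, alpha, R = cam
    dx, dy = px - x, py - y
    d = math.hypot(dx, dy)
    if d > R:                                    # beyond the distance of view
        return False
    bearing = math.atan2(dy, dx)
    diff = (bearing - theta + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) > alpha / 2:                    # outside the angle of view
        return False
    steps = int(d)                               # march roughly one cell at a time
    for s in range(1, steps):
        ix = int(x + dx * s / steps)
        iy = int(y + dy * s / steps)
        if grid[iy, ix] == 1:                    # the ray hits an obstacle first
            return False
    return True

# Example: a 60-degree camera at (10, 10) looking along the x-axis.
grid = np.zeros((100, 100), dtype=int)
cam = (10.0, 10.0, 0.0, math.radians(60), 70.0)
print(visible(cam, 40, 12, grid))                # True: in range, in the cone
```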

3.2. Visual Observation Metrics

In this paper, four metrics related to visual observation are considered to evaluate the quality of a VSN deployment: the overall coverage area, the target coverage, the minimum redundancy of target coverage and the average redundancy ratio. A fifth metric for communications will be presented in Section 3.3.
  • The overall coverage area
    This metric is the ratio of the area observed by the cameras to the total area of the study field. To compute this criterion, we consider the cumulative FoV of the different cameras. Note that obstacles in the study field can occlude parts of the study zone.
  • The target coverage
    The second metric is the proportion of targets observed. A target is considered observed if at least one camera can detect it. First, the FoV of each camera is determined using the ray-tracing method. Then, given the target position, the Euclidean distance between the target and all cameras is computed. Finally, we test whether the target's position is included in the field of view of the cameras close enough to it. In this way, we can simulate scenarios such as buildings with obstacles (walls, for instance) that make the study zone more complex, and also define Zones of Interest (ZoI) formed by several target points. Figure 2 illustrates an example of a study zone with obstacles (wall partitions) and ZoI (collections of target points). We consider a ZoI observed if the majority of its target points are observed; specifically, at least ρ of a ZoI's target points must be covered for it to be considered covered. In the current study, ρ = 80%.
  • The minimum redundancy of target coverage
    In some VSN deployments, observation redundancy is mandatory in order to acquire different perspectives of a target and eliminate detection doubts. It consists in the observation of a target by at least two cameras. This is known as k-coverage: each target has to be observed by at least k visual sensor nodes [43]. While some works avoid coverage redundancy in order to extend network lifetime, in our study we aim to maximize it, as it can improve the quality of the acquired data and yield a fault-tolerant network.
  • The average redundancy ratio
    This metric is inspired by [17]. It indicates the average number of redundant observations over the targets, i.e., the average number of cameras that can observe each target.

3.3. Communication Metric: The Network Connectivity

Visual sensors acquire data which must subsequently be processed. This processing can be done in a distributed manner among the sensors or by a central node (often referred to as the sink). This requires data transfers within the network. Two neighboring nodes can communicate directly if and only if the distance separating them is less than the communication radius (R_c). Furthermore, given the presence of obstacles, signals can be attenuated, reducing the effective communication radius. The use of the attenuation coefficient (τ_c) is described in Section 4.2.5.
To ensure communications, the VSN deployment must guarantee network connectivity. A network is said to be connected if, for any pair of nodes, there exists a path that interconnects them.
In the remainder of this work, the metrics described in Section 3.2 and Section 3.3 are considered as objective functions of our optimization problem to assess the VSN deployment. To solve this multi-objective problem, we set up an optimization platform which is described in Section 4.

4. The Proposed Optimization Approach

To solve VSN deployment problems, optimization methods are often used to find good solutions. Since exact optimization methods are generally resource- and computationally-intensive, the use of metaheuristics is an effective alternative. In our study, we use a genetic algorithm as an optimization method.
The approach we use is based on a platform (Figure 3) that combines an optimization module and a visual sensor network simulator.
  • The optimization module of the platform shown in Figure 3 enables an efficient exploration of the search space. It generates solutions (composed of the problem's decision variables) and passes them to the simulation module for evaluation. We used the well-known NSGA-II [44] genetic algorithm implemented in the jMetal framework [45]; an illustrative setup sketch follows this list.
  • The simulation module on the right describes the study zone (and its parameters) to be evaluated and the deployment of sensors through the decision variables received from the optimization algorithm. This module returns to the optimization module the values of the metrics described in Section 3.2 and Section 3.3.
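For illustration, the sketch below sets up the same kind of optimization loop. The authors use NSGA-II from the Java jMetal framework; here we show a hypothetical equivalent with the Python pymoo library, where the two evaluation functions are placeholders standing in for the ray-tracing simulator.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

# Placeholder metrics standing in for the simulation module (assumed names).
def target_coverage(cams):   # would return f2, in percent
    return 0.0

def min_redundancy(cams):    # would return f3, in percent
    return 0.0

class VSNDeployment(ElementwiseProblem):
    """One solution = (x, y, theta) for each camera, i.e. 3*n variables."""
    def __init__(self, n_cameras, width, height):
        xl = np.tile([0.0, 0.0, 0.0], n_cameras)             # lower bounds
        xu = np.tile([width, height, 2 * np.pi], n_cameras)  # upper bounds
        super().__init__(n_var=3 * n_cameras, n_obj=2, xl=xl, xu=xu)

    def _evaluate(self, x, out, *args, **kwargs):
        cams = x.reshape(-1, 3)   # one (x, y, theta) row per camera
        # pymoo minimizes, so metrics to be maximized are negated
        out["F"] = [-target_coverage(cams), -min_redundancy(cams)]

res = minimize(VSNDeployment(10, 500, 500), NSGA2(pop_size=50),
               ("n_gen", 1000), seed=1)
```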

4.1. Generation of the Initial Population

The optimization process starts with the generation of a random set of solutions, called the initial population. A solution is composed of the problem's decision variables; in our case, these are the cameras' coordinates (x, y) and their orientation angles θ. For a study with n cameras, a solution S is: S = {x_1, y_1, θ_1, x_2, y_2, θ_2, …, x_n, y_n, θ_n}. Thus, with n cameras, the problem includes 3 × n decision variables.
After the generation of the initial population, each solution is evaluated. This determines the quality of the solution with respect to the problem's objective functions; in our case, we compute f_1, f_2, f_3, f_4 and f_5 (as defined in Section 4.2). This computation is performed by the simulation module.
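A minimal sketch of this encoding and of the random initial population (our illustration; the bounds are scene dimensions, and the population size of 50 is the one used later in Section 5):

```python
import math
import random

def random_solution(n_cameras, width, height):
    """S = [x1, y1, theta1, ..., xn, yn, thetan]: 3*n decision variables."""
    s = []
    for _ in range(n_cameras):
        s += [random.uniform(0, width),        # x
              random.uniform(0, height),       # y
              random.uniform(0, 2 * math.pi)]  # orientation angle theta
    return s

# Initial population: 50 random deployments of 12 cameras in a 1400 x 700 scene.
population = [random_solution(12, 1400, 700) for _ in range(50)]
```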

4.2. Evaluation of the Population

To evaluate solutions, the optimization module transmits the decision variables to the simulation module, which performs a field-of-view assessment based on the ray-tracing method.
In order to compute the values of the objective functions, the image generated by ray-tracing must be processed. There are several methods for extracting information from a rendered scene image. Segmentation, one of the most widely used methods [46], is an image processing technique that groups pixels according to their common characteristics, enabling objects to be separated. After segmentation, not all the perceived data are necessarily relevant; the image is therefore filtered by eliminating noise using mathematical morphology techniques.
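As an illustration of this post-processing, a hypothetical OpenCV snippet: the file name and the colour range used to mark covered pixels in the rendering are assumptions, but the opening operation is the mathematical-morphology noise filter described above.

```python
import cv2
import numpy as np

render = cv2.imread("render.png")  # assumed ray-tracing output image (BGR)

# Segmentation: keep pixels whose colour falls in the assumed "covered" range.
mask = cv2.inRange(render, (0, 200, 0), (80, 255, 80))

# Morphological opening removes isolated noisy pixels from the mask.
kernel = np.ones((3, 3), np.uint8)
clean = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

coverage_pct = 100.0 * np.count_nonzero(clean) / clean.size
print(f"overall coverage: {coverage_pct:.1f}%")
```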
Following the metrics described in Section 3.2 and Section 3.3, five objective functions are defined below.

4.2.1. Overall Coverage Area

Let P be the set of points on the study field, C the set of cameras, T the set of targets and S a deployment solution.
The first objective function, $f_1$, is the overall coverage area: it measures the observation of the pixels of the study field by at least one sensor. Expressed as a percentage, it is calculated using Equation 1.
$$ f_1(S) = \frac{\sum_{(x,y) \in P} obs(x,y)}{|P|} \times 100 \quad (1) $$
where $obs(x,y)$ is defined by Eq. (2).
$$ obs(x,y) = \begin{cases} 1 & \text{if } \exists\, c \in C \text{ such that } (x,y) \text{ is within the FoV of } c \\ 0 & \text{otherwise} \end{cases} \quad (2) $$

4.2.2. Target Coverage

The second objective function, $f_2$, focuses on target coverage. It measures the observation of the targets present in the study field by at least one sensor, as the percentage of observed targets (see Equation 3).
$$ f_2(S) = \frac{\sum_{t \in T} cov(t)}{|T|} \times 100 \quad (3) $$
where $cov(t)$ is defined by Eq. (4).
$$ cov(t) = \begin{cases} 1 & \text{if } \exists\, c \in C \text{ such that } t \text{ is within the FoV of } c \\ 0 & \text{otherwise} \end{cases} \quad (4) $$

4.2.3. Minimum Redundancy of Target Coverage

The third objective function, denoted $f_3$, represents the redundancy of target observation: the percentage of targets observed by at least two sensors. The formulation is provided in Equation 5.
$$ f_3(S) = \frac{\sum_{t \in T} Red(t)}{|T|} \times 100 \quad (5) $$
where $Red(t)$ is defined by Eq. (6).
$$ Red(t) = \begin{cases} 1 & \text{if } \exists\, c, c' \in C,\ c \neq c', \text{ such that } t \text{ is within the FoV of both } c \text{ and } c' \\ 0 & \text{otherwise} \end{cases} \quad (6) $$

4.2.4. Average Redundancy Ratio

The average redundancy ratio, denoted $f_4$, quantifies the average number of cameras observing each target (see Equation 7).
$$ f_4(S) = \frac{\sum_{t \in T} Views(t)}{|T|} \quad (7) $$
where $Views(t)$ returns the number of sensors that can observe $t \in T$.
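The four visual metrics follow directly from these definitions. The sketch below (our illustration) reuses the visible() helper sketched in Section 3.1; note that $f_4$ is kept as an average count of views per target.

```python
def evaluate_metrics(cams, grid, targets):
    """Compute f1..f4 as defined above. `cams` is a list of camera tuples,
    `grid` the obstacle grid and `targets` a list of (tx, ty) positions;
    visible() is the ray-marching test sketched in Section 3.1."""
    h, w = grid.shape
    pixels = [(px, py) for py in range(h) for px in range(w)
              if grid[py, px] == 0]                       # non-obstacle points P
    obs = sum(any(visible(c, px, py, grid) for c in cams)
              for px, py in pixels)
    f1 = 100.0 * obs / len(pixels)                        # overall coverage

    views = [sum(visible(c, tx, ty, grid) for c in cams)
             for tx, ty in targets]                       # Views(t) per target
    f2 = 100.0 * sum(v >= 1 for v in views) / len(targets)  # target coverage
    f3 = 100.0 * sum(v >= 2 for v in views) / len(targets)  # min. redundancy
    f4 = sum(views) / len(targets)                           # avg. redundancy
    return f1, f2, f3, f4
```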

4.2.5. Network Connectivity

The fifth objective function ($f_5$), referred to as connectivity, indicates the number of connected components in the network. This function is optimized according to the application requirements. It is computed using Equation 8.
$$ f_5(S) = |C_{connex}| \quad (8) $$
where $C_{connex}$ represents the set of connected components (see Equation 9). A component is connected if and only if, for any pair of sensors in it, there is a path enabling them to communicate.
$$ C_{connex} = \{C_1, C_2, \dots, C_n\} \quad (9) $$
where $C_i$ is the $i$-th connected component.
Note that two sensor nodes $c$ and $c'$ are considered directly connected if $d(c, c') \le R_c \times \tau_c^{\,n}$, where $d(c, c')$ is the distance between $c$ and $c'$, $\tau_c$ is the signal propagation attenuation coefficient, and $n$ is the number of obstacles between $c$ and $c'$.
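A possible implementation of $f_5$ (our sketch): build the attenuated-radius communication graph, then count connected components with a breadth-first search. The obstacle-counting helper is a stub for a segment-intersection test.

```python
import math
from collections import deque

def count_obstacles_between(a, b, obstacles):
    """Stub: should count the obstacles crossed by segment a-b (assumed)."""
    return 0

def f5(cams, obstacles, Rc, tau):
    """Number of connected components, with links where d(c, c') <= Rc * tau**n."""
    n = len(cams)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            k = count_obstacles_between(cams[i], cams[j], obstacles)
            if math.dist(cams[i], cams[j]) <= Rc * tau ** k:
                adj[i].append(j)
                adj[j].append(i)
    seen, components = set(), 0
    for start in range(n):
        if start in seen:
            continue
        components += 1                       # a new connected component
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return components
```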

4.3. Recombination Operators

After the evaluation step, solutions are selected for recombination, leading to the generation of new solutions. This operation is based on the theory of natural selection: the best solutions have a better chance of being chosen for recombination and, therefore, of transmitting their characteristics to their offspring. There are several selection operators, such as roulette wheel, rank selection, tournament selection or Boltzmann selection [34,47]. In our case, solutions are selected for crossover using a roulette wheel operator: the share of each solution on the wheel is proportional to its fitness value, and the wheel is spun at random to select the solutions that will shape the next generation [48].
Each pair of selected solutions (called parents) is recombined using a crossover operator to generate new solutions called offspring; the offspring's decision variables are copied from the parents. There are various types of crossover, such as single-point, two-point, k-point, uniform, partially mapped, order, priority-preserving, shuffle, reduced surrogate and cycle crossover [34]. In our case, we use a single-point crossover: Figure 4 illustrates such an operation, where one can observe that the decision variables of offspring 1 and offspring 2 are inherited from the parents.
The mutation step introduces diversity during the optimization process. It allows the algorithm to avoid local optima traps. For example, a simple mutation operator can randomly choose a decision variable and modify its value.
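Minimal versions of the three operators described above (our sketch; the roulette shown assumes a scalar, non-negative fitness, whereas NSGA-II actually selects on Pareto rank and crowding distance):

```python
import math
import random

def roulette_select(population, fitness):
    """Fitness-proportional selection over a scalar, non-negative fitness."""
    spin = random.uniform(0, sum(fitness))
    acc = 0.0
    for solution, f in zip(population, fitness):
        acc += f
        if acc >= spin:
            return solution
    return population[-1]

def single_point_crossover(p1, p2):
    """Swap the tails of two parents after one random cut point."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(solution, width, height, rate=0.05):
    """With probability `rate`, redraw one decision variable at random."""
    solution = solution[:]
    if random.random() < rate:
        i = random.randrange(len(solution))
        upper = (width, height, 2 * math.pi)[i % 3]  # x, y or theta bound
        solution[i] = random.uniform(0, upper)
    return solution
```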

5. Experiments and Results

This paper focuses on the optimization of the deployment of a visual sensor network in an indoor environment. We first consider obstacle-free environments to tackle target coverage issues with respect to redundancy needs. Thereafter, more complex scenarios (with obstacles and communication constraints between sensors) are presented. We evaluate the performance of our method, named Optimized Camera Placement Method (OCPM), against other methods proposed in the literature. This comparative study assesses the adaptability and effectiveness of OCPM.

5.1. Scenes Without Obstacle

We compare our method (OCPM) to the approaches presented in [17], which address the random deployment of a sensor network for a redundant coverage maximization problem.

5.1.1. Simulation Parameters

The visual sensor nodes are randomly deployed. To improve target coverage, the algorithms search for the best orientation angles; moreover, they maximize the target observation redundancy. The simulation parameters are presented in Table 1. It is important to note that, unlike Rangel et al.'s work [17], which models a camera's field of view as a triangle, we use a more realistic model (shown in Figure 1). In this work, we consider a pixel size of 4 cm × 4 cm.
We consider three objective functions:
  • Target coverage: defined by Equation (3). Each target present in the study field occupies one pixel.
  • Minimum redundancy of target coverage: defined by Equation (5).
  • Average redundancy ratio: defined by Equation (7).
Rangel et al. [17] use two different approaches to deal with this optimization problem: (i) a lexicographic method, where the preferences among the metrics are expressed a priori by the decision maker, and (ii) NSGA-II, which is based on Pareto dominance (the preferences among the metrics are expressed a posteriori). To make a comparative study possible, we performed simulations with the same parameters as the NSGA-II method, except for the population size, which is set to 50 in our work versus 200 in [17]. The total number of generations is set to 1000.

5.1.2. Results and Comments

Figure 5 and Figure 6 show the minimum and average redundancy results as a function of target coverage. These figures illustrate the results of three methods: NSGA-II, whose performance is taken from [17]; OCPM-(θ), where the orientation angles are the only decision variables; and OCPM-(x, y, θ), where OCPM has more flexibility (the x, y coordinates as well as the orientation angle θ can be tuned).
OCPM-(θ) outperforms NSGA-II (+20% in terms of minimum redundancy and +0.8 for average redundancy). In terms of target observation, OCPM-(θ) reaches 95%. These results can be explained by the effectiveness of our method in orienting the cameras efficiently, and by our realistic cone-shaped FoV model, which improves the evaluation of each camera's field of view. The gain in field of view (for each camera) is given by Equation 10.
$$ \mathrm{Gain} = \frac{\theta - \sin(\theta)}{\sin(\theta)} \quad (10) $$
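Our reading of Equation 10 (an assumption on our part: it compares a circular-sector FoV of angle $\theta$ and range $R$ with the triangular FoV model of [17]):
$$ A_{\mathrm{sector}} = \tfrac{1}{2}R^2\theta, \qquad A_{\mathrm{triangle}} = \tfrac{1}{2}R^2\sin\theta, \qquad \mathrm{Gain} = \frac{A_{\mathrm{sector}} - A_{\mathrm{triangle}}}{A_{\mathrm{triangle}}} = \frac{\theta - \sin\theta}{\sin\theta}. $$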
OCPM-(x, y, θ) achieves coverage of all the targets (100% coverage), and its performance is globally better than that of the two other methods. This is explained by the flexibility of the camera positioning and the ability of our method to quickly reach effective solutions. These preliminary results suggest that our method is more effective than the comparison method in achieving its objectives.
We also compare OCPM to four optimization methods, namely CPGA, ECPGA, Lexicographic and RCMA (these methods and their performances are detailed in [17]). In Figure 7, we present the best compromise among OCPM results (denoted BC OCPM).
The figure shows the cumulative values of the best compromises according to their objective functions. OCPM presents a better cumulative best-compromise value.
We carried out further experiments to evaluate the performance of OCPM with respect to the number of visual sensor nodes. For this purpose, we vary the number of cameras from 10 to 70. We consider two scenarios with 25 and 50 randomly placed targets.
Figure 8, Figure 9 and Figure 10 respectively illustrate the variation in target coverage, minimum redundancy and average redundancy for a given number of targets and cameras.
The presented solutions are selected by priority, in the following order: coverage first, then minimum redundancy, and finally the redundancy rate. This selection is consistent with the fast convergence of our method under the specified priority order.
With 10 cameras, the 50-target configuration achieves a superior coverage of 73%, surpassing the 25-target configuration, although full target coverage is not attained in either case. This improvement comes at the expense of redundancy, which is higher in the 25-target setup. This behavior is due to an insufficient number of cameras compared to targets. However, the coverage difference between the two deployments is nearly 5%.
With 20 cameras, the coverage of the 25-target configuration surpasses that of the 50-target configuration, reaching nearly 100%. However, this enhanced coverage in the 25-target setup negatively affects the redundancy value, which is more favorable in the 50-target configuration. The high camera-to-target ratio in the 25-target configuration accounts for its complete coverage.
With 30 cameras, the coverage reaches 100% for the 25-target setup and 98% for the 50-target setup. Redundancy is better in the 25-target setup than in the 50-target setup, as its coverage cannot be further optimized. These results can be explained by the over-provisioned context of the 25-target setup, which leads to rapid convergence towards optimal solutions; meanwhile, redundancy for the 50-target setup increases more slowly because the process is still maximizing coverage.
With 40 cameras deployed in both configurations, coverage is close to 100%, as all targets are covered. Once the coverage function has been maximized, the system focuses on maximizing redundancy; consequently, redundancy performance approaches the maximum value. The average redundancy logically increases, with the value for the 50-target configuration exceeding that for the 25-target configuration. In both setups, the average redundancy exceeds 2, indicating that at least two cameras observe each point of interest.
With 50 cameras, all targets in both the 25-target and 50-target setups are covered, and the observed redundancy reaches 100%. This is attributed to the substantial number of deployed sensors. Furthermore, the redundancy rate continues to increase for configurations with 60 and 70 cameras.
From this analysis, it can be inferred that when the number of sensors is equal to or greater than the number of targets in the scene, our method ensures maximum coverage and redundancy in a randomly deployed sensor configuration. This scenario is characterized as an over-provisioned context.
In contrast, in the under-provisioned context, error bars appear in both deployments, indicating that the algorithm at times explores suboptimal solutions. These bars reflect the confidence interval of the obtained results; as optimal values are approached, they shrink or disappear.
These analyses provide justification for the adaptability and effectiveness of our method in various environments containing targets.
Numerous state-of-the-art studies focus on target coverage and redundancy. In this section, we conducted a comparative analysis of our method against an existing approach from the literature, and our results proved more satisfactory. We systematically varied the number of cameras and targets, yielding compelling solutions.
Furthermore, the challenge intensifies when acquiring scene data in the presence of physical obstacles. In the next section, we extend our approach to the analysis of a complex scene with obstructions.

5.2. Scene with Obstacles

In the following, we make the scene more complex by integrating obstacles and zones of interest. The presence of obstacles (doors, walls, people, metal objects) hinders data acquisition in sensor networks. This poses a significant challenge, as the objective is to maximize various functions that may be impeded by these obstacles. The objective functions considered are:
  • Overall coverage (denoted $f_1$): defined in Equation 1.
  • Target coverage (denoted $f_2$): defined in Equation 3.
  • Redundant target coverage (denoted $f_3$): defined in Equation 5.
In this study, we address the presence of obstructions without delving into the specific nature of the objects causing them. We consider obstacles that obstruct the cameras' field of view, as shown in Figure 11, in a scene of size 1400 × 700 pixels. We place the cameras on walls, for support or power supply; each camera has an observation range of 700 pixels. This scene contains 5 cameras and 6 zones of interest, resembling a building structure. The modeling of walls and zones of interest is described in Section 3.2.
To extend our tests, we consider scenarios with 5, 12 and 20 cameras. This allows us to deal with under-provisioned and over-provisioned applications.
We consider a zone of interest covered if at least 80% of its area is observed by cameras. The redundancy optimized here is the smallest value over all the zones of interest: if a redundancy of 100% is reported, all targets are redundantly observed; if zero redundancy is reported, at least one zone of interest is not observed redundantly.
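This rule reduces to a one-line test per zone (our sketch, reusing the visible() helper from Section 3.1):

```python
def zoi_covered(zoi_points, cams, grid, rho=0.80):
    """A ZoI is covered when at least rho (80% here) of its target points
    are observed by at least one camera."""
    seen = sum(any(visible(c, px, py, grid) for c in cams)
               for px, py in zoi_points)
    return seen / len(zoi_points) >= rho
```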
Figure 12 shows the diagrams illustrating the cumulative objective values of the solutions obtained for each deployment. The presence of obstacles in a scene complicates the acquisition of scene information. We observe that, with 5 cameras in the under-provisioned context, our method achieves an overall coverage ranging from 58% to 66%. This can be attributed to the limited field of view and the constrained number of cameras deployed. The algorithm then seeks to guarantee coverage of the zones of interest between 83% and 100%, but without guaranteeing total redundancy of the target zones, on account of the insufficient number of cameras.
In the over-provisioned context, with twice as many cameras as zones of interest, the overall coverage achieved is 92%, with target zone coverage ranging from 98.5% to 100%. Target redundancy reaches 100%, meaning that all targets are redundantly observed. These values can be explained by the sufficient number of cameras in the study zone.
With 20 cameras, the algorithm proposes an overall coverage of between 96 % and 98 % for a target zone coverage of 100 % . The smallest redundancy recorded is 30 % . However, in most cases it reaches 100 % .
In an under-provisioned context, achieving maximum objective values is harder when obstacles are present, owing to an insufficient number of sensors and the complexity of the scene. Nevertheless, the targets are observed at 100%.
The algorithm converges to 100% for all objective functions with twice as many cameras as targets; the cumulative solution scores are close to 280.
Note that our method succeeds in proposing solutions that observe 100% of the targets in the different tests, which is an essential measurement monitored in several VSN applications.

5.3. Considering Communication Requirement and Constraints

To ensure global communication within the network, the nodes should be connected and able to transmit data. This data transfer occurs either directly from a sensor node to the sink node (single-hop communication) or through neighboring nodes, ultimately reaching the sink node (multi-hop communication) [33].
Depending on the application, it is possible to define several connected components to streamline resource usage. Optimizing the number of connected components in the network is a non-trivial task. This optimization process entails minimizing the number of connected components (the optimum being to have a single connected component).
In this paper, we consider omnidirectional wireless communications within an environment with obstacles (both visual and communication obstacles). When crossing an obstacle, the communication radius ($R_c$) between nodes decreases according to the signal propagation attenuation coefficient $\tau_c$. This means that two sensor nodes $c$ and $c'$ can communicate directly (point-to-point) if $d(c, c') \le R_c \times \tau_c^{\,n}$, where $n$ is the number of obstacles between $c$ and $c'$. For our experiments, we consider $\tau_c = 50\%$.
Note that the communication radius considered for our experiments is equal to the cameras’ observation range.
The best solutions are chosen based on priority, with connectivity being the primary consideration for this type of network, followed by the redundant observation of the targets present in the scene and, finally, the overall coverage. The objectives are to:
  • Ensure a connected network with one connected component ($f_5$): expressed in Equation 8;
  • Observe all targets ($f_2$ and $f_3$): illustrated in Equations 3 and 5;
  • Maximize the coverage of the entire study zone ($f_1$): given in Equation 1.
Figure 13 and Figure 15 showcase the deployment results of our cameras, ensuring overall and target coverage. In Figure 14 and Figure 16, all existing links between nodes are represented by green lines.
Figure 13 illustrates the result for the scene used in the previous section, with 12 cameras and communication integrated on the nodes.
The overall coverage of the scene varies between 82% and 98%, explained by the over-provisioned deployment, while that of the target zones reaches 100%. With twice as many cameras as targets, our method returns solutions that strike a compromise between coverage and target redundancy. Importantly, it proposes a good compromise with a redundancy value of almost 98%. At this point, the number of connected components in the network is equal to 1, as seen in Figure 14. Note that cameras C7 and C11 are not directly connected to C10, due to obstruction by two obstacles which attenuate the communication range; however, a multi-hop link is available for them.
We then ran simulations with a number of cameras equal to the number of targets, as in Figure 15.
The best selected compromise yielded a connectivity result of 1, as shown in Figure 16. The algorithm positioned the cameras to cover all targets at 100%, with 90% redundancy. The overall coverage reaches 63%, attributable to the cameras' observation limit. Cameras C3 and C6 are not directly connected because of an obstacle between them, which degrades the communication range.
These results illustrate the effectiveness and adaptability of our approach. Moreover, the literature rarely tackles the multi-objective optimization of all these criteria simultaneously, which underlines the contributions of our method.

6. Conclusion

This paper focuses on the problem of efficient placement of visual sensor nodes in an indoor environment. It proposes a multi-objective optimization method for camera placement, called OCPM. This method offers promising prospects for future applications in computer vision, visual detection and target tracking. Evolutionary approaches applied to the design of these networks have shown their robustness and their ability to explore an extended solution space, leading to placements that outperform traditional deployments.
In our work, we initially focused on optimizing target coverage and redundancy in a simplified context. We then allowed free placement of the cameras and re-examined target coverage and redundancy. In all these respects, a comparative study carried out against the results of recent work shows that the OCPM method produces better solutions. This demonstrates the adaptability and effectiveness of the proposed OCPM method, and the ability of genetic algorithms to converge towards efficient solutions, making it possible to optimize the performance of visual sensor networks.
Secondly, the cameras were placed in an environment such as a building, considering obstacles, target areas and the connectivity between the sensors. The proposed complex scene represents a challenge in VSN applications, in addition to the integration of the communication aspect between sensors. The results obtained cover all the targets present in the scene while guaranteeing a network connectivity level of one for the transfer of data between sensor nodes. In the over- and under-provisioned contexts, our method respectively achieves maximum coverage and covers more than 50% of the overall scene.
The advantages of this methodology are manifested in various fields such as object and person detection, image segmentation and visual pattern recognition, building monitoring, etc.
In terms of perspectives, challenges remain, including doubt removal, device failures, and sensor power consumption. Our future work will consider integrating mobile cameras to overcome doubt and device failure issues, and reducing the power consumption of the equipment to ensure sustainable network activity.

References

  1. Soro, S.; Heinzelman, W. A Survey of Visual Sensor Networks. Advances in Multimedia 2009, 2009. [Google Scholar] [CrossRef]
  2. Chakrabarty, K.; Iyengar, S.; Qi, H.; Cho, E. Grid coverage for surveillance and target location in distributed sensor networks. IEEE Transactions on Computers 2002, 51, 1448–1453. [Google Scholar] [CrossRef]
  3. Marroquin, R.; Dubois, J.; Nicolle, C. WiseNET: An indoor multi-camera multi-space dataset with contextual information and annotations for people detection and tracking. Data in Brief 2019, 27, 104654. [Google Scholar] [CrossRef] [PubMed]
  4. Ojha, T.; Misra, S.; Raghuwanshi, N.S. Wireless sensor networks for agriculture: The state-of-the-art in practice and future challenges. Computers and Electronics in Agriculture 2015, 118, 66–84. [Google Scholar] [CrossRef]
  5. Durisic, M.; Tafa, Z.; Dimic, G.; Milutinovic, V. A Survey of Military Applications of Wireless Sensor Networks. 2012, pp. 196–199.
  6. Charfi, Y.; Wakamiya, N.; Murata, M. Challenging issues in visual sensor networks. Wireless Communications, IEEE 2009, 16, 44–49. [Google Scholar] [CrossRef]
  7. Njoya, A.N.; Ari, A.A.A.; Awa, M.N.; Titouna, C.; Labraoui, N.; Effa, J.Y.; Abdou, W.; Guéroui, A. Hybrid Wireless Sensors Deployment Scheme with Connectivity and Coverage Maintaining in Wireless Sensor Networks. Wirel. Pers. Commun. 2020, 112, 1893–1917. [Google Scholar] [CrossRef]
  8. Senouci, M.; Mellouk, A.; Aissani, A. Random deployment of wireless sensor networks: A survey and approach. Int. J. of Ad Hoc and Ubiquitous Computing 2014, 15, 133–146. [Google Scholar] [CrossRef]
  9. Abdulwahid, H.M.; Mishra, A. Deployment Optimization Algorithms in Wireless Sensor Networks for Smart Cities: A Systematic Mapping Study. Sensors 2022, 22. [Google Scholar] [CrossRef] [PubMed]
  10. Costa, D.G.; Guedes, L.A. The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey. Sensors 2010, 10, 8215–8247. [Google Scholar] [CrossRef]
  11. Kone, C.; Mathias, J.D.; Sousa, G. Adaptive management of energy consumption, reliability and delay of wireless sensor node: Application to IEEE 802.15.4 wireless sensor node. PLOS ONE 2017, 12, e0172336. [Google Scholar] [CrossRef]
  12. Wang, Z.; Wang, F. Wireless Visual Sensor Networks: Applications, Challenges, and Recent Advances. 2019 SoutheastCon, 2019, pp. 1–8. [CrossRef]
  13. Pan, L. Preventing forest fires using a wireless sensor network. Journal of Forest Science 2020, 66, 97–104. [Google Scholar] [CrossRef]
  14. Hsu, Y.C.; Chen, Y.T.; Liang, C.K. The Coverage Problem in Directional Sensor Networks with Rotatable Sensors. Ubiquitous Intelligence and Computing; Hsu, C.H., Yang, L.T., Ma, J., Zhu, C., Eds.; Springer Berlin Heidelberg: Berlin, Heidelberg, 2011; pp. 450–462. [Google Scholar]
  15. Mnasri, S.; Thaljaoui, A.; Nasri, N.; Val, T. A genetic algorithm-based approach to optimize the coverage and the localization in the wireless audiosensors networks. IEEE International Symposium on Networks, Computers and Communications (ISNCC 2015);, 2015; pp. 1–6. [CrossRef]
  16. Fu, J.; Li, X.; Li, Y.; Dong, X. Coverage Control for Directional Sensor Networks with Visual Sensing Constraints. 2022 IEEE International Conference on Unmanned Systems (ICUS), 2022, pp. 387–392. [CrossRef]
  17. Rangel, E.O.; Costa, D.G.; Loula, A. On redundant coverage maximization in wireless visual sensor networks: Evolutionary algorithms for multi-objective optimization. Applied Soft Computing 2019, 82, 105578. [Google Scholar] [CrossRef]
  18. Cheng, C.F.; Tsai, K.T. Barrier coverage in Wireless Visual Sensor Networks with importance of image consideration. 2015 Seventh International Conference on Ubiquitous and Future Networks, 2015, pp. 793–798. [CrossRef]
  19. Faga, Y.; Abdou, W.; Dubois, J. An Optimised Indoor Deployment of Visual Sensor Networks. IEEE SITIS conference, 2023, pp. 1–8. [CrossRef]
  20. Ai, J.; Abouzeid, A. Coverage by directional sensors in randomly deployed wireless sensor networks. J. Comb. Optim. 2006, 11, 21–41. [Google Scholar] [CrossRef]
  21. Osais, Y.; St-Hilaire, M.; Yu, F. Directional Sensor Placement with Optimal Sensing Range, Field of View and Orientation. Mobile Networks and Applications 2008, 15, 216–225. [Google Scholar] [CrossRef]
  22. Jesus, T.C.; Costa, D.G.; Portugal, P.; Vasques, F. A Survey on Monitoring Quality Assessment for Wireless Visual Sensor Networks. Future Internet 2022, 14. [Google Scholar] [CrossRef]
  23. Costa, D.G.; Silva, I.; Guedes, L.A.; Portugal, P.; Vasques, F. Selecting redundant nodes when addressing availability in wireless visual sensor networks. 2014 12th IEEE International Conference on Industrial Informatics (INDIN), 2014, pp. 130–135. [CrossRef]
  24. Costa, D.G.; Vasques, F.; Portugal, P. Enhancing the availability of wireless visual sensor networks: Selecting redundant nodes in networks with occlusion. Applied Mathematical Modelling 2017, 42, 223–243. [Google Scholar] [CrossRef]
  25. Alaei, M.; Barcelo-Ordinas, J.M. Node Clustering Based on Overlapping FoVs for Wireless Multimedia Sensor Networks. 2010 IEEE Wireless Communication and Networking Conference, 2010, pp. 1–6. [CrossRef]
  26. Rangel, E.O.; Costa, D.G.; Loula, A. Redundant Visual Coverage of Prioritized Targets in IoT Applications. Proceedings of the 24th Brazilian Symposium on Multimedia and the Web; Association for Computing Machinery: New York, NY, USA, 2018; WebMedia ’18, p. 307–314. [Google Scholar] [CrossRef]
  27. Hefeeda, M.; Bagheri, M. Randomized k-Coverage Algorithms For Dense Sensor Networks. IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications, 2007, pp. 2376–2380. [CrossRef]
  28. Malek, S.M.B.; Sadik, M.M.; Rahman, A. On Balanced K-Coverage in Visual Sensor Networks. J. Netw. Comput. Appl. 2016, 72, 72–86. [Google Scholar] [CrossRef]
  29. Altahir, A.A.; Asirvadam, V.S.; Hamid, N.H.B.; Sebastian, P.; Saad, N.B.; Ibrahim, R.B.; Dass, S.C. Optimizing Visual Sensor Coverage Overlaps for Multiview Surveillance Systems. IEEE Sensors Journal 2018, 18, 4544–4552. [Google Scholar] [CrossRef]
  30. Costa, D.G.; Silva, I.; Guedes, L.A.; Vasques, F.; Portugal, P. Optimal sensing redundancy for multiple perspectives of targets in wireless visual sensor networks. 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), 2015, pp. 185–190. [CrossRef]
  31. Costa, D.G.; Silva, I.; Guedes, L.A.; Portugal, P.; Vasques, F. Enhancing Redundancy in Wireless Visual Sensor Networks for Target Coverage. Proceedings of the 20th Brazilian Symposium on Multimedia and the Web; Association for Computing Machinery: New York, NY, USA, 2014; WebMedia ’14, p. 31–38. [Google Scholar] [CrossRef]
  32. Kulkarni, U.M.; Regal, P.S.; Kenchenavvar, H.H. Connectivity Models in Wireless Sensor Network: A Review. 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), 2018, pp. 198–203. [CrossRef]
  33. Farsi, M.; Elhosseini, M.A.; Badawy, M.; Arafat Ali, H.; Zain Eldin, H. Deployment Techniques in Wireless Sensor Networks, Coverage and Connectivity: A Survey. IEEE Access 2019, 7, 28940–28954. [Google Scholar] [CrossRef]
  34. Tossa, F.; Abdou, W.; Ansari, K.; Ezin, E.; Gouton, P. Area Coverage Maximization under Connectivity Constraint in Wireless Sensor Networks. Sensors 2022, 22, 1712. [Google Scholar] [CrossRef]
  35. Ahn, J.W.; Chang, T.W.; Lee, S.H.; Seo, Y. Two-Phase Algorithm for Optimal Camera Placement. Scientific Programming 2016, 2016, 1–16. [Google Scholar] [CrossRef]
  36. Bouzid, S.E.; Seresstou, Y.; Raoof, K.; Omri, M.N.; Mbarki, M.; Dridi, C. MOONGA: Multi-Objective Optimization of Wireless Network Approach Based on Genetic Algorithm. IEEE Access 2020, 8, 105793–105814. [Google Scholar] [CrossRef]
  37. Rani, P.; Chae, H.K.; Nam, Y.; Abouhawwash, M. Energy-Efficient Clustering Using Optimization with Locust Game Theory. Intelligent Automation & Soft Computing 2023, 36, 2591–2605. [Google Scholar] [CrossRef]
  38. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 2002, 6, 182–197. [Google Scholar] [CrossRef]
  39. Silva, T.; Costa, D. Centralized Algorithms for Redundant Coverage Maximization in Wireless Visual Sensor Networks. IEEE Latin America Transactions 2016, 14, 3378–3384. [Google Scholar] [CrossRef]
  40. Halé, A.; Trouvé-Peloux, P.; Volatier, J.B. End-to-end sensor and neural network design using differential ray tracing. Opt. Express 2021, 29, 34748–34761. [Google Scholar] [CrossRef] [PubMed]
  41. Alwajeeh, T.; Combeau, P.; Aveneau, L. An Efficient Ray-Tracing Based Model Dedicated to Wireless Sensor Network Simulators for Smart Cities Environments. IEEE Access 2020, 8, 206528–206547. [Google Scholar] [CrossRef]
  42. Gómez, J.; Tayebi, A.; Hellín, C.J.; Valledor, A.; Barranquero, M.; Cuadrado-Gallego, J.J. Accelerated Ray Launching Method for Efficient Field Coverage Studies in Wide Urban Areas. Sensors 2023, 23. [Google Scholar] [CrossRef]
  43. Mini, S.; Udgata, S.K.; Sabat, S.L. Sensor Deployment and Scheduling for Target Coverage Problem in Wireless Sensor Networks. IEEE Sensors Journal 2014, 14, 636–644. [Google Scholar] [CrossRef]
  44. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 2002, 6, 182–197. [Google Scholar] [CrossRef]
  45. Durillo, J.J.; Nebro, A.J. jMetal: A Java framework for multi-objective optimization. Advances in Engineering Software 2011, 42, 760–771. [Google Scholar] [CrossRef]
  46. Gao, K.; Wang, H.; Nazarko, J. An Efficient Data Acquisition and Processing Scheme for Wireless Multimedia Sensor Networks. Computational Intelligence and Neuroscience 2022, 2022. [Google Scholar] [CrossRef] [PubMed]
  47. Gupta, S.K.; Kuila, P.; Jana, P.K. Genetic algorithm approach for k-coverage and m-connected node placement in target based wireless sensor networks. Computers & Electrical Engineering 2016, 56, 544–556. [Google Scholar]
  48. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: past, present, and future. Multimedia tools and applications 2021, 80, 8091–8126. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Camera's parameters
Figure 2. An example of complex scene [3]
Figure 3. Flowchart of the optimization platform
Figure 5. Coverage and minimum redundancy results
Figure 6. Coverage and average redundancy results
Figure 7. Performance comparison with state-of-the-art methods
Figure 8. Coverage evolution considering the number of cameras
Figure 9. Redundancy evolution considering the number of cameras
Figure 10. Average redundancy evolution considering the number of cameras
Figure 11. Scene with 5 cameras considering obstacles
Figure 12. Deployment of 5 - 12 - 20 cameras considering 6 zones of interest
Figure 13. Deployment of 12 cameras in an indoor scene
Figure 14. Resulting connectivity with 12 cameras
Figure 15. Deployment of 6 cameras in an indoor scene
Figure 16. Resulting connectivity with 6 cameras
Table 1. Simulation parameters

Study field: 20 m × 20 m (500 pixels × 500 pixels)
Obstacles: none
Camera opening angle: 60°
Sensing range: 70 pixels
Number of sensors: 40
Number of targets: 50