Preprint
Article

This version is not peer-reviewed.

A Unified Architectural Model for Multi-Connectivity in Heterogeneous Wireless Networks

Submitted:

30 January 2026

Posted:

02 February 2026


Abstract
The convergence of the metaverse and ubiquitous computing presents a new paradigm for applications that connect humans, machines, and their virtual counterparts. These applications are built upon wireless networks and demand massive data transfers between device clusters. While several technological candidates are in active development to meet these needs, the question remains whether technological progress alone can keep pace with their growing requirements. This paper argues that, alongside advances in wireless technology, the current layered architecture must be reimagined to inherently promote network heterogeneity. This approach allows multiple physical layer technologies to be integrated and individual layer protocols to better adapt to the network environment. We propose a novel augmented network model with a physical abstraction layer to seamlessly integrate diverse wireless technologies, enabling application data to be distributed across multiple network interfaces. We present an analytical proof of concept to evaluate its performance using key metrics such as throughput, packet delay, and server utilization. The results demonstrate that our proposed model significantly improves throughput and minimizes packet delay under heavy network loads, outperforming single-interface solutions. This work proves the feasibility and substantial performance benefits of wireless heterogeneity in future communication networks.

1. Introduction

The metaverse is a key application driving future wireless networks beyond 5G. The metaverse will be a set of immersive and interconnected digital spaces representing the evolution of the Internet as it is known today. For example, in the metaverse, users can test and experience a digital replica of an automobile. In healthcare, patient consultation and advanced diagnostics can be virtualized, providing users with human experiences remotely. The possibilities are boundless, but realizing the metaverse carries implications for several underlying layers. In [1], the authors state that a large-scale metaverse implementation encompasses three significant elements, namely (i) hardware improvements and wireless infrastructure; (ii) development of recognition and expression models; and (iii) availability of content that people immerse themselves in and participate in. These three requirements can be visualized as a stack, with hardware and wireless infrastructure at the bottom. To narrow its scope, this paper separates the wireless network from device hardware. The wireless infrastructure layer is tasked with transporting data across devices, whereas the device hardware layer pertains to faster data processing and delivering the user experience at the station using graphics processing units (GPUs) and other hardware accelerators. Hence, the two are logically separable and can be studied and discussed separately.
Devices of the future, including metaverse end-user equipment, will be collections of wireless devices, as illustrated in Figure 1, encompassing several sensors, glasses, and headsets. Data will be ported among these devices, or from the devices to a group central node and from there to a cloud node, a fog node, or another computing node. This also implies that data volumes will grow at an unprecedented rate given the human-to-human and human-to-machine interaction scenarios. Data generated from these devices will vary in type and format; hence, data rate requirements will also vary from device to device. This presents a significant challenge in creating metaverse-ready wireless networks, as the generations from 3G to 5G have not been designed to handle these scenarios [2]. 5G will be insufficient to provide adequate support for applications in which large volumes of data are generated by interconnected devices. Many of these applications are expected to demand terabit-level bitrates and latencies below 1 ms.
Expediting the reimagination of data flow is crucial for both wireless and fixed network communications. The primary requirement is to simultaneously deliver high speeds, ultra-low latency, and low jitter. The exponential increase in data connections and volumes further exacerbates this problem, and these requirements will severely stress both fixed and mobile networks. Network communication follows a layered architecture in which the protocols and algorithms in a particular layer are largely agnostic of the layers above and below, except for cross-layer optimization algorithms that exist predominantly at the bottom layers of the TCP/IP network model [3]. This paper partially draws inspiration from an article from Meta [4], which directs researchers to think about novel schemes and approaches to create metaverse-ready wireless networks, as technology advancements and improvements alone may be inadequate to support these applications of the future. This work is discussed in the larger context of the ability of individual layers of the network architecture to dynamically adapt to the inherent and rapid variations in user requirements and network environments in future wireless networks. The primary contributions of this paper are to explicate the concept of inherently supporting multiple data paths over multiple network interfaces by introducing a physical abstraction layer, and to present a cross-layer fabric as a collective intelligence mechanism through which individual layers make dynamic decisions. The proposed model is validated by an analytical model, showing significant improvements in packet latency, throughput, and server utilization. The envisioned future features an intelligent, reliable, and scalable wireless network that integrates multiple technologies to achieve wireless ubiquity. This paper outlines a vision for the future of wireless communications.
We propose an augmented TCP/IP network model to natively enable wireless heterogeneity, analyze its potential use cases, and detail the key enabling facets for its realization.
The remainder of this paper is organized as follows: Section 2 highlights the potential to combine technological advancements and wireless heterogeneity to meet metaverse and other future application requirements. Section 3 highlights the limitations of the current layered network model in supporting these futuristic scenarios and paves the way for the proposed augmented layered structure, citing previous work that supports the feasibility of the proposed approach. The proposed layered structure, which extends the classic TCP/IP model, is discussed in Section 4. Section 5 elaborates on the proposed amendments in detail, which require protocol and algorithmic changes to the existing TCP/IP layers. Section 6 presents a comparative analysis of the proposed model against the TCP/IP model. An analytical model for the proposed system is detailed in Section 7, while Section 8 provides an analysis of the results. Finally, the manuscript concludes in Section 9, in which the objective and value of this work are reiterated.

2. Wireless Network Evolution and New Directions

Every generation of networks from 3G to 5G has been developed around a common theme: providing higher data rates while reducing network maintenance and operational costs. 6G is expected to extend the capabilities of 5G to higher levels, where millions of connected devices and applications can operate seamlessly with trust, low latency, and high bandwidth. In 6G and beyond, the leap required to enable rich multisensory experiences involving humans, machines, and the physical world [4] is much greater than that required in previous generations. Much work is in progress to make 6G and future generations of wireless networks a reality [5]. Terahertz band utilization, massive multiple-input multiple-output (MIMO), intelligent surfaces and environments, and network automation are prominent technologies being researched and considered for 6G and beyond [6].
Along with technological innovations in the physical layer, considerable research efforts are underway in the upper network layers to support capacity increases, improve maintainability, and lower latency [7]. Several works on 6G [5,7] call for improved radio access networks (RANs) to lower network costs and improve resource efficiency. These ongoing projects [5] emphasize various aspects of the network, such as capacity increase, spectrum utilization, security, and AI-driven network management. Newer paradigms, such as semantic communication [8], focus on the accurate recovery of the meaning conveyed from the source to the receiver, as opposed to the accurate recovery of transmitted symbols. The aim is to represent the semantic structure of the message, not necessarily the exact message, by sharing an encoded message that surpasses the performance of best-in-class compression techniques and ultimately reduces the physical bandwidth required between the transmitter and receiver. Cognitive and cooperative communication [9] are strategies to efficiently use the available spectrum and enable spectrum sharing among technologies. The recent surge in artificial intelligence (AI) and machine learning (ML) algorithms, including deep learning and generative AI, has triggered significant research into these algorithms in wireless communication. These algorithms significantly improve channel estimation and selection, resource management, beam formation, QoS provisioning, and energy management [10]. Generative AI models have been found to unlock optimization problems in next-generation wireless networks that were previously considered difficult to solve. Generative AI is shown to improve channel modeling and resource allocation, and to enhance overall network performance, including load balancing and backhaul optimization [11].
It is not in the purview of this paper to elaborate on all the ongoing 6G efforts and futuristic wireless communication strategies. However, it is important to understand and acknowledge these projects as the proposed architecture complements these efforts. In the following sections, this paper elaborates on architectural changes that can significantly increase the ability to support futuristic communication scenarios when combined with these efforts. This study explores the convergence of technologies at the lower layers to enhance the network capacity. This is referred to as wireless heterogeneity, where an application can be operated by multiple radio technologies. The selection of appropriate radio technologies requires inputs from higher layers and is termed cross-layer interaction.

3. Directions

The key premise of this paper is to promulgate the use of dual or multiple wireless interfaces in a device to improve performance. There have been numerous studies on the usage of dual interfaces in cellular communication, in both the uplink and downlink [12,13], and in sensor networks [14] to improve performance, particularly data rates. Multiple performance reports based on simulations and modeling studies have quantified the gains of dual connectivity in networks [12]-[14]. Various aspects of dual connectivity, such as dynamic traffic splitting among interfaces, are studied in these reports. There have also been field trials using both millimeter-wave and mid-band frequencies to satisfy quality of service requirements in 5G New Radio (5G-NR) [15]. The above studies are conducted in environments using preconfigured interfaces, where load sharing takes place between them based on fixed thresholds. However, a generic, wide-scale deployment with multiple interfaces is still far off. This is because dynamically switching between available frequency bands and wireless technologies in real-world environments introduces significant complexities, such as interface selection, constrained optimization, and retransmission. The selection of interfaces and the splitting of traffic among them cannot be confined to a single layer of the network model, as network environments and device characteristics change over time, and the information available at any one layer is largely insufficient to make a constrained optimization decision. For example, the performance of dual connectivity can be significantly improved by basing the decision on information from multiple layers, such as channel estimates, packet drop rates, application characteristics, and device mobility.
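As a minimal sketch of such a multi-layer decision, assume each interface exposes a channel capacity estimate (physical layer), a drop rate (link layer), and a round-trip time (transport layer); the field names, the goodput-proportional weighting, and the latency-budget rule are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class InterfaceState:
    name: str
    channel_capacity_mbps: float   # physical-layer estimate
    drop_rate: float               # link-layer packet drop rate (0..1)
    rtt_ms: float                  # transport-layer round-trip estimate

def split_ratios(interfaces, latency_budget_ms):
    """Weight each interface by effective goodput, excluding links
    that cannot meet the application's latency budget."""
    scores = []
    for itf in interfaces:
        if itf.rtt_ms > latency_budget_ms:
            scores.append(0.0)          # cannot satisfy the application
        else:
            scores.append(itf.channel_capacity_mbps * (1.0 - itf.drop_rate))
    total = sum(scores)
    if total == 0:
        return [1.0 / len(interfaces)] * len(interfaces)  # fall back to even split
    return [s / total for s in scores]

links = [InterfaceState("wlan0", 300.0, 0.02, 8.0),
         InterfaceState("cell0", 100.0, 0.01, 25.0)]
ratios = split_ratios(links, latency_budget_ms=50.0)
```

With these invented numbers, the WLAN link receives roughly three quarters of the traffic because its effective goodput dominates; tightening the latency budget below 25 ms would exclude the cellular link entirely.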
The following paragraphs explore newer directions for significantly scaling wireless networks in the future, in which a device and the applications running on it are agnostic of frequency bands and wireless technologies.
It is important to understand the limitations of current wireless network models in order to think of newer directions. This entails understanding the restrictions posed by the TCP/IP model, the standard protocol suite for applications on the Internet. While the TCP/IP model provides interoperability and scalability, it restricts, or rather does not encourage, cross-layer communication by design. Cross-layer design is the exchange of information among layers to improve the efficiency of the algorithms at each layer; it removes or overrides the boundaries between layers. The notion of cross-layer design, or the exchange of information between certain network layers in mobile wireless communication, has been studied over the past two decades. In [16], the authors illustrate various architectural blueprints that create a taxonomy for classifying existing cross-layer design proposals. The central idea of cross-layer design protocols is to make the protocol stack responsive to variations in the underlying network conditions to obtain optimal performance. A significant challenge in building cross-layer functionality is devising an information sharing mechanism across layers. This paper advocates an open application programming interface (API) approach for direct communication between layers. An open API approach has several benefits, as it takes interoperability into the execution environment domain, where there is already a richly defined set of common languages and existing APIs that can be exploited [16,17]. Examples of successfully deployed open APIs include the unified link layer API (ULLA), the generic network interface (GENI), and the common application requirements interface (CAPRI), which offer generic, technology-independent interfaces for message exchange between layers [17]. In [17], the authors create a cognitive resource manager for multiple physical layer technologies.
That work supports the feasibility of the model proposed herein, as the authors demonstrate the possibility of using an open API to exchange information across disparate layers.
Over the past decade, cross-layer design has been a prominent area of research, with a substantial body of work focusing on the lower layers of cellular networks (specifically the physical, link, and network layers) for applications such as handoff management [3,18]. Concurrently, a significant amount of research has been dedicated to sensor networks, particularly in the media access and routing layers [19]. In sensor networks, the constraint is primarily the energy of the device. Constraints on energy, spectrum resources, and QoS requirements have led to widespread research and development of cross-layer solutions in cellular wireless and sensor networks. There has been work on cross-layer resource scheduling algorithms in which the quality and available capacity of the radio channel are considered when granting radio resources to a user [3]; in other words, the QoS requested by the user is also considered while allocating resources [3]. This implies that a user application needing low latency and high throughput will receive an appropriate resource allocation based on the channel estimates. Cross-layer algorithms can significantly improve congestion control and resource utilization efficiency. However, most of the proposed cross-layer design protocols [3,18] exchange information only with the layer immediately above or below, and these algorithms operate at the lower layers, where the primary focus is resource utilization efficiency. This paper examines cross-layer interactions across all layers of the network to empower the physical layer with cognitive capabilities, enabling it to adapt to dynamic network conditions and optimize resource allocation.
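The channel- and QoS-aware scheduling idea can be illustrated with a proportional-fair style metric scaled by an application-supplied QoS weight. This is a sketch of the general technique, not the specific algorithm of [3]; the field names and the particular weighting are assumptions for illustration.

```python
# Pick the user maximizing qos_weight * instantaneous_rate / average_rate:
# users with good current channels relative to their history are favored,
# and upper-layer QoS requirements tilt the decision further.
def schedule(users):
    def metric(u):
        return u["qos_weight"] * u["inst_rate"] / max(u["avg_rate"], 1e-9)
    return max(users, key=metric)["id"]

users = [
    {"id": "A", "inst_rate": 12.0, "avg_rate": 10.0, "qos_weight": 1.0},
    {"id": "B", "inst_rate": 9.0,  "avg_rate": 3.0,  "qos_weight": 1.0},
    {"id": "C", "inst_rate": 20.0, "avg_rate": 25.0, "qos_weight": 2.0},
]
winner = schedule(users)   # B: metric 9/3 = 3.0 beats A (1.2) and C (1.6)
```

User B wins despite the lowest instantaneous rate: its channel is unusually good relative to its recent average, which is exactly the opportunistic behavior a channel-aware scheduler exploits.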
Recent literature has considered cross-layer optimization for scalable video transmission in cognitive-radio-enabled device-to-device networks [20]; this scheme focuses on optimal resource allocation for multimedia transmissions. In [21], the authors propose a hierarchical architecture for the Internet of Vehicles comprising a data layer, virtualization layer, control layer, and application layer, which presents a cross-layer protocol stack. The authors highlight the need for co-design of the corresponding communication protocols and service algorithms. This prior work sets directions and provides the premise for the proposed model to precisely define specific communication scenarios between the layers and untangle the challenges in realizing them.
The other characteristic of a layered structure is the linear flow of data across layers. In other words, data from layer 2 can only go down to the corresponding layer 1 or up to layer 3. This paper proposes the idea of a flexible network interface layer through which data can be transmitted across multiple physical layer technologies. This idea is the central theme of all dual connectivity schemes. The purpose of this discussion is to set the stage for a scalable and realizable network model that is independent of wireless technologies, protocols, the various network entities, and the applications running on a device. The concepts and prior research discussed in this section provide the foundation for the model proposed in this paper. This paper stems from the broader theme of fast adaptation of all layers, from top to bottom, depending on the inherent variation of user requirements and network environments in future network generations. This theme is particularly relevant in futuristic Space-Air-Ground-Sea Integrated Networks (SAGSIN), where multi-dimensional networks comprising air balloons, drones, aerial vehicles, and ground networks are proposed to provide global seamless networking [2].

4. Proposed Network Model

As discussed earlier, the TCP/IP model is intended to abstract communication from the application layer down to the lower layers. The underlying principle of this model is to segregate the flow of information from the user end to the physical or air medium. Figure 2 presents two models for wireless communication. Figure 2a represents a linear communication flow, in which each layer is mapped to a corresponding layer; this is the original TCP/IP model. Figure 2b represents a novel communication flow, in which upper layer data is split across multiple network interface technologies and reassembled at the receiver's application layer. In the forthcoming discussion, the term physical layer denotes the specific wireless communication technology, excluding the link layer portion.
This scenario violates the network modularity principle because the link layer prepares technology- or protocol-specific frames. The advantage of this approach is that multiple physical layer interfaces can be employed to carry data to the destination, which improves the net data rate. Take an example where a user application's data of 10 MB is split into two parts: one half is transmitted via WLAN 802.11ac and the other half via a cellular network. The two halves traverse a backbone network and are reassembled at the receiver end. This scenario deviates from the current TCP/IP model, in which each layer builds on top of the one below: the link layer builds on the physical layer, the network layer on the link layer, and so on. This scenario raises the question of what happens to frames or data lost in a particular transmission flow and how that data is retransmitted. Should it be retransmitted via the same physical medium over which it was originally transmitted? If so, the link layer must keep track of which fragments were transmitted over which physical layer. This would make the link layer unduly complex and likely reduce its efficiency. These questions arise, and are valid, in the context of the current TCP/IP layer model. Another piece of computational intelligence required at a layer above the physical layer is determining the optimal apportionment of data across multiple physical media to maximize net throughput. In the above example, where data is split into two parts, determining the split ratio across the interfaces is an optimization problem: maximize the data rate while minimizing packet loss and latency. There are also newer scenarios in which the compression rate of the data can be varied dynamically depending on the lower layer transmission rate; the compression rate can be automatically adjusted to best exploit the available physical interface and network load.
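For the two-interface example above, under the idealized assumption that retransmissions and rate variation can be ignored, the throughput-maximizing split finishes both fragments at the same time, i.e., data is apportioned in proportion to each interface's effective rate. A sketch, where the 400 and 100 Mb/s rates are invented for illustration:

```python
# Apportion a transfer across two interfaces so both fragments complete
# simultaneously: share_a / rate_a == share_b / rate_b.
def apportion(total_mb, rate_a_mbps, rate_b_mbps):
    share_a = total_mb * rate_a_mbps / (rate_a_mbps + rate_b_mbps)
    return share_a, total_mb - share_a

wlan_mb, cell_mb = apportion(10.0, 400.0, 100.0)   # 8 MB over WLAN, 2 MB cellular
finish_s = (wlan_mb * 8) / 400.0   # 64 Mb / 400 Mb/s = 0.16 s for both fragments
```

Any other split makes one interface finish early and idle while the other is still transmitting, lengthening the overall completion time; in practice the effective rates would themselves come from cross-layer estimates rather than being fixed constants.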
This would imply that the top layer is aware of the modulation scheme and the network latency. The decision to transmit via multiple physical layer technologies should be dynamic, based on the application requirements and varying network conditions. This scenario is not typical in the current layered model, where the top layers do not have access to information from the bottom layers. The rest of this paper proposes amendments to the classical TCP/IP network model to support these and many other new scenarios. The relevance of this approach arises from the presence of multiple network interfaces in a device and the need for higher data transmission rates. The authors reiterate that the goal of this paper is two-fold: a) to explore and elucidate the concept of natively supporting multiple data paths over multiple physical interfaces, and b) to enable cross-layer optimization for fast adaptation across all layers, from application to physical, based on varying user requirements and network environments. These two ideas combined offer significant potential for scaling wireless networks. These concepts also broaden the scope of the network layer, where networks can be constituted dynamically with the advent of network slicing and cloud computing.
Having established the context, this paper proposes an augmentation to the classic TCP/IP model to encompass new scenarios that aid the development of innovative solutions to future network challenges. The proposed model is shown in Figure 3, where an upper layer flow is not necessarily mapped to a single physical layer technology. The goal of this architecture is to extend the boundaries of the current TCP/IP model and pave the way for metaverse-ready networks that can handle significant increases in the number of devices, higher data rates, and varying QoS requirements. Figure 3 illustrates the proposed architecture, which enables information exchange among layers and transmission and reception over multiple physical interfaces. The cross-layer fabric enables direct communication between the layers using an open API exposed at each layer. The physical abstraction layer enables the simultaneous distribution of traffic across multiple physical media. There is a corresponding resource manager for each physical layer technology, whose role is to manage spectrum usage and allocate spectral resources to users. The layers of the classic model remain intact but are extended to provide additional functionality, namely support for multiple physical layer paths and cross-layer communication. This model does not circumvent the functionality of the layers; rather, it improves the efficiency and effectiveness of each layer in a multi-layered and multidimensional network.
In the proposed model, as illustrated in Figure 3, the individual layers have access to information from across the layers of the network. Access to the cross-layer fabric enables the individual layers to improve performance and enhance functionality. The application layer sets the compression ratios and encryption methods for the user application session, applying compression techniques to reduce the number of data bits sent over the network. Currently, the compression rate and encryption are set for the application at the start of the session. Alternatively, compression rates could be varied based on instantaneous network performance indicators and physical layer characteristics; dynamically varying compression rates would improve network capacity in terms of both the number of users and performance. The segment size parameter set at the transport layer is typically a static value for the application. TCP connections have a maximum segment size (MSS) at each end; the MSS value is announced by each end to denote the maximum segment it can receive, and it can hence be varied to tune TCP performance. The challenge of changing the MSS and the number of transport streams on the fly is that this constrained optimization requires information from other layers to make an effective decision. The underlying premise of the proposed model is to enable the development of intelligent algorithms at every layer by providing access to a range of information from other layers. Below are new scenarios that are not possible with the conventional approach:
1. The application layer has access to bottom-layer information, such as network congestion metrics, resource availability, link error rates, and physical layer metrics, which enables the higher layers to evaluate application needs and set parameters such as encryption length and compression rate accordingly.
2. Middle layers, such as the network and transport layers, have access to upper-layer information, such as user application type and encryption length, and lower-layer information, such as physical resource utilization and channel estimates, which enables them to provision, create, or merge networks dynamically.
3. Lower layers, such as the physical and link layers, have access to upper-layer information such as application requirements and network congestion rates, as well as information from the other layers. This enables the lower layers to schedule resources efficiently.
4. In the conventional layered model, a TCP session starts with a three-way handshake between the endpoints. A different approach is to set up multiple sessions across multiple interfaces for an application and merge the sessions at the receiver. This requires intelligence at the transmitter to split the sessions and at the receiver to combine them.
5. Link layers transmit frames across multiple physical layer interfaces, as illustrated in Figure 2. Conceptually, the idea is to split a network flow across the available physical media and reassemble the frames at the receiving link layer. A physical abstraction layer is proposed to abstract the network layer from the network interface layer; the splitting of information at the network interface layer is hidden from the network layer, as the distribution is dynamic.
The five scenarios outlined above extend the concepts of cross-layer network communication and dynamic networks. The goal of this proposed model is to create a technology-agnostic mechanism for information exchange between layers and thereby increase overall network efficiency.
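As a minimal sketch of scenario 1, consider an application layer that reads congestion and link-error metrics exposed by the lower layers and adapts its compression level accordingly. The dictionary layout, field names, and thresholds are illustrative assumptions, not a defined API:

```python
# Map lower-layer conditions to a compression level (1 = light, 9 = heavy).
# Under congestion or high link error rates, spend CPU to save bandwidth.
def pick_compression_level(fabric):
    congestion = fabric["network"]["congestion"]   # 0..1, from network layer
    link_error = fabric["link"]["error_rate"]      # 0..1, from link layer
    if congestion > 0.7 or link_error > 0.05:
        return 9        # heavily loaded: compress aggressively
    if congestion > 0.3:
        return 6
    return 1            # lightly loaded: favor CPU over bandwidth

fabric = {"network": {"congestion": 0.8}, "link": {"error_rate": 0.01}}
level = pick_compression_level(fabric)   # 9 under heavy congestion
```

In the conventional model the application layer never sees these metrics, so the compression level is fixed at session start; with the cross-layer fabric it can be re-evaluated whenever the published metrics change.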

5. Protocol Design

The proposed model is an extension, or rather an addendum, to the TCP/IP model to include new scenarios and improve capacity for future user scenarios. Much of the ongoing research described in Section 2 is focused on improving the underlying technologies. In addition to improving technology at the various layers, this enhanced model provides a mechanism for integrating these technologies, resulting in synergies between technologies and architectures. This section highlights the protocol design by describing the new elements and their sub-elements.
As shown in Figure 3, the cross-layer fabric is where all layers expose their respective information via an open API. For the application layer, this can be the desired performance benchmarks in terms of response time, response volumes, the compression scheme deployed, or the encryption algorithm. The transport layer provides information about protocol type, segment size, retransmissions, success rate, and other parameters. The network layer provides information on network traffic parameters, routing metrics, and other information. The link layer provides information on frame retransmissions, physical resource utilization efficiency, physical resource availability, and other parameters. Finally, the physical layer provides information on the physical interfaces available on the device, the available capacity at each interface, and the channel characteristics. As stated earlier, one of the goals of this enhanced approach is to make information accessible to other layers in a structured manner, under the premise that the algorithms or functions at each layer can improve their contribution towards overall network efficiency if they have access to other layers' operating parameters. A key consideration is avoiding optimization loops, in which layers attempt to optimize themselves in isolation and each layer's optimization efforts have a detrimental impact on the performance of other layers. A question that arises in the computation and data storage model is whether these optimizations will be performed within each layer or at a centralized entity. The authors envision a distributed model in which individual layers have complete control to perform tasks without interdependence on other layers. Hence, the cross-layer fabric is to be visualized as a set of Open APIs called by each layer, and layers have the freedom to use the information as appropriate, abiding by the protocol and avoiding optimization loops.
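The publish/query pattern described above can be sketched as a small registry: each layer publishes its operating parameters and any layer may read, but not write, the parameters of another. The layer names, metric keys, and method names are illustrative assumptions rather than a standardized API:

```python
class CrossLayerFabric:
    """Minimal sketch of the cross-layer fabric as a publish/query store."""
    def __init__(self):
        self._store = {}

    def publish(self, layer, metrics):
        """A layer exposes (or refreshes) its own operating parameters."""
        self._store.setdefault(layer, {}).update(metrics)

    def query(self, layer, key, default=None):
        """Any other layer reads a published value; it never writes it,
        preserving the distributed, loop-avoiding design."""
        return self._store.get(layer, {}).get(key, default)

fabric = CrossLayerFabric()
fabric.publish("physical", {"interfaces": ["wlan0", "cell0"], "wlan0_snr_db": 22})
fabric.publish("transport", {"mss": 1460, "retransmit_rate": 0.004})
mss = fabric.query("transport", "mss")   # 1460, visible to e.g. the link layer
```

A real implementation would add access control, freshness timestamps, and change notifications, but the essential contract is the same: layers remain autonomous, and the fabric is only a structured information exchange.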
An example is a cognitive resource manager block residing at the media access layer that interacts with other layers to allocate resources optimally.
The proposed model introduces a novel approach in which network packets from the network layer can be distributed across multiple network interfaces by utilizing cross-layer information from the upper layers at the network interface layer. This involves decoupling adjacent layers to allow a single network flow to be distributed across various physical layer interfaces, which is the purpose of the physical abstraction layer illustrated in Figure 4. The role of the physical abstraction layer is to map a network flow onto network interfaces such that link layer frames can be sent across the available interfaces to provide the requisite data rate and quality of experience (QoE) to the user. The proposed physical abstraction layer envisions the network as technology agnostic, where higher layer packets can be placed on any available physical layer technology. The cross-layer fabric can be implemented via Open APIs that facilitate hierarchical optimizations. This framework is particularly suited for deploying AI algorithms in areas such as radio resource management and transport layer optimization. Figure 4 illustrates the data flow, where the physical abstraction layer, within the network interface device, obtains metrics from the upper layers, as indicated by the arrows. A key task of the abstraction layer is to select network interfaces intelligently while avoiding optimization loops, in which layers constantly make flow changes and devices constantly switch between interfaces due to fluctuating metrics, leading to instability and performance degradation. This decision making can be facilitated and improved through recommender systems that learn from and interpret information over a large dataset. A recommender system is an ML technique that leverages large amounts of data to make recommendations. This is shown in Figure 4, where the abstraction layer interacts with a recommender system on an external network.
The system leverages ML algorithms to learn patterns and relationships within a large dataset of network performance data collected over time. This dataset includes historical metrics, user behaviour, network performance metrics, application requirements, network interface channel parameters, and even contextual information such as location or time of day. By analysing this data, the recommender system can predict the expected performance of each available network interface under various conditions and recommend interfaces that maximize network performance while minimizing optimization loops and interface switching. For example, consider a user streaming a high-definition video on a mobile device over the Wi-Fi interface. The physical abstraction layer monitors the Wi-Fi connection and detects a sudden drop in signal strength, leading to increased latency and packet loss, and queries the recommender system. The recommender system, having learned from past data, recognizes that in this specific location the application performance can be improved by splitting the traffic across both the cellular and Wi-Fi interfaces. Alternatively, the system may recommend against splitting traffic onto the cellular interface if that network is congested and would likely degrade performance. Another scenario is deciding whether to split streams based on the user's location and mobility pattern. The physical abstraction layer communicates with a recommender system residing on an external network; this externalization allows more complex models and larger datasets to be used, improving the accuracy and effectiveness of the recommendations. A similar methodology is followed in software defined networks (SDNs) when the data plane receives a packet.
When a packet arrives at a switch in an SDN, the control plane analyses the packet and, based on specific rules and current network conditions, determines the best path for forwarding it. The control plane then uses the OpenFlow protocol to instruct the switch accordingly [22].
In the proposed model, using a recommender system, the physical abstraction layer can make more informed and stable decisions about splitting data among available network interfaces, leading to improved overall network performance and user experience by avoiding unnecessary and detrimental switching between interfaces. Figure 4 highlights the design elements of the proposed system. The bidirectional arrows between the physical abstraction layer and the network interface layer indicate the splitting of network layer data into multiple interface-specific queues at the physical abstraction layer, driven by input from all layers. Table 1 provides a summary of the protocol elements discussed herein, together with their objectives.
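As an illustration, the interface-selection step described above can be sketched in a few lines of Python. This is a hypothetical sketch, not the paper's implementation: the class `InterfaceStats`, the function `recommend_split`, the scoring rule, and the `margin` parameter are all illustrative stand-ins for the recommender's learned policy.

```python
from dataclasses import dataclass

@dataclass
class InterfaceStats:
    name: str
    signal_dbm: float   # measured signal strength in dBm
    loss_rate: float    # observed packet-loss ratio in [0, 1]
    congested: bool     # congestion flag learned by the recommender

def recommend_split(stats, margin=0.15):
    """Return per-interface traffic weights summing to 1.

    Stand-in for the recommender: score each non-congested interface
    from signal strength and loss, then drop interfaces scoring well
    below the best one so that small metric fluctuations do not cause
    constant re-splitting.
    """
    scores = {}
    for s in stats:
        if s.congested:
            continue  # recommender advises against a congested interface
        # Map signal in roughly [-100, 0] dBm to [0, 1], discount by loss.
        scores[s.name] = max(0.0, (s.signal_dbm + 100.0) / 100.0) * (1.0 - s.loss_rate)
    if not scores:
        return {}
    best = max(scores.values())
    kept = {k: v for k, v in scores.items() if v >= best * (1.0 - margin)}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}
```

The margin acts as a crude guard against the optimization loops noted above: an interface is added to or dropped from the split only when its score moves well away from the best candidate, so small metric fluctuations do not trigger constant re-splitting.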

6. Comparative Analysis

A layered network model aims to minimize the complexity of the network and allow interoperability between all types of hardware and software in the network. This has led to an emphasis on modularity: the division of functionality into several modules, each focused on a specific function, partitioning the entire network stack into services, protocols, and functions. However, the modular or layered structure imposes certain restrictions on interactions between the layers that can impede the performance of each layer, thereby decreasing overall network performance. The limitations of the TCP/IP layered model are highlighted in the previous sections. The fundamental objective of the proposed augmented model is to create an abstraction wherever a function or service needs to be separated, while attempting to group similar functions and layers together. The linchpin of the proposed model is clear separation between layers through open interfaces and the cross-layer fabric, which cuts vertically across layers. For example, in the proposed model, the lower layer (the physical layer combined with the link layer) transmits packets to the upper layer via whichever radio interface is available at that point in time. This is the primary difference from the conventional TCP/IP model, in which the flow is linear from top to bottom during a communication session. Cross-layer interaction is realized using a cross-layer fabric that assists the layers in making the best decisions. As a result of the proposed layering, manufacturers can create new products related to one layer that interoperate with other products. Indeed, the layered model makes possible a proliferation of wireless devices, fostering innovation and enabling products with specific functions.
Table 2 intuitively compares the TCP/IP model and the proposed model. The proposed model allows protocols to converge, with higher-layer protocols making decisions based on dynamic network metrics obtained through cross-layer communication. The TCP/IP model has no cross-layer visibility: each layer is oblivious of the layers above and below it and hence cannot make decisions based on an application's specific requirements.
Networks serve a wide range of applications with divergent requirements; thus, the network model should be flexible enough to address them. Furthermore, as mentioned previously, the number of interfaces in a device is expected to increase, and for future applications, application layer data can be split across several interfaces at the lower layers to satisfy stringent performance requirements. Openness is a fundamental principle of network architecture that reduces the cost of development. Splitting application data across interfaces in this way is an example of multiple layers interacting and sharing information synergistically.

7. Analytical Model

This section presents the analytical model and validates its performance. In a multi-path wireless network, every layer and interface can be conceptualized as a queue, as depicted in Figure 5. This implies that a single network queue can strategically split its data across multiple interfaces, with each interface having its own separate queue. Specifically, each network interface is represented by two queues in tandem: one for the MAC (Media Access Control) layer and another for the PHY (Physical) layer. This model simulates the network layer, MAC layer, and physical layer to obtain key performance metrics such as packet loss, latency, and throughput. The purpose of this analytical model is twofold:
  • Quantify Performance Gains: We aim to quantify the performance gains resulting from the distribution of network packets across multiple interfaces. The proposed approach was validated by quantifying the throughput increase between the network and interface layers. This analysis incorporated the overhead of queue processing and potential data losses at each interface to provide a realistic assessment of the achievable performance gains.
  • Optimize Interface Allocation: The model also facilitates optimization of the number of interfaces for a given network flow characteristic. By analyzing the performance impact of varying the number of interfaces, an optimal configuration can be identified that maximizes efficiency and minimizes latency, considering the specific demands of the network traffic.
The case above can be accurately modeled as an open Jackson queuing network [23]: a mathematical model consisting of multiple interconnected service nodes (queues) where packets arrive from outside the network, receive service at one or more nodes, and then leave the system. Each node can be modelled as an M/M/1 or M/M/c queue, implying that service times are exponentially distributed and arrivals follow a Poisson process. Routing between the nodes is probabilistic and is represented by a routing matrix. The model assumes a first-come, first-served discipline and that the utilization at each node is less than one to ensure stability. The network is open because packets arrive from an external interface onto the network layer and exit the system at the PHY layer. The system is a Jackson network because the following hold for i, j = 1, 2, ..., M, where M is the number of nodes in the system.
  • Node i is a FCFS queue and may have one or more exponential servers, each with a specific service rate.
  • External arrivals to node i follow a Poisson process.
  • After completing service at node i at the network layer, a packet may proceed to node j at the MAC layer and subsequently exit from the PHY layer.
Each queue is modelled as an M/M/1 queue [23] to represent the memoryless behaviour of packet arrivals and departures at each layer. The M/M/1 queueing model represents a system with a single server, assuming Poisson arrivals and exponential service times. While a standard open Jackson network requires constant external Poisson arrival rates and constant exponential service rates to guarantee a closed-form solution, a modified approach allows rate variations while preserving the structure. The model introduces state-dependent rates that maintain the property of local balance within the network. This flexibility allows varying traffic to be modelled without compromising the closed-form solution. The state-dependent model is a steady-state model, as it allows the service rate and the external arrival rate to depend on the number of jobs currently in the system.
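The standard M/M/1 steady-state quantities used in the derivation below can be checked numerically. These are textbook results [23], not specific to this paper; the function name is illustrative.

```python
def mm1_metrics(lam: float, mu: float):
    """Return (utilization, mean packets in system, mean time in system)
    for an M/M/1 queue with arrival rate lam and service rate mu."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu          # server utilization
    n = rho / (1.0 - rho)   # mean number of packets in the system
    w = 1.0 / (mu - lam)    # mean time a packet spends in the system
    return rho, n, w
```

For λ = 8 and μ = 10, this gives ρ = 0.8, N = 4 packets, and W = 0.5 time units, consistent with the stability requirement ρ < 1 stated above.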
The system presented in Figure 5 is modelled as 2n + 1 independent M/M/1 queues, where n is the number of network interfaces. This configuration serves as an analogue for the proposed scenario, in which a single network-level queue is split into multiple network interface queues.
Let λ denote the arrival rate of packets in a queue and μ denote the service rate of packets in a queue. Packets arrive from the upper layers of the network into the network queue. Let α denote the arrival rate of packets at the network queue, i.e., the external arrival rate. Packets from the network layer pass through the physical abstraction layer and may be bifurcated onto multiple network interfaces. Each network interface comprises a MAC and a PHY layer with a one-to-one mapping. Upon completing service in the network queue, a packet may move to any of the MAC-level interfaces with probability 1/n, where n is the number of network interfaces. Packets can move from the network layer to one or more queues with probability P.
Table 3. Table of Notations.
Notation  Description
n  Number of network interfaces
M  Number of nodes in the system
μ  Service rate, equal to 1/(service time)
λ  Arrival rate, equal to 1/(inter-arrival time)
E_μ  Effective service rate accounting for delays
α  External arrival rate
γ  Sum of all external arrival rates
W  Mean waiting time of a packet in the network
N  Number of packets at a node
ρ  Utilization of the server
r_ij  Probability of transition from node i to node j
ε  Service penalty
The routing matrix R = [r_ij] defines the probabilities of transitions between nodes in the system and is used to determine the total average arrival rate of packets at each node by solving the system's traffic equations. For a system with M nodes, one at the network layer and (M-1)/2 each at the MAC and PHY layers, the routing matrix corresponding to Figure 5 is an M × M matrix, shown below. The first row corresponds to departures from the network layer node to the lower layers, and the first column corresponds to arrivals at the network layer node from the lower layers. The next (M-1)/2 rows and columns correspond to the MAC layer queues, and the last (M-1)/2 rows and columns represent the PHY layer queues. In the first row, a value of 2/(M-1) in each of the (M-1)/2 columns after the first indicates the probability of transition from the network layer node to each MAC layer node; in the proposed model, packets are routed from the network layer node to any of the available MAC queues with equal probability. Packets move from a MAC layer queue only to its corresponding PHY layer queue, denoted by a transition probability of 1 in the routing matrix. The transition probability matrix for this scenario is as follows:
\[
R = \begin{bmatrix}
0 & \frac{2}{M-1} & \frac{2}{M-1} & \cdots & \frac{2}{M-1} & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 0 & \cdots & 0 & 0 & 1 & \cdots & 0 \\
\vdots & & & & & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 1 \\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
\vdots & & & & & & & & \vdots \\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0
\end{bmatrix}
\]
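For concreteness, the routing matrix above can be constructed programmatically; note that 2/(M-1) equals 1/n when M = 2n + 1. The sketch below is illustrative: the function name and the node ordering (network node first, then MAC queues, then PHY queues) are assumptions for this example.

```python
def routing_matrix(n: int):
    """Build the M x M routing matrix for n MAC/PHY interface pairs.

    Node 0 is the network-layer queue, nodes 1..n the MAC queues,
    nodes n+1..2n the PHY queues (M = 2n + 1).
    """
    m = 2 * n + 1
    r = [[0.0] * m for _ in range(m)]
    for i in range(1, n + 1):
        r[0][i] = 1.0 / n       # network layer -> each MAC with prob 1/n = 2/(M-1)
        r[i][n + i] = 1.0       # MAC i -> its PHY queue with probability 1
    # PHY rows stay all-zero: packets exit the network after PHY service.
    return r
```

The first row sums to 1 (every serviced packet goes to some MAC queue), while the all-zero PHY rows encode departure from the open network.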
The goal of the model is to minimize the mean waiting time in the network, denoted by W. Little's formula [23], which relates the average number of packets in the system to the average arrival rate and the average waiting time, gives (1):
\[ W = \frac{N}{\gamma} \tag{1} \]
Service overhead must be considered at the network layer as it bifurcates traffic into multiple MAC streams. The physical abstraction layer is represented as processing overhead, characterized by a service penalty that increases with the number of streams, as defined in (2):
\[ E_{\mu} = \mu \, (1 - \epsilon)^{\frac{M-1}{2}} \tag{2} \]
The arrival rate at the network layer node is α. The arrival rate λ at any node below the network node is given by the traffic equations for a Jackson network in (3):
\[ \lambda_i = \alpha + \sum_{j=1}^{M} \lambda_j \, r_{ji}, \qquad i = 1, 2, \ldots, M \tag{3} \]
In steady state, the total rate at which packets arrive at the queuing network from the outside must be equal to the rate at which they leave the network. This can be expressed in (4) as
\[ \alpha = \sum_{i=1}^{M} \lambda_i \Bigl(1 - \sum_{j=1}^{M} r_{ij}\Bigr) \tag{4} \]
The utilization of the server is calculated in (5) as,
\[ \rho_i = \frac{\lambda_i}{\mu_i} \tag{5} \]
The mean number of packets in the network, N, is evaluated in (6) as [23]:
\[ N = \sum_{i=1}^{M} \frac{\rho_i}{1 - \rho_i} \tag{6} \]
Substituting (5), (6) is expressed in (7) in terms of λ and the effective service rate E_μ:
\[ N = \sum_{i=1}^{M} \frac{\lambda_i}{E_{\mu_i} - \lambda_i} \tag{7} \]
W is evaluated in (8) using Little’s formula as,
\[ W = \frac{1}{\gamma} \sum_{i=1}^{M} \frac{\lambda_i}{E_{\mu_i} - \lambda_i} \tag{8} \]
The objective is to minimize W, subject to the following
\[ \gamma, \lambda_i, \mu_i > 0; \quad \mu_i \neq \infty; \quad \lambda_i \neq \mu_i; \quad \mu_i > \lambda_i \]
From the above, W becomes undefined if μ = λ or as λ → ±∞. The sum of external arrival rates (γ) cannot approach ±∞ because the arrival rates in the lower layers are constrained by the service rates of the upper layers. The throughput of a single node is the rate at which packets leave that node, and the throughput of the entire open queuing network is the rate at which packets leave the network. At equilibrium, the throughput of an M/M/1 queue equals its arrival rate. However, if a node contains a finite buffer that drops packets when full, not all packets will be served, and its throughput will be less than its arrival rate. Therefore, the throughput of the system is the rate at which packets exit the PHY layer.
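For the specific topology of Figure 5, the traffic equations (3) have a direct solution: λ equals α at the network node and α/n at each MAC and PHY queue, since each MAC queue feeds its PHY queue one-to-one. Equation (8) can then be evaluated numerically. The sketch below is illustrative, not the paper's implementation: it assumes all queues share a base service rate μ and, as a simplification, applies the service penalty of (2) (with (M-1)/2 = n) uniformly to every queue.

```python
def mean_waiting_time(alpha: float, mu: float, n: int, eps: float = 0.0):
    """Evaluate W = (1/gamma) * sum_i lambda_i / (E_mu_i - lambda_i)
    for the 2n+1 node network of Figure 5: lambda = alpha at the network
    queue and alpha/n at each MAC and PHY queue. E_mu applies the
    per-stream service penalty of Eq. (2), taken here as mu*(1-eps)**n.
    """
    e_mu = mu * (1.0 - eps) ** n              # effective service rate
    gamma = alpha                              # single external arrival stream
    lambdas = [alpha] + [alpha / n] * (2 * n)  # network, then n MAC + n PHY queues
    if any(l >= e_mu for l in lambdas):
        raise ValueError("unstable: every queue needs lambda_i < E_mu_i")
    return sum(l / (e_mu - l) for l in lambdas) / gamma
```

With α = 5, μ = 10, and no penalty, W falls from 0.6 at n = 1 to roughly 0.47 at n = 2, matching the qualitative trend reported in Section 8.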

8. Analytical Modeling Results

This section presents the numerical results derived from the analytical model proposed in the previous section. The formulated equations were implemented in Python to analyze the model's behavior under various conditions and provide a comparative analysis. To explore the relationships between different parameters, we conducted a series of experiments: in each, we varied a single independent variable across a specified range while holding all other relevant parameters constant, and this process was repeated for multiple pairs of variables to visualize and analyze their interdependencies. This modeling approach analyzes the impact of the number of interfaces on several key system parameters, including:
  1. Packet processing time
  2. Net system throughput
  3. Queue length
  4. Server utilization
This analysis establishes the feasibility of the proposed system. The experimental setup assumes that all physical interfaces have identical bandwidth and loss characteristics for the duration of the experiment. A network interface herein denotes a sequence of packets from MAC to PHY. Our model considers the queuing overhead that arises from distributing traffic across multiple interfaces. As a result, the system's performance is not a monotonically increasing function of the number of interfaces. Instead, there is an optimal number of interfaces for any given packet arrival rate, after which the performance either plateaus or declines due to the compounding overheads of packet distribution at the physical abstraction layer, packet retransmissions, and packet reordering, as previously discussed.
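The existence of an optimal interface count can be reproduced with a short sweep. This is a sketch under assumed parameters, not the paper's simulation: the penalty form E_μ = μ(1-ε)^n follows (2) with (M-1)/2 = n, and the specific values of α, μ, and ε below are illustrative.

```python
def network_delay(alpha: float, mu: float, n: int, eps: float):
    """Mean delay W for n interfaces, with the per-stream service penalty
    applied as E_mu = mu * (1 - eps)**n at every queue (a simplification)."""
    e_mu = mu * (1.0 - eps) ** n
    lambdas = [alpha] + [alpha / n] * (2 * n)   # network queue, n MAC, n PHY
    if any(l >= e_mu for l in lambdas):
        return float("inf")                     # unstable configuration
    return sum(l / (e_mu - l) for l in lambdas) / alpha

def best_interface_count(alpha: float, mu: float, eps: float, n_max: int = 8):
    """Return the n in 1..n_max minimizing W: more interfaces shrink the
    per-queue load, but the penalty shrinks the effective service rate."""
    return min(range(1, n_max + 1),
               key=lambda n: network_delay(alpha, mu, n, eps))
```

With α = 6, μ = 10, and ε = 0.05, the minimum delay occurs at n = 2; with no penalty (ε = 0), adding interfaces keeps helping and the sweep returns the upper bound, illustrating why the penalty term is what produces an interior optimum.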
Table 4. Simulation parameters.
Parameter  Description
μ  Exponential distribution
α  Poisson distribution
ε  Uniform distribution for service penalty
n  Number of MAC-PHY interfaces
Method  Aggregation across 3 states
Figure 6a illustrates the maximum throughput of the network measured at the PHY layer as a function of the number of interfaces. The x-axis represents the number of interfaces, ranging from 1 to 8, and the y-axis represents the maximum obtained throughput. There is a direct relationship between the number of interfaces and the throughput: as the number of interfaces increases, the network throughput also increases. For instance, moving from one to two interfaces yields a 50% gain in throughput, and moving from two to three interfaces yields a similar gain. While the increase remains positive throughout the observed range, the rate of increase visibly declines as the number of interfaces grows, although not as pronouncedly as in the mean packet time graph in Figure 6b. This figure supports the premise that additional wireless interfaces benefit network performance, specifically by maximizing throughput at the network interface layer: more interfaces allow more concurrent data streams, leading to higher aggregate data rates. The value of ρ < 1 in all cases implies that the network is neither saturated nor overloaded, so the observed throughput increase is genuinely due to the added capacity of the additional interfaces and not due to the network recovering from a state of congestion. If ρ were approaching or exceeding 1, throughput might have decreased due to contention and retransmissions.
Figure 6b displays the mean packet time in the network as a function of the number of interfaces. The x-axis represents the number of interfaces, ranging from 1 to 8, while the y-axis shows the mean packet time in microseconds ( μ s). An inverse relationship is observed between the number of interfaces and the mean packet time. As the number of interfaces increases, the mean packet time decreases. The most significant reduction in mean packet time occurs when transitioning from one to two interfaces, where the mean packet time decreases from approximately 0.419 μ s to 0.404 μ s. This indicates that even a single additional wireless interface provides a notable performance enhancement. Subsequent additions of interfaces continue to reduce the mean packet time, but at a diminishing rate. For instance, increasing the number of interfaces from 5 to 6 yields a marginal reduction of only 0.0081 μ s. These results suggest that additional wireless interfaces can enhance network performance by reducing packet latency, but there exists an optimal number of interfaces beyond which the performance gain becomes negligible.
Figure 6c illustrates the relationship between the mean number of packets at a node and the number of interfaces in the network. The mean number of packets at a node at a given instant, which includes both packets waiting in the queue and those currently being serviced, is a direct measure of the workload and potential bottlenecks at that node; lower values indicate a healthier, less burdened network. The most substantial reduction occurs when moving from 1 to 2 interfaces, where the number drops from 474.7 to 408.4. This initial decrease of about 66.3 packets shows that even one additional wireless interface can significantly reduce the backlog of packets at a node, alleviating a single point of congestion. While the reduction continues, the rate of decrease diminishes as more interfaces are added. A lower mean number of packets at a node directly translates to reduced queuing delay and lower end-to-end latency, and with fewer packets accumulating at the nodes, the network becomes inherently more stable. This figure provides compelling evidence that additional wireless interfaces improve network performance by significantly reducing the mean number of packets accumulating at network nodes.
Figure 6d displays the server utilization against the number of interfaces. The data indicate a clear inverse relationship between the number of network interfaces and mean server utilization. As shown in Figure 6d, an increase in the number of interfaces results in a reduction in the mean server utilization. This relationship is particularly pronounced when transitioning from a single interface to two interfaces, where we observed a 9% reduction in mean server utilization, highlighting the significant immediate benefit of even a single additional interface in terms of load distribution. The reduction in mean server utilization increases the server's idle capacity, thereby enhancing the network's resilience to unexpected traffic spikes. This buffer allows the system to absorb additional load without a proportional increase in latency or a risk of overload. A less utilized server can process new packets with reduced queuing and processing delays, leading to lower overall packet latency and improved network responsiveness. This also aligns with Figure 6b, where packet times decrease as utilization decreases.
The data presented in Figure 6 demonstrates a clear correlation between the number of active wireless interfaces and the mean server utilization and the throughput of the system. Specifically, increasing the number of wireless interfaces from one to two resulted in a 9% reduction in mean server utilization. This decrease directly contributes to enhanced network stability and responsiveness, improving the system’s capacity to manage fluctuating traffic loads and unexpected surges. These findings support the objective of improving overall network performance.

9. Conclusion

The concept of the metaverse is being referred to as the successor to the Internet. While several research projects on the metaverse are in progress, a key component of its realization, and of other demanding applications, is the ability of the wireless network to support these futuristic scenarios. To guarantee an ideal immersive experience for users, data rate and latency requirements for data transfer and processing must be adhered to. The goal of this paper is to set the stage for a profound change in the approach to creating metaverse-ready wireless networks, where technology progression combined with a paradigm shift in data flow through the layered network model can yield a multi-fold increase in capacity and performance. To that end, we presented a new augmented network model and explored novel scenarios, specifically cross-layer interactions and the integration of a recommender system, to address the critical challenges of network capacity and efficiency for futuristic applications like the metaverse. The value of this approach was validated by an analytical model, which highlights the opportunity for all layers, from the application layer to the physical layer, to adapt dynamically to variations in user requirements and network environments. While our model provides a strong theoretical foundation, detailed testbed studies are reserved for future work. Finally, the research and industry communities must push the barriers of existing architectures while developing new technologies to meet the needs of futuristic applications.

Biographies

A. George (IEEE Senior Member) is a faculty member at the Higher Colleges of Technology,
United Arab Emirates. He earned his Doctorate degree in Computer Science from
the University of Louisville, USA and his master’s degree in computer science
from Ball State University, USA. Dr. George has over a decade and a half of
industry and academic experience in multinational companies such as Kyocera,
National Instruments, and Alliance University. His primary areas of interest
include wireless networks, distributed computing, and machine learning. He
has published in reputed journals such as Elsevier and IJPCC. His recent
publication is a book titled Towards Wireless Heterogeneity in 6G Networks with
CRC Press (April 2024).
M. W. Hussain is an Associate Professor in the School of Computing and
Information Technology at Reva University, Bangalore. He received his PhD in
Computer Science from the National Institute of Technology Meghalaya, Shillong,
and his master’s degree in Computer Science from the National Institute of
Technology Arunachal Pradesh. His research interests include software-defined
networking and big data. His work has appeared in reputed venues, including the
IEEE Internet of Things Journal and journals published by IEEE, Elsevier,
and Wiley.

References

  1. Park, S.M.; Kim, Y.G. A metaverse: taxonomy, components, applications, and open challenges. IEEE Access 2022, 10, 4209–4251. [Google Scholar] [CrossRef]
  2. Tang, F.; Chen, X.; Zhao, M.; Kato, N. The Roadmap of Communication and Networking in 6G for the Metaverse. IEEE Wireless Communications 2022.
  3. Fu, B.; Xiao, Y.; Deng, H.; Zeng, H. A survey of cross-layer designs in wireless networks. IEEE Communications Surveys & Tutorials 2013, 16, 110–126. [Google Scholar] [CrossRef]
  4. Rabinovitsj, D. The next big connectivity challenge: Building metaverse-ready networks. https://tech.facebook.com/ideas/2022/2/metaverse-ready-networks/, 2022.
  5. Wang, C.X.; You, X.; Gao, X.; Zhu, X.; Li, Z.; Zhang, C.; Wang, H.; Huang, Y.; Chen, Y.; Haas, H.; et al. On the road to 6G: Visions, requirements, key technologies and testbeds. IEEE Communications Surveys & Tutorials, 2023. [Google Scholar]
  6. Akyildiz, I.F.; Kak, A.; Nie, S. 6G and beyond: The future of wireless communications systems. IEEE Access 2020, 8, 133995–134030. [Google Scholar] [CrossRef]
  7. Habibi, M.A.; Han, B.; Nasimi, M.; Kuruvatti, N.P.; Fellan, A.; Schotten, H.D. Towards a fully virtualized, cloudified, and slicing-aware RAN for 6G mobile networks. In 6G Mobile Wireless Networks; Springer, 2021; pp. 327–358. [Google Scholar]
  8. Wheeler, D.; Natarajan, B. Engineering semantic communication: A survey. IEEE Access 2023, 11, 13965–13995. [Google Scholar] [CrossRef]
  9. Ostovar, A.; Keshavarz, H.; Quan, Z. Cognitive radio networks for green wireless communications: an overview. Telecommunication Systems 2021, 76, 129–138. [Google Scholar] [CrossRef]
  10. Chen, Z.; Zhang, Z.; Yang, Z. Big AI models for 6G wireless networks: Opportunities, challenges, and research directions. IEEE Wireless Communications 2024.
  11. Khoramnejad, F.; Hossain, E. Generative AI for the optimization of next-generation wireless networks: Basics, state-of-the-art, and open challenges. IEEE Communications Surveys & Tutorials 2025.
  12. Olaifa, J.O.; Arifler, D. Dual Connectivity in Heterogeneous Cellular Networks: Analysis of Optimal Splitting of Elastic File Transfers Using Flow-Level Performance Models. IEEE Access 2023, 11, 140582–140595. [Google Scholar] [CrossRef]
  13. Khan, S.A.; Shayea, I.; Ergen, M.; Mohamad, H. Handover management over dual connectivity in 5G technology with future ultra-dense mobile heterogeneous networks: A review. Engineering Science and Technology, an International Journal 2022, 35, 101172. [Google Scholar] [CrossRef]
  14. Prakash, M.; Abdrabou, A.; Zhuang, W. Stochastic delay guarantees for devices with dual connectivity. IEEE Internet of Things Journal 2023, 11, 2126–2138. [Google Scholar] [CrossRef]
  15. Geelen, A. Deutsche Telekom, Ericsson and Qualcomm demonstrate millimeter wave technologies for QoS managed connectivity. https://www.telekom.com/en/media/media-information/archive/5g-millimeter-wave-technologies-for-industry-1024054, 2023.
  16. Srivastava, V.; Motani, M. Cross-layer design: a survey and the road ahead. IEEE Communications Magazine 2005, 43, 112–119. [Google Scholar] [CrossRef]
  17. Sooriyabandara, M.; Farnham, T.; Mahonen, P.; Petrova, M.; Riihijarvi, J.; Wang, Z. Generic interface architecture supporting cognitive resource management in future wireless networks. IEEE Communications Magazine 2011, 49, 103–113. [Google Scholar] [CrossRef]
  18. Al Emam, F.A.; Nasr, M.E.; Kishk, S.E. Coordinated handover signaling and cross-layer adaptation in heterogeneous wireless networking. Mobile Networks and Applications 2020, 25, 285–299. [Google Scholar] [CrossRef]
  19. Ramly, A.M.; Abdullah, N.F.; Nordin, R. Cross-layer design and performance analysis for ultra-reliable factory of the future based on 5G mobile networks. IEEE Access 2021, 9, 68161–68175. [Google Scholar] [CrossRef]
  20. Chatterjee, S.; De, S. QoE-aware cross-layer adaptation for D2D video communication in cooperative cognitive radio networks. IEEE Systems Journal 2022, 16, 2078–2089. [Google Scholar] [CrossRef]
  21. Liu, K.; Xu, X.; Chen, M.; Liu, B.; Wu, L.; Lee, V.C. A hierarchical architecture for the future internet of vehicles. IEEE Communications Magazine 2019, 57, 41–47. [Google Scholar] [CrossRef]
  22. Hussain, M.W.; Sangaiah, A.K.; Reddy, K.H.K.; Roy, D.S.; Alenazi, M.J.; Javvaji, P.K. A Novel Intelligent Task Offloading Scheme for Multi-Controller Environment in Software Defined Internet of Vehicles. IEEE Internet of Things Journal 2025.
  23. Stewart, W.J. Probability, Markov chains, queues, and simulation: the mathematical basis of performance modeling; Princeton University Press, 2009.
Figure 1. Form factor of a metaverse device.
Figure 2. Multiple physical layers.
Figure 3. Proposed network model.
Figure 4. Physical abstraction layer interaction with adjoining layers.
Figure 5. Open Queuing Network.
Figure 6. Simulation results of the proposed approach.
Table 1. Summary of protocol design elements.
Design Element: Cross Layer Optimization – Layer Interfaces and Message Format
Objective: Exchange information across layers.
Description: Design and deploy middleware to exchange messages across layers; define the message format across layers; specify a data model for an open API across layers.

Design Element: Cross Layer Optimization – Best Outcome
Objective: Apply constrained optimization at each layer.
Description: Avoid optimization loops where layers counteract each other; allow layers to enable or disable cross-layer optimizations; ensure consistent actions at each layer across devices.

Design Element: Flexible Physical Layer – Transmission Management
Objective: Distribute network flow over multiple interfaces.
Description: Manage the retransmission policy across radio interfaces; decide whether to retransmit on the same or another interface; detect losses and retransmit with minimal delay and jitter.

Design Element: Flexible Physical Layer – Transmission Flexibility
Objective: Discover and utilize appropriate radio interfaces.
Description: Transmit efficiently across the selected interfaces; dynamically segregate traffic across interfaces; select interfaces that ensure consistent performance.
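The "dynamically segregate traffic across interfaces" objective above can be illustrated with a small scheduling sketch. The greedy join-the-shortest-weighted-queue policy below, along with the interface names and capacity figures, are illustrative assumptions for a physical abstraction layer, not the paper's prescribed algorithm:

```python
from dataclasses import dataclass

@dataclass
class Interface:
    """A radio interface known to the physical abstraction layer."""
    name: str
    capacity_mbps: float      # advertised link capacity
    queued_bits: float = 0.0  # bits already scheduled on this link

    def backlog_time(self) -> float:
        """Estimated drain time of the current backlog, in seconds."""
        return self.queued_bits / (self.capacity_mbps * 1e6)

def schedule(packets_bits, interfaces):
    """Assign each packet to the interface that will drain it soonest.

    A greedy 'join the shortest weighted queue' policy: one plausible
    realization of segregating traffic across heterogeneous links.
    """
    assignment = []
    for size in packets_bits:
        best = min(interfaces, key=lambda i: i.backlog_time())
        best.queued_bits += size
        assignment.append(best.name)
    return assignment

# Hypothetical links: a 100 Mbps Wi-Fi interface and a 400 Mbps mmWave one.
ifaces = [Interface("wifi", 100.0), Interface("mmwave", 400.0)]
plan = schedule([12_000] * 10, ifaces)
```

With these capacities the scheduler steers traffic toward the faster link in proportion to its capacity, which is the behavior the table asks of interface selection "ensuring consistent performance".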
Table 2. Comparison of TCP/IP and the proposed model.
Feature: Layer-centric
TCP/IP: Layering in TCP/IP was an outcome of merging existing protocols; hence the model fails to represent protocol stacks other than the TCP/IP suite (e.g., a Bluetooth connection).
Proposed Model: The proposed model builds on TCP/IP but is fine-grained, with layers functionally equipped regardless of the technology deployed at each layer.

Feature: Function Separation
TCP/IP: The TCP/IP model has no clear separation between the services and functions at each layer, which can cause inconsistency.
Proposed Model: The model revolves around separating functions into layers, with a focus on layer independence.

Feature: Protocol Adaptation
TCP/IP: Protocol and layer boundaries are rigid, which restricts the optimization of protocols for future application scenarios.
Proposed Model: Intelligent protocol adaptations and optimizations can be made per application scenario, as layers are not fully agnostic of one another.

Feature: Cross-Layer Visibility
TCP/IP: Layer interfaces are closed and offer no visibility for constrained optimizations that span layer boundaries.
Proposed Model: Individual layers can make optimal decisions, with the cross-layer fabric providing enhanced visibility.

Feature: Layer-Dependence
TCP/IP: There is a linear mapping from the application layer to the network interface layer, so a single application is attached to a single radio interface.
Proposed Model: The physical abstraction layer removes this linear mapping, so a single application can be attached to multiple radio interfaces.

Feature: Openness
TCP/IP: A closed set of interfaces offers limited functionality for improving network performance.
Proposed Model: The cross-layer fabric exposes information across layers through an open API, so layers can communicate with each other directly.
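The cross-layer fabric's open API described in the table can be sketched as a minimal publish/subscribe bus, where the physical abstraction layer reports per-interface state and upper layers subscribe to it. The topic name, metric fields, and the pub/sub shape itself are illustrative assumptions, not an API specified by this paper:

```python
from collections import defaultdict

class CrossLayerFabric:
    """Minimal publish/subscribe bus sketching an 'open API' across
    layers; topic names and the metric schema are hypothetical."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a layer's callback for messages on a topic."""
        self._subs[topic].append(callback)

    def publish(self, topic, **metrics):
        """Deliver a metrics message to every subscriber of the topic."""
        for cb in self._subs[topic]:
            cb(metrics)

fabric = CrossLayerFabric()
seen = []
# The transport layer subscribes to physical-layer link reports...
fabric.subscribe("phy.link_state", seen.append)
# ...and the physical abstraction layer publishes per-interface metrics.
fabric.publish("phy.link_state", iface="wifi", snr_db=23.5, loss=0.01)
```

Because layers exchange messages through the fabric rather than calling each other directly, an individual layer can enable or disable cross-layer optimization simply by subscribing or unsubscribing, which matches the layer-independence goal stated above.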
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.