Preprint
Review

This version is not peer-reviewed.

Fog Computing and Deep Reinforcement Learning for Smart Grid Demand Response: A Comprehensive Review

Submitted:

02 August 2025

Posted:

04 August 2025


Abstract
This paper reviews recent advancements in fog computing and deep reinforcement learning for smart grid demand response systems. It analyzes key developments in fog architectures, learning techniques, and energy optimization for distributed energy management. With the rise of IoT devices and renewable energy, traditional cloud-based systems face challenges such as high latency, limited scalability, and energy inefficiency. Through analysis of recent literature, we highlight major gaps, including the lack of integrated fog-reinforcement learning frameworks, limited adaptability to real-time demand fluctuations, and the absence of holistic solutions addressing multiple performance issues simultaneously. While current methods show improvements in specific areas (e.g., 15–35% energy savings or 47% latency reduction), they lack integrated frameworks to deliver comprehensive, real-time optimization for future smart grids. This review provides a systematic framework for developing integrated approaches that address these complexities, offering actionable insights for real-world smart grid deployment.

1. Introduction

The modern energy industry is undergoing a profound transformation driven by the spread of renewable energy, upgrades to aging grid infrastructure, and the rapid growth of Internet of Things (IoT) devices. This development presents both great potential and substantial challenges for energy management systems. Smart grids must now process real-time data from billions of connected devices and deliver sub-second response times, especially under stress conditions or peak demand. Nevertheless, classic centralized cloud-based architectures are increasingly unable to meet these requirements because of inherent limitations in latency, scalability, and flexibility [64,65].
Current systems exhibit three major research gaps that still limit the effectiveness of smart grid demand response strategies. The first gap involves architectural limitations in dynamic response. Cloud-based power systems suffer latency well above 500 milliseconds at peak times, whereas less than 100 milliseconds is needed to maintain grid stability [64]. The massive volume of IoT-generated data creates communication congestion and latency, undermining real-time responsiveness and limiting optimal energy flows [63]. Existing solutions have focused either on improving fog computing alone, without intelligent decision-making, or on embedding reinforcement learning (RL) in cloud frameworks, neglecting the critical requirement of latency reduction.
The second gap is the absence of integrated optimization frameworks. The existing literature concentrates on separate aspects of the problem, investigating either the latency benefits offered by fog computing or the adaptability of RL algorithms. Holistic models that balance multiple performance indicators, such as latency, energy efficiency, and scalability, simultaneously remain strikingly scarce [66]. For example, although multi-agent deep reinforcement learning has achieved energy savings of 15-20% in building-level implementations, these solutions frequently target isolated operations and do not support integration with the larger grid [46].
The third major drawback is the lack of real-time flexibility in modern systems. Renewable sources such as solar and wind are inherently variable, so an effective grid must adapt dynamically to their fluctuations. Current methods rely on static or fixed policies that cannot adjust to rapidly changing requirements and are therefore insufficient during fast demand fluctuations or emergencies.
This review aims to address these shortcomings by providing an in-depth discussion of how fog computing and deep reinforcement learning can be combined to construct next-generation demand response systems. This is the first integrated analysis of the synergistic potential of fog computing and deep reinforcement learning in smart grid applications, contrasting with prior studies that treated these technologies in isolation. It seeks to provide a theoretical and practical reference for systems that can achieve sub-100-millisecond response times, energy savings above 30 percent, and scalability to over 15,000 simultaneously operating devices.
Specifically, the complexity of multi-source data in modern residential areas, including smart meter readings, weather data, building characteristics, and socioeconomic factors [1], demands computational systems capable of real-time synthesis of these diverse sources. This paper addresses the question of how the decentralized processing power of fog computing can be used alongside sophisticated deep Q-learning-based algorithms, such as Double Deep Q-Networks (DDQN) or Dueling DQN variants, to satisfy the multi-dimensional requirements of smart grid energy management. In this way, the review lays the groundwork for future research into and application of intelligent, flexible, and scalable energy management systems that can meet the requirements of a fast-changing power environment.
Although federated reinforcement learning has previously been integrated with advanced DQN variants in building energy systems [46], little has been proposed on how these techniques can be combined to handle large-scale, grid-level demand response. Likewise, policy sharing in distributed systems through blockchain-enabled model coordination, a secure and tamper-resistant method, has received little study. Combining these technologies holds potential for scalable, privacy-preserving, real-time energy management in heterogeneous grid environments within future frameworks.

2. Fog Computing in Smart Grid Applications

2.1. Evolution from Cloud-Centric to Distributed Paradigms

The shift from centralized cloud computing to distributed fog computing marks a paradigm shift in addressing the gaps of cloud-centric energy management systems. Drawing on prior applications of fog computing, Atlam et al. [66] identified four attributes that make fog computing well suited to IoT-based energy applications: location awareness to minimize communication overhead, geographical dispersion to achieve higher fault tolerance, horizontal scalability to accommodate large numbers of devices, and real-time interactivity for applications that require sub-100ms responses.
Traditional cloud-based demand response systems exhibit average response times of around 500-800ms because of multi-hop communication and central processing bottlenecks [64]. Fog computing, in turn, enables on-site processing within roughly 50-100ms, a 5-8x latency reduction relative to cloud processing that falls within the sub-100ms threshold required to sustain grid stability during demand spikes.
Recent work confirms considerable potential in other domains that sets precedents for energy management applications. The combination of fog computing and IoT has shown promising results in smart traffic management, with 60 percent lower latency, and in health monitoring, with 45 percent energy savings [10]; both rely on architectural designs directly transferable to smart grid settings.

2.2. Hierarchical Fog Computing Architectures for Energy Systems

Current fog computing designs for energy systems employ hierarchical three-tier architectures that balance the computational power of cloud computing with the low latency of edge computing. This architecture is especially suitable when demand response applications require both local real-time control and global optimization coordination.
  • Tier 1: IoT Device Layer
Smart meters, environmental sensors, and controllable appliances form the device layer; these devices must respond in real time while handling continuous incoming data streams. This layer performs low-level data collection and executes simple directives with response times under 10ms.
  • Tier 2: Fog Node Layer
Intermediate fog nodes, usually installed at distribution substations or neighborhood aggregation points, process data, make local optimization decisions, and coordinate devices in real time. Mahapatra et al. [12] showed that energy-aware task offloading and load balancing at this layer can improve energy efficiency by up to 27% compared with cloud-centric approaches while still maintaining sub-100ms response times.
  • Tier 3: Cloud Integration Layer
The regional cloud infrastructure handles long-term optimization, predictive analytics, and coordination across multiple fog regions. This hybrid model enables millisecond-level local responsiveness while retaining global optimization capabilities.
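To make the division of labor across these tiers concrete, the following minimal sketch places an incoming task on the lowest tier whose latency budget and remaining capacity can satisfy it. The tier names, latency figures, and capacities are illustrative assumptions echoing the description above, not values from any cited deployment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tier:
    name: str
    latency_ms: float   # typical response latency of this tier
    capacity: int       # concurrent tasks the tier can absorb
    load: int = 0

# Assumed budgets mirroring the three-tier description above.
tiers = [
    Tier("device", latency_ms=10, capacity=5),
    Tier("fog", latency_ms=80, capacity=500),
    Tier("cloud", latency_ms=600, capacity=100_000),
]

def place_task(deadline_ms: float) -> Optional[Tier]:
    """Place a task on the lowest tier that meets its deadline and has spare capacity."""
    for tier in tiers:
        if tier.latency_ms <= deadline_ms and tier.load < tier.capacity:
            tier.load += 1
            return tier
    return None  # deadline cannot be met on any tier

print(place_task(100).name)   # "device" while device capacity remains, then "fog"
print(place_task(5))          # None: no tier can respond within 5 ms
```

In a real deployment the placement decision would also weigh energy cost and network congestion, which is precisely where the learning-based schemes discussed in Section 3 come in.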
The fog-cloud continuum concept has gained considerable traction in energy management research because of its capacity to combine global coordination with local autonomy. Alwabel and Swain [18] investigated deadline- and energy-aware application module placement in fog-cloud systems and showed that both energy efficiency (32 percent improvement) and response time (65 percent improvement) can be enhanced substantially through intelligent placement decisions based on computational needs and communication costs.
Branannvall et al. [17] addressed cost optimization in the edge-cloud continuum using energy-based workload placement; their findings are particularly helpful for resource-constrained utilities aiming to deploy fog computing environments cost-effectively. Their strategic workload placement showed that utilities can achieve 40 percent cost savings without violating performance requirements, supporting more sustainable and economically viable edge computing architectures.

2.3. IoT Integration and Real-Time Data Processing

With the current rate of IoT adoption in smart grids and forecasts of more than 75 billion connected devices by 2025, energy management systems face unprecedented opportunities as well as enormous challenges. The distributed processing nature of fog computing is not only useful but necessary to manage this scale while still responding to events in real time.
Bhatia et al. [10] presented an exhaustive study of fog data analytics for IoT traffic, detailing analytical methods that operate over smart meter readings, weather data, and occupancy patterns and applying real-time insights to improve demand response. Their architecture handled more than 10,000 parallel data streams with average processing latencies below 50ms.
In 5G cellular networks, Muhamad et al. [28] explored the benefits of energy-efficient task offloading in fog computing, showing how contemporary communication infrastructure can work seamlessly with fog computing frameworks to support energy-efficient devices in IoT-based energy management. Their approach reduced communication energy overhead by 35% while keeping response times below 100ms. Their subsequent work [39] further streamlined task offloading mechanisms, indicating additional reductions in energy consumption through intelligent workload assignment according to device capabilities and network conditions.
Guo et al. [29] discussed the integration of fog and cloud computing to enhance the energy efficiency of telehealth IoT systems, achieving considerable energy savings by reducing the volume of data that must be transmitted. Their simulation experiments showed 40% energy savings in IoT implementations, and the underlying principles transfer directly to smart grid deployments, where reducing communication energy is essential to overall system efficiency.

3. Deep Reinforcement Learning in Energy Management

3.1. Foundations of Deep Q-Network (DQN) Applications in Fog Environments

Deep reinforcement learning has proven to be a game-changer for optimizing complex dynamic systems, and Deep Q-Network (DQN) algorithms in particular have been shown to work well in energy management applications involving sequential decision-making under uncertainty. DQN uses deep neural networks to approximate value functions in high-dimensional state spaces, making it well suited to dynamic smart grid environments in which agents must find optimal policies for distributing energy, balancing loads, and allocating resources as conditions change continually.
Integrating DQN algorithms into fog computing infrastructure addresses a critical limitation in existing demand response systems: the capacity to make intelligent, responsive decisions at the network edge in near real time while continuing to learn and improve. Classical rule-based systems offer little flexibility, whereas cloud-based AI systems introduce unacceptable latency for real-time grid management.
Sumona et al. [4] offered a detailed deep Q-learning approach focused on improving Quality of Experience (QoE) and energy efficiency in fog computing systems. Their work showed that DQN can effectively learn to balance competing goals: minimizing energy usage on fog nodes, preserving a high-quality user experience, and keeping response times below 100ms. The framework adapted to dynamic network conditions in real time, achieving energy consumption 25 percent lower than static assignment schemes while maintaining 99 percent service availability.
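As a point of reference for the DQN-based schemes discussed in this section, the sketch below shows the two core operations of such an agent: epsilon-greedy action selection over a fog-node state and a one-step temporal-difference update of the Q-network. It is a generic, minimal illustration in PyTorch with assumed state and action dimensions, not the implementation of Sumona et al. [4].

```python
import random
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 8, 4, 0.99   # assumed sizes: node load, queue length, price, ...

q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy action selection over the Q-network."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, done):
    """Pull Q(s, a) toward r + gamma * max_a' Q(s', a') for one observed transition."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max() * (1.0 - float(done))
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice such an agent would also use an experience replay buffer and a separate target network; the variants discussed next refine exactly these elements.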

3.2. Advanced DQN Variants for Enhanced Performance

A major recent research direction has been to overcome the inherent shortcomings of the standard DQN algorithm through architectural enhancements tailored to the fog computing setting.
  • Double Deep Q-Network (DDQN) Integration: Double Deep Q-Network (DDQN) addresses the value overestimation problem of standard DQN, in which a single network both selects and evaluates actions, by decoupling action selection from value estimation. This design fix yields more stable and predictable learning, which becomes critical for complicated resource allocation choices at fog nodes, where many variables and uncertainties arise. Such stability is particularly relevant during peak demand periods, when decision reliability is vital.
  • Dueling Deep Q-Network Architecture: This architectural advance decouples the estimation of the state value and the action advantage into two separate neural network branches. With this design, agents learn which system states hold intrinsic value regardless of the action selected, leading to a far more efficient learning process in dynamic fog environments. In demand response applications, this translates into faster recognition of new grid situations and better resource utilization when loads fluctuate.
Zhong et al. [5] proposed an efficient offloading scheme for fair and energy-efficient task distribution based on the Dueling Double Deep Q-Network (D3QN), designed specifically for heterogeneous fog-enabled IoT systems. Their intelligent device-offloading strategy balanced the two conflicting goals of fairness and energy efficiency using these advanced RL algorithms. The D3QN implementation improved energy consumption efficiency by 35 percent over traditional DQN methods without disadvantaging any device type or level of need, a notable advance in multi-objective optimization for fog-based energy systems.
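The dueling decomposition described above can be written compactly. The sketch below separates a shared feature trunk into a state-value branch and an action-advantage branch and recombines them into Q-values; dimensions are illustrative and this is not Zhong et al.'s code.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), the dueling decomposition."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value branch V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # action-advantage branch A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

q = DuelingQNet(state_dim=8, n_actions=4)
print(q(torch.randn(2, 8)).shape)   # torch.Size([2, 4])
```

A D3QN agent combines this network with the double-Q learning target sketched later in Section 6.3.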

3.3. Multi-Agent and Federated Learning Paradigms

Rather than centering on single-agent systems, as earlier architectures did, more recent reinforcement learning methods leverage the inherently decentralized design of modern energy systems to produce scalable, stable, distributed solutions for smart grid applications.
  • Multi-Agent Deep Reinforcement Learning (MADRL)
Shen et al. [46] developed a multi-agent deep reinforcement learning framework for building energy systems that incorporates renewable energy resources and energy storage. While this framework achieved 15-20% energy savings over rule-based systems in building-scale applications without jeopardizing system stability or occupant comfort, it lacks scalability to grid-level demand response, highlighting the need for federated learning approaches [32] to coordinate distributed nodes. The multi-agent framework allows different agents, each representing a building subsystem such as HVAC, lighting, renewable generation, or energy storage, to learn and adjust their policies in view of the actions and states of the other agents.
This distributed learning paradigm can greatly increase both the power and the scalability of the system by allowing local decision-making while still ensuring global coordination. Unlike centralized approaches, which become computationally intractable as system size grows, the multi-agent framework scales naturally with the number of participating entities, making it well suited to large-scale smart grid implementations with thousands of buildings and distributed energy resources.
  • Federated Deep Reinforcement Learning (FedDRL)
Federated reinforcement learning represents a paradigm shift in distributed learning methods that directly answers the privacy and scalability challenges of smart grid applications. Shi et al. [32] analyzed federated deep reinforcement learning and applied it to task allocation in vehicular fog computing, enabling multiple edge devices to train models collaboratively on their own data without exchanging the raw data.
In contrast to conventional centralized learning methods, which require aggregating sensitive consumer energy consumption data and thereby risk privacy infringement, federated learning allows distributed fog nodes to learn optimal demand response policies without exposing consumer data. Local data remain private; only local model parameters are shared as fog nodes train individual models on their respective datasets. This preserves privacy locally while also minimizing communication overhead and increasing system robustness against single points of failure.
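A minimal sketch of the parameter-sharing step, assuming a FedAvg-style weighted average of locally trained model weights, is given below. It illustrates the general federated pattern described here, not the exact protocol of Shi et al. [32].

```python
from typing import Dict, List
import torch

def fedavg(local_states: List[Dict[str, torch.Tensor]],
           sample_counts: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of fog-node model parameters; raw consumption data never leaves a node."""
    total = sum(sample_counts)
    return {
        name: sum((n / total) * state[name]
                  for state, n in zip(local_states, sample_counts))
        for name in local_states[0]
    }

# Each fog node trains on its own consumers' data, then shares only weights:
# global_weights = fedavg([node.q_net.state_dict() for node in fog_nodes],
#                         [node.num_samples for node in fog_nodes])
```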
Fu et al. [31] went a step further by combining blockchain with federated learning-based resource management in smart IoT settings. Their design addresses the security and trust concerns of distributed learning systems for energy management without giving up the efficiency of local processing. They integrated blockchain into the secure exchange of model parameters, making the federated learning system tamper-proof and able to identify and stop any malicious participant attempting to corrupt the learning process.

4. Energy Efficiency in Fog Computing

4.1. Resource Allocation and Management Strategies

Energy efficiency is the most important requirement for sustainable fog computing deployment, especially in large IoT deployments in smart grid settings where thousands of fog nodes must operate 24/7 with minimal environmental impact. The problem goes beyond trivial energy minimization: resources must be allocated intelligently, balancing accuracy and service requirements against computational efficiency and communication overhead.
Kopras et al. [59] explored task allocation optimization in distributed energy-efficient computing systems under strict latency limits, introducing mathematical models that account for both energy cost and latency constraints. According to their findings, smarter task allocation yields 30 percent energy savings without sacrificing the sub-100ms responsiveness required by critical demand response applications. Their mathematical framework for energy-performance trade-offs in distributed fog settings provides a basis for understanding the balance between energy consumption and performance.
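As a toy illustration of the kind of mathematical model used in such work (not Kopras et al.'s actual formulation), a workload can be split between fog and cloud to minimize energy subject to a latency cap, here as a one-variable linear program with assumed per-unit costs:

```python
from scipy.optimize import linprog

# x = fraction of the workload processed on the fog tier (assumed per-unit costs);
# the cloud is assumed cheaper in energy per task but far slower end to end.
e_fog, e_cloud = 2.0, 1.0        # energy per unit of work (arbitrary units)
l_fog, l_cloud = 40.0, 600.0     # latency contribution in ms
latency_budget = 100.0

# minimize e_fog*x + e_cloud*(1-x)  subject to  l_fog*x + l_cloud*(1-x) <= budget
res = linprog(
    c=[e_fog - e_cloud],                 # constant term e_cloud dropped from the objective
    A_ub=[[l_fog - l_cloud]],
    b_ub=[latency_budget - l_cloud],
    bounds=[(0.0, 1.0)],
)
x = res.x[0]
print(f"fog share: {x:.2f}, energy: {e_fog * x + e_cloud * (1 - x):.2f}")
# The latency constraint forces roughly 89% of the work onto the fog tier even though
# the cloud is assumed cheaper per task, illustrating the energy-latency trade-off.
```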
Singh et al. [11] proposed a machine learning-based resource allocation scheme for Software-Defined Network (SDN)-enabled fog computing. Their strategy showed considerable gains in energy efficiency through intelligent control of network resources and dynamic traffic routing. The ML-based system outperformed static allocation strategies by 40 percent in energy efficiency and scaled to diverse network conditions and load patterns, making it highly adaptable to the varied real-world situations encountered in smart grids.
Centralizing network control while preserving the advantages of distributed processing illustrates the potential of SDN capabilities to enable global optimization of both computational and communication resources. This hybrid approach is particularly useful in smart grid applications where network topology and traffic patterns shift dynamically with demand and generation patterns.
Premalatha and Prakasam [48] proposed a resource-efficient, fault-tolerant, energy-efficient task offloading scheme dedicated to IoT-fog computing networks. Their work addressed both energy efficiency and system reliability, two key attributes of mission-critical energy management applications where system failure may have disastrous impacts on grid stability. The proposed scheme reduced energy consumption by 25%, increased system reliability by 60%, and reduced latency to under 75ms, a critical parameter for implementing real-time tasks in IoT-based smart grids.
In the surveyed research, simulators such as iFogSim were generally used to study system performance in large-scale deployments. Commonly used parameters include 10,000 to 20,000 IoT devices (e.g., smart meters, environmental sensors), hierarchical fog node architectures, latency targets of 100ms or less, and low-power energy scheduling plans. Other studies, such as Massrur et al. [35], simulated 15,000 devices to capture the real-world density of urban residential smart grid deployments serving thousands of users. These setups help evaluate real-time responsiveness, energy efficiency, and network flexibility under shifting load conditions.

4.2. Advanced Energy Optimization Techniques

Several advanced methods have been designed to reduce energy consumption while maintaining the high performance and reliability levels required by smart grid applications.
  • Communication and Computational Energy Optimization
Kopras et al. [3] carried out an extensive survey of the energy consumption minimization problem in fog networks, covering both communication- and computation-oriented energy-efficient methods. Their survey highlighted several preferred techniques: workload consolidation to keep fewer nodes active, dynamic voltage and frequency scaling (DVFS) for adaptive power consumption, renewable energy integration for sustainable operation, and smart caching policies to avoid redundant computations.
Communication can account for a large share, some 40-60%, of total fog node energy, so optimizing data transmission patterns is key to improving overall system efficiency. Data compression, adaptive sampling rates, and intelligent data fusion within fog nodes are among the techniques that can substantially reduce communication volume without compromising the data quality needed for demand response decisions.
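Among the techniques listed above, DVFS trades execution time against power by scaling voltage and frequency together. A first-order sketch of that trade-off, using the textbook CMOS dynamic-power model with assumed constants (not figures from the cited survey), is shown below.

```python
def dynamic_energy(cycles: float, freq_hz: float, capacitance: float = 1e-9) -> float:
    """Dynamic energy under DVFS: P = C * V^2 * f, with V assumed proportional to f.

    Energy = P * (cycles / f), so energy grows roughly with f^2 for a fixed workload.
    """
    voltage = freq_hz / 1e9                       # assumed linear V-f relation, normalized
    power = capacitance * voltage ** 2 * freq_hz  # dynamic power
    return power * (cycles / freq_hz)             # execution time = cycles / f

# Halving the clock roughly quarters the dynamic energy of the same workload,
# at the cost of doubling its execution time.
print(dynamic_energy(1e9, 2e9), dynamic_energy(1e9, 1e9))   # 4.0 vs 1.0
```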
  • Artificial Intelligence Energy Management Systems
Zhang et al. [7], for example, investigated broad uses of AI for energy saving in fog computing-enabled data centers, showing how artificial intelligence algorithms can improve both cooling infrastructure and the distribution of computational resources to reduce the total energy footprint of the data center. Their approach saved 25 percent of computational energy through smart workload scheduling and 35 percent of cooling energy through predictive thermal control.
The incorporation of deep learning models to predict and control energy consumption patterns in real time has been a major milestone for energy efficiency in fog computing. Leveraging past consumption trends, environmental factors, and workload characteristics, AI systems can proactively optimize resource distribution and cooling strategies, limiting the energy consumed while still meeting performance requirements.
  • Multi-Objective Optimization Frameworks
Pan et al. [49] proposed an elaborate Lyapunov-based Long Short-Term Memory Particle Swarm Optimization (LSTM-PSO) scheme designed to optimize energy consumption in IoT fog computing settings. Their algorithm conserved energy without compromising overall system performance, supporting strict latency (<100 ms) and throughput (>10,000 transactions/sec) requirements.
The LSTM subsystem enables proactive resource allocation decisions by predicting future energy loads and system states. The PSO algorithm then refines resource allocation with respect to multiple goals, including energy consumption, response time, and resource utilization, as sketched below. The Lyapunov-based stability analysis guarantees that the optimization decisions preserve system stability even under the highly dynamic conditions characteristic of smart grids.
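A compact particle swarm sketch for the allocation step is given below; the LSTM forecasting and Lyapunov stability analysis of the cited scheme are omitted, and the cost weights, bounds, and demand model are assumptions.

```python
import numpy as np

def pso_allocate(cost, dim, n_particles=30, iters=100, bounds=(0.0, 1.0)):
    """Minimize an allocation cost (e.g. weighted energy plus latency) with basic PSO."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))            # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()]

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest

# Example: split a unit of demand across 5 fog nodes; quadratic per-node energy cost
# plus a penalty for unserved demand pushes the swarm toward an even split.
cost = lambda alloc: (alloc ** 2).sum() + 10.0 * abs(alloc.sum() - 1.0)
print(pso_allocate(cost, dim=5))
```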

4.3. Comprehensive Energy-Performance Optimization

Recent studies have developed multi-dimensional optimization approaches that consider several performance dimensions jointly rather than optimizing individual metrics in isolation.
  • Predictive Energy-Aware Scheduling
Nazeri et al. [2] presented an in-depth predictive, energy-saving scientific workflow scheduling process for fog computing, in which sophisticated simulation and forecasting reduced energy use by 25 percent while meeting tight deadlines. They combine machine learning prediction models with optimization methods to pre-schedule jobs according to predicted energy cost, computation needs, and time limits.
The predictive method allows fog nodes to anticipate energy demands and optimize resource utilization ahead of peak demand periods, avoiding the inefficiency of purely reactive approaches to energy usage. This proactive approach is especially useful in smart grid applications where demand fluctuates in patterned ways on both daily and seasonal scales.
  • Real-Time Energy-Cost Optimization
Trabelsi and Ahmed [23] specialized in energy- and deadline-driven real-time task scheduling in fog computing. Their simulations provided detailed performance metrics for a real-time system under varying loads, offering insight into the balance between energy demands, economic considerations, and timing factors in demand response.
Their framework achieved a 30 percent reduction in operating costs with 95 percent deadline compliance, demonstrating the feasibility of multi-objective optimization in practical implementations. The cost-awareness component lets utilities optimize not only technical performance but also economic efficiency, which is paramount to broad adoption of fog computing technologies in the energy sector.
Vashisht et al. [26] gave an in-depth evaluation of energy-efficient fog computing methods, describing possible research avenues and emerging trends in this rapidly developing domain. Their work highlighted the importance of hardware-software co-design for energy efficiency and the need to integrate renewable energy sources into fog computing systems intended for smart grids.

5. Integration Challenges and Research Gaps

5.1. Scalability and Dynamic Resource Management

The rapid expansion of IoT-enabled devices in smart grids, anticipated to exceed 75 billion worldwide by 2025, will require advanced resource management techniques that scale efficiently while remaining performant and energy-resilient. The current literature highlights the considerable difficulty of controlling computational and communication resources as system scale grows exponentially.
  • Key barriers to scaling
The first scalability challenge in fog-based smart grid systems is that supporting 10,000 or more devices per fog node is infeasible with simple, static load-balancing schemes, which cannot handle the highly dynamic requirements of the grid [13]. Current approaches such as standard task scheduling and load balancing can be partially applied, yet they remain too rigid or too deterministic. These constraints are particularly problematic in environments where device density and consumption patterns vary with time of day, weather, and grid-wide conditions. Incorporating federated deep reinforcement learning (DRL) into fog architectures can provide a more dynamic strategy by decentralizing decision-making across nodes [32], but how model aggregation should take place in an actual deployment remains an open research problem.
An exhaustive literature search on machine learning applications to resource management in fog computing environments likewise showed that intelligent management strategies are needed to support diverse workloads, given the heterogeneity and distributed nature of fog computing systems and the low-latency demands of reactive workloads [44]. According to this analysis, existing methods perform adequately in static environments but fail to sustain reasonable performance with device populations exceeding 15,000 units or under dynamic workloads.
Sheikh et al. [45] presented dynamic K-means clustering augmented with fuzzy reasoning for task scheduling in fog computing environments, ensuring scalability by intelligently grouping workloads. Their technique dynamically classifies incoming tasks into groups defined by resource demand, time constraints, and execution priority so as to organize better task distribution across the available fog nodes, as illustrated in the sketch below. The fuzzy logic integration helps manage the uncertain, imprecisely characterized tasks that arise in a smart grid with intricate temporal and spatial demand patterns.
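The grouping step can be illustrated with a plain k-means over task descriptors; the fuzzy-membership refinement of the cited scheme is not reproduced here, and the feature names and values are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row describes an incoming task: [cpu_demand, deadline_ms, priority] (synthetic values).
tasks = np.array([
    [0.2,  50, 3],
    [0.8, 500, 1],
    [0.3,  60, 3],
    [0.9, 800, 1],
    [0.5, 120, 2],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(tasks)
for cluster in range(3):
    print(f"cluster {cluster}: tasks {np.where(labels == cluster)[0].tolist()}")
# Clusters dominated by tight deadlines can then be pinned to nearby, lightly loaded fog nodes.
```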
  • Suggested Solutions to Integration
Scalability issues in future integrated fog-reinforcement learning paradigms can be addressed through federated learning systems in which the computational capacity and learning task are distributed over multiple fog nodes. Each fog node can train DQN models on local data and participate in federated learning protocols through which learned policies are shared without exposing any sensitive consumer data. Such a strategy could serve 50,000+ devices simultaneously in a fog domain while ensuring sub-100ms response times via localized decision-making. This builds on Massrur et al.'s [35] simulation of 15,000 devices and extends scalability through federated policy sharing, which is critical for real-world grid density.

5.2. Security and Privacy Considerations in Distributed Energy Systems

Fog nodes are deployed deeper into the network edge and are therefore more exposed to physical attacks, cyber-attacks, and other security threats than centralized cloud infrastructure. Providing strong security while maintaining algorithmic integrity is an essential challenge for field deployment in smart grid environments, where a security breach may have disastrous effects on grid stability and consumer privacy.
  • Privacy-Preserving Energy Management
Smart grid applications process sensitive consumer energy consumption data that reveals considerable detail about occupancy patterns, appliance usage, and lifestyle. Classic cloud-based systems aggregate this highly sensitive information at single points, posing severe privacy risks and potential compliance issues under frameworks such as GDPR and emerging energy data protection regulations.
Fu et al. [31] suggested combining blockchain with federated learning for resource management in smart IoT environments, addressing the security and trust aspects of distributed learning systems for energy management. Their approach provides a tamper-resistant distributed learning setting: blockchain technology delivers accountability and prevents data manipulation, while federated learning preserves privacy locally, protecting raw consumer information.
The blockchain integration allows model parameters to be exchanged safely between fog nodes without revealing sensitive consumer statistics, while smart contracts can automate trust decisions and identify rogue parties attempting to disrupt the learning process. The method achieved 99.9% detection accuracy against malicious model updates while matching the learning performance of centralized methods.
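One lightweight way to screen incoming model updates before aggregation, independent of the blockchain mechanism of Fu et al. [31], is to reject updates that point far away from the element-wise median update; the sketch below is a generic cosine-similarity filter with an assumed threshold.

```python
import numpy as np

def filter_updates(updates, threshold: float = 0.0):
    """Keep updates whose cosine similarity to the median update exceeds the threshold;
    a crude defense against obviously poisoned contributions."""
    reference = np.median(np.stack(updates), axis=0)
    ref_norm = np.linalg.norm(reference) + 1e-12
    kept = []
    for u in updates:
        cos = float(u @ reference) / (np.linalg.norm(u) * ref_norm + 1e-12)
        if cos > threshold:
            kept.append(u)
    return kept

honest = [np.array([0.10, 0.20, -0.10]), np.array([0.12, 0.18, -0.08])]
poisoned = [np.array([-5.0, -4.0, 9.0])]
print(len(filter_updates(honest + poisoned)))   # 2: the poisoned update is dropped
```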
  • Secure and Intelligent Task Offloading
Pakmehr [58] identified key security considerations for protecting sensitive information when offloading tasks to fog nodes, creating intelligent task offloading strategies based on deep reinforcement learning. This work showed that security and performance in fog computing systems can be improved concurrently through adaptive task handling that balances efficiency and security requirements.
The DRL-based security approach learns optimal security policies that meet both protection demands and performance limits, improving security measures by 40 percent while retaining 95 percent of baseline performance. Such dynamic security is valuable for smart grid applications, which require protection adjusted to data sensitivity, operational grid conditions, and the threat landscape.

5.3. Interoperability and Standardization Barriers

Practical implementation of integrated fog-reinforcement learning systems requires combining fog nodes, IoT devices, and communication systems from different manufacturers that use different protocols and data representations. This lack of standardization has become a major impediment to widespread adoption and to building genuinely interoperable smart grid systems.
  • Communication Protocol Integration
To guarantee effective communication among heterogeneous fog network devices, Jha and Tripathy [43] developed improved communication mechanisms that combine the Constrained Application Protocol (CoAP) with machine learning. Their focus on device compatibility fosters connectivity and allows fog computing to grow and interoperate with other systems without compromising performance demands.
Their improved CoAP implementation showed a 60% decrease in communication overhead compared with conventional HTTP-based architectures while maintaining the reliability needed to meet the real-time requirements of demand response applications. The machine learning component adapts the protocol to network conditions and resource capabilities so that it delivers the best performance across a wide range of deployment contexts.
  • Standards-Based Interoperability Frameworks
Building holistic interoperability requires addressing several layers of the communication stack, from physical device interfaces up to application-layer data exchange protocols. Recent studies show that fog computing systems should support several communication protocols at once, including Wi-Fi, cellular (4G/5G), LoRaWAN, and Zigbee, with the same performance and security on every interface.
Deploying and running DQN models in future integrated systems will also require standardized APIs so that trained models can be ported easily across heterogeneous fog nodes and the application behaves identically on a variety of hardware platforms.
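One pragmatic route to such portability is an existing interchange format rather than a new API. The sketch below exports a hypothetical, untrained PyTorch Q-network to ONNX so that heterogeneous fog nodes with different runtimes could load the same policy file; it is an assumption about how standardization might be realized, not a cited proposal.

```python
import torch
import torch.nn as nn

# Assumed: a policy network for a fog node (dimensions and training omitted).
q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))

dummy_state = torch.randn(1, 8)
torch.onnx.export(
    q_net, dummy_state, "fog_policy.onnx",
    input_names=["state"], output_names=["q_values"],
    dynamic_axes={"state": {0: "batch"}},   # allow variable batch size at inference
)
# Any fog node with an ONNX runtime (CPU, GPU, or embedded accelerator) can then
# load fog_policy.onnx and evaluate the same demand-response policy.
```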
Table 1. Summary of Challenges and Research Directions.
Identified Challenge | Suggested Research Direction
Latency in cloud-based control systems | Local decision-making using edge/fog-based reinforcement learning
Energy inefficiency in fog infrastructure | Energy-aware task scheduling with DDQN and adaptive resource provisioning
Limited scalability beyond 10k devices | Use of federated learning for decentralized policy training
Privacy and data exposure risks | Blockchain-secured federated RL or secure multi-party computation
Static control policies | Online DQN variants capable of real-time adaptability

6. Comparative Analysis of Existing Approaches

6.1. Performance Metrics and Evaluation Frameworks

The literature shows considerable variability in how fog computing and reinforcement learning applications in energy management are evaluated, with large differences in methodology, metrics, and assumptions. This part offers a detailed comparative analysis that goes beyond quantitative comparison to examine the reasons behind the observed performance differences and their implications for integrated system design.
Table 2. Comprehensive Comparative Analysis of Research Approaches.
Authors | Year | Focus Area | Methodology | Key Quantitative Findings | Performance Analysis | Critical Limitations
Chouikhi et al. [24] | 2022 | Energy consumption scheduling | Fog computing service model with game-theoretic optimization | 47% latency reduction vs. cloud-based systems; 23% energy savings | Superior latency performance attributed to local processing capabilities and reduced communication overhead | Limited scalability analysis (tested <5,000 devices); lacks dynamic adaptation mechanisms
Kumar et al. [36] | 2022 | Green demand-aware computing | Prediction-based resource provisioning with machine learning | 30% energy consumption reduction in IoT deployments; 95% prediction accuracy | Energy savings achieved through proactive resource allocation based on demand prediction | Single-objective optimization focus; lacks integration with real-time learning systems
Kopras et al. [59] | 2022 | Task allocation optimization | Mathematical modeling with linear programming | Balanced computational efficiency with 25% energy reduction while meeting latency constraints | Optimal task allocation achieved through mathematical optimization considering multiple constraints | Static optimization approach; lacks real-time adaptation to changing conditions
Shen et al. [46] | 2022 | Building energy systems | Multi-agent DRL with distributed learning | 15-20% energy savings vs. rule-based systems; 12% improvement in occupant comfort | Multi-agent coordination enables distributed decision-making while maintaining global optimization | Limited to building-scale applications; scalability to grid-level systems unproven
Zhang et al. [51] | 2022 | Blockchain-enabled fog computing | Deep reinforcement learning with blockchain integration | Optimized resource allocation with 35% improvement in security metrics | Blockchain integration provides security while DRL enables adaptive resource management | High computational overhead (40% increase); potential scalability bottlenecks
Zhong et al. [5] | 2023 | Energy-efficient offloading | D3QN reinforcement learning with fairness constraints | Fair resource allocation with 35% energy optimization improvement | D3QN architecture provides stable learning and improved fairness vs. conventional DQN | Limited fog node diversity in testing; homogeneous hardware assumptions
Massrur et al. [35] | 2024 | Residential demand coordination | Fog-based hierarchical system with distributed control | Improved coordination between residential aggregators and distribution grids | Hierarchical fog architecture enables scalable coordination across multiple residential clusters | Simulation-only validation; lacks real-world deployment verification
Nazeri et al. [2] | 2024 | Predictive scheduling | Workflow simulation with energy-aware algorithms | 25% energy reduction while meeting 95% of deadline constraints | Predictive approach enables proactive optimization vs. reactive strategies | Limited to scientific workflows; applicability to dynamic grid workloads unclear

6.2. Critical Analysis of Performance Differences

  • Latency Performance Analysis
The 47% latency reduction attained by Chouikhi et al. [24] is the most noteworthy improvement in response performance. This result stems from their fog-based service model, which avoids cloud communication for demand response decisions. However, their game-theoretic approach lacks adaptive learning, limiting optimization under dynamic grid conditions (e.g., renewable input fluctuations or demand spikes), a gap addressed by DRL variants such as D3QN [5].
Conversely, reinforcement learning-based solutions such as that of Shen et al. [46] achieve smaller latency gains (10-15%) but exhibit better flexibility because they continue to learn. Combining the latency advantage of fog computing with the adaptability of reinforcement learning is a major opportunity for next-generation systems.
  • Energy Efficiency Comparison
Reported energy efficiency gains range from 15 percent (Shen et al. [46]) to 35 percent (Zhang et al. [7]), with large disparities depending on the nature of the optimization and the scope of the system. The 30 percent improvement achieved by Kumar et al. [36] through prediction-based provisioning demonstrates the usefulness of proactive resource management, whereas the blockchain-enabled approach of Zhang et al. [51] incurred a 40% increase in computational overhead alongside its security gains. This trade-off highlights the need for lightweight RL models [4] optimized for fog nodes that balance efficiency and resource use.
  • Scalability Analysis
Most available solutions lack adequate validation at scale, typically being tested on systems managing fewer than 10,000 devices at a time. This is a significant translational drawback for smart grid applications, which must service tens of thousands of devices across the distribution grid.
  • Explaining the Observed Performance Differences
The reported improvements in latency and energy efficiency can be credited to several architectural and algorithmic advances. For example, Dueling DQN enables better state-value estimation, so convergence and decision accuracy improve under fluctuating demand. Federated reinforcement learning reduces dependence on centralized computation by enabling local learning with privacy preservation. In other cases, blockchain is employed to secure the sharing of model parameters and prevent tampering during distributed learning. Together, these techniques improve responsiveness, reduce energy consumption, and increase the resilience of demand response systems, especially under high-load conditions.

6.3. Identified Critical Research Gaps

A systematic review of the existing literature reveals four important research gaps that restrict the development of fully integrated fog computing and reinforcement learning-based smart grid demand response systems. These gaps point to key technological and methodological omissions that must be addressed to enable intelligent, real-time, and scalable energy management solutions.
The first major open research area is the limited incorporation of advanced Deep Q-Network (DQN) extensions, namely Double DQN (DDQN) and Dueling DQN, into fog computing platforms. Although DDQN and Dueling DQN have shown better results and improved learning stability in many reinforcement learning studies, they have not yet been successfully joined with fog-based architectures for optimizing demand response in smart grids. Current works either discuss the advantages of the fog computing ecosystem, such as lower latency and locally processed data, while disregarding advanced AI decision-making, or employ advanced reinforcement learning methods in centralized cloud environments where latency remains an acute limitation. Combining DDQN's correction of overestimation bias, Dueling DQN's value-function separation, and the low latency of fog nodes could address both sub-100ms response times and decision accuracy, a synergy absent in cloud-based RL [5] and fog-only systems [24]. Sub-100ms response times also enhance decision-making accuracy, because low latency enables real-time adaptation to grid fluctuations, addressing both speed and precision.
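The bias correction referred to above amounts to a one-line change in the learning target: the online network selects the next action while a slowly updated target network evaluates it. The sketch below is a generic double-DQN target computation, not a cited implementation.

```python
import torch

def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double DQN: the action is chosen by the online network but valued by the target network."""
    with torch.no_grad():
        best_action = online_net(next_state).argmax(dim=-1, keepdim=True)
        next_q = target_net(next_state).gather(-1, best_action).squeeze(-1)
        return reward + gamma * next_q * (1.0 - done)

# Standard DQN would instead use target_net(next_state).max(dim=-1).values, letting the
# same network both pick and score the action, which is what inflates value estimates.
```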
The second research gap concerns weak mechanisms for real-time adaptability. Most existing systems rely on pre-configured control decisions, fixed optimization parameters, or models trained offline that cannot adjust effectively to fast-varying grid conditions, including unforeseen demand increases, variable renewable supply, or device-level faults. For example, even though mathematical optimization methods can provide near-optimal task assignments under stable conditions, they cannot adapt dynamically to unanticipated circumstances without full recalculation [59]. Because renewable energy sources are inherently variable and consumer demand continues to become more sporadic, smart grid systems should support continuous learning that updates decision policies in real time without compromising system reliability. Filling this gap requires robust online learning frameworks that can flexibly meet energy demands and predict and redistribute resources without causing grid failures.
The third key gap revolves around scale-out issues. Most proposed solutions are tested on small-scale testbeds or simplified simulation environments that, given the complexity of a modern distribution network with tens of thousands of interconnected IoT devices, cannot verify their effectiveness. The problems of large-scale distributed systems involve not just computational demands but also communication overheads, coordination complexity, and slow convergence of distributed learning algorithms. Although federated learning has been discussed as a potentially effective way to handle data decentralization and privacy, it has not yet been evaluated comprehensively in real-world smart grid operation, where communication channels may exhibit high latency, low bandwidth, and even intermittent connectivity. Realistic testing of scalable demand response systems must therefore stress the system under realistic deployment conditions to guarantee feasibility and resilience.
The fourth and final research gap concerns aligning privacy and security functions with system performance. Consumer information security is an important aspect of smart grid applications, but the computational cost of most security mechanisms is high and degrades system responsiveness. For example, although blockchain-based architectures can provide greater data integrity and tamper resistance, they usually introduce substantial latency and processing overhead and are therefore impractical in the sub-second response domain [51]. Similarly, secure aggregation in federated learning provides privacy but may require additional communication rounds and encryption steps before decisions can be made. The challenge lies in developing lightweight, privacy-enhancing models, including those using cryptographically protected local training, secure multiparty computation, or trusted execution environments inside fog machines, that are fast enough to remain real-time while providing security isolation.
Together, these four gaps underline the importance of designing integrated, adaptive, and secure architectures capable of fully exploiting the synergies between fog computing and advanced reinforcement learning. Bridging them will be essential to the next generation of intelligent, responsive, and privacy-aware demand response systems in future smart grids.

7. Future Research Directions

7.1. Integrated Fog-Reinforcement Learning Framework Development

Further studies should emphasize the creation of unified models that combine enhanced DQN architectures with optimized fog computing structures, with an explicit focus on real-time demand response in smart grids. Such integration offers a forward-looking approach to addressing the identified research gaps comprehensively.
To make better decisions in complex circumstances, researchers should investigate transformer-based models for processing high-dimensional time-series data in fog environments. These architectures, originally designed for natural language, can be used to model temporal patterns of energy consumption. Moreover, Neural Architecture Search (NAS) offers a promising direction for constructing lightweight neural networks tailored to fog nodes' computational constraints. Ensemble learning that combines more than one learning method (e.g., DQN, policy gradients, heuristic optimization) could yield systems that operate effectively across a variety of environments and remain robust.
  • Linking Advanced Artificial Intelligence with Existing Methods
Afachao and Abu-Mahfouz [62] explored the use of intelligent edge computing for energy efficiency, showing that edge computing enhanced with AI mechanisms such as predictive analytics and dynamic workload allocation can significantly improve the performance and sustainability of distributed compute sites at the network edge. Building on their work, future studies should focus on more tightly integrated methods:
  • Transformer-Based Models for High-Dimensional IoT Data: although DQN variants have proven successful, transformer models may better handle the high-dimensional, time-series characteristics of smart grid data. Attention mechanisms may enable more sophisticated pattern recognition in energy consumption data while preserving edge processing within the fog layer.
    Ensemble Learning Strategies: several learning algorithms (DQN variants, policy gradient methods, and evolutionary approaches) could be combined at fog nodes to give more reliable solutions to the decision-making problem. Individual algorithms may handle optimization in different areas of demand response and contribute to overall system performance.
    Neural Architecture Search (NAS) for Fog-Optimized Models: automated design of neural network architectures targeted at the computational capacity of fog nodes could deliver substantial efficiency improvements. NAS may find architectures that trade accuracy against compute so they can run in real time.

7.2. Enhanced Security and Privacy Frameworks

Future research in this area should aim at comprehensive security protocols that shield distributed energy management systems without compromising the operational efficiency that real-time grid operation requires.
  • Privacy-Preserving Federated Learning Development
Building on the federated learning foundation established in current research, a future framework would ideally include:
  • Differential Privacy Integration: differential privacy can add mathematical privacy guarantees to federated learning protocols while affecting learning performance only slightly, through noise addition mechanisms that retain utility and give formal privacy bounds (see the sketch after this list).
    Homomorphic Encryption for Secure Aggregation: allows computation on encrypted model parameters during federated model aggregation, so fog nodes can learn collaboratively without revealing even aggregated local energy consumption patterns.
    Secure Multi-Party Computation (SMC) Protocols: efficient SMC protocols designed specifically for fog computing settings could enable collaborative optimization without exposing individual consumer data or fog node operating states.
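A minimal sketch of the differential-privacy step mentioned in the first item above: clip each fog node's model update to bound its influence, then add Gaussian noise before sharing it. The clip norm and noise scale are illustrative; calibrating them to a formal (epsilon, delta) budget is beyond this sketch.

```python
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float = 1.0,
                     noise_std: float = 0.5) -> np.ndarray:
    """Clip the update, then add Gaussian noise scaled to the clipping bound."""
    rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

# Only the noisy, clipped update leaves the fog node; averaging many such updates at the
# aggregator lets the noise largely cancel while individual contributions stay masked.
```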

7.3. Integration with Emerging Technologies

Integration of emerging technologies: A few opportunities are emerging to integrate new technologies that need to be investigated systematically to develop smart grids of next generation:
  • Electric Vehicle and Vehicle-to-Grid (V2G) Coordination
The growing adoption of electric vehicles poses challenges but also offers substantial opportunities for demand response. Smart charging and discharging of EVs, optimized through fog architectures, could provide significant grid-balancing potential:
  • Dynamic Charging Optimization: Fog infrastructure could support real-time optimization of charging schedules using DQN-based strategies that account for grid conditions, renewable energy availability, user preferences, and battery health (a hedged reward-shaping sketch follows this list).
  • Vehicle-to-Grid Energy Trading: Plugged-in EVs could participate in real-time, peer-to-peer energy trading with the grid, facilitated by fog nodes while preserving the privacy of individual vehicle usage patterns.
  • Mobile Fog Nodes on EVs: Electric vehicles could also serve as mobile fog nodes, extending computational coverage and providing backup processing capacity during disasters or peak-demand periods.
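To illustrate how such a DQN-based charging agent might be shaped, the sketch below, assuming NumPy, defines a discrete charging/discharging action set, a reward that balances energy cost, renewable availability, the driver's target state of charge, and battery wear, and the usual epsilon-greedy action selection; every coefficient and the action set itself are illustrative assumptions, not values from the reviewed studies.

import numpy as np

# Discrete charging actions in kW (negative values mean vehicle-to-grid discharge).
ACTIONS_KW = np.array([-7.0, 0.0, 3.5, 7.0, 11.0])

def charging_reward(power_kw, price, renewable_share, soc, soc_target, dt_h=0.25):
    """Reward for one 15-minute step: penalize expensive grid charging, reward
    charging when renewables are abundant, track the driver's target state of
    charge, and discourage discharge that accelerates battery wear."""
    energy_cost = price * max(power_kw, 0.0) * dt_h
    green_bonus = 0.05 * renewable_share * max(power_kw, 0.0) * dt_h
    soc_penalty = 0.5 * abs(soc_target - soc)
    wear_penalty = 0.02 * max(-power_kw, 0.0)
    return -energy_cost + green_bonus - soc_penalty - wear_penalty

def select_action(q_values, epsilon=0.1, rng=np.random.default_rng(0)):
    """Epsilon-greedy selection as typically used while training a DQN agent."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

if __name__ == "__main__":
    fake_q = np.array([0.1, 0.3, 0.8, 0.5, 0.2])    # stand-in for a Q-network's output
    a = select_action(fake_q)
    r = charging_reward(ACTIONS_KW[a], price=0.30, renewable_share=0.6,
                        soc=0.55, soc_target=0.80)
    print(ACTIONS_KW[a], round(r, 3))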
  • Microgrid Coordination and Islanding Capabilities
The decentralized design of fog computing is well suited to distributed energy production and consumption across interconnected microgrids:
  • Multi-Microgrid Coordination: DQN-based coordination algorithms could optimize energy transfers among multiple microgrids while preserving the independence and stability of each microgrid.
  • Islanding Detection and Management: Fog-based solutions could rapidly detect grid disconnection events, enabling microgrids to switch automatically to islanded operation and to allocate internal resources optimally.
  • Renewable Energy Integration Optimization: Intelligent algorithms to forecast and control distributed renewable resources (solar panels, wind turbines, energy storage) across interconnected microgrid networks.
  • Advanced Renewable Energy Integration
A key research direction is the development of sophisticated algorithms to forecast and manage renewable energy sources within fog-based demand response systems:
  • Weather-Aware Energy Forecasting: Machine learning models running on fog nodes could combine meteorological data with local measurements to produce hyper-local forecasts of renewable energy production (a toy forecasting sketch follows this list).
  • Distributed Energy Storage Optimization: Reinforcement learning deployed over fog infrastructure could coordinate control of distributed battery storage systems.
  • Grid-Scale Renewables Integration: Algorithms that manage the uncertainty of large-scale renewable generation and maintain grid stability through real-time control executed on fog nodes.
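The forecasting idea can be illustrated with a small sketch, assuming scikit-learn and synthetic data: a gradient-boosting model maps local weather features (irradiance, temperature, cloud cover) to PV output, the kind of lightweight model a fog node could retrain and score locally. The data-generating function and feature set are invented for the example.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic hourly records: [irradiance W/m^2, temperature degC, cloud cover 0-1].
weather = np.column_stack([
    rng.uniform(0, 1000, 2000),
    rng.uniform(-5, 35, 2000),
    rng.uniform(0, 1, 2000),
])
# Toy ground truth: output grows with irradiance and falls with cloud cover.
pv_output_kw = 0.004 * weather[:, 0] * (1 - 0.7 * weather[:, 2]) + rng.normal(0, 0.1, 2000)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(weather[:1600], pv_output_kw[:1600])      # train on the first 80%

# A fog node would score its latest local weather feed to forecast near-term output.
forecast = model.predict(weather[1600:])
mae = np.mean(np.abs(forecast - pv_output_kw[1600:]))
print(f"hold-out MAE: {mae:.3f} kW")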

7.4. Real-World Validation and Pilot Deployment Frameworks

Most of the systems reviewed are evaluated only in simulation. Future research should incorporate real-world data from utility providers, microgrids, or smart neighborhoods; full-scale hardware-in-the-loop testing and digital twins can provide testbeds for verifying responsiveness and generalizability beyond simulated conditions. Future work should therefore move from simulation-based validation toward systematic pilot deployment programs and full-scale testing:
  • Large Scale Simulation Environments
    Digital Twin Integration: End-to-end digital twins of smart grid infrastructure could validate fog-RL architectures under realistic conditions before physical deployment.
    Hardware-in-the-Loop Testing: Coupling real fog computing hardware with simulated grid environments to demonstrate performance under real-world computational and communication constraints (a minimal co-simulation loop is sketched after this list).
  • Pilot Deployment Programs
    Utility Partnership Programs: Electric utilities could host pilot fog-RL systems on controlled segments of the distribution network, validating them against actual consumer loads and grid conditions.
    Microgrid Testbeds: Physical microgrid testbeds could evaluate integrated fog-RL systems under controlled yet realistic operating conditions.
    Community-Scale Deployments: Pilot deployments in residential neighborhoods could confirm scalability, user acceptance, and long-term operational stability.
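As an illustration of what such a testbed loop might measure, here is a minimal sketch, assuming Python, of a co-simulation loop in which a stand-in digital twin emits grid state and a placeholder fog controller must respond within a 100 ms latency budget; the twin model, controller policy, and thresholds are hypothetical stand-ins for real simulators (or, in hardware-in-the-loop testing, for real fog hardware).

import random
import time

LATENCY_BUDGET_S = 0.100    # sub-100 ms responsiveness target discussed in this review

def twin_grid_state(step):
    """Stand-in digital twin: frequency deviation and net load for one feeder segment."""
    return {"step": step,
            "freq_dev_hz": random.uniform(-0.05, 0.05),
            "net_load_kw": 400.0 + 50.0 * random.random()}

def fog_controller(state):
    """Placeholder for a trained fog-hosted policy: shed load when frequency sags."""
    return {"curtail_kw": 20.0 if state["freq_dev_hz"] < -0.02 else 0.0}

violations = 0
for step in range(1000):
    state = twin_grid_state(step)
    start = time.perf_counter()
    action = fog_controller(state)      # in HIL testing this call runs on real fog hardware
    elapsed = time.perf_counter() - start
    violations += elapsed > LATENCY_BUDGET_S
print(f"latency-budget violations: {violations}/1000")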

7.5. Standardization and Interoperability Development

  • Communication Standardization
    Fog-Grid Communication Standards: Standardized communication interfaces for integrating fog computing capabilities with smart grid infrastructure.
    Standardized APIs for DQN Deployment: Standardized APIs that allow trained DQN models to be deployed and migrated seamlessly across heterogeneous fog node platforms (see the export sketch after this list).
  • Regulatory Framework Development
    Privacy Regulation Compliance: Technical frameworks that satisfy evolving data privacy regulations covering energy data while maintaining system performance.
    Grid Code Integration: Collaboration with regulatory bodies to incorporate fog computing and AI-driven decision-making into grid codes and operational standards.
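One plausible building block for such an API, sketched below under the assumption that PyTorch and its ONNX exporter are available, is exporting a trained DQN policy to the vendor-neutral ONNX format so heterogeneous fog runtimes can load the same model; the network shape and tensor names are illustrative, not a proposed standard.

import torch
import torch.nn as nn

class DQN(nn.Module):
    """Small fully connected Q-network used only to illustrate the export path."""
    def __init__(self, n_state=24, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

model = DQN().eval()
dummy_state = torch.randn(1, 24)

# Export to ONNX, a vendor-neutral format that common edge runtimes can load,
# so the same trained policy can migrate across heterogeneous fog node platforms.
torch.onnx.export(model, dummy_state, "dqn_policy.onnx",
                  input_names=["grid_state"], output_names=["q_values"],
                  dynamic_axes={"grid_state": {0: "batch"}})
print("exported dqn_policy.onnx")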

8. Conclusion

This review systematically evaluated the literature on integrating fog computing and deep reinforcement learning (DRL) into smart grid demand response systems. Drawing on a broad body of recent work, it identified the limitations of current isolated or centralized approaches in terms of latency, scalability, energy efficiency, and adaptability, and discussed how the convergence of DRL algorithms and fog-based architectures presents a transformative opportunity for next-generation energy management systems.
The reviewed literature highlights the potential of combining localized fog computing resources with advanced DRL algorithms (e.g., Double DQN, Dueling DQN) to create systems that are:
  • Real-Time Responsive: Able to execute critical grid stabilization functions at latencies below 100 ms.
  • Adaptively Intelligent: Able to select control strategies in response to changing grid conditions, consumer loads, and renewable generation flows.
  • Privacy-Preserving: Supporting collaborative optimization through techniques such as federated learning, differential privacy, and blockchain integration.
  • Scalable and Efficient: Designed to handle increasing device density, growing electric vehicle adoption, and distributed generation without loss of performance.
  • Technology-Integrative: Flexible enough to accommodate future advances such as smart-city technologies, microgrids, and improved forecasting tools.
Based on this review, the remaining research directions include real-world implementation through digital twins and pilot testbeds, transformer-based backbones for processing high-dimensional IoT data, and secure federated learning frameworks designed to operate in fog environments. In addition, incorporating electric vehicle infrastructure, renewable energy forecasting, and distributed storage optimization into fog-RL systems holds promise for improving grid resilience and efficiency.
The shift to distributed, intelligent, and adaptive energy management frameworks is more than a technological change; it is a paradigm shift that will be key to meeting sustainability goals, enhancing grid reliability, and serving the demands of future smart cities and the clean energy transition.
