Smart Hard Hats: Sensing for worker’s safety in dangerous work environments

Abstract: Accidents and mishaps in industrial environments like construction, mining, and transport are rampant, mainly due to human negligence and improper monitoring of the workplace. In this paper, we address the safety of workers operating in dangerous environments by improving their situational awareness. According to occupational health and safety rules, everyone must wear a hard hat while on site. Our main idea is to make hard hats smart by incorporating miniature Doppler radars that sense the user's surroundings. These Doppler radars are lightweight, rugged, and consume little power compared to vision-based solutions. This paper discusses the observability of range from Doppler frequency measurements and the magnitude of estimation errors introduced by human head, walking, and working motions. We present a framework to estimate the position of walls and targets surrounding the worker. For testing, we simulated an indoor environment with randomly moving workers. Experiments showed that once observability conditions are met, human head and walking movements can be handled through added noise in the system. We also present an innovative idea of using two Doppler radars to obtain the estimators' initial estimates, reducing the estimation error to less than 5 cm and the convergence time by more than 80%.


Introduction
Heavy industries such as construction, mining, and transport employ a considerable percentage of a country's workforce. These are dangerous work environments, where injuries and fatalities are common despite strict regulations. Accidents are primarily due to human negligence and improper monitoring of workplace environments. This is exacerbated by tighter deadlines, escalating project costs, and employees putting in extra hours. All these factors increase fatigue and stress levels, which increases the likelihood of injuries and accidents [1,2]. Although governments regulate site safety inspections, it is simply too expensive and impractical to ensure continuous monitoring.
Australian federal and state governments have been collecting and managing work-related injuries and fatalities statistics for the past decades [3]. About 1.14 million serious claims were lodged just in 2018-2019, with a median compensation cost of $11,700 per claim [4]. Due to their injuries, about 7% of all injured workers changed their jobs. On the other hand, the industries with the highest work-related injuries are construction, manufacturing, transport, and agriculture-related. Of all the injured workforce, 24% were laborers, 18% were technicians and trade workers, and 14% were machinery operators and drivers. An important aspect of these injuries is that about 25% of workers were injured after 'being hit by a moving object or vehicle' and about 23% after 'falling' from a non-monitored area.
In the years since the Australian government started to maintain these injury statistics, about 4000 workers have lost their lives in work-related activities. More alarmingly, about two-thirds of these fatalities involved vehicles. Another important insight is that 76% of bystander fatalities (worker's fatality as a result of someone else's work-related activity) was due to a collision involving a vehicle or a moving object like crane-load.
These statistics point out that a major reason for accidents is the collision of inattentive laborers with moving objects and vehicles. Manual implementation of OHS rules and regulations is not enough. Technology must be integrated into a worker's daily life, specifically targeting their safety. According to Australian Occupational Health and Safety (OHS) regulations [5][6][7], all workers must wear specialized personal protective equipment (PPE) on work sites. PPE include safety clothing, hard hat, goggles, earmuffs, and other protective gear that workers wear in dangerous work environments. These are the lowest order control steps in the hierarchy of safety measures. However, PPE relies only on the user's awareness and does nothing to minimize the danger itself.
In this research, we tackle how a worker's safety can be ensured by incorporating smart sensors in protective gear. PPE is already quite heavy, so any added sensor must have negligible weight and minimal power consumption. The sensor must also be robust to rough and rugged working conditions. We present the idea of a smart hard hat that incorporates tiny Doppler radars on a worker's hard hat. The question is: can we localize targets in the surroundings while the worker wearing the smart hard hat goes about his job, wobbling his head and walking randomly? In this research, we discuss how well the surrounding targets can be localized while handling all this randomness, hence enhancing the worker's situational awareness.
After localizing the targets, providing feedback to the user is also a significant research and design problem. One possible way could be to play a beeping sound in the left ear if something is approaching at high speed from the left. In the case of rescue operations in low visibility, a rescue worker could be handed a smartphone that shows the sensed environment. However, feedback and related work are left for future work; this paper deals with the more basic issues of target tracking.
In Section 2, we briefly review the work and research done to improve workers' safety in dangerous work environments. In Section 3, we discuss our idea of developing smart hard hats and the associated challenges. Next, in Section 4, we develop a simulation environment of a radar inside a small room, along with the associated mathematical framework and models to estimate the actual position of targets surrounding the radar. In Section 5, we develop a simulation scenario of a worker inside a room wearing a smart hard hat with a radar on it. This section contains detailed experiments in which different walking and head movements are incorporated into the simulations. Section 6 discusses how the amplitude of the received signals can be used to enhance the estimation performance further. Finally, results are discussed in Section 7.

Related Work
A critical task for numerous building services and navigational applications is to determine an entity's relative position in an indoor environment. A plethora of literature covers different aspects of this problem [8][9][10][11][12][13]. The solutions and systems being developed offer a wide range of accuracy. In some applications, like controlling the lights in a room, it is enough to get a rough estimate of a user's location. However, for safety applications such as collision avoidance, precise location with sub-centimeter accuracy is a must.
Radio Frequency Identification (RFID) technology uses electromagnetic waves to detect, identify, and track objects. Active RFID tags provide a detection range of tens of meters, whereas passive tags can only be detected within 1-2 meters of a reader. One popular and prominent indoor tracking approach is to attach a tag to each entity to be tracked. The authors in [14] developed an inexpensive solution based on this idea using off-the-shelf components. By analyzing the strength of radio signals received, they were able to localize objects in 3 dimensions. The authors claim to have achieved an accuracy of around 3 m. Besides low accuracy, the system also takes about 10-20 seconds to get one measurement. A similar solution was offered as LANDMARC [15], which also uses active RFID tags. However, to reduce the number of readers, the authors used the idea of reference tags at known locations and used them for system calibration. The system is reported to have a maximum range error of 2m.
To reduce costs, scientists have also proposed localizing systems based on wireless local area networks (WLANs) [10]. These use Received Signal Strength (RSS) at different receivers to determine the target's location. The RADAR localization system [16], proposed by Bahl et al., uses a variant of the k-nearest-neighbors algorithm with empirical measurements of signal strength at access points to determine a user's position. Its accuracy is reported to be between 2-3 m. The authors of [16] also proposed another variant to improve accuracy using a Viterbi-algorithm-like approach. The Horus system [17] localizes objects using probabilistic modeling of signal propagation; different experiments showed that an accuracy of about 2 m is possible in more than 90% of cases. In [18], the authors developed a grid-based approach with Bayesian filtering to localize objects within 1.5 m. The commercially available Ekahau system also uses WLAN to track electronic devices such as tags and laptops, working by correlating received signal strength with space information [19]. However, this system has its drawbacks, like requiring many access points to cover blind spots in buildings, hence increasing overall costs. [20] presents a Through-Wall Imaging (TWI) technique, which uses UWB pulses to detect static and dynamic objects through the walls of a building; it has gained much interest among law enforcement agencies. Using multiple frequencies, it is possible to obtain very high resolution images of the objects [20,21]. In [22], Y.K. Cho et al. tracked indoor mobile assets in construction using a UWB wireless network system and also demonstrated that statistical modeling of errors can significantly improve tracking. On the other hand, 1.5 cm accuracy was reported by [23] for 2D localization, but within a very small area of about 2x2 m. Similarly, J. Teizer et al. [24] tried to estimate the 3D location of building resources in complex construction environments. In [25], T. Cheng et al. continued their research on evaluating the capabilities of commercially available Radio Frequency ID based systems in measuring construction site dynamics, reporting 2 m accuracy of a UWB system in experiments on real job sites.
The feasibility of a WiFi-based tracking and positioning system for construction sites is studied in [26]. Using Received Signal Strength Information, the authors tracked the approximate location of labor with an error within 5 m.
[27] developed a prototype using Ultra Wide Band technology to determine if a worker is in the proximity of a predefined potentially dangerous area. Authors in [28] reviewed computer vision techniques for improving workers' safety and experimented with different cascade classifiers to automatically detect hard-hats. Similarly, in [29], researchers developed image processing techniques to determine if workers are wearing hats or not.
Regarding safety at work, many researchers have approached the problem in two steps. First, a detailed analysis of previous accidents and injuries is performed, where all accident precursors and near-misses are identified and documented. In the second step, a framework is proposed that tries to eliminate the precursors and related behaviors. S. Chae et al. [1] performed a detailed analysis of various construction site related injuries and accidents. In [2] H. Yang et al. adopted a similar two-step approach and identified the accident precursors. In order to minimize these accident precursors, they further designed an integrated ZigBee RFID based sensor network structure.
J. Teizer et al. [30] argued that research on safety in construction is mostly reactive, i.e., based on data reported after a fatal accident. They emphasize the need for pro-active systems that work on near-miss and close-call events. To achieve this, they developed an RFID Active Tags based solution: they attached an Equipment Protection Unit (EPU) to all heavy machines and handed a Personal Protection Unit (PPU) to workers on foot. Whenever a PPU was in proximity of an EPU, a warning message was issued.
Safety is also correlated with ease of access to workers' training and job data. It is also related to quick access to information about machines such as their inspections, maintenance, and repairs [31][32][33]. RFID based access management solutions were presented by [1,2,34]. They managed an active database of workers' data. The system would grant access to machines and restricted areas to authorized personnel only.
Many site supervisors monitor live video streams from construction sites to evaluate the quality and safety of work. Simultaneously monitoring tens of cameras, if not impossible, would be an extremely tedious task. The authors in [36] developed a method using background subtraction, histogram of oriented gradients (HOG), and HSV color histograms to automate the classification of workers from non-workers. According to [37], about 80-90% of accidents are due to workers' unsafe behaviour, so the authors proposed a vision-based framework to detect such behaviors; in the first trials, they used it to determine unsafe ladder climbing postures. A similar approach was also researched by [38,39]. In [40], the authors integrated real-time data obtained from a crane into a 3D model located off site, enabling off-site supervision for safe operations. Others have also experimented with various vision-based algorithms, already tried in different fields, to evaluate their efficacy in construction-related tasks [2,[41][42][43]. There has also been some focus on developing algorithms that work on live video streams from construction sites for pose extraction, blob tracking, and classification. A major purpose of this type of research is to monitor the productivity of workers [44].
In [45,46], the authors reported utilizing information in BIM to enhance safety-related scheduling and planning activities on construction sites. The target is to improve occupational safety by embedding safety solutions in BIM for safer construction planning, scheduling, and task management.
In [47], researchers from the MIT Media Lab developed a system that infers safety conditions at construction sites. They developed wearable sensors that measure levels of dangerous gases, noise, light quality, altitude, and motion. Similar research has also been carried out for the safety of mine workers in [48,49]. Lukowicz et al. used microphones and accelerometers mounted on the user's body to classify tasks like sawing, hammering, and turning screws [50,51].
In [48,[52][53][54], the authors have tried to improve safety conditions by integrating miniature positioning devices and communication instruments in compulsory safety equipment worn by all the workers on site; for example, hard hats, jackets, belts, etc. Using the 3-axis accelerometer and gyroscope sensors of a smart phone [55], R. Dzeng et al. developed and compared three algorithms to evaluate how well a smart phone can be used to determine falls and fall portents.
The novel idea of a smart hard hat with integrated Doppler radars for workers' safety has not been previously explored. The closest approaches are those proposed by [52], [29], [28], where the authors performed preliminary studies using image processing and computer vision techniques. However, their solutions require the installation of high-resolution cameras on construction sites and are sensitive to lighting and image quality. In the next section, we describe our idea of constructing a smart hard hat to monitor the surrounding environment. Once the surrounding targets are localized, further safety decisions can be made, such as pinging the worker if a target is very close.

Smart Hard Hats
Despite being in use for decades, PPE is not designed smartly. For example, consider the scenario of a construction site where a dumper is reversing with workers nearby, as shown in Fig. [1]. Heavy machinery like a dumper or excavator emits a high-pitched beeping tone when moving around; however, OHS regulations require workers to wear earmuffs when working in a noisy environment. A contradiction like this leads to numerous fatalities each year.
A hard hat, being lightweight and sturdy, is a crucial uniform element for workers in unsafe fields. This paper's main idea is to make these hard hats smart enough to sense the surroundings, localize targets, and then alarm the user to impede impending accidents. Any sensor that goes on the hard hat must be small, lightweight, and robust to rugged environments like construction or mining. Recent advances in microelectronics and signal processing have provided us with very small, low-cost, and high-performance Doppler radars [56][57][58]. Our idea is to mount CW radars on hard hats to sense the surroundings. These CW radars won't add any significant weight to the hats; moreover, being cheap, they won't add significant cost in mass production. A worker wearing the hard hat can move around randomly, wobbling his head and doing his typical day-to-day activities. The question is whether, under such random movements, it is still possible to localize targets. Sensing the environment by attaching a CW Doppler radar to a worker's hard hat raises some very intriguing issues. For example, one design question is how to put the radar antenna on the hard hat. Instead of putting one rotating antenna on top of the hat, we propose to use a phased-array antenna with elements around the hat's brim and beam steering, as shown in Fig. 2. We can then trigger each antenna element sequentially, one after the other. This way, we can get 360° measurements quickly without any moving parts, and we can easily control the rate of rotation.
Human head movements act as a potential source of noise during phase comparison of the returned waveform. However, we have assumed that such movements do not affect our phase measurements, because during regular movements the human head typically moves about 2-3 cm in any direction. If we consider a typical indoor distance of, say, 10 m, an electromagnetic radar wave takes about 66.67 ns to return to the radar after reflecting from the wall. Even if the radar spends, say, 150 ns at one point measuring the phase and then 150 ns at the next point, in those 300 ns the human head could only have moved a negligible distance. Hence we can ignore such movements at this stage. In our previous work on collision avoidance for UAVs using Doppler radars [59], we showed that an observer moving in a circular pattern and obtaining Doppler frequency measurements of surrounding targets meets the observability conditions. Under these circumstances, the observer can estimate the true range of the targets. However, asking workers to move in a circular pattern to localize their surroundings is not practical. In the next section, we propose obtaining instantaneous frequency measurements from the changing phase of the returned waveform in a CW radar without requiring the user to move.
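The timing argument above can be checked with simple arithmetic. The sketch below (Python; the 24 GHz carrier and the 1 m/s head-speed bound are illustrative assumptions, not values from the paper) confirms that head displacement during a 300 ns measurement window is orders of magnitude below a wavelength:

```python
# Back-of-envelope check that head motion during a phase measurement
# is negligible. The 24 GHz carrier and 1 m/s head speed are assumed
# illustrative values, not taken from the paper.
C = 3.0e8                 # speed of light, m/s
distance = 10.0           # one-way distance to the wall, m

tof = 2 * distance / C    # two-way time of flight, ~66.67 ns
dwell = 2 * 150e-9        # two dwell periods of 150 ns each

head_speed = 1.0          # m/s, a generous bound on head motion
moved = head_speed * dwell            # displacement during measurement

wavelength = C / 24e9     # ~1.25 cm at the assumed 24 GHz carrier

print(f"time of flight: {tof * 1e9:.2f} ns")
print(f"head moves {moved * 1e9:.0f} nm in {dwell * 1e9:.0f} ns")
print(f"fraction of a wavelength: {moved / wavelength:.1e}")
```

Even with a pessimistic head speed, the displacement (hundreds of nanometers) is a tiny fraction of the carrier wavelength, supporting the assumption that head motion can be ignored during a single phase measurement.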

Radar Phase Rate
Consider a continuous wave radar that is transmitting and receiving back echoes from a wall, as shown in Fig. 3. If R is the distance from the radar to a point A on the wall, then the total number of wavelengths λ contained in the two-way path is given as

N = 2R / λ.

One wavelength corresponds to an angular distance of 2π radians. As the electromagnetic wave moves during its transit to and from the wall, it covers a total of 2πN radians. Therefore,

φ = 2πN = 4πR / λ.

If the target is moving, then its distance R from the radar and the phase φ are continuously changing. A change in phase with respect to time is called angular frequency; in the case of a moving target, it is the well-known Doppler angular frequency. In our case, however, the wall is not moving. Instead, if the radar rotates and now focuses on point B, the total distance between radar and target changes again, as shown in Fig. 3. This in turn leads to a change in phase. As before, when considered with respect to time, it gives us the instantaneous angular frequency

ω_φ = dφ/dt = (4π/λ)(dR/dt) = (4π/λ) V_φ,    f_φ = ω_φ / 2π = 2 V_φ / λ,

where V_φ represents the range rate and f_φ represents the frequency obtained from the rate of change of phase. Fig. 4a shows a simulation scenario of a CW radar inside a 5x5 meter room. The arrowhead depicts the direction in which the radar is currently pointing. As the radar beam sweeps around, the phase of the returned wave keeps changing, generating a frequency f_φ and phase velocity V_φ. Although similar in expression to the Doppler frequency-velocity relation, here we obtained a frequency-velocity relation that depends on the radar's rotation rate. Fig. 4b shows the range profile obtained with respect to the radar as its antenna rotates.
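As a sanity check on the relation f_φ = 2V_φ/λ, the following sketch simulates a beam rotating inside a square room (an idealized geometry assumed for illustration; the 24 GHz carrier is likewise an assumption) and compares the frequency obtained from the phase rate against the one predicted from the range rate:

```python
import numpy as np

# Range from a radar at the room's center to the wall of a 5x5 m room,
# as a function of beam angle theta (illustrative geometry, not the
# paper's exact simulation).
def wall_range(theta, half=2.5):
    c, s = np.cos(theta), np.sin(theta)
    # distance to the nearest axis-aligned wall along direction theta
    return half / max(abs(c), abs(s))

wavelength = 0.0125            # assumed 24 GHz carrier
omega_rot = 2 * np.pi          # beam rotation: one revolution per second
dt = 1e-4
thetas = np.arange(0.0, 2 * np.pi, omega_rot * dt)

ranges = np.array([wall_range(t) for t in thetas])
phases = 4 * np.pi * ranges / wavelength     # phi = 4*pi*R/lambda

# instantaneous frequency from the phase rate ...
f_phi = np.diff(phases) / (2 * np.pi * dt)
# ... and the same quantity predicted by f_phi = 2*V_phi/lambda
v_phi = np.diff(ranges) / dt
f_pred = 2 * v_phi / wavelength

print(np.max(np.abs(f_phi - f_pred)))   # agree up to rounding error
```

The two frequency profiles coincide, and the resulting range profile shows the corner-induced peaks visible in Fig. 4b.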
Note that we now have both frequency and bearing measurements of the target. In this case, target's bearing at a given instant is the angle at which radar's beam is pointing. Fig. 4c shows the corresponding bearing profile that we obtain from our simulation of the rotating radar.
In Bayesian estimation theory, a given dynamic system is observable if its state can be uniquely determined from its model, inputs, and outputs. Given the Nth-order dynamics of our problem, we adopt the observability criterion discussed in [60], where no restriction is imposed on the observer's and target's motion. According to this criterion, with both angle and frequency measurements, the target state is observable if and only if the LOS angle between observer and target does not remain constant. In our previous work [59], we showed that a source moving in a circular pattern can accurately locate targets in its surroundings because it meets these observability conditions. In the case of human users, one cannot ask them to move in a circular pattern. However, as discussed above, we can steer the beam and rotate it through 360° to obtain the bearing and frequency measurements that meet the observability conditions, providing a unique solution to our estimation problem. In the next section, we show how to use this bearing and frequency information to estimate the target's true range.

Indoor localization with CW Doppler Radar
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 2 November 2020 doi:10.20944/preprints202011.0048.v1
Without loss of generality, assume a point-like observer 'O' and target 'T'. The observer is located inside a room of size 5 × 5 m, with a door leading to a hallway on the left-hand side. The radar is assumed to be fixed on top of a worker's hard hat. The worker moves around randomly, while the radar on top of the observer takes 360° measurements (bearing and Doppler frequency) of its surroundings. This means that we now also have to consider the following movements:
• the user's head movements and wobbliness
• the user's walking or running movements
Everything surrounding the observer is a potential target. We cast the problem of estimating a target's state as a probabilistic state estimation problem for a nonlinear system with additive noise [61][62][63]. The dynamic state-space model of the problem is then described by the following set of difference equations:

Process Equation:

X[k] = f(X[k-1], U[k]) + v[k]

Measurement Equation:

Z[k] = h(X[k], U[k]) + w[k]

where X[k] ∈ R^{n_x} denotes the state of the dynamic system at time instant k, f : R^{n_x} × R^{n_u} → R^{n_x} and h : R^{n_x} × R^{n_u} → R^{n_z} are known nonlinear functions, U[k] is the known control input, Z[k] is the measurement, v[k] is the process noise, and w[k] is zero-mean white Gaussian observation noise with variance σ_w². h(.) is a nonlinear function of the state; for bearing and phase-velocity measurements it is given as

h(X[k]) = [ arctan(y/x),  (x ẋ + y ẏ) / √(x² + y²) ]ᵀ,

where (x, y) and (ẋ, ẏ) are the target's relative position and velocity. The optimal way to recursively update the posterior density p(X[k] | Z_{1:k}) as new observations arrive is given by recursive Bayesian estimation. In this domain, the estimation problem is designed to recursively build confidence in the system state at time k using all measurements up to time k, Z_{1:k}. Here Z_{1:k} denotes the measurement history from time 1 up to k, Z_{1:k} = {Z[1], Z[2], . . . , Z[k]}. In this paper, we have used the UKF [64,65] to estimate the system's state and obtain the results.
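For concreteness, a minimal sketch of the measurement function h(.) is shown below, assuming a per-axis state ordering of position, velocity, acceleration (this ordering and the numbers are illustrative, not the paper's exact implementation):

```python
import numpy as np

# Measurement function h(.) for the bearing / phase-velocity model:
# the state is assumed to be [x, vx, ax, y, vy, ay] relative to the
# radar (constant-acceleration ordering, chosen for illustration).
def h(state):
    x, vx, _, y, vy, _ = state
    r = np.hypot(x, y)
    bearing = np.arctan2(y, x)
    v_phi = (x * vx + y * vy) / r    # range rate seen as phase velocity
    return np.array([bearing, v_phi])

# target 5 m away at ~53.1 deg, closing along the x axis at 1 m/s
z = h(np.array([3.0, 1.0, 0.0, 4.0, 0.0, 0.0]))
print(z)    # bearing ~0.927 rad, phase velocity 0.6 m/s
```

A function like this would be passed to the UKF as its measurement model, with the bearing and phase-velocity noise standard deviations forming the measurement covariance.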

Experiments: Worker wearing smart hard hat
The average walking speed of humans is about 3 mph (5 km/h). However, a worker can randomly go from standing still at one second to running the next. Therefore, the idea in this simulation is to allow the observer to move randomly at average human speed with a standard deviation of about 5 km/h.
The Wiener-sequence acceleration model assumes that the target moves with some acceleration, where increments in this acceleration are modeled as an independent white noise process. The state-space model is

X[k] = F_a X[k-1] + G_a v[k],

where F_a is the state transition matrix, defined as

F_a = [ F_1  0_{3×3} ;  0_{3×3}  F_1 ],    F_1 = [ 1  T  T²/2 ;  0  1  T ;  0  0  1 ],

where 0_{n×n} is an n × n all-zeros matrix and T is the sampling interval. The gain matrix G_a is

G_a = [ g_1  0_{3×1} ;  0_{3×1}  g_1 ],    g_1 = [ T²/2  T  1 ]ᵀ,

and v[k] is zero-mean white Gaussian process noise with variance σ_v². The covariance of the process noise multiplied by the gain G is given as

Q = G_a G_aᵀ σ_v².

The system is initialized with range along the X and Y coordinates of 10 m with a Std. Dev. of 5 m, velocity of 2 m/s with Std. Dev. 2 m/s, and acceleration of 1 m/s² with standard deviation (Std. Dev.) 2 m/s². The process noise Std. Dev. is 5 × 10⁻². The bearing and Doppler measurement noise Std. Devs. are 0.5° and 0.01 m/s, respectively. Fig. 5a shows one of the possible worker trajectories. In the figure, the blue line depicts the worker's trajectory, and the surrounding walls are depicted in green. As the worker moves around, the CW radar rotates through 360°, obtaining bearing and phase measurements simultaneously. Fig. 5b shows the actual and estimated range of the surrounding walls from the observer at each instant as the radar turns. The range estimation error is quite large at the start, but as the radar receives more measurements, the error starts decreasing, reaching 20-30 cm after 2-3 seconds. Fig. 5c shows the Root Mean Square Error (RMSE) after 100 MC runs. Notice that the error spikes around the corners and especially around the door. This is because the door appears as a highly maneuvering target to the tracker; but since the observability conditions are met, the filter converges back to the original range quickly.
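The transition, gain, and covariance matrices of this model can be sketched as follows (a minimal Python construction, assuming the standard discrete Wiener-sequence acceleration form and an illustrative [x, vx, ax, y, vy, ay] state ordering):

```python
import numpy as np

# Discrete Wiener-sequence (constant-acceleration) model, one 3x3 block
# per axis, block-diagonal over x and y; T is the sampling interval.
def ca_matrices(T, sigma_v):
    F1 = np.array([[1.0, T, T**2 / 2],
                   [0.0, 1.0, T],
                   [0.0, 0.0, 1.0]])
    g1 = np.array([[T**2 / 2], [T], [1.0]])
    Z = np.zeros((3, 3))
    F = np.block([[F1, Z], [Z, F1]])
    G = np.block([[g1, np.zeros((3, 1))],
                  [np.zeros((3, 1)), g1]])
    Q = G @ G.T * sigma_v**2          # process-noise covariance
    return F, G, Q

F, G, Q = ca_matrices(T=0.1, sigma_v=5e-2)

# sanity check: state [x, vx, ax, y, vy, ay] propagates kinematically
x0 = np.array([0.0, 2.0, 1.0, 0.0, 0.0, 0.0])
x1 = F @ x0
print(x1[:3])   # x = 2*0.1 + 0.5*1*0.1^2 = 0.205, vx = 2.1, ax = 1.0
```

These F and Q matrices, together with a bearing/phase-velocity measurement function, are what the UKF-based tracker iterates on.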
Looking at the range profile of the walls with respect to the radar, the surrounding walls present themselves as highly maneuvering targets to a dynamic observer whose head is also randomly moving.
This adds considerably higher-order derivatives to the relative dynamics, which cannot be ignored. In the previous design, we only considered a dynamics model up to two derivatives, i.e., velocity and acceleration. Although we tried to handle the higher-order effects as noise in the state-space model, it appears that errors can only be reduced so much. In this section, we design another tracker that incorporates one more derivative of motion, i.e., jerk. The model so obtained is referred to as the Constant Jerk model in the literature [66][67][68]. The state transition matrix for this model is given as

F_j = [ F_1  0_{4×4} ;  0_{4×4}  F_1 ],    F_1 = [ 1  T  T²/2  T³/6 ;  0  1  T  T²/2 ;  0  0  1  T ;  0  0  0  1 ],

where 0_{4×4} is a 4 × 4 matrix of all zeros. The covariance matrix equivalently becomes

Q = G_j G_jᵀ σ_v²,    G_j = [ g_1  0_{4×1} ;  0_{4×1}  g_1 ],    g_1 = [ T³/6  T²/2  T  1 ]ᵀ.

The rest of the design and conditions remain the same as in the previous tracker's case. Fig. 6a shows another example of a worker randomly moving around with a smart hard hat, which is assumed to receive bearing and frequency measurements as explained before. Fig. 6 shows the true and estimated range results. The reduction in estimation error can be seen in the RMSE plot in Fig. 6b. Not only does the filter converge to within 5-10 cm of the actual range in less than a second, it also re-converges faster after highly nonlinear maneuvers near doorways.
Although random human motions and movements add vulnerabilities to our trackers, such random movements can be handled easily through added noise in the system. The Constant-Jerk filter outperforms the Constant-Acceleration filter; however, the former also involves more computation and complexity. It then depends on the final system and its application whether to prefer accuracy or computational simplicity. The Constant-Jerk filter also converged within a second, compared to the Constant-Acceleration filter, which took approximately 3 seconds on average.

Performance Improvement using Amplitude of Received Signal
As Kalman-filter-based algorithms are recursive, they need a starting point to initiate the recurrence. If the true initial state parameters are not known and one makes the initial variance too large, the filter takes a long time to converge. In the case of linear systems with Gaussian noise, initial conditions do not really affect the stability of the filter, other than delaying convergence. However, the convergence and stability of nonlinear filters are quite sensitive to proper initialization. If the covariance matrix is initialized with large values, nonlinear filters easily diverge or converge to a wrong estimate.
In our system, we are already using one radar to obtain radial velocity information. Now assume another CW radar placed side by side with our first radar, displaced by a small distance ∆r. The power of the signal received by the radar is inversely proportional to the fourth power of the target's range, as follows [69]:

P_{R1} = P_T G² λ² σ / ((4π)³ R_1⁴)      (16)

where
• P_{R1} is the power received at the radar,
• P_T is the power transmitted by the radar,
• G is the antenna gain,
• λ is the radar operating wavelength,
• σ is the radar cross section of the target,
• R_1 is the range of the target from the radar.
If we assume that the two Doppler radars are identical, then the power received at this second Doppler radar from the same target at the same instant is

P_{R2} = P_T G² λ² σ / ((4π)³ R_2⁴)      (17)

where
• P_{R2} is the power received at radar 2,
• R_2 = R_1 + ∆r is the range of the target from radar 2.
From equations 16 and 17, we can write

P_{R1} / P_{R2} = (R_2 / R_1)⁴ = (1 + ∆r/R_1)⁴.

Using the binomial theorem to expand this expression and ignoring the higher-order terms, we obtain R_1 as

R_1 ≈ 4∆r / (P_{R1}/P_{R2} − 1).

This rough estimate, when used to initialize the filters, reduces the convergence time significantly. Figures 7 & 8 show the tracking results after incorporating the above method of initialization. Looking at the range RMSE plots in Figures 7a & 7b, we can easily see the effect of proper initialization in the case of the Constant-Acceleration-model-based trackers. Previously, the tracker was taking 3 seconds on average to converge to within 10 cm of the true range; now it takes only a fraction of the first second. The same holds for the Constant-Jerk-model-based trackers, as shown in Fig. 8.
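The initialization formula derived above is easy to sanity-check numerically. In the sketch below, the two received powers are simulated directly from the 1/R⁴ radar-equation dependence (all numbers are illustrative):

```python
# Initial range estimate from the power ratio of two identical CW
# radars separated by dr, using the first-order binomial expansion:
# P1/P2 = (1 + dr/R1)^4 ~ 1 + 4*dr/R1  =>  R1 ~ 4*dr / (P1/P2 - 1).
def initial_range(p1, p2, dr):
    return 4 * dr / (p1 / p2 - 1)

# simulate the received powers with the radar equation, P proportional
# to 1/R^4 (common factors cancel in the ratio)
R1_true, dr = 10.0, 0.1
p1 = 1.0 / R1_true**4
p2 = 1.0 / (R1_true + dr)**4

r1_est = initial_range(p1, p2, dr)
print(r1_est)   # ~9.85 m for a true range of 10 m
```

The first-order estimate is biased slightly low by the neglected higher-order terms, but an error of a few percent is more than adequate for seeding the filter's initial state.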

Results
This research addresses how to improve workers' safety by increasing their situational awareness in unsafe environments such as construction and mining. The Personal Protective Equipment (PPE) worn by workers already has significant weight. Thus any proposed solution to the safety problem must add only negligible weight to the PPE, consume low power, and be robust under harsh conditions (high temperature, humidity, darkness, etc.). We addressed these issues by proposing a smart hard hat for workers using cheap continuous wave (CW) Doppler radars. With advancements in technology, CW radars are available for a couple of dollars off the shelf. In our research, we discussed the observability conditions under which the highly nonlinear targets are observable. We also showed how one can handle random head and walking movements so that they do not compromise the results; the ranges and distances under consideration are relatively small compared to the EM waves' velocity. Kalman-filter-based algorithms are recursive and need a good starting point to initialize the system. Since our system is highly nonlinear, the initialization strategy significantly affects the estimation results. We introduced the idea of placing two CW Doppler radars side by side and showed how we could leverage the received signal strength to estimate the initial target range. Results further showed that this methodology decreased the filter convergence time by more than 80%. Table 1 shows the average RMSE values of the different trackers proposed in this article after 1000 MC runs. Using a higher-order derivative (jerk) and a proper initialization strategy, the estimation error reduced to approximately 5 cm within the first second and one scan of the environment. Doppler radars, being inexpensive, lightweight, and robust, are well suited for sensing indoor environments where sensor weight is a critical design factor.
It must be noted that another important application of this technology is in robots and UAVs. The ability to self-navigate and avoid collisions in complex and dynamic environments is essential for these machines, and any collision avoidance system must add only minimal weight. Current technologies rely mostly on vision sensors, which consume more power and are heavy and delicate. In similar applications, a key element in the success of any rescue or crisis intervention is the availability of an accurate and updated map of the area. Such situational awareness can be used to map previously unknown areas or update the a priori map of a place destroyed by natural calamity. For rescue workers who lose their orientation due to fire or smoke, such real-time maps could guide them to a safer location. Such a system's successful utilization lies in its miniaturization and smooth integration with equipment already in use. The feedback from such a system must also be subtle and user friendly to avoid burdening the user.