Detecting objects with LiDAR in fog has long been treated as a noise-removal problem: identify the fog returns and throw them away. This work takes the opposite view: the same fog returns that other methods discard are read as a direct measurement of how far the sensor can still see. After an off-the-shelf fog-segmentation step, the near-range fog points are used to estimate the atmosphere's optical thickness, which is then converted into a per-frame Maximum Detection Range (MDR), a single human-readable number, in meters, that tells planning and safety modules how far the LiDAR is currently usable. The estimator has no learnable parameters (the physics core is a per-frame Beer–Lambert slope fit followed by the Koschmieder mapping) and runs in real time during driving. We validate it with real-vehicle experiments in controlled heavy-fog scenarios. Across six fog runs, the estimated MDR recovers the true first-detection distance with a mean absolute error of 1.99 m and R² = 0.741, a 58% error reduction over a no-feature baseline. To rule out the possibility that the result is an artefact of our own recordings, the same estimate is cross-checked against two independent references used by the wider community: the STF transmissometer dataset and the Hahner et al. automotive fog simulator. Our estimate agrees with both, which is the key differentiator from prior LiDAR-fog work that reports performance only on its own data. The result is a real-time, parameter-free, externally corroborated visibility readout that fits inside an existing fog-denoising pipeline and gives downstream autonomy a quantitative answer to the question "how far can I see right now?"
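
A minimal sketch of the physics core, under stated assumptions: the function name `estimate_mdr` and the 5% Koschmieder contrast threshold are illustrative choices, not taken from the paper. The sketch assumes the two-way Beer–Lambert model, in which the range-compensated log intensity of fog returns falls linearly in range with slope -2·alpha; a least-squares fit recovers the extinction coefficient alpha, and the Koschmieder relation V = ln(1/C_T)/alpha maps it to a detection range in meters.

```python
import numpy as np

def estimate_mdr(ranges, intensities, contrast_threshold=0.05):
    """Hypothetical sketch of a per-frame MDR estimate from fog returns.

    Assumes two-way Beer-Lambert attenuation: I(r) is proportional to
    exp(-2 * alpha * r) / r^2, so log(I * r^2) is linear in r with
    slope -2 * alpha. The Koschmieder relation V = ln(1/C_T) / alpha
    then converts the fitted extinction coefficient to a range in meters.
    The 5% contrast threshold C_T is an illustrative default, not a
    value taken from the paper.
    """
    # Range-compensate and log-transform the fog-return intensities.
    y = np.log(intensities * ranges**2)
    # Least-squares line fit; the slope encodes -2 * alpha.
    slope, _ = np.polyfit(ranges, y, deg=1)
    alpha = max(-slope / 2.0, 1e-6)  # guard against non-physical fits
    # Koschmieder mapping: range at which contrast falls to C_T.
    return np.log(1.0 / contrast_threshold) / alpha

# Usage on a toy frame of near-range fog returns (values are made up):
r = np.array([5.0, 8.0, 12.0, 16.0, 20.0])     # ranges in meters
i = np.array([0.90, 0.50, 0.22, 0.09, 0.04])   # raw return intensities
print(f"MDR estimate: {estimate_mdr(r, i):.1f} m")
```

Because the fit is a single linear regression per frame, the estimator carries no learned state and its cost is negligible next to the fog-segmentation step, consistent with the real-time claim above.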