AtmosphericFog

Targets:
image
volume
Image Types: uint8, float32

Apply depth-aware atmospheric fog using the standard scattering model.

Unlike RandomFog, which overlays circular fog patches, this transform uses the physically based atmospheric scattering equation with a synthetic depth map, producing more realistic distance-dependent fog.

Formula: result = image * exp(-density * depth) + fog_color * (1 - exp(-density * depth))
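The formula above can be sketched directly in NumPy. This is an illustrative reimplementation of the scattering equation, not the library's internals; the helper name `apply_fog` and the toy inputs are assumptions for the sketch. The term exp(-density * depth) is the transmittance: it is 1 where depth is 0 (no fog) and decays toward 0 at large depth, where the fog color dominates.

```python
import numpy as np

def apply_fog(image, depth, density, fog_color=(200, 200, 200)):
    """Illustrative sketch of the scattering formula (hypothetical helper).

    image: (H, W, 3) float array, depth: (H, W) array in [0, 1],
    density: scalar fog density.
    """
    # Transmittance: fraction of the original pixel that survives the fog.
    transmittance = np.exp(-density * depth)[..., np.newaxis]  # (H, W, 1)
    fog = np.asarray(fog_color, dtype=np.float64)              # (3,)
    # Blend the image with the fog color, weighted by transmittance.
    return image * transmittance + fog * (1.0 - transmittance)

image = np.full((4, 4, 3), 100.0)                            # uniform gray image
depth = np.linspace(0.0, 1.0, 4)[:, None] * np.ones((4, 4))  # near at top row here
result = apply_fog(image, depth, density=2.0)
```

At depth 0 the pixel is unchanged (transmittance 1); at depth 1 with density 2 the pixel is pulled most of the way toward the fog color.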

Arguments
density_range
tuple[float, float]
[1,3]

Range for fog density. Higher values = thicker fog. Default: (1.0, 3.0).

fog_color
tuple[int, ...]
[200,200,200]

RGB color of the fog. Default: (200, 200, 200).

depth_mode
linear | diagonal | radial
linear

How to generate the synthetic depth map:

  • "linear": top of image is far, bottom is near
  • "diagonal": top-left corner is far
  • "radial": center is near, edges are far

Default: "linear".
p
float
0.5

Probability of applying the transform. Default: 0.5.
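The three depth modes above can be sketched as simple coordinate ramps. This is a hedged reconstruction from the descriptions only (0 = near, 1 = far); the library may scale, orient, or normalize its depth maps differently, and `make_depth` is a hypothetical helper name.

```python
import numpy as np

def make_depth(h, w, mode="linear"):
    """Sketch of a synthetic depth map per the documented modes (assumed convention: 0 = near, 1 = far)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    if mode == "linear":
        # Top of image is far, bottom is near.
        depth = 1.0 - ys / (h - 1)
    elif mode == "diagonal":
        # Top-left corner is far.
        depth = 1.0 - (ys / (h - 1) + xs / (w - 1)) / 2.0
    elif mode == "radial":
        # Center is near, edges are far.
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r = np.hypot(ys - cy, xs - cx)
        depth = r / r.max()
    else:
        raise ValueError(f"unknown depth_mode: {mode}")
    return depth

d = make_depth(5, 5, "radial")
```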

Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.AtmosphericFog(density_range=(1.0, 2.5), depth_mode="linear", p=1.0)
>>> result = transform(image=image)["image"]

Notes

The depth map is synthetic (generated from image position), not from actual scene geometry. For best results with outdoor scenes, "linear" mode works well since distant objects tend to be near the top of the frame.