Apply optical distortion to images, masks, bounding boxes, and keypoints using either a camera-matrix or a fisheye lens model.
Supports two distortion models:

1. Camera matrix model (original): uses OpenCV's camera calibration model with distortion coefficients k1 = k2 = k.
2. Fisheye model: direct radial distortion r_dist = r * (1 + gamma * r²).
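As an illustration, the fisheye mapping can be sketched in NumPy. This assumes radii are measured from the image center and normalized by the half-size; the library's exact normalization may differ:

```python
import numpy as np

def fisheye_distort(points, gamma, center, max_radius):
    """Apply r_dist = r * (1 + gamma * r**2) to 2D points.

    Radii are normalized by max_radius so that gamma is comparable to
    distort_range values (an illustrative choice, not necessarily the
    library's exact normalization).
    """
    pts = (np.asarray(points, dtype=float) - center) / max_radius
    r = np.linalg.norm(pts, axis=1, keepdims=True)
    return pts * (1.0 + gamma * r**2) * max_radius + center

center = np.array([50.0, 50.0])
out = fisheye_distort([[50.0, 50.0], [90.0, 50.0]], gamma=0.3,
                      center=center, max_radius=50.0)
# The center is a fixed point; points farther out move more when gamma > 0.
```

For gamma > 0 this pushes pixels outward (barrel-like stretching); for gamma < 0 it pulls them inward, which matches the sign convention of `distort_range`.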
distort_range: Range of the distortion coefficient, sampled once per image. Recommended range for the camera model: (-0.05, 0.05); for the fisheye model: (-0.3, 0.3). Default: (-0.05, 0.05).
mode: Distortion model to use, either "camera" or "fisheye". Default: "camera".
interpolation: Interpolation method used for the image transformation. Should be one of: cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
mask_interpolation: Interpolation method used for the mask transformation. Should be one of: cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_NEAREST.
keypoint_remapping_method: Method to use for keypoint remapping.
map_resolution_range: Range for sampling the displacement-map resolution relative to the target size. Values below 1.0 generate lower-resolution maps and upscale them, trading precision for speed. Default: (1.0, 1.0).
p: Probability of applying the transform. Default: 0.5.
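To illustrate what `map_resolution_range` controls, here is a dependency-free sketch: the displacement map is built at a fraction of the target size and then upscaled. Nearest-neighbour repetition stands in for the interpolation a real implementation would use, and the map values are arbitrary:

```python
import numpy as np

h, w = 8, 8
scale = 0.5  # a value sampled from map_resolution_range, e.g. (0.5, 1.0)
mh, mw = int(h * scale), int(w * scale)

# Low-resolution horizontal displacement map (illustrative values only).
low_res = np.linspace(-1.0, 1.0, mh * mw, dtype=np.float32).reshape(mh, mw)

# Upscale back to the target size; a real implementation would interpolate,
# nearest-neighbour repetition keeps this sketch dependency-free.
full_res = low_res.repeat(h // mh, axis=0).repeat(w // mw, axis=1)
assert full_res.shape == (h, w)
```

Computing the map at half resolution touches a quarter of the pixels, which is where the speed gain comes from; the cost is a coarser, slightly less precise displacement field.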
>>> import albumentations as A
>>> import numpy as np
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> mask = np.random.randint(0, 2, (100, 100), dtype=np.uint8)
>>> bboxes = [(10, 10, 50, 50)]
>>> keypoints = [(30, 30)]
>>> transform = A.Compose(
...     [A.OpticalDistortion(distort_range=(-0.1, 0.1), p=1.0)],
...     bbox_params=A.BboxParams(format='pascal_voc', label_fields=[]),
...     keypoint_params=A.KeypointParams(format='xy'),
... )
>>> transformed = transform(image=image, mask=mask, bboxes=bboxes, keypoints=keypoints)
>>> transformed_image = transformed['image']
>>> transformed_mask = transformed['mask']
>>> transformed_bboxes = transformed['bboxes']
>>> transformed_keypoints = transformed['keypoints']