Apply optical distortion to images, masks, bounding boxes, and keypoints using either a camera-matrix or a fisheye lens model. Key parameters: distort_limit, mode ('camera' or 'fisheye'), interpolation.
Supports two distortion models:
Camera matrix model (original): uses OpenCV's camera calibration model with a single distortion coefficient k applied as k1 = k2 = k
Fisheye model: Direct radial distortion: r_dist = r * (1 + gamma * r²)
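The fisheye formula above can be sketched directly (an illustrative snippet, not the library's internal code; `gamma` stands in for the coefficient sampled from distort_limit):

```python
import numpy as np

# Fisheye radial distortion: r_dist = r * (1 + gamma * r^2),
# where r is the normalized distance from the image center.
def fisheye_radius(r, gamma):
    return r * (1 + gamma * r**2)

r = np.linspace(0.0, 1.0, 5)      # normalized radii from the image center
print(fisheye_radius(r, 0.3))     # gamma > 0: distorted radius grows with r
print(fisheye_radius(r, -0.3))    # gamma < 0: distorted radius shrinks with r
```

Note that the effect is strongest at the image borders (r close to 1) and vanishes at the center (r = 0), which is characteristic of radial lens distortion.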
distort_limit: Range of the distortion coefficient. For the camera model the recommended range is (-0.05, 0.05); for the fisheye model it is (-0.3, 0.3). Default: (-0.05, 0.05).
mode: Distortion model to use. Should be one of: 'camera', 'fisheye'.
interpolation: Interpolation method used for image transformation. Should be one of: cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
mask_interpolation: Interpolation method used for mask transformation. Should be one of: cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_NEAREST.
keypoint_remapping_method: Method to use for keypoint remapping. Should be one of: 'mask' (remaps keypoints through an auxiliary mask; slower but more accurate), 'direct' (remaps keypoint coordinates directly through the distortion maps; faster). Default: 'mask'.
map_resolution_range: Range for downsampling the distortion map before applying it. Values should be in (0, 1], where 1.0 means full resolution. Lower values produce smaller distortion maps that are faster to compute but may yield less precise distortions. The actual resolution factor is sampled uniformly from this range. Default: (1.0, 1.0).
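The trade-off behind map_resolution_range can be illustrated outside the library (a rough sketch, not Albumentations' internals): computing the distortion field on a smaller grid is cheaper, and upscaling it back only approximates the full-resolution field.

```python
import numpy as np

# Illustrative fisheye scale field 1 + gamma * r^2 on an h x w grid,
# where r is the radius normalized to the half-extent of the image.
def distortion_scale(h, w, gamma):
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot((xs - cx) / cx, (ys - cy) / cy)
    return 1 + gamma * r**2

full = distortion_scale(200, 200, 0.3)            # full-resolution map
small = distortion_scale(100, 100, 0.3)           # 4x fewer points to compute
approx = np.repeat(np.repeat(small, 2, 0), 2, 1)  # nearest-neighbor upscale
print(np.abs(approx - full).max())                # small approximation error
```

The half-resolution map costs roughly a quarter of the compute, at the price of a slightly less accurate distortion field — the essence of choosing values below 1.0.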
p: Probability of applying the transform. Default: 0.5.
>>> import albumentations as A
>>> import numpy as np
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> mask = np.random.randint(0, 2, (100, 100), dtype=np.uint8)
>>> bboxes = [(10, 10, 50, 50)]
>>> keypoints = [(20, 30)]
>>> transform = A.Compose(
...     [A.OpticalDistortion(distort_limit=0.1, p=1.0)],
...     bbox_params=A.BboxParams(format='pascal_voc', label_fields=[]),
...     keypoint_params=A.KeypointParams(format='xy'),
... )
>>> transformed = transform(image=image, mask=mask, bboxes=bboxes, keypoints=keypoints)
>>> transformed_image = transformed['image']
>>> transformed_mask = transformed['mask']
>>> transformed_bboxes = transformed['bboxes']
>>> transformed_keypoints = transformed['keypoints']