MaskDropout

Targets:
image
mask
bboxes
keypoints
volume
mask3d
Image Types: uint8, float32

Apply dropout to random objects in a mask, filling the corresponding regions in both the image and mask (with zeros by default).

This transform identifies objects in the mask (where each unique non-zero value represents a distinct object), randomly selects a number of these objects, and sets their corresponding regions to zero in both the image and mask. It can also handle bounding boxes and keypoints, removing or adjusting them based on the dropout regions.
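The select-and-fill step described above can be sketched in plain NumPy. This is a simplified illustration of the idea, not the library's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample image and instance mask: each unique non-zero value is one object
image = rng.integers(0, 256, (100, 100, 3), dtype=np.uint8)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:40, 20:40] = 1
mask[60:80, 60:80] = 2

# Pick how many objects to drop (here: between 1 and all of them)
object_ids = np.unique(mask)
object_ids = object_ids[object_ids != 0]            # 0 is background
n_drop = rng.integers(1, len(object_ids) + 1)
dropped = rng.choice(object_ids, size=n_drop, replace=False)

# Zero out the selected objects' pixels in both image and mask
drop_region = np.isin(mask, dropped)
image[drop_region] = 0                              # plays the role of `fill`
mask[drop_region] = 0                               # plays the role of `fill_mask`
```

Bounding boxes and keypoints falling inside `drop_region` are then filtered or adjusted by the pipeline's bbox/keypoint processors.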

Arguments
max_objects
tuple[int, int] | int
(1, 1)

Maximum number of objects to dropout. If a single int is provided, it's treated as the upper bound. If a tuple of two ints is provided, it's treated as a range [min, max].

fill
float | Literal["inpaint_telea", "inpaint_ns"]
0

Value to fill dropped out regions in the image. Can be one of:

  • float: Constant value to fill the regions (e.g., 0 for black, 255 for white)
  • "inpaint_telea": Use Telea inpainting algorithm (for 3-channel images only)
  • "inpaint_ns": Use Navier-Stokes inpainting algorithm (for 3-channel images only)
fill_mask
float
0

Value to fill the dropped out regions in the mask.

min_area
float
0.0

Minimum area (in pixels) of a bounding box that must remain visible after dropout for the box to be kept. Only applicable if bounding box augmentation is enabled. Default: 0.0

min_visibility
float
0.0

Minimum visibility ratio (visible area / total area) of a bounding box after dropout for the box to be kept. Only applicable if bounding box augmentation is enabled. Default: 0.0

p
float
0.5

Probability of applying the transform. Default: 0.5.

Examples
>>> import numpy as np
>>> import albumentations as A
>>>
>>> # Prepare sample data
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> mask = np.zeros((100, 100), dtype=np.uint8)
>>> mask[20:40, 20:40] = 1  # Object 1
>>> mask[60:80, 60:80] = 2  # Object 2
>>> bboxes = np.array([[20, 20, 40, 40], [60, 60, 80, 80]], dtype=np.float32)
>>> bbox_labels = [1, 2]
>>> keypoints = np.array([[30, 30], [70, 70]], dtype=np.float32)
>>> keypoint_labels = [0, 1]
>>>
>>> # Define the transform with tuple for max_objects
>>> transform = A.Compose(
...     transforms=[
...         A.MaskDropout(
...             max_objects=(1, 2),  # Using tuple to specify min and max objects to drop
...             fill=0,  # Fill value for dropped regions in image
...             fill_mask=0,  # Fill value for dropped regions in mask
...             p=1.0
...         ),
...     ],
...     bbox_params=A.BboxParams(
...         format='pascal_voc',
...         label_fields=['bbox_labels'],
...         min_area=1,
...         min_visibility=0.1
...     ),
...     keypoint_params=A.KeypointParams(
...         format='xy',
...         label_fields=['keypoint_labels'],
...         remove_invisible=True
...     )
... )
>>>
>>> # Apply the transform
>>> transformed = transform(
...     image=image,
...     mask=mask,
...     bboxes=bboxes,
...     bbox_labels=bbox_labels,
...     keypoints=keypoints,
...     keypoint_labels=keypoint_labels
... )
>>>
>>> # Get the transformed data
>>> transformed_image = transformed['image']  # Image with dropped out regions
>>> transformed_mask = transformed['mask']    # Mask with dropped out regions
>>> transformed_bboxes = transformed['bboxes']  # Remaining bboxes after dropout
>>> transformed_bbox_labels = transformed['bbox_labels']  # Labels for remaining bboxes
>>> transformed_keypoints = transformed['keypoints']  # Remaining keypoints after dropout
>>> transformed_keypoint_labels = transformed['keypoint_labels']  # Labels for remaining keypoints
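If you need to know which instances were removed, compare the unique labels before and after the transform. A small NumPy-only illustration, using a hypothetical outcome in which object 2 was dropped:

```python
import numpy as np

# Instance mask before the transform
mask_before = np.zeros((100, 100), dtype=np.uint8)
mask_before[20:40, 20:40] = 1
mask_before[60:80, 60:80] = 2

# Hypothetical result: suppose MaskDropout removed object 2
mask_after = mask_before.copy()
mask_after[mask_after == 2] = 0

# Dropped ids = ids present before but absent after (0 is background)
before_ids = set(np.unique(mask_before).tolist()) - {0}
after_ids = set(np.unique(mask_after).tolist()) - {0}
dropped_ids = before_ids - after_ids
```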
Notes
  • The mask should be a single-channel image where 0 represents the background and non-zero values represent different object instances.
  • For bounding box and keypoint augmentation, make sure to set up the corresponding processors in the pipeline.
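Because each distinct non-zero value is treated as one object, a plain binary (0/1) mask is treated as a single object. If you start from a binary mask, label its connected components first so individual objects can be dropped independently. In practice you would likely use `cv2.connectedComponents` or `scipy.ndimage.label`; a dependency-free sketch using 4-connected flood fill:

```python
import numpy as np
from collections import deque

def label_instances(binary_mask):
    """Assign a distinct positive id to each 4-connected foreground component.

    Note: uint8 labels cap the number of instances at 255.
    """
    labels = np.zeros_like(binary_mask, dtype=np.uint8)
    next_id = 1
    h, w = binary_mask.shape
    for sy in range(h):
        for sx in range(w):
            if binary_mask[sy, sx] and labels[sy, sx] == 0:
                # Flood-fill this component with next_id
                q = deque([(sy, sx)])
                labels[sy, sx] = next_id
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary_mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_id
                            q.append((ny, nx))
                next_id += 1
    return labels

binary = np.zeros((10, 10), dtype=np.uint8)
binary[1:4, 1:4] = 1   # first blob
binary[6:9, 6:9] = 1   # second blob
instances = label_instances(binary)
```

The resulting `instances` array has values 0 (background), 1, and 2, which is the format MaskDropout expects.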