
ChannelDropout

Description

Randomly drop channels in the input image.

    This transform randomly selects a number of channels to drop from the input image
    and replaces them with a specified fill value. This can improve model robustness
    to missing or corrupted channels.

    The technique is conceptually similar to:
    - Dropout layers in neural networks, which randomly set input units to 0 during training.
    - CoarseDropout augmentation, which drops out regions in the spatial dimensions of the image.

    However, ChannelDropout operates on the channel dimension, effectively "dropping out"
    entire color channels or feature maps.
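
    For intuition, the core operation can be sketched in plain NumPy as below. This is
    an illustrative approximation only, not the library's implementation; names such as
    `num_drop` and `channels_to_drop` are made up for this sketch.

    >>> import numpy as np
    >>> rng = np.random.default_rng(0)
    >>> image = rng.integers(0, 256, (100, 100, 3), dtype=np.uint8)
    >>> num_channels = image.shape[-1]
    >>> num_drop = rng.integers(1, 1 + 1)  # inclusive draw from channel_drop_range=(1, 1)
    >>> channels_to_drop = rng.choice(num_channels, size=num_drop, replace=False)
    >>> dropped = image.copy()
    >>> dropped[..., channels_to_drop] = 0  # replace selected channels with fill_value=0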

    Args:
        channel_drop_range (tuple[int, int]): Range from which to choose the number
            of channels to drop. The actual number will be randomly selected from
            the inclusive range [min, max]. Default: (1, 1).
        fill_value (float): Pixel value used to fill the dropped channels.
            Default: 0.
        p (float): Probability of applying the transform. Must be in the range
            [0, 1]. Default: 0.5.

    Raises:
        NotImplementedError: If the input image has only one channel.
        ValueError: If the upper bound of channel_drop_range is greater than or
            equal to the number of channels in the input image.

    Targets:
        image

    Image types:
        uint8, float32

    Example:
        >>> import numpy as np
        >>> import albumentations as A
        >>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
        >>> transform = A.ChannelDropout(channel_drop_range=(1, 2), fill_value=128, p=1.0)
        >>> result = transform(image=image)
        >>> dropped_image = result['image']
        >>> assert dropped_image.shape == image.shape
        >>> assert np.any(dropped_image != image)  # Some channels should be different

    Note:
        - The number of channels to drop is randomly chosen within the specified range.
        - Channels are randomly selected for dropping.
        - This transform is not applicable to single-channel (grayscale) images.
        - The transform will raise an error if it's not possible to drop the specified
          number of channels (e.g., trying to drop 3 channels from an RGB image).
        - This augmentation can be particularly useful for training models to be robust
          against missing or corrupted channel data in multi-spectral or hyperspectral
          imagery (see the sketch after this list).
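
    As noted above, the transform also applies to imagery with more than three channels.
    A minimal sketch, assuming a synthetic 6-band float32 image and the parameter names
    documented here (channel_drop_range, fill_value); the band count and values are
    illustrative only.

    >>> import numpy as np
    >>> import albumentations as A
    >>> bands = np.random.rand(64, 64, 6).astype(np.float32)  # synthetic 6-band image in [0, 1]
    >>> transform = A.ChannelDropout(channel_drop_range=(1, 3), fill_value=0.0, p=1.0)
    >>> augmented = transform(image=bands)['image']
    >>> assert augmented.shape == bands.shape  # 1 to 3 of the 6 bands are now constant at the fill value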

    

Parameters

  • p: float (default: 0.5)
  • channel_drop_range: tuple[int, int] (default: (1, 1))
  • fill_value: float (default: 0)
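
In a typical pipeline, ChannelDropout is composed with other augmentations. A minimal
sketch, assuming the parameter names listed above; the accompanying transform and the
probabilities are illustrative only.

    import albumentations as A

    pipeline = A.Compose([
        A.HorizontalFlip(p=0.5),
        A.ChannelDropout(channel_drop_range=(1, 2), fill_value=0, p=0.3),
    ])
    # augmented = pipeline(image=image)["image"]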

Targets

  • Image
