Adapts the pixel value distribution of an input image to match a reference image using statistical transformations (PCA, StandardScaler, or MinMaxScaler).
This transform aims to harmonize images from different domains by aligning their pixel-level statistical properties.
Why use Pixel Distribution Adaptation?

Domain Adaptation: Useful for aligning images across domains with differing pixel statistics (e.g., caused by different sensors, lighting, or post-processing).
Use Case Example: Suppose you have labeled data from Scanner A and need a model to perform well on unlabeled data from Scanner B, where images might have different overall brightness, contrast, or color biases. This transform can adapt the labeled images from Scanner A to mimic the pixel distribution style of the images from Scanner B, potentially improving generalization without needing labels for Scanner B data.
How it works:
    1. A statistical transform (specified by `transform_type`) is fitted to both the input (source) image
       and the reference (target) image separately.
    2. The source pixels are mapped through the source-fitted transform and then through the inverse of the
       reference-fitted transform, aligning the source pixel distribution with that of the reference.
    3. The adapted image is blended with the original according to a ratio sampled from `blend_ratio`.

Args:
    metadata_key (str): Key in the input data dictionary to retrieve the reference image(s).
        The value should be a sequence (e.g., list) of numpy arrays (pre-loaded images).
        Default: "pda_metadata".
    blend_ratio (tuple[float, float]): Specifies the minimum and maximum blend ratio for mixing
        the adapted image with the original. A value of 0 means the original image is returned;
        1 means the fully adapted image is returned. A random value within the range [min, max]
        is sampled for each application. Default: (0.25, 1.0).
    transform_type (str): Specifies the type of statistical transformation to apply:
        "pca" (Principal Component Analysis), "standard" (StandardScaler: zero mean, unit
        variance), or "minmax" (MinMaxScaler: scales values to a fixed range). Default: "pca".
    p (float): The probability of applying the transform. Default: 0.5.
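The "standard" option can be illustrated with a short numpy sketch. This is an approximation of the idea (standardize the source per channel, re-scale to the reference statistics, then blend), not the library's actual implementation; the names `src`, `ref`, and `blend` are illustrative.

```python
import numpy as np

# Illustrative sketch of StandardScaler-style adaptation (assumption:
# this mirrors the idea of transform_type="standard", not the exact code).
rng = np.random.default_rng(0)
src = rng.normal(100, 20, (100, 100, 3))
ref = rng.normal(150, 30, (100, 100, 3))

# Standardize the source per channel, then re-scale to the reference
# image's per-channel mean and standard deviation.
src_norm = (src - src.mean(axis=(0, 1))) / src.std(axis=(0, 1))
adapted = src_norm * ref.std(axis=(0, 1)) + ref.mean(axis=(0, 1))

# blend_ratio semantics: mix the adapted image back with the original.
blend = 0.8  # in the real transform this is sampled from blend_ratio
result = blend * adapted + (1 - blend) * src
```

After this mapping, the adapted image's per-channel mean and standard deviation match those of the reference, which is the alignment the transform aims for.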
>>> import numpy as np
>>> import albumentations as A
>>> import cv2
>>>
>>> # Create sample images for demonstration
>>> # Source image: simulated image from domain A (e.g., medical scan from one scanner)
>>> source_image = np.random.normal(100, 20, (100, 100, 3)).clip(0, 255).astype(np.uint8)
>>>
>>> # Reference image: image from domain B with different statistical properties
>>> # (e.g., scan from a different scanner with different intensity distribution)
>>> reference_image = np.random.normal(150, 30, (100, 100, 3)).clip(0, 255).astype(np.uint8)
>>>
>>> # Example 1: Using PCA transformation (default)
>>> pca_transform = A.Compose([
...     A.PixelDistributionAdaptation(
...         transform_type="pca",
...         blend_ratio=(0.8, 1.0),  # Strong adaptation
...         metadata_key="reference_images",
...         p=1.0
...     )
... ])
>>>
>>> # Apply the transform with the reference image
>>> pca_result = pca_transform(
...     image=source_image,
...     reference_images=[reference_image]
... )
>>>
>>> # Get the adapted image
>>> pca_adapted_image = pca_result["image"]
>>>
>>> # Example 2: Using StandardScaler transformation
>>> standard_transform = A.Compose([
...     A.PixelDistributionAdaptation(
...         transform_type="standard",
...         blend_ratio=(0.5, 0.7),  # Moderate adaptation
...         metadata_key="reference_images",
...         p=1.0
...     )
... ])
>>>
>>> standard_result = standard_transform(
...     image=source_image,
...     reference_images=[reference_image]
... )
>>> standard_adapted_image = standard_result["image"]
>>>
>>> # Example 3: Using MinMaxScaler transformation
>>> minmax_transform = A.Compose([
...     A.PixelDistributionAdaptation(
...         transform_type="minmax",
...         blend_ratio=(0.3, 0.5),  # Subtle adaptation
...         metadata_key="reference_images",
...         p=1.0
...     )
... ])
>>>
>>> minmax_result = minmax_transform(
...     image=source_image,
...     reference_images=[reference_image]
... )
>>> minmax_adapted_image = minmax_result["image"]
>>>
>>> # Example 4: Using multiple reference images
>>> # When multiple reference images are provided, one is randomly selected for each transformation
>>> multiple_references = [
...     reference_image,
...     np.random.normal(180, 25, (100, 100, 3)).clip(0, 255).astype(np.uint8),
...     np.random.normal(120, 40, (100, 100, 3)).clip(0, 255).astype(np.uint8)
... ]
>>>
>>> multi_ref_transform = A.Compose([
...     A.PixelDistributionAdaptation(p=1.0)  # Using default settings
... ])
>>>
>>> # Each time the transform is applied, it randomly selects one of the reference images
>>> multi_ref_result = multi_ref_transform(
...     image=source_image,
...     pda_metadata=multiple_references  # Using the default metadata key
... )
>>> adapted_image = multi_ref_result["image"]
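For intuition about the "pca" option, the sketch below fits principal axes to each image's pixels and re-expresses the source pixels in the reference image's axes. This is a rough illustration of the idea in plain numpy; the helper `fit_pca` and all variable names are hypothetical, not the library's actual code.

```python
import numpy as np

def fit_pca(pixels):
    """Return the mean and principal axes of (N, 3) pixel data (hypothetical helper)."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    _, axes = np.linalg.eigh(cov)  # columns are principal axes
    return mean, axes

rng = np.random.default_rng(0)
src = rng.normal(100, 20, (64, 64, 3)).reshape(-1, 3)
ref = rng.normal(150, 30, (64, 64, 3)).reshape(-1, 3)

src_mean, src_axes = fit_pca(src)
ref_mean, ref_axes = fit_pca(ref)

# Project source pixels onto their own principal axes, then reconstruct
# the coordinates in the reference image's axes, shifted to its mean.
adapted = (src - src_mean) @ src_axes @ ref_axes.T + ref_mean
```

Note that this forward-then-inverse structure (source transform followed by the inverse of the reference transform) is the same pattern described in "How it works" above, with PCA playing the role of the fitted statistical transform.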