PixelDistributionAdaptation

Targets: image, volume
Image Types: uint8, float32

Adapts the pixel value distribution of an input image to match a reference image using statistical transformations (PCA, StandardScaler, or MinMaxScaler).

This transform aims to harmonize images from different domains by aligning their pixel-level statistical properties.

Why use Pixel Distribution Adaptation? It is useful for domain adaptation: aligning images across domains with differing pixel statistics (e.g., caused by different sensors, lighting conditions, or post-processing).

Use Case Example: Suppose you have labeled data from Scanner A and need the model to perform well on unlabeled data from Scanner B, whose images may differ in overall brightness, contrast, or color bias. This transform can adapt the labeled Scanner A images to mimic the pixel distribution of the Scanner B images, potentially improving generalization without requiring labels for the Scanner B data.

How it works:

  1. A chosen statistical transform (transform_type) is fitted to both the input (source) image and the reference (target) image separately.
  2. The input image is transformed using the transform fitted on it (moving it to a standardized space).
  3. The inverse transform fitted on the reference image is applied to the result from step 2 (moving the standardized input into the reference image's statistical space).
  4. The result is optionally blended with the original input image using blend_ratio.
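The four steps above can be sketched in a few lines. This is a simplified illustration using scikit-learn's StandardScaler (the `transform_type="standard"` case), not the library's actual implementation; the helper `adapt_pixels` is a hypothetical name introduced here.

```python
# Simplified sketch of the fit / transform / inverse-transform flow
# described above, assuming transform_type="standard".
import numpy as np
from sklearn.preprocessing import StandardScaler

def adapt_pixels(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Flatten H x W x C images into (num_pixels, channels) matrices
    src_flat = source.reshape(-1, source.shape[-1]).astype(np.float32)
    ref_flat = reference.reshape(-1, reference.shape[-1]).astype(np.float32)

    # Step 1: fit a separate transform to each image
    src_scaler = StandardScaler().fit(src_flat)
    ref_scaler = StandardScaler().fit(ref_flat)

    # Step 2: move the source pixels into a standardized space
    standardized = src_scaler.transform(src_flat)

    # Step 3: apply the reference's inverse transform, mapping the
    # standardized pixels into the reference's statistical space
    adapted = ref_scaler.inverse_transform(standardized)

    return adapted.reshape(source.shape)

source = np.random.normal(100, 20, (4, 4, 3)).astype(np.float32)
reference = np.random.normal(150, 30, (4, 4, 3)).astype(np.float32)
result = adapt_pixels(source, reference)
# Per-channel mean and std of `result` now match `reference`
```

With the standard scaler, step 2 maps the source to zero mean and unit variance per channel, so step 3 leaves the output with exactly the reference's per-channel mean and variance. PCA additionally aligns channel correlations, not just marginal statistics.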
Arguments
metadata_key
str
pda_metadata

Key in the input data dictionary to retrieve the reference image(s). The value should be a sequence (e.g., list) of numpy arrays (pre-loaded images). Default: "pda_metadata".

blend_ratio
tuple[float, float]
(0.25, 1.0)

Specifies the minimum and maximum blend ratio for mixing the adapted image with the original. A value of 0 means the original image is returned, 1 means the fully adapted image is returned. A random value within this range [min, max] is sampled for each application. Default: (0.25, 1.0).

transform_type
pca | standard | minmax
pca

Specifies the type of statistical transformation to apply:

  • "pca": Principal Component Analysis.
  • "standard": StandardScaler (zero mean, unit variance).
  • "minmax": MinMaxScaler (scales to the [0, 1] range).

Default: "pca".
p
float
0.5

The probability of applying the transform. Default: 0.5.
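The blending controlled by blend_ratio is a simple convex combination of the original and adapted images. A minimal sketch (the `blend` helper is hypothetical, introduced only to illustrate how the sampled ratio is used):

```python
# Sketch of blend_ratio behavior: a ratio is sampled uniformly from
# [min, max] each time the transform is applied, then used to mix the
# adapted image back with the original.
import numpy as np

rng = np.random.default_rng(0)

def blend(original, adapted, blend_ratio=(0.25, 1.0)):
    ratio = rng.uniform(*blend_ratio)
    # ratio == 0 would return the original image; ratio == 1 the fully
    # adapted one; values in between interpolate linearly
    return (1.0 - ratio) * original.astype(np.float32) + ratio * adapted.astype(np.float32)

original = np.full((2, 2, 3), 100, dtype=np.uint8)
adapted = np.full((2, 2, 3), 200, dtype=np.uint8)
mixed = blend(original, adapted)
# Every pixel of `mixed` lies between the original and adapted values
```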

Examples
>>> import numpy as np
>>> import albumentations as A
>>> import cv2
>>>
>>> # Create sample images for demonstration
>>> # Source image: simulated image from domain A (e.g., medical scan from one scanner)
>>> source_image = np.random.normal(100, 20, (100, 100, 3)).clip(0, 255).astype(np.uint8)
>>>
>>> # Reference image: image from domain B with different statistical properties
>>> # (e.g., scan from a different scanner with different intensity distribution)
>>> reference_image = np.random.normal(150, 30, (100, 100, 3)).clip(0, 255).astype(np.uint8)
>>>
>>> # Example 1: Using PCA transformation (default)
>>> pca_transform = A.Compose([
...     A.PixelDistributionAdaptation(
...         transform_type="pca",
...         blend_ratio=(0.8, 1.0),  # Strong adaptation
...         metadata_key="reference_images",
...         p=1.0
...     )
... ])
>>>
>>> # Apply the transform with the reference image
>>> pca_result = pca_transform(
...     image=source_image,
...     reference_images=[reference_image]
... )
>>>
>>> # Get the adapted image
>>> pca_adapted_image = pca_result["image"]
>>>
>>> # Example 2: Using StandardScaler transformation
>>> standard_transform = A.Compose([
...     A.PixelDistributionAdaptation(
...         transform_type="standard",
...         blend_ratio=(0.5, 0.7),  # Moderate adaptation
...         metadata_key="reference_images",
...         p=1.0
...     )
... ])
>>>
>>> standard_result = standard_transform(
...     image=source_image,
...     reference_images=[reference_image]
... )
>>> standard_adapted_image = standard_result["image"]
>>>
>>> # Example 3: Using MinMaxScaler transformation
>>> minmax_transform = A.Compose([
...     A.PixelDistributionAdaptation(
...         transform_type="minmax",
...         blend_ratio=(0.3, 0.5),  # Subtle adaptation
...         metadata_key="reference_images",
...         p=1.0
...     )
... ])
>>>
>>> minmax_result = minmax_transform(
...     image=source_image,
...     reference_images=[reference_image]
... )
>>> minmax_adapted_image = minmax_result["image"]
>>>
>>> # Example 4: Using multiple reference images
>>> # When multiple reference images are provided, one is randomly selected for each transformation
>>> multiple_references = [
...     reference_image,
...     np.random.normal(180, 25, (100, 100, 3)).clip(0, 255).astype(np.uint8),
...     np.random.normal(120, 40, (100, 100, 3)).clip(0, 255).astype(np.uint8)
... ]
>>>
>>> multi_ref_transform = A.Compose([
...     A.PixelDistributionAdaptation(p=1.0)  # Using default settings
... ])
>>>
>>> # Each time the transform is applied, it randomly selects one of the reference images
>>> multi_ref_result = multi_ref_transform(
...     image=source_image,
...     pda_metadata=multiple_references  # Using the default metadata key
... )
>>> adapted_image = multi_ref_result["image"]
Notes
  • Requires at least one reference image to be provided via the metadata_key argument.