FDA

Targets: image, volume
Image Types: uint8, float32

Fourier Domain Adaptation (FDA).

Adapts the style of the input image to match the style of a reference image by manipulating their frequency components in the Fourier domain. This is particularly useful for unsupervised domain adaptation (UDA).

Why use FDA?

Domain Adaptation: FDA helps bridge the domain gap between source and target datasets (e.g., synthetic vs. real, day vs. night) by aligning their low-frequency Fourier spectrum components. This can improve model performance on the target domain without requiring target labels.

Use Case Example: Imagine you have labeled training data acquired under certain conditions (e.g., images from Hospital A using a specific scanner) but need your model to perform well on data from a different distribution (e.g., unlabeled images from Hospital B with a different scanner). FDA can adapt the labeled source images to match the style (frequency characteristics) of the unlabeled target images, potentially improving the model's generalization to the target domain at test time.

How it works: FDA operates in the frequency domain. It replaces the low-frequency amplitude components of the source image's 2D Fourier transform with those of the reference (target-domain) image, while keeping the source image's phase intact. The beta_limit parameter controls the size of the frequency window being swapped.
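The frequency swap described above can be sketched with NumPy for a single-channel image. This is a minimal illustrative sketch, not Albumentations' internals: the function name `fda_swap` and the floor-based window half-size are assumptions following the common FDA formulation.

```python
import numpy as np

def fda_swap(src: np.ndarray, trg: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Swap low-frequency amplitudes of src with those of trg (single channel)."""
    # 2D FFT of both images; shift the zero frequency to the center.
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_trg = np.fft.fftshift(np.fft.fft2(trg))
    # Split into amplitude and phase; FDA swaps amplitudes only.
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)
    # Half-size of the centered low-frequency window, controlled by beta.
    h, w = src.shape
    b = int(np.floor(min(h, w) * beta))
    c_h, c_w = h // 2, w // 2
    # Replace the source's low-frequency amplitudes with the target's.
    amp_src[c_h - b:c_h + b + 1, c_w - b:c_w + b + 1] = \
        amp_trg[c_h - b:c_h + b + 1, c_w - b:c_w + b + 1]
    # Recombine with the source phase and invert the transform.
    fft_out = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(fft_out)))
```

Because the swapped window includes the DC component, the adapted image inherits the target's mean intensity while keeping the source's structure (phase).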

Arguments
metadata_key
str
fda_metadata

Key in the input data dictionary to retrieve the reference image(s). The value should be a sequence (e.g., list) of numpy arrays (pre-loaded images). Default: "fda_metadata".

beta_limit
tuple[float, float] | float
(0, 0.1)

Controls the extent of the low-frequency spectrum swap: a larger beta swaps more components. Corresponds to the β parameter in the original FDA paper. Must be in the range [0, 0.5]. The value is sampled uniformly from the provided [min, max] range. Default: (0, 0.1).

p
float
0.5

Probability of applying the transform. Default: 0.5.
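To make beta_limit concrete, the following hypothetical helper maps a sampled beta to the side length of the swapped low-frequency block, assuming the floor(min(H, W) * beta) window convention from the FDA paper (the exact convention used internally is an assumption):

```python
import numpy as np

def swapped_block_side(height: int, width: int, beta: float) -> int:
    """Side length of the centered low-frequency block swapped for a given beta."""
    b = int(np.floor(min(height, width) * beta))  # half-size of the window
    return 2 * b + 1                              # full (odd) side length

# On a 100x100 image, the default upper bound beta=0.1 swaps at most a
# 21x21 block, while the maximum beta=0.5 covers essentially the whole spectrum.
```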

Examples
>>> import numpy as np
>>> import albumentations as A
>>> import cv2
>>>
>>> # Create sample images for demonstration
>>> # Source image: synthetic or simulated image (e.g., from a rendered game environment)
>>> source_img = np.zeros((100, 100, 3), dtype=np.uint8)
>>> # Create a pattern in the source image
>>> source_img[20:80, 20:80, 0] = 200  # Red square
>>> source_img[40:60, 40:60, 1] = 200  # Green inner square
>>>
>>> # Target domain image: real-world image with different texture/frequency characteristics
>>> # For this example, we'll create an image with different frequency patterns
>>> target_img = np.zeros((100, 100, 3), dtype=np.uint8)
>>> for i in range(100):
...     for j in range(100):
...         # Create a high-frequency pattern
...         target_img[i, j, 0] = ((i + j) % 8) * 30
...         target_img[i, j, 1] = ((i - j) % 8) * 30
...         target_img[i, j, 2] = ((i * j) % 8) * 30
>>>
>>> # Example 1: FDA with minimal adaptation (small beta value)
>>> # This will subtly adjust the frequency characteristics
>>> minimal_fda = A.Compose([
...     A.FDA(
...         beta_limit=(0.01, 0.05),  # Small beta range for subtle adaptation
...         metadata_key="target_domain",  # Custom metadata key
...         p=1.0
...     )
... ])
>>>
>>> # Apply the transform with minimal adaptation
>>> minimal_result = minimal_fda(
...     image=source_img,
...     target_domain=[target_img]  # Pass reference image via custom metadata key
... )
>>> minimal_adapted_img = minimal_result["image"]
>>>
>>> # Example 2: FDA with moderate adaptation (medium beta value)
>>> moderate_fda = A.Compose([
...     A.FDA(
...         beta_limit=(0.1, 0.2),  # Medium beta range
...         metadata_key="target_domain",
...         p=1.0
...     )
... ])
>>>
>>> moderate_result = moderate_fda(image=source_img, target_domain=[target_img])
>>> moderate_adapted_img = moderate_result["image"]
>>>
>>> # Example 3: FDA with strong adaptation (larger beta value)
>>> strong_fda = A.Compose([
...     A.FDA(
...         beta_limit=(0.3, 0.5),  # Larger beta range (upper limit is MAX_BETA_LIMIT)
...         metadata_key="target_domain",
...         p=1.0
...     )
... ])
>>>
>>> strong_result = strong_fda(image=source_img, target_domain=[target_img])
>>> strong_adapted_img = strong_result["image"]
>>>
>>> # Example 4: Using multiple target domain images
>>> # Creating a list of target domain images with different characteristics
>>> target_imgs = [target_img]
>>>
>>> # Add another target image with different pattern
>>> another_target = np.zeros((100, 100, 3), dtype=np.uint8)
>>> for i in range(100):
...     for j in range(100):
...         another_target[i, j, 0] = (i // 10) * 25
...         another_target[i, j, 1] = (j // 10) * 25
...         another_target[i, j, 2] = ((i + j) // 10) * 25
>>> target_imgs.append(another_target)
>>>
>>> # Using default FDA settings with multiple target images
>>> multi_target_fda = A.Compose([
...     A.FDA(p=1.0)  # Using default settings with default metadata_key="fda_metadata"
... ])
>>>
>>> # A random target image will be selected from the list for each application
>>> multi_target_result = multi_target_fda(image=source_img, fda_metadata=target_imgs)
>>> adapted_image = multi_target_result["image"]

Notes
  • Requires at least one reference image to be provided via the metadata_key argument.