HistogramMatching
Adjust the pixel value distribution of an input image to match a reference image.
This transform modifies the pixel intensities of the input image so that its histogram matches the histogram of a provided reference image. This process is applied independently to each channel of the image if it is multi-channel.
Why use Histogram Matching?
Domain Adaptation: Helps bridge the gap between images from different sources (e.g., different cameras, lighting conditions, synthetic vs. real data) by aligning their overall intensity and contrast characteristics.
Use Case Example: Imagine you have labeled training images from one source (e.g., daytime photos, medical scans from Hospital A) but expect your model to work on images from a different source at test time (e.g., nighttime photos, scans from Hospital B). You might only have unlabeled images from the target (test) domain. HistogramMatching can be used to make your labeled training images resemble the style (intensity and contrast distribution) of the unlabeled target images. By training on these adapted images, your model may generalize better to the target domain without needing labels for it.
How it works: The core idea is to map the pixel values of the input image such that its cumulative distribution function (CDF) matches the CDF of the reference image. This effectively reshapes the input image's histogram to resemble the reference's histogram.
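The CDF mapping described above can be sketched for a single uint8 channel as follows. This is an illustrative numpy implementation, not the library's internal code, and the function name `match_histogram_channel` is hypothetical:

```python
import numpy as np

def match_histogram_channel(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map source pixel values so their CDF matches the reference CDF.

    Both inputs are single-channel uint8 arrays. (Illustrative sketch only.)
    """
    # Normalized histograms over the 256 possible uint8 values
    src_hist = np.bincount(source.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source intensity, find the reference intensity whose CDF value
    # first reaches the source CDF value, giving a monotone lookup table
    lookup = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lookup[source]
```

For a multi-channel image, the same mapping is computed and applied per channel, matching the per-channel behavior described above.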
metadata_key: Key in the input data dictionary used to retrieve the reference image(s). The value should be a sequence (e.g., a list) of pre-loaded numpy arrays. Default: "hm_metadata".
blend_ratio: Range for the blending factor between the original and the histogram-matched image. A value of 0 returns the original image; 1 returns the fully matched image. A random value within this [min, max] range is sampled for each application, allowing varying degrees of adaptation. Default: (0.5, 1.0).
p: Probability of applying the transform. Default: 0.5.
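The blending controlled by blend_ratio amounts to a convex combination of the original and the matched image. A minimal sketch of that step (the function name `blend` is hypothetical, not part of the library API):

```python
import numpy as np

def blend(original: np.ndarray, matched: np.ndarray, ratio: float) -> np.ndarray:
    """Blend the original and histogram-matched images.

    ratio is sampled uniformly from the blend_ratio range:
    0.0 -> original image, 1.0 -> fully matched image.
    """
    out = (1.0 - ratio) * original.astype(np.float32) + ratio * matched.astype(np.float32)
    return out.round().clip(0, 255).astype(np.uint8)
```

Sampling the ratio anew on each call means repeated applications of the transform adapt the image to the reference style by varying amounts.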
>>> import numpy as np
>>> import albumentations as A
>>> import cv2
>>>
>>> # Create sample images for demonstration
>>> # Source image: dark image with low contrast
>>> source_image = np.ones((100, 100, 3), dtype=np.uint8) * 50 # Dark gray image
>>> source_image[30:70, 30:70] = 100 # Add slightly brighter square in center
>>>
>>> # Target image: higher brightness and contrast
>>> target_image = np.ones((100, 100, 3), dtype=np.uint8) * 150 # Bright image
>>> target_image[20:80, 20:80] = 200 # Add even brighter square
>>>
>>> # Initialize the histogram matching transform with custom settings
>>> transform = A.Compose([
... A.HistogramMatching(
... blend_ratio=(0.7, 0.9), # Control the strength of histogram matching
... metadata_key="reference_imgs", # Custom metadata key
... p=1.0
... )
... ])
>>>
>>> # Apply the transform
>>> result = transform(
... image=source_image,
... reference_imgs=[target_image] # Pass reference image via metadata key
... )
>>>
>>> # Get the histogram-matched image
>>> matched_image = result["image"]
>>>
>>> # The matched_image will have brightness and contrast similar to target_image
>>> # while preserving the content of source_image
>>>
>>> # Multiple reference images can be provided:
>>> ref_imgs = [
... target_image,
... np.random.randint(100, 200, (100, 100, 3), dtype=np.uint8) # Another reference image
... ]
>>> multiple_refs_result = transform(image=source_image, reference_imgs=ref_imgs)
>>> # A random reference image from the list will be chosen for each transform application

- Requires at least one reference image to be provided via the metadata_key argument.
- Histogram Matching in scikit-image: https://scikit-image.org/docs/dev/auto_examples/color_exposure/plot_histogram_matching.html