GlassBlur
Apply a glass blur effect to the input image.
This transform simulates the effect of looking through textured glass by locally shuffling pixels in the image. It creates a distorted, frosted glass-like appearance.
sigma: Standard deviation for the Gaussian kernel used in the blur. Higher values increase the blur strength. Must be non-negative. Default: 0.7
max_delta: Maximum distance in pixels for shuffling. Determines how far pixels can be moved; larger values create more distortion. Must be a positive integer. Default: 4
iterations: Number of times to apply the glass blur effect. More iterations create a stronger effect but increase computation time. Must be a positive integer. Default: 2
mode: Mode of computation. Options are:
- "fast": uses a faster but potentially less accurate method.
- "exact": uses a slower but more precise method.
Default: "fast"
p: Probability of applying the transform. Must be in the range [0, 1]. Default: 0.5
>>> import numpy as np
>>> import albumentations as A
>>> import cv2
>>>
>>> # Create a sample image for demonstration
>>> image = np.zeros((300, 300, 3), dtype=np.uint8)
>>> # Add some shapes to visualize glass blur effects
>>> cv2.rectangle(image, (100, 100), (200, 200), (255, 0, 0), -1) # Red square
>>> cv2.circle(image, (150, 150), 30, (0, 255, 0), -1) # Green circle
>>> cv2.putText(image, "Text Sample", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
>>>
>>> # Example 1: Subtle glass effect (light frosting)
>>> subtle_transform = A.Compose([
... A.GlassBlur(
... sigma=0.4, # Lower sigma for gentler blur
... max_delta=2, # Small displacement
... iterations=1, # Single iteration
... mode="fast",
... p=1.0 # Always apply
... )
... ])
>>>
>>> subtle_result = subtle_transform(image=image)
>>> subtle_glass = subtle_result["image"]
>>> # The image will have a subtle glass-like distortion, like light frosting
>>>
>>> # Example 2: Medium glass effect (typical frosted glass)
>>> medium_transform = A.Compose([
... A.GlassBlur(
... sigma=0.7, # Default sigma
... max_delta=4, # Default displacement
... iterations=2, # Default iterations
... mode="fast",
... p=1.0
... )
... ])
>>>
>>> medium_result = medium_transform(image=image)
>>> medium_glass = medium_result["image"]
>>> # The image will have a moderate glass-like effect, similar to standard frosted glass
>>>
>>> # Example 3: Strong glass effect (heavy distortion)
>>> strong_transform = A.Compose([
... A.GlassBlur(
... sigma=1.0, # Higher sigma for stronger blur
... max_delta=6, # Larger displacement
... iterations=3, # More iterations
... mode="fast",
... p=1.0
... )
... ])
>>>
>>> strong_result = strong_transform(image=image)
>>> strong_glass = strong_result["image"]
>>> # The image will have a strong glass-like distortion, heavily obscuring details
>>>
>>> # Example 4: Using exact mode for higher quality
>>> exact_transform = A.Compose([
... A.GlassBlur(
... sigma=0.7,
... max_delta=4,
... iterations=2,
... mode="exact", # More precise but slower
... p=1.0
... )
... ])
>>>
>>> exact_result = exact_transform(image=image)
>>> exact_glass = exact_result["image"]
>>> # The image will have a similar effect to medium, but with potentially better quality
>>>
>>> # Example 5: In a pipeline with other transforms
>>> pipeline = A.Compose([
... A.RandomBrightnessContrast(brightness_limit=0.1, contrast_limit=0.1, p=0.7),
... A.GlassBlur(sigma=0.7, max_delta=4, iterations=2, p=0.5), # 50% chance of applying
... A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=15, val_shift_limit=10, p=0.3)
... ])
>>>
>>> pipeline_result = pipeline(image=image)
>>> transformed_image = pipeline_result["image"]
>>> # The image may have glass blur applied (50% probability) along with the other effects

- This transform is particularly effective for creating a 'looking through glass' effect or simulating the view through a frosted window.
- The 'fast' mode is recommended for most use cases, as it offers a good balance between effect quality and computation speed.
- Increasing 'iterations' will strengthen the effect but also increase the processing time linearly.
References:
- This implementation is based on the technique described in "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations": https://arxiv.org/abs/1903.12261
- Original implementation: https://github.com/hendrycks/robustness/blob/master/ImageNet-C/create_c/make_imagenet_c.py