

# NSGA-II Optimized Deep Learning Image Fusion (Pansharpening)

Multi-objective optimization of deep learning pansharpening models using NSGA-II (Non-dominated Sorting Genetic Algorithm II). The optimizer tunes CNN hyperparameters to achieve balanced results across competing image quality objectives.

## 🎯 What This Does

Traditional pansharpening fuses a low-resolution multispectral (MS) image with a high-resolution panchromatic (PAN) image to produce a high-resolution multispectral image. This project uses NSGA-II to find the optimal trade-offs between:

| Objective | Measures | Goal |
|---|---|---|
| SAM (Spectral Angle Mapper) | Spectral fidelity | Minimize ↓ |
| ERGAS | Normalized spectral-spatial error | Minimize ↓ |
| SF (Spatial Frequency) | Spatial detail/sharpness | Maximize ↑ |

These objectives fundamentally conflict: improving spatial sharpness often degrades spectral fidelity and vice versa. NSGA-II finds the Pareto front, the set of solutions where no objective can be improved without worsening another.
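A minimal NumPy sketch of the three objectives may help make them concrete. These are the standard textbook formulas; the project's `metrics.py` may differ in constants and edge-case handling:

```python
import numpy as np

def sam(ref, fused, eps=1e-8):
    """Mean Spectral Angle Mapper in degrees (bands-last arrays, H x W x C)."""
    dot = np.sum(ref * fused, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return np.degrees(angles.mean())

def ergas(ref, fused, ratio=4):
    """Relative dimensionless global error; ratio = PAN/MS resolution ratio."""
    rmse = np.sqrt(np.mean((ref - fused) ** 2, axis=(0, 1)))
    means = np.mean(ref, axis=(0, 1))
    return 100.0 / ratio * np.sqrt(np.mean((rmse / means) ** 2))

def spatial_frequency(img):
    """Row/column gradient energy; higher means sharper."""
    rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

Since NSGA-II minimizes all objectives, SF enters the fitness vector negated, as `[SAM, ERGAS, -SF]`.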

πŸ—οΈ Architecture

```
┌────────────────────────────────────────────────┐
│           NSGA-II Optimizer (pymoo)            │
│  Population: 20-30 individuals                 │
│  Each individual = [lr, λ_spec, λ_spat,        │
│                      n_filters, n_blocks]      │
├────────────────────────────────────────────────┤
│         ↓ For each individual:                 │
│  ┌────────────────────────────────────────┐    │
│  │     Z-PNN Fusion Network (PyTorch)     │    │
│  │  Input: [MS_upsampled ∥ PAN]           │    │
│  │  Architecture: Conv → ResBlocks → Conv │    │
│  │  Loss: λ_spec·L1 + λ_spat·SpatCorr     │    │
│  └────────────────────────────────────────┘    │
│         ↓ Train for N epochs                   │
│  ┌────────────────────────────────────────┐    │
│  │    Evaluate: SAM, ERGAS, SF            │    │
│  │    → fitness = [SAM, ERGAS, -SF]       │    │
│  └────────────────────────────────────────┘    │
│         ↓ Return to NSGA-II                    │
│  Selection → Crossover → Mutation → Next Gen   │
└────────────────────────────────────────────────┘
         ↓ After all generations
    Pareto Front + Visualizations
```
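The selection step above hinges on non-dominated sorting. pymoo ships its own implementation; the self-contained sketch below only illustrates the core idea, splitting a population of minimization fitness vectors into successive Pareto fronts:

```python
def non_dominated_fronts(fitnesses):
    """Partition indices of minimization objective vectors into Pareto fronts."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    remaining = list(range(len(fitnesses)))
    fronts = []
    while remaining:
        # keep the individuals not dominated by anyone still remaining
        front = [i for i in remaining
                 if not any(dominates(fitnesses[j], fitnesses[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For example, with fitness vectors of the form `[SAM, ERGAS, -SF]`, `non_dominated_fronts([[1, 1, -3], [2, 2, -1], [0.5, 3, -2]])` puts individuals 0 and 2 on the first front and individual 1 (dominated by 0 in every objective) on the second.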

πŸ“ Project Structure

```
nsga2_image_fusion/
├── __init__.py            # Package init
├── metrics.py             # 10+ image quality metrics
├── model.py               # Z-PNN fusion network + loss functions
├── data.py                # Data pipeline (synthetic, H5, TIFF)
├── train.py               # Training loop per individual
├── nsga2_optimizer.py     # NSGA-II wrapper (pymoo)
└── visualize.py           # Pareto front plots & analysis
run_nsga2_fusion.py        # Main entry point
requirements.txt           # Dependencies
sample_outputs/            # Demo run outputs
```

## 🚀 Quick Start

```bash
pip install -r requirements.txt
python run_nsga2_fusion.py --mode demo     # ~2 min CPU
python run_nsga2_fusion.py --mode test     # ~15 min CPU
python run_nsga2_fusion.py --mode full --device cuda  # hours, GPU
```

## 🔧 Decision Variables

| Variable | Range | Description |
|---|---|---|
| `lr` | [1e-5, 1e-2] | Learning rate (log-scale) |
| `lambda_spectral` | [0.1, 5.0] | Spectral loss weight |
| `lambda_spatial` | [0.1, 5.0] | Spatial loss weight |
| `n_filters` | {16..128} | Conv filter count |
| `n_blocks` | {2..8} | Residual block count |
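One plausible way to decode a normalized NSGA-II genome into these hyperparameters is sketched below. The function name and the unit-interval encoding are illustrative assumptions; the actual mapping lives in `nsga2_optimizer.py`. Only the stated ranges (and the log-scale learning rate) come from the table above:

```python
def decode_individual(x):
    """Map a genome x in [0, 1]^5 onto the decision-variable ranges above."""
    lr = 10 ** (-5 + x[0] * 3)                 # log-uniform in [1e-5, 1e-2]
    lambda_spectral = 0.1 + x[1] * 4.9         # [0.1, 5.0]
    lambda_spatial = 0.1 + x[2] * 4.9          # [0.1, 5.0]
    n_filters = int(round(16 + x[3] * 112))    # integer in {16..128}
    n_blocks = int(round(2 + x[4] * 6))        # integer in {2..8}
    return dict(lr=lr, lambda_spectral=lambda_spectral,
                lambda_spatial=lambda_spatial,
                n_filters=n_filters, n_blocks=n_blocks)
```

Sampling the learning rate on a log scale matters here: a uniform draw over [1e-5, 1e-2] would spend almost all of its probability mass above 1e-3, leaving the small-step region nearly unexplored.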


## License

MIT
