Holistic Robust DRO¶
- class dro.neural_model.hrdro_nn.HRNNDRO(input_dim, num_classes, task_type='classification', model_type='mlp', alpha=0.1, r=0.01, epsilon=0.1, learning_approach='HD', adversarial_params=None, train_batch_size=64, device=device(type='cpu'))¶
Bases: BaseNNDRO
Holistic Robust DRO.
Initialize Holistic Robust DRO.
- Parameters:
input_dim (int) – Input feature dimension \(d \geq 1\)
num_classes (int) –
Output dimension:
Classification: \(K \geq 2\) (number of classes)
Regression: Automatically set to 1
task_type (str) –
Learning task type. Supported:
'classification': Cross-entropy loss
'regression': MSE loss
model_type (str) –
Neural architecture type. Supported:
'mlp': Multi-Layer Perceptron (default)
'linear'
'resnet'
'alexnet'
device (torch.device) – Target computation device, defaults to CPU
alpha (float) – Robustness regularization coefficient \(\alpha > 0\), controls model conservativeness. Defaults to 0.1.
r (float) – Statistical error radius \(r > 0\) for distribution shift, defaults to 0.01.
epsilon (float) – Contamination ratio \(\epsilon \in [0,1)\), proportion of adversarial samples. Defaults to 0.1.
learning_approach (str) –
Robust optimization method. Options:
"HR"
: Huberian Robust optimization"HD"
: Hybrid DRO (default)
adversarial_params (dict) – Adversarial training configuration. Defaults to
{"steps": 7, "step_size": 0.03, "norm": "l-inf", "method": "PGD"}
train_batch_size (int) – Training batch size \(B \geq 1\), defaults to 64.
- Raises:
If alpha ≤ 0
If r ≤ 0
If epsilon ∉ [0, 1)
If learning_approach is not a supported option
- Example (CIFAR-10):
>>> adv_params = {"steps": 5, "norm": "l2", "method": "PGD"}
>>> model = HRNNDRO(
...     input_dim=3072,
...     num_classes=10,
...     alpha=0.5,
...     epsilon=0.2,
...     adversarial_params=adv_params
... )
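A companion sketch for a regression setup (the feature dimension below is illustrative; per the parameter notes above, the output dimension is fixed to 1 for regression, and the 'mlp' architecture and remaining defaults are kept):
>>> reg_model = HRNNDRO(
...     input_dim=20,
...     num_classes=1,
...     task_type='regression',
...     learning_approach='HR'
... )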
- fit(X, y, train_ratio=0.8, lr=0.001, batch_size=None, epochs=100, verbose=True)¶
Enhanced training loop with adversarial training.
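A minimal end-to-end training sketch using only the documented fit arguments; the data here is synthetic and purely illustrative, and X, y are assumed to be NumPy arrays of shape (n_samples, input_dim) and (n_samples,):
>>> import numpy as np
>>> X = np.random.randn(512, 3072).astype(np.float32)  # synthetic features (e.g., flattened CIFAR-10 images)
>>> y = np.random.randint(0, 10, size=512)              # synthetic class labels in {0, ..., 9}
>>> model = HRNNDRO(input_dim=3072, num_classes=10, epsilon=0.2)
>>> model.fit(X, y, train_ratio=0.8, lr=1e-3, epochs=10, verbose=False)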