Personalized Loss¶
This section walks through an example of defining a personalized loss and plugging it into different DRO formulations.
\(f\)-divergence DRO¶
For \(f\)-divergence DRO, the loss can be modified easily. Below, we replace the standard loss with an asymmetric quantile-regression loss: \(\ell((\theta, b);(X, Y)) = 3(Y - \theta^{\top}X - b)^+ + (\theta^{\top}X + b - Y)^+\).
[2]:
import cvxpy as cp
import numpy as np
from types import MethodType
from dro.src.linear_model.wasserstein_dro import *
from dro.src.linear_model.chi2_dro import *

X = np.array([[1, 1], [2, 1], [3, 1], [4, 1]])
y = np.array([1, 1, 0, 0])
model = Chi2DRO(input_dim=2, model_type='quantile')

# NumPy version of the loss, used for evaluation.
def _loss(self, X, y):
    return 3 * np.maximum(y - X @ self.theta - self.b, 0) + np.maximum(X @ self.theta + self.b - y, 0)

# CVXPY version of the loss, used inside the optimization problem.
def _cvx_loss(self, X, y, theta, b):
    return 3 * cp.pos(y - X @ theta - b) + cp.pos(X @ theta + b - y)

model._loss = MethodType(_loss, model)
model._cvx_loss = MethodType(_cvx_loss, model)

for k in range(5):
    model.update({'eps': 0.05 * (k + 1)})
    print(model.fit(X, y))
{'theta': [-0.4889100722052225, 0.0], 'b': array(1.9556403)}
{'theta': [-0.4710318069318671, 0.0], 'b': array(1.88412726)}
{'theta': [-0.4651670885521472, 0.0], 'b': array(1.86066836)}
{'theta': [-0.4620192280358392, 0.0], 'b': array(1.84807691)}
{'theta': [-0.4599744797051139, 0.0], 'b': array(1.83989792)}
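As a quick sanity check, the personalized loss can be evaluated at a fitted solution. The following standalone sketch (plain NumPy, not part of the dro API) plugs the last printed \(\theta\) and \(b\) back into the loss:

```python
import numpy as np

# Standalone sanity check: evaluate the personalized quantile loss at the
# final printed solution (theta and b copied from the last output line).
X = np.array([[1, 1], [2, 1], [3, 1], [4, 1]])
y = np.array([1, 1, 0, 0])
theta = np.array([-0.4599744797051139, 0.0])
b = 1.83989792

resid = y - X @ theta - b
per_sample = 3 * np.maximum(resid, 0) + np.maximum(-resid, 0)
print(per_sample.mean())
```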
Wasserstein DRO¶
For more complicated formulations such as HR-DRO and Wasserstein DRO, where the robust objective depends on the structure of the loss itself rather than only on its pointwise values, personalized losses have not been implemented yet.
Uncontextual and contextual robust learning:

- uncontextual: take \(X\) to be a one-dimensional all-ones vector
- contextual: newsvendor (the same as quantile regression); portfolio
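To make the uncontextual case concrete, here is a standalone NumPy sketch (illustrative only, not using the dro API): when \(X\) is an all-ones vector, \(\theta^{\top}X + b\) collapses to a single scalar \(q\), and minimizing the asymmetric loss above over \(q\) recovers the \(3/(3+1) = 0.75\) quantile of \(y\).

```python
import numpy as np

# With X all ones, the prediction is a single scalar q = theta + b, and the
# loss 3*(y - q)^+ + (q - y)^+ is a rescaled pinball loss whose empirical
# minimizer is the 0.75 quantile of y.
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)

def avg_loss(q, y):
    return np.mean(3 * np.maximum(y - q, 0) + np.maximum(q - y, 0))

grid = np.linspace(-3, 3, 2001)
q_star = grid[np.argmin([avg_loss(q, y) for q in grid])]
print(q_star, np.quantile(y, 0.75))  # the two values should be close
```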