KronRLS - Kronecker kernel RLS for pairwise data with complete training set
class rlscore.learner.kron_rls.KronRLS(**kwargs)
Bases: rlscore.predictor.pairwise_predictor.PairwisePredictorInterface
Regularized least-squares regression with paired-input (dyadic) data and Kronecker kernels. A closed form solution exists for the complete data setting, where labels are known for all pairs.
Parameters: - X1 : {array-like}, shape = [n_samples1, n_features1]
Data matrix 1 (for linear KronRLS)
- X2 : {array-like}, shape = [n_samples2, n_features2]
Data matrix 2 (for linear KronRLS)
- K1 : {array-like}, shape = [n_samples1, n_samples1]
Kernel matrix 1 (for kernel KronRLS)
- K2 : {array-like}, shape = [n_samples2, n_samples2]
Kernel matrix 2 (for kernel KronRLS)
- Y : {array-like}, shape = [n_samples1*n_samples2]
Training set labels. Label for (X1[i], X2[j]) maps to Y[i + j*n_samples1] (column order; see the flattening sketch after this parameter list).
- regparam : float, optional
regularization parameter, regparam > 0 (default=1.0)
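The column-order convention corresponds to NumPy's Fortran-order flattening. A minimal illustration (the small label matrix below is hypothetical):

import numpy as np

# Hypothetical label matrix: rows index domain 1 inputs, columns domain 2 inputs.
Y2d = np.array([[1., 2., 3.],
                [4., 5., 6.]])   # n_samples1 = 2, n_samples2 = 3

# Column-order (Fortran-order) flattening matches Y[i + j*n_samples1].
Y = Y2d.ravel(order='F')
print(Y)  # [1. 4. 2. 5. 3. 6.]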
Notes
Computational complexity of training: m = n_samples1, n = n_samples2, d = n_features1, e = n_features2
O(mnd + mne) Linear version (assumption: d < m, e < n)
O(m^3 + n^3) Kernel version
KronRLS implements the closed form solution described in [1,2]. The publications consider only the special case X1 = X2 (or equivalently K1 = K2), while this implementation also allows the two sets of inputs to come from different domains. By default, KronRLS trains a regression method that minimizes mean squared error.
Currently, a Kronecker RankRLS variant that minimizes magnitude-preserving ranking error can be trained with the method solve_linear_conditional_ranking(…) (admittedly a hack; kernels are currently not supported for ranking). Inputs from domain 1 act as queries, and inputs from domain 2 as objects to be ranked.
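A minimal training sketch with the linear version (synthetic data; the package-level import follows the RLScore tutorials):

import numpy as np
from rlscore.learner import KronRLS

rng = np.random.RandomState(0)
n1, n2 = 10, 15
X1 = rng.randn(n1, 5)    # domain 1 inputs
X2 = rng.randn(n2, 8)    # domain 2 inputs
Y = rng.randn(n1 * n2)   # labels for all pairs, column order

# Training happens at construction; the closed form solution is computed here.
learner = KronRLS(X1=X1, X2=X2, Y=Y, regparam=1.0)

# Predictions for all training pairs, in the same column-order layout as Y.
P = learner.predict(X1, X2)
print(P.shape)   # expected: (150,)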
References
[1] Tapio Pahikkala, Willem Waegeman, Antti Airola, Tapio Salakoski, and Bernard De Baets. Conditional ranking on relational data. Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 2010.
[2] Tapio Pahikkala, Antti Airola, Michiel Stock, Bernard De Baets, and Willem Waegeman. Efficient regularized least-squares algorithms for conditional ranking on relational data. Machine Learning, 93(2-3):321–356, 2013.
Attributes: - predictor : {LinearPairwisePredictor, KernelPairwisePredictor}
trained predictor
in_sample_loo()
Computes the in-sample leave-one-out cross-validation predictions. By in-sample we mean the setting where one entry of Y is left out at a time.
Returns: - F : array, shape = [n_samples1*n_samples2]
Leave-one-out predictions. Prediction for (X1[i], X2[j]) maps to F[i + j*n_samples1] (column order).
Notes
Computational complexity of leave-one-out:
m = n_samples1, n = n_samples2, d = n_features1, e = n_features2
O(mne + mnd) Linear version (assumption: d < m, e < n)
O(mn^2 + m^2n) Kernel version
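The leave-one-out predictions are commonly used to select the regularization parameter without a separate hold-out set. A sketch with synthetic data (mean squared error as the selection criterion is an assumption here, not mandated by the API):

import numpy as np
from rlscore.learner import KronRLS

rng = np.random.RandomState(0)
X1, X2 = rng.randn(10, 5), rng.randn(15, 8)
Y = rng.randn(10 * 15)

learner = KronRLS(X1=X1, X2=X2, Y=Y)
best_rp, best_err = None, np.inf
for rp in [2.0**k for k in range(-10, 11)]:
    learner.solve(rp)               # re-train for this regparam
    F = learner.in_sample_loo()     # leave-one-out predictions for all pairs
    err = np.mean((Y - F) ** 2)
    if err < best_err:
        best_rp, best_err = rp, err
print(best_rp, best_err)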
predict(X1=None, X2=None, inds_X1pred=None, inds_X2pred=None, pko=None)
Computes predictions for test examples.
Parameters: - X1 : {array-like}, shape = [n_samples1, n_features1]
first test data matrix
- X2 : {array-like}, shape = [n_samples2, n_features2]
second test data matrix
- inds_X1pred : array of indices, optional
rows of X1 for which predictions are needed
- inds_X2pred : array of indices, optional
rows of X2 for which predictions are needed
Notes
If using kernels, give the kernel matrices K1 and K2 as arguments in place of X1 and X2.
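A sketch of kernel prediction with explicit pair indices. The linear kernel helper below is illustrative, and the example assumes the two index lists are read elementwise as (inds_X1pred[k], inds_X2pred[k]) pairs, as in RLScore's pairwise predictors:

import numpy as np
from rlscore.learner import KronRLS

def linear_kernel(A, B):
    # Illustrative helper; any valid kernel matrix works here.
    return A.dot(B.T)

rng = np.random.RandomState(0)
X1, X2 = rng.randn(10, 5), rng.randn(15, 8)
Y = rng.randn(10 * 15)

learner = KronRLS(K1=linear_kernel(X1, X1), K2=linear_kernel(X2, X2), Y=Y, regparam=1.0)

# Test kernels: rows are test inputs, columns are training inputs.
X1_test, X2_test = rng.randn(3, 5), rng.randn(4, 8)
K1_test = linear_kernel(X1_test, X1)
K2_test = linear_kernel(X2_test, X2)

# Predict only three selected test pairs instead of the full 3 x 4 grid.
P = learner.predict(K1_test, K2_test,
                    inds_X1pred=np.array([0, 1, 2]),
                    inds_X2pred=np.array([2, 0, 3]))
print(P)   # one prediction per listed pair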
solve(regparam)
Re-trains KronRLS for the given regparam.
Parameters: - regparam : float
regularization parameter, regparam > 0
Notes
Computational complexity of re-training:
m = n_samples1, n = n_samples2, d = n_features1, e = n_features2
O(ed^2 + de^2) Linear version (assumption: d < m, e < n)
O(m^3 + n^3) Kernel version
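Re-training replaces the learner's trained predictor, so the effect of the regularization strength can be compared directly. A minimal sketch with synthetic data (the shrinkage comparison at the end is an illustration, not part of the API):

import numpy as np
from rlscore.learner import KronRLS

rng = np.random.RandomState(0)
X1, X2 = rng.randn(10, 5), rng.randn(15, 8)
Y = rng.randn(10 * 15)

learner = KronRLS(X1=X1, X2=X2, Y=Y, regparam=1.0)
P_weak = learner.predict(X1, X2)     # fitted values under regparam=1.0

learner.solve(1000.0)                # re-train with much stronger regularization
P_strong = learner.predict(X1, X2)

# Stronger regularization typically shrinks the fitted values toward zero.
print(np.abs(P_weak).mean(), np.abs(P_strong).mean())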
solve_linear_conditional_ranking(regparam)
Trains conditional ranking KronRLS that ranks objects from domain 2 against objects from domain 1.
Parameters: - regparam : float
regularization parameter, regparam > 0
Notes
Minimizes a RankRLS-type loss. Currently only the linear kernel is supported. Including the code here is a hack; this should probably be implemented as an independent learner.
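A sketch of the conditional ranking mode with synthetic data. Domain 1 inputs act as queries and domain 2 inputs as the objects to be ranked; per the note above, only the linear version is supported:

import numpy as np
from rlscore.learner import KronRLS

rng = np.random.RandomState(0)
X1 = rng.randn(10, 5)    # queries (domain 1)
X2 = rng.randn(15, 8)    # objects to be ranked (domain 2)
Y = rng.randn(10 * 15)   # relevance labels for all pairs, column order

learner = KronRLS(X1=X1, X2=X2, Y=Y)
learner.solve_linear_conditional_ranking(1.0)

# Scores for ranking all domain 2 objects against each query, column order.
P = learner.predict(X1, X2)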