GlobalRankRLS - ranking regularized least-squares, ordinal regression

class rlscore.learner.global_rankrls.GlobalRankRLS(X, Y, regparam=1.0, kernel='LinearKernel', basis_vectors=None, **kwargs)

Bases: rlscore.predictor.predictor.PredictorInterface

RankRLS: regularized least-squares ranking. This class implements a global ranking model; for query-structured data, see QueryRankRLS.

Parameters:
X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Data matrix

Y : {array-like}, shape = [n_samples] or [n_samples, n_labels]

Training set labels

regparam : float, optional

regularization parameter, regparam > 0 (default=1.0)

kernel : {'LinearKernel', 'GaussianKernel', 'PolynomialKernel', 'PrecomputedKernel', ...}

kernel function name, imported dynamically from rlscore.kernel

basis_vectors : {array-like, sparse matrix}, shape = [n_bvectors, n_features], optional

basis vectors (typically a randomly chosen subset of the training data)

Other Parameters:
 
Typical kernel parameters include:
bias : float, optional

LinearKernel: the model is w*x + bias*w0 (default=1.0)

gamma : float, optional

GaussianKernel: k(xi,xj) = e^(-gamma*<xi-xj,xi-xj>) (default=1.0)

PolynomialKernel: k(xi,xj) = (gamma * <xi, xj> + coef0)**degree (default=1.0)

coef0 : float, optional

PolynomialKernel: k(xi,xj) = (gamma * <xi, xj> + coef0)**degree (default=0.)

degree : int, optional

PolynomialKernel: k(xi,xj) = (gamma * <xi, xj> + coef0)**degree (default=2)
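
For illustration, a minimal sketch (synthetic data and arbitrary parameter values, not a recommendation) of passing a kernel parameter through the constructor:

import numpy as np
from rlscore.learner.global_rankrls import GlobalRankRLS

np.random.seed(0)
X = np.random.randn(100, 10)   # synthetic data, for illustration only
Y = np.random.randn(100)

# Extra keyword arguments (here gamma) are forwarded to the kernel,
# which is imported by name from rlscore.kernel.
learner = GlobalRankRLS(X, Y, regparam=1.0,
                        kernel='GaussianKernel', gamma=0.01)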

Notes

Computational complexity of training: m = n_samples, d = n_features, l = n_labels, b = n_bvectors

O(m^3 + dm^2 + lm^2): basic case

O(md^2 + lmd): Linear Kernel, d < m

O(mb^2 + lmb): Sparse approximation with basis vectors

The RankRLS algorithm is described in [1,2]. The leave-pair-out cross-validation algorithm is described in [2,3]. The use of leave-pair-out cross-validation for AUC estimation is analyzed in [4].

References

[1] Tapio Pahikkala, Evgeni Tsivtsivadze, Antti Airola, Jorma Boberg, and Tapio Salakoski. Learning to rank with pairwise regularized least-squares. In Thorsten Joachims, Hang Li, Tie-Yan Liu, and ChengXiang Zhai, editors, SIGIR 2007 Workshop on Learning to Rank for Information Retrieval, pages 27-33, 2007.

[2] Tapio Pahikkala, Evgeni Tsivtsivadze, Antti Airola, Jouni Jarvinen, and Jorma Boberg. An efficient algorithm for learning to rank from preference graphs. Machine Learning, 75(1):129-165, 2009.

[3] Tapio Pahikkala, Antti Airola, Jorma Boberg, and Tapio Salakoski. Exact and efficient leave-pair-out cross-validation for ranking RLS. In Proceedings of the 2nd International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR'08), pages 1-8, Espoo, Finland, 2008.

[4] Antti Airola, Tapio Pahikkala, Willem Waegeman, Bernard De Baets, and Tapio Salakoski. An Experimental Comparison of Cross-Validation Techniques for Estimating the Area Under the ROC Curve. Computational Statistics & Data Analysis, 55(4):1828-1844, 2011.

Attributes:
predictor : {LinearPredictor, KernelPredictor}

trained predictor
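
A minimal training sketch with the default linear kernel (synthetic data, for illustration only); training happens in the constructor, after which the fitted model is available through the predictor attribute:

import numpy as np
from rlscore.learner.global_rankrls import GlobalRankRLS

np.random.seed(1)
X = np.random.randn(100, 10)
Y = np.dot(X, np.random.randn(10))   # synthetic utility scores

learner = GlobalRankRLS(X, Y, regparam=1.0)   # trains on construction
P = learner.predictor.predict(X)              # trained LinearPredictor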

holdout(indices)

Computes hold-out predictions for a trained RankRLS

Parameters:
indices : list of indices, shape = [n_hsamples]

list of indices of training examples belonging to the set for which the hold-out predictions are calculated. The list cannot be empty.

Returns:
F : array, shape = [n_hsamples, n_labels]

holdout predictions

Notes

The algorithm is a modification of those published in [1,2] for the regular RLS method.

References

[1] Tapio Pahikkala, Jorma Boberg, and Tapio Salakoski. Fast n-Fold Cross-Validation for Regularized Least-Squares. Proceedings of the Ninth Scandinavian Conference on Artificial Intelligence, 83-90, Otamedia Oy, 2006.

[2] Tapio Pahikkala, Hanna Suominen, and Jorma Boberg. Efficient cross-validation for kernelized least-squares regression with sparse basis expansions. Machine Learning, 87(3):381–407, June 2012.
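
Continuing the training sketch above, a hold-out evaluation sketch; cindex (the concordance index from rlscore.measure) scores how well the hold-out predictions rank the held-out examples:

from rlscore.measure import cindex

hold_inds = list(range(10))        # indices of the hold-out examples
F = learner.holdout(hold_inds)     # predictions computed as if these were left out
print(cindex(Y[hold_inds], F))     # pairwise ranking accuracy on the hold-out set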

leave_one_out()

Computes leave-one-out predictions for a trained RankRLS

Returns:
F : array, shape = [n_samples, n_labels]

leave-one-out predictions

Notes

Provided for reference; with RankRLS, you should usually use leave_pair_out instead.
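
Continuing the same sketch (note that leave_pair_out, below, is usually the preferred estimator for ranking):

F_loo = learner.leave_one_out()    # one prediction per training example
print(cindex(Y, F_loo))            # ranking performance estimate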

leave_pair_out(pairs_start_inds, pairs_end_inds)

Computes leave-pair-out predictions for a trained RankRLS.

Parameters:
pairs_start_inds : list of indices, shape = [n_pairs]

list of indices from range [0, n_samples-1]

pairs_end_inds : list of indices, shape = [n_pairs]

list of indices from range [0, n_samples-1]

Returns:
P1 : array, shape = [n_pairs]

holdout predictions for pairs_start_inds

P2 : array, shape = [n_pairs]

holdout predictions for pairs_end_inds

Notes

Computes the leave-pair-out cross-validation predictions, where each (i, j) pair with i = pairs_start_inds[k] and j = pairs_end_inds[k] is left out in turn.

When estimating area under ROC curve with leave-pair-out, one should leave out all positive-negative pairs, while for estimating the general ranking error one should leave out all pairs with different labels.

Computational complexity of leave-pair-out with most pairs left out: m = n_samples, d = n_features, l = n_labels, b = n_bvectors

O(lm^2 + m^3): basic case

O(lm^2 + dm^2): Linear Kernel, d < m

O(lm^2 + bm^2): Sparse approximation with basis vectors

The leave-pair-out cross-validation algorithm is described in [1,2]. The use of leave-pair-out cross-validation for AUC estimation has been analyzed in [3].

[1] Tapio Pahikkala, Evgeni Tsivtsivadze, Antti Airola, Jouni Jarvinen, and Jorma Boberg. An efficient algorithm for learning to rank from preference graphs. Machine Learning, 75(1):129-165, 2009.

[2] Tapio Pahikkala, Antti Airola, Jorma Boberg, and Tapio Salakoski. Exact and efficient leave-pair-out cross-validation for ranking RLS. In Proceedings of the 2nd International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR'08), pages 1-8, Espoo, Finland, 2008.

[3] Antti Airola, Tapio Pahikkala, Willem Waegeman, Bernard De Baets, and Tapio Salakoski. An Experimental Comparison of Cross-Validation Techniques for Estimating the Area Under the ROC Curve. Computational Statistics & Data Analysis, 55(4):1828-1844, 2011.
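
For example, estimating AUC requires enumerating all positive-negative pairs. A self-contained sketch with synthetic binary labels (for illustration only):

import numpy as np
from rlscore.learner.global_rankrls import GlobalRankRLS

np.random.seed(2)
X = np.random.randn(60, 10)
Y = np.where(np.random.randn(60) > 0, 1.0, -1.0)   # synthetic labels in {-1, +1}
learner = GlobalRankRLS(X, Y, regparam=1.0)

pos = np.where(Y > 0)[0]
neg = np.where(Y < 0)[0]
starts = np.repeat(pos, len(neg)).tolist()   # all positive-negative index pairs
ends = np.tile(neg, len(pos)).tolist()

P1, P2 = learner.leave_pair_out(starts, ends)
# AUC estimate: fraction of pairs where the positive example scores higher,
# with ties counted as 0.5
auc = np.mean((P1 > P2) + 0.5 * (P1 == P2))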

predict(X)

Predicts outputs for new inputs

Parameters:
X : {array-like, sparse matrix}, shape = [n_samples, n_features]

input data matrix

Returns:
P : array, shape = [n_samples, n_labels]

predictions
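
Continuing the earlier sketch; new inputs only need the same number of features as the training data:

X_new = np.random.randn(5, 10)   # same n_features as in training
P = learner.predict(X_new)       # one predicted ranking score per row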

solve(regparam=1.0)

Re-trains RankRLS for the given regparam

Parameters:
regparam : float, optional

regularization parameter, regparam > 0 (default=1.0)

Notes

Computational complexity of re-training: m = n_samples, d = n_features, l = n_labels, b = n_bvectors

O(lm^2): basic case

O(lmd): Linear Kernel, d < m

O(lmb): Sparse approximation with basis vectors
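
Since solve re-trains at a much lower cost than full construction (compare the complexity notes above), it is natural for scanning a grid of regularization parameters. A sketch continuing the earlier synthetic setup, with cindex from rlscore.measure as the selection criterion (leave_one_out is used here for brevity; leave_pair_out gives the recommended ranking estimate):

from rlscore.measure import cindex

best_perf, best_regparam = -1.0, None
for regparam in [2.0 ** k for k in range(-10, 11)]:
    learner.solve(regparam)          # cheap re-training, O(lm^2) in the basic case
    F = learner.leave_one_out()
    perf = cindex(Y, F)
    if perf > best_perf:
        best_perf, best_regparam = perf, regparam
learner.solve(best_regparam)         # final model with the selected value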