SteepestDescentMMC - unsupervised RLS; maximum margin clustering type, steepest descent

class rlscore.learner.steepest_descent_mmc.SteepestDescentMMC(X, regparam=1.0, number_of_clusters=2, kernel='LinearKernel', basis_vectors=None, Y=None, fixed_indices=None, callback=None, **kwargs)

Bases: rlscore.predictor.predictor.PredictorInterface

RLS-based maximum-margin clustering. Performs a steepest descent search over the cluster assignments, combined with a shaking heuristic that helps the search escape local minima.
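The core idea can be sketched in plain NumPy (a simplified illustration only, not the library's implementation: the name `mmc_sketch` is hypothetical, the shaking heuristic is omitted, and the real algorithm uses efficient incremental updates instead of re-solving the RLS problem for every candidate move):

```python
import numpy as np

def rls_objective(K, Y, regparam):
    # RLS objective ||Y - K A||^2 + regparam * tr(A^T K A),
    # evaluated at the closed-form optimum A = (K + regparam*I)^-1 Y.
    n = K.shape[0]
    A = np.linalg.solve(K + regparam * np.eye(n), Y)
    P = K @ A
    return np.sum((Y - P) ** 2) + regparam * np.sum(A * (K @ A))

def mmc_sketch(X, n_clusters=2, regparam=1.0, max_iters=20, seed=0):
    # Steepest-descent search over one-vs-all cluster encodings:
    # repeatedly perform the single cluster reassignment that lowers
    # the RLS objective the most, until no move improves it.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                          # linear kernel
    labels = rng.integers(0, n_clusters, size=n)

    def encode(lab):
        # One-vs-all encoding: +1 for the assigned cluster, -1 elsewhere
        Y = -np.ones((n, n_clusters))
        Y[np.arange(n), lab] = 1.0
        return Y

    best = rls_objective(K, encode(labels), regparam)
    for _ in range(max_iters):
        move = None
        for i in range(n):
            for c in range(n_clusters):
                if c == labels[i]:
                    continue
                cand = labels.copy()
                cand[i] = c
                obj = rls_objective(K, encode(cand), regparam)
                if obj < best:
                    best, move = obj, (i, c)
        if move is None:
            break                        # local minimum reached
        labels[move[0]] = move[1]
    return labels
```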

Parameters:
X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Data matrix

regparam : float, optional

regularization parameter, regparam > 0 (default=1.0)

number_of_clusters : int, optional

number of clusters (default = 2)

kernel : {‘LinearKernel’, ‘GaussianKernel’, ‘PolynomialKernel’, ‘PrecomputedKernel’, …}

kernel function name, imported dynamically from rlscore.kernel

basis_vectors : {array-like, sparse matrix}, shape = [n_bvectors, n_features], optional

basis vectors (typically a randomly chosen subset of the training data)

Y : {array-like}, shape = [n_samples] or [n_samples, n_clusters], optional

Initial clustering (binary or one-versus-all encoding)
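For example, an initial clustering given as integer labels can be converted to a one-versus-all encoding as follows (a minimal sketch assuming a +1/-1 encoding; per the parameter description above, a plain label vector of shape [n_samples] is also accepted):

```python
import numpy as np

labels = np.array([0, 1, 1, 0, 2])   # initial cluster index of each sample
n_clusters = 3

# One-versus-all encoding: +1 for the sample's cluster, -1 elsewhere
Y = -np.ones((labels.shape[0], n_clusters))
Y[np.arange(labels.shape[0]), labels] = 1.0
```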

fixed_indices : list of indices, optional

Instances whose cluster assignments are fixed in advance (i.e. not allowed to change)

callback : callback function, optional

called after each pass through the data

Other Parameters:
bias : float, optional

LinearKernel: the model is w*x + bias*w0 (default=1.0)

gamma : float, optional

GaussianKernel: k(xi,xj) = e^(-gamma*<xi-xj,xi-xj>) (default=1.0)

PolynomialKernel: k(xi,xj) = (gamma * <xi, xj> + coef0)**degree (default=1.0)

coef0 : float, optional

PolynomialKernel: k(xi,xj) = (gamma * <xi, xj> + coef0)**degree (default=0.)

degree : int, optional

PolynomialKernel: k(xi,xj) = (gamma * <xi, xj> + coef0)**degree (default=2)
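The kernel formulas above translate directly into NumPy (a small standalone check, independent of rlscore; the function names are illustrative, not the library's API):

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    # k(xi, xj) = e^(-gamma * <xi - xj, xi - xj>)
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

def polynomial_kernel(X1, X2, gamma=1.0, coef0=0.0, degree=2):
    # k(xi, xj) = (gamma * <xi, xj> + coef0) ** degree
    return (gamma * (X1 @ X2.T) + coef0) ** degree
```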

Notes

The steepest descent variant of the algorithm is described in [1].

References

[1] Tapio Pahikkala, Antti Airola, Fabian Gieseke, and Oliver Kramer. Unsupervised multi-class regularized least-squares classification. The 12th IEEE International Conference on Data Mining (ICDM 2012), pages 585–594. IEEE Computer Society, December 2012

Attributes:
predictor : {LinearPredictor, KernelPredictor}

trained predictor

predict(X)

Predicts outputs for new inputs

Parameters:
X : {array-like, sparse matrix}, shape = [n_samples, n_features]

input data matrix

Returns:
P : array, shape = [n_samples, n_clusters]

predictions
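The prediction matrix has one column per cluster; a hard cluster assignment for each sample is obtained by taking the column with the largest value (a usage sketch; the matrix P below is made up for illustration):

```python
import numpy as np

# Hypothetical prediction matrix: 4 samples, 3 clusters
P = np.array([[ 0.9, -0.8, -1.1],
              [-1.0,  0.7, -0.9],
              [-0.8, -1.2,  1.0],
              [ 0.6, -0.5, -0.7]])

assignments = np.argmax(P, axis=1)   # cluster index per sample
# assignments == [0, 1, 2, 0]
```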