quapy.classification package¶
Submodules¶
quapy.classification.methods module¶
- class quapy.classification.methods.PCALR(n_components=100, **kwargs)¶
Bases: sklearn.base.BaseEstimator
An example of a classification method that also generates embedded inputs, such as those required by QuaNet. This example simply combines Principal Component Analysis (PCA) with Logistic Regression (LR).
- fit(X, y)¶
- get_params()¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- predict(X)¶
- predict_proba(X)¶
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- transform(X)¶
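A minimal usage sketch (not part of the documented API), assuming scikit-learn-style dense inputs; the data and dimensions below are synthetic and purely illustrative:

```python
import numpy as np
from quapy.classification.methods import PCALR

# illustrative synthetic data: 1000 instances with 500 features, binary labels
X = np.random.rand(1000, 500)
y = np.random.randint(0, 2, size=1000)

clf = PCALR(n_components=100)      # project onto 100 principal components, then fit LR
clf.fit(X, y)

labels = clf.predict(X)            # hard label predictions
posteriors = clf.predict_proba(X)  # class posterior probabilities
embeddings = clf.transform(X)      # embedded (PCA-reduced) inputs, as used by QuaNet
```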
quapy.classification.neural module¶
- class quapy.classification.neural.CNNnet(vocabulary_size, n_classes, embedding_size=100, hidden_size=256, repr_size=100, kernel_heights=[3, 5, 7], stride=1, padding=0, drop_p=0.5)¶
Bases: quapy.classification.neural.TextClassifierNet
- conv_block(input, conv_layer)¶
- document_embedding(input)¶
- get_params()¶
- property vocabulary_size¶
- class quapy.classification.neural.LSTMnet(vocabulary_size, n_classes, embedding_size=100, hidden_size=256, repr_size=100, lstm_class_nlayers=1, drop_p=0.5)¶
Bases: quapy.classification.neural.TextClassifierNet
- document_embedding(x)¶
- get_params()¶
- property vocabulary_size¶
- class quapy.classification.neural.NeuralClassifierTrainer(net: quapy.classification.neural.TextClassifierNet, lr=0.001, weight_decay=0, patience=10, epochs=200, batch_size=64, batch_size_test=512, padding_length=300, device='cpu', checkpointpath='../checkpoint/classifier_net.dat')¶
Bases: object
- property device¶
- fit(instances, labels, val_split=0.3)¶
- get_params()¶
- predict(instances)¶
- predict_proba(instances)¶
- reset_net_params(vocab_size, n_classes)¶
- set_params(**params)¶
- transform(instances)¶
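The trainer wraps a TextClassifierNet (such as the LSTMnet or CNNnet above) behind a scikit-learn-like interface. A sketch under the assumption that instances are passed as lists of token indices; the vocabulary size, synthetic data, and checkpoint path below are illustrative:

```python
import numpy as np
from quapy.classification.neural import LSTMnet, NeuralClassifierTrainer

# hypothetical setting: a 5000-token vocabulary, binary labels, synthetic documents
vocab_size, n_docs, doc_len = 5000, 200, 50
instances = [np.random.randint(0, vocab_size, size=doc_len).tolist() for _ in range(n_docs)]
labels = np.random.randint(0, 2, size=n_docs)

net = LSTMnet(vocabulary_size=vocab_size, n_classes=2, embedding_size=100, repr_size=100)
trainer = NeuralClassifierTrainer(
    net,
    lr=1e-3,
    epochs=5,                       # kept low for the sketch; the default is 200
    padding_length=300,             # documents are padded/truncated to this length
    device='cpu',                   # or 'cuda' if available
    checkpointpath='./checkpoint/classifier_net.dat'  # illustrative path
)

trainer.fit(instances, labels, val_split=0.3)
predicted = trainer.predict(instances)          # hard predictions
posteriors = trainer.predict_proba(instances)   # class posteriors
embeddings = trainer.transform(instances)       # repr_size-dimensional document embeddings
```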
- class quapy.classification.neural.TextClassifierNet¶
Bases: torch.nn.modules.module.Module
- dimensions()¶
- abstract document_embedding(x)¶
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them (see the usage sketch after this class listing).
- abstract get_params()¶
- predict_proba(x)¶
- property vocabulary_size¶
- xavier_uniform()¶
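As the note on forward() indicates, the network instance should be called directly rather than invoking forward(), so that registered hooks are run. A short sketch using CNNnet; the vocabulary size and batch below are synthetic and illustrative:

```python
import torch
from quapy.classification.neural import CNNnet

net = CNNnet(vocabulary_size=5000, n_classes=2)

# a hypothetical batch of 8 documents, each represented by 50 token indices
x = torch.randint(0, 5000, (8, 50))

output = net(x)                          # call the instance; this delegates to forward(x)
posteriors = net.predict_proba(x)        # class posterior probabilities
doc_vectors = net.document_embedding(x)  # repr_size-dimensional document embeddings
```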
quapy.classification.svmperf module¶
- class quapy.classification.svmperf.SVMperf(svmperf_base, C=0.01, verbose=False, loss='01')¶
Bases: sklearn.base.BaseEstimator, sklearn.base.ClassifierMixin
- decision_function(X, y=None)¶
- fit(X, y)¶
- predict(X)¶
- set_params(**parameters)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- valid_losses = {'01': 0, 'f1': 1, 'kld': 12, 'mae': 26, 'mrae': 27, 'nkld': 13, 'q': 22, 'qacc': 23, 'qf1': 24, 'qgm': 25}¶
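A usage sketch, assuming the SVMperf binaries have been compiled and patched as required by QuaPy and that svmperf_base points to that installation; the path and data below are illustrative:

```python
import numpy as np
from quapy.classification.svmperf import SVMperf

# illustrative synthetic binary data
X = np.random.rand(500, 50)
y = np.random.randint(0, 2, size=500)

# 'kld' is one of the quantification-oriented losses listed in valid_losses
svm = SVMperf(svmperf_base='./svm_perf_quantification', C=0.01, loss='kld')
svm.fit(X, y)

predictions = svm.predict(X)
scores = svm.decision_function(X)
```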