# Classification

All classifiers implement `IClassificationModel` and operate directly on `Matrix` and `Vector` primitives.
## Logistic Regression

Class: `Logistic`
| Hyperparameter | Description |
|---|---|
| LearningRate | Gradient descent step size |
| MaxIterations | Maximum number of iterations |
| FitIntercept | Whether to include a bias term |
| RegularizationStrength | L2 penalty coefficient |
| Tolerance | Convergence threshold |
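These hyperparameters map onto plain batch gradient descent on the log-loss. The following is a library-independent sketch in Python (the function names are illustrative, not the actual `Logistic` API), assuming 0/1 labels:

```python
import math

def train_logistic(X, y, learning_rate=0.1, max_iterations=1000,
                   fit_intercept=True, regularization_strength=0.0,
                   tolerance=1e-6):
    """Batch gradient descent for L2-regularized logistic regression."""
    if fit_intercept:
        X = [[1.0] + row for row in X]  # prepend a bias column
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(max_iterations):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid prediction
            for j in range(d):
                grad[j] += (p - yi) * xi[j]
        for j in range(d):
            # L2 penalty; conventionally the bias term is not penalized.
            penalty = 0.0 if (fit_intercept and j == 0) \
                else regularization_strength * w[j]
            grad[j] = grad[j] / n + penalty
        step = [learning_rate * g for g in grad]
        w = [wj - sj for wj, sj in zip(w, step)]
        # Stop once parameter updates fall below the tolerance.
        if max(abs(s) for s in step) < tolerance:
            break
    return w

def predict_logistic(w, x, fit_intercept=True):
    if fit_intercept:
        x = [1.0] + x
    z = sum(wj * xj for wj, xj in zip(w, x))
    return 1 if z >= 0 else 0  # z >= 0 is equivalent to p >= 0.5
```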
## Decision Tree

Class: `DecisionTree`
| Hyperparameter | Description |
|---|---|
| MaxDepth | Maximum tree depth |
| MinSamplesSplit | Minimum samples required to split a node |
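Both hyperparameters are stopping rules for the recursive splitting procedure. A minimal sketch (illustrative names, binary 0/1 labels, Gini impurity assumed as the split criterion):

```python
def gini(labels):
    """Gini impurity for binary 0/1 labels."""
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def build_tree(X, y, depth=0, max_depth=3, min_samples_split=2):
    # Stop when the node is pure, too deep, or too small to split.
    if len(set(y)) == 1 or depth >= max_depth or len(y) < min_samples_split:
        return {"leaf": round(sum(y) / len(y))}  # majority class
    best = None
    for j in range(len(X[0])):                   # try every feature
        for t in sorted(set(row[j] for row in X)):  # and every threshold
            left = [i for i, row in enumerate(X) if row[j] <= t]
            right = [i for i, row in enumerate(X) if row[j] > t]
            if not left or not right:
                continue
            # Weighted impurity of the two children.
            score = (len(left) * gini([y[i] for i in left]) +
                     len(right) * gini([y[i] for i in right])) / len(y)
            if best is None or score < best[0]:
                best = (score, j, t, left, right)
    if best is None:  # no valid split (all rows identical)
        return {"leaf": round(sum(y) / len(y))}
    _, j, t, left, right = best
    return {
        "feature": j, "threshold": t,
        "left": build_tree([X[i] for i in left], [y[i] for i in left],
                           depth + 1, max_depth, min_samples_split),
        "right": build_tree([X[i] for i in right], [y[i] for i in right],
                            depth + 1, max_depth, min_samples_split),
    }

def predict_tree(node, x):
    while "leaf" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] \
            else node["right"]
    return node["leaf"]
```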
## Random Forest

Class: `RandomForest`
| Hyperparameter | Description |
|---|---|
| NumTrees | Number of trees in the ensemble |
| MaxDepth | Maximum depth of each tree |
| MinSamplesSplit | Minimum samples required to split a node |
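`NumTrees` controls the bagging ensemble, while `MaxDepth` and `MinSamplesSplit` are passed through to each tree. The bagging-and-voting structure can be sketched as follows; to keep it short, a depth-1 decision stump stands in for the full tree learner (all names are illustrative, not the `RandomForest` API):

```python
import random

def fit_stump(X, y):
    """Weak learner: best single-feature threshold split (a depth-1 tree)."""
    best = None
    for j in range(len(X[0])):
        for t in set(row[j] for row in X):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            lp = round(sum(left) / len(left))    # majority class per side
            rp = round(sum(right) / len(right))
            err = sum(yi != lp for yi in left) + sum(yi != rp for yi in right)
            if best is None or err < best[0]:
                best = (err, j, t, lp, rp)
    if best is None:  # degenerate bootstrap sample: constant prediction
        maj = round(sum(y) / len(y))
        return lambda x: maj
    _, j, t, lp, rp = best
    return lambda x: lp if x[j] <= t else rp

def fit_forest(X, y, num_trees=25, seed=0):
    """Bagging: each tree trains on a bootstrap resample of the data."""
    rng = random.Random(seed)
    trees = []
    for _ in range(num_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        trees.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return trees

def predict_forest(trees, x):
    """Majority vote over the ensemble (0/1 labels)."""
    votes = sum(t(x) for t in trees)
    return 1 if votes * 2 >= len(trees) else 0
```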
## K-Nearest Neighbors

Class: `KNearestNeighbors`
| Hyperparameter | Description |
|---|---|
| K | Number of neighbors |
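`K` is the only knob: a prediction is a majority vote among the `K` training points closest to the query. A minimal sketch (illustrative names; Euclidean distance assumed):

```python
import math
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Vote among the k training points closest to x (Euclidean distance)."""
    neighbors = sorted(zip(X_train, y_train),
                       key=lambda p: math.dist(p[0], x))[:k]
    # Most common label among the k nearest neighbors wins.
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

Note there is no training step: the model simply stores the data, which is why `K` is the only hyperparameter.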
## Naive Bayes

Class: `NaiveBayes`

No tunable hyperparameters.
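There is nothing to tune because fitting reduces to counting: per-class priors plus per-class, per-feature statistics. A Gaussian variant is sketched below (illustrative names; the library's exact distributional assumption is not specified here):

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Per-class feature means/variances plus class priors."""
    groups = defaultdict(list)
    for xi, yi in zip(X, y):
        groups[yi].append(xi)
    model = {}
    for c, rows in groups.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        # Small floor keeps the variance positive for constant features.
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(n / len(X)), means, vars_)
    return model

def predict_nb(model, x):
    def log_posterior(c):
        prior, means, vars_ = model[c]
        # log prior + sum of per-feature Gaussian log-likelihoods.
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, vars_))
    return max(model, key=log_posterior)
```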
## Support Vector Classifier (Linear)

Class: `LinearSVC`
| Hyperparameter | Description |
|---|---|
| C | Regularization strength |
| LearningRate | Gradient descent step size |
| Epochs | Number of training passes over the data |
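These knobs correspond to stochastic subgradient descent on the hinge loss, where `C` trades the margin penalty against training errors. A rough sketch (illustrative names, not the `LinearSVC` API; labels assumed to be -1/+1):

```python
import random

def train_linear_svc(X, y, c=1.0, learning_rate=0.01, epochs=200, seed=0):
    """SGD on  0.5*||w||^2 + C * hinge loss.  Labels must be -1/+1."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)  # visit samples in random order each epoch
        for i in order:
            xi, yi = X[i], y[i]
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # Subgradient: w - C*y*x on margin violations.
                w = [wj - learning_rate * (wj - c * yi * xj)
                     for wj, xj in zip(w, xi)]
                b += learning_rate * c * yi
            else:
                # Only the regularizer contributes: shrink w.
                w = [wj * (1 - learning_rate) for wj in w]
    return w, b

def predict_svc(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```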
## Support Vector Classifier (Kernel)

Class: `KernelSVC`
| Hyperparameter | Description |
|---|---|
| C | Regularization strength |
| Kernel | Kernel type: `RBF` or `Polynomial` |
| LearningRate | Gradient descent step size |
| Epochs | Number of training passes over the data |
| Gamma | Kernel coefficient |
| Degree | Polynomial kernel degree |
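`Gamma` parameterizes both kernels and `Degree` applies only to the polynomial one. To keep the sketch short, a kernel perceptron stands in for the SVC solver below; it is not the SVC optimization, but the dual-form prediction pattern (a kernel-weighted vote over training points) is the same. All names are illustrative:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """k(x, z) = exp(-gamma * ||x - z||^2)"""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def poly_kernel(x, z, gamma=1.0, degree=3):
    """k(x, z) = (gamma * <x, z> + 1)^degree"""
    return (gamma * sum(a * b for a, b in zip(x, z)) + 1) ** degree

def kernel_perceptron(X, y, kernel, epochs=10):
    """Dual-form learner: alpha[i] counts mistakes on training point i."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            f = sum(a * yj * kernel(xj, xi)
                    for a, xj, yj in zip(alpha, X, y))
            if yi * f <= 0:  # misclassified: increase this point's weight
                alpha[i] += 1
    return alpha

def predict_kernel(alpha, X, y, kernel, x):
    """Prediction is a kernel-weighted vote of the training points."""
    f = sum(a * yj * kernel(xj, x) for a, xj, yj in zip(alpha, X, y))
    return 1 if f >= 0 else -1
```

With the RBF kernel this learns the non-linearly-separable pattern below, which a linear classifier cannot.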
## Multilayer Perceptron (Classifier)

Class: `MLPClassifier`
| Hyperparameter | Description |
|---|---|
| HiddenLayers | Hidden layer sizes, e.g. `64` or `64,32` |
| LearningRate | Gradient descent step size |
| Epochs | Number of training passes over the data |
| Activation | Hidden-layer activation: `ReLU`, `Tanh`, or `Sigmoid` |
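`HiddenLayers` and `Activation` determine the network architecture; `LearningRate` and `Epochs` drive the (omitted) backpropagation loop. The forward pass can be sketched as follows (illustrative names, not the `MLPClassifier` API; a sigmoid output layer is assumed for binary classification):

```python
import math
import random

ACTIVATIONS = {
    "relu": lambda z: max(0.0, z),
    "tanh": math.tanh,
    "sigmoid": lambda z: 1.0 / (1.0 + math.exp(-z)),
}

def init_mlp(n_inputs, hidden_layers, n_outputs, seed=0):
    """Random weights for a shape like hidden_layers=[64, 32]."""
    rng = random.Random(seed)
    sizes = [n_inputs] + list(hidden_layers) + [n_outputs]
    # One weight list per neuron; the extra slot is the bias.
    return [[[rng.uniform(-0.5, 0.5) for _ in range(sizes[i] + 1)]
             for _ in range(sizes[i + 1])]
            for i in range(len(sizes) - 1)]

def forward(layers, x, activation="relu"):
    act = ACTIVATIONS[activation]
    out = list(x)
    for li, layer in enumerate(layers):
        z = [sum(w * v for w, v in zip(neuron[:-1], out)) + neuron[-1]
             for neuron in layer]
        if li < len(layers) - 1:
            out = [act(v) for v in z]            # hidden layer
        else:
            out = [1.0 / (1.0 + math.exp(-v)) for v in z]  # sigmoid output
    return out
```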