| Classifier | Hyper-parameter name | Description | Optimized value space |
|---|---|---|---|
| MLP | hidden_layer_sizes | Number of hidden layers and neurons per layer | [100, 70, 50] |
| | activation | Activation function of the non-input neurons | [logistic, relu, tanh] |
| | learning_rate | Controls the step size during weight updates | [0.1, 0.001, 0.0001] |
| | alpha | L2 regularization parameter | [0.0001, 0.01, 0.1] |
| | solver | Weight optimization method | [lbfgs, sgd, adam] |
| RF / ET | n_estimators | Number of trees in the forest | [70, 100, 120] |
| | max_depth | Maximum depth of each tree | [3, 5, 8] |
| | min_samples_leaf | Minimum number of samples at a leaf node | RF: [2, 5, 10]; ET: [1, 2, 5] |
| | criterion | Measure of the quality of a split | [gini, entropy] |
| GBM | n_estimators | Number of trees | [70, 100, 120] |
| | learning_rate | Shrinkage factor applied to each tree | [0.1, 0.001] |
| | min_samples_leaf | Minimum number of samples at a leaf node | [5, 10] |
| | max_depth | Maximum depth of each individual tree | [3, 5, 7] |
| | loss | Loss function optimized during boosting | [log_loss, exponential] |
| AdaBoost | n_estimators | Number of trees | [70, 100, 120] |
| | learning_rate | Boosting learning rate | [0.1, 0.001] |
| XGBoost | n_estimators | Number of trees | [70, 100, 120] |
| | learning_rate | Shrinkage factor applied to each tree | [0.1, 0.001] |
| | max_depth | Maximum tree depth | [5, 10, 20] |
| | subsample | Subsample ratio of the training samples | [0.1, 0.5, 1] |
| LightGBM | n_estimators | Number of gradient-boosted trees | [70, 100, 120] |
| | learning_rate | Shrinkage factor applied to each tree | [0.1, 0.001] |
| | max_depth | Maximum tree depth of the base learners | [10, 20, 30] |
| | num_leaves | Maximum number of leaves per tree | [5, 8, 15] |
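As a minimal sketch of how one of these search spaces might be wired up, the snippet below runs a grid search over the RF row of the table using scikit-learn's GridSearchCV. The synthetic dataset, the 5-fold cross-validation, the accuracy scoring, and the random seeds are illustrative assumptions, not details taken from the table.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative data; the original study's dataset is not reproduced here.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Search space taken from the RF row of the table above.
param_grid = {
    "n_estimators": [70, 100, 120],
    "max_depth": [3, 5, 8],
    "min_samples_leaf": [2, 5, 10],
    "criterion": ["gini", "entropy"],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,                 # assumed fold count; the table does not specify one
    scoring="accuracy",   # assumed metric
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the other rows: MLPClassifier, ExtraTreesClassifier, GradientBoostingClassifier, and AdaBoostClassifier come from scikit-learn directly, while XGBoost and LightGBM expose scikit-learn-compatible wrappers (XGBClassifier and LGBMClassifier) that can be dropped into GridSearchCV the same way.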