| Hyperparameter | Value |
| --- | --- |
| Optimizer | Adam |
| Hidden layers | 1 hidden layer with 9 neurons |
| Hidden layer activation function | ReLU |
| Dropout rate | 0.20 (applied to the input layer and the hidden layer) |
| Learning rate | 0.001 |
| Batch size | 10 |
| MaxNorm constraint | 3 |
| Epochs | 200 |
| Validation split | 0.15 |
| Output layer activation function | sigmoid |
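
The table above maps directly onto a small feed-forward network. The sketch below shows one way to express this configuration in Keras; it is a minimal illustration, not the original implementation. The input dimensionality (`input_dim`), the binary-crossentropy loss (implied but not stated by the sigmoid output), and the placeholder training data `X`, `y` are assumptions.

```python
# Minimal Keras sketch of the hyperparameter configuration above.
# input_dim, the loss function, and the data X, y are hypothetical placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.constraints import MaxNorm

input_dim = 20  # assumption: set to the actual number of input features

model = keras.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dropout(0.20),                        # dropout on the input layer
    layers.Dense(9, activation="relu",
                 kernel_constraint=MaxNorm(3)),  # 1 hidden layer, 9 neurons, MaxNorm = 3
    layers.Dropout(0.20),                        # dropout on the hidden layer
    layers.Dense(1, activation="sigmoid"),       # sigmoid output layer
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",  # assumption: binary task, implied by the sigmoid output
    metrics=["accuracy"],
)

# Hypothetical stand-in data; replace with the real dataset.
X = np.random.rand(100, input_dim).astype("float32")
y = np.random.randint(0, 2, size=(100, 1)).astype("float32")

model.fit(X, y, epochs=200, batch_size=10, validation_split=0.15)
```

Note that `validation_split=0.15` holds out the last 15% of the training samples for validation at each epoch, matching the table's validation split.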