I am starting to play with Keras and simple neural networks. My question is about accuracy and what the next steps are to improve it. Consider the dataset http://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients, which has 30K examples and 24 features, with the goal of predicting whether or not there will be a default. I built a simple network with 24 inputs in the input layer, 16 hidden units, and a final softmax layer. The loss is binary_crossentropy. The test set is 10% and validation_split is 20%. How can I improve this simple network?

An input row is

1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1 

The code is

import pandas as pd 
from keras.models import Sequential 
from keras.layers.core import Dense, Activation, Dropout 
from keras.optimizers import SGD 
from sklearn.cross_validation import train_test_split 
from keras.utils import np_utils 

# load the data into a pandas DataFrame; header=1 skips the dataset's extra first header row 
train = pd.read_csv('./data/defaulCC.csv', header=1) 
# split X, y 
X = train.iloc[:,:-1].values 
y = train.iloc[:,-1:].values 
dimof_input = X.shape[1] 
dimof_output = len(set(y.flat)) 
print('dimof_input: ', dimof_input) 
print('dimof_output: ', dimof_output) 

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.9, random_state=0) 
y_train, y_test = [np_utils.to_categorical(x) for x in (y_train, y_test)] 
print X_train.shape, X_test.shape, y_train.shape, y_test.shape 

# Set constants 
batch_size = 128 
dimof_middle = 16 
dropout = 0.2 
countof_epoch = 100 
verbose = 1 
optimizer='sgd' 

print('batch_size: ', batch_size) 
print('dimof_middle: ', dimof_middle) 
print('dropout: ', dropout) 
print('countof_epoch: ', countof_epoch) 
print('verbose: ', verbose) 
print ('optimizer: ', optimizer) 

# this network has dimof_input units in the input layer, 
# dimof_output in the output layer, 
# and dimof_middle in the hidden layer 

model = Sequential() 
model.add(Dense(dimof_middle, input_dim=dimof_input, activation='relu')) 
model.add(Dropout(dropout)) 
#model.add(Dense(dimof_middle, activation='relu')) 
#model.add(Dropout(dropout)) 
model.add(Dense(dimof_output, activation='softmax')) 

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) 
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy']) 

# Train 
model.fit(
X_train, y_train, 
validation_split=0.2, 
batch_size=batch_size, nb_epoch=countof_epoch, verbose=verbose) 

# Evaluate 
loss, accuracy = model.evaluate(X_test, y_test, verbose=verbose) 
print('loss: ', loss) 
print('accuracy: ', accuracy) 
print() 

The output is

Using Theano backend. 
('dimof_input: ', 24) 
('dimof_output: ', 2) 
(27000, 24) (3000, 24) (27000, 2) (3000, 2) 
('batch_size: ', 128) 
('dimof_middle: ', 16) 
('dropout: ', 0.2) 
('countof_epoch: ', 100) 
('verbose: ', 1) 
('optimizer: ', 'sgd') 
Train on 27000 samples, validate on 5400 samples 
Epoch 1/100 
27000/27000 [==============================] - 0s - loss: 3.6371 - acc: 0.7727 - val_loss: 3.6157 - val_acc: 0.7744 
Epoch 2/100 
27000/27000 [==============================] - 0s - loss: 3.5866 - acc: 0.7757 - val_loss: 3.6157 - val_acc: 0.7744 
... (epochs 3-99 elided; training loss oscillates between roughly 3.556 and 3.602, while val_loss stays at 3.6157 and val_acc at 0.7744 on every epoch) ... 
Epoch 100/100 
27000/27000 [==============================] - 0s - loss: 3.5612 - acc: 0.7778 - val_loss: 3.6157 - val_acc: 0.7744 
3000/3000 [==============================] - 0s  
('loss: ', 3.4197844168345135) 
('accuracy: ', 0.78666666682561237) 
() 

Answer

The loss is not decreasing, and that is a problem. There are several things you can do to improve it:

  • Normalize your data. In your specific example, the features all have different ranges, so normalizing them is a must for a neural network to learn successfully. To normalize, compute the mean and standard deviation of each feature, then subtract the mean and divide by the standard deviation; this leaves each feature in an approximate [-1, 1] range. Min-max normalization should also work. Once the data is normalized, you can increase the capacity of your model by adding more layers. You can also try different non-linearities, such as sigmoid. (See the sketch after this list.)

  • Increase regularization. You are using Dropout with p = 0.2; you could raise it to p = 0.5. You could also use batch normalization, also shown in the sketch below.
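
A minimal sketch of both suggestions, reusing X_train, X_test, dimof_input, dimof_middle, dimof_output, and sgd from the question's code. StandardScaler comes from scikit-learn (MinMaxScaler is the min-max alternative), and the BatchNormalization import path below is the Keras 1.x one matching the question's version:

from sklearn.preprocessing import StandardScaler 
from keras.models import Sequential 
from keras.layers.core import Dense, Dropout 
from keras.layers.normalization import BatchNormalization 

# Standardize each feature: fit the mean/std on the training set only, 
# then apply the same transform to the test set to avoid leakage. 
scaler = StandardScaler() 
X_train = scaler.fit_transform(X_train) 
X_test = scaler.transform(X_test) 

# Same architecture as the question, with stronger regularization: 
# dropout raised to 0.5 and batch normalization after the hidden layer. 
model = Sequential() 
model.add(Dense(dimof_middle, input_dim=dimof_input, activation='relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(0.5)) 
model.add(Dense(dimof_output, activation='softmax')) 
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy']) 

With unscaled inputs, your ~0.78 accuracy is essentially the share of non-default rows in this dataset (the model predicts the majority class for everything, which is why val_acc never moves); once the features are standardized, SGD should actually start reducing the loss.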

This answer is fairly generic, but the advice should work for many types of data.