I have been practicing machine learning and came across the MNIST tutorials. While learning, I wrote this code, but I cannot get an accurate result from the MNIST dataset.

import tensorflow as tf 
from tensorflow.examples.tutorials.mnist import input_data 
import numpy as np 

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True) 

n_hidden_layer_1 = 500 
n_hidden_layer_2 = 500 
n_hidden_layer_3 = 500 

n_classes = 10 
batch_size = 100 

x = tf.placeholder('float', shape = [None, 784]) 
y = tf.placeholder('float') 

hidden_layer_1 = { 
    'weights': tf.Variable(tf.random_normal(shape = [784, n_hidden_layer_1])), 
    'bias': tf.Variable(tf.random_normal(shape = [n_hidden_layer_1])) 
} 

hidden_layer_2 = { 
    'weights': tf.Variable(tf.random_normal(shape = [n_hidden_layer_1, n_hidden_layer_2])), 
    'bias': tf.Variable(tf.random_normal(shape = [n_hidden_layer_2])) 
} 

hidden_layer_3 = { 
    'weights': tf.Variable(tf.random_normal(shape = [n_hidden_layer_2, n_hidden_layer_3])), 
    'bias': tf.Variable(tf.random_normal(shape = [n_hidden_layer_3])) 
} 

output_layer = { 
    'weights': tf.Variable(tf.random_normal(shape = [n_hidden_layer_3, n_classes])), 
    'bias': tf.Variable(tf.random_normal(shape = [n_classes])) 
} 

hidden_layer_1_output = tf.nn.relu(tf.add(tf.matmul(x, hidden_layer_1['weights']), hidden_layer_1['bias'])) 
hidden_layer_2_output = tf.nn.relu(tf.add(tf.matmul(hidden_layer_1_output, hidden_layer_2['weights']), hidden_layer_2['bias'])) 
hidden_layer_3_output = tf.nn.relu(tf.add(tf.matmul(hidden_layer_2_output, hidden_layer_3['weights']), hidden_layer_3['bias'])) 
final_output = tf.nn.relu(tf.add(tf.matmul(hidden_layer_3_output, output_layer['weights']), output_layer['bias'])) 

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=final_output, labels=y)) 
model = tf.train.AdamOptimizer().minimize(cost) 

epochs = 10 

with tf.Session() as sess: 
    sess.run(tf.global_variables_initializer()) 
    for i in range(epochs): 
        epoch_loss = 0 
        # Integer division so range() receives an int in Python 3 as well. 
        for _ in range(mnist.train.num_examples // batch_size): 
            P, Q = mnist.train.next_batch(batch_size) 
            _, c = sess.run([model, cost], feed_dict={x: P, y: Q}) 
            epoch_loss += c 

        print("Epoch no:", i, "Epoch_loss:", epoch_loss) 
    correct = tf.equal(tf.argmax(final_output, 1), tf.argmax(y, 1)) 
    accuracy = tf.reduce_mean(tf.cast(correct, 'float')) 
    print("accuracy: ", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})) 

The generated output is:

Extracting /tmp/data/train-images-idx3-ubyte.gz 
Extracting /tmp/data/train-labels-idx1-ubyte.gz 
Extracting /tmp/data/t10k-images-idx3-ubyte.gz 
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz 
('Epoch no:', 0, 'Epoch_loss:', 265771.25100541115) 
('Epoch no:', 1, 'Epoch_loss:', 1310.440309047699) 
('Epoch no:', 2, 'Epoch_loss:', 1262.8069067001343) 
('Epoch no:', 3, 'Epoch_loss:', 1262.8069069385529) 
('Epoch no:', 4, 'Epoch_loss:', 1262.8069067001343) 
('Epoch no:', 5, 'Epoch_loss:', 1262.8069069385529) 
('Epoch no:', 6, 'Epoch_loss:', 1262.8069067001343) 
('Epoch no:', 7, 'Epoch_loss:', 1262.8069067001343) 
('Epoch no:', 8, 'Epoch_loss:', 1262.8069064617157) 
('Epoch no:', 9, 'Epoch_loss:', 1262.8069064617157) 
('accuracy: ', 0.1008) 

Can you please tell me the possible reasons for the inaccuracy of my result in this code, and how to improve it?


I bet that's _tremendously_ not enough epochs. You should aim for hundreds of them, I think. – ForceBru

Answer


There are a couple of problems with your code:

  1. Remove the ReLU activation from final_output. softmax_cross_entropy_with_logits applies the softmax activation to final_output itself, so it expects raw, unbounded logits; passing them through ReLU first clips every negative logit to zero and stalls learning (see the check after this list).

    final_output = tf.add(tf.matmul(hidden_layer_3_output, output_layer['weights']), output_layer['bias']) 
    
  2. Set the standard deviation of the weights to a lower value. tf.random_normal defaults to stddev=1.0, which is far too large for a layer with 784 inputs and makes the initial activations, and hence the first-epoch loss, explode (see the sketch after this list).

    'weights': tf.Variable(tf.random_normal(shape = [784, n_hidden_layer_1], stddev=0.005)) 
    
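To see why point 1 matters: softmax_cross_entropy_with_logits already computes the softmax of its logits argument internally and then the cross-entropy against the labels. A quick check (the constants here are made-up illustration values; TensorFlow 1.x API):

    import tensorflow as tf

    logits = tf.constant([[2.0, -1.0, 0.5]])  # raw, unbounded scores
    labels = tf.constant([[1.0, 0.0, 0.0]])   # one-hot target

    # Built-in op: the softmax is applied to the logits inside the op.
    builtin = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)

    # Manual equivalent, written out for comparison.
    manual = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), axis=1)

    with tf.Session() as sess:
        print(sess.run(builtin), sess.run(manual))  # both ~ [0.2413]

A network whose last layer went through ReLU can never emit the negative logits this op expects, which helps explain why the loss in the question's log flattens out at a constant value.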
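For point 2, here is a minimal sketch with the smaller stddev applied to every layer (the make_layer helper is my own shorthand, not part of the original code; it reuses the layer sizes defined in the question):

    def make_layer(n_in, n_out, stddev=0.005):
        # A small stddev keeps the initial activations, and therefore
        # the first-epoch loss, from blowing up.
        return {
            'weights': tf.Variable(tf.random_normal([n_in, n_out], stddev=stddev)),
            'bias': tf.Variable(tf.random_normal([n_out], stddev=stddev))
        }

    hidden_layer_1 = make_layer(784, n_hidden_layer_1)
    hidden_layer_2 = make_layer(n_hidden_layer_1, n_hidden_layer_2)
    hidden_layer_3 = make_layer(n_hidden_layer_2, n_hidden_layer_3)
    output_layer = make_layer(n_hidden_layer_3, n_classes)

With both changes in place, the test accuracy should climb well above the 10% chance level shown in the question's output.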

Thank you very much .. :) It solved my problem completely .. – Desmnd