
Theano tutorial: prediction using a trained model

I am new to Theano. When I run the logistic regression code from the tutorial (http://deeplearning.net/tutorial/code/logistic_sgd.py), I run into a problem. At the end of the code there is a prediction function:

def predict(): 
    """ 
    An example of how to load a trained model and use it 
    to predict labels. 
    """ 

    # load the saved model 
    classifier = pickle.load(open('best_model.pkl')) 

    # compile a predictor function 
    predict_model = theano.function(
        inputs=[classifier.input],
        outputs=classifier.y_pred)

    # We can test it on some examples from the test set
    dataset='mnist.pkl.gz' 
    datasets = load_data(dataset) 
    test_set_x, test_set_y = datasets[2] 
    test_set_x = test_set_x.get_value() 

    predicted_values = predict_model(test_set_x[:10]) 
    print("Predicted values for the first 10 examples in test set:") 
    print(predicted_values) 
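
(For context: in the same tutorial script, best_model.pkl is written out during training, roughly as in the sketch below; this is paraphrased rather than quoted verbatim from logistic_sgd.py.)

# save the best model found during training so that predict() can reload it
with open('best_model.pkl', 'wb') as f:
    pickle.dump(classifier, f)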

It can reload the model and predict the labels of new data, but I cannot get any prediction output. When I run the whole code, my output looks like this:

/usr/bin/python2.7 /home/daiy/PycharmProjects/MNISTdigitclassification/logistic-regression-code.py 
... loading data 
... building the model 
... training the model 
epoch 1, minibatch 83/83, validation error 12.458333 % 
epoch 2, minibatch 83/83, validation error 11.010417 % 
. 
. 
. 
epoch 73, minibatch 83/83, validation error 7.500000 % 
Optimization complete with best validation score of 7.500000 %, 
The code run for 74 epochs, with 3.189913 epochs/sec 
The code for file logistic-regression-code.py ran for 23.2s 

Process finished with exit code 0 

When I debug it in PyCharm, it shows no error. Then I created a new .py file with the following code:

import pickle,numpy 
import theano 
import six.moves.cPickle as pickle 
import gzip 
import os 
import theano.tensor as T 

def load_data(dataset): 
    ''' Loads the dataset 

    :type dataset: string 
    :param dataset: the path to the dataset (here MNIST) 
    ''' 

    ############# 
    # LOAD DATA # 
    ############# 

    # Download the MNIST dataset if it is not present 
    data_dir, data_file = os.path.split(dataset) 
    if data_dir == "" and not os.path.isfile(dataset):
        # Check if dataset is in the data directory.
        new_path = os.path.join(
            os.path.split(__file__)[0],
            "..",
            "data",
            dataset
        )
        if os.path.isfile(new_path) or data_file == 'mnist.pkl.gz':
            dataset = new_path



    print('... loading data') 

    with gzip.open(dataset, 'rb') as f:
        try:
            train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
        except:
            train_set, valid_set, test_set = pickle.load(f)

    def shared_dataset(data_xy, borrow=True):
        """ Function that loads the dataset into shared variables

        The reason we store our dataset in shared variables is to allow
        Theano to copy it into the GPU memory (when code is run on GPU).
        Since copying data into the GPU is slow, copying a minibatch every
        time it is needed (the default behaviour if the data is not in a
        shared variable) would lead to a large decrease in performance.
        """
        data_x, data_y = data_xy
        shared_x = theano.shared(numpy.asarray(data_x,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y = theano.shared(numpy.asarray(data_y,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        # When storing data on the GPU it has to be stored as floats,
        # therefore we will store the labels as ``floatX`` as well
        # (``shared_y`` does exactly that). But during our computations
        # we need them as ints (we use labels as indices, and if they are
        # floats it doesn't make sense), so instead of returning
        # ``shared_y`` we have to cast it to int. This little hack
        # lets us get around this issue.
        return shared_x, T.cast(shared_y, 'int32')

    test_set_x, test_set_y = shared_dataset(test_set) 
    valid_set_x, valid_set_y = shared_dataset(valid_set) 
    train_set_x, train_set_y = shared_dataset(train_set) 


    rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y), 
      (test_set_x, test_set_y)] 
    return rval 

dataset = 'mnist.pkl.gz' 
datasets = load_data(dataset) 
train_set_x, train_set_y = datasets[0] 
valid_set_x, valid_set_y = datasets[1] 
test_set_x, test_set_y = datasets[2] 

def predict(): 
    ''' 

    :return: 
    ''' 

    classifier = pickle.load(open('best_model.pkl','rb')) 

    predict_model = theano.function(inputs=[classifier.input],outputs=classifier.y_pred) 

    dataset = 'mnist.pkl.gz' 
    datasets = load_data(dataset) 
    test_set_x ,test_set_y = datasets[2] 
    test_set_x = test_set_x.get_value() 

    predicted_values = predict_model(test_set_x[:10]) 
    print('predicted values for the first 10 examples in test data:') 
    print(predicted_values)

The output is:

/usr/bin/python2.7 /home/daiy/PycharmProjects/MNISTdigitclassification/yuce.py 
... loading data 

Process finished with exit code 0 

There is still no prediction output. But when I run it in the debugger, it shows:

/usr/bin/python2.7 /raid/pycharm-community-2016.2.3/helpers/pydev/pydevd.py --multiproc --qt-support --client 127.0.0.1 --port 43960 --file /home/daiy/PycharmProjects/MNISTdigitclassification/yuce.py 
warning: Debugger speedups using cython not found. Run '"/usr/bin/python2.7" "/raid/pycharm-community-2016.2.3/helpers/pydev/setup_cython.py" build_ext --inplace' to build. 
pydev debugger: process 4506 is connecting 

Connected to pydev debugger (build 162.1967.10) 
... loading data 
Exception TypeError: TypeError("'NoneType' object is not callable",) in <function _remove at 0x7fe6444f1668> ignored 

Process finished with exit code 0 

I think this should be an easy question, but I cannot find the answer. I am using Python 2.7 on Ubuntu 14.04.1.

Answer


You can try changing the parts of the predict function indicated below by the arrows (-->):

def predict(): 
    """ 
    An example of how to load a trained model and use it 
    to predict labels. 
    """ 

    # load the saved model 
    classifier = pickle.load(open('best_model.pkl')) 

    # compile a predictor function 
    predict_model = theano.function(
        inputs=[classifier.input],
        outputs=classifier.y_pred)

    # We can test it on some examples from the test set
    dataset='mnist.pkl.gz' 
    datasets = load_data(dataset) 
    test_set_x, test_set_y = datasets[2] 
    test_set_x = test_set_x.get_value() 

--> predicted_values = predict_model(test_set_x) 
    print("Predicted values for the first 10 examples in test set:")   
--> return predicted_values
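
With this change, predict() returns the predictions instead of only printing them, so the calling script is responsible for invoking it and printing the result (note that the file in the question defines predict() but never calls it). A minimal usage sketch, assuming best_model.pkl already exists in the working directory from a previous training run:

if __name__ == '__main__':
    # call the prediction function and print whatever it returns
    predicted_values = predict()
    print('Predicted values for the test set:')
    print(predicted_values)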