
Take a look at the code here:

def _init_coef(self, fan_in, fan_out):
    if self.activation == 'logistic':
        # Use the initialization method recommended by
        # Glorot et al.
        init_bound = np.sqrt(2. / (fan_in + fan_out))
    elif self.activation in ('identity', 'tanh', 'relu'):
        init_bound = np.sqrt(6. / (fan_in + fan_out))
    else:
        # this was caught earlier, just to make sure
        raise ValueError("Unknown activation function %s" %
                         self.activation)

    coef_init = self._random_state.uniform(-init_bound, init_bound,
                                            (fan_in, fan_out))
    intercept_init = self._random_state.uniform(-init_bound, init_bound,
                                                 fan_out)
    return coef_init, intercept_init

In short, the weights and biases are drawn uniformly from [-b, b], where b = sqrt(2 / (fan_in + fan_out)) for the logistic activation and b = sqrt(6 / (fan_in + fan_out)) for identity, tanh and relu. The method is described in the paper: Glorot, X. & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:249-256.
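
As a rough standalone illustration of that formula (the function name glorot_uniform and the example layer sizes below are mine, not part of the code above), a minimal sketch might look like this:

import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    """Sample a (fan_in, fan_out) weight matrix uniformly from [-b, b],
    with b = sqrt(6 / (fan_in + fan_out)) as in Glorot & Bengio (2010)."""
    rng = np.random.RandomState(0) if rng is None else rng
    bound = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

# Example: weights for a layer mapping 784 inputs to 100 hidden units.
W = glorot_uniform(784, 100)
print(W.shape, W.min(), W.max())  # all values stay within about +/- 0.082

The idea is to keep the variance of activations and gradients roughly constant across layers, which is why the bound shrinks as the layer gets wider.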