Mobile Monitoring Solutions


Deep Learning Networks: Advantages of ReLU over Sigmoid Function

MMS Founder

Article originally posted on Data Science Central.

This was posted as a question on StackExchange: the state of the art for non-linearities is to use rectified linear units (ReLU) instead of the sigmoid function in deep neural networks. What are the advantages? I know that training a network is faster when ReLU is used, and that it is more biologically inspired, but what are the other advantages? (That is, what are the disadvantages of using sigmoid?)

Below is the best answer.

Advantages:

  • Sigmoid: activations are bounded, so they do not blow up.
  • ReLU: does not suffer from vanishing gradients (for positive inputs the gradient is constant).
  • ReLU: more computationally efficient than sigmoid-like functions, since ReLU only needs to compute max(0, x) rather than the expensive exponentials required by sigmoids (see the sketch after this list).
  • ReLU: in practice, networks with ReLU tend to show better convergence performance than networks with sigmoid (Krizhevsky et al.).
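As a rough illustration of the computational difference, here is a minimal NumPy sketch (the helper names relu and sigmoid are our own, not from the original answer): the sigmoid needs an exponential per element, while ReLU is just a threshold.

```python
import numpy as np

def sigmoid(a):
    # Requires an exponential per element.
    return 1.0 / (1.0 + np.exp(-a))

def relu(a):
    # Just element-wise thresholding: max(0, x).
    return np.maximum(0.0, a)

x = np.linspace(-5, 5, 5)   # [-5, -2.5, 0, 2.5, 5]
print(relu(x))              # [0.  0.  0.  2.5 5. ]
print(sigmoid(x))           # values squashed into (0, 1)
```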

Disadvantages:

  • Sigmoid: tends to produce vanishing gradients, because the gradient shrinks as the magnitude of the input "a" grows. The gradient of the sigmoid is S′(a) = S(a)(1 − S(a)); as "a" grows very large, S(a) approaches 1, so S′(a) = S(a)(1 − S(a)) ≈ 1 × (1 − 1) = 0.

  • ReLU: tends to blow up activations, since there is no mechanism to constrain the output of the neuron ("a" itself is the output).

  • ReLU: the dying ReLU problem – if too many pre-activations fall below zero, most of the units (neurons) in a ReLU network will simply output zero, i.e. "die", which prevents learning. (This can be mitigated, to some extent, by using Leaky ReLU instead; see the sketch after this list.)
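To make both failure modes concrete, here is a small NumPy sketch (illustrative only; the 0.01 slope used for Leaky ReLU is a common default, not something specified in the original answer). It shows the sigmoid gradient S′(a) collapsing toward zero for large inputs, and a Leaky ReLU still passing a small gradient for negative inputs:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sigmoid_grad(a):
    # S'(a) = S(a) * (1 - S(a)) -- shrinks toward 0 as |a| grows.
    s = sigmoid(a)
    return s * (1.0 - s)

def leaky_relu(a, alpha=0.01):
    # Keeps a small slope (alpha) for negative inputs so the unit never fully "dies".
    return np.where(a > 0, a, alpha * a)

for a in [0.0, 5.0, 20.0]:
    print(f"a={a:5.1f}  sigmoid_grad={sigmoid_grad(a):.6f}")
# a=  0.0  sigmoid_grad=0.250000
# a=  5.0  sigmoid_grad=0.006648
# a= 20.0  sigmoid_grad=0.000000   <- effectively zero: vanishing gradient

x = np.array([-3.0, -1.0, 2.0])
print(leaky_relu(x))  # [-0.03 -0.01  2.  ] -- negative inputs still pass a small signal
```

Because the negative branch keeps a non-zero slope, a unit that drifts into the negative region can still receive gradient and recover, which is why Leaky ReLU is mentioned as a partial fix for the dying-ReLU problem.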

Read the full discussion on StackExchange.


