Activation functions

Smith Flores
7 min read · May 16, 2022


Activation functions live inside neural network layers and modify the data they receive before passing it to the next layer. They give neural networks their power: by applying non-linear functions to their inputs, a network can model highly complex relationships between features.

Activation functions typically have the following properties:

  • Non-linear

In linear regression we’re limited to a prediction equation that looks like a straight line. This is fine for simple datasets with a one-to-one relationship between inputs and outputs, but what if the patterns in our dataset are non-linear (e.g. x², sin, log)? To model these relationships we need a non-linear prediction equation. Activation functions provide this non-linearity.

  • Continuously differentiable

To improve our model with gradient descent, we need our output to have a nice slope so we can compute error derivatives with respect to weights. If our neuron instead outputted 0 or 1 (perceptron), we wouldn’t know in which direction to update our weights to reduce our error.

  • Fixed Range

Activation functions typically squash the input data into a narrow range that makes training the model more stable and efficient.
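To make the first property concrete, here is a minimal NumPy sketch (the layer shapes and variable names are my own illustration, not from the original post) showing that stacking layers with no activation function collapses into a single linear map, which is exactly why non-linearity matters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation function: y = W2 @ (W1 @ x)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

two_layer = W2 @ (W1 @ x)

# The same mapping collapses into a single linear layer W = W2 @ W1,
# so without a non-linear activation the extra layer adds no modelling power.
W = W2 @ W1
one_layer = W @ x

print(np.allclose(two_layer, one_layer))  # True
```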

Linear

A straight-line function where the activation is proportional to the input (the weighted sum from the neuron).

Function

f(x)=b+mx

Derivative

f′(x) = m

Pros

  • It gives a range of activations, so it is not a binary activation.
  • We can connect several neurons together, and if more than one fires, we can take the max (or softmax) and decide based on that.

Cons

  • For this function, the derivative is a constant. That means the gradient has no relationship with x.
  • The gradient is constant, so the descent proceeds along a constant gradient.
  • If there is an error in prediction, the changes made by backpropagation are constant and do not depend on the change in input, Δx.
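For reference, a small NumPy sketch of the linear activation and its constant derivative (the function names are my own) might look like this:

```python
import numpy as np

def linear(z, m=1.0, b=0.0):
    """Linear activation: f(z) = b + m*z."""
    return b + m * z

def linear_derivative(z, m=1.0):
    """The derivative is the constant m, independent of z."""
    return np.full_like(np.asarray(z, dtype=float), m)

z = np.array([-2.0, 0.0, 3.0])
print(linear(z))             # [-2.  0.  3.]
print(linear_derivative(z))  # [1. 1. 1.]  (the gradient carries no information about z)
```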

ELU

Exponential Linear Unit, widely known as ELU, is a function that tends to converge the cost to zero faster and produce more accurate results. Unlike other activation functions, ELU has an extra constant α, which should be a positive number.

ELU is very similar to ReLU except for negative inputs. Both are the identity function for non-negative inputs. For negative inputs, however, ELU smoothly approaches −α, whereas ReLU cuts off sharply at zero.

Function

R(z) = z ; z > 0

R(z) = α(e^z − 1) ; z ≤ 0

Derivative

R′(z) = 1 ; z > 0

R′(z) = αe^z ; z < 0

Pros

  • ELU smoothly approaches −α for negative inputs, whereas ReLU cuts off sharply at zero.
  • ELU is a strong alternative to ReLU.
  • Unlike ReLU, ELU can produce negative outputs.

Cons

  • For z > 0, it can blow up the activation, since the output range is [0, ∞).
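A possible NumPy sketch of ELU and its derivative, following the formulas above (the function names are my own):

```python
import numpy as np

def elu(z, alpha=1.0):
    """ELU: z for z > 0, alpha * (exp(z) - 1) for z <= 0."""
    z = np.asarray(z, dtype=float)
    # exp is only evaluated on min(z, 0), so large positive z cannot overflow
    return np.where(z > 0, z, alpha * (np.exp(np.minimum(z, 0.0)) - 1.0))

def elu_derivative(z, alpha=1.0):
    """Derivative: 1 for z > 0, alpha * exp(z) for z <= 0."""
    z = np.asarray(z, dtype=float)
    return np.where(z > 0, 1.0, alpha * np.exp(np.minimum(z, 0.0)))

z = np.array([-3.0, -0.5, 0.0, 2.0])
print(elu(z))             # negative inputs approach -alpha smoothly
print(elu_derivative(z))  # stays non-zero for negative inputs
```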

ReLU

A more recent invention, ReLU stands for Rectified Linear Units. The formula is deceptively simple: max(0, z). Despite its name and appearance, it’s not linear and provides the same benefits as sigmoid (i.e. the ability to learn non-linear functions), but with better performance.

Function

R(z) = z ; z > 0

R(z) = 0 ; z ≤ 0

Derivative

R′(z) = 1 ; z > 0

R′(z) = 0 ; z < 0

Pros

  • It avoids and rectifies the vanishing gradient problem.
  • ReLU is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations.

Cons

  • One of its limitations is that it should only be used within the hidden layers of a neural network model.
  • Some gradients can be fragile during training and can die. A weight update can push a neuron into a state where it never activates on any data point again. In other words, ReLU can result in dead neurons.
  • Put differently, for activations in the region z < 0 the gradient is 0, so the weights will not get adjusted during descent. Neurons that go into that state stop responding to variations in error or input (because the gradient is 0, nothing changes). This is called the dying ReLU problem.
  • The range of ReLU is [0, ∞). This means it can blow up the activation.
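A minimal NumPy sketch of ReLU and its derivative (the function names are my own), matching the formulas above:

```python
import numpy as np

def relu(z):
    """ReLU: max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

def relu_derivative(z):
    """Derivative: 1 for z > 0, 0 otherwise (the source of 'dead' neurons)."""
    return (np.asarray(z) > 0).astype(float)

z = np.array([-2.0, -0.1, 0.0, 1.5])
print(relu(z))             # [0.  0.  0.  1.5]
print(relu_derivative(z))  # [0. 0. 0. 1.]
```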

LeakyReLU

Leaky ReLU is a variant of ReLU. Instead of being 0 when z < 0, a leaky ReLU allows a small, non-zero, constant gradient α (normally α = 0.01). However, the consistency of the benefit across tasks is presently unclear.

Function

R(z) = z ; z > 0

R(z) = αz ; z ≤ 0

Derivative

R′(z) = 1 ; z > 0

R′(z) = α ; z < 0

Pros

Leaky ReLUs are one attempt to fix the “dying ReLU” problem by using a small slope (of 0.01, or so) for negative inputs.

Cons

As it is close to linear, it can struggle with complex classification tasks, and it lags behind sigmoid and tanh for some use cases.
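A small NumPy sketch of leaky ReLU and its derivative (the function names are my own), using the usual default of α = 0.01:

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    """Leaky ReLU: z for z > 0, alpha*z for z <= 0."""
    z = np.asarray(z, dtype=float)
    return np.where(z > 0, z, alpha * z)

def leaky_relu_derivative(z, alpha=0.01):
    """Derivative: 1 for z > 0, alpha otherwise (never exactly zero)."""
    z = np.asarray(z, dtype=float)
    return np.where(z > 0, 1.0, alpha)

z = np.array([-10.0, -1.0, 0.0, 5.0])
print(leaky_relu(z))             # [-0.1  -0.01  0.    5.  ]
print(leaky_relu_derivative(z))  # [0.01 0.01 0.01 1.  ]
```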

Sigmoid

Sigmoid takes a real value as input and outputs another value between 0 and 1. It’s easy to work with and has all the nice properties of activation functions: it’s non-linear, continuously differentiable, monotonic, and has a fixed output range.

Function

S(z) = 1 / (1 + e^(−z))

Derivative

S′(z) = S(z)·(1 − S(z))

Pros

  • It is non-linear in nature. Combinations of this function are also non-linear!
  • It gives an analog activation, unlike a step function.
  • It has a smooth gradient too.
  • It’s good for a classifier.
  • The output of the activation function is always going to be in the range (0, 1), compared to (−∞, ∞) for a linear function. So we have our activations bound in a range. Nice, it won’t blow up the activations then.

Cons

  • Towards either end of the sigmoid function, the output responds very little to changes in the input.
  • It gives rise to the problem of “vanishing gradients”.
  • Its output isn’t zero-centered, which makes the gradient updates go too far in different directions. Since 0 < output < 1, optimization becomes harder.
  • Sigmoids saturate and kill gradients.
  • The network refuses to learn further or becomes drastically slow (depending on the use case, and until the gradient computation hits floating-point limits).
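A possible NumPy sketch of the sigmoid and its derivative (the names are my own); splitting by sign is just one common way to avoid overflow for large negative inputs:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid: 1 / (1 + exp(-z)), written to avoid overflow for large |z|."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def sigmoid_derivative(z):
    """Derivative: S(z) * (1 - S(z)); it vanishes for large |z| (saturation)."""
    s = sigmoid(z)
    return s * (1.0 - s)

z = np.array([-10.0, 0.0, 10.0])
print(sigmoid(z))             # close to [0, 0.5, 1]
print(sigmoid_derivative(z))  # tiny at the tails, hence vanishing gradients
```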

Tanh

Tanh squashes a real-valued number to the range [−1,1]. It’s non-linear. But unlike Sigmoid, its output is zero-centered. Therefore, in practice the tanh non-linearity is always preferred to the sigmoid nonlinearity.

Function

tanh(z) = [e^z − e^(−z)] / [e^z + e^(−z)]

Derivative

tanh′(z) = 1 − tanh(z)^2

Pros

  • The gradient is stronger for tanh than for sigmoid (the derivatives are steeper).

Cons

  • Tanh also has the vanishing gradient problem.
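A minimal NumPy sketch of tanh and its derivative (the names are my own), leaning on NumPy’s built-in np.tanh:

```python
import numpy as np

def tanh(z):
    """tanh(z) = (e^z - e^-z) / (e^z + e^-z); NumPy already provides it."""
    return np.tanh(z)

def tanh_derivative(z):
    """Derivative: 1 - tanh(z)^2, steeper than the sigmoid's around 0."""
    t = np.tanh(z)
    return 1.0 - t ** 2

z = np.array([-2.0, 0.0, 2.0])
print(tanh(z))             # zero-centered outputs in (-1, 1)
print(tanh_derivative(z))  # peaks at 1 when z = 0, still vanishes at the tails
```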

Softmax

The softmax function calculates the probability distribution of an event over n different events. In other words, it calculates the probability of each target class over all possible target classes. The calculated probabilities are then helpful for determining the target class for the given inputs.
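A possible NumPy sketch of softmax (the names are my own); subtracting the maximum before exponentiating is a common trick to keep the computation numerically stable:

```python
import numpy as np

def softmax(z):
    """Softmax: exp(z_i) / sum_j exp(z_j), shifted by max(z) for stability."""
    z = np.asarray(z, dtype=float)
    shifted = z - np.max(z)   # subtracting the max avoids overflow in exp
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z)

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)        # roughly [0.659 0.242 0.099]
print(probs.sum())  # sums to 1, a valid probability distribution
```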

I hope you have enjoyed reading this blog and learned about the importance, advantages and disadvantages of activation functions. :)

“Artificial intelligence is not just learning patterns from data, but understanding human emotions and their evolution from their depth and not only meeting human requirements at the surface level, but sensitivity towards human pain, happiness, mistakes , the sufferings and the well-being of society are the parts of the new evolving AI systems”
