Exploring Activation Functions, Loss Functions, and Optimization Algorithms
A beginner-friendly overview

When building Deep Learning models, activation functions, loss functions, and optimization algorithms are crucial components that directly impact performance and accuracy.
Without the right choices, your model will likely produce unpredictable results or not work at all.
Whether you are new to Deep Learning or have been practicing it for some time, this blog is for you.
In this blog, we will go through the important activation functions, loss functions, and optimization algorithms you will come across.
If you have been practicing deep learning for a while, it will also serve as a quick reference on when to choose a particular function.
Please note that we won’t be deep-diving into the mathematical equations; this is more of an overview. I will be posting deep dives soon.
What are Loss Functions?
As we know, Deep Learning models are made up of layers of perceptrons (neurons with learnable weights).
These weights are initialized randomly at the start. During training, the learning algorithm adjusts them iteratively.
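To make this concrete, here is a minimal NumPy sketch of one such layer with randomly initialized weights (the layer sizes and the initialization scheme are illustrative assumptions, not a recommendation):

```python
import numpy as np

# Sizes chosen purely for illustration: 4 inputs, 3 outputs.
rng = np.random.default_rng(seed=0)
weights = rng.normal(loc=0.0, scale=0.1, size=(4, 3))  # random initialization
bias = np.zeros(3)

x = rng.normal(size=4)       # one example input
output = x @ weights + bias  # the layer's raw (pre-activation) output
print(output)
```

During training, an optimization algorithm nudges `weights` and `bias` step by step toward values that make the model’s outputs more accurate.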

To learn these weights, the model needs some signal of whether it is heading in the right direction, just like when you are learning a new skill.
For example, when learning to paint, you might compare the stroke you made with the stroke you are trying to replicate; the extent of the mismatch tells you what needs to improve.
Similarly, Deep Learning uses loss functions as that signal: a loss function measures the difference between the model’s predictions and the actual outcomes (the targets).
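As a concrete example, here is the mean squared error, one of the most common loss functions, written out in plain NumPy (a minimal sketch, not any particular framework’s implementation):

```python
import numpy as np

def mse_loss(predictions, targets):
    """Mean squared error: the average squared difference
    between the predictions and the actual outcomes."""
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return np.mean((predictions - targets) ** 2)

# Predictions close to the targets give a small loss.
print(mse_loss([2.5, 0.0, 2.1], [3.0, -0.5, 2.0]))  # ~0.17
```

The smaller the loss, the closer the model’s predictions are to the targets; training aims to drive this number down.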