vevesta • 39 HN points • 02 Sep 22 Everything you need to know about Activation Functions The need, the types and the drawbacks of different activation functions
vevesta • 23 HN points • 15 Sep 22 Everything you need to know about Distributed training and its often untold nuances Understanding Data Parallelism vs Model Parallelism, Their Powers and Their Kryptonite (weakness)
vevesta • 20 HN points • 17 Sep 22 Deep Dive into how Predicting Future Weights of a Neural Network Mitigates Data Staleness during Distributed Training Introducing SpecTrain as a means to predict future weights of a neural network, alleviating data staleness and improving training speed in distributed training
vevesta • 7 HN points • 13 Sep 22 How to use Cyclical Learning Rates to get quick convergence for your Neural Network? Achieve higher accuracy for your machine learning model in fewer iterations.
vevesta • 3 HN points • 05 Sep 22 Deep Dive into Reasons to Choose Focal Loss over Cross-Entropy Learn how to solve the class imbalance problem using Focal Loss instead of Cross-Entropy
vevesta • 2 HN points • 31 Aug 22 Here is what you need to know about Sparse Categorical Cross-Entropy in a nutshell Insights into errors often encountered while using sparse categorical cross-entropy
vevesta • 2 HN points • 08 Sep 22 How To Increase Recall When Given an Imbalanced Dataset For a Machine Learning Model? SMOTE-Tomek Links is often well suited for synthetic data generation on imbalanced datasets.