Fundamental Machine Learning principles and concepts that extend into Deep Neural Networks
Discover what regularization is, why it is necessary in deep neural networks, and explore the most frequently used strategies: L1, L2, dropout, stochastic depth, early stopping and more
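As a taste of what these strategies look like in practice, here is a minimal PyTorch sketch combining dropout as a layer with L2 regularization via the optimizer's weight_decay argument; the layer sizes and hyperparameters are arbitrary placeholders, not recommendations:

```python
import torch
import torch.nn as nn

# A small network with dropout between layers; p is the probability
# of zeroing an activation at training time (0.5 is a common default).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)

# L2 regularization (weight decay) is applied through the optimizer:
# it effectively adds lambda * ||w||^2 to the loss, shrinking weights
# toward zero.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```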
An overview of the most popular optimization algorithms for training deep neural networks, from stochastic gradient descent to Adam, AdaBelief and second-order optimization
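For a flavor of the difference, here is a hedged sketch: both optimizers come from torch.optim, the dummy parameter tensor stands in for model.parameters(), and the learning rates are just common defaults:

```python
import torch

# Dummy trainable parameters; in practice these come from model.parameters().
params = [torch.randn(10, 10, requires_grad=True)]

# Plain stochastic gradient descent with momentum: one global step size,
# w <- w - lr * grad (plus the momentum buffer).
sgd = torch.optim.SGD(params, lr=0.01, momentum=0.9)

# Adam keeps per-parameter running estimates of the first and second
# moments of the gradient and adapts each parameter's step size.
adam = torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.999))
```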
A class of deep networks that exploit spatial structure and can be thought of as regularized versions of fully connected feedforward networks. They have been extensively applied in computer vision.
An intuitive guide on why it is important to inspect the receptive field, and how the receptive field affects the design choices of deep convolutional networks.
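As a minimal illustration (the layer configuration below is hypothetical), the receptive field of stacked convolutions can be computed in closed form from each layer's kernel size and stride:

```python
# Receptive field of stacked convolutions, assuming each layer is
# described by (kernel_size, stride) and dilation 1.
def receptive_field(layers):
    rf, jump = 1, 1  # receptive field and cumulative stride ("jump")
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Three 3x3 convolutions with stride 1 together see a 7x7 input patch.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
```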
How do convolutional neural networks work? What are the principles behind designing a CNN architecture? How did we go from AlexNet to EfficientNet?
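To make this concrete, here is a minimal CNN sketch in PyTorch; the channel counts and input resolution are illustrative placeholders, not a recommended architecture:

```python
import torch
import torch.nn as nn

# The classic pattern: convolution for local feature extraction,
# nonlinearity, pooling for downsampling, then a classifier head.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),    # 10-class classifier head
)

x = torch.randn(1, 3, 32, 32)     # one fake 32x32 RGB image
print(cnn(x).shape)               # torch.Size([1, 10])
```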
Recurrent Neural Networks are deep networks that contain loop connections between nodes. Because of that, they can use their internal memory to process sequences of inputs.
Are you interested in seeing how recurrent networks process sequences under the hood? That’s what this article is all about. We inspect and build our own custom LSTM model, and compare recurrent and convolutional modules along the way to deepen our understanding.
What are the advantages of RNNs over Transformers? When should you use GRUs over LSTMs? What do the equations of a GRU really mean? How do you build a GRU cell in PyTorch?
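As a preview, a GRU cell can be sketched from scratch in a few lines of PyTorch. This follows the gating convention used by nn.GRUCell; the sizes in the usage example are arbitrary:

```python
import torch
import torch.nn as nn

class GRUCell(nn.Module):
    """A from-scratch GRU cell:
        z = sigmoid(W_z x + U_z h)        # update gate
        r = sigmoid(W_r x + U_r h)        # reset gate
        n = tanh(W_n x + r * (U_n h))     # candidate state
        h' = (1 - z) * n + z * h          # new hidden state
    """
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.x2h = nn.Linear(input_size, 3 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 3 * hidden_size)

    def forward(self, x, h):
        xz, xr, xn = self.x2h(x).chunk(3, dim=-1)
        hz, hr, hn = self.h2h(h).chunk(3, dim=-1)
        z = torch.sigmoid(xz + hz)
        r = torch.sigmoid(xr + hr)
        n = torch.tanh(xn + r * hn)
        return (1 - z) * n + z * h

cell = GRUCell(input_size=8, hidden_size=16)
h = cell(torch.randn(4, 8), torch.zeros(4, 16))  # a batch of 4 steps
```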
Autoencoders are an unsupervised type of network that learns compact representations of the data. They can be deterministic or probabilistic.
Learn what autoencoders are and build one to generate new images
Explaining the mathematics behind generative learning and latent variable models, and how Variational Autoencoders (VAEs) were formulated (code included)
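Two of the key ingredients can be sketched compactly. This is a minimal illustration of the reparameterization trick and the closed-form KL term for a Gaussian posterior, not a full VAE:

```python
import torch

# Reparameterization trick: sample z ~ N(mu, sigma^2) as
# z = mu + sigma * eps with eps ~ N(0, I), so gradients can
# flow back through mu and logvar.
def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

# KL divergence between N(mu, sigma^2) and the standard normal prior,
# the regularization term of the ELBO:
# KL = -1/2 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
def kl_divergence(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
```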
Attention and Transformers have become the standard in NLP applications, and they are entering Computer Vision as well
New to Natural Language Processing? This is the ultimate beginner’s guide to the attention mechanism and sequence learning to get you started
An intuitive understanding of Transformers and how they are used in Machine Translation. After analyzing all the subcomponents one by one, such as self-attention and positional encodings, we explain the principles behind the Encoder and Decoder and why Transformers work so well
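The self-attention at the heart of the Transformer fits in a few lines. The sketch below assumes single-head attention with queries, keys and values of equal dimension; the sequence length and embedding size are arbitrary:

```python
import math
import torch

# Scaled dot-product attention, the core Transformer operation:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
def attention(q, k, v):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Self-attention: queries, keys and values all come from the same
# sequence (here 5 tokens with 64-dimensional embeddings).
x = torch.randn(1, 5, 64)
out = attention(x, x, x)   # shape (1, 5, 64)
```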