Fundamental machine learning principles and concepts that extend into deep neural networks

How to build a neural network library using C++ and OpenCL

An overview of the most popular optimization algorithms for training deep neural networks, from stochastic gradient descent to Adam, AdaBelief, and second-order methods.
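To make the family of optimizers concrete, here is a minimal sketch of a single Adam update step in NumPy. The function name and the toy quadratic objective are illustrative choices, not from the article; hyperparameter defaults follow the commonly cited Adam values.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), with bias correction for the first steps."""
    m = beta1 * m + (1 - beta1) * grad          # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 starting from x = 5
theta = np.array([5.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta                            # gradient of x^2
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
```

Note how the per-parameter scaling by `sqrt(v_hat)` is what distinguishes Adam from plain SGD with momentum.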

A class of deep networks that exploit spatial structure and can be thought of as regularized, sparsely connected feed-forward networks. They have been extensively applied in computer vision.

An intuitive guide on why it is important to inspect the receptive field, and how the receptive field affects the design choices of deep convolutional networks.
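As a quick companion to the receptive-field discussion, the standard recurrence for stacked convolution/pooling layers can be sketched in a few lines. The helper name and the VGG-like layer stack are hypothetical examples, not from the article.

```python
def receptive_field(layers):
    """Receptive field of a stack of conv/pool layers.

    layers: list of (kernel_size, stride) tuples, input-to-output order.
    Standard recurrence: r += (k - 1) * j, then j *= s,
    where r is the receptive field and j the cumulative stride ("jump").
    """
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# Hypothetical VGG-like block: two 3x3 convs, then a 2x2 stride-2 pool
rf = receptive_field([(3, 1), (3, 1), (2, 2)])  # two 3x3 convs give r=5; pool -> 6
```

Inspecting this number layer by layer is often how design choices (kernel size vs. depth vs. striding) are compared.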

How can we efficiently train very deep neural network architectures? What are the best in-layer normalization options? We gathered everything you need to know about normalization in transformers, recurrent neural networks, and convolutional neural networks.
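For orientation, here is a minimal sketch of layer normalization, one of the in-layer options mentioned above, assuming normalization over the last (feature) axis as in transformers; the function name and toy input are illustrative.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each sample over its feature dimension (last axis),
    then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy input: one sample with 4 features
x = np.array([[1.0, 2.0, 3.0, 4.0]])
y = layer_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

Unlike batch normalization, the statistics here depend only on a single sample, which is why layer norm behaves identically at train and test time and suits variable-length sequences.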

Recurrent Neural Networks are deep networks that contain loop connections between nodes. Because of that, they can use their internal memory to process sequences of inputs.

Are you interested in seeing how recurrent networks process sequences under the hood? That’s what this article is all about. We are going to inspect and build our own custom LSTM model, and compare recurrent and convolutional modules to deepen our understanding.

What are the advantages of RNNs over transformers? When should you use GRUs over LSTMs? What do the GRU equations really mean? How do you build a GRU cell in PyTorch?
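The GRU equations mentioned above can be sketched directly in NumPy (rather than PyTorch, to keep the example dependency-free). The weight layout and initialization below are illustrative assumptions; one common convention for the final interpolation is shown, and some references swap the roles of `z` and `1 - z`.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step. W, U, b are dicts keyed by gate:
        z = sigmoid(W_z x + U_z h + b_z)        # update gate
        r = sigmoid(W_r x + U_r h + b_r)        # reset gate
        h~ = tanh(W_h x + U_h (r * h) + b_h)    # candidate state
        h' = (1 - z) * h + z * h~               # interpolate old and new
    """
    z = sigmoid(W['z'] @ x + U['z'] @ h + b['z'])
    r = sigmoid(W['r'] @ x + U['r'] @ h + b['r'])
    h_tilde = np.tanh(W['h'] @ x + U['h'] @ (r * h) + b['h'])
    return (1 - z) * h + z * h_tilde

# Toy usage: random weights, a sequence of 5 input vectors
rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
W = {k: rng.normal(size=(hidden_dim, input_dim)) * 0.1 for k in 'zrh'}
U = {k: rng.normal(size=(hidden_dim, hidden_dim)) * 0.1 for k in 'zrh'}
b = {k: np.zeros(hidden_dim) for k in 'zrh'}

h = np.zeros(hidden_dim)
for x in rng.normal(size=(5, input_dim)):
    h = gru_cell(x, h, W, U, b)
```

Because the new state is a convex combination of the old state and a tanh candidate, the hidden values stay bounded in (-1, 1), which is part of why gated cells train more stably than vanilla RNNs.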

Autoencoders are an unsupervised type of network that can learn compact representations of the data features. They can be deterministic or probabilistic.

Learn what autoencoders are and build one to generate new images

Explaining the mathematics behind generative learning and latent variable models, and how variational autoencoders (VAEs) were formulated (code included)
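The formulation referred to above centers on the evidence lower bound (ELBO), which in standard VAE notation reads:

```latex
\log p_\theta(x) \;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction term}}
\;-\;
\underbrace{D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)}_{\text{regularization term}}
```

Maximizing the right-hand side jointly trains the decoder $p_\theta(x \mid z)$ to reconstruct the data and the encoder $q_\phi(z \mid x)$ to stay close to the prior $p(z)$.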

Attention and Transformers have become the standard in NLP applications, and they are entering computer vision as well

New to natural language processing? This is the ultimate beginner’s guide to the attention mechanism and sequence learning to get you started

An intuitive understanding of Transformers and how they are used in machine translation. After analyzing all the subcomponents one by one, such as self-attention and positional encodings, we explain the principles behind the encoder and decoder and why Transformers work so well.
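The self-attention subcomponent mentioned above reduces to the scaled dot-product attention formula, sketched here in NumPy for a single head; the function name and the random query/key/value matrices are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (num_queries, num_keys) similarities
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V, weights

# Toy usage: 4 query positions attending over 6 key/value positions, d_k = 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
```

The division by `sqrt(d_k)` keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.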