Strengthening Deep Learning Concepts
A couple of weeks ago, I posted about how the TensorFlow Certification helped me overcome my imposter syndrome within the deep learning community. The studying, and as a result the certification itself, focused primarily on the high-level implementation of various model architectures in the TensorFlow framework. That meant that even though I was certified, I felt there was still a lot left to learn about deep learning.
I wanted to know what was truly happening under the hood in deep learning: to understand the low-level algorithms that make up each individual part of a model. I wanted to be able to make intelligent choices in my own research and development, and I knew that learning the mathematics behind deep learning was the only way to gain that confidence.
I set up a learning plan similar to how I approached the TensorFlow Certification, pairing multiple sources of information (Coursera, Grokking Deep Learning, and miscellaneous YouTube videos) to get several perspectives on the fundamental deep learning algorithms. The table below outlines my study plan, with YouTube used to supplement areas where I wanted more detail:
| Coursera Course | Grokking Chapters |
|---|---|
| Neural Networks and Deep Learning | 3, 4, 5, 6 |
| Improving Deep Neural Networks | 7, 8, 9 |
| Deep Learning Frameworks | |
| Convolutional Neural Networks | 10 |
| Sequence Models | 11, 12, 13, 14 |
The confidence and skills I gained from my TensorFlow Certification helped me get through this lesson plan in under two weeks without doubting myself or my abilities. My experience with TensorFlow provided high-level context while I studied the underlying formulas.
By the end of the two weeks, I had implemented my own forward propagation, backpropagation, regularization, loss functions, optimizers, and standard network layers. Having built it all from scratch, I now have a better understanding of how each hyperparameter affects individual parts of the network, and how those parts affect the model's learning as a whole. This knowledge will help not only when building my own models, but also when debugging errors and selecting training data.
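To make that concrete, here is a minimal sketch (in NumPy, not my actual study code) of what implementing forward propagation, backpropagation, and a plain gradient-descent update from scratch looks like for a tiny two-layer network. The data, layer sizes, and learning rate are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 regression target each.
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Parameters for a 3 -> 4 -> 1 network.
W1 = rng.normal(scale=0.1, size=(3, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.1, size=(4, 1))
b2 = np.zeros((1, 1))

lr = 0.1  # learning rate, one of the hyperparameters mentioned above

for step in range(100):
    # Forward propagation.
    z1 = X @ W1 + b1
    a1 = np.maximum(z1, 0)        # ReLU activation
    y_hat = a1 @ W2 + b2          # linear output layer

    # Mean squared error loss.
    loss = np.mean((y_hat - y) ** 2)

    # Backpropagation: apply the chain rule layer by layer.
    d_yhat = 2 * (y_hat - y) / len(X)
    dW2 = a1.T @ d_yhat
    db2 = d_yhat.sum(axis=0, keepdims=True)
    d_a1 = d_yhat @ W2.T
    d_z1 = d_a1 * (z1 > 0)        # gradient through ReLU
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # Plain gradient-descent parameter updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```

Even in a sketch this small, you can see where regularization terms, different loss functions, or a fancier optimizer would slot in, which is exactly the kind of intuition the from-scratch exercise builds.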
The benefits of what I learned aren't limited to developing my own models; they will also play a large part in researching existing ones. By understanding the effect of each aspect of deep learning, I can better comprehend research papers and, hopefully, build upon existing work instead of just copying it.
After all of this, I can finally step into the world of deep learning and start developing. Next up: put all of this into real projects, break away from studying, and finally start building things on my own!