Steps toward neural networks. Slow steps. I have been working through Parag Mital’s Creative Applications of Deep Learning with TensorFlow, but there is hardship ahead. The first two lessons were rather heavy, especially as they required adapting to Python as well as to a cohort of libraries and utilities such as NumPy, SciPy, and Matplotlib, plus a not-so-smooth recap of matrices and other mathematical beasts.
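To give a flavour of the kind of matrix work those first lessons lean on, here is a minimal NumPy sketch. The names and shapes are my own illustration, not taken from the course: it just shows the matrix-multiply-plus-bias pattern that sits at the heart of a dense network layer.

```python
import numpy as np

# A batch of 3 inputs, each flattened to 4 features
# (e.g. tiny images unrolled into vectors).
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0, 8.0],
              [9.0, 10.0, 11.0, 12.0]])

# A weight matrix mapping 4 features to 2 outputs, and a bias vector.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 2))
b = np.zeros(2)

# The core of a dense layer: matrix multiplication plus a broadcasted bias.
out = X @ W + b
print(out.shape)  # (3, 2): one 2-dimensional output per input row
```

Nothing deep here, but getting comfortable with shapes and broadcasting is exactly the hurdle the early notebooks present.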
The notebook sessions, both lectures and exercises, are particularly tough and steep at first, although now that I’m approaching lesson 3 things might be a little less abstruse. (I also have a tendency to find things very hard at first and easier as I get deeper into them, even as the abstraction levels increase.)
Another issue I’m facing is that I haven’t found text-oriented material that attracted me as much as, say, Mital’s course or similar resources. This is probably due to a certain weakness of mine for the ‘nec plus ultra’ smell deep learning has. I can already see clearly that my next direction, as soon as the whole machine learning business becomes a little less intractable, is properly to combine my textual interests with machine/deep learning.
Found a Master’s thesis by someone called Partiksha Taneia, ‘Textual Generation Using Different Recurrent Neural Networks’, as well as a rather austere machine-read video summarising it.
Various resources around Neural Networks and Machine Learning in general, showing how deep the rabbit hole goes (which can be a little overwhelming):
- Full courses by Andrew Ng on Deep Learning (here as well) and on Machine Learning, which can be found on Coursera;
- Far too many videos by Stanford University on various areas of Machine Learning.
I had a go at a few specific topics, which was productive but also difficult to sustain, as it seemed that all my energy could easily be engulfed by the amount of technique (mathematical or otherwise) required to reach that level.