Neuroscience is a complex topic to dig into in general. Studying the nervous system is challenging enough before you add in machine learning or artificial intelligence, and within the context of machine learning it gets even more interesting for academic researchers, practitioners, and anybody building neural networks. With that complexity in mind, this section focuses on 5 scholarly articles that provide solid context for the relationship between neuroscience and machine learning. The selected articles bring forward the complexity of the issue, with a lot of focus on how the two fields work together and the future of that collaboration.
Savage, N. (2019). How AI and neuroscience drive each other forwards. Nature, 571(7766), S15. https://www.nature.com/articles/d41586-019-02212-4
Richards, B. A., Lillicrap, T. P., Beaudoin, P., Bengio, Y., Bogacz, R., Christensen, A., ... & Kording, K. P. (2019). A deep learning framework for neuroscience. Nature Neuroscience, 22(11), 1761-1770. https://www.nature.com/articles/s41593-019-0520-2
Marblestone, A. H., Wayne, G., & Kording, K. P. (2016). Toward an integration of deep learning and neuroscience. Frontiers in Computational Neuroscience, 10, 94. https://www.frontiersin.org/articles/10.3389/fncom.2016.00094/pdf
Richiardi, J., Achard, S., Bunke, H., & Van De Ville, D. (2013). Machine learning with brain graphs: Predictive modeling approaches for functional imaging in systems neuroscience. IEEE Signal Processing Magazine, 30(3), 58-70. https://archive-ouverte.unige.ch/unige:33936/ATTACHMENT01
Vu, M. A. T., Adalı, T., Ba, D., Buzsáki, G., Carlson, D., Heller, K., ... & Dzirasa, K. (2018). A shared vision for machine learning in neuroscience. Journal of Neuroscience, 38(7), 1601-1607. https://www.jneurosci.org/content/jneuro/38/7/1601.full.pdf
Bonus Papers
This section includes a few additional papers that I have enjoyed and thought you might as well. They are not sorted in any particular order. This section may see the most updates between the first publication of this syllabus and any later revisions. I’m sure papers will be recommended for inclusion, and if they don’t naturally fit into the main structure without overloading the reader, then they will end up here in the bonus papers section.
Marcus, G. (2018). Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631. https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf
Nakkiran, P., Kaplun, G., Bansal, Y., Yang, T., Barak, B., & Sutskever, I. (2021). Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12), 124003. https://arxiv.org/pdf/1912.02292.pdf
Lake, B., & Baroni, M. (2018). Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. https://openreview.net/pdf?id=H18WqugAb
Mitchell, M. (2021). Why AI is harder than we think. arXiv preprint arXiv:2104.12871. https://arxiv.org/pdf/2104.12871.pdf
Biderman, S., & Scheirer, W. J. (2020). Pitfalls in machine learning research: Reexamining the development cycle. http://proceedings.mlr.press/v137/biderman20a/biderman20a.pdf
Henderson, P., & Brunskill, E. (2018). Distilling information from a flood: A possibility for the use of meta-analysis and systematic review in machine learning research. arXiv preprint arXiv:1812.01074. https://arxiv.org/pdf/1812.01074.pdf
Links and thoughts:
“The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi)”
“The Man behind Stable Diffusion”
“Lab Naming Controversy - WAN Show August 26, 2022”
Top 6 Tweets of the week:
Research Note:
You can find the files from the syllabus being built on GitHub. The latest version of the draft is shared via exports whenever changes are made. https://github.com/nelslindahlx/Introduction-to-machine-learning-syllabus-2022
What’s next for The Lindahl Letter?
Week 86: Ethics, fairness, bias, and privacy (ML syllabus edition 7/8)
Week 87: MLOps (ML syllabus edition 8/8)
Week 88: The future of publishing
Week 89: Your ML model is not an AGI
Week 90: What is probabilistic machine learning?
Week 91: What are ensemble ML models?
Week 92: National AI strategies revisited
Week 93: Papers critical of ML
Week 94: AI hardware (RISC-V AI Chips)
Week 95: Quantum machine learning
I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.