The Lindahl Letter

Papers critical of ML

We will reach the 104th Substack post of The Lindahl Letter before you know it. Things are moving along, and we are on the very tail end of that journey. Don't panic about the number of links in this post. (Spoiler alert) You will have plenty of perspectives linked below for your reading pleasure this week.

This is one of the topics that deserves a lot of attention. I circle back to asking people to always consider ethics within the context of both ML and AI. One of the considerations within that argument is to really try to understand the outcomes the technology is being used to achieve, along with the negative externalities that could follow from its use. You have heard me say it before and you will certainly hear it again: "Just because you can do a thing does not mean you should." That is a real consideration within the AI/ML space. A lot of the things that can be done with both ML and AI are unconscionable and should be avoided. That is why ethics should be a core part of the AI/ML journey without question. Full stop.

Beyond that consideration, I tried to gather up a bunch of papers critical of ML in general. In practice, it was actually much harder than I expected to find written criticism of ML in published article form. You are certainly welcome to go out and subscribe to the Substack of Gary Marcus, who publishes "The Road to AI We Can Trust" [1]. A good scroll across those posts will give you a real sense of the criticism and questions surrounding the ML space. It has some really solid engagement as well from people who care enough to deeply question things. I wholesale consider that to be a healthy part of the process.

Marcus, G. (2018). Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631. https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf

Nakkiran, P., Kaplun, G., Bansal, Y., Yang, T., Barak, B., & Sutskever, I. (2021). Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12), 124003. https://arxiv.org/pdf/1912.02292.pdf

Lake, B., & Baroni, M. (2018). Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. https://openreview.net/pdf?id=H18WqugAb

Mitchell, M. (2021). Why AI is harder than we think. arXiv preprint arXiv:2104.12871. https://arxiv.org/pdf/2104.12871.pdf

Biderman, S., & Scheirer, W. J. (2020). Pitfalls in machine learning research: Reexamining the development cycle. http://proceedings.mlr.press/v137/biderman20a/biderman20a.pdf

Henderson, P., & Brunskill, E. (2018). Distilling information from a flood: A possibility for the use of meta-analysis and systematic review in machine learning research. arXiv preprint arXiv:1812.01074. https://arxiv.org/pdf/1812.01074.pdf

Vinny, P. W., Garg, R., Padma Srivastava, M. V., Lal, V., & Vishnu, V. Y. (2021). Critical appraisal of a machine learning paper: A guide for the neurologist. Annals of Indian Academy of Neurology, 24(4), 481–489. https://doi.org/10.4103/aian.AIAN_1120_20

You can find a bit more content outside of scholarly articles when it comes to critics of machine learning. I'm going to share a handful of links to different things that I found interesting while researching this Substack post.

“5 myths about learning and innateness” 

https://open.substack.com/pub/garymarcus/p/5-myths-about-learning-and-innateness?r=8oh0m&utm_campaign=post&utm_medium=web

“The Limitations of Machine Learning”

https://towardsdatascience.com/the-limitations-of-machine-learning-a00e0c3040c6

“When Machine Learning Goes Off the Rails”

https://hbr.org/2021/01/when-machine-learning-goes-off-the-rails

“The way we train AI is fundamentally flawed”

https://www.technologyreview.com/2020/11/18/1012234/training-machine-learning-broken-real-world-heath-nlp-computer-vision/

“Why deep-learning AIs are so easy to fool”

https://www.nature.com/articles/d41586-019-03013-5

“How a Pioneer of Machine Learning Became One of Its Sharpest Critics”

https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/

“AI researchers allege that machine learning is alchemy”

https://www.science.org/content/article/ai-researchers-allege-machine-learning-alchemy

Links and thoughts:

“Generative AI is Here. Who Should Control It?”

“Twitter is now an Elon Musk company”

“Apple's new App Store tax, Microsoft Surface reviews, and Meta's earnings”

“Emergency Pod: Elon Musk Owns Twitter”

Top 5 Tweets of the week:

Footnotes:

[1] Gary Marcus’s Substack “The Road to AI We Can Trust”

What’s next for The Lindahl Letter?

  • Week 94: AI hardware (RISC-V AI Chips)

  • Week 95: Quantum machine learning

  • Week 96: Generative AI: Where are large language models going?

  • Week 97: MIT’s Twist Quantum programming language

  • Week 98: Deep generative models

I’ll try to keep the what’s next list forward-looking, with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
