The post this week is going to be on the shorter side of things. I think that is in part due to the very straightforward nature of the topic under consideration. It really could have just been a link to a single book on the subject with a polite note that reading it would help you understand pretty much everything you need to know. To that end, it looks like the book on probabilistic machine learning from Kevin Patrick Murphy has been downloaded 168 thousand times [1]. That is pretty darn good for something in the machine learning space, where interest in any given topic generally tops out somewhere around 10,000 people. It appears that Kevin surpassed that ceiling by a wide margin. The book is very easy to get to, and the search engines really seem to algorithmically love it as well. Given that this topic includes a lot of references to Bayesian decision theory, you probably could have predicted that it would get my full attention. The thing that grounds all of my efforts in the machine learning space is my background in statistics and my enjoyment of working with Bayesian pooling.
Breaking down the idea of probabilistic machine learning involves understanding two general steps. First, you must accept that you want to explain observed data with your machine learning models. Second, those explanations need to come from inferring plausible models that help you provide that explanation. Together those two steps let you begin to evaluate data in a probabilistic way, which means you are aided by the power of statistical probability grounding you in a rational approach. To me that spells out an approach that is not based on randomness or anything particularly chaotic.
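To make those two steps a little more concrete, here is a minimal sketch of my own in Python. It is a toy Beta-Bernoulli coin-flip example (my illustration, not something pulled from Murphy's book): the observed flips are the data we want to explain, and Bayes' rule tells us which candidate biases remain plausible explanations after seeing that data.

```python
# A toy sketch of the two steps above (illustrative only).
# Step 1: observed data we want to explain.
# Step 2: infer which candidate models plausibly explain it.
import numpy as np
from scipy import stats

# Observed coin flips (1 = heads, 0 = tails).
observations = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 0])

# Each candidate "model" is a bias theta in [0, 1]. Start from a
# uniform Beta(1, 1) prior and let the data reweight the candidates.
heads = int(observations.sum())
tails = int((observations == 0).sum())
posterior = stats.beta(a=1 + heads, b=1 + tails)  # conjugate update

# The posterior summarizes which explanations remain plausible.
lo, hi = posterior.interval(0.95)
print(f"Posterior mean bias: {posterior.mean():.2f}")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")

# Predictions then average over all plausible models instead of
# betting everything on a single point estimate.
print(f"P(next flip is heads) = {posterior.mean():.2f}")
```

The details get far more elaborate in the papers listed below, but the loop stays the same: observed data comes in, plausible models get inferred, and the uncertainty about those models gets carried forward rather than thrown away.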
Murphy, K. P. (2012). Machine learning: a probabilistic perspective. MIT Press. https://research.google/pubs/pub38136.pdf
Probabilistic machine learning papers
Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452-459. https://www.repository.cam.ac.uk/bitstream/handle/1810/248538/Ghahramani%25202015%2520Nature.pdf?sequence=1
Rain, C. (2013). Sentiment analysis in amazon reviews using probabilistic machine learning. Swarthmore College. https://www.sccs.swarthmore.edu/users/15/crain1/files/NLP_Final_Project.pdf
Probabilistic deep learning papers
Nie, S., Zheng, M., & Ji, Q. (2018). The deep regression bayesian network and its applications: Probabilistic deep learning for computer vision. IEEE Signal Processing Magazine, 35(1), 101-111. https://sites.ecse.rpi.edu/~cvrl/Publication/pdf/Nie2018.pdf
Peharz, R., Vergari, A., Stelzner, K., Molina, A., Shao, X., Trapp, M., ... & Ghahramani, Z. (2020, August). Random sum-product networks: A simple and effective approach to probabilistic deep learning. In Uncertainty in Artificial Intelligence (pp. 334-344). PMLR. http://proceedings.mlr.press/v115/peharz20a/peharz20a.pdf
Andersson, T. R., Hosking, J. S., Pérez-Ortiz, M., Paige, B., Elliott, A., Russell, C., ... & Shuckburgh, E. (2021). Seasonal Arctic sea ice forecasting with probabilistic deep learning. Nature Communications, 12(1), 1-12. https://www.nature.com/articles/s41467-021-25257-4?tpcc=nleyeonai
Links and thoughts:
“How Arm conquered the chip market without making a single chip, with CEO Rene Haas”
Top 5 Tweets of the week:
Footnotes:
[1] https://probml.github.io/pml-book/book1.html
What’s next for The Lindahl Letter?
Week 91: What are ensemble ML models?
Week 92: National AI strategies revisited
Week 93: Papers critical of ML
Week 94: AI hardware (RISC-V AI Chips)
Week 95: Quantum machine learning
I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.