Explainability in modern ML
This is one of those topics that seems so obvious that it probably requires some deeper consideration. Boardrooms from coast to coast are hearing from all sorts of vendors and consultants about the gold hiding in the mountains of machine learning these days, and the fear of missing out does not abide inaction. When you actually start doing some machine learning within your workflows and as part of your use cases, the next question inevitably relates to the mechanics of how it happened. You can see the path your use case is going to take, but the question of why the modeling went a certain direction can be a bit of a black box. That is where explainability in modern machine learning has hit a crossroads of sorts. A lot of journal articles contain the words explainability and machine learning.[1] Seriously, academics talk about this a lot. Outside of that wealth of academic articles, companies like Weights & Biases are working to keep track of what is happening and to log all the steps along the way.[2] All of that helps you recreate the process of getting to machine learning deployment, but it does not necessarily explain what is happening. It more or less just shows you how to do it again.
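To make that tracking-versus-explaining distinction concrete, here is a minimal sketch of the kind of run logging a tool like Weights & Biases enables. The project name, config values, and metrics below are placeholders of my own invention; the point is that this records how a run happened, which is not the same thing as explaining why the model behaves the way it does.

```python
# Minimal experiment-tracking sketch (assumes `pip install wandb` and `wandb login`).
# Project name, config values, and logged metrics are illustrative placeholders.
import wandb

run = wandb.init(
    project="explainability-demo",  # hypothetical project name
    config={"learning_rate": 0.01, "epochs": 5, "seed": 42},
)

for epoch in range(run.config.epochs):
    # In a real workflow these numbers would come out of your training loop.
    placeholder_loss = 1.0 / (epoch + 1)
    wandb.log({"epoch": epoch, "loss": placeholder_loss})

run.finish()
```

Everything about the run gets captured so it can be replayed, but nothing in that log tells you why a particular prediction came out the way it did.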
After a bit of digging into the topic, a lot of content starts to talk about the difference between explainability and interpretability.[3] Being able to go step by step and interpret what is happening is a different sort of activity than trying to explain the outcome of the machine learning model to somebody. I dig a bit deeper and focus on two other considerations. First, I’m always curious about how something can be reproduced. That raises the question, “Did the model training succeed by accident, or is it something that can be replicated and understood as a process?” Second, I try to understand what potential biases or ethical considerations are present in either the data or the use case. Those two considerations combine to set the stage for understanding machine learning modeling at the very start of the process. Waiting until after the fact to consider those things means the design choices that could have helped at the start are already locked in. It is easier to understand what is being built at the very start of the process than to try to change the architecture of something that is already built or well on the way to deployment. A sketch of what those two checks might look like in practice follows below.
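Here is a minimal sketch of those two checks using scikit-learn: pin the random seeds so a training run can be replicated, then use permutation importance as a first-pass read on which features the model actually leans on, which is often where proxy-for-bias features surface. The synthetic dataset, model choice, and seed value are assumptions for the example, not a prescription.

```python
# Sketch of two early-stage checks: reproducibility and a quick feature-importance read.
# Uses scikit-learn; the synthetic dataset and model choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

SEED = 42  # pinning a seed everywhere is the cheap first step toward replication

X, y = make_classification(n_samples=1000, n_features=8, random_state=SEED)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=SEED
)

model = RandomForestClassifier(random_state=SEED).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure how much the score drops.
# The features the model leans on hardest are where the bias questions should start.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=SEED)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Run it twice with the same SEED and the numbers come out identical, which answers the replication question at toy scale; the importance ranking is then the starting point for asking whether the model relies on something it should not.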
Maybe you were looking for more coverage of this topic? If that is the case, you are in luck… Go check out this really excellent video, “Explainability, Reasoning, Priors and GPT-3,” where Dr. Tim Scarfe and Dr. Keith Duggar discuss explainability, reasoning, priors, and GPT-3 while working through Christoph Molnar's book on interpretability. The video is from back on September 16, 2020, but it is still excellent.
Links and thoughts:
Some congratulations are owed to Yannic Kilcher for hitting 100,000 subscribers on YouTube… “Celebrating 100k Subscribers! (w/ Channel Statistics)” This is a good sign that the theory that only about 10,000 people really deeply care about AI/ML is not accurate.
Petar Veličković from DeepMind gave this talk I really enjoyed, “Intro to graph neural networks (ML Tech Talks)”
Yannic this week shared, “[ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog”
We got a 3 hour special edition of Machine Learning Street Talk this week, “#60 Geometric Deep Learning Blueprint (Special Edition)”
“Did Apple Just Prove the iPhone Could be Cheaper? - WAN Show September 17, 2021”
Top 5 Tweets of the week:
Footnotes:
[3] https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html
What’s next for The Lindahl Letter?
Week 36: AIOps/MLOps: Consumption of AI Services vs. operations
Week 37: Reverse engineering GPT-2 or GPT-3
Week 38: Do most ML projects fail?
Week 39: Machine learning security
Week 40: Applied machine learning skills
I’ll try to keep the “what’s next” list forward-looking with at least five weeks of posts in planning or review. If you enjoyed reading this content, then please take a moment and share it with a friend.