Fairness and machine learning
Tools are starting to roll out to help with fairness and bias detection in machine learning. Some of them are aimed at mitigation, while others are genuine efforts at remediation. The most important part of the process, though, is recognizing fairness as a concept in the first place. You have to understand what bias can mean within machine learning before you can start working on mitigation and remediation. Making the process definable and repeatable creates the potential for quality, and within that framework a high-quality, definable, and repeatable process can account for fairness across the machine learning development and deployment life cycle. People have been tackling this topic for a long time. You can look back at this Fall 2017 course from UC Berkeley to get a sense of where we were.[1] You can still watch the introduction over on Vimeo.[2] You really get the sense from that lecture that they were hopeful we could address algorithmic bias in a meaningful way and reduce the bias baked into human-centric data as it is collected and evaluated.
In the years since that lecture was published, we have seen a variety of things built to help consider bias beyond purely technical concerns. I read a 2018 article, cited 331 times, from Sam Corbett-Davies and Sharad Goel called “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.”[3] It helped reinforce for me that people really have been trying to think deeply about fairness in machine learning. Unfortunately, that effort to educate did not translate into how algorithms and machine learning were used in practice. A lot of use cases were tackled based on the art of what was possible rather than on how appropriate and fair the application of that technology actually was.
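That paper surveys formal measures of fairness, so as a toy illustration (not the paper’s own code, and not any particular toolkit’s API) here is a minimal sketch of two common group-level checks, demographic parity and false positive rate parity, computed over hypothetical predictions with plain NumPy. Every array and function name below is made up for the example.

```python
# A toy sketch of two group fairness checks discussed in the fairness
# literature: demographic parity (do groups receive positive predictions
# at similar rates?) and false positive rate parity (a classification
# parity style check). All data here is hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive prediction rates between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def false_positive_rate_gap(y_true, y_pred, group):
    """Largest gap in false positive rates between any two groups."""
    fprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 0)   # true negatives in group g
        fprs.append(y_pred[mask].mean())      # share wrongly flagged positive
    return max(fprs) - min(fprs)

# Hypothetical labels, model predictions, and a sensitive attribute.
y_true = np.array([0, 1, 0, 1, 0, 0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 1, 1, 0, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("False positive rate gap:", false_positive_rate_gap(y_true, y_pred, group))
```

The arithmetic in checks like these is easy; deciding which definition of fairness (if any) actually fits a given application is the hard part, and that is exactly the kind of question the review digs into.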
Links and thoughts:
I watched this video about how to “Build a custom ML model with Vertex AI” via Google Cloud Tech
Ok. I watched an episode of the AI Show from Microsoft Developers “AI Show Live - Episode 19 - Improving customer experiences with Speech to Text and Text to Speech”
This week I watched about 25 minutes of Luke and Linus mostly during the writing process of this post… “I want to talk to you about Windows 11 - WAN Show Jun 25, 2021”
Yannic took a look at a paper this week, “XCiT: Cross-Covariance Image Transformers (Facebook AI Machine Learning Research Paper Explained)”; note that it has an audience of about 10,000 people.
Top 6 Tweets of the week:
Footnotes:
[1] https://fairmlclass.github.io/ CS 294: Fairness in Machine Learning, UC Berkeley, Fall 2017
[2] The introduction lecture is on Vimeo and has 8,855 views since its release in 2017
[3] https://arxiv.org/pdf/1808.00023.pdf Corbett-Davies, Sam, and Sharad Goel, “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning,” 2018
What’s next for The Lindahl Letter?
Week 24: Evaluating machine learning
Week 25: Teaching kids ML
Week 26: Machine learning as a service
Week 27: The future of machine learning
Week 28: Machine learning certifications?
I’ll try to keep the what’s next list forward-looking, with at least five weeks of posts in planning or review. If you enjoyed reading this content, then please take a moment and share it with a friend.