Integrations and your ML layer
All I really want to write about this week is the demo from OpenAI of the new Codex API they released and how potentially dangerous that technology could be from both an ethical and a practical perspective. Check out the link below; it's first up in the next section.
Back on August 11th, after watching that demo, I wrote the following:
“Yesterday, I watched the OpenAI Codex demo on my living room television. It was potentially that important of a demo of a new technology. It is almost an implementation of conversational coding. Watching a generative, interactive, code-interpolation-based language model work during a demo was super interesting. That technology could very well be the future of coding interactions with a computing interface. I’m really worried about the ethical considerations of such a technology. That was the first thing that came to mind after the realization of what they had built. The OpenAI team should be working diligently on an ethical boundaries module. That course of action should probably be a top priority for that entire group. It is entirely possible that right now Boston Dynamics’ dog has a working machine learning Spot API and this technology could functionally teach and command it. Consider for the moment, “What are the ethical boundaries of how the OpenAI Codex should be used to issue programming and commands using the Spot API?” To me that is a very real concern and something that should be deeply considered before more complex robotic implementations are built out and operationalized.”
Sitting here doing some searching about machine learning integrations, I have learned that the autocomplete feature in Google Chrome keeps trying to get me to click on machine learning interview questions. Apparently, researching interview questions is a much more popular search than researching integrations.[1] When I finally got past my shock on that one (it took a few seconds), I realized that four different advertisements were displayed before the actual results. People are paying for those placements. I’m sure the marketing departments for Domo, Alteryx, Dataiku, and BMC were well intentioned, but none of the companies that won the bidding rights to display advertisements to me is going to get any of my business. Integrations with your machine learning layer are hard. Full stop. There is no way around that reality at the moment. The interesting part of the equation is that on Google Scholar you don’t immediately see articles about difficult integrations.[2]
Someone more practical, like the folks over at Towards Data Science, will pretty quickly tell you why it is so hard to integrate machine learning into your applications.[3] People are wholesale building machine learning into applications, and you can call and work with APIs from external vendors within your workflow without all that much difficulty. Those APIs are going to be well defined in terms of exactly what you need to send and exactly what you are going to get back. All of the APIs from GCP, AWS, Azure, or a host of other machine learning providers will be well scripted, and the only real concerns are cost and latency. Setting up your own machine learning stack and serving a machine learning model yourself is going to be more difficult. We are now seeing a lot more end-to-end platforms for deploying and managing machine learning pipelines for rock-solid integrations.[4] All of the groundwork is being done to make integrations between the workflow and the ML layer easier. One of the central areas I spend time studying is MLOps, and that integration problem is one that people are eagerly working to remedy. Tools like Weights & Biases keep adding functionality, but they are not full-stack mission control systems yet. That is where the MLOps space is going, but it is taking some time for that trend to coalesce.
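To make the "well defined in terms of what you send and what you get back" point concrete, here is a minimal sketch of what integrating against a hosted prediction API tends to look like. The endpoint URL and the request/response shapes below are hypothetical stand-ins, not any particular vendor's contract; the real GCP, AWS, and Azure APIs each define their own, but the pattern of packaging a request and unpacking a response is the same.

```python
import json

# Hypothetical endpoint for illustration only -- real vendor APIs
# (GCP, AWS, Azure, etc.) publish their own URLs and payload schemas.
ENDPOINT = "https://ml.example.com/v1/sentiment"


def build_request(text: str) -> str:
    """Package the input exactly as the (hypothetical) API expects it."""
    return json.dumps({"instances": [{"content": text}]})


def parse_response(body: str) -> float:
    """Pull the prediction out of the (hypothetical) response shape."""
    payload = json.loads(body)
    return payload["predictions"][0]["score"]


if __name__ == "__main__":
    # In a real workflow you would POST build_request(...) to ENDPOINT
    # with an HTTP client and hand the response body to parse_response(...).
    req = build_request("Integrations are hard.")
    print(req)

    # Simulated response body, standing in for what the vendor returns.
    fake_body = json.dumps({"predictions": [{"score": 0.12}]})
    print(parse_response(fake_body))
```

The whole integration burden collapses to two small translation functions because the vendor owns the model, the serving stack, and the contract; when you serve your own model, you have to define and maintain all three yourself, which is exactly why that path is harder.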
Links and thoughts:
Take 30 minutes and watch the future from OpenAI “OpenAI Codex Live Demo”
Check out this week’s episode of Machine Learning Street Talk “#58 Dr. Ben Goertzel - Artificial General Intelligence”
Yannic Kilcher was at it again this week, “[ML NEWS] Apple scans your phone | Master Faces beat face recognition | WALL-E is real”
Much later in the day I watched the entire WAN show this week with Linus and Luke, “NVIDIA's CEO is FAKE - WAN Show August 13, 2021”
Top 5 Tweets of the week:
Footnotes:
[1] I’m adding machine learning interview questions as a topic for week 53 of The Lindahl Letter.
[4] https://www.tensorflow.org/tfx
What’s next for The Lindahl Letter?
Week 31: Edge ML integrations
Week 32: Federating your ML models
Week 33: Where are AI investments coming from?
Week 34: Where are the main AI Labs? Google Brain, DeepMind, OpenAI
Week 35: Explainability in modern ML
I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed reading this content, then please take a moment and share it with a friend.