The Lindahl Letter

That ML model is not an AGI

A lot of people talk about deploying AI in the business world, and almost all of that conjecture actually describes deploying a machine learning model into a production environment or some interesting POC. If those same people ever deploy an actual AI product into production, they will hopefully see the difference. The two are not the same. Much of the AI hype is underpinned by advances in machine learning. Artificial general intelligence, more commonly abbreviated as AGI, represents an interesting summation of possibility contained in a name. You have seen representations of AGIs in books, movies, comics, and all sorts of works of fiction. At the moment, machine learning models are generally trained to do one thing well and cannot pick up and learn new tasking the way a person would.

That is why most fiction writers do not bother to include a machine learning model as the antagonist in their stories. The expectation is that a person (or villain, for that matter) would be able to pick up and learn tasking for a wide variety of purposes. That expectation gets rolled up into what an AGI would be expected to achieve in practice: the AGI could complete a mix of tasking just like a person would be able to handle. You would need a large number of machine learning models to complete the tasking that a person does in a single day. You could test this as a practical exercise with a sheet of paper and a pen. As you built up the list of machine learning models needed to accomplish all the various tasking throughout the day, it would become very obvious that your ML model is not an AGI, because your list would be much longer than a single model, or even a small collection of models sorted out by a piece of software upfront. To be fair to the idea contained within that point, we don't even have a good method to switch between a collection of ML models to complete a variety of tasks.
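The "collection of models" problem above can be sketched as a naive dispatch table: one entry per narrow capability, with a router picking the model for each task. Every function and name here is a hypothetical stand-in, not a real system; deciding which model actually applies to an arbitrary request is exactly the unsolved part.

```python
def sentiment_model(text: str) -> str:
    # Stand-in for a trained single-task sentiment classifier.
    return "positive" if "great" in text.lower() else "negative"

def summarize_model(text: str) -> str:
    # Stand-in for a trained single-task summarization model:
    # just returns the first sentence.
    return text.split(".")[0] + "."

# One entry per narrow capability. A person handles all of these
# (and thousands more) without an explicit registry like this.
MODEL_REGISTRY = {
    "sentiment": sentiment_model,
    "summarize": summarize_model,
}

def route_task(task: str, payload: str) -> str:
    """Dispatch a task to its single-purpose model, if one exists."""
    model = MODEL_REGISTRY.get(task)
    if model is None:
        # The registry covers only what was explicitly trained for;
        # anything else fails rather than being learned on the fly.
        raise KeyError(f"No model trained for task: {task!r}")
    return model(payload)
```

Listing out your day's tasking quickly overflows any registry like this, which is the point: a pile of narrow models plus a hard-coded switch is not general intelligence.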

Artificial general intelligence - Let’s begin by digging into a few books and papers related to AGI before introducing the ML part of the equation. This will help create a foundation for the concept and scholarly evaluation of AGI without spending as much time on ML. You will find a theme in the literature here, where Goertzel is prominently featured.

Goertzel, B., Orseau, L., & Snaider, J. (2015). Artificial general intelligence. Scholarpedia, 10(11), 31847. http://var.scholarpedia.org/article/Artificial_General_Intelligence

Goertzel, B. (2007). Artificial general intelligence (Vol. 2). C. Pennachin (Ed.). New York: Springer. https://www.researchgate.net/profile/Prof_Dr_Hugo_De_GARIS/publication/226000160_Artificial_Brains/links/55d1e55308ae2496ee658634/Artificial-Brains.pdf

Goertzel, B. (2014). Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1. https://sciendo.com/abstract/journals/jagi/5/1/article-p1.xml

I did discover along the way that Dr. Ben Goertzel, whose papers are referenced above, has made a lot of content on YouTube. You may remember some of the Sophia the robot content (Hanson Robotics) from 2016 to 2018, as it was fairly prevalent in the media. You can read an article from The Verge about this one [1]. If you wanted to dig into a more video-based set of content, then feel free to check out this 7-video playlist on the general theory of general intelligence:

https://www.youtube.com/playlist?list=PLAJnaovHtaFTK9E1xHnBWZeKtAOhonqH5

Machine learning – This next set of research will consider both ML and AGI together.

Pei, J., Deng, L., Song, S., Zhao, M., Zhang, Y., Wu, S., ... & Shi, L. (2019). Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature, 572(7767), 106-111. https://aiichironakano.github.io/cs653/Pei-ArtificialGeneralIntelligenceChip-Nature19.pdf

Silver, D. L. (2011, August). Machine lifelong learning: Challenges and benefits for artificial general intelligence. In International conference on artificial general intelligence (pp. 370-375). Springer, Berlin, Heidelberg. https://www.researchgate.net/profile/Daniel-Silver-3/publication/221328970_Machine_Lifelong_Learning_Challenges_and_Benefits_for_Artificial_General_Intelligence/links/00463515d5bc70ed5c000000/Machine-Lifelong-Learning-Challenges-and-Benefits-for-Artificial-General-Intelligence.pdf

Goertzel, B. (2014). Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1. https://sciendo.com/abstract/journals/jagi/5/1/article-p1.xml

Conclusion – Back during week 62, I started to question how close we were to touching the singularity, and that question aligns somewhat with when we will see a true AGI. A well-referenced paper was mentioned, titled “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” published in 2016 by Vincent C. Müller and Nick Bostrom [2].

Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555-572). Springer, Cham. https://philpapers.org/rec/MLLFPI

Within that paper they note that the surveyed experts collectively placed a 50% chance on a general artificial intelligence, or AGI, being created between 2040 and 2050. Keep in mind that debating when it will happen does not judge the ethics of creating it or what purpose it would have. Arguments can be made, and are being made, about whether the singularity is inherently good or bad for civil society and civility in general. That is not a consideration I’m working with at the moment. My interest here is in the event itself, or more to the point, the moment right before the event occurs. I did go back and read an article from a 2015 issue of The New Yorker online called “The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?” [3]. That article is principally about Nick Bostrom and does consider both utopia and destruction, if you want to go give it a read.

Links and thoughts:

This was a really solid conversation between Kara and Chris. The discussion of journalistic ethics and making choices is what caught my attention. “Chris Cuomo’s Comeback”


Footnotes:

[1] https://www.theverge.com/2017/11/10/16617092/sophia-the-robot-citizen-ai-hanson-robotics-ben-goertzel

[2] https://philpapers.org/rec/MLLFPI

[3] https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

What’s next for The Lindahl Letter?

  • Week 90: What is probabilistic machine learning?

  • Week 91: What are ensemble ML models?

  • Week 92: National AI strategies revisited

  • Week 93: Papers critical of ML

  • Week 94: AI hardware (RISC-V AI Chips)

I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
