

I was about to put on “Coded Bias” on Netflix when Dad asked me a curious question: Why is AI important?
I could have answered by talking about the impact of AI/ML on our everyday lives, i.e., how it affects the everyday people who use technology and software.
But I opted to approach this question as a software engineer instead. Specifically, I approached this question as someone who takes deep interest in how software came to be, historically speaking.
Through a short-yet-rambling answer, which turned into an hours-long discussion between us, I told him that the whole point of programming was to implement an artificially intelligent machine; and so, AI is important because it is the whole reason computer programming exists in the first place.
It was a long discussion.
And I accept that most people, even most software engineers and computer scientists, would not take that viewpoint. But that is where I am. I think AI is important because it was the whole point of modern computer programming. As such, the motivations behind making AI work likely shaped computer programming in its early stages. And AI/ML are certainly applying ‘evolutionary pressures’ (if you will) to how programming languages are designed and how programming itself is thought of.
And given the importance of programming in and of itself, it is worth viewing it through AI’s lens. So, if you will humor me for a bit …
I think of Turing as a pivotal figure in computing and consider him to be the father of modern-day computers and programming. But he was really a mathematician trying to solve artificial intelligence. In 1950, Turing authored a paper titled “Computing Machinery and Intelligence”.
See! It is right there in the title: he is not saying, “Computing Machinery and Finances” or “Computing Machinery and Art”. He is being pretty on the nose about it: he is specifically calling out “Intelligence” and how it may be relevant to computing itself.
He opens the paper with a simple question: “Can machines think?”
Four sections later, he reframes that question to: “Are there digital computers which would do well in the imitation game?”
The “imitation game” that Turing refers to here is popularly known today as the Turing Test. Turing’s original definition of the imitation game is simply this: can an interrogator “C” distinguish between “A” (a human) and “B” (a machine) by engaging both in a series of questions and answers? Or rather, can we create a “B” (a machine) that comes across as “A” (a human)?
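Turing’s setup is really a protocol, and a little code makes it concrete. Everything below, from the function names to the canned answers, is my own illustration rather than anything from the paper:

```python
# A toy rendition of the imitation game: interrogator C exchanges
# questions and answers with A (a human) and B (a machine), then
# guesses which one is the machine.

def imitation_game(interrogator, human, machine, questions):
    """Return True if the machine fools the interrogator."""
    transcripts = {"A": [], "B": []}
    for q in questions:
        transcripts["A"].append((q, human(q)))    # A: the human's answers
        transcripts["B"].append((q, machine(q)))  # B: the machine's answers
    guess = interrogator(transcripts)  # C names "A" or "B" as the machine
    return guess != "B"                # the machine wins if C guesses wrong

# e.g., a machine that parrots a canned answer, and a suspicious interrogator:
fooled = imitation_game(
    interrogator=lambda t: "B",                   # C always accuses B
    human=lambda q: "Let me think about that.",
    machine=lambda q: "Let me think about that.",
    questions=["Can you write me a sonnet?"],
)
print(fooled)  # False: this particular interrogator happened to guess right
```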
He actually takes great pains to define such a machine, or a “digital computer”, but does it in plain terms. He expects a digital computer to have a store (or memory), an executive unit (which carries out the individual operations), and a control (which ensures the machine’s table of instructions is obeyed correctly and in order).
Sound familiar? It is 1950, and Turing has just defined the modern-day computer (desktop- and mobile-class machines alike). But he is not done. He goes on to posit what it means to program such a machine (he spells it ‘programme’ … he was a Brit).
He considers a program to be a table of instructions fed into the machine, i.e., the digital computer. He expects those instructions to be executed by the computer’s executive unit, in the order enforced by its control. And the act of programming is simply to put that table of instructions into the machine.
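To make his three-part machine concrete, here is a minimal sketch in Python. The instruction set and every name in it are my own invention, purely for illustration; Turing specified nothing this particular:

```python
# A toy stored-program machine in the spirit of Turing's description:
# a store holds values, a table of instructions is fed into the machine,
# and an executive loop obeys the instructions while the control tracks
# which one comes next.

def run(program, store):
    pc = 0  # the "control": index of the next instruction to obey
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":      # store[dst] = store[a] + store[b]
            dst, a, b = args
            store[dst] = store[a] + store[b]
        elif op == "PRINT":  # output the contents of one store cell
            print(store[args[0]])
        pc += 1
    return store

# "Programming" the machine is just putting a table of instructions into it:
program = [("ADD", 2, 0, 1), ("PRINT", 2)]
run(program, store={0: 20, 1: 22, 2: 0})  # prints 42
```

Strip away the toy instruction set, and that loop is essentially the fetch-and-execute cycle every modern CPU still runs.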
That is a very accurate and down-to-earth definition of programming in 2022, as given in a paper on intelligent machines, back in 1950, by a mathematician who was thinking about machines that can think.
Back to the future in 2022: we still do not have general-purpose AI that can truly think, reason, learn concepts, and exhibit creativity, indistinguishably from humans. So far, the answer to Turing’s original question is a qualified “No”.
But in his attempt to conjure a thinking machine, he defined the modern-day computer and computer/software programming. Anyone avoiding those roots of programming and software engineering is not being honest about where programming comes from.
The idea of programming existed before Turing, and he himself refers to precursors like the Analytical Engine. But it is his definitions of computers and programming that have endured for the last 72 years, and they show no signs of waning.
That vision of computers and programming may not yet have given rise to thinking machines, in the sense of sentient entities, but it has set humanity on a course of immense digitization and connectivity. And so, it becomes important to understand and appreciate that the way we write code (for programs such as a word processor, an online store, or a search engine) is rooted, at its most primitive, in humanity’s desire to make machines think.
I dare to make this claim: every program we typically write is a deliberately failed attempt at creating a thinking machine, one that could pass Turing’s test. (I am lucky that no one will read this, otherwise I would be taken apart by all corners of the internet 😅)
Even as I think about routine concepts in programming such as immutability, streams, and lambdas, I am reminded of how such concepts have helped software engineers reason about large-scale, big-data, distributed systems, which have in turn been essential to building AI/ML systems (albeit not general-purpose AI, yet).
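To ground that, here is a tiny Python illustration of those three concepts in one pipeline (the data and names are mine, just for demonstration):

```python
# Immutability, streams, and lambdas together: the same functional style
# that big-data frameworks scale out across fleets of machines.

readings = (3, 1, 4, 1, 5, 9, 2, 6)  # a tuple: immutable by construction

# Stream the data through pure transformations, with no shared mutable
# state, which is precisely what makes such pipelines easy to parallelize
# and to reason about at scale.
doubled_evens = map(lambda x: x * 2,
                    filter(lambda x: x % 2 == 0, readings))

print(sum(doubled_evens))  # 24, i.e., (4 + 2 + 6) * 2
```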
So in closing, I contend that even before impacting our lives in the form of driverless cars or smart homes, AI gave us the concepts of programming and programmable digital computers, both of which have been immensely consequential.
And even though we may not have achieved general-purpose AI just yet, humanity’s mere attempts at that seemingly unattainable prize have restructured the way we live.
In how many other ways will we end up changing our world, in the pursuit of AI?