

In a previous note on AI and programming I posited that modern-day programming can find its roots in a desire to build intelligent machines.
That note triggered some feedback, and I got this interesting follow-up:
Is the ultimate state of a computer the ability to think for itself and take decisions based on those thought processes? Is this what AI will ultimately lead to?
This is an interesting cluster of questions as it relates to the future development and use of AI, algorithms, and software. It raises some very interesting ideas that I explore in the rest of this note. But let me summarize this note by answering the question directly: I do not think a machine or an algorithm can ever have a human-like ability to think for itself. But it is possible today to create algorithms, i.e., machine learning models, that appear to have such an ability.
Let me elaborate.
i. To think for oneself? itself?
I assume that “to think for oneself” refers to the idea of independent thinking or critical thinking. The basic idea behind independent thinking is to be capable of churning through and analyzing information (i.e., data) by oneself, to arrive at conclusions (or insights, or decisions, or results).

Algorithms already perform this kind of analysis “by themselves” — as long as they are parameterized (or trained, or programmed) to analyze a certain kind of data, to produce a certain kind of result.
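As a minimal, purely illustrative sketch (every name and number below is hypothetical, not taken from any real system), here is an “algorithm” that reaches a conclusion “by itself”, but only after a human has decided which data matters and what kind of result to produce:

```python
# A purely illustrative sketch: an "algorithm" that analyzes data "by itself",
# but only after a human has parameterized it for one narrow task
# (flagging unusually large transactions). All names and numbers are hypothetical.

def train_threshold(normal_amounts):
    """'Training': derive a cutoff from historical data a human chose to supply."""
    mean = sum(normal_amounts) / len(normal_amounts)
    spread = max(normal_amounts) - min(normal_amounts)
    return mean + spread  # the learned parameter

def flag_unusual(amount, threshold):
    """'Inference': the algorithm reaches a conclusion on its own, within its narrow framing."""
    return amount > threshold

history = [12.0, 25.5, 18.0, 30.0, 22.5]   # data a human decided is relevant
threshold = train_threshold(history)        # the parameterization step
print(flag_unusual(500.0, threshold))       # True: it "decides" by itself
print(flag_unusual(20.0, threshold))        # False
```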
Critical thinking around a problem also requires a clear motivation. Why analyze and solve a problem for no good reason? Such reasons — often rooted in personal interest or material benefit — always exist. The reason to solve a problem also contextualizes the problem — consider these questions: Why is the problem important? What do we gain from solving it? Are some aspects of a solution of greater value than others?
So it would seem that to think for itself, a computer or an algorithm would need to not only work through a problem, but also find a reason to solve it in the first place.
So, can we find or devise an algorithm that can find a reason to solve, or do, something? More interestingly, can we devise an algorithm that appears to find, or imitates finding, a reason to solve a problem? Let’s look at recommendation systems.
ii. Recommendation Systems seem motivated by something
Ever wondered what to (binge) watch on Netflix? I struggle with that nearly every day. I watch a great show or movie, I love it… but then despair hits me. “What’s next?”
But Netflix does a good job of giving a list of recommendations to select from. There is typically something that catches my interest in the top two rows of Netflix’s recommendations when I land on its home page. It is important to realize that I have not made a search request like I would on Google. All I do is open Netflix’s home page, and there is something worth watching that I stumble into quickly. It could be because Netflix is producing and sourcing a lot of very very good content (shows and movies). But even if Netflix is sourcing only the very best of content, how does it know which show/movie to list first, second, or tenth in a list … especially when all of them are good?
YouTube, too, seems similarly motivated.
Twitter’s home feed is algorithmically curated for your specific account.
Same goes for Instagram and Facebook.
Curiously, in such systems, I never need to search for anything. I just need to land on the home page. These computer systems just seem to “know” or “guess” what I would find interesting, and they just serve that to me. It’s like going to a restaurant and the server telling me what I will likely like and eat for dinner — oh wait! Uber does something like that.
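To make the point concrete, here is a deliberately simplified, hypothetical sketch of the kind of ranking a home page might perform: no search query from me, just a score computed from my past behaviour. Nothing below reflects how Netflix or YouTube actually work; the point is only that the “guess” is a ranking function.

```python
# Hypothetical sketch of an unprompted home-page recommender: rank a catalog
# using nothing but the user's past behaviour. Real systems are far more
# sophisticated; this is only meant to show the shape of the idea.

watch_history = {"sci-fi": 12, "documentary": 5, "comedy": 1}  # hours watched per genre

catalog = [
    {"title": "Deep Space Saga", "genre": "sci-fi",      "popularity": 0.7},
    {"title": "Ocean Planet",    "genre": "documentary", "popularity": 0.9},
    {"title": "Laugh Factory",   "genre": "comedy",      "popularity": 0.8},
]

def score(item):
    # Blend personal taste with global popularity. The blend and its weights
    # are a human design decision, not something the algorithm chose.
    taste = watch_history.get(item["genre"], 0)
    return 0.8 * taste + 0.2 * item["popularity"]

home_page = sorted(catalog, key=score, reverse=True)
for item in home_page:
    print(item["title"])  # served "unprompted", first to last
```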
Swiping away autonomy, already?
TikTok, YouTube Shorts, and Instagram Reels take this one step further. Open Shorts on YT — there is a video and it is interesting. Now swipe once … and another interesting video. Swipe, another video worth watching. I find that I have lost any autonomy over what I want to watch. The computer system, or app, or algorithm, or AI has taken over my executive function of deciding how to entertain myself when watching video.
This represents an entire universe of systems that are motivated by the idea of making content recommendations that you — the end user — will find interesting, with little-to-no direct input from you.
These appear to be computers thinking critically about how to solve a clear problem (serving me/you with interesting content).
And they are doing it with a clear motivation (keeping me/you engaged so they can serve me/you ads). But who defines that motivation? The product engineers behind these products, I assume, i.e., people … not algorithms.
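Here is one more hypothetical sketch of where that motivation actually lives: an engineer writes down an objective (say, predicted minutes watched), and the system optimizes it. The algorithm never chose that goal; a person did. All names and values are invented for illustration.

```python
# Hypothetical sketch: the "motivation" is an objective chosen by engineers.
# The system merely maximizes it.

def predicted_minutes_watched(user, item, model_params):
    # Stand-in for a real engagement model; purely illustrative.
    return model_params.get((user, item["genre"]), 0.0)

def pick_recommendation(user, catalog, model_params):
    # "Motivation" == maximize the engineer-chosen objective.
    return max(catalog, key=lambda item: predicted_minutes_watched(user, item, model_params))

model_params = {("alice", "sci-fi"): 42.0, ("alice", "comedy"): 7.5}
catalog = [{"title": "Deep Space Saga", "genre": "sci-fi"},
           {"title": "Laugh Factory",   "genre": "comedy"}]
print(pick_recommendation("alice", catalog, model_params)["title"])  # "Deep Space Saga"
```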
Whose Motivations: Human’s or Machine’s?
When I see a recommendation system at work, it is very easy to think, “Well, looks like Netflix wants me to watch Xyz.” But is it Netflix or the engineers behind Netflix? Does it really matter? I am treating Netflix (or any such system) as an independent entity with some agency that is capable of thought and I am letting it make decisions for me.
Seems to me that it does not matter whether Netflix or TikTok are capable of independent thought. It is more important that they appear to be capable of independent thinking and decision making in the service of humans.
It probably does matter whether the machine is actually capable of independent thinking: if it were, all manner of assumptions that we make about society, economics, politics, geopolitics, culture, and humanity would come into question. But we are not there yet.
But the more pressing issue is this: how does humanity react and evolve when it thinks and believes that algorithms can think for themselves? Keep that in mind the next time you play something suggested by Netflix.
For now, algorithms cannot motivate a problem like humans can. The human engineers need to program these algorithms, to make them appear to be motivated by something. It will take a few more breakthroughs in areas such as “Motivation in AI” and “Algorithmic Sentience”, before machines discover motivation, or cause, or purpose. Do not panic, yet.
iii. Human in the loop
It needs to be told.
Sam Altman, CEO of OpenAI, suggested in a recent interview that the technology around self-improving ML models can progress to a point where, if the model understands what racism is, you can tell the model “do not be racist”, and it can improve itself to not be racist. That is a very profound thing to say/claim/predict/envision. But when I heard him say it, I found it curiously relevant to this note, which I was writing at the time. In that scenario, someone, or something, needs to tell the ML model to “not be racist.” There are two assumptions here:
the model would know what racism is; and
it would not innately know that being racist is not acceptable in a human world.
Imagine a typical human child that knows everything there is to know about racism — the history, the consequences, the legalities, the geography, the culture, everything. I contend that this child would know that being racist is not acceptable; in fact they would know that it is reprehensible.
I am not pinning all of this on a single statement in a single (informal) interview … but it is really curious that OpenAI’s CEO thinks there will be an AI model that knows everything there is to know about racism, yet not to the point that it will just use that knowledge to self-improve. The model would need some external prompt to not be racist.
It’s about human requirements in a human world.
Sam’s claim seems to make sense. But I will go one step further. I think that such an external prompt (“do not be racist”) to an ML model will come from a human.
In fact, I do not think that AI or any form of machine or algorithm will ever evolve to the point where it understands human motivations, feelings, ethics, or morality. As such, to make any machine meaningful, you cannot take the human being out of the loop of value-creation and -consumption.
I am not sure why AI/machines/software would ever exist if not to serve a human requirement rooted in a very messy human world. And if you are going to be in the business of meeting and fulfilling a person’s requirements, you need to talk to them and understand them. Another human needs to do that. Those are the humans in the loop.
I do not think machines will ever be capable of gathering such human requirements.
If a machine learning model can grasp human requirements just as we need to do in the process of software engineering, then that would be impressive. And then all bets are off.
I use the terms “computer”, “algorithm”, and “AI model” interchangeably in this note. For the purposes of this note, to talk of the computer is essentially to talk of the algorithm or AI model.
I am not trying to put recommendation systems on the spot here. The arguments I am making apply equally to other AI/ML problem spaces.