Once upon a time
Read: several years ago, in the early days of my career, while I was working with a home appliance manufacturing company, a memorable incident happened.
My team reported to a Senior Manager who was quite steadfast in pushing for certain things, especially once he had made up his mind about something. One day he asked us to prepare a report, highlight the critical items in italics and colored fonts, and then print it and hand it to him.
The only (big) problem was that we were using a program called “WordStar,” and all we had was a dot-matrix printer! For those who do not know, WordStar was a DOS (Disk Operating System) based word processor. And I probably do not have to explain why a dot-matrix printer was a problem for the given task.
Our obvious response was: not possible! He wouldn’t listen; he kept pushing us to make it happen. He insisted, saying something along the lines of: don’t try to fool me, I know that computer can do anything, and your boss promised us that when we bought it!
So we stared at our boss with somewhat mixed feelings, waiting for him to own up to it (after all, he had promised the impossible) and fix it! Eventually, he did something behind the scenes that saved us.
However, that incident left a powerful impression on my ‘green’ self. It was not just a case of over-promising but also one where the business expectations were unrealistic. Our interactions with that Senior Manager remained full of friction afterwards because we lost some credibility in that battle.
Then and now
Fast forward to today: almost twenty years have passed, and here we are, still dealing with the same kinds of problems! It feels like déjà vu, but why?
For some strange reason, people increasingly assume that computers are better than humans and can work wonders. What is more, they assume that humans may get it wrong sometimes, but machines never will! This assumption poses a different set of challenges for us.
These challenges worsen as computers become more pervasive and take part in our daily lives in many ways. It does not take a genius to see that computers do not have a brain of their own, let alone intelligence or a conscience. It is the developer who instructs and teaches the computer what to do. If the developer makes a mistake and designs or develops the code poorly, there will be a problem.
If developers do not test their work appropriately, if they use sub-par hardware, or if there are fundamental flaws in their understanding of user requirements, the computer will perform poorly!
So, what is the problem
The problem is not the emerging technology and the bright future it promises. Our adamant belief that there is a bright technological future just around the corner is the issue!
Many businesses, and I mean the senior responsible managers in those businesses, still believe that IoT, AI, or automation will solve their problems for good, only to find themselves in a differently flavored soup post-implementation. Why do you think this might be the case?
IT is not a solution to behavioral problems!
In my view, it starts with expectations being raised early in the adoption process, when the person in charge of such initiatives presents only one side of the story. They usually do not tell the other side, either because they do not know it (lack of full knowledge) or because they have a particular interest in leaving it out (as often happens with vendors of a specific technology). Accepting that we do not know what we do not know is quite critical here.
The problem grows further with unrealistic expectations of the technology and with failing to define the acceptance criteria up front. Failing to establish such standards upfront invites the endowment effect: implementation teams try to justify, much later, that whatever they have developed should be consumed because they worked so hard on it. The worse part is when they retrofit the acceptance criteria just to make it happen!
In effect, businesses have neither firm goals (read: acceptance criteria) nor a handle on the means, which results in a largely uncontrollable situation. Development teams may tell you that the machine will learn eventually, but they will not tell you when or how it will improve.
Garbage in will always result in garbage out — no matter how many years you keep doing it and how intelligent the machine is!
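To make “acceptance criteria up front” concrete, here is a minimal sketch in Python. The thresholds and names are hypothetical and purely illustrative; the point is that the go/no-go gate is agreed before development starts, measured on a held-out test set, and never retrofitted.

```python
# Hypothetical acceptance gate: thresholds are agreed with the business
# BEFORE development starts, and are never "retrofitted" afterwards.
from dataclasses import dataclass

@dataclass(frozen=True)
class AcceptanceCriteria:
    min_precision: float
    min_recall: float
    max_error_rate: float

# Illustrative numbers only; each business must set its own.
CRITERIA = AcceptanceCriteria(min_precision=0.95, min_recall=0.90, max_error_rate=0.02)

def ready_for_go_live(precision: float, recall: float, error_rate: float) -> bool:
    """Return True only if the model meets every pre-agreed criterion."""
    return (precision >= CRITERIA.min_precision
            and recall >= CRITERIA.min_recall
            and error_rate <= CRITERIA.max_error_rate)

# Metrics must come from a held-out test set, not the training data.
print(ready_for_go_live(precision=0.97, recall=0.88, error_rate=0.015))  # False: recall too low
```

A gate like this turns “the machine will learn eventually” into a checkable claim: either the numbers clear the bar, or the system does not go live.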
Know thy limits
No machine or AI can differentiate between right and wrong; it can only choose what looks popular in the data it has learned from.
There is a fundamental constraint on AI: unfortunately, it learns from the data fed to it. Whether the learning is supervised or unsupervised does not matter; the data has to be good and balanced. If we want to teach the machine with examples, those examples have to be good.
If, for some reason, clean data cannot be ensured, then the testing of the developed AI has to be flawless. If testing has gaps and the data is terrible, bad AI will arise. It will not just turn garbage in into garbage out; it will do so at a much faster rate and on a much larger scale. No one would want that. We therefore need to know these limitations and deal with emerging technologies accordingly.
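A tiny, contrived illustration of why gap-free testing matters: a model trained on badly skewed data can post an impressive accuracy while being useless, so a test that only checks accuracy would wave the garbage straight through. The fraud scenario and the numbers below are mine, purely for illustration.

```python
# Contrived illustration: 990 "good" transactions, 10 "fraud" ones.
labels = ["good"] * 990 + ["fraud"] * 10

# A "model" that learned only what is popular in its training data:
# it predicts the majority class for everything.
def majority_model(_transaction) -> str:
    return "good"

predictions = [majority_model(x) for x in labels]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
fraud_caught = sum(p == "fraud" and y == "fraud" for p, y in zip(predictions, labels))

print(f"accuracy: {accuracy:.1%}")                   # 99.0% -- looks excellent
print(f"fraud cases caught: {fraud_caught} of 10")   # 0 -- completely useless
```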
There are several challenging aspects that an AI machine cannot handle. Virtues such as fairness, morality, and ethics cannot be taught to a computer, and hence machines cannot make judgement calls based on them.
Many narrow AI programs are also not flawless per se. These programs merely try to imitate human behavior (which is itself questionable at times). As long as the choices are black-and-white they work well, but they fold as soon as the problems move into a grey area. Poorly designed programs then tend to make random (often wrong) choices, costing businesses heaps of money. Many businesses feel this is an acceptable error rate, which can be an unfounded assumption, especially when they have failed to establish acceptance criteria before starting the journey.
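One defensive pattern worth naming here (my sketch, not a universal fix): instead of letting the program guess at random in the grey area, have it measure its own confidence and escalate low-confidence cases to a human. The model scores and the 0.9 threshold below are hypothetical.

```python
# Sketch of an "abstain in the grey area" wrapper. The scores and the
# threshold are made up; the point is the escalation path.
def classify_with_escalation(scores: dict[str, float], threshold: float = 0.9) -> str:
    """Return the top label only when the model is confident enough;
    otherwise hand the case to a human reviewer."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label
    return "ESCALATE_TO_HUMAN"

print(classify_with_escalation({"approve": 0.97, "reject": 0.03}))  # approve
print(classify_with_escalation({"approve": 0.55, "reject": 0.45}))  # ESCALATE_TO_HUMAN
```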
However, from a big-picture perspective, narrow AI is the lesser of two evils. Anything beyond it would mean we have to define and codify a lot of grey matter, and humans have limits!
What are the takeaways
For sure, there is a lot to discuss about teaching machines morality, ethics, and how to work in grey areas, among other things. However, we cannot wait for all the lights to turn green; we must keep moving forward, learning, and improvising.
The biggest takeaway, for now, would be to remain positively skeptical, keep our sensibility hats on at all times, and adopt the technology with a grain of salt.
Machines make mistakes, just as humans do, and they will keep making them. Businesses must accept this fact and know that machines, much like humans, also need attention, retraining, and a performance improvement plan before they go live again.
Businesses must make sure that, just as they do with humans, they train machines progressively and test them rigorously before giving them more responsibility. Any failure in a machine’s performance should be dealt with somewhat more strictly than a human’s.
I also suggest that businesses establish, or augment their existing HR department into, a HAIR (Human & Artificial Intelligent Resources) department. That department should develop appropriate policies for managing AI resources, just as we do for humans. The idea may sound a bit silly for now, but the direction we are heading in will soon dictate it. A movement towards making AI transparent is already catching on.
Lastly, do not get carried away and assume that just because we have cool technology, we can use it to solve every problem around us. Emerging technologies are new hammers; let us avoid treating all of our issues as nails and avoid rushing headlong into the emerging future. It is hard to undo strategic and technological mistakes these days.
Sometimes it is better to deal with humans than with machines; sanity is the key!