Chapter 1: Humanity in AI

Business of AI Apr 28, 2021

Let me ask you a question: what is intelligence?

Well, everybody has their own definition. For me:

Intelligence is the recognition of patterns and sequences, and the prediction of actions or reactions based on instincts and stimuli.

If the previous statement holds true, then every living being possesses some form of intelligence. And since we are talking about living creatures, reflexes and emotions are innate abilities of every individual. We all know that such characteristics create a certain level of unpredictability within the environment, whether in terms of choices, preferences, capabilities or even actions. That being said, those statements are only valid for living beings.

What would happen if we were to introduce the same parameters into an ARTIFICIAL INTELLIGENCE? Imagine an AI with its own choices, preferences, capabilities and even predicted actions. I know, the thought itself gives me goosebumps 🎃. But what if I told you all of it is real?

Developers and researchers across the globe often introduce such parameters into their systems unknowingly, without being aware of the repercussions. In today's article we are going to take a brief look at a variety of "Ethical Components" involved in the field of AI. Throughout this article we will draw parallels between humans and machines to better understand each concept and its importance.

Let’s talk about some of the reflexive actions we humans have, or tend to perform, in our everyday lives:

Bias or Preference

Often in our everyday work, we exhibit our own biases, knowingly or unknowingly. This could be the product of a multitude of factors, including our environment, upbringing, cultural norms or even our inherent nature. At the end of the day, we are all biased towards something or someone, irrespective of the reason.

Now let’s ask ourselves a question: who makes the data that is fed to intelligent systems for training? Well, that’s us, HUMANS. It is only natural for the systems to reflect and amplify the biases introduced through that data, which was in turn put there by an individual.

Hence it is very important to be extremely cautious about the data used to train a system. Training a model is like teaching a child, and introducing bias is probably the easiest thing to do. To help avoid such critical errors, let’s take a look at some of the types of biases:

Selection Bias
This is one of the most commonly found types of bias. It arises when the dataset is not an apt representation of the real-world distribution, but is instead skewed towards a subset of categories. The most common example of such a bias is speech recognition in virtual assistants: since most of the technology is built by a small, niche group of developers, some spoken accents are over-represented in their datasets, whereas other accents have little or no data at all.
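As a toy illustration of how such a skew might be caught early, here is a minimal sketch that flags categories under-represented in a dataset relative to an assumed real-world distribution. The accent labels, population shares and tolerance threshold are all hypothetical:

```python
from collections import Counter

def check_representation(samples, reference, tolerance=0.5):
    """Flag categories whose share of the dataset falls below
    `tolerance` times their share in the reference population."""
    counts = Counter(samples)
    total = len(samples)
    flagged = []
    for category, expected_share in reference.items():
        observed_share = counts.get(category, 0) / total
        if observed_share < tolerance * expected_share:
            flagged.append(category)
    return flagged

# Hypothetical accent labels for a speech dataset (80/15/5 split)
dataset = ["US"] * 80 + ["UK"] * 15 + ["Indian"] * 5
# Assumed real-world shares among the system's intended users
population = {"US": 0.4, "UK": 0.2, "Indian": 0.4}

print(check_representation(dataset, population))  # → ['Indian']
```

A check like this obviously cannot prove a dataset is unbiased, but it can surface the most glaring gaps before training even starts.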

Implicit Bias
This type of bias creeps in because of the implicit, intuitive assumptions we make based on selective cognitive acceptance.

Perspective matters.

The prediction tends to vary based on which segment of the information one perceives. In the same manner, a system makes predictions based on how much information it has been exposed to, and those predictions may be false. It may be clichéd, but such a system is “Not Looking at the BIG Picture”.

It’s very important to keep in mind HOW, WHEN and WHERE the data was collected, as these play a significant role in your model’s reliability.
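One lightweight way to keep that in mind is to make provenance travel with the data itself, so the HOW, WHEN and WHERE are never separated from the samples. A minimal sketch, with hypothetical field names and values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Provenance:
    method: str          # HOW it was collected, e.g. "web scrape"
    collected_on: date   # WHEN it was collected
    region: str          # WHERE it was collected

@dataclass
class DataBatch:
    samples: list
    provenance: Provenance

# Hypothetical batch: the metadata stays attached to the records
batch = DataBatch(
    samples=["example record"],
    provenance=Provenance("web scrape", date(2021, 4, 1), "North America"),
)
print(batch.provenance.method)  # → web scrape
```

With provenance attached, questions like "was this region even sampled?" can be answered at audit time instead of being lost forever.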

Accountability and Explainability

Let's say, for example, a teacher fails to teach a student the difference between right and wrong, or between good and bad. At the end of the day, it's the teacher who is held accountable for the student's actions, as proper supervision was not imparted to the recipient.

In the same manner, the earlier, traditional approach of feature engineering, where data scientists selected features and fed them to the system for training, provided more control over the system. Nowadays, developers build models and feed them volumes of data without a clear understanding of which features or patterns the system did or did not learn.

Although AI-based models provide state-of-the-art accuracy and outperform humans on many tasks, relying on them blindly is like choosing to cross a road blindfolded with the help of a stranger.

Today’s neural networks are more of a black box than a glass box. Although the results are extremely satisfying, there is no complete clarity on how they are arrived at. There have been recent advancements and studies around “Explainable AI” and “Responsible AI”, but we shouldn’t forget that it is the duty of the developer to make sure the system is put under scrutiny at every step, because AI is not only used for personal purposes but for the greater good of society, and it affects billions of lives.
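One simple idea from the Explainable AI literature is permutation importance: shuffle one feature's values across the dataset and measure how much the model's accuracy drops; a large drop means the model was relying on that feature. Here is a minimal, self-contained sketch of the idea. The toy model and data are hypothetical:

```python
import random

def permutation_importance(model, X, y, seed=0):
    """Importance of each feature = drop in accuracy when that
    feature's column is shuffled across the samples."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(model(row) == label for row, label in zip(data, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        column = [row[j] for row in X]
        rng.shuffle(column)
        shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
        importances.append(baseline - accuracy(shuffled))
    return importances

# Toy model that only ever looks at feature 0; feature 1 is ignored.
model = lambda row: row[0] > 0.5
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.7], [0.1, 0.3]]
y = [True, False, True, False]
print(permutation_importance(model, X, y))  # feature 1's importance is exactly 0.0
```

Even this crude probe reveals structure: shuffling the ignored feature changes nothing, confirming the model never uses it, which is exactly the kind of scrutiny a black box should be put under.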

Instincts and Predictability

The limitations and learning opportunities available to individuals during their development pave the road for their choices and actions when they later interact with the real world. Every parent aspires for their child to grow up to be a responsible, reliable and independent adult. But if there was a flaw in their upbringing, there is no assurance that they would be able to cope in the real world, which may lead to unpredictability and instinctive decisions.

If we were to draw a parallel, it's pretty simple: if the training, data processing or testing hasn't been performed to the required standard, and the system was instead over-fitted or hard-coded just to clear the POC phase, then it can produce biased predictions, irrelevant results and unexplainable inference patterns, which may adversely affect its users.
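To make the over-fitting point concrete, here is a deliberately extreme toy sketch (the data and labels are made up): a "model" that simply memorizes its training pairs looks perfect on training data and collapses on anything unseen, exactly the kind of system that clears a POC and then misbehaves in production.

```python
def memorizer(train_X, train_y):
    """An extreme overfit: memorize training pairs, guess class 0 otherwise."""
    lookup = {tuple(x): label for x, label in zip(train_X, train_y)}
    return lambda x: lookup.get(tuple(x), 0)

def accuracy(model, X, y):
    return sum(model(x) == label for x, label in zip(X, y)) / len(y)

# Hypothetical task: label is 1 when the two features sum above 1.
train_X = [(0.9, 0.8), (0.1, 0.2), (0.6, 0.7), (0.3, 0.1)]
train_y = [1, 0, 1, 0]
test_X = [(0.8, 0.9), (0.2, 0.1), (0.7, 0.6)]
test_y = [1, 0, 1]

model = memorizer(train_X, train_y)
print(accuracy(model, train_X, train_y))  # → 1.0, looks perfect
print(accuracy(model, test_X, test_y))    # far lower on unseen data
```

The gap between those two numbers is the whole story: training accuracy alone says nothing about how the system will behave in the real world.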

For example, an AI-based system in life sciences and healthcare cannot be allowed any degree of bias, preference or instinct; such errors or fabricated results can directly impact an individual's life.


At the end of the day, what we need to understand is that a system built to imitate its creators tends to latch onto their traits. It is crucial for the creators to understand the significance and social implications that such highly intelligent systems can have on society. Even the slightest misunderstanding or laxity can lead to disastrous results.

But if built the right way, with due consideration of the impact and ethics involved, it can prove to be a revolutionising technology.

I hope this article finds you well and encourages every one of you to develop sound technologies for the upliftment of society. 😁


Vaibhav Satpathy

AI Enthusiast and Explorer
