Every time we talk to someone, we wonder whether that person understood what we were saying, whether they took away the right information, whether they grasped what we were trying to convey.
These are some of the usual concerns of every human being, and we generally resolve them by simply asking the other person.
But when it comes to AI, most developers forget to ask the model this very question:
What did you learn? Can you explain yourself?
It is essential that, as AI Designers, we understand what exactly we are teaching the system.
Why is it important?
When a teacher caters to a whole class of students, it is difficult for her to get feedback from every student on whether they understood the chapter. Very often, because a few specific students grasped the concept, the teacher assumes the whole class understood it. For most of the students, that may not be the case.
Ideally, the teacher shouldn't move on to the next chapter until she has made sure that all the students have understood it as expected.
The reason being - if the students haven't fully grasped the lesson, it is very likely that they will FAIL the EXAMS.
The same can happen with our AI models. There is an old tale that is often cited as a key danger in neural networks.
The tale begins with the US military teaching AI to recognise the difference between US and enemy tanks using images.
While this tale was told as far back as the 90's, it is believed that they performed this experiment in the late 80's.
They got amazing results and rolled the model out into production. On the first day of a real training exercise, the system failed miserably. Not only were no enemy tanks recognised, but some of the friendlies were flagged as enemies.
The problem? A savvy engineer noticed that the model had been trained to recognise the difference between cloudy days and clear days rather than the difference between US and enemy tanks.
How can you ask?
Continuing with the same analogy, if the teacher were to go and ask each and every student about their understanding of the chapter, it would be an extremely exhausting process, both in terms of time and resources. So how do we speed up this process?
What if the teacher could pass around a review form, and once all the students fill it in, she could see a summary or infographics on a dashboard? This would not only speed up the process but would also give general feedback on the status of the students and the difficulty level of the chapter.
If we were to draw parallels with our AI model, think of the -
Teacher - AI Developer
Student - Neurons
Review form - Framework
If an AI developer could ask each layer in the model architecture for a gist of what it learnt, it would accelerate the review process, instead of having to ask billions of neurons.
To solve such problems and help AI designers build more efficient models and provide better solutions, researchers have developed a framework called SHAP (SHapley Additive exPlanations).
SHAP helps AI designers visualise the features and patterns that the system learns and interprets, giving a better understanding of our AI model and providing us with a peek into the BLACK BOX.
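SHAP is built on Shapley values from cooperative game theory: a feature's contribution to a prediction is its average marginal contribution over every coalition of the other features. Here is a minimal, self-contained sketch of that idea using a toy linear model with made-up feature names - an illustration of the concept, not the SHAP library's actual API:

```python
from itertools import combinations
from math import factorial

# Toy linear "score" over three hypothetical features (names are made up).
WEIGHTS = {"size": 2.0, "rooms": 1.0, "age": -0.5}
BASELINE = {"size": 0.0, "rooms": 0.0, "age": 0.0}  # the "average" input

def model(x):
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline):
    """Brute-force Shapley values: average each feature's marginal
    contribution over all coalitions of the remaining features."""
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                on = set(coalition)
                # Inputs with coalition features "on", the rest at baseline.
                blend = lambda s: {g: (x[g] if g in s else baseline[g])
                                   for g in features}
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model(blend(on | {f})) - model(blend(on)))
        phi[f] = total
    return phi

x = {"size": 3.0, "rooms": 2.0, "age": 10.0}
phi = shapley_values(x, BASELINE)
# A defining property: the Shapley values sum to model(x) - model(baseline).
assert abs(sum(phi.values()) - (model(x) - model(BASELINE))) < 1e-9
```

For a linear model this brute force reproduces the closed form phi_f = w_f * (x_f - baseline_f), which makes it easy to check by hand; the shap library approximates the same quantity efficiently for real models with many features.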
It is very important for AI Designers to evaluate their models and understand what patterns the system has learnt to recognise. As the saying goes -
Numbers can be misleading.
Once you have answered the above questions, you have successfully ventured into the new world of Explainable AI.
I hope this article was helpful in understanding one of the crucial domains under the responsibilities of an AI Designer. For more mind-boggling articles around AI and its responsibilities, STAY TUNED 😁.