Neural Networks: Chapter 10 - Types of Learning

Neural Networks Jun 28, 2021

Introduction

Neural networks are used to solve many kinds of problems, ranging from image classification, natural language processing, generating speech from text, and learning to play a game or drive a car, to detecting patterns in data. To support these capabilities, the technology has evolved from the simple multi-layer perceptron to sophisticated stacks of different kinds of layers, each designed for a specific task.

There are several different types of learning, and in this article we will take a brief look at each of them.

Types of Learning

  • Supervised - In this type of learning, a labelled dataset is used to train the model, so the model learns what each type of data looks like. Once training is complete, the model is evaluated on test data (a portion of the dataset that is held out beforehand and is not part of the training process) and predicts the output for it.
Think of how a child learns: we show them examples of objects we want them to recognize and also tell them what each object is. If we tell a child that a dog is a dog and a cat is a cat (with some pictures, or maybe by the sounds they make), the child learns these patterns in sound or image and starts classifying new examples on their own. That, in a nutshell, is how a supervised classification algorithm works.
  • Unsupervised - The goal of unsupervised learning is to find the underlying structure of a dataset, group the data by similarity, and represent the dataset in a compressed format.
Continuing with the child example, here we do not teach the child explicitly; that is, we give no labels for the objects. We just ask them to group objects by what they think is similar. What we get at the end is clusters of objects that look similar to the child, even though they may not know what any of these objects are called.
  • Reinforcement Learning - RL is a feedback-based machine learning technique in which an agent learns to behave in an environment by performing actions and observing their results. For each good action, the agent gets positive feedback, and for each bad action, the agent gets negative feedback or a penalty.
The child example may be inappropriate for this section, so let's take potty training a dog instead! The idea is to reward the dog when it performs the desired action, which is relieving itself at the right spot; every time it does, we reward it with a treat. When the dog does not follow the desired action, we penalize it by gesturing disappointment, letting the dog know we are not happy with the behavior. This will eventually lead to your dog being potty trained and you being the proud owner of a well-trained dog :-)
  • Incremental - In incremental learning, incoming data is continuously used to extend the existing model's knowledge, i.e. to train the model further. It is a dynamic technique, applicable to both supervised and unsupervised learning, used when training data becomes available gradually over time or when the dataset is too large to fit in system memory.
Returning to our child example, a child learns new concepts incrementally: with every passing year, moving from one grade to the next, the child is taught topics appropriate to their age and ability to understand. For neural networks this is slightly different: a computer has far more memory than any human, but it can still fall short for problems involving very large amounts of data. In such cases the ability to train a model incrementally comes in handy, much like a child progressing through levels of learning over their years in school.
  • Semi supervised - It uses a small amount of labeled data and a large amount of unlabeled data, combining the benefits of supervised and unsupervised learning while avoiding the challenge of collecting a large amount of labeled data. That means you can train a model to label data without needing as much labeled training data.
  • Transfer - TL is a machine learning technique where a model trained on one task is re-purposed for a second, related task. Transfer learning is quite common in computer vision and natural language processing, where the first, or base, model requires millions of data points, plus substantial computing resources, to train to a useful level of accuracy. TL lets users take a pre-trained model and train it further with additional samples (more like a customization of the model) for the use case they are trying to solve.
Transfer learning and domain adaptation refer to the situation where what has been learned in one setting … is exploited to improve generalization in another setting — Page 526, Deep Learning, 2016.
  • Teacher and student - Teacher-student (T-S) learning is a transfer learning approach, where a teacher network is used to “teach” a student network to make the same predictions as the teacher.
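To make the supervised case concrete, here is a toy sketch (with made-up feature values) of a nearest-centroid classifier: labelled examples are used to "teach" the model, and it then classifies unseen points on its own, just like the cat/dog example above.

```python
# Minimal supervised learning sketch: a nearest-centroid classifier.
# Labelled training data (hypothetical toy numbers): feature -> label.
train = [(1.0, "cat"), (1.2, "cat"), (0.8, "cat"),
         (5.0, "dog"), (5.5, "dog"), (4.8, "dog")]

# "Training": compute the mean feature value per label.
groups = {}
for x, label in train:
    groups.setdefault(label, []).append(x)
centroids = {label: sum(xs) / len(xs) for label, xs in groups.items()}

def predict(x):
    # Classify an unseen point by its closest class centroid.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

print(predict(1.1))  # a point near the "cat" cluster
print(predict(5.2))  # a point near the "dog" cluster
```

Real models replace the single feature with images or audio and the centroid with learned weights, but the shape of the process, labelled data in, predictions out, is the same.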
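The unsupervised case, grouping objects with no labels at all, can be sketched with k-means clustering on made-up, unlabelled points: the algorithm discovers the two groups purely from distance, just as the child groups objects by appearance without knowing their names.

```python
# Minimal unsupervised learning sketch: k-means (k=2) on unlabelled
# 1-D points. No labels are given; points are grouped by similarity.
points = [1.0, 1.2, 0.8, 5.0, 5.5, 4.8]
centers = [0.0, 6.0]  # arbitrary initial guesses

for _ in range(10):  # a few refinement iterations
    # Assignment step: attach each point to its nearest center.
    clusters = {0: [], 1: []}
    for p in points:
        nearest = min((0, 1), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    # Update step: move each center to the mean of its cluster.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in clusters.items()]

print(sorted(round(c, 2) for c in centers))
```

The two centers settle near the two natural groups in the data; the clusters still have no names, which is exactly the point of unsupervised learning.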
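The reward/penalty loop of reinforcement learning can be sketched with tabular Q-learning on a hypothetical tiny "corridor" world: the agent gets a treat (+1) for reaching the goal and a small penalty for every other step, and it gradually learns which action each state rewards.

```python
import random

# Minimal RL sketch: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0; reaching cell 4 gives reward +1,
# every other step costs -0.01. Actions: 0 = left, 1 = right.
random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.01
        # Q-update: move Q toward reward + discounted best future value.
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy is "go right" in every non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)]
print(policy)
```

The feedback (reward or penalty) is the only teaching signal; no state is ever labelled with the "correct" action, which is what separates RL from supervised learning.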
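Incremental learning can be sketched as online gradient descent: the model's parameters persist between chunks, and each new chunk of (hypothetical) data extends what was already learned, so the full dataset never has to sit in memory at once.

```python
# Minimal incremental learning sketch: fit y = 2x + 1 by stochastic
# gradient descent, feeding data in small chunks as it "arrives"
# instead of loading everything into memory.
w, b = 0.0, 0.0   # model parameters, updated incrementally
lr = 0.05         # learning rate

def partial_fit(chunk):
    # Update the existing model with one new chunk of (x, y) pairs.
    global w, b
    for x, y in chunk:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# Data arrives over time as separate chunks (hypothetical values).
stream = [[(0.0, 1.0), (1.0, 3.0)],
          [(2.0, 5.0), (3.0, 7.0)],
          [(4.0, 9.0), (1.5, 4.0)]]

# Each call extends the model's knowledge; nothing is retrained from
# scratch. Repeated passes over the stream refine the fit.
for _ in range(300):
    for chunk in stream:
        partial_fit(chunk)

print(round(w, 2), round(b, 2))
```

The key property is that `partial_fit` never sees the whole dataset, only the current chunk, yet the parameters converge to the same line a batch fit would find on this noiseless data.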
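A common semi-supervised recipe is self-training: fit a model on the few labelled points, let it pseudo-label the unlabelled pool, then retrain on the combined set. Here is a toy sketch with made-up values, reusing a nearest-centroid model.

```python
# Minimal semi-supervised sketch: self-training.
labelled = [(1.0, "cat"), (5.0, "dog")]   # only two labels available
unlabelled = [0.8, 1.2, 4.7, 5.4]         # a larger unlabelled pool

def centroids(data):
    groups = {}
    for x, label in data:
        groups.setdefault(label, []).append(x)
    return {l: sum(xs) / len(xs) for l, xs in groups.items()}

def predict(cents, x):
    return min(cents, key=lambda l: abs(x - cents[l]))

# Step 1: train on the labelled data alone.
cents = centroids(labelled)
# Step 2: pseudo-label the unlabelled data with the current model.
pseudo = [(x, predict(cents, x)) for x in unlabelled]
# Step 3: retrain on labelled + pseudo-labelled data combined.
cents = centroids(labelled + pseudo)

print(cents)
```

The retrained centroids are estimated from six points instead of two, so the model benefits from the unlabelled data without anyone labelling it by hand.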
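Transfer learning can be sketched as "freeze the base, train a new head": the pretrained weights below are hypothetical stand-ins for a model trained elsewhere on a large dataset, and only the small head on top is trained for the new task.

```python
from math import exp

# Frozen "pretrained" base model: these weights are hypothetical
# stand-ins for parameters learned on some large source task, and
# they are never updated during transfer.
PRETRAINED_W = [0.9, -0.4]

def features(x):
    # Frozen feature extractor: raw 2-feature input -> one feature.
    return PRETRAINED_W[0] * x[0] + PRETRAINED_W[1] * x[1]

# The target task's small labelled dataset: input pair -> 0/1 class.
data = [([1.0, 0.0], 1), ([0.9, 0.1], 1),
        ([0.0, 1.0], 0), ([0.1, 0.9], 0)]

# Train only the head (w, b) on top of the frozen features.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    for x, y in data:
        z = features(x)                      # frozen base, reused as-is
        p = 1 / (1 + exp(-(w * z + b)))      # sigmoid head
        w -= lr * (p - y) * z                # gradient step on head only
        b -= lr * (p - y)

def classify(x):
    return 1 if w * features(x) + b > 0 else 0

print(classify([1.0, 0.0]), classify([0.0, 1.0]))
```

Because only the tiny head is trained, a handful of target-task samples is enough; this mirrors the common practice of fine-tuning just the last layers of a large pretrained vision or language model.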
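Teacher-student training (knowledge distillation) can be sketched as follows: a fixed teacher produces soft probabilities, and the student is trained to match those soft outputs rather than hard labels. The teacher function below is a hypothetical stand-in for a large pretrained network.

```python
from math import exp

def teacher(x):
    # Frozen teacher: soft probability that x belongs to class 1.
    # (Hypothetical stand-in for a large pretrained model.)
    return 1 / (1 + exp(-(3.0 * x - 1.5)))

# Student: trained to imitate the teacher's soft outputs on a set of
# (unlabelled) inputs; no ground-truth labels are used at all.
w, b, lr = 0.0, 0.0, 0.5
inputs = [i / 10 for i in range(11)]  # 0.0 .. 1.0

for _ in range(2000):
    for x in inputs:
        target = teacher(x)                  # soft label from teacher
        pred = 1 / (1 + exp(-(w * x + b)))   # student prediction
        # Cross-entropy gradient with soft targets: (pred - target).
        w -= lr * (pred - target) * x
        b -= lr * (pred - target)

# The student ends up reproducing the teacher's decision boundary.
print(round(w, 1), round(b, 1))
```

In practice the student is a much smaller network than the teacher, so distillation compresses the teacher's knowledge into a model that is cheaper to run while making (nearly) the same predictions.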

Takeaway

Now that we know, at a basic level, the different types of learning available for neural networks, we encourage you to dive deeper into whichever type you find most interesting. The best way to learn these topics is to pick a use case you think can be solved with one of these techniques and choose accordingly. Feel free to go through the reference links below; they helped us in our research.

References -

Supervised Machine Learning - Javatpoint
Reinforcement Learning Tutorial - Javatpoint
ML | Semi-Supervised Learning - GeeksforGeeks
A Gentle Introduction to Transfer Learning for Deep Learning
Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation

Nikhil Akki

Full Stack AI Tinkerer
