Know-How of MLOps

MLOps Aug 16, 2021
MLOps - Currently one of the hottest topics in the field of AI

Every Industry, Organisation, Research Team or Developer is, somewhere, discussing:
What is It?
What do they need?
Where to begin?
How to go about it?
Where to Implement it?

There are tons of questions floating around in the minds of AI enthusiasts all over the globe.

Even though there are loads of solutions available in the market that pave the path to MLOps, one of the crucial components they miss out on is explaining the Prerequisites.

Before we dive into understanding what it takes to build an End-to-End MLOps solution, let's first try and answer some of the fundamental questions.

What is MLOps?

To put it simply, it is a perfect blend of a Machine Learning Engineer and a DevOps Engineer.

In other words, it just stands for Machine Learning Operations. But what we also need to understand is: what is Operations?

Operations, in the literal sense, means to put into Action or Effect. But keeping our context in mind, it means setting up the necessary components on Cloud Platforms and activating them for User consumption.

In order to understand it better, we need to dive a little deeper into the Responsibilities encompassed by the umbrella of MLOps.

Responsibilities of MLOps Engineer

5 years ago, knowing how to build a Machine Learning Model was more than sufficient to get your foot in the door. That's not the case anymore.

With the plethora of Frameworks and Tutorials floating around on the Internet, the capability to develop ML Models has become commonplace.

This is where MLOps Engineers stand out from the crowd. Let's jot down some of the responsibilities under the role of MLOps -

  1. Developing Models
  2. Training and Evaluating Model results
  3. Packaging and Tagging Models
  4. Version control
  5. Deploying Models into Production
  6. Creating continuous Feedback to the Model
  7. Running All of the Above Flawlessly in a Loop
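The loop described in the list above can be sketched as a single pipeline skeleton. This is a minimal illustration with hypothetical stub functions standing in for each responsibility, not a production implementation:

```python
# Minimal sketch of the MLOps loop described above.
# Every function body here is a hypothetical placeholder.

def develop_model():
    # 1. Develop the model (architecture, features, code)
    return {"name": "demo-model"}

def train_and_evaluate(model):
    # 2. Train and evaluate; return metrics
    return {"accuracy": 0.93}

def package_and_tag(model, version):
    # 3./4. Package the artifact and tag it with a version
    return {**model, "version": version}

def deploy(artifact):
    # 5. Push the packaged artifact into Production
    return f"{artifact['name']}:{artifact['version']} deployed"

def collect_feedback():
    # 6. Gather production feedback (drift, new labels, errors)
    return {"drift_detected": False}

def mlops_loop(iterations=2):
    # 7. Run all of the above in a loop
    log = []
    for version in range(1, iterations + 1):
        model = develop_model()
        metrics = train_and_evaluate(model)   # gates the release
        artifact = package_and_tag(model, version)
        log.append(deploy(artifact))
        feedback = collect_feedback()          # feeds the next iteration
    return log
```

In a real setup, each stub is replaced by a tool covered later in this series (training frameworks, Docker, MLflow, Kubeflow).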

I know that looks scary but before jumping to conclusions, let's take a look at another fundamental question for the same.

Where to begin?

Well, the Journey to MLOps is not a quick-paced one. One has to go through multiple stages of Learning before attaining a complete understanding.

But not to worry, this series of articles will take you through the various nuances involved with MLOps and unravel the mysteries and risks on your journey.

Let's not beat around the bush, shall we? Based on the topics mentioned above, we realize that there is a ton of content to cover.

Neural Modelling

Designing Neural Network architectures to fit your needs is of critical importance for AI-based solutions. That being said, there are tons of Open Source and Cloud-based solutions available to jumpstart your learning process at this level.
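To give a taste of what "designing an architecture" means at its core, here is a minimal two-layer network forward pass written with plain NumPy. This is a hedged sketch of the underlying math only; in practice you would use a framework such as TensorFlow or PyTorch, and the layer sizes here are arbitrary:

```python
import numpy as np

def relu(x):
    # Standard ReLU non-linearity
    return np.maximum(0.0, x)

def forward(x, params):
    # Two-layer fully connected network: input -> hidden -> output
    h = relu(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

# Arbitrary sizes for illustration: 4 input features, 8 hidden units, 2 outputs
rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(size=(4, 8)), "b1": np.zeros(8),
    "w2": rng.normal(size=(8, 2)), "b2": np.zeros(2),
}
out = forward(np.ones((1, 4)), params)
```

Frameworks wrap exactly this kind of computation in reusable layers, and add automatic differentiation for training on top.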

Google Cloud Platform - Chronicles of AI
Enjoy your journey through different faces of AI. Explore and unravel the mysteries of Artificial Intelligence and build cutting edge technologies challenging the conventions.
For Cloud based solutions - GCP
Tools and Frameworks - Chronicles of AI
For Open Source solutions

Packaging Models

Model packaging can be done in a variety of ways. Based on what the end goal is or How one wants to perform Training and Inferencing, the process of model packaging keeps varying.

The most standard approach for learning packaging is Docker. Along with that, there are other Cloud Native solutions that offer similar packaging for Edge or Web applications.
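To make the Docker route concrete, a typical Dockerfile for packaging a model-serving app might look like the sketch below. The file names, directory layout and entry point (`serve.py`, `model/`) are illustrative assumptions, not a prescribed structure:

```dockerfile
# Illustrative Dockerfile for packaging a Python model-serving app
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model artifact and the serving code (hypothetical paths)
COPY model/ ./model/
COPY serve.py .

EXPOSE 8080
CMD ["python", "serve.py"]
```

Building and running this image (`docker build`, `docker run`) gives you the same environment in development, testing and Production - which is the whole point of packaging.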

Containerisation - Go Chronicles
Take the first step towards deploying applications! Container technology is currently the hottest technology used to launch applications. And the most popular product out there is Docker! Know more about the concepts behind this awesome technology in this series.
For Containerisation
Amazon SageMaker Neo
For Cloud Native Packaging for Edge

Model Versioning

Once a model has been approved and packaged, it needs to be managed, meaning that the model version, metadata tags, the hyper-parameters used, evaluation metrics and more need to be mapped and stored for every model release.

One of the leading Open Source solutions in this sector is MLflow by Databricks, while there are also powerful Cloud Native solutions for the same - for example, Vertex AI by Google Cloud Platform.
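The kind of record such a platform keeps per release can be illustrated with a small stdlib-only sketch. The schema below is a hypothetical simplification of what MLflow or Vertex AI track for you; the model name and values are made up:

```python
import json

def register_model(registry, name, version, hyperparams, metrics):
    # Store version, hyper-parameters and evaluation metrics per release,
    # mirroring what a model registry records for every model version.
    entry = {
        "version": version,
        "hyperparams": hyperparams,
        "metrics": metrics,
    }
    registry.setdefault(name, []).append(entry)
    return json.dumps(entry, sort_keys=True)

# Hypothetical usage: two releases of the same model
registry = {}
register_model(registry, "churn-model", "1.0.0",
               {"lr": 0.01, "epochs": 10}, {"auc": 0.91})
register_model(registry, "churn-model", "1.1.0",
               {"lr": 0.005, "epochs": 20}, {"auc": 0.94})
```

With this mapping in place, any deployed version can be traced back to exactly how it was trained and how well it performed.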

MLflow - A platform for the machine learning lifecycle
An open source platform for the end-to-end machine learning lifecycle
MLflow
Google Cloud Platform - Chronicles of AI
Cloud Native Solution - Vertex AI and more

Deployment into Production

Once the models are packaged and tagged, there comes the industry requirement of pushing them into Production. Based on your use case, the way to Deploy a Model for Inference may vary.

For a Standardised approach, the leading Open Source platform to go to is Kubeflow; in addition to that, every Cloud has stable support for both REST and Edge-based deployment.
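At its simplest, a REST-based deployment just wraps a predict function behind an HTTP endpoint. The sketch below uses only the Python standard library; the scoring rule, port and payload shape are illustrative assumptions, and platforms like Kubeflow handle the scaling and routing around such a service:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Hypothetical "model": a fixed linear scoring rule for illustration
    weights = [0.5, -0.25, 0.1]
    score = sum(w * f for w, f in zip(weights, features))
    return {"score": score, "label": int(score > 0)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects a JSON body like {"features": [1.0, 2.0, 3.0]}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve requests (blocks forever):
# HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

In Production, the same `predict` function would load a real packaged model, and the server would be the container entry point deployed behind a load balancer.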

Kubeflow
Kubeflow makes deployment of ML Workflows on Kubernetes straightforward and automated
Kubeflow for Model Ops

Conclusion

For someone aiming to kickstart their career in MLOps, this content may look overwhelming, but not to worry. As we progress in this series we will cover all the necessary components that would help you understand the big picture for an Enterprise-grade Solution.

I hope this article finds you well. STAY TUNED for more content. 😁


Vaibhav Satpathy

AI Enthusiast and Explorer
