Medallion Data Lakehouse Architecture

“Who rules the data, rules the world”

The business world has realised that data is its most important asset for growth and business success. Since traditional data storage and architectural methods cannot handle massive data volumes and advanced analytics capabilities, organisations are looking for new approaches.

When it comes to big data stores, Data Warehouses and Data Lakes are well known and have been widely used in the industry for a long time. Data Warehouses are used for storing structured data, while Data Lakes are employed when there is a need to store unstructured data in a variety of formats.

(Table: comparison of Data Warehouses and Data Lakes. Source: Databricks)

As the comparison shows, Data Warehouses and Data Lakes each have their own advantages and disadvantages. Modern data platforms often need features from both worlds. The Data Lakehouse is a hybrid approach that combines features of the Data Warehouse and the Data Lake.

In short, a Data Lakehouse is an architecture that enables efficient and secure Artificial Intelligence (AI) and Business Intelligence (BI) directly on vast amounts of data stored in Data Lakes.

https://www.databricks.com/blog/2021/08/30/frequently-asked-questions-about-the-data-lakehouse.html

With the increasing demands of AI and BI workloads, the Data Lakehouse architecture has caught the attention of data professionals and has become the go-to approach for handling massive amounts of data from different source systems in different formats. Since the Data Lakehouse is the central location for data in an organisation, it is necessary to follow a multi-layer architecture that ensures consistency, isolation, and durability of data as it passes through multiple processing stages.

The medallion approach consists of 3 data layers. The terms bronze (raw), silver (validated), and gold (enriched) describe the quality of the data at each of these levels.

Let’s see the features and usage of each data layer of the architecture.

The Bronze Layer

This is where the journey starts! Data in various formats coming from different source systems, including streaming data, is stored in a distributed file store that can hold vast data volumes. Normally the data in the bronze layer is not cleansed or processed; it is kept in its raw format. It can be timestamped or saved exactly as it arrives from the sources.

The Silver Layer

Though we have zillions of records in the bronze layer, much of it could be garbage, not worthy of any analytical task. When transferring data to the silver layer, transformation rules are applied to cleanse the data, remove null values, and join with lookup tables. The cleansed and validated data in the silver layer can be used for ad-hoc reporting, sophisticated analytics, and even machine learning model training.

The Gold Layer

As the name implies, this is the most valuable form of data in the organisation. This layer is responsible for aggregating data from the silver layer for business use. Since the data is enriched with business logic, data in the gold layer is mostly used in BI reporting and analytical tasks. Data analysts, data scientists, and business analysts rely heavily on the data available in the gold layer.
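To make the flow concrete, here is a minimal PySpark/Delta Lake sketch of data moving through the three layers. The paths, table names, and columns are hypothetical placeholders, and it assumes a Spark session with the Delta Lake package available.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

#bronze: land the raw source data as-is, stamped with ingestion time
raw = spark.read.json("/landing/orders/")
raw.withColumn("_ingested_at", F.current_timestamp()) \
   .write.format("delta").mode("append").save("/bronze/orders")

#silver: cleanse, drop nulls, and join with a lookup table
bronze = spark.read.format("delta").load("/bronze/orders")
customers = spark.read.format("delta").load("/silver/customers")
bronze.dropna(subset=["order_id", "customer_id"]) \
      .join(customers, "customer_id") \
      .write.format("delta").mode("overwrite").save("/silver/orders")

#gold: aggregate for business consumption
silver = spark.read.format("delta").load("/silver/orders")
silver.groupBy("region").agg(F.sum("amount").alias("total_sales")) \
      .write.format("delta").mode("overwrite").save("/gold/sales_by_region")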

Storing data with a medallion architecture ensures data quality and transparency. The medallion architecture is mainly promoted by Databricks, but it can be used as a blueprint for structuring a Lakehouse on other platforms too.

What’s your choice for storing big data? Sticking with the Data Lake, or moving to a Lakehouse?

References:

https://learn.microsoft.com/en-us/azure/databricks/lakehouse/medallion

https://www.databricks.com/blog/2020/01/30/what-is-a-data-lakehouse.html

Data Factory Vs. Synapse Vs. Databricks Vs. Machine Learning on Azure

In order to cope with the different needs of storage, computation, and technology stacks, public cloud providers have come up with different service offerings to cater to the data analytics domain. Sometimes it’s confusing to select which tools to use for the project or experiment at hand.

Analyzing data always involves computation. It’s quite usual to have big datasets, in both structured and unstructured formats, sitting in enterprise-level databases on which we must perform analysis and ETL tasks.

In such scenarios, relying on traditional sequential processing and data storage is not viable. We need an approach that lets us write scalable applications that process large amounts of data in parallel.

In this post, I’m going to share my experience and a comparison of some of the data-related tools that come with Microsoft Azure and their usage in real-world applications. It may help you get an idea of which toolset to choose for the need you have.

Azure Data Factory Vs. Azure Synapse Analytics

Azure Synapse Analytics is a newly introduced analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Azure Data Factory (ADF) is a fully managed, serverless data integration service. Both are ETL and data analytics tools with some different approaches and features.

If you are building an analytics solution in Azure, Synapse would be the best choice since it provides a single pane where you can do data integration, management, monitoring, and security in the same place. If you want the power of Apache Spark, Synapse would also be the choice to go forward with.

Note that some integration features in Synapse are still in public preview compared to the more mature ADF platform.

In my personal experience, I have seen a lot of industrial use cases moving towards Synapse, since it provides a more unified experience of data engineering plus data analysis in a single place.

For further reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/data-integration/concepts-data-factory-differences

Azure Data Factory Vs. Azure Databricks

ADF is primarily used for Data Integration services to perform ETL processes and orchestrate data movements at scale. In contrast, Databricks provides a collaborative platform for Data Engineers and Data Scientists to perform ETL as well as build Machine Learning models under a single platform.

There’s a big difference when it comes to the ways of working and flexibility. ADF offers a drag and drop feature to create and maintain data pipelines visually while Databricks offers a programmatic approach with a notebook environment which supports Python, Spark, R, Java, or SQL.

If an enterprise wants to experience a no-code ETL Pipeline for Data Integration, ADF is better. On the other hand, Databricks provides a Unified Analytics platform to integrate various ecosystems for BI reporting, Data Science, and Machine Learning.

Azure Synapse Vs. Azure Databricks

Apache Spark powers both Synapse and Databricks. With optimized Apache Spark support, Databricks allows users to select GPU-enabled clusters that offer faster data processing and higher data concurrency.

Both Synapse and Databricks come with smart notebooks. Databricks notebooks support real-time co-authoring along with automated version control. And while the Synapse environment does not support local IDEs, Databricks can be connected remotely from IDEs such as VS Code or PyCharm.

Synapse has built-in support for .NET applications, which is an advantage in end-to-end solution development.

Personally, I would say Synapse is not yet as mature as Databricks in certain areas.

Azure Machine Learning Studio Vs. Azure Databricks

While Databricks is built upon Apache Spark, Azure Machine Learning Studio can be described as an integrated machine learning model development toolset, with even no-code setups. If you are familiar with building machine learning models using Python or R, you’ll feel at home in the Jupyter-notebook-like environment AML Studio offers. The AML Designer and Automated ML capabilities are good for prototyping and developing machine learning models with no code.

In contrast, Databricks is geared more towards data engineers and data scientists, not the general user, as Azure Machine Learning is.

For computer vision or NLP kinds of experiments that use GPU-based computation, I would prefer Azure Machine Learning with its native Python support rather than spinning up a set of nodes on Databricks.

Selecting the best tool for developing your solution and performing your experiments is a decision to take after analyzing the project life cycle, resources, cost, and the capabilities of the team. Let me know: what’s your favorite platform out of this list?


Data Selection for Machine Learning Models

Data is the key component of machine learning. Thus, a high-quality training dataset is always the main success factor of a machine learning training process. A good dataset leads to more accurate models and faster convergence, and it is also the main deciding factor in a model’s fairness and bias.
Let’s discuss the dos and don’ts of selecting and preparing a dataset for training a machine learning model, and the factors we should consider when composing the training data. These are valid for structured numerical data as well as for unstructured data types such as images and videos.

What does the dataset’s distribution look like?

This is important mostly for numerical datasets. Calculating and plotting the frequency distribution (how often each value occurs in the dataset) gives us insight into the problem formulation as well as the class distribution. ML engineers tend to prefer datasets with a normal distribution, to make sure they have sufficient data points to train the models.


Though the normal distribution is common in nature and psychology, there’s no need for every dataset you use for model training to be normally distributed. (Obviously, some real-world data collections don’t fit the famous bell curve.)
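If you want a quick look at what you’re working with, a frequency plot is usually enough. A minimal pandas/matplotlib sketch, assuming a hypothetical CSV with a numeric column named 'value':

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("training_data.csv")  #hypothetical dataset
df["value"].plot.hist(bins=50)         #frequency distribution of one feature
plt.xlabel("value")
plt.ylabel("frequency")
plt.show()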

Does the dataset represent the real world?

We are training machine learning models to solve real-world problems, so the data should be real too. It’s OK to use synthetic data if you have no other way to collect more data or need to balance the classes, but always make sure to prioritise real-world data, since it makes the model more robust in testing and production. Please don’t feed random numbers into a machine learning model and expect it to solve your business problem with 90% accuracy 😉

Does the dataset match the context?

We must always make sure the characteristics of the dataset used for training the model match the conditions we will have when the model goes live in production. For example, let’s say we need to train a computer vision model for a mobile application that identifies certain types of tree leaves from images captured with the phone’s camera. There’s no use training the model only with images captured in a lab environment. Your training set should contain images captured in the wild, closely matching the real-world use case of the application.

Is the data redundant?

Data redundancy, or data duplication, is an important point to pay attention to when training ML models. If the dataset contains the same data points over and over, the model overfits to those points and will not perform well in testing.
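A quick way to check for duplicates, as a sketch with pandas (the dataset name is hypothetical):

import pandas as pd

df = pd.read_csv("training_data.csv")  #hypothetical dataset
print(df.duplicated().sum())           #number of exact duplicate rows
df = df.drop_duplicates()              #keep a single copy of each row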

Is the dataset biased?

A biased dataset never produces an unbiased trained model. The dataset we choose should always be balanced and not biased towards certain cases.
Let’s take the example of a supervised computer vision model that identifies the gender of people from their faces. Assume the model is trained only with images of people from the USA, and we are going to use it in an application that is used worldwide. The model will produce unrealistic predictions, since the training set is biased towards a certain ethnicity. To get a better outcome, the training set should have images of people from different ethnicities as well as different age groups.
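A quick way to spot this kind of skew, assuming the training set’s metadata sits in a DataFrame with hypothetical 'ethnicity' and 'age_group' columns:

import pandas as pd

meta = pd.read_csv("face_dataset_metadata.csv")        #hypothetical metadata file
print(meta["ethnicity"].value_counts(normalize=True))  #share of each group
print(meta["age_group"].value_counts(normalize=True))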

Which data is there too little/too much of?

“How much data do we need to train a model with good accuracy?” – this question comes up quite often when we plan ML projects. The simple answer is: “we don’t know!” 😀
There is no exact number for how much data is needed to train an ML model. We know that deep learning models are data hungry: we need large datasets for training deep neural networks, since we use them to represent non-linear, complex relationships. Even with traditional machine learning algorithms, we should make sure to have enough data from all the classes, including the edge and corner cases.
What happens if we have too much data? That doesn’t help at all. It only makes the training process lengthy and costly without improving the model’s accuracy, and it may even end up producing an overfitted model.

These are only a few of the points to consider when selecting a dataset for training a machine learning model. Please add your thoughts on dataset selection in the comments.

Different Approaches to Perform Machine Learning Experiments on Azure

We have discussed Azure Machine Learning Studio, the one-stop portal for all ML-related workloads on the Azure cloud, a lot. AzureML Studio provides different approaches for working on machine learning experiments based on the needs, resources, and constraints you have. Selecting the plan of attack is completely your choice.

We all have our own way of performing machine learning experiments. While some prefer working on Jupyter notebooks, some are more into less code environments. Being able to onboard data scientists with their familiar development environment without a big learning overhead is one of the main advantages of AzureML.

In this article, let’s discuss the different methods available in AzureML Studio and their usage in practical scenarios, along with the pros and cons of each approach.

Please keep in mind that these are my personal thoughts based on the experiences I have had with ML experiments, and they may change in different scenarios.

Automated ML

Summary of an Automated ML experiment

As the name implies, this is all automated. Automated ML is the easiest way of producing a predictive model in just a few minutes. You don’t need any coding experience to use Automated ML; you just need an idea of machine learning basics and an understanding of the problem you are going to solve with ML.

The process is pretty straightforward. You start by selecting the dataset you want to use for ML model training and specifying the ML task you want to perform. (Right now it supports classification, regression, and time series forecasting; computer vision and NLP tasks are in preview.) Then you can specify the algorithms you want to test it with, plus other optional parameters. Azure does all the hard work for you and provides a deployment-ready model that can be exposed as a REST API.

Pros:

  • Zero-code process.
  • Easy to use and well suited for fast prototyping.
  • Eliminates the environment setup step in ML model development.
  • Only limited machine learning knowledge is needed to get a production-viable result.

Cons:

  • Limited machine learning capabilities.
  • Right now, it only works with supervised learning scenarios.
  • Works well with relational data; computer vision and NLP are still in preview.
  • There’s no way to use custom machine learning algorithms in the process.

Azure ML Designer


Azure ML Designer is an upgraded version of the old Azure ML Studio drag-and-drop experience. It has a similar drag-and-drop interface for building machine learning experiment pipelines: you get a set of prebuilt components which you can connect together, flowchart style, to build machine learning experiments, with the ability to drop in SQL queries or Python/R scripts where you need them. After training a viable ML model, you can deploy it as a web service with just a few clicks.

I personally prefer this for prototyping. Plus, I see a lot of potential in Azure ML Designer for educational purposes. It’s really easy to visualize the ML process through the designer pipelines, and it increases the interpretability of the operation.

Pros:

  • Zero-code/low-code environment
  • Easy-to-use graphical interface
  • No need to worry about development/training environment configuration
  • Ability to expand the capabilities with Python/R scripts
  • Easy model deployment

Cons:

  • Less flexibility for complex ML model development.
  • Less support for deep learning workloads.
  • Code versioning has to be handled separately.

Azure ML notebooks

Performing data visualization on AzureML notebooks

This may be the most loved feature of Azure ML among data scientists. I know Jupyter notebooks are like bread and butter for data scientists. Azure ML offers them a fully managed notebook experience without the hassle of setting up the dev environment on local computers. You just connect the notebook to a compute instance, and it lets you do your model development and training in the cloud the same way you would on a local machine or any other notebook environment.

I would recommend this as the go-to option for most machine learning experiments, since it’s really easy to spin up a notebook instance and get the job done. Most ML-related libraries are pre-installed on the compute instance, and you even have the flexibility to install the third-party packages you need through conda or pip.

Pros:

  • Familiar notebook experience on cloud.
  • Option to use different python kernels.
  • No need to worry about dev environment setup on local compute.
  • Can use the powerful compute resources on Azure for model training.
  • Flexibility to install required libraries through package managers.

Cons:

  • Comes with a price for computation.
  • No direct support for Spark workloads.
  • Code version control has to be managed separately.

Developing locally and connecting to the AzureML service through the AzureML Python SDK

This is the option I would suggest for more advanced users. Think of a scenario where you have a deep learning based computer vision experiment to run on Azure with a complex code base. If that is the case, I would definitely use the AzureML Python SDK and connect my existing code base to the AzureML service.

In this approach, your code base sits on your local computer and you use Azure for model training, deployment, and monitoring. You have the total flexibility of using the power of the cloud for computation as well as the flexibility of using the local machine for development.
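As a rough sketch of how this looks with the v1 azureml-core SDK (the workspace config, folder, script, and compute names below are hypothetical):

from azureml.core import Workspace, Experiment, ScriptRunConfig

ws = Workspace.from_config()  #reads the config.json downloaded from the portal
experiment = Experiment(workspace=ws, name="cv-classification")

config = ScriptRunConfig(
    source_directory="./src",     #your local code base
    script="train.py",
    compute_target="gpu-cluster"  #remote compute on Azure
)

run = experiment.submit(config)   #training runs on the cloud compute
run.wait_for_completion(show_output=True)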

Pros:

  • Total flexibility to perform machine learning experiments with our comfortable dev tools.
  • The AzureML Python SDK is an open-source library.
  • Code version control can be handled easily.
  • The whole ML process can be managed using scripts (easy for automation).

Cons:

  • Setting up the local development environment may take some effort.
  • Some features are still in the experimental stage.

Choosing the most convenient approach for your ML experiment is totally based on the needs and resources you have. Before getting to the big picture, start small: start with a prototype, then a workable MVP, and gradually expand it with more complex machine learning approaches.

Which of these options is your preferred way of model development? Please mention it in the comments.

Cheers!

MLOps : Let’s start plumbing ML experiments!

What’s all this hype about MLOps? What’s the difference between machine learning and MLOps? Is MLOps essential? Why do we need MLOps? Through this article series we are going to start a discussion on MLOps, to get a good start with the upcoming trend. This first post is not going to go deep into technicalities, but it will cover the essential concepts behind MLOps.

What is MLOps?

As the name implies, it obviously has some connection with DevOps. So, let’s see what DevOps is first.

“A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers.”

Microsoft Docs

That is the formal definition of DevOps. In simpler terms, DevOps is the approach of streamlining the application development life cycle of the software development process. It ensures the quality engineering and security of the product while making sure team collaboration and coordination are managed effectively.

Imagine you are a junior developer in a software company that builds a mission-critical surveillance system. The DevOps process makes sure each and every line of code you write is tracked, managed, and integrated into the final product reliably. And it doesn’t stop at managing the code base: it covers all the steps of the development life cycle, including deployment and monitoring of the final product, iteratively.

That’s DevOps. Machine Learning Operations (MLOps) takes DevOps principles and practices and applies them to increase the efficiency of machine learning workflows. Simply put, it’s a way of managing ML workflows in a streamlined manner that ensures the quality, reliability, and interpretability of machine learning experiments.

Is MLOps essential?

We have been playing around with machine learning experiments using different tools, frameworks, and techniques for a while. To be honest, most of our experiments didn’t end up in production environments :D. But that’s the ultimate goal of predictive modeling.

Machine Learning experiment is an iterative process
Source : https://azure.microsoft.com/en-au/resources/gigaom-delivering-on-the-vision-of-mlops/

Building a machine learning model and deploying it is not a single-step process. It starts with data collection and runs through an iterative life cycle all the way to monitoring the deployed model in the production environment. MLOps approaches and concepts streamline these steps and interconnect them.

The answer is yes! We definitely need MLOps!

Why do we need MLOps?

As I said earlier, MLOps interconnects the steps of the ML life cycle and streamlines the process.

I grabbed these points from the Microsoft docs. As they imply, these are the goals of MLOps.

  • Faster experimentation and development of models

Good MLOps practice leads to more code and component reusability, which in turn leads to faster experiments and model development. For example, instead of writing separate training loops or data loading components for each experiment, we can reuse an abstract set of methods for those tasks and connect them to a machine learning pipeline to run different experiment configurations. That makes the developer’s life a lot easier!

I do a lot of experiments with computer vision. In my case, I usually have a set of abstract Python methods that can be used for model training and model evaluation. When performing different experiments, I pass the required parameters to the pipeline and reuse the methods, which makes life easy with less coding hassle.
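As a rough illustration (the names and parameters here are hypothetical), the reusable piece might be a single parameterised training loop that every experiment calls with its own configuration:

def train_model(model, train_loader, loss_fn, optimizer, epochs):
    #generic training loop reused across experiments
    for epoch in range(epochs):
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model

#each experiment just wires in its own configuration, e.g.
#model = train_model(resnet, loader, nn.CrossEntropyLoss(), optimizer, epochs=10)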

  • Faster deployment of models into production

Machine learning model deployment is always a tricky part. Managing the endpoints and making sure the deployment environment has all the required platform dependencies may be hard to keep track of with manual processes. A streamlined MLOps pipeline helps to manage deployments, for example by letting us choose which trained model should go to production, by keeping track of a model registry and deployment slots.

  • Quality assurance and end-to-end lineage tracking

Maintaining good coding practices, version control, dataset versioning, and so on ensures the quality of your experiments. Good MLOps practice helps you pinpoint where errors occur, rather than letting the whole process break down. Say your trained model stops performing well on new data some time after deployment; that might be caused by data drift over time. A correctly configured MLOps pipeline can track such changes in the inference data periodically and make sure such incidents are flagged.

  • Trustworthiness and ethical AI

This is one of the most important use cases of MLOps. It’s crucial to have transparency in machine learning experiments: the developer or data scientist should be able to explain each decision they took while performing the experiment. And since handling data is the key component of an ML model, there should be ways to maintain proper security measures in experiments too. MLOps pipelines help ensure these ethical AI principles are met by streamlining the process with a defined set of procedures.

How we gonna do this?

Now we all know MLOps is crucial. It’s not just a set of Python scripts sitting in a notebook; it’s about interconnecting all the steps of a machine learning experiment into an iterative process pipeline. There are many methods and approaches to go forward with: some sit on-prem, while most solutions take a hybrid approach with the cloud. I use a lot of Azure services in my experiments, and Azure Machine Learning Studio provides a one-stop workbench to handle all these MLOps workloads, which comes in pretty handy. Let’s start with a real-world scenario and see how we can use Azure Machine Learning Studio in an MLOps process to streamline our machine learning experiments.

FAQs on Machine Learning Development – #AskNaadi Part 1

Happy 2022!

It’s almost 7 years since I started playing with machine learning and related domains. These are some FAQs that come to me from peers, with my thoughts added. Feel free to ask any questions or raise concerns you have about the domain; I’ll try my best to add my thoughts. Note that all these answers are my personal opinions and experiences.

01. How to learn the theories behind machine learning?

The first thing I’d suggest would be ‘self-learning’. There are plenty of online resources out there where you can start studying on your own. Most of them are free. Some may need a payment for the certification (that’s totally up to you to pay and get it). I’ve listed down some of the famous places to get a kickstart for learning AI. Just take a look here.

Next would be keep practising. Never stop coding and training models in various domains. Kaggle is a good place to sharpen your skill set. Keep learning and keep practising at the same time.

02. Do we really need mathematics for ML?

Yes. To some extent you should know the theory behind probability and some basic mathematics. No need to worry a lot about that; as I said previously, there are plenty of places to catch up on your maths too.

03. Is there a difference between data analysis and machine learning?

Yes, there is. Data analysis is about finding patterns in existing data and drawing inferences from those patterns; it may have data visualization components too. When it comes to machine learning, you train a system to learn those patterns and try to predict the upcoming ones.

04. Is the trend in AI/ML going to fade out in the near future?

Mmm.. I don’t think so. I can’t say for sure AI is going to be ‘the’ future, but since all these technical advancements are going to generate a hell of a lot of data, there has to be a way to understand the patterns in that data and get value out of it. So data science and machine learning are going to be the approach to go for.

Right… those are some general questions I frequently get from people. Let’s move on to some technicalities.

05. What’s the OS you use on your work rig?

Ubuntu! Yes, it’s FOSS and super easy to set up all the dependencies I need on it. (I did a complete walkthrough of my setup previously; here it is.) Sometimes I use Windows too. But if it involves Docker and the like, yes.. Ubuntu is the choice I’m going with.

06. What’s your preferred programming language to perform machine learning experiments?

I’m a Python guy! (Have used R a little)

07. Any frameworks/ libraries you use most in your experiments?

Since I’m more into deep learning and computer vision, I use the PyTorch deep learning framework a lot. NumPy, scikit-learn, Pandas, and all the other ML-related Python toolkits are always in my toolbox.

08. Machine learning is all about neural networks, right?

No it’s not! This is one of the biggest myths! Artificial neural networks (ANNs) are only one family of algorithms with which we can perform machine learning. There are plenty of other algorithms widely used in ML: decision trees, Support Vector Machines, and Naive Bayes are some popular ML algorithms that are not ANNs.

09. Why do we need GPUs for training?

You need GPUs when you need to do parallel processing. The ordinary CPUs in our machines typically have a handful of cores and can handle only a limited number of threads simultaneously. A GPU, on the other hand, has thousands of small cores that can handle thousands of computational threads in parallel (for example, an Nvidia 2080 Ti has 4352 CUDA cores). In deep learning we have to perform millions of calculations to train models, and running these workloads on GPUs is much faster and more efficient.

10. When to use/ not to use Deep learning?

This is a tricky question. Deep learning is always good at understanding non-linear data; that’s why it performs really well in the computer vision and natural language processing domains. If you have such a task, or your feature space is really large and you have a massive amount of data, I’d suggest you go with deep learning. If not, sticking with traditional machine learning algorithms might be the best choice.

11. Do I need to know all the complex theories behind AI to develop intelligent applications?

Yes and no. In some cases you may have to understand the theory behind AI/ML in order to develop a machine-learning-based application; mostly, I would say, the model training and validation phases need this knowledge. But say you are a software developer who’s very good with .NET/Java, and you are developing an application with a component that has to read some text from a scanned document. You have to do it using computer vision. Fortunately, you don’t have to build the component from scratch: there are plenty of services which can be used as REST endpoints to complete the task. No need to worry about the underlying algorithms at all. Just use the JSON!

12. Should I build all my models from scratch?

This is a yes/no answer too. The question comes up mostly with deep learning model development. In some complex scenarios you may have to develop your models from scratch, but in most cases the problem you have can be framed as object detection, image classification, key-phrase extraction from text, or something similar. The best approach would be something like this:

  • Use a simple ANN and check that your data loading and related plumbing are working fine.
  • Use a pre-trained model and see how it performs (a widely used SOTA model would be the best choice).
  • If that’s not working out, do transfer learning and check the accuracy of the trained model; you should get good results most of the time by this step (see the sketch after this list).
  • Do some tweaks to the network and see if it works.
  • If none of these work, then think about building a novel model.
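As a minimal PyTorch transfer learning sketch (num_classes and the choice of resnet18 are placeholders for whatever your problem needs):

import torch.nn as nn
import torchvision.models as models

num_classes = 10  #hypothetical number of target classes

#start from a pre-trained backbone and freeze its weights
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

#replace the final layer and train only that
model.fc = nn.Linear(model.fc.in_features, num_classes)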

13. Is cloud based machine learning a good option?

In most industrial use cases, yes! Since most of the data in existing systems is already sitting in the cloud, and industries heavily rely on cloud services these days, cloud-based ML is a good approach. Obviously it comes with a price. In research phases, the price of purchasing computation power may be a problem; in those cases, my approach would be to do the research phase on-prem and move the deployment to the cloud.

14. I’ve got huge computer vision datasets to train. Shall I move all my stuff to the cloud?

Ehh… As I said previously, if you are planning a research project that runs for a long time and needs a lot of computational hours, I’d suggest going with a local setup first, finalizing the model, and then moving to the cloud. (If dollars aren’t a problem, no worries at all: go for the cloud! Obviously it’s easier and more reliable.)

15. Which cloud provider to choose?

There are a lot of cloud providers out there with various ML-related services. Some provide out-of-the-box services where you can just call an API to do the ML tasks (Microsoft Cognitive Services etc.). There are also services where you can use your own data to train existing models (the Custom Vision service on Azure etc.).

If you want end-to-end ML life cycle management, I personally find the Azure ML service a good solution, since you can use any of your ML-related frameworks and tools and just use the cloud to train, manage, and deploy the models. I find the MLOps features that come with Azure Machine Learning pretty useful.

16. I’ve trained and deployed a pretty good machine learning model. I don’t need to touch it again, right?

No way! You have to continuously check its performance and the accuracy it provides on the newest data coming into the service. That incoming data may be skewed. It’s always a good idea to train the model with more data, so it’s better to have re-training MLOps pipelines that check your models iteratively.

17. My DL models take a lot of time to train. Will more computation power speed things up?

Mm.. not all the time. I have seen cases where data loading takes more time than model training. Make sure you are using the correct coding approaches and sufficient memory and process management, and make sure you are not using old libraries that may be the cause of slow processing times. If your code is clean and clear, then try adjusting the computation power.

These are just a few questions I noted down. If you have any other questions or concerns in the domains of machine learning, deep learning, or data science, just drop a comment below. I’ll try to add my thoughts there.

Analyzing Performance of Neural Networks with PyTorch Profiler – Part 2

PyTorch Profiler output for model training

In the previous post, we explored the basic concepts of PyTorch Profiler and the newest capabilities that come with its recent updates. One of the coolest things I tried is the TensorBoard plugin that comes with PyTorch Profiler. Yes.. you heard that right.. TensorBoard, the well-known deep learning visualisation platform, has a Profiler plugin that makes network analysis much easier.

I just tried the official PyTorch Profiler tutorials, and the visualisations seem pretty descriptive for analysis. I’ll do a complete deep dive into the tool in the next article.

One of the cool things I’ve noticed is the performance recommendations. Most of the recommendations made by the tool make sense, and I’m pretty sure they are going to improve model training performance.

In the meantime you can play around with the tool and see how convenient it is to use in your deep learning experiments. Here’s the script I used for taking the initial steps with the tool.

import torch
import torch.nn
import torch.optim
import torch.profiler
import torch.utils.data
import torchvision.datasets
import torchvision.models
import torchvision.transforms as T

#load data
transform = T.Compose(
    [T.Resize(224),
     T.ToTensor(),
     T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

#create model 
device = torch.device("cuda:0")
model = torchvision.models.resnet18(pretrained=True).cuda(device)
criterion = torch.nn.CrossEntropyLoss().cuda(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
model.train()


#train function
def train(data):
    inputs, labels = data[0].to(device=device), data[1].to(device=device)
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

#use profiler to record execution events
with torch.profiler.profile(
        schedule=torch.profiler.schedule(wait=1, warmup=1, active=3, repeat=2),
        on_trace_ready=torch.profiler.tensorboard_trace_handler('./log/resnet18'),
        record_shapes=True,
        profile_memory=True,
        with_stack=True
) as prof:
    for step, batch_data in enumerate(train_loader):
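        # stop after (wait + warmup + active) * repeat = (1 + 1 + 3) * 2 = 10 steps,
        # matching the profiler schedule defined above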
        if step >= (1 + 1 + 3) * 2:
            break
        train(batch_data)
        prof.step()

Analyzing Performance of Neural Networks with PyTorch Profiler

Deep neural networks are complex! It literally takes quite an amount of effort and time to make them work near-perfectly. Beyond the effort you put into fitting the model to your data and getting an admirable accuracy, you have to keep your eye on model efficiency and performance; sometimes there’s a trade-off between model accuracy and inference efficiency. To manage this, analysing the memory and computation usage of the networks is essential. This is where profiling neural networks comes into the scene.

Since PyTorch is my preferred deep learning framework, I had been using the profiling tool it ships in torch.autograd.profiler for a while. It was pretty sleek and had some basic functionality for profiling DNNs. In a major update, PyTorch 1.8.1 announced PyTorch Profiler, the improved performance debugging profiler for PyTorch DNNs.

One of the major improvements is the performance visualisations integrated with TensorBoard. As mentioned in the release article, there are 5 major features included in PyTorch Profiler.

  1. Distributed training view
  2. Memory view
  3. GPU utilization
  4. Cloud storage support
  5. Jump to source code

You don’t need an extensive amount of code to analyze the performance of a network, just a few simple Profiler API calls. To get things started, let’s see how you can use PyTorch Profiler to analyze the execution time and memory consumption of the popular resnet18 architecture. You need PyTorch 1.8.1 or higher to perform these actions.

import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")

#init simple resnet model
model = models.resnet18().to(device)

#create a dummy input
inputs = torch.randn(5,3,224,224).to(device)

# profile CUDA activity only when a GPU is actually available
activities = [ProfilerActivity.CPU] + ([ProfilerActivity.CUDA] if use_cuda else [])

# Analyze execution time
with profile(activities=activities, record_shapes=True) as prof:
    with record_function("model_inference"):
        model(inputs)

#print the output sorted with CPU execution time
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))


#Analyzing memory consumption
with profile(activities=activities,
        profile_memory=True, record_shapes=True) as prof:
    model(inputs)

#print the output sorted with CPU memory consumption
print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
Output from the execution time analysis
Output from the memory consumption analysis

We’ll discuss using the Profiler visualizations for analyzing model behaviour in the next post.

Perspective Transformation of Coordinate Points on Polygons

I have blogged about some of the challenges I face in my deep-learning-based experiments and the approaches I used for overcoming them in previous posts. This is going to be one of them: I’m going to explain a technique I used for calculating the perspective relationship between two different planes in a computer vision application.

Background :

Computer vision is widely used in surveillance applications, object detection, and sports analytics. Mapping the imagery or video footage generated from a single camera, or from a set of cameras, to a relative space is one of the major tasks we may have to deal with. Mostly this need comes up when mapping people or object locations.

Use Case –

Imagine a sports analytics application where you capture a soccer game from a fixed camera and run a human detection algorithm on the image to find the player positions. That’s quite straightforward (you can see it has been done in the following figure). The tricky part is mapping the player positions from camera space to actual soccer field coordinates and generating a graph of player positions relative to the field (or you may want to normalize the location coordinates). What we need is an output similar to the bottom-left one in the figure.

How to do that?

We can clearly see that a soccer field is rectangular in shape. So, if we know the frame-space coordinates of the 4 corners of the field, we can easily transform any point inside that polygon into a given coordinate space. In geometry this is called a “perspective transformation”. (It is a bit different from the affine transformation, which is the more common application.)

Perspective transformation

If you are interested in digging deep to see how this mathematical transformation happens, I strongly encourage you to follow this link and see the matrix calculations behind the operation.

I found a pretty neat JavaScript snippet by Florian Segginger and ported the logic to a Python script.

import numpy as np

resultRect = {
  'p1': {'x': 0, 'y': 0},
  'p2': {'x': 1, 'y': 0},
  'p3': {'x': 1, 'y': 1},
  'p4': {'x': 0, 'y': 1}
}

# The target coordinate space is the unit square defined by resultRect above.
# First, find the transformation matrix for our deformed inputPolygon:
# [a b c]
# [d e f]
# [g h 1]
def perspective_transform(inputPolygon, point):
  x0 = inputPolygon['p1']['x']
  y0 = inputPolygon['p1']['y']
  x1 = inputPolygon['p2']['x']
  y1 = inputPolygon['p2']['y']
  x2 = inputPolygon['p3']['x']
  y2 = inputPolygon['p3']['y']
  x3 = inputPolygon['p4']['x']
  y3 = inputPolygon['p4']['y']

  dx1 = x1 - x2
  dx2 = x3 - x2
  dx3 = x0 - x1 + x2 - x3
  dy1 = y1 - y2
  dy2 = y3 - y2
  dy3 = y0 - y1 + y2 - y3

  a13 = (dx3 * dy2 - dy3 * dx2) / (dx1 * dy2 - dy1 * dx2)
  a23 = (dx1 * dy3 - dy1 * dx3) / (dx1 * dy2 - dy1 * dx2)
  a11 = x1 - x0 + a13 * x1
  a21 = x3 - x0 + a23 * x3
  a31 = x0
  a12 = y1 - y0 + a13 * y1
  a22 = y3 - y0 + a23 * y3
  a32 = y0

  transformMatrix = [
  [a11, a12, a13],
  [a21, a22, a23],
  [a31, a32, 1]
  ]

  #find inverse of matrix
  transformMatrix = np.array(transformMatrix)
  inv = np.linalg.inv(transformMatrix)

  #convert point to a matrix
  pointMatrix = np.array([point['x'], point['y'],1])

  #matrix multiplication
  resultMatrix = np.matmul(pointMatrix, inv)

  #map the result back out of homogeneous coordinates
  resultPoint = {
    'x': resultMatrix[0] / resultMatrix[2],
    'y': resultMatrix[1] / resultMatrix[2]
  }

  return resultPoint

########

#perform transformation with an example

inputPolygon = {
  'p1': {'x': 158, 'y': 2044},
  'p2': {'x': 669, 'y': 573},
  'p3': {'x': 2797, 'y': 594},
  'p4': {'x': 3686, 'y': 2062}
}

point = {'x': 1800, 'y': 900}

resultPoint = perspective_transform(inputPolygon, point)
print(resultPoint)  #coordinates mapped into the unit square

How to use this?

Pretty easy! You have to know 3 things.

1. The coordinates of the 4 corner points of the polygon to be transformed

2. The coordinate(s) of the point(s) to be transformed

3. The 4 corner points of the target polygon (this can be a rectangle or any 4-point polygon)

The perspective_transform method takes the input polygon coordinates and the point coordinates and outputs the resultPoint relative to the resultRect we have defined. (In this code I’ve used a 1×1 plane to map the points onto.)
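If you’d rather not maintain the matrix algebra yourself, OpenCV ships the same operation. A short sketch using the example coordinates above (assuming the opencv-python package is installed):

import cv2
import numpy as np

src = np.float32([[158, 2044], [669, 573], [2797, 594], [3686, 2062]])  #inputPolygon corners
dst = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])                      #resultRect corners

M = cv2.getPerspectiveTransform(src, dst)
point = np.float32([[[1800, 900]]])
print(cv2.perspectiveTransform(point, M))  #point mapped into the unit square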

Feel free to use this method in your applications and let me know your thoughts on this. Cheers!

Handling Imbalanced Classes with Weighted Loss in PyTorch

When it comes to real-world data collection, we don’t have the luxury of perfectly balanced labelled datasets for training models. Most machine learning algorithms are not immune to imbalanced classes, which leads to less accurate, biased models. There are many approaches we can follow to tackle the imbalanced data problem: either we choose an ML algorithm that is robust to imbalanced data, or we generate synthetic data to make the classes balanced.

Neural networks are trained using backpropagation, which treats each class the same when calculating the loss. If the data is not balanced, that makes the model biased towards one class over another.

A, B, C, D classes are imbalanced

I had to face this issue when experimenting with a computer-vision-based multi-class classification problem. The data I had was heavily skewed, and some classes had very little data compared to the majority class. The model was not performing well at all, and I needed to take some action to tackle the class imbalance problem.

These were the solutions I thought of trying out.

  1. Creating synthetic data –
    Creating new synthetic data points is one of the main methods; it is used mostly for numerical data and, in some cases, for imagery data too, with the help of GANs and image augmentations. As a starting point, I decided not to go with synthetic data generation since it may introduce abnormal characteristics to my dataset, so I kept that for later.
  2. Sampling the dataset with balanced classes –
    In this approach, what we normally do is sample a similar number of examples for each data label. For example, say we have a dataset with 3 classes named A, B & C, with 100, 50, and 20 data points respectively. When sampling, we randomly select 20 samples from each of the A, B & C classes and get a dataset with 60 data points.

In some cases this approach is a better option, if we have very large amounts of data for each class (even the minority classes). In my case, I could not afford to lose a huge portion of my data just by sampling it down to the number of data points in the minority class.

Since neither method was going to work well for me, I used a weighted loss function for training my neural network. Since this is a multi-class classification problem, I used Cross Entropy Loss in PyTorch as my loss function. (You can follow a similar approach if you are using BCELoss for binary classification.)

import torch
import torch.nn as nn

#class weights for a 6-class multi-class classification problem
#(must be a Tensor with one weight per class)
class_weights = torch.tensor([0.5281, 0.8411, 0.9619, 0.8634, 0.8477, 0.9577])

#loss function with class weights
criterion = nn.CrossEntropyLoss(weight=class_weights)

How did I calculate the weight for each class? –

It’s simple: I calculated a manual re-scaling weight for each class and passed it to the “weight” parameter of the loss function. Make sure you have a Tensor whose size is the number of classes as the class weights (in simpler words, each class should have a weight).
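For example, one common recipe is to weight each class inversely to its frequency. A minimal sketch with hypothetical class counts (not necessarily how the exact numbers above were derived):

import torch

#hypothetical number of training samples per class
class_counts = torch.tensor([9000., 2100., 480., 1900., 2050., 530.])

#weight each class inversely proportional to its frequency,
#rescaled so the average weight stays around 1
class_weights = class_counts.sum() / (len(class_counts) * class_counts)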

Hint: if you are using a GPU for model training, make sure to move your class weights tensor to the GPU too.

Did it work? Hell yeah! I was able to train my model accurately, with less bias and without overfitting to a single class, by using this simple trick. Let me know any other tricks you use for training neural network models with imbalanced data.

Happy coding 🙂