How to: Easily deploy Azure Machine Learning models to Azure Functions

In this How to I assume you have some prior knowledge of and experience with Azure and Azure Machine Learning. If the latter is completely new to you, I whole-heartedly recommend the quite excellent and free “Create no-code predictive models with Azure Machine Learning” learning path by Microsoft to get started!

Azure Machine Learning, especially with Automated Machine Learning, has made training machine learning models increasingly affordable. The lowering cost of implementing ML will hopefully make forays into it more appealing to smaller businesses. However, with Azure ML one significant cost-related issue has remained: the cost of deploying the trained ML models for real-time predictions. Using the features that Azure ML provides straight out-of-the-box, you can deploy your models to either Azure Container Instances – which are recommended for testing use only – or Azure Kubernetes Service, which is a very expensive choice as Azure ML requires an AKS cluster to have a minimum of 12 cores.

(Screenshot from the Azure pricing calculator: 6 × D2 v3 virtual machines – 2 vCPUs, 8 GB RAM, 50 GB temporary storage, €0.101/hour each – running 730 hours a month come to €443.24 per month pay-as-you-go, with 1-year and 3-year Reserved Virtual Machine Instances offering roughly 36% and 56% savings respectively.)

This is quite an expense, especially since having a good machine learning model is of no use if you don’t have it deployed somewhere where it can be accessed. Fortunately, there are other deployment options as well, such as deploying your machine learning models to Azure Functions. Using Azure Functions you can deploy the ML model to a Linux App Service, which at the lowest Basic performance tier costs a fraction of the cheapest AKS option: At the time of writing, you can get a Basic Linux App Service in the West Europe region for 11 euros a month! Now that’s something I’d call affordable.

In this blog post I’ll show you how to get an AutoML model up and running on an HTTP triggered Azure Function. Microsoft has provided their own documentation on the topic as well, but it uses a storage account trigger (instead of HTTP), omits details on quite a few crucial steps and prefers using Azure CLI for performing Azure operations. Personally, I like to do things in the Azure portal, so that’s the approach I’ve taken here.

The feature of deploying Azure ML models to Azure Functions is still in preview as of the time of writing, and as such it’s possible that a few months from now some steps in the process will work slightly differently. Let me know if you notice any changes!

Preparing your AutoML model for deployment

For this How-to article I’ll assume that you have already created an AutoML run and trained the model you wish to deploy to Azure Functions. Before the deployment can take place, there are several things you need to do:

  1. Register the model’s .pkl file to Azure Machine Learning
  2. Prepare a scoring script
  3. Create an inference config
  4. Create a Docker image containing your machine learning function

Fortunately for us, AutoML provides a ready-made scoring script that you can use as-is, and it also generates a YAML file containing all of the information needed for creating an inference config. So, let’s get started!

1. Register the model’s .pkl file to Azure Machine Learning

The very first thing you need to do is to download the model from Azure Machine Learning. The trained machine learning model takes the form of a .pkl file – a file format used by Python to serialize objects – and it is this file that you need to register into Azure ML so that it can be used when you eventually create the Azure Function deployment image. Open your AutoML experiment, and from the Models-tab select the model you wish to use and click “Download.” You’ll get a zip file which contains three files: the model .pkl file, a sample scoring script and a YAML file containing the Python conda and pip dependencies needed for running the scoring script. Extract these files to your hard drive since you’ll be needing all of them.

(Screenshot of the completed AutoML run: on the Models-tab, the VotingEnsemble model is selected and the Deploy and Download buttons are available.)

Next, in Azure Machine Learning open the Models-page and click to register a new model. Provide your model with a name, select AutoML as the model framework and choose to upload a file. Select the .pkl file which you extracted from the previously downloaded zip file and click Register.

(Screenshot of the model registration form: the model is named “coffeemodel”, the model framework is set to AutoML, and the extracted model.pkl is selected via the “Upload file” option.)

2. Prepare a scoring script

In order to use the model you trained, a scoring script is needed. To put it simply, a scoring script is a Python script that will run in the Azure Function and it does two things: in its init-method it loads and deserializes the model from its .pkl file, and in its run-method it receives the parameters sent to the Azure Function, passes them to the machine learning model for scoring and returns the resulting values.

So, next, you need to jump into a Python environment of choice. Personally, I prefer doing everything Azure Machine Learning related in, well, Azure Machine Learning, so my environment of choice is an Azure ML compute instance. If you haven’t used one before, jump into the Compute-page in Azure ML and click New. Choose your desired virtual machine size for the compute instance (I recommend something small, like DS2), click Next, give your compute instance a name and click Create. It will take a few minutes for your compute instance to get provisioned, so feel free to grab a cup of coffee while waiting.

And, once you’re done with using the compute instance, don’t forget to shut it down as simply having a running virtual machine costs money!

(Screenshot of the Compute-page: the compute instance is running, with JupyterLab, Jupyter, RStudio and SSH links listed under Application URI and a virtual machine size of STANDARD_DS2_V2.)

Once your compute instance is provisioned, click the Jupyter-link under Application URI to launch the Jupyter environment which you’ll use for the next few steps. Once you’ve got Jupyter open, create a new folder for the deployment files, give it a descriptive name and open it. Once you have the folder open, create a new text file and name it “score.py”.

Once you have the empty scoring script open, open the sample scoring script you extracted from the AutoML model zip package and copy its contents into the empty file in Jupyter. If you feel like it, you can modify the scoring script to your own needs, but for the purposes of this How to…that’s it!

# Copyright (c) Microsoft Corporation. All rights reserved.
import json
import logging
import os
import pickle
import numpy as np
import pandas as pd
from sklearn.externals import joblib
import azureml.automl.core
from azureml.automl.core.shared import logging_utilities, log_server
from azureml.telemetry import INSTRUMENTATION_KEY
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType

input_sample = pd.DataFrame({"Season": pd.Series([0], dtype="object"),
                             "Time of day": pd.Series([0], dtype="object"),
                             "Finnish people": pd.Series([0], dtype="int64"),
                             "British people": pd.Series([0], dtype="int64"),
                             "Other nationalities": pd.Series([0], dtype="int64")})
output_sample = np.array([0])

try:
    log_server.enable_telemetry(INSTRUMENTATION_KEY)
    log_server.set_verbosity('INFO')
    logger = logging.getLogger('azureml.automl.core.scoring_script')
except:
    pass


def init():
    global model
    # This name is model.id of the model that we want to deploy; deserialize the
    # model file back into a sklearn model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.pkl')
    try:
        model = joblib.load(model_path)
    except Exception as e:
        path = os.path.normpath(model_path)
        path_split = path.split(os.sep)
        log_server.update_custom_dimensions({'model_name': path_split[1],
                                             'model_version': path_split[2]})
        logging_utilities.log_traceback(e, logger)
        raise


@input_schema('data', PandasParameterType(input_sample))

3. Create an inference config

Next you need to create an inference config for your scoring script. An inference config is an object in the Azure Machine Learning Python libraries that does two things: it tells where the scoring script your solution uses resides, and what the Python execution environment will be like – what package dependencies it has and whether it will run on Docker, for example. Go back to your working folder in Jupyter and create a new Python 3.6 – Azure ML notebook. Give your notebook a descriptive name and then create a second notebook cell with the +-button – you’ll see why in a moment.

Select the second notebook cell and copy the following script into it:

import azureml.core
from azureml.core import Workspace
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()

# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=[],
                                                           pip_packages=[])
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)

Note! If you are NOT using a compute instance associated with your Azure Machine Learning workspace, the above script will not work as-is. At the very least you will have to install the required Azure ML Python packages and then connect to your Azure ML workspace using a subscription ID, a resource group name and a workspace name.
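In that case, connecting could look roughly like this. This is a sketch only – the workspace name, subscription ID and resource group below are placeholders you’d replace with your own:

```python
from azureml.core import Workspace

# All three identifiers below are placeholders - substitute your own values
ws = Workspace.get(name="my-workspace",
                   subscription_id="00000000-0000-0000-0000-000000000000",
                   resource_group="my-resource-group")
```

Depending on how you authenticate, Azure may prompt you for an interactive login when this runs.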

Then go back to your downloaded AutoML model files and open the YAML file in a text editor. See the package dependencies defined in there? Copy all of those into the inference config: Conda dependencies into the conda_packages array on line 14 and pip dependencies into the pip_packages array on line 15. For my model the YAML file looked like this:

conda_env_v_1_0_0.yml
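The file follows the standard conda environment format. An abridged sketch of its shape, using the dependency names from my model (the exact packages and versions come from your own download, and the python version line here is only illustrative):

```yaml
name: project_environment
dependencies:
- python=3.6.2
- scikit-learn==0.22.1
- numpy>=1.16.0,<1.19.0
- pip:
  - azureml-train-automl-runtime==1.16.0
  - inference-schema
  - azureml-defaults==1.16.0
```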

And after adding those dependencies to the inference config, my notebook looked like this:

import azureml.core
from azureml.core import Workspace
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()

# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn==0.22.1','numpy>=1.16.0,<1.19.0','pandas==0.25.1','py-xgboost<=0.90','fbprophet==0.5','holidays==0.9.11','psutil>=5.2.2,<6.0.0'],
                                                           pip_packages=['azureml-train-automl-runtime==1.16.0','inference-schema','azureml-interpret==1.16.0','azureml-defaults==1.16.0'])
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)

That’s it for the inference config! Next, you’ll keep working on this same notebook to…

4. Create a Docker image containing your machine learning function

In the three previous parts you prepared all the components required to have your model running and giving predictions: you registered the model itself, created a scoring script that uses the model and created an inference config that describes all the dependencies your scoring script and model have. The next step is to bundle all of those (and some other things that Azure ML handles for you behind the scenes) into a Docker image that can be deployed to Azure Functions. But before you can create the image you need to install a pip package on the compute instance. Copy the following line of code into the first notebook cell in your notebook:

pip install azureml-contrib-functions

And retrieve your registered machine learning model from the workspace:

model = ws.models['coffeemodel']

Finally, at the end of the script, add the following lines to create a Docker container image for your model and register it in a container registry in your Azure subscription. If you don’t have a registry yet, executing the notebook will create one. The package_http method has more optional arguments as well, for example for creating Docker images for local deployment; see the azureml-contrib-functions documentation for all of the possible arguments and other Azure Function packaging options.

docker_image = package_http(ws, [model], inference_config, auth_level=None)
docker_image.wait_for_creation(show_output=True);

The final notebook should look like this:

pip install azureml-contrib-functions

import azureml.core
from azureml.core import Workspace
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.conda_dependencies import CondaDependencies
from azureml.contrib.functions import package_http

ws = Workspace.from_config()

model = ws.models['coffeemodel']

# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn==0.22.1','numpy>=1.16.0,<1.19.0','pandas==0.25.1','py-xgboost<=0.90','fbprophet==0.5','holidays==0.9.11','psutil>=5.2.2,<6.0.0'],
                                                           pip_packages=['azureml-train-automl-runtime==1.16.0','inference-schema','azureml-interpret==1.16.0','azureml-defaults==1.16.0'])
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)

docker_image = package_http(ws, [model], inference_config, auth_level=None)
docker_image.wait_for_creation(show_output=True);

Then, simply run the cells in order and wait. Both installing the new pip package in the first cell and creating the Docker container image in the second will take a while, so this is also a great chance for a coffee break. We’ll continue after everything is done.

Oh, and once the scripts have completed is a good time to remember to shut down your compute instance!

Deploying the AutoML model to Azure Functions

Before you can actually deploy your model to Azure Functions you first need to create the Function App itself. Go to Azure Portal and create a new Function App as usual. The options of note here are setting Publish to Docker Container and creating a Linux app service plan with a Basic B1 SKU. Once your Function App has been provisioned, open it and move to the Container Settings page. There, after selecting Azure Container Registry as your image source, you should be able to select the Docker image you just created in Azure ML: The name of the registry is a unique string of alphanumeric characters, the name of the image should be “package” and the tag should be a timestamp in the format YYYYMMDDHHMMSS. Then, finally, click Save and wait for the Docker image to be installed. This, too, can take a while so go ahead and grab your fourth cup of coffee for this How to! To keep an eye on the progress of the installation, you can check the logs with the Refresh-button.

Once the Docker image has been installed, that should be it! But before we call it a day, let’s make sure that everything went according to plan and test the deployment. On the Functions-page of the Function App you should find a new function named azureml-service. Open it, get the function URL together with a key and use it to create a new POST request in Postman or your HTTP testing tool of choice. For the body of the request, include a JSON object with one property – “data” – which is an array of other JSON objects, each object containing a set of parameters for one request to your ML model. Go ahead and send the request, and if everything has gone right, you’ll receive the prediction results in the response.

Note! When you perform HTTP requests against your model, you must include the parameters in the JSON body in the exact same order they are listed in the scoring script’s input_sample variable! At least for the time being, the scoring script that AutoML generates does not actually map the parameters by name but by order, which can be quite confusing!
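For example, with the input_sample columns shown earlier, the request body would be built in that exact column order. A sketch (the people counts are made-up values):

```python
import json

# Parameters must appear in the same order as in the scoring script's
# input_sample - the generated script maps them by position, not by name.
payload = {
    "data": [
        {
            "Season": "Winter",
            "Time of day": "morning",
            "Finnish people": 5,       # made-up example values
            "British people": 2,
            "Other nationalities": 1,
        }
    ]
}
body = json.dumps(payload)

# The body would then be POSTed to the function URL, e.g. with the requests library:
# requests.post(function_url, data=body, headers={"Content-Type": "application/json"})
```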

(Screenshot of the test request in Postman: a POST to https://....azurewebsites.net/api/azureml-service?code=... with a raw JSON body containing a “data” array of objects with the “Season”, “Time of day”, “Finnish people”, “British people” and “Other nationalities” parameters, and the prediction [252.48617685231147] returned in the response body.)

In closing…

So that’s the bare minimum needed to deploy an AutoML model to Azure Functions! There’s still a lot more to cover, such as using function triggers other than HTTP, capturing request data for logging purposes, custom scoring scripts and so on, but this should get you started! I hope you’ve found this How to helpful, and let me know if you’ve done any cool machine learning solutions with Azure ML!

