In this How to I assume you have some prior knowledge of and experience with Azure and Azure Machine Learning. If the latter is completely new to you, I whole-heartedly recommend the quite excellent and free “Create no-code predictive models with Azure Machine Learning” learning path by Microsoft to get started!
Azure Machine Learning is a great service for anyone looking to perform, well, machine learning in a cloud environment. This is particularly true if you are already using Azure services, since Azure Machine Learning comes with lots of useful integrations to other Azure products. One set of these integrations is the ability to deploy your machine learning models straight from Azure Machine Learning to various Azure services to be available as an API. Certainly you could use Azure Machine Learning’s own “Publish to endpoint” feature for deployment as well, but these endpoints deploy on very powerful, and thus very expensive, Azure Kubernetes clusters. So if you are looking for a more affordable deployment target, these alternative options become very tempting. A while back I wrote a post about deploying Azure Machine Learning models to Azure Functions, and now it’s time to show how to do the same thing with App Services. If you have already read the previous blog on Azure Functions, you’ll see that the first three steps described here are mostly identical; it’s only the last part that is significantly different for App Services.

In this blog post I’ll show you how to get an AutoML model up and running on an Azure App Service API, which can be called with simple HTTP requests. Microsoft has provided their own documentation on the topic as well, but it omits details on quite a few crucial steps and prefers using Azure CLI for performing Azure operations. Personally, I like to do things in the Azure portal, so that’s the approach I’ve taken here.
The feature of deploying Azure ML models to Azure App Service is still in preview as of the time of writing, and as such it’s possible that a few months from now some steps in the process will work slightly differently. Let me know if you notice any changes!
Preparing your AutoML model for deployment
For this How-to article I’ll assume that you have already created an AutoML run and trained the model you wish to deploy to Azure App Service. Before the deployment can take place, there are several things you need to do:
- Register the model’s .pkl file to Azure Machine Learning
- Prepare a scoring script
- Create an inference config
- Create a Docker image containing your machine learning App Service
Fortunately for us, AutoML provides a ready-made scoring script that you can use as-is, and it also generates a YAML file containing all of the information needed for creating an inference config. So, let’s get started!
1. Register the model’s .pkl file to Azure Machine Learning
The very first thing you need to do is to download the model from Azure Machine Learning. The trained machine learning model takes the form of a .pkl file – a file format used by Python to serialize objects – and it is this file that you need to register into Azure ML so that it can be used when you eventually create the Azure App Service deployment image. Open your AutoML experiment, and from the Models-tab select the model you wish to use and click “Download.” You’ll get a zip file which contains three files: the model’s .pkl file, a sample scoring script and a YAML file containing the Python conda and pip dependencies needed for running the scoring script. Extract these files to your hard drive since you’ll be needing all of them.

Next, in Azure Machine Learning open the Models-page and click to register a new model. Provide your model with a name, select AutoML as the model framework and choose to upload a file. Select the .pkl file which you extracted from the previously downloaded zip file and click Register.
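If you prefer working from code instead of the portal, the Azure ML Python SDK can register the model as well. Here’s a minimal sketch, assuming a notebook that is already connected to your workspace and using the model name “coffeemodel” that appears later in this post:

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

# Register the .pkl file extracted from the downloaded zip under the
# name used later in this post
model = Model.register(workspace=ws,
                       model_path='model.pkl',
                       model_name='coffeemodel')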

2. Prepare a scoring script
In order to use the model you trained, a scoring script is needed. To put it simply, a scoring script is a Python script that runs in the Azure App Service and does two things: in its init method it loads and deserializes the model from the .pkl file, and in its run method it receives the parameters sent to the Azure App Service, passes them to the machine learning model for scoring and returns the resulting values.
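To give you an idea of the structure, here’s a minimal, hand-written sketch of a scoring script. This is not the script you’ll actually deploy – the AutoML-generated one adds telemetry and input/output schema definitions on top of this – but the init/run skeleton is the same. The AZUREML_MODEL_DIR environment variable is set by Azure ML inside the container and points to where your registered model is mounted:

import json
import os

import joblib
import pandas as pd

def init():
    # Runs once when the service starts: deserialize the model from its
    # .pkl file, which Azure ML mounts into the container
    global model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.pkl')
    model = joblib.load(model_path)

def run(raw_data):
    # Runs once per request: parse the incoming JSON, score it with the
    # model and return the predictions
    data = pd.DataFrame(json.loads(raw_data)['data'])
    predictions = model.predict(data)
    return json.dumps({'result': predictions.tolist()})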
So, next, you need to jump into a Python environment of your choice. Personally, I prefer doing everything Azure Machine Learning related in, well, Azure Machine Learning, so my environment of choice is an Azure ML compute instance. If you haven’t used one before, jump into the Compute-page in Azure ML and click New. Choose your desired virtual machine size for the compute instance (I recommend something small, like DS2), click Next, give your compute instance a name and click Create. It will take a few minutes for your compute instance to get provisioned, so feel free to grab a cup of coffee while waiting.
And, once you’re done with using the compute instance, don’t forget to shut it down as simply having a running virtual machine costs money!

Once your compute instance is provisioned, click the Jupyter-link under Application URI to launch the Jupyter environment which you’ll use for the next few steps. Once you’ve got Jupyter open, create a new folder for the deployment files, give it a descriptive name and open it. Once you have the folder open, create a new text file and name it “score.py”.

Once you have the empty scoring script open, open the sample scoring script you extracted from the AutoML model zip package and copy its contents into the empty file in Jupyter. If you feel like it, you can modify the scoring script to your own needs, but for the purposes of this How to…that’s it!
![The sample scoring script (score.py) generated by AutoML, open in the Jupyter editor](https://joonasaijala.files.wordpress.com/2020/11/image-6.png?w=1024)
3. Create an inference config
Next you need to create an inference config for your scoring script. An inference config is an object in the Azure Machine Learning Python libraries that does two things: it tells where the scoring script your solution uses resides, and what the Python execution environment will be like – for example, what package dependencies it has and whether it will run on Docker. Go back to your working folder in Jupyter and create a new Python 3.6 – Azure ML notebook. Give your notebook a descriptive name and then copy the following script into it:
import azureml.core
from azureml.core import Workspace
from azureml.core import Model
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.conda_dependencies import CondaDependencies
ws = Workspace.from_config()
# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=[],
pip_packages=[])
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
Note! If you are NOT using a compute instance associated with your Azure Machine Learning workspace the above script will not work as-is. At the very least you will have to install all required Azure ML Python packages and then connect to your Azure ML workspace using a subscription ID, a resource group ID and a workspace name.
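In that case, connecting to the workspace looks roughly like this – a sketch with placeholder values that you’d replace with your own:

from azureml.core import Workspace

# Placeholder values – fill in your own subscription ID, resource group
# and workspace name
ws = Workspace.get(name='my-workspace',
                   subscription_id='<subscription-id>',
                   resource_group='my-resource-group')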
Then go back to your downloaded AutoML model files and open the YAML file in a text editor. See the package dependencies defined in there? Copy all of those into the inference config: conda dependencies into the conda_packages list and pip dependencies into the pip_packages list. For my model the YAML file looked like this:
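In text form the dependencies were roughly the following – reconstructed here from the dependency lists that end up in the final notebook further below, so the surrounding boilerplate (such as the Python version line) may look slightly different in your file:

name: project_environment
dependencies:
- python=3.6.2
- scikit-learn==0.22.1
- numpy>=1.16.0,<1.19.0
- pandas==0.25.1
- py-xgboost<=0.90
- fbprophet==0.5
- holidays==0.9.11
- psutil>=5.2.2,<6.0.0
- pip:
  - azureml-train-automl-runtime==1.16.0
  - inference-schema
  - azureml-interpret==1.16.0
  - azureml-defaults==1.16.0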

And after adding those dependencies to the inference config, my notebook looked like this:

That’s it for the inference config! Next, you’ll keep working on this same notebook to…
4. Create a Docker image containing your machine learning App Service
In the three previous parts you prepared all the components required to have your model running and giving predictions: you registered the model itself, created a scoring script that uses the model and created an inference config that describes all the dependencies your scoring script and model have. The next step is to bundle all of those (and some other things that Azure ML handles for you behind the scenes) into a Docker image that can be deployed into Azure App Service. The first thing you need to do is retrieve your registered machine learning model from the workspace:
model = ws.models['coffeemodel']
Finally, at the end of the script, add the following lines to create a Docker container image for your model and register it in a container registry in your Azure subscription. If you don’t have a registry yet, executing the notebook will create one. The Model.package method has more optional arguments as well, for example for creating Docker images for local deployment; see the documentation for all of the possible arguments and other packaging options. Model.package takes two additional parameters, image_name and image_label, which describe the image in the container registry. You can think of image_name as the name of this particular model, and image_label as the version number. So for the label you can, for example, use a timestamp string, a version number or go for something simple like the word “latest.” Note that if you deploy another version of the model with the same name and label, the old one gets overwritten!
docker_image_name = 'auto-coffee-model'
docker_image_label = 'latest'
docker_image = Model.package(ws, [model], inference_config,
image_name=docker_image_name,
image_label=docker_image_label)
docker_image.wait_for_creation(show_output=True)
The final notebook should look like this:
import azureml.core
from azureml.core import Workspace
from azureml.core import Model
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.conda_dependencies import CondaDependencies
ws = Workspace.from_config()
model = ws.models['coffeemodel']
# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn==0.22.1','numpy>=1.16.0,<1.19.0','pandas==0.25.1','py-xgboost<=0.90','fbprophet==0.5','holidays==0.9.11','psutil>=5.2.2,<6.0.0'],
pip_packages=['azureml-train-automl-runtime==1.16.0','inference-schema','azureml-interpret==1.16.0','azureml-defaults==1.16.0'])
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
docker_image_name = 'auto-coffee-model'
docker_image_label = 'latest'
docker_image = Model.package(ws, [model], inference_config,
image_name=docker_image_name,
image_label=docker_image_label)
docker_image.wait_for_creation(show_output=True)
Then, simply run the cells in order and wait. Creating the Docker container image will take a while, so this is also a great chance for a coffee break. We’ll continue after everything is done.
Oh, and once the scripts have completed, it’s a good time to shut down your compute instance!
Deploying the AutoML model to Azure App Service
Before you can actually deploy your model to Azure App Service you first need to create the App Service resource itself. Go to Azure Portal and create a new App Service as usual. The options of note here are setting Publish to Docker Container and creating a Linux app service plan with a Basic B1 SKU. Once your App Service has been provisioned, open it and move to the Container Settings page. There, after selecting Azure Container Registry as your image source, you should be able to select the Docker image you just created in Azure ML: Unless you have named the registry yourself, the name of the registry is a unique string of alphanumeric characters. The name of the image should be the value you used for the parameter image_name in the deployment script, and the tag is the value you assigned to image_label. Then, finally, click Save and wait for the Docker image to be installed. This, too, can take a while so go ahead and grab your fourth cup of coffee for this How to! To keep an eye on the progress of the installation, you can check the logs with the Refresh-button.
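If you’d rather script this part the way Microsoft’s documentation does, the rough Azure CLI equivalent is sketched below. The resource names are placeholders, and depending on your setup you may also need to configure the web app’s credentials for your container registry:

az appservice plan create --name coffee-plan --resource-group my-rg --sku B1 --is-linux
az webapp create --name coffee-api --resource-group my-rg --plan coffee-plan --deployment-container-image-name <registry-name>.azurecr.io/auto-coffee-model:latest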


Once the Docker image has been installed, that should be it! But before we call it a day, let’s make sure that everything went according to plan and test the deployment. When deployed to an App Service, Azure Machine Learning models can be called by performing an HTTP POST request to the /score endpoint, for example https://yourmodel.azurewebsites.net/score. You can test your model deployment by creating a new POST request in Postman or another HTTP testing tool of your choice. For the body of the request, include a JSON object with one property – “data” – which is an array of other JSON objects, each object containing a set of parameters for one request into your ML model. Go ahead and send the request, and if everything has gone right, you’ll receive the results of your requests as the response.
Note! When you perform HTTP requests against your model, you must include the parameters in the JSON body in the exact same order they were listed in the scoring script’s input_sample variable! At least for the time being, the scoring script that AutoML generates automatically maps the parameters not by name but by order, which can be quite confusing!
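As an example, here’s a sketch of testing the deployment with Python’s requests library. The URL is a placeholder and the input fields come from my model’s input_sample – replace them with your own model’s parameters, keeping the order they have in your input_sample:

import requests

# Placeholder URL and example input fields – use your own model's values.
# Keep the fields in the same order as in the scoring script's input_sample!
url = 'https://yourmodel.azurewebsites.net/score'
body = {
    'data': [
        {
            'Time of day': 'Morning',
            'Finnish people': 10,
            'British people': 2,
            'Other nationalities': 1
        }
    ]
}

response = requests.post(url, json=body)
print(response.json())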

In closing…
So that’s the bare minimum needed to deploy an AutoML model into Azure App Services! There’s still a lot more to cover, such as adding authentication, capturing request data for logging purposes, custom scoring scripts and so on, but this should get you started! I hope you’ve found this How to helpful, and let me know if you’ve done any cool machine learning solutions with Azure ML!