In this How-to I assume you have some prior knowledge and experience with Azure and Azure Machine Learning. If the latter is completely new to you, I wholeheartedly recommend the excellent and free “Create no-code predictive models with Azure Machine Learning” learning path by Microsoft to get started!
Azure Machine Learning, especially with Automated Machine Learning, has made training machine learning models increasingly affordable. The falling cost of implementing ML will hopefully make forays into it more appealing to smaller businesses. However, with Azure ML one significant cost-related issue has remained: the cost of deploying the trained ML models for real-time predictions. Using the features that Azure ML provides straight out of the box, you can deploy your models either to Azure Container Instances – which are recommended for testing use only – or to Azure Kubernetes Service, which is a very expensive choice, as Azure ML requires an AKS cluster to have a minimum of 12 cores.
This is quite an expense, especially since having a good machine learning model is of no use if you don’t have it deployed somewhere where it can be accessed. Fortunately, there are other deployment options as well, such as deploying your machine learning models to Azure Functions. Using Azure Functions you can deploy the ML model to a Linux App Service, which at the lowest Basic performance tier costs a fraction of the cheapest AKS option: At the time of writing, you can get a Basic Linux App Service in the West Europe region for 11 euros a month! Now that’s something I’d call affordable.
In this blog post I’ll show you how to get an AutoML model up and running on an HTTP triggered Azure Function. Microsoft has provided their own documentation on the topic as well, but it uses a storage account trigger (instead of HTTP), omits details on quite a few crucial steps and prefers using Azure CLI for performing Azure operations. Personally, I like to do things in the Azure portal, so that’s the approach I’ve taken here.
The feature of deploying Azure ML models to Azure Functions is still in preview as of the time of writing, and as such it’s possible that a few months from now some steps in the process will work slightly differently. Let me know if you notice any changes!
Preparing your AutoML model for deployment
For this How-to article I’ll assume that you have already created an AutoML run and trained the model you wish to deploy to Azure Functions. Before the deployment can take place, there are several things you need to do:
- Register the model’s .pkl file to Azure Machine Learning
- Prepare a scoring script
- Create an inference config
- Create a Docker image containing your machine learning function
Fortunately for us, AutoML provides a ready-made scoring script that you can use as-is, and it also generates a YAML file containing all of the information needed for creating an inference config. So, let’s get started!
1. Register the model’s .pkl file to Azure Machine Learning
The very first thing you need to do is to download the model from Azure Machine Learning. The trained machine learning model takes the form of a .pkl file – a file format used by Python to serialize objects – and it is this file that you need to register in Azure ML so that it can be used when you eventually create the Azure Function deployment image. Open your AutoML experiment, and from the Models-tab select the model you wish to use and click “Download.” You’ll get a zip file which contains three files: the model .pkl file, a sample scoring script and a YAML file containing the Python conda and pip dependencies needed for running the scoring script. Extract these files to your hard drive since you’ll be needing all of them.
Next, in Azure Machine Learning open the Models-page and click to register a new model. Provide your model with a name, select AutoML as the model framework and choose to upload a file. Select the .pkl file which you extracted from the previously downloaded zip file and click Register.
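If you prefer scripting over the portal, the same registration can also be done with the Azure ML Python SDK. A minimal sketch, assuming the extracted file is named model.pkl and using the model name this post uses later:

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Assumes you are running inside an Azure ML workspace context
# (e.g. on a compute instance with a config.json available)
ws = Workspace.from_config()

# "model.pkl" is the file extracted from the downloaded zip;
# "coffeemodel" is the registered name referenced later in this post
model = Model.register(workspace=ws,
                       model_path="model.pkl",
                       model_name="coffeemodel")
```

Either way, the end result is the same: the model shows up on the Models-page of your workspace.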
2. Prepare a scoring script
In order to use the model you trained, a scoring script is needed. To put it simply, a scoring script is a Python script that will be run in the Azure Function, and it does two things: in the init-method it loads and de-serializes the model from its .pkl file, and in the run-method it receives the parameters sent to the Azure Function, passes them to the machine learning model for scoring and then returns the resulting values.
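To make that two-method contract concrete, here is a minimal sketch of its shape. The dummy model below is a stand-in I made up for illustration; the script that AutoML generates instead locates the registered .pkl file and de-serializes it inside init:

```python
import json

class DummyModel:
    """Stand-in for the AutoML model de-serialized from the .pkl file."""
    def predict(self, rows):
        return [sum(row) for row in rows]

model = None

def init():
    # Runs once on startup: load and de-serialize the model.
    # The generated script loads the real registered model here.
    global model
    model = DummyModel()

def run(raw_data):
    # Runs per request: parse the incoming JSON, score it,
    # and return the predictions.
    try:
        data = json.loads(raw_data)["data"]
        return json.dumps({"result": model.predict(data)})
    except Exception as e:
        return json.dumps({"error": str(e)})

init()
print(run('{"data": [[1, 2], [3, 4]]}'))  # {"result": [3, 7]}
```

The real script follows this same init/run structure, just with the actual model and input schema in place of the dummy.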
So, next, you need to jump into a Python environment of choice. Personally, I prefer doing everything Azure Machine Learning related in, well, Azure Machine Learning, so my environment of choice is an Azure ML compute instance. If you haven’t used one before, jump into the Compute-page in Azure ML and click New. Choose your desired virtual machine size for the compute instance (I recommend something small, like DS2), click Next, give your compute instance a name and click Create. It will take a few minutes for your compute instance to get provisioned, so feel free to grab a cup of coffee while waiting.
And, once you’re done with using the compute instance, don’t forget to shut it down as simply having a running virtual machine costs money!
Once your compute instance is provisioned, click the Jupyter-link under Application URI to launch the Jupyter environment which you’ll use for the next few steps. Once you’ve got Jupyter open, create a new folder for the deployment files, give it a descriptive name and open it. Once you have the folder open, create a new text file and name it “score.py”.
Once you have the empty scoring script open, open the sample scoring script you extracted from the AutoML model zip package and copy its contents into the empty file in Jupyter. If you feel like it, you can modify the scoring script to your own needs, but for the purposes of this How to…that’s it!
3. Create an inference config
Next you need to create an inference config for your scoring script. An inference config is an object in the Azure Machine Learning Python libraries that does two things: It tells where the scoring script your solution uses resides, and what the Python execution environment will be like: What package dependencies does it have and is it going to be run on Docker, for example. Go back to your working folder in Jupyter and create a new Python 3.6 – Azure ML notebook. Give your notebook a descriptive name and then create a second notebook cell with the +-button – you’ll see why in a moment.
Select the second notebook cell and copy the following script into it:
```python
import azureml.core
from azureml.core import Workspace
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()

# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=[],
                                                           pip_packages=[])
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
```
Note! If you are NOT using a compute instance associated with your Azure Machine Learning workspace the above script will not work as-is. At the very least you will have to install all required Azure ML Python packages and then connect to your Azure ML workspace using a subscription ID, a resource group ID and a workspace name.
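In that case, the connection can be sketched roughly like this; all three values are placeholders you need to copy from the Azure portal:

```python
from azureml.core import Workspace

# Placeholder values - replace with your own subscription details.
# By default this prompts for an interactive Azure login.
ws = Workspace.get(name="<workspace-name>",
                   subscription_id="<subscription-id>",
                   resource_group="<resource-group>")
```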
Then go back to your downloaded AutoML model files and open the YAML file in a text editor. See the package dependencies defined in there? Copy all of them into the inference config: conda dependencies into the conda_packages array and pip dependencies into the pip_packages array. For my model the YAML file looked like this:
And after adding those dependencies to the inference config, my notebook looked like this:
That’s it for the inference config! Next, you’ll keep working on this same notebook to…
4. Create a Docker image containing your machine learning function
In the three previous parts you prepared all the components required to have your model running and giving predictions: you registered the model itself, created a scoring script that uses the model and created an inference config that describes all the dependencies your scoring script and model have. The next step is to bundle all of those (and some other things that Azure ML handles for you behind the scenes) into a Docker image that can be deployed to Azure Functions. But before you can create the image, you need to install a pip package on the compute instance. Copy the following line of code into the first notebook cell in your notebook:
```python
pip install azureml-contrib-functions
```
And retrieve your registered machine learning model from the workspace:
```python
model = ws.models['coffeemodel']
```
Finally, at the end of the script, add the following lines to create a Docker container image for your model and register it in a container registry in your Azure subscription. If you don’t have a registry yet, executing the notebook will create one. The package_http method has more optional arguments as well, for example for creating Docker images for local deployment; see the documentation for all of the possible arguments and other Azure Function packaging options.
```python
docker_image = package_http(ws, [model], inference_config, auth_level=None)
docker_image.wait_for_creation(show_output=True);
```
The final notebook should look like this:
```python
# Cell 1
pip install azureml-contrib-functions
```

```python
# Cell 2
import azureml.core
from azureml.core import Workspace
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
from azureml.core.conda_dependencies import CondaDependencies
from azureml.contrib.functions import package_http

ws = Workspace.from_config()
model = ws.models['coffeemodel']

# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['scikit-learn==0.22.1', 'numpy>=1.16.0,<1.19.0',
                    'pandas==0.25.1', 'py-xgboost<=0.90', 'fbprophet==0.5',
                    'holidays==0.9.11', 'psutil>=5.2.2,<6.0.0'],
    pip_packages=['azureml-train-automl-runtime==1.16.0', 'inference-schema',
                  'azureml-interpret==1.16.0', 'azureml-defaults==1.16.0'])
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)

docker_image = package_http(ws, [model], inference_config, auth_level=None)
docker_image.wait_for_creation(show_output=True);
```
Then, simply run the cells in order and wait. Both installing the new pip package in the first cell and creating the Docker container image in the second will take a while, so this is also a great chance for a coffee break. We’ll continue once everything is done.
Oh, and once the scripts have completed is a good time to remember to shut down your compute instance!
Deploying the AutoML model to Azure Functions
Before you can actually deploy your model to Azure Functions you first need to create the Function App itself. Go to Azure Portal and create a new Function App as usual. The options of note here are setting Publish to Docker Container and creating a Linux app service plan with a Basic B1 SKU. Once your Function App has been provisioned, open it and move to the Container Settings page. There, after selecting Azure Container Registry as your image source, you should be able to select the Docker image you just created in Azure ML: The name of the registry is a unique string of alphanumeric characters, the name of the image should be “package” and the tag should be a timestamp in the format YYYYMMDDHHMMSS. Then, finally, click Save and wait for the Docker image to be installed. This, too, can take a while so go ahead and grab your fourth cup of coffee for this How to! To keep an eye on the progress of the installation, you can check the logs with the Refresh-button.
Once the Docker image has been installed, that should be it! But before we call it a day, let’s make sure that everything went according to plan and test the deployment. On the Functions-page of the Function App you should find a new function named azureml-service. Open it, get the function URL together with a key and use it to create a new POST request in Postman or another HTTP testing tool of your choice. For the body of the request, include a JSON object with one property – “data” – which is an array of other JSON objects, each containing a set of parameters for one request to your ML model. Go ahead and send the request, and if everything has gone right, you’ll receive the results of your requests as the response.
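If you’d rather script the test, the same request can be sketched with the Python standard library. The URL, key and parameter names below are placeholders I made up; substitute the values from your own deployment:

```python
import json
import urllib.request

# Placeholder: function URL + key copied from the Azure portal
url = ("https://<your-function-app>.azurewebsites.net"
       "/api/azureml-service?code=<function-key>")

# "data" is an array of objects, one per prediction; the field names
# and values here are hypothetical example model parameters
body = json.dumps({"data": [
    {"temperature": 21.5, "humidity": 0.40},
    {"temperature": 18.0, "humidity": 0.55},
]}).encode("utf-8")

req = urllib.request.Request(url, data=body,
                             headers={"Content-Type": "application/json"},
                             method="POST")

# Uncomment once the URL and key point at a real deployment:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```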
Note! When you perform HTTP requests against your model, you must include the parameters in the JSON body in the exact same order they were listed in the scoring script’s input_sample variable! At least for the time being, the scoring script that AutoML generates does not actually map the parameters by name but by position, which can be quite confusing!
So that’s the bare minimum needed to deploy an AutoML model to Azure Functions! There’s still a lot more to cover, such as using function triggers other than HTTP, capturing request data for logging purposes, custom scoring scripts and so on, but this should get you started! I hope you’ve found this How-to helpful, and let me know if you’ve done any cool machine learning solutions with Azure ML!