Building an API for your credit model in 5 minutes

The last three posts explained how to create a credit model, build an API for the model using plumber and scale it up using AWS and Docker.

These posts demonstrate that machine learning models can readily be delivered as a service in the form of an API. Nonetheless, the approach suggested in those three posts can be daunting for data scientists and for companies that are about to start predictive modeling initiatives, for the following reasons:

  • Lack of knowledge: A data scientist may know a great deal about machine learning and data science but not necessarily about cloud or container technology (DevOps engineers or data engineers are typically more familiar with these subjects)
  • Lack of interest: A data scientist may be interested only in finding insights, making informative graphs and building predictive models from data, not in turning those models into services or maintaining them
  • Lack of resources: A data scientist or company that has built APIs for predictive models may find they need people on call to handle emergencies (two common examples: instance failures in AWS and difficulty scaling up a relational database) and realize they do not have enough staff to do so. In the end, a data scientist has to sleep; how can she ensure that her machine-learning-model-as-a-service stays up and running without issue while she sleeps?

This post shows how to create an API for the credit model used throughout the three blog posts above in just 5 minutes. What is as rewarding as the convenience is that once the API is running, you no longer have to worry about the technical details, from maintaining the infrastructure to writing documentation for the API.

5-minute instructions

Step 1. Sign-up or sign-in

Time: 1 minute

If you have not signed up yet, go to our sign up page. It asks only for a username, an email address and a password. You will receive a verification email at the address you provide.

Or, if you have already signed up, go to our sign in page.

Step 2. Prepare a wrapper for the credit model

Time: 2 minutes

Write a small wrapper around the credit model we built and name it knowledge.R. The script must define a function named run. An example file is provided below; this file, along with the GermanCreditDecisionTree.RData file we created in the first blog post, can be downloaded here.

run <- function(data) {
  # Load the fitted decision tree (an object named german.credit.decision.tree)
  load("GermanCreditDecisionTree.RData")
  
  # predict() returns a matrix of class probabilities for the input row;
  # row 1, column 2 holds the predicted probability of default
  model.result <- predict(german.credit.decision.tree, data)
  
  return(
    list(
      predicted.default.probability=round(model.result[1,2], digits=4)
    )
  )
}

Step 3. Upload the wrapper and saved credit model

Time: 1 minute

Once signed in, click the "Models" button on the left menu.

Click the "Deploy A Model" button at the bottom right.

You will see a form like the one below. Give the model a name like "Credit Model API Demo," choose "R 3.2" as the language type and select the two files (knowledge.R and GermanCreditDecisionTree.RData) from your local drive. If you have not created or downloaded these files yet, they are available here: download credit model example files. If you are not familiar with how GermanCreditDecisionTree.RData was generated, refer to the first blog post.

You will see the API being built in the background. Its status will change from "building" to "docking" to "ready." Building is the step where all relevant files are gathered into a Docker container, docking moves that container to an AWS instance, and ready means all the previous steps are complete. The whole process shouldn't take more than 30 seconds. Let us click the model's title to take a closer look.

Step 4. Check that it works

Time: 1 minute

First, let us check whether the API is "ready." Again, it should be ready within seconds because this API does not involve any heavy libraries. Typically, downloading packages takes the most time, so if your model requires many large libraries, this step can take a while.

Once it is ready, click the "Run" button at the bottom right.

Then, provide the input JSON shown below. Where these values come from should be straightforward if you have read the first blog post in this series: when we created the credit model, we learned which variables it needs, and we simply provide values for those variables to the API.

{
  "Credit.history": "A32", 
  "Duration.in.month": 24, 
  "Savings.account.bonds": "A63", 
  "Status.of.existing.checking.account": "A11"
}

Hooray! It worked and returned a predicted default probability.
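The response body mirrors the list returned by run in knowledge.R, so it is shaped like the following (the probability shown here is only an illustrative placeholder, not actual model output):

```json
{
  "predicted.default.probability": 0.1234
}
```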

Let us click the name of the run record to see more.

Once we come back to the model detail page where we made the run request, we can see that the activity graph correctly shows that we made one request in this hour.

Knowru also creates API documentation and a graphical user interface (GUI) for you automatically, and it can validate input to your API before the input even hits the API. We will save the details of these features for another post, because I promised only 5 minutes of your time :).

Now that the API is up and running, you can call it from anywhere with an Internet connection, using any programming language that supports the HTTP protocol.
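For example, here is a minimal Python sketch of such a call using only the standard library. The endpoint URL is a hypothetical placeholder (use the actual URL shown on your model's detail page), and any authentication the platform may require is omitted:

```python
import json
from urllib import request

# Hypothetical placeholder; replace with the URL on your model's detail page.
API_URL = "https://www.knowru.com/api/runnables/credit-model-api-demo/run/"

# The same input JSON we entered in the "Run" form earlier.
payload = {
    "Credit.history": "A32",
    "Duration.in.month": 24,
    "Savings.account.bonds": "A63",
    "Status.of.existing.checking.account": "A11",
}

def call_credit_model(api_url, payload):
    """POST the input JSON to the model API and return the parsed response."""
    req = request.Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as response:
        return json.load(response)

# Once your API's status is "ready", uncomment to call it:
# result = call_credit_model(API_URL, payload)
# print(result)
```

The same request can be made from any language with an HTTP client; only the payload keys, which come from the variables the model was trained on, must match.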

Sign up to experience it yourself.

Why Knowru

We hope you enjoyed reading this post, and that the value our platform provides demonstrated itself as you went through the steps in this and the previous blog posts. To reiterate, below is a concise list of the benefits our platform offers you and your organization.

Lower cost

How long did it take you to follow the steps in the first three blog posts to set up an API for a machine learning model? How long did it take this time? If you are responsible for data science initiatives in your organization, how many data engineers do you hire to build services around machine learning models, and how many DevOps engineers to maintain and monitor those services? Our platform can greatly improve their efficiency.

Auto-scale

Once models are deployed on our platform, we automatically adjust the number of containers and servers to meet your models' demand. For business customers, we also offer DevOps (monitoring and maintenance) services, so you do not need to worry about a midnight call demanding your attention for a hardware failure.

Auto API documentation

You do not need to write documentation for your API - we do it for you.

Alerting

You can choose to receive an email when there is an error in your API.

Reporting

The activity graph succinctly shows your request volume. We are also adding features to visualize the distributions of input and output variables over time and to set alarms based on those distributions.

Access Control Management

In our business version, you can control who can read, execute, edit and delete your API, for granular access management.
