Building and deploying ML models using Qwak ML platform

Table of Contents

  1. MLOps automation with Qwak
  2. How does it work?
  3. Feature Store
  4. Plans for the future

This is one of the articles in the "A.I. in production" series. It tells the story of a real company using A.I. in its products or building an MLOps tool.
This is NOT sponsored content. I don't endorse the interviewed company or its products in any way. They don't endorse me either.


Qwak is an end-to-end machine learning platform designed to minimize the work required to build, deploy, and monitor machine learning models. “So what?” you may ask. There are dozens of such solutions. Is Qwak any different?

MLOps automation with Qwak

What I like about Qwak is that it hides the details I don’t want to see on a daily basis. It makes model deployments easy because all we need is a Python class. No weird deployment scripts. No 1000-line YAML files. If you can fit all of the code in one Python class, all you need is the qwak models deploy command. Of course, if you need to run custom preprocessing during training or inference, the project directory may contain that code too.

The project does three things that are very important to me. First, Qwak simplifies model testing. We can put tests in the tests directory like in any Python project and run them after the model is built. In addition to that, Qwak supports monitoring the model in production out of the box. It automatically logs metrics such as memory usage, request latency, and error rate. It also logs every request and model prediction. Last but not least, Qwak has extensive support for A/B testing, canary releases, and blue-green deployments. We can quickly test new model versions, switch between the versions, or deploy a new version only to a subset of users.

What if the model doesn’t work as expected? We can roll back to a previous version with a single click. Of course, we don’t need to use the web UI. Qwak is an API-first project, so we can do everything from the command line.
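
For example, assuming we kept the identifier of the previous build, a rollback is just a redeployment of that build with the same deploy command we use for new versions (the values in braces are placeholders):

qwak models deploy --model-id {model_id} --build-id {previous_build_id}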

How does it work?

Let’s define a Qwak-compatible model by implementing a Python class. We create a new class that inherits from QwakModelInterface and implements three methods.

In the build method, we train the model or load it from a file. We can do whatever we want, as long as the model ends up as an object field.

The schema method defines the expected input and the output of the model. This will help us use the model correctly.

Finally, the predict method calls the model and obtains the prediction. If we have any preprocessing code for the input parameters, we should put it in the predict function.

We use the qwak.analytics wrapper to track request metrics and log the request content. All tracked predictions end up in the Qwak Lake, and we can use any BI tool to extract and analyze them.

import pandas as pd

from qwak.model import hook
from qwak.model.base import QwakModelInterface
from qwak.model.schema import BatchFeature, ExplicitFeature, ModelSchema, Entity, Prediction

qwak = hook()

class TestModel(QwakModelInterface):

    def __init__(self):
        self.model = None

    def build(self):
        # Here, we can train the model or load the model from a file
        self.model = ...

    def schema(self):
        iris_id = Entity(name='iris_id', type=int)
        return ModelSchema(
            entities=[iris_id],
            features=[
                ExplicitFeature(name="sepal_length", type=int),
                ExplicitFeature(name="sepal_width", type=int),
                ExplicitFeature(name="petal_length", type=int),
                ExplicitFeature(name="petal_width", type=int),
            ],
            predictions=[Prediction(name="class", type=int)])

    @qwak.analytics
    def predict(self, df):
        # The qwak.analytics wrapper logs the request and the returned prediction
        return pd.DataFrame(self.model.predict(df), columns=['class'])
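
The schema tells us what predict expects: a DataFrame with one column per ExplicitFeature. To see the contract, we can exercise the class locally, as in the sketch below (the sample values are made up, and I’m assuming build assigns a real model and that the qwak.analytics wrapper works outside a Qwak deployment):

import pandas as pd

model = TestModel()
model.build()  # trains or loads the model

# One row with one column per ExplicitFeature declared in the schema
sample = pd.DataFrame([{
    'sepal_length': 5,
    'sepal_width': 3,
    'petal_length': 1,
    'petal_width': 0,
}])

predictions = model.predict(sample)  # a DataFrame with a single 'class' column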

After defining the model, we have to do two more things. First, we build a Docker image and upload it to AWS ECR using the qwak models build --model-id "{model_name}" {directory_location} command. This command also runs all the tests defined in the tests directory and uploads the Docker image only if all tests pass. The command returns a build identifier, which we need during deployment.

Finally, we deploy the model in Kubernetes using the deploy command: qwak models deploy --model-id {model_id} --build-id {build_id}.
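
Because the build command runs everything in the tests directory, a minimal test could look like the sketch below (the file name, the import path, and the assertions are my assumptions, not Qwak conventions):

# tests/test_model.py
import pandas as pd

from main.model import TestModel  # assuming the class lives in main/model.py


def test_predict_returns_one_class_per_row():
    model = TestModel()
    model.build()

    sample = pd.DataFrame([{
        'sepal_length': 5,
        'sepal_width': 3,
        'petal_length': 1,
        'petal_width': 0,
    }])

    predictions = model.predict(sample)

    assert list(predictions.columns) == ['class']
    assert len(predictions) == len(sample)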

Feature Store

In addition to deploying and monitoring models, we can also use Qwak as a Feature Store. It supports both batch and streaming data sources. It not only helps us document the data used to train the model, but the Qwak Feature Store also automates data extraction for inference.

If we wanted to include two additional features from the Feature Store in the previously defined model, we could add them to the ModelSchema in the schema method:

def schema(self):
    iris_id = Entity(name='iris_id', type=int)
    return ModelSchema(
        entities=[iris_id],
        features=[
            ExplicitFeature(name="sepal_length", type=int),
            ExplicitFeature(name="sepal_width", type=int),
            ExplicitFeature(name="petal_length", type=int),
            ExplicitFeature(name="petal_width", type=int),
            # These values come from the Feature Store instead of the request:
            BatchFeature(entity=iris_id, name="color_hue"),
            BatchFeature(entity=iris_id, name="color_saturation"),
        ],
        predictions=[Prediction(name="class", type=int)])

When we add the features to the schema, we can use feature extraction to pass those values to the predict function automatically:

@qwak.features_extraction
def predict(self, df, extracted_df):
    # extracted_df holds the Feature Store values fetched for the request
    joined = df.merge(extracted_df, left_index=True, right_index=True)
    return pd.DataFrame(self.model.predict(joined), columns=['class'])
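
To make the join concrete: df contains the features sent by the caller, extracted_df contains the values Qwak fetched from the Feature Store, and the merge pairs them by row index. Here is the same logic in plain pandas (the sample values and the index alignment are my assumptions):

import pandas as pd

# Features sent by the caller
df = pd.DataFrame([{'sepal_length': 5, 'sepal_width': 3,
                    'petal_length': 1, 'petal_width': 0}])

# Features fetched from the Feature Store for the same iris_id
extracted_df = pd.DataFrame([{'color_hue': 260, 'color_saturation': 43}])

# Merging on the index pairs each request row with its extracted features,
# so the model receives all six columns in one DataFrame
joined = df.merge(extracted_df, left_index=True, right_index=True)
print(list(joined.columns))
# ['sepal_length', 'sepal_width', 'petal_length', 'petal_width',
#  'color_hue', 'color_saturation']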

Plans for the future

Qwak founders see friction between data scientists and engineers as the main barrier to building ML-driven products. They aim to speed up the process of getting ML products into production by removing that friction.

The Qwak team shared that the next feature on their roadmap is integrated feedback loops, which will let ML engineers monitor model performance and improve models on the fly. If you want to learn more about Qwak, you can request a demo.
