Singapore and Bristol, May 11, 2021 -- BasisAI, in collaboration with MLOps Consulting, has released a new open source machine learning (ML) monitoring library, Boxkite, to help organisations easily monitor and understand the behaviour of ML models once they are deployed into production, a cornerstone of sound AI governance.
There is no doubt that meaningful business value can be extracted by applying AI/ML techniques to an organisation's data. But significant organisational and technical challenges remain for most companies in doing so.
The foremost challenge is getting models into production and ensuring AI is running as intended. This involves taking the models that ML researchers have developed on training data and getting them to run in live applications, where they are exposed to real-world production data.
However, once models are in production, understanding their behaviour quickly becomes the next challenge. AI models, unlike conventional software, are predictive in nature and driven by data. This means it is insufficient to monitor only the speed with which a model responds (latency) and whether it reports errors (error rate). When an AI model is involved, a key question to address is: Is the data that the model was trained on consistent with the data that the model is seeing in production?
Boxkite enables real-time ML model observability
To ensure models are performing as intended, Boxkite tracks the statistical distribution of the data that the model is trained on, and packages that distribution along with the model as it is deployed into production. Once the model is running, Boxkite provides live updates and alerts to ML teams when the data the model is exposed to in production drifts significantly from the data it was trained on. Such drift is an early indicator that the model might be going "off the rails", and flags the problem to users before it causes real business damage, such as amplifying systemic bias. Observing the model's performance throughout the AI lifecycle is an important component of risk management.
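The core idea of comparing a training-time distribution against production data can be illustrated with a short, self-contained sketch. This is not Boxkite's actual API; the `histogram` and `kl_divergence` helpers below are hypothetical, and a minimal example of one common drift measure (Kullback-Leibler divergence between binned feature distributions), assuming a single numeric feature with a known value range:

```python
import math

def histogram(values, bins, lo, hi):
    """Bucket values into equal-width bins and return normalised frequencies."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q): how far distribution p has moved from reference q.

    A small epsilon avoids division by zero for empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Training data: a feature uniformly spread over 0..9.
train = [i % 10 for i in range(1000)]
# Production data: the same feature, shifted upwards (simulated drift).
prod = [min(9, (i % 10) + 3) for i in range(1000)]

p_train = histogram(train, bins=10, lo=0, hi=10)
p_prod = histogram(prod, bins=10, lo=0, hi=10)

drift = kl_divergence(p_prod, p_train)
# A near-zero score means the distributions match; a large score signals drift
# that should trigger an alert.
```

In practice, a monitoring system would export these histograms as metrics (for example to Prometheus, as Boxkite does in the stack described below) and evaluate drift continuously rather than in a one-off comparison.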
“It is critical to ensure that the operating conditions of a live AI system match the context under which it was originally developed. Model observability is not only good business sense to maximise value from AI, but also a cornerstone of responsible AI practices,” said Mr Liu Feng-Yuan, CEO and Co-founder of BasisAI. “Boxkite is a model monitoring library that was designed to integrate with a cloud native stack. We are releasing this as an open source project to make it simpler for everyone to build trustworthy AI systems.”
Open source users can try Boxkite today, including a “test drive” feature on the boxkite.ml website which allows them to spin up a fully integrated Kubeflow + MLflow + Boxkite + Prometheus + Grafana stack on a Kubernetes cluster.