After several months of active development, we are pleased to present the first release of PredictKube, a solution for proactive management of a Kubernetes cluster (scaling, monitoring, security).
We are a geek team that has been developing and supporting Kubernetes clusters in a variety of environments for 6 years.
Over those years of practice, we have repeatedly found that real-world demands change faster than modern tooling can react.
Since we have been actively working with MLOps tools for the past few years and have in-house AI/ML expertise, we decided to create a tool that works with multiple data sources and uses that data to predict events.
This way we can start preparing for incidents before they actually happen. The first focus of PredictKube is autoscaling, since that was the most pressing problem for us.
Most of our clients work with blockchain, and nodes of massive networks such as Ethereum or Binance Smart Chain (each node requires more than 1 TB of state) cannot be scaled instantly. Even though we have built products such as pv-provisioner, which deploys a PersistentVolume from prepared cloud snapshots, launching and synchronizing a single node can still take 2 to 4 hours.
So by the time traffic has grown and the current number of replicas is no longer enough, it is too late to simply scale out with standard HPA rules.
Our solution was to combine AI with business metrics, which lets you learn about the need for scaling in advance.
The PredictKube KEDA Scaler was born.
We chose KEDA as the foundation for integration, since we see it as the most promising autoscaling product in the Kubernetes ecosystem, and implemented our own scaler that uses Prometheus as a data source.
All you need to do is define the standard parameters for the Prometheus scaler plus a couple of extra settings: the prediction horizon and the amount of historical data. That is enough to make your autoscaling predictive.
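As a rough sketch of what such a configuration might look like (the resource names, Prometheus address, query, and threshold below are placeholders, and the exact trigger field names should be checked against the KEDA PredictKube scaler documentation), a ScaledObject with the PredictKube trigger could be written as:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-app-scaler          # hypothetical name
spec:
  scaleTargetRef:
    name: example-app               # hypothetical Deployment to scale
  pollingInterval: 30
  minReplicaCount: 3
  maxReplicaCount: 50
  triggers:
    - type: predictkube
      metadata:
        # standard Prometheus-style parameters
        prometheusAddress: http://prometheus.monitoring:9090
        query: sum(rate(http_requests_total[2m]))
        queryStep: 2m
        threshold: "2000"
        # the two extra settings: how far ahead to predict,
        # and how much history to feed the model
        predictHorizon: "2h"
        historyTimeWindow: "7d"
      authenticationRef:
        name: keda-trigger-auth-predictkube   # holds the PredictKube API key
```

The scaler queries Prometheus as usual, but instead of reacting to the current value, it forecasts the metric over the prediction horizon and scales ahead of the load.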
You can read more about how to configure this in our QuickStart.