
feat(operator): add Model selector for scale subresource to enable HPA-based scaling (#5932)

* add(operator): Model selector for scale subresource to enable HPA-based scaling

- updates the Model CRD to contain a pod selector in the scale subresource
- sets the selector to a label `server=[inference-server-name]` matching no actual pods
- docs
lc525 authored Sep 24, 2024
1 parent c7aa567 commit 1bd8d0f
Showing 9 changed files with 326 additions and 2 deletions.
293 changes: 293 additions & 0 deletions docs/source/contents/kubernetes/autoscaling/hpa-rps-autoscaling.md
@@ -0,0 +1,293 @@
# Autoscaling single-model serving based on model RPS, using HorizontalPodAutoscaler

The goal is to autoscale model and server replicas based on model inference RPS. This will require:

- Having a Seldon Core 2 install that publishes metrics to Prometheus (the default). In the following, we assume that Prometheus is already installed and configured in the `seldon-monitoring` namespace.
- Installing and configuring [Prometheus Adapter](https://github.com/kubernetes-sigs/prometheus-adapter), which allows prometheus queries on relevant metrics to be published as k8s custom metrics
- Configuring HPA manifests to scale Models and the corresponding Server replicas based on the custom metrics

### Installing and configuring the Prometheus Adapter

The role of the Prometheus Adapter is to expose queries on metrics in Prometheus as k8s custom or external metrics, which HPA can then use to make scaling decisions.

To install via helm:

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install --set prometheus.url='http://seldon-monitoring-prometheus' hpa-metrics prometheus-community/prometheus-adapter -n seldon-monitoring
```

In the commands above, we install `prometheus-adapter` as a helm release named `hpa-metrics`, in the same namespace as our prometheus install, and we point the adapter at prometheus's service URL (without the port).

If prometheus is running on a port other than the default 9090, you can also pass `--set prometheus.port=[custom_port]`. You can inspect all the options available as helm values by running `helm show values prometheus-community/prometheus-adapter`.
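
For instance, a hypothetical install against a Prometheus instance listening on port 9091 (names and port are illustrative; adjust to your setup) might look like:

```bash
helm install hpa-metrics prometheus-community/prometheus-adapter \
  -n seldon-monitoring \
  --set prometheus.url='http://seldon-monitoring-prometheus' \
  --set prometheus.port=9091
```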

We now need to configure the adapter to look for the correct prometheus metrics and to compute per-model RPS values. On install, the helm chart creates a `ConfigMap` in the same namespace as the adapter, named `[helm_release_name]-prometheus-adapter`. In our case, this is `hpa-metrics-prometheus-adapter`.

We want to overwrite this ConfigMap with the content below (change the name if your helm release is named differently). The manifest contains embedded documentation, highlighting how we match the `seldon_model_infer_total` metric in Prometheus, compute a rate via the `metricsQuery`, and expose the result to k8s as the `infer_rps` metric, on a per (model, namespace) basis.

Aggregations at the (server, namespace) and (pod, namespace) levels are also exposed and may be used in HPA, but we will focus on the (model, namespace) aggregation in the examples below.

You may want to modify some of the settings to match the prometheus query that you typically use for RPS metrics. For example, the `metricsQuery` below computes the RPS by calling [`rate()`](https://prometheus.io/docs/prometheus/latest/querying/functions/#rate) with a 1 minute window.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hpa-metrics-prometheus-adapter
  namespace: seldon-monitoring
data:
  config.yaml: |-
    "rules":
    # Rule matching Seldon inference requests-per-second metrics and exposing aggregations for
    # specific k8s models, servers, pods and namespaces
    #
    # Uses the prometheus-side `seldon_model_(.*)_total` inference request count metrics to
    # compute and expose k8s custom metrics on inference RPS `${1}_rps`. A prometheus metric named
    # `seldon_model_infer_total` will be exposed as multiple `[group-by-k8s-resource]/infer_rps`
    # k8s metrics, for consumption by HPA.
    #
    # One k8s metric is generated for each k8s resource associated with a prometheus metric, as
    # defined in the "Association" section below. Because this association is defined based on
    # labels present in the prometheus metric, the number of generated k8s metrics will vary
    # depending on what labels are available in each discovered prometheus metric.
    #
    # The resources associated through this rule (when available as labels on each of the
    # discovered prometheus metrics) are:
    #   - models
    #   - servers
    #   - pods (inference server pods)
    #   - namespaces
    #
    # For example, you will get aggregated metrics for `models.mlops.seldon.io/irisa0/infer_rps`,
    # `servers.mlops.seldon.io/mlserver/infer_rps`, `pods/mlserver-0/infer_rps` and
    # `namespaces/seldon-mesh/infer_rps`.
    #
    # Metrics associated with any resource except the namespace one (models, servers and pods)
    # need to be requested in the context of a particular namespace.
    #
    # To fetch those k8s metrics manually once the prometheus-adapter is running, you can run:
    #
    # For "namespaced" resources, i.e. models, servers and pods (replace values in brackets):
    # ```
    # kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/[NAMESPACE]/[RESOURCE_NAME]/[CR_NAME]/infer_rps"
    # ```
    #
    # For example:
    # ```
    # kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/seldon-mesh/models.mlops.seldon.io/irisa0/infer_rps"
    # ```
    #
    # For the namespace resource, you can get the namespace-level aggregation of the metric with:
    # ```
    # kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/*/metrics/infer_rps"
    # ```
    -
      # Metric discovery: selects a subset of the metrics exposed in Prometheus, based on name
      # and filters
      "seriesQuery": |
        {__name__=~"^seldon_model.*_total",namespace!=""}
      "seriesFilters":
        - "isNot": "^seldon_.*_seconds_total"
        - "isNot": "^seldon_.*_aggregate_.*"
      # Association: maps label values in the Prometheus metric to k8s resources (native or CRs)
      # Below, we associate the "model" prometheus metric label to the corresponding Seldon Model
      # CR, the "server" label to the Seldon Server CR, etc.
      "resources":
        "overrides":
          "model": {group: "mlops.seldon.io", resource: "model"}
          "server": {group: "mlops.seldon.io", resource: "server"}
          "pod": {resource: "pod"}
          "namespace": {resource: "namespace"}
      # Rename prometheus metrics to get k8s metric names that reflect the processing done via
      # the query applied to those metrics (the actual query is below, under the "metricsQuery"
      # key)
      "name":
        "matches": "^seldon_model_(.*)_total"
        "as": "${1}_rps"
      # The actual query to be executed against Prometheus to retrieve the metric value
      # Here:
      #  - .Series is replaced by the discovered prometheus metric name (e.g.
      #    `seldon_model_infer_total`)
      #  - .LabelMatchers, when requesting a metric for a namespaced resource X with name x in
      #    namespace n, is replaced by `X=~"x",namespace="n"`. For example, `model=~"irisa0",
      #    namespace="seldon-mesh"`. When requesting the namespace resource itself, only
      #    `namespace="n"` is kept.
      #  - .GroupBy is replaced by the resource type of the requested metric (e.g. `model`,
      #    `server`, `pod` or `namespace`).
      "metricsQuery": |
        sum by (<<.GroupBy>>) (
          rate (
            <<.Series>>{<<.LabelMatchers>>}[1m]
          )
        )
```

Save the manifest above to a file (here named `prometheus-adapter.config.yaml`), apply it, and restart the prometheus-adapter deployment (the restart is required so that prometheus-adapter picks up the new config):
```bash
# Apply prometheus adapter config
kubectl apply -f prometheus-adapter.config.yaml
# Restart prom-adapter pods
kubectl rollout restart deployment hpa-metrics-prometheus-adapter -n seldon-monitoring
```
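
Optionally, wait for the restarted deployment to become ready before querying metrics:

```bash
kubectl rollout status deployment/hpa-metrics-prometheus-adapter -n seldon-monitoring
```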

In order to test that the prometheus adapter config works and everything is set up correctly, you can issue raw kubectl requests against the custom metrics API, as described below.

If no inference requests have been issued towards any model in the Seldon install, the metrics configured above will not be available in prometheus, and thus will also not appear when checking via the commands below. Therefore, first run some inference requests towards a sample model to ensure that the metrics are available; this is only required for testing the install.
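
A minimal sketch for generating some load, assuming the `seldon-mesh` LoadBalancer service is reachable from your machine and a model named `irisa0` is deployed (service address, model name and payload are illustrative; adjust to your setup):

```bash
# Send a minute's worth of inference requests to the irisa0 model via the seldon-mesh service
MESH_IP=$(kubectl get svc seldon-mesh -n seldon-mesh -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
for i in $(seq 1 60); do
  curl -s -o /dev/null "http://${MESH_IP}/v2/models/irisa0/infer" \
    -H "Content-Type: application/json" \
    -d '{"inputs": [{"name": "predict", "shape": [1, 4], "datatype": "FP32", "data": [[1, 2, 3, 4]]}]}'
  sleep 1
done
```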


**Testing the prometheus-adapter install using the custom metrics API**

List the available metrics:

```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/ | jq .
```
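
The full list can be long; to check only for the RPS metrics configured above, you can filter the response (a sketch using `jq`):

```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/ | \
  jq '.resources[] | select(.name | endswith("infer_rps")) | .name'
```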

Fetch the model RPS metric for a specific (namespace, model) pair:

```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/seldon-mesh/models.mlops.seldon.io/irisa0/infer_rps
```

Fetch the model RPS metric aggregated at the (namespace, server) level:

```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/seldon-mesh/servers.mlops.seldon.io/mlserver/infer_rps
```

Fetch the model RPS metric aggregated at the (namespace, pod) level:

```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/seldon-mesh/pods/mlserver-0/infer_rps
```

Fetch the same metric aggregated at the namespace level:

```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/*/metrics/infer_rps
```

### Configuring HPA manifests

For every (Model, Server) pair you want to autoscale, you need to apply two HPA manifests based on the same metric: one scaling the Model, the other the Server. The example below only works if the mapping between Models and Servers is 1-to-1 (i.e. no multi-model serving).

Consider a model named `irisa0` with the following manifest. Note that we don't set `minReplicas`/`maxReplicas`; this disables the seldon-specific autoscaling so that it doesn't interact with HPA.

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: irisa0
  namespace: seldon-mesh
spec:
  memory: 3M
  replicas: 1
  requirements:
  - sklearn
  storageUri: gs://seldon-models/testing/iris1
```
Let’s scale this model when it is deployed on a server named `mlserver`, with a target of 3 RPS **per replica** (higher per-replica RPS triggers scale-up, lower triggers scale-down):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: irisa0-model-hpa
  namespace: seldon-mesh
spec:
  scaleTargetRef:
    apiVersion: mlops.seldon.io/v1alpha1
    kind: Model
    name: irisa0
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Object
    object:
      metric:
        name: infer_rps
      describedObject:
        apiVersion: mlops.seldon.io/v1alpha1
        kind: Model
        name: irisa0
      target:
        type: AverageValue
        averageValue: 3
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mlserver-server-hpa
  namespace: seldon-mesh
spec:
  scaleTargetRef:
    apiVersion: mlops.seldon.io/v1alpha1
    kind: Server
    name: mlserver
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Object
    object:
      metric:
        name: infer_rps
      describedObject:
        apiVersion: mlops.seldon.io/v1alpha1
        kind: Model
        name: irisa0
      target:
        type: AverageValue
        averageValue: 3
```
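
Both manifests can then be applied as usual (the file name is hypothetical):

```bash
kubectl apply -f hpa-irisa0.yaml
# Check that both HPAs can read the metric; TARGETS should show a value rather than <unknown>
kubectl get hpa -n seldon-mesh
```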

In the two HPA manifests above, the scaling metric and its parameters are exactly the same: this ensures that the Models and the Servers are scaled up/down at approximately the same time. Small variations in the scale-up time are expected because each HPA samples the metrics independently, at regular intervals. If a Model gets scaled up slightly before its corresponding Server, it is temporarily marked with the condition ModelReady "Status: False" and a "ScheduleFailed" message until the new Server replicas become available. However, the existing replicas of that model remain available and continue to serve inference load.

To ensure similar scaling behaviour between Models and Servers, keep the `minReplicas` and `maxReplicas` values, as well as any other configured scaling policies, in sync across the HPA manifests for the model and the server.

Please note that you **must** use a `target.type` of `AverageValue`. The value given in
`averageValue` is the threshold RPS per replica, and HPA computes the desired number of replicas
according to the following formula:

$$\texttt{desiredReplicas} = \left\lceil \frac{\texttt{infer\_rps}}{\texttt{thresholdPerReplicaRPS}} \right\rceil$$
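
For example, if the model currently reports a total `infer_rps` of 10 across its replicas and `thresholdPerReplicaRPS` is 3, HPA will scale towards $\lceil 10/3 \rceil = 4$ replicas (subject to the configured `minReplicas`/`maxReplicas` bounds).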

Attempting other target types will not work under the current Seldon Core 2 setup, because they factor in the number of active Pods associated with the Model CR (i.e. the pods of the associated Server) when computing the desired replica count. Moreover, that set of pods would become "owned" by the Model HPA; once a pod is owned by a given HPA, it is not available to other HPAs, so we would no longer be able to scale the Server CRs using HPA.


**Advanced settings:**

- Filtering metrics by other labels on the prometheus metric

The prometheus metric from which the model RPS is computed has the following labels:

```
seldon_model_infer_total{code="200", container="agent", endpoint="metrics", instance="10.244.0.39:9006", job="seldon-mesh/agent", method_type="rest", model="irisa0", model_internal="irisa0_1", namespace="seldon-mesh", pod="mlserver-0", server="mlserver", server_replica="0"}
```

If we want the scaling metric to be computed only over inferences matching particular values of those labels, we can add a selector in the HPA metric config, as in the example below (targeting `method_type="rest"`):

```yaml
metrics:
- type: Object
  object:
    describedObject:
      apiVersion: mlops.seldon.io/v1alpha1
      kind: Model
      name: irisa0
    metric:
      name: infer_rps
      selector:
        matchLabels:
          method_type: rest
    target:
      type: AverageValue
      averageValue: "3"
```
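
With this selector in place, the adapter merges the extra matchers into `<<.LabelMatchers>>` when executing the `metricsQuery`, so the metric value consumed by HPA should be effectively equivalent to:

```promql
sum by (model) (
  rate(
    seldon_model_infer_total{model=~"irisa0", namespace="seldon-mesh", method_type="rest"}[1m]
  )
)
```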


- Customise the scale-up / scale-down rate and behaviour by using scaling policies, as described in the [HPA scaling policies docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior); a sketch is shown below.
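
For example, a minimal sketch of a `behavior` section placed under the HPA `spec` (values are illustrative, not recommendations) that slows down scale-down to limit replica flapping:

```yaml
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300  # act on the highest desired count seen over the last 5 minutes
    policies:
    - type: Pods
      value: 1           # remove at most one replica...
      periodSeconds: 60  # ...per minute
```

As noted above, keep any such policies identical across the Model HPA and the Server HPA.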

For more resources, please consult the [HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and the [HPA walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).
6 changes: 6 additions & 0 deletions docs/source/contents/kubernetes/autoscaling/index.md
@@ -37,6 +37,12 @@ For more details on HPA check this [Kubernetes walk-through](https://kubernetes.
Autoscaling of inference servers via `seldon-scheduler` is under consideration for the roadmap. This would allow for more fine-grained interaction with model autoscaling.
```

```{note}
Autoscaling via HPA for both Models and Servers using custom metrics from Prometheus is possible
for the special case of single model serving (i.e. single model per server). Check the detailed
documentation [here](hpa-rps-autoscaling.md).
```

## Model autoscaling

As each model server can serve multiple models, models can scale across the available replicas of the server according to load.
@@ -341,12 +341,17 @@ spec:
                description: Total number of replicas targeted by this model
                format: int32
                type: integer
              selector:
                type: string
            required:
            - selector
            type: object
        type: object
    served: true
    storage: true
    subresources:
      scale:
        labelSelectorPath: .status.selector
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas
      status: {}
5 changes: 5 additions & 0 deletions k8s/yaml/crds.yaml
@@ -344,12 +344,17 @@ spec:
                description: Total number of replicas targeted by this model
                format: int32
                type: integer
              selector:
                type: string
            required:
            - selector
            type: object
        type: object
    served: true
    storage: true
    subresources:
      scale:
        labelSelectorPath: .status.selector
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas
      status: {}
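
With `labelSelectorPath` wired to the new `.status.selector`, the Model CR exposes a complete scale subresource, so standard scale tooling now works against Models; a quick sanity check might look like this (model name and namespace are illustrative):

```bash
kubectl scale models.mlops.seldon.io/irisa0 --replicas=2 -n seldon-mesh
kubectl get models.mlops.seldon.io/irisa0 -n seldon-mesh -o jsonpath='{.status.selector}'
```
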
6 changes: 6 additions & 0 deletions k8s/yaml/runtime.yaml
@@ -24,22 +24,28 @@ spec:
  - name: hodometer
    disable: false
    replicas: 1
    podSpec: null
  - name: seldon-scheduler
    disable: false
    serviceType: LoadBalancer
    podSpec: null
  - name: seldon-envoy
    disable: false
    replicas: 1
    serviceType: LoadBalancer
    podSpec: null
  - name: seldon-dataflow-engine
    disable: false
    replicas: 1
    podSpec: null
  - name: seldon-modelgateway
    disable: false
    replicas: 1
    podSpec: null
  - name: seldon-pipelinegateway
    disable: false
    replicas: 1
    podSpec: null
  config:
    agentConfig:
      rclone:
2 changes: 2 additions & 0 deletions k8s/yaml/servers.yaml
@@ -5,6 +5,7 @@ kind: Server
metadata:
  name: mlserver
spec:
  podSpec: null
  replicas: 1
  serverConfig: mlserver
---
@@ -14,5 +15,6 @@ kind: Server
metadata:
  name: triton
spec:
  podSpec: null
  replicas: 1
  serverConfig: triton
