Version v1.4 of the documentation is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the latest version.
Adding Admission Webhooks to an Ansible-based Operator
For general background on what admission webhooks are, why to use them, and how to build them, refer to the official Kubernetes documentation on Extensible Admission Controllers.
This guide assumes that you understand the above content and that you have an existing admission webhook server. You will likely need to make a few modifications to the webhook server container.
When integrating an admission webhook server into your Ansible-based Operator, we recommend that you deploy it as a sidecar container alongside your operator. This allows you to make use of the proxy server that the operator deploys, as well as the cache that backs it.
Ensuring the webhook server uses the caching proxy
When an Ansible-based Operator runs, it creates a Kubernetes proxy server and serves it on `http://localhost:8888`. This proxy server does not require any authorization, so all you need to do to make use of it is ensure that your Kubernetes client points at `http://localhost:8888` and does not attempt to verify SSL. If you use the default in-cluster configuration instead, you will be hitting the real API server and will not get caching for free.
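As an illustration, a webhook server could talk to the proxy with plain HTTP and no credentials at all. The sketch below uses only the Python standard library; the helper name and example path are hypothetical:

```python
import json
import urllib.request

# The operator's caching proxy: plain HTTP, no auth, no TLS to verify.
PROXY = "http://localhost:8888"

def get_object(path: str) -> dict:
    """Fetch a Kubernetes API object through the operator's caching proxy.

    The proxy forwards the request to the real API server on a cache miss,
    so callers need no kubeconfig or service account token of their own.
    """
    with urllib.request.urlopen(PROXY + path) as resp:
        return json.load(resp)

# For example, get_object("/api/v1/namespaces/default/pods") would list
# pods in the default namespace, served from the cache when possible.
```

The same idea applies in any language: point the client library's host at `http://localhost:8888` and disable SSL verification, rather than loading the in-cluster configuration.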
Deploying the webhook server
Create a new file called `config/default/manager_webhook_patch.yaml` with the following content (making sure to replace the image reference placeholder string):

```yaml
# This patch injects a sidecar container which is an admission webhook for the
# controller manager.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: webhook
        # Replace this with the built image name
        image: "REPLACE_WEBHOOK_IMAGE"
        volumeMounts:
        - mountPath: /etc/tls/
          name: webhook-cert
      volumes:
      # This assumes there is a secret called webhook-cert containing TLS certificates
      # Projects like cert-manager can create these certificates
      - name: webhook-cert
        secret:
          secretName: webhook-cert
```
Then update `config/default/kustomization.yaml` to include this patch:

```yaml
patchesStrategicMerge:
- manager_webhook_patch.yaml # Add this line
```
Now, when deploying the operator with `make deploy`, your webhook server will run alongside the operator, but Kubernetes will not yet call the webhooks before resources are created. To let Kubernetes know about your webhooks, you must create specific API resources.
Making Kubernetes call your webhooks
To make your webhooks callable at all, you must first create a `Service` that points at your webhook server. Below is a sample service named `my-operator-webhook` that will send traffic on port `443` to port `5000` in a `Pod` matching the selector `name=my-operator`. Modify these values to match your environment.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-operator-webhook
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    # Change targetPort to match the port your server is listening on
    targetPort: 5000
  selector:
    # Change this selector to match the labels on your operator pod
    name: my-operator
  type: ClusterIP
```
Now that you have a `Service` directing traffic to your webhook server, you will need to create `MutatingWebhookConfiguration` or `ValidatingWebhookConfiguration` objects (depending on what type of webhook you have deployed), which tell Kubernetes to send certain API requests through your webhooks before writing to etcd.
Below are examples of both configurations, which tell Kubernetes to call the `my-operator-webhook` service when `samples.example.com` `Example` resources are created. The mutating webhook is served on the `/mutating` path in this example webhook server, and the validating webhook on `/validating`. Update these values as needed to reflect your environment and desired behavior. These objects are thoroughly documented in the official Kubernetes documentation on Extensible Admission Controllers.
```yaml
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating.example.com
webhooks:
- name: "mutating.example.com"
  rules:
  - apiGroups: ["samples.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["examples"]
    scope: "Namespaced"
  clientConfig:
    service:
      # Replace this with the namespace your service is in
      namespace: REPLACE_NAMESPACE
      name: my-operator-webhook
      path: /mutating
  admissionReviewVersions: ["v1"]
  sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validating.example.com
webhooks:
- name: validating.example.com
  rules:
  - apiGroups: ["samples.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["examples"]
    scope: "Namespaced"
  clientConfig:
    service:
      # Replace this with the namespace your service is in
      namespace: REPLACE_NAMESPACE
      name: my-operator-webhook
      path: /validating
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
  sideEffects: None
```
If these resources are configured properly, you will now have an admission webhook that can reject or mutate incoming resources before they are written to etcd.
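For completeness, here is a hypothetical minimal webhook server matching the values used above: it listens on port `5000` and serves the `/mutating` and `/validating` paths. It unconditionally allows every request; a real server would inspect the incoming `AdmissionReview` and must terminate TLS using the certificates mounted from the `webhook-cert` secret.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def review(admission_review: dict) -> dict:
    """Build an AdmissionReview response that allows the request.

    A real webhook would inspect admission_review["request"]["object"]
    and, on the mutating path, return a base64-encoded JSONPatch.
    """
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": admission_review["request"]["uid"],
            "allowed": True,
        },
    }

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Only the paths referenced by the webhook configurations are served.
        if self.path not in ("/mutating", "/validating"):
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        payload = json.dumps(review(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def main():
    # Port matches targetPort 5000 in the Service above. A production
    # server must serve HTTPS with the certs mounted at /etc/tls/.
    HTTPServer(("", 5000), WebhookHandler).serve_forever()
```

Kubernetes requires webhook endpoints to be served over HTTPS, which is why the deployment patch mounts the `webhook-cert` secret; the plain-HTTP handler here only illustrates the request/response shape.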
To deploy an existing admission webhook to validate or mutate your Kubernetes resources alongside an Ansible-based Operator, you must:
- Configure your admission webhook to use the proxy server running on `http://localhost:8888` in the operator pod
- Add the webhook container to your operator deployment
- Create a `Service` pointing to your webhook
- Make sure your webhook is reachable via the `MutatingWebhookConfiguration` or `ValidatingWebhookConfiguration` mapping the resource you want to mutate/validate to the `Service`