Identity Management
Overview
Each SLIM client needs to have a valid identity. In the SLIM Group Communication Tutorial, we used a simple shared secret to quickly set up identities for the clients. In a real-world scenario, you would typically use a more secure method, such as tokens or certificates, to authenticate clients and establish their identities.
SLIM supports JWT (JSON Web Token) based identity management. Tokens can come from an external identity provider, or the SLIM nodes can generate them directly if you provide a private key for signing the tokens and the corresponding public key for verification. See the Identity Test for an example of using JWT tokens with SLIM when you bring your own keys.
If you are running your SLIM clients in a Kubernetes environment, using SPIRE is a very common approach to give an identity to each client. SPIRE provides a way to issue SPIFFE IDs to workloads, in the form of JWT tokens, which SLIM can then use to authenticate clients. This allows for secure and scalable identity management in distributed systems.
Example: Using SPIRE with SLIM in Kubernetes (JWT)
SLIM integrates well with SPIRE: the JWT tokens that SPIRE issues can serve directly as client identities, and SLIM can verify them using the key bundle that SPIRE provides.
This section shows how to use SPIRE with SLIM to manage client identities. The following topics are covered:
- Creating a local KIND cluster (with an in-cluster image registry).
- Installing SPIRE (server and agents).
- Building and pushing SLIM images to the local registry.
- Deploying the SLIM node (control and rendezvous components).
- Deploying two distinct SLIM client workloads, each with its own ServiceAccount (and thus its own SPIFFE ID).
- Running the point-to-point example using JWT-based authentication derived from SPIRE.
If you already have a Kubernetes cluster or an existing SPIRE deployment, you can adapt only the relevant subsections.
This tutorial is based on the SLIM examples.
Prerequisites
This tutorial uses docker, kind, kubectl, helm, git, and curl; make sure they are installed and on your PATH.
Clone the SLIM repository if you haven't already:
git clone https://github.com/agntcy/slim.git && cd slim/data-plane/python/bindings/examples
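As a quick sanity check, you can verify that the required tools are available (a minimal sketch; extend the list as needed):

for tool in git curl docker kind kubectl helm; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done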
Creating a KIND Cluster with a Local Image Registry
The helper script below provisions a KIND cluster and configures a local registry (localhost:5001) that the cluster’s container runtime can pull from:
curl -L https://kind.sigs.k8s.io/examples/kind-with-registry.sh | sh
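Before continuing, confirm that the cluster and the registry came up (this assumes the script's default cluster name, kind; the catalog endpoint is the standard registry v2 API and should return an empty repository list at this point):

kubectl cluster-info --context kind-kind
curl -s http://localhost:5001/v2/_catalog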
Installing SPIRE
To install SPIRE, you need to install the server, CRDs, and agents:
helm upgrade --install \
  -n spire-server \
  spire-crds spire-crds \
  --repo https://spiffe.github.io/helm-charts-hardened/ \
  --create-namespace

helm upgrade --install \
  -n spire-server \
  spire spire \
  --repo https://spiffe.github.io/helm-charts-hardened/
Wait for the SPIRE components to become ready:
kubectl get pods -n spire-server
All pods should reach Running/READY status before proceeding.
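Alternatively, block until everything is ready instead of polling:

kubectl wait --for=condition=Ready pods --all -n spire-server --timeout=300s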
SPIFFE ID Strategy
The default SPIRE server Helm chart installs a ClusterSPIFFEID object (spire-server-spire-default) that issues workload identities following the pattern:

spiffe://domain.test/ns/<namespace>/sa/<service-account>
We rely on this object by default. If you need more granular issuance (specific label selectors, different trust domain, etc.), consult the ClusterSPIFFEID documentation.
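For example, the slim-client-a ServiceAccount deployed later in this tutorial (in the default namespace) receives spiffe://domain.test/ns/default/sa/slim-client-a. You can inspect the default object directly:

kubectl get clusterspiffeid spire-server-spire-default -o yaml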
Building SLIM Images (node and examples)
You can use pre-built images if available; here we build and push fresh ones to the local registry:
REPO_ROOT=$(git rev-parse --show-toplevel)
pushd "${REPO_ROOT}"
IMAGE_REPO=localhost:5001 docker bake slim && docker push localhost:5001/slim:latest
IMAGE_REPO=localhost:5001 docker bake bindings-examples && docker push localhost:5001/bindings-examples:latest
popd
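You can confirm that both images landed in the local registry via its catalog endpoint; the expected repositories are slim and bindings-examples:

curl -s http://localhost:5001/v2/_catalog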
Deploying the SLIM Node
REPO_ROOT=$(git rev-parse --show-toplevel)
pushd "${REPO_ROOT}/charts"
helm install \
  --create-namespace \
  -n slim \
  slim ./slim \
  --set slim.image.repository=localhost:5001/slim \
  --set slim.image.tag=latest
Confirm the pod is running:
kubectl get pods -n slim
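Also check the service the clients will connect to; the endpoint http://slim.slim:46357 used later in this tutorial resolves to the slim service in the slim namespace:

kubectl get svc -n slim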
Deploying Client Configuration (ConfigMap)
We first provide a config for spiffe-helper, which retrieves SVIDs or JWTs from the SPIRE agent and writes them to disk. The key fields are:
- agent_address: Path to the SPIRE agent API socket.
- cert_dir: Where artifacts (cert, key, bundles, or JWTs) are written.
- jwt_svids: Audience and output filename for requested JWT SVIDs.
- daemon_mode = true: Run continuously to renew materials.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: spire-helper-slim-client
  labels:
    app.kubernetes.io/name: slim-client
data:
  helper.conf: |
    agent_address = "/run/spire/agent-sockets/api.sock"
    cmd = ""
    cmd_args = ""
    cert_dir = "/svids"
    renew_signal = ""
    svid_file_name = "tls.crt"
    svid_key_file_name = "tls.key"
    svid_bundle_file_name = "svid_bundle.pem"
    jwt_bundle_file_name = "key.jwt"
    cert_file_mode = 0600
    key_file_mode = 0600
    jwt_svid_file_mode = 0600
    jwt_bundle_file_mode = 0600
    jwt_svids = [{jwt_audience="slim-demo", jwt_svid_file_name="jwt_svid.token"}]
    daemon_mode = true
EOF
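To double-check that the helper configuration landed intact, print the rendered key (the backslash escapes the dot inside the key name):

kubectl get configmap spire-helper-slim-client -o jsonpath='{.data.helper\.conf}'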
Deploying Two Distinct Clients (separate ServiceAccounts = separate SPIFFE IDs)
Each Deployment:
- Has its own ServiceAccount (slim-client-a, slim-client-b).
- Mounts the SPIRE agent socket from the host (in KIND, the agent runs as a DaemonSet).
- Runs a spiffe-helper sidecar to continuously refresh identities.
- Runs a placeholder slim-client container (sleep) you can exec into.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: slim-client-a
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-a
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: slim-client-b
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slim-client-a
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: slim-client
      app.kubernetes.io/component: client-a
  template:
    metadata:
      labels:
        app.kubernetes.io/name: slim-client
        app.kubernetes.io/component: client-a
    spec:
      serviceAccountName: slim-client-a
      securityContext: {}
      containers:
        - name: spiffe-helper
          image: ghcr.io/spiffe/spiffe-helper:0.10.0
          imagePullPolicy: IfNotPresent
          args: ["-config", "config/helper.conf"]
          volumeMounts:
            - name: config-volume
              mountPath: /config/helper.conf
              subPath: helper.conf
            - name: spire-agent-socket
              mountPath: /run/spire/agent-sockets
              readOnly: false
            - name: svids-volume
              mountPath: /svids
              readOnly: false
        - name: slim-client
          securityContext: {}
          image: "localhost:5001/bindings-examples:latest"
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          resources: {}
          volumeMounts:
            - name: svids-volume
              mountPath: /svids
              readOnly: false
            - name: config-volume
              mountPath: /config/helper.conf
              subPath: helper.conf
      volumes:
        - name: spire-agent-socket
          hostPath:
            path: /run/spire/agent-sockets
            type: Directory
        - name: config-volume
          configMap:
            name: spire-helper-slim-client
        - name: svids-volume
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slim-client-b
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-b
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: slim-client
      app.kubernetes.io/component: client-b
  template:
    metadata:
      labels:
        app.kubernetes.io/name: slim-client
        app.kubernetes.io/component: client-b
    spec:
      serviceAccountName: slim-client-b
      securityContext: {}
      containers:
        - name: spiffe-helper
          image: ghcr.io/spiffe/spiffe-helper:0.10.0
          imagePullPolicy: IfNotPresent
          args: ["-config", "config/helper.conf"]
          volumeMounts:
            - name: config-volume
              mountPath: /config/helper.conf
              subPath: helper.conf
            - name: spire-agent-socket
              mountPath: /run/spire/agent-sockets
              readOnly: false
            - name: svids-volume
              mountPath: /svids
              readOnly: false
        - name: slim-client
          securityContext: {}
          image: "localhost:5001/bindings-examples:latest"
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          resources: {}
          volumeMounts:
            - name: svids-volume
              mountPath: /svids
              readOnly: false
            - name: config-volume
              mountPath: /config/helper.conf
              subPath: helper.conf
      volumes:
        - name: spire-agent-socket
          hostPath:
            path: /run/spire/agent-sockets
            type: Directory
        - name: config-volume
          configMap:
            name: spire-helper-slim-client
        - name: svids-volume
          emptyDir: {}
EOF
Check that both pods are running:
kubectl get pods -l app.kubernetes.io/name=slim-client -o wide
You can inspect each pod’s identity artifacts with the following commands:
POD_NAME=$(kubectl get pods -l app.kubernetes.io/component=client-a -o jsonpath="{.items[0].metadata.name}")
kubectl exec -c slim-client -it ${POD_NAME} -- ls -l /svids
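The SPIFFE ID itself is carried in the sub claim of the JWT SVID. A minimal sketch to decode the token payload locally (JWT payloads are base64url-encoded JSON, so the padding and alphabet must be fixed up before decoding):

# Grab the payload (second dot-separated segment) of the JWT SVID.
PAYLOAD=$(kubectl exec -c slim-client "${POD_NAME}" -- cat /svids/jwt_svid.token | cut -d '.' -f2)
# Restore base64 padding, translate the base64url alphabet, and decode.
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "${PAYLOAD}" | tr '_-' '/+' | base64 -d; echo
# The sub claim should read spiffe://domain.test/ns/default/sa/slim-client-a.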
Running the Point-to-Point Example (inside the cluster)
Enter the first client pod (receiver):
kubectl exec -c slim-client -it $(kubectl get pods -l app.kubernetes.io/component=client-a -o jsonpath="{.items[0].metadata.name}") -- /bin/bash
Verify the identity artifacts:
ls -l /svids
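You should see the artifacts configured in helper.conf: tls.crt, tls.key, svid_bundle.pem, key.jwt, and jwt_svid.token.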
Run the receiver:
/app/bin/p2p --slim '{"endpoint": "http://slim.slim:46357", "tls": {"insecure": true}}' \
  --jwt /svids/jwt_svid.token \
  --spire-trust-bundle /svids/key.jwt \
  --local agntcy/example/receiver \
  --audience slim-demo
Open a second shell for the sender:
kubectl exec -c slim-client -it $(kubectl get pods -l app.kubernetes.io/component=client-b -o jsonpath="{.items[0].metadata.name}") -- /bin/bash
Run the sender:
/app/bin/p2p --slim '{"endpoint": "http://slim.slim:46357", "tls": {"insecure": true}}' \
  --jwt /svids/jwt_svid.token \
  --spire-trust-bundle /svids/key.jwt \
  --audience slim-demo \
  --local agntcy/example/sender \
  --remote agntcy/example/receiver \
  --enable-mls \
  --message "hey there"
Sample output:
Agntcy/example/sender/... Created app
Agntcy/example/sender/... Connected to http://slim.slim:46357
Agntcy/example/sender/... Sent message hey there - 1/10:
Agntcy/example/sender/... received (from session ...): hey there from agntcy/example/receiver/...
At this point the two workloads are securely exchanging messages, authenticated by SPIRE-issued identities and authorized via JWT claims (audience and expiration). The --enable-mls flag additionally establishes an end-to-end encrypted channel between them.