Identity Management
Overview
Each SLIM client needs to have a valid identity. In the SLIM Group Communication Tutorial, we used a simple shared secret to quickly set up identities for the clients. In a real-world scenario, you would typically use a more secure method, such as tokens or certificates, to authenticate clients and establish their identities.
SLIM supports JWT (JSON Web Tokens) for identity management. Tokens can come from an external identity provider or can be generated by the SLIM nodes directly if you provide the necessary private key for signing the tokens and public key for verification. Check the Identity Test for an example of how to use JWT tokens with SLIM if you have your own keys.
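As a minimal illustration of the signing/verification key pair such a setup relies on, the following generic openssl sketch generates an RSA private key (for signing) and its public counterpart (for verification), then confirms the pair matches by signing and verifying a sample payload. The file names are hypothetical and are not SLIM configuration values:

```shell
# Generate a private key for signing and extract the public key for verification.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out jwt-signing.key
openssl pkey -in jwt-signing.key -pubout -out jwt-verify.pub

# Sign a sample payload with the private key, then verify with the public key.
printf 'sample-claims' > payload.txt
openssl dgst -sha256 -sign jwt-signing.key -out payload.sig payload.txt
openssl dgst -sha256 -verify jwt-verify.pub -signature payload.sig payload.txt
```

The final command prints "Verified OK" when the keys match; this mirrors the asymmetric signing/verification split that JWT-based identity management depends on.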
If you are running your SLIM clients in a Kubernetes environment, using SPIRE is a very common approach to give an identity to each client. SPIRE provides a way to issue SPIFFE IDs to workloads, in the form of JWT tokens, which SLIM can then use to authenticate clients. This allows for secure and scalable identity management in distributed systems.
Example: Using SPIRE with SLIM in Kubernetes (JWT-based identities)
SLIM integrates well with SPIRE: the JWT tokens that SPIRE issues can serve directly as client identities, and SLIM can verify those tokens against the trust bundle provided by SPIRE.
This section shows how to use SPIRE with SLIM to manage client identities. The following topics are covered:
- Creating a local KIND cluster (with an in-cluster image registry).
- Installing SPIRE (server and agents).
- (Optional) Pushing custom SLIM images to the local registry.
- Deploying the SLIM node (control and rendezvous components).
- Deploying two distinct SLIM client workloads, each with its own ServiceAccount (and thus its own SPIFFE ID).
- Running the point-to-point example using JWT-based authentication derived from SPIRE.
If you already have a Kubernetes cluster or an existing SPIRE deployment, you can adapt only the relevant subsections.
This tutorial is based on the SLIM examples.
Prerequisites
The following tools are required:
- Docker (or another container runtime supported by KIND)
- kind
- kubectl
- helm
- curl
Creating a KIND Cluster with a Local Image Registry
The helper script below provisions a KIND cluster and configures a local registry (localhost:5001) that the cluster’s container runtime can pull from:
curl -L https://kind.sigs.k8s.io/examples/kind-with-registry.sh | sh
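To confirm the cluster came up correctly before continuing (the script creates a cluster named kind by default, which is assumed here):

```shell
# List KIND clusters and check that the Kubernetes API and node are reachable.
kind get clusters
kubectl cluster-info --context kind-kind
kubectl get nodes
```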
Local Registry
While this tutorial doesn't use the local registry, it's available at localhost:5001 if you need to test with custom or unpublished container images.
Installing SPIRE
To install SPIRE, you need to install the server, CRDs, and agents:
helm upgrade --install \
  -n spire-server \
  spire-crds spire-crds \
  --repo https://spiffe.github.io/helm-charts-hardened/ \
  --create-namespace

helm upgrade --install \
  -n spire-server \
  spire spire \
  --repo https://spiffe.github.io/helm-charts-hardened/
Wait for the SPIRE components to become ready:
kubectl get pods -n spire-server
All pods should reach Running/READY status before proceeding.
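Instead of polling manually, you can block until everything is ready with a standard kubectl wait invocation (the timeout value is arbitrary):

```shell
# Wait for all SPIRE pods (server, agents, controller) to become Ready.
kubectl wait --for=condition=Ready pod --all -n spire-server --timeout=180s
```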
SPIFFE ID Strategy
The default SPIRE server Helm chart installs a ClusterSPIFFEID resource (spire-server-spire-default) that issues workload identities following the pattern:

spiffe://domain.test/ns/<namespace>/sa/<service-account>
We rely on this object by default. If you need more granular issuance (specific label selectors, different trust domain, etc.), consult the ClusterSPIFFEID documentation.
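To inspect the issuance policy the chart created (the resource and object names below assume the default chart values used above):

```shell
# Show the ClusterSPIFFEID that maps ServiceAccounts to SPIFFE IDs.
kubectl get clusterspiffeid spire-server-spire-default -o yaml
```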
Deploying the SLIM Node
helm install \
  --create-namespace \
  -n slim \
  slim oci://ghcr.io/agntcy/slim/helm/slim:v1.0.0
Confirm the pod is running:
kubectl get pods -n slim
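It is also worth confirming the service name and port, since the client commands later in this tutorial connect to http://slim.slim:46357 (service slim in namespace slim):

```shell
# The SLIM service should expose the data port the clients use (46357).
kubectl get svc -n slim
```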
Deploying Two Distinct Clients (separate ServiceAccounts = separate SPIFFE IDs)
Each Deployment:

- Has its own ServiceAccount (slim-client-a, slim-client-b).
- Mounts the SPIRE agent socket from the host (in KIND, the agent runs as a DaemonSet).
- Runs a placeholder slim-client container (sleep) you can exec into.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: slim-client-a
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-a
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: slim-client-b
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slim-client-a
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: slim-client
      app.kubernetes.io/component: client-a
  template:
    metadata:
      labels:
        app.kubernetes.io/name: slim-client
        app.kubernetes.io/component: client-a
    spec:
      serviceAccountName: slim-client-a
      securityContext: {}
      containers:
        - name: slim-client
          securityContext: {}
          image: "python:3"
          imagePullPolicy: Always
          command: ["/bin/sh", "-c"]
          args:
            - "pip install 'slim-bindings[examples]' && sleep infinity"
          resources: {}
          volumeMounts:
            - name: spire-agent-socket
              mountPath: /run/spire/agent-sockets
              readOnly: false
      volumes:
        - name: spire-agent-socket
          hostPath:
            path: /run/spire/agent-sockets
            type: Directory
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slim-client-b
  labels:
    app.kubernetes.io/name: slim-client
    app.kubernetes.io/component: client-b
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: slim-client
      app.kubernetes.io/component: client-b
  template:
    metadata:
      labels:
        app.kubernetes.io/name: slim-client
        app.kubernetes.io/component: client-b
    spec:
      serviceAccountName: slim-client-b
      securityContext: {}
      containers:
        - name: slim-client
          securityContext: {}
          image: "python:3"
          imagePullPolicy: Always
          command: ["/bin/sh", "-c"]
          args:
            - "pip install 'slim-bindings[examples]' && sleep infinity"
          resources: {}
          volumeMounts:
            - name: spire-agent-socket
              mountPath: /run/spire/agent-sockets
              readOnly: false
      volumes:
        - name: spire-agent-socket
          hostPath:
            path: /run/spire/agent-sockets
            type: Directory
EOF
Check that both pods are running:
kubectl get pods -l app.kubernetes.io/name=slim-client -o wide
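Optionally, verify that SPIRE has registered one entry per ServiceAccount. This assumes the chart's default StatefulSet name, the standard spire-server binary path inside the server container, and that the clients were deployed in the default namespace:

```shell
# Each ServiceAccount should appear as an entry like
# spiffe://domain.test/ns/default/sa/slim-client-a (and slim-client-b).
kubectl exec -n spire-server statefulset/spire-server -c spire-server -- \
  /opt/spire/bin/spire-server entry show
```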
Running the Point-to-Point Example (inside the cluster)
Enter the first client pod (receiver):
kubectl exec -c slim-client -it $(kubectl get pods -l app.kubernetes.io/component=client-a -o jsonpath="{.items[0].metadata.name}") -- /bin/bash
Run the receiver:
slim-bindings-p2p \
  --spire-socket-path /var/run/spire/agent-sockets/spire-agent.sock \
  --spire-jwt-audience slim-demo \
  --slim http://slim.slim:46357 \
  --local agntcy/example/receiver
Open a second shell for the sender:
kubectl exec -c slim-client -it $(kubectl get pods -l app.kubernetes.io/component=client-b -o jsonpath="{.items[0].metadata.name}") -- /bin/bash
Run the sender:
slim-bindings-p2p \
  --spire-socket-path /var/run/spire/agent-sockets/spire-agent.sock \
  --spire-jwt-audience slim-demo \
  --slim http://slim.slim:46357 \
  --local agntcy/example/sender \
  --remote agntcy/example/receiver \
  --enable-mls \
  --message "hey there"
Sample output:
2026-02-02T16:48:39.498136Z INFO slim slim_service::service: 402: client connected endpoint=http://slim.slim:46357 conn_id=0
Using SPIRE dynamic identity authentication.
2026-02-02T16:48:39.498558Z INFO ThreadId(16) slim_auth::spire: 277: Initializing spire identity manager
2026-02-02T16:48:39.505154Z INFO ThreadId(16) slim_auth::spire: 303: spire provider initialized successfully
2026-02-02T16:48:39.505166Z INFO ThreadId(16) slim_auth::spire: 277: Initializing spire identity manager
2026-02-02T16:48:39.506694Z INFO ThreadId(16) slim_auth::spire: 303: spire provider initialized successfully
14976724198355450732 Created app
14976724198355450732 Sent message hey there - 1/10
14976724198355450732 received (from session 643283326): hey there from 902113370376718484
...
At this point the two workloads are securely exchanging messages, authenticated by SPIRE-issued identities and authorized via JWT claims (audience and expiration). The --enable-mls flag additionally establishes an end-to-end encrypted channel using MLS (Messaging Layer Security).