OpenShift - General
Privileged Deployment - root
By default, all pods in OpenShift run unprivileged (not as root). OpenShift allows root privileges thoughtfully, on a per-project and case-by-case basis.
Warning
For security reasons it’s recommended to run as non-root (the default) and to update your container to work within this security context.
Option 1
Update the container to run as a non-root user by adding the following at the end of your Dockerfile:
USER 1001
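For example, a Dockerfile might end as sketched below. The base image, the /opt/app path, and the UID 1001 are illustrative; the `chgrp`/`chmod` step follows the common OpenShift guideline of making files group-writable, since OpenShift may run the container with an arbitrary UID that belongs to the root group (GID 0).

```dockerfile
FROM registry.access.redhat.com/ubi9/ubi-minimal
# ... install and configure the application under /opt/app (hypothetical path) ...
# Make application files owned and writable by the root group, because
# OpenShift assigns pods an arbitrary non-root UID in group 0.
RUN chgrp -R 0 /opt/app && chmod -R g=u /opt/app
# Run as a non-root user from this point on.
USER 1001
```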
Option 2
Use the oc adm policy scc-subject-review sub-command to list the security context constraints (SCCs) that would allow the container to run:
oc -n <namespace> get deployment <deployment-name> -o yaml | \
oc adm policy scc-subject-review -f -
Create a service account in the namespace of your container.
oc -n <namespace> create serviceaccount <service-account-name>
Associate the service account with an SCC:
oc adm policy add-scc-to-user <scc-name> \
-z <service-account-name> \
-n <project>
Update the existing deployment to use the newly created service account:
oc set serviceaccount deployment/<deployment-name> \
<service-account-name> -n <project>
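After these steps, the deployment's pod template references the new service account. A minimal sketch of the resulting fields (placeholder name as above):

```yaml
# Relevant portion of the patched Deployment.
spec:
  template:
    spec:
      # Set by `oc set serviceaccount`; pods are now admitted under the
      # SCC associated with this service account.
      serviceAccountName: <service-account-name>
```

To verify the binding took effect, you can check which SCC admitted a pod: OpenShift records it in the pod annotation openshift.io/scc.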
Option 3 (Not Recommended)
Update the privileged Security Context Constraints (SCC) by adding the project's default service account.
oc edit scc privileged
Note
You can apply this to any project and any service account used by the deployment. In the following example we’re using the default project / namespace and the default service account.
users:
- system:admin
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:default
Update the deployment; the relevant changes are the serviceAccountName and the container securityContext:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      serviceAccountName: default
      containers:
      - image: docker.io/library/busybox:latest
        command:
        - sleep
        - infinity
        name: busybox
        securityContext:
          runAsUser: 0
          privileged: true
          allowPrivilegeEscalation: true
          runAsNonRoot: false
          seccompProfile:
            type: RuntimeDefault
          capabilities:
            drop: ["ALL"]
        ports:
        - containerPort: 8080
          protocol: TCP
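Assuming the manifest above is saved as busybox-privileged.yaml (a hypothetical filename), apply it and watch the rollout:

```shell
# Apply the updated Deployment manifest.
oc apply -f busybox-privileged.yaml -n default
# Wait for the new pods to roll out.
oc rollout status deployment/busybox -n default
```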
List containers
List all images running in a cluster
oc get pods -A -o go-template --template='{{range .items}}{{range .spec.containers}}{{printf "%s\n" .image}}{{end}}{{end}}' | sort -u
List all images stored on nodes
for node in $(oc get nodes -o name);do oc debug ${node} -- chroot /host sh -c 'crictl images -o json' 2>/dev/null | jq -r .images[].repoTags[]; done | sort -u
Rename a node
Drain workloads from node
oc adm drain <node> --ignore-daemonsets
Make the DNS / hostname change. If the hostname is not a DNS name, you can use the following command on the node itself:
# vi /etc/hostname
hostnamectl set-hostname <hostname>
Delete the old kubelet certificates (which are valid only for the old hostname) on the node:
sudo rm /var/lib/kubelet/pki/*
Reboot the server
sudo reboot
Delete the node
oc delete node <node>
Approve CSRs
# list CSRs
oc get csr
# approve all CSRs
oc get csr -o name | xargs oc adm certificate approve
Now you should see the node with the new name:
oc get nodes