Network reachability in k8s

From time to time you need to test connectivity between pods and external instances.

A convenient way is to run a shell in a pod and use telnet, nc, or traceroute:

kubectl run -i --tty busybox --image=busybox --restart=Never -- sh

To open a shell in a specific existing pod:

kubectl exec -i -t -n prod-ns wallet-cf48545b6-9t5hd -- sh

To run a single command from a specific pod:

kubectl exec -n prod-ns wallet-cf48545b6-9t5hd -- nc google.com 443
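The same approach works for cluster-internal services by their DNS names. A sketch, assuming a service and port that exist in your cluster and an image whose nc supports -z/-v:

#check TCP reachability of an in-cluster service (service name and port are placeholders)
kubectl exec -n prod-ns wallet-cf48545b6-9t5hd -- nc -zv some-service.some-namespace.svc.cluster.local 8080

#check DNS resolution from the busybox pod
kubectl exec busybox -- nslookup some-service.some-namespace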

AWS EKS with assumed role and kubectl

Keep in mind that you will not be able to log in to the Kubernetes cluster after the EKS cluster has been created via Terraform under an assumed role.

You need a profile that assumes the role, like this, in ~/.aws/config:

[management]
region = eu-central-1

[dev-eks]
role_arn = arn:aws:iam::84557222244:role/terraform
source_profile = management

Then:

export AWS_PROFILE=dev-eks
export KUBECONFIG=~/.kube/your_new_cluster_config.conf
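If the kubeconfig does not exist yet, it can be generated with the AWS CLI under the assumed-role profile (the cluster name below is a placeholder):

aws eks update-kubeconfig --name your-cluster-name --region eu-central-1 --profile dev-eks --kubeconfig ~/.kube/your_new_cluster_config.conf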

After that you will be able to work with the aws-auth ConfigMap:

kubectl describe configmap -n kube-system aws-auth   

Optionally, a user can be added manually to the mapUsers section:

apiVersion: v1
data:
  mapAccounts: |
    []
  mapRoles: |
    - "groups":
      - "system:bootstrappers"
      - "system:nodes"
      "rolearn": "arn:aws:iam::84557222244:role/dev2021080114300461520000000b"
      "username": "system:node:{{EC2PrivateDNSName}}"
  mapUsers: |
    - "groups":
      - "system:masters"
      "userarn": "arn:aws:iam::84557222244:user/eks_api_user"
      "username": "eks_api_user"

Kubernetes proxy connection to external service

If you need to connect to a service with type: ExternalName, you have to build a tunnel, because it is not possible to port-forward directly to such a service.

How does it work?

For example, Postgres on port 5432:

kubectl -n prod run pg-tunnel -it --rm --image=alpine/socat --expose=true --port=5432 -- tcp-listen:5432,fork,reuseaddr tcp-connect:yourdatabasehost:5432
kubectl -n prod port-forward svc/pg-tunnel 5432:5432
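Once the port-forward is running, the database is reachable on localhost; for example with psql (user and database names are placeholders):

psql -h 127.0.0.1 -p 5432 -U youruser yourdatabase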

Vault in Kubernetes

High availability

I want to have more than one instance of Vault in the cluster, to avoid errors when a node is unavailable, and to store the secrets in a database.

Earlier, I ran Vault as a single instance on EC2 with DynamoDB as the HA storage backend.

So using RDS is a tradeoff, and the secrets need to be migrated to Postgres following the instructions on the official site. It's easy, to be honest.

Eventually I got the same data, but in Postgres.

What did I do?

  1. Create the vault namespace.
  2. Use the following Vault configuration (I use AWS KMS to auto-unseal):
storage "postgresql" {
  connection_url = "postgres://{{vault.dbuser}}:{{vault.dbpassword}}@{{vault.dbhost}}:{{vault.dbport}}/{{vault.dbname}}?sslmode=disable"
  ha_enabled = "true"
}

seal "awskms" {
  region = "{{vault.seal.region}}"
  kms_key_id = "{{vault.seal.kms}}"
  access_key = "{{vault.seal.accesskey}}"
  secret_key = "{{vault.seal.secretkey}}"
}


Or a version with Consul:

storage "consul" {
  address = "consul-consul-server.consul:8500"
  path    = "vault"
}

seal "awskms" {
  region = "eu-central-1"
  kms_key_id = "aa"
  access_key = "aa"
  secret_key = "bb"
}

3. Create a secret from the configuration and install Vault via Helm in HA mode with 3 replicas:

#namespace from step 1
kubectl create ns vault
kubectl create secret generic vault-storage-config -n vault --from-file=config.hcl

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault -n vault --set='injector.enabled=true' --set='server.ha.enabled=true' --set='server.ha.replicas=3' --set='server.dataStorage.size=1Gi' --set='server.extraVolumes[0].type=secret' --set='server.extraVolumes[0].name=vault-storage-config' --set='server.extraArgs=-config=/vault/userconfig/vault-storage-config/config.hcl'
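To check that the replicas are up (pod names assume the Helm chart defaults vault-0/1/2):

kubectl -n vault get pods
kubectl -n vault exec vault-0 -- vault status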


Auto unseal

I faced a problem: Vault needs to auto-unseal in Kubernetes when a node restarts. This happens, for example, if you use spot instances; otherwise you have to unseal each instance manually via kubectl port-forward or by connecting directly to the pod.

The easiest way:

  1. Create an IAM user with a policy that allows encrypt/decrypt/describe-key on the KMS key.
  2. Create the KMS key and grant the user access to it.
  3. Update the Vault config to use the awskms seal:
seal "awskms" {
  region = "eu-west-1"
  kms_key_id = "xxx"
  access_key = "aaa"
  secret_key = "bbb"
}

4. Unseal each instance with the existing unseal key using the migrate flag: vault operator unseal -migrate
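One way to run the migration unseal from a workstation is through a port-forward to a Vault pod; a sketch assuming the Helm default pod name and plain HTTP on 8200:

#port-forward to a pod in the background (Helm default pod name assumed)
kubectl -n vault port-forward vault-0 8200:8200 &
export VAULT_ADDR=http://127.0.0.1:8200
vault operator unseal -migrate

Repeat the unseal for each replica.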

That's it.

Vault secrets

There is a full explanation on the official site, but again, the simplest way:

  1. Create a policy that describes which paths the Kubernetes service accounts can use:
vault policy write "app-policy" -<<EOF
path "kv*" {
  capabilities = ["read"]
}
EOF

2. Enable Kubernetes authentication:

vault auth enable kubernetes

3. Connect the Vault config with the Kubernetes service account credentials (run from inside a pod in the cluster):

vault write auth/kubernetes/config token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" kubernetes_host=https://${KUBERNETES_PORT_443_TCP_ADDR}:443 kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

4. Allow namespaces and service accounts to use Vault (in this case all of them are allowed; a stricter variant is shown right after the command):

vault write auth/kubernetes/role/app-role bound_service_account_names='*' bound_service_account_namespaces='*' policies=app-policy ttl=1h
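The stricter variant binds the role to a specific service account and namespace instead of wildcards (names are placeholders):

vault write auth/kubernetes/role/app-role bound_service_account_names=yourapp-sa bound_service_account_namespaces=prod policies=app-policy ttl=1h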

5. In the Deployment section of the manifest, add the following annotations to the pod template metadata:

spec:
  selector:
    matchLabels:
      app: yourapp
  replicas: 3
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-app: kv/data/yourapp
        vault.hashicorp.com/role: app-role

Then Vault mounts the secret as a file named app at the path /vault/secrets/app.

It is called app because the annotation has the suffix agent-inject-secret-app.

Keep in mind that only values created with the KV v1 engine are rendered directly as an env-style property file like this:

AA=BB

Otherwise you need to define the mapping with a template annotation:

vault.hashicorp.com/agent-inject-template-xxx
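
A sketch of such a template for a KV v2 secret at kv/data/yourapp, rendering every key as a KEY=VALUE line (the annotation suffix app matches the inject-secret annotation above):

vault.hashicorp.com/agent-inject-template-app: |
  {{- with secret "kv/data/yourapp" -}}
  {{- range $key, $value := .Data.data }}
  {{ $key }}={{ $value }}
  {{- end }}
  {{- end }}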

Vault secrets as env variables via ExternalSecret

Some applications are unable to load parameters from a file, so we need to expose Vault secrets as native Kubernetes Secrets. We can do it via ExternalSecret: https://github.com/external-secrets/kubernetes-external-secrets

helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/
helm install vault-secrets external-secrets/kubernetes-external-secrets --set env.VAULT_ADDR=http://vault:8200 -n vault

As a result you will see:
NAME: vault-secrets
NAMESPACE: vault
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The kubernetes external secrets has been installed. Check its status by running:
  kubectl --namespace vault get pods -l "app.kubernetes.io/name=kubernetes-external-secrets,app.kubernetes.io/instance=vault-secrets"

Visit https://github.com/external-secrets/kubernetes-external-secrets for instructions on how to use kubernetes external secrets

Then you need to create an ExternalSecret resource:

apiVersion: "kubernetes-client.io/v1"
kind: ExternalSecret
metadata:
  name: nameofyoursecret
spec:
  backendType: vault
  vaultRole: app-role
  dataFrom:
   - kv/data/testsecret

After that you will see that a Kubernetes Secret has been created and it can be referenced from a pod.
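
A minimal sketch of the container part of a Deployment that consumes it as environment variables (container name and image are placeholders):

containers:
  - name: yourapp
    image: yourapp:latest
    envFrom:
      - secretRef:
          name: nameofyoursecret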

Cloudflare + nginx ingress controller

To preserve the client's IP address, you just need to add the following keys to the ingress-nginx-controller ConfigMap:

kubectl edit configmap -n ingress-nginx ingress-nginx-controller

data:
  proxy-real-ip-cidr: "173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32"
  use-forwarded-headers: "true"
  forwarded-for-header: "CF-Connecting-IP"
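
To verify, tail the controller logs and check that requests show the real client IP instead of a Cloudflare address (the deployment name assumes a standard install):

kubectl -n ingress-nginx logs deploy/ingress-nginx-controller -f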

How I use EKS in dev

I would like to share notes on how I build an EKS cluster for development needs.

  1. I created a shell script to deploy the cluster via the eksctl tool:
REGION=eu-west-1
VERSION=1.19
NODES=0
TYPE=t2.large

eksctl create cluster --version=$VERSION --name=inqud-dev-cluster --nodes=$NODES --region=$REGION --node-type $TYPE --node-labels="lifecycle=OnDemand" --asg-access

In the above scenario I created 0 nodes, because I'm going to use spot nodes at the cheapest price.
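
To confirm the control plane is up before adding node groups:

eksctl get cluster --region eu-west-1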

2. Create a nodegroup with spot nodes:

eksctl create nodegroup -f dev-spot-nodegroup.yml

#manifest
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dev-cluster
  region: eu-west-1
nodeGroups:
  - name: spot-dev-node-group
    minSize: 3
    maxSize: 5
    desiredCapacity: 3
    instancesDistribution:
      instanceTypes: ["t2.medium"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotAllocationStrategy: "capacity-optimized"
    labels:
      lifecycle: Ec2Spot
    iam:
      withAddonPolicies:
        autoScaler: true

#check:
kubectl get nodes --show-labels --selector=lifecycle=Ec2Spot

3. Install the AWS Node Termination Handler. It gives us the ability to gracefully drain spot nodes before they are terminated.

Details: https://github.com/aws/aws-node-termination-handler/

kubectl apply -f https://github.com/aws/aws-node-termination-handler/releases/download/v1.12.0/all-resources.yaml

#check: you should see a daemonset pod on each node
kubectl get daemonsets --all-namespaces

4. Then we need to install the Cluster Autoscaler to be able to automatically add new nodes to the cluster:

Details: https://github.com/kubernetes/autoscaler

#download script
curl -LO https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-multi-asg.yaml

#find nodegroup name
eksctl get nodegroup spot-dev-node-group --cluster dev-cluster -o json | jq '.[0].AutoScalingGroupName' | xargs

#put the ASG name into the manifest and apply it
kubectl apply -f cluster-autoscaler-multi-asg.yaml
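
In cluster-autoscaler-multi-asg.yaml, the part to edit is the --nodes argument of the cluster-autoscaler container; it takes min:max:asg-name, so with the nodegroup above it would look roughly like this (the ASG name is a placeholder taken from the previous command):

- --nodes=3:5:your-autoscaling-group-name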

5. Install the ingress-nginx controller exposed via a Network Load Balancer:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/aws/deploy.yaml

6. Install the Let's Encrypt integration to be able to use dynamic HTTPS endpoints:

https://amsone.tech.blog/2021/02/09/https-in-kubernetes/

Kubernetes: CI/CD user for GitLab

We want to deploy our Helm charts to the k8s cluster and, of course, we want to do it automatically.

I've prepared a script that creates a CI/CD user as a service account, builds the corresponding kubeconfig context, and allows it to work only in defined namespaces.

It is more secure (and useful) to create the service account in an isolated namespace.

set -e

#define service account name
NAMESPACE="cicd-ns"
SERVICE_ACCOUNT_NAME="gitlab-user"

#create namespace
kubectl create ns $NAMESPACE

#create service account
kubectl create sa ${SERVICE_ACCOUNT_NAME} -n $NAMESPACE

#extracting secret name and public key
SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT_NAME} -n $NAMESPACE -o jsonpath={.secrets..name})
kubectl get secret "${SECRET_NAME}" -n $NAMESPACE -o json | jq -r '.data["ca.crt"]' | base64 --decode > "ca.crt"

#fetch user token
USER_TOKEN=$(kubectl get secret "${SECRET_NAME}" -n $NAMESPACE -o json | jq -r '.data["token"]' | base64 --decode)

#create context
KUBECFG_FILE_NAME="k8s-${SERVICE_ACCOUNT_NAME}_${NAMESPACE}.conf"
CURRENT_CONTEXT=$(kubectl config current-context)
CLUSTER_NAME=$(kubectl config get-contexts "${CURRENT_CONTEXT}" | awk '{print $3}' | tail -n 1)
ENDPOINT=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}")
CONTEXT_NAME=${SERVICE_ACCOUNT_NAME}-${CLUSTER_NAME}
kubectl config set-cluster ${CLUSTER_NAME} --kubeconfig="${KUBECFG_FILE_NAME}" --server="${ENDPOINT}" --certificate-authority="ca.crt" --embed-certs=true
kubectl config set-credentials $CONTEXT_NAME --kubeconfig="${KUBECFG_FILE_NAME}" --token="${USER_TOKEN}"
kubectl config set-context $CONTEXT_NAME --kubeconfig="${KUBECFG_FILE_NAME}" --cluster="${CLUSTER_NAME}" --user=$CONTEXT_NAME

#verify context
kubectl config use-context $CONTEXT_NAME --kubeconfig="${KUBECFG_FILE_NAME}"

Then bind the service account to a role in the namespaces where it is allowed to deploy:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cicd-role-binding
  namespace: your-working-ns
subjects:
  - kind: ServiceAccount
    name: gitlab-user
    namespace: cicd-ns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

It is better to use a custom Role instead of the cluster-admin ClusterRole, but then we have to define exactly which resources should be available (see the sketch below).
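
A sketch of such a custom Role; the resource list is an assumption and should be adjusted to what your charts actually create:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cicd-role
  namespace: your-working-ns
rules:
  - apiGroups: ["", "apps", "batch", "networking.k8s.io"]
    resources: ["deployments", "services", "configmaps", "secrets", "ingresses", "pods", "jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

With this, the roleRef in the RoleBinding above changes to kind: Role and name: cicd-role.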

HTTPS in Kubernetes

The fastest way I found is using the Let's Encrypt service with Route 53 integration (in my case).

The Route 53 integration gives me the ability to validate the domain automatically.

Official documentation

There are a few steps:

#1. create namespace
kubectl create namespace cert-manager

#2. add helm repository and install cert manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.1.0 --set installCRDs=true

#3. verify that the 3 cert-manager pods are running fine
kubectl get pods -n cert-manager


#4. create a secret with the AWS secret access key
kubectl create secret generic acme-route53 -n cert-manager --from-literal=secret-access-key='your secret here'


#5. apply manifest

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: youremail
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - selector:
          dnsZones:
            - "yourdomain"
        dns01:
          route53:
            region: eu-west-1
            accessKeyID: yourawsaccesskey
            secretAccessKeySecretRef:
              name: acme-route53
              key: secret-access-key

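After applying the manifest, check that the issuer is registered and ready:

kubectl get clusterissuer letsencrypt-issuer -o wide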


Ingress example

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: your-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-issuer"
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: nginx-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api-gateway
              servicePort: 8080


The TLS certificate will be automatically saved to the nginx-tls secret.
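
The issuance can be followed through cert-manager's Certificate resource; the resource name normally matches the secretName (the namespace below is a placeholder):

kubectl get certificate -A
kubectl describe certificate nginx-tls -n your-app-namespace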

EKS (Fargate) + NLB

Official AWS NLB documentation

Here I will try to deploy a cluster based on a Network Load Balancer, because we want to manage subdomains and URLs inside the cluster, and it is much easier!

What is needed?

  1. Install eksctl on the local machine (it is needed to set up the cluster from the terminal instead of the UI).
  2. Prepare a simple eksctl YAML file: the same file as in the post about ALB.
  3. Apply the ingress controller deployment. Note that the Helm chart did not work as expected (as of 2021-01-12):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml

Example of the ingress part

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: yourapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
