
A First Hello World Python App Run on K8s

We create a hello world app, which we containerize, then deploy on k8s.

Note

There are already thousands of "better" tutorials out there on how to do this. But you have to start somewhere, and I wanted an automatically1 reproducible baseline for the more complex things we'll eventually address in other tutorials around here.

kubernetes playground

If you can't set up a Kubernetes cluster in the cloud, you can test most of the steps below using this playground: https://labs.play-with-k8s.com/

The first example is stateless:

Requirements

$ kubectl get nodes                                       
NAME                 STATUS   ROLES    AGE   VERSION                                
app-pool-849ri       Ready    <none>   28m   v1.21.2                                
app-pool-849rv       Ready    <none>   28m   v1.21.2                                
default-pool-849j0   Ready    <none>   31m   v1.21.2                                
default-pool-849j1   Ready    <none>   32m   v1.21.2
$ kubectl get secrets                                     
NAME                  TYPE                                  DATA   AGE              
default-token-7jjc4   kubernetes.io/service-account-token   3      35m              
regcred               kubernetes.io/dockerconfigjson        1      27m
  • Have the following values present (we use pass to retrieve them):

    • reg/domain domain name of your registry
    • reg/user username for your registry
    • reg/passw your registry password
  • podman or docker and kubectl. Here we will use podman to build and push a container image.

Preparation

$ type kubectl podman # asserts: podman and kubectl
$ app="hello_app"       # app (container) name
$ D="$DT_PROJECT_ROOT/tmp/clusters/DO/k8s/$app"
$ mkdir -p "$D" || exit 1
$ cd "$D"
$ ls -a | xargs rm -rf 2>/dev/null
$ git init
$ podman rmi --force $app 2>/dev/null || true

$ type kubectl podman # asserts: podman and kubectl                                 
kubectl is hashed (/usr/local/bin/kubectl)                                          
podman is hashed (/usr/bin/podman)        
$ 
$ app="hello_app"       # app (container) name                                      
$ 
$ D="$DT_PROJECT_ROOT/tmp/clusters/DO/k8s/$app"
$ mkdir -p "$D" || exit 1
$ cd "$D"
$ ls -a | xargs rm -rf 2>/dev/null
$ git init      
hint: Using 'master' as the name for the initial branch. This default branch name   
hint: is subject to change. To configure the initial branch name to use in all      
hint: of your new repositories, which will suppress this warning, call:             
hint:                
hint:   git config --global init.defaultBranch <name>                               
hint:                
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and             
hint: 'development'. The just-created branch can be renamed via this command:       
hint:                
hint:   git branch -m <name>              
Initialized empty Git repository in /home/gk/repos/blog/tmp/clusters/DO/k8s/hello_app/.git/
$ podman rmi --force $app 2>/dev/null || true             
Untagged: localhost/devapps/hello_app:0.2

Environ

As usual we create an environ file which, when sourced, sets up our working environment - convenient for new shells:

$ cat environ
set -ae
alias k=kubectl p=podman

[[ "$0" = *bash* ]] && s=bash || s=zsh
source <(kubectl completion $s)

D="/home/gk/repos/blog/tmp/clusters/DO/k8s/hello_app"
namespace="devapps"                                  # namespace of our app in registry
fn_reg_auth="$XDG_RUNTIME_DIR/containers/auth.json"  # where podman stores creds
app="hello_app"                                    # app (container) name
ver="0.2"
cd "$D"
set +ae
$ source ./environ

$ source ./environ

Tip

You can tell your shell to automatically source the environ file when you cd into the folder, by overwriting the builtin cd function.

In your .bashrc or .zshrc:

function cd {
    local m f d="${1:-$HOME}"
    # useful as well: cd into dir when a file is given:
    test -d "$d" || {
        test "$d" != "-" && {
            f="$d"
            d="$(dirname "$d")"
            test -e "$f" && m="Is a file" || m="Not exists"
            echo -e "\x1b[38;5;245m$m: $f - going to \x1b[0m$d"
        }
    }
    builtin cd "$d"
    test -e "./.cd.rc" && {
        echo -e "\x1b[38;5;245m$(cat "./.cd.rc")\x1b[0m"
        source "./.cd.rc"
    }
    true
}
Now you can put source environ into a .cd.rc file within the same folder.

Server App

A simple webserver, which returns its process environ to the client and supports being shut down remotely:

$ cat server.py
#!/usr/bin/env python
import os, sys, time, json as j
from http.server import HTTPServer, BaseHTTPRequestHandler as Handler

now = time.time
die = sys.exit

nfo = lambda: {'at': now(), 'env': dict(os.environ)}
rsp = lambda: j.dumps(nfo(), indent=4, sort_keys=True)


def get(h):
    h.send_response(200)
    h.send_header('Content-type', 'application/json')
    h.end_headers()
    w = lambda s, h=h: h.wfile.write(bytes(s, 'utf-8'))
    w(rsp())
    # we allow the client to stop the server via those URL paths:
    p = h.path.split('?', 1)[0]
    die(0) if p == '/stop' else die(1) if p == '/err' else 0


Handler.do_GET = get

def run(bind='0.0.0.0', port=28001):
    print(f'Starting httpd server on {bind}:{port}')
    HTTPServer((bind, port), Handler).serve_forever()

run() if __name__ == '__main__' else 0
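
To sanity-check the handler logic before containerizing, here is a minimal, hypothetical smoke test: it spins up a server with the same response shape on a free port in a background thread and verifies the JSON it returns (the `Handler` class below is local to this sketch, not imported from server.py):

```python
import json, os, threading, time, urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    # same response shape as server.py: timestamp plus process environ
    def do_GET(self):
        body = json.dumps({'at': time.time(), 'env': dict(os.environ)})
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()
        self.wfile.write(body.encode('utf-8'))

    def log_message(self, *args):  # silence per-request logging
        pass

srv = HTTPServer(('127.0.0.1', 0), Handler)  # port 0: OS picks a free port
threading.Thread(target=srv.serve_forever, daemon=True).start()
with urllib.request.urlopen(f'http://127.0.0.1:{srv.server_port}/') as r:
    data = json.loads(r.read())
srv.shutdown()
print(sorted(data))  # ['at', 'env']
```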

Containerize

$ cat Dockerfile
FROM         docker.io/python:3.8
MAINTAINER   gk
RUN          mkdir -p /app
WORKDIR      /app
COPY         server.py /app/server.py
ENV          APP_ENV development
EXPOSE       28001
CMD          ["python", "server.py"]
$ p build --quiet -t $app  .
$ p tag "$app:latest" "$namespace/$app:$ver"

$ p build --quiet -t $app  .                              
084853fe41554ceb4094d20d6f202e23a14be169c1ec252abfd5111be62bc5d9
$ p tag "$app:latest" "$namespace/$app:$ver"

Test

$ p run -d --rm -p28001:28001 $namespace/$app:$ver
$ wget --retry-connrefused http://localhost:28001/stop -O - # lp: asserts=PATH

$ p run -d --rm -p28001:28001 $namespace/$app:$ver        
cd8c94247f5849f6c04e8db1049ebb0b8110b449bc22eca7ddc9ef8351ca8f7d
$ wget --retry-connrefused http://localhost:28001/stop -O -                    
--2021-08-16 14:40:37--  http://localhost:28001/stop                                
Resolving localhost (localhost)... ::1, 127.0.0.1                                   
Connecting to localhost (localhost)|::1|:28001... connected.                        
HTTP request sent, awaiting response... 200 OK                                      
Length: unspecified [application/json]    
Saving to: 'STDOUT'

-                        [<=>                    ]       0  --.-KB/s               {
    "at": 1629117637.1792567,             
    "env": {         
        "APP_ENV": "development",         
        "GPG_KEY": "E3FF2839C048B25C084DEBE9B26995E310250568",                      
        "HOME": "/root",                  
        "HOSTNAME": "cd8c94247f58",       
        "LANG": "C.UTF-8",                
        "PATH": "/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",           
        "PYTHON_GET_PIP_SHA256": "fa6f3fb93cce234cd4e8dd2beb54a51ab9c247653b52855a48dd44e6b21ff28b",     
        "PYTHON_GET_PIP_URL": "https://github.com/pypa/get-pip/raw/c20b0cfd643cd4a19246ccf204e2997af70f6b21/public/get-pip.py",                                         
        "PYTHON_PIP_VERSION": "21.2.3",   
        "PYTHON_VERSION": "3.8.11",       
        "TERM": "xterm",                  
        "container": "podman"             
    }                
-                        [ <=>                   ]     692  --.-KB/s    in 0s

2021-08-16 14:40:37 (1.36 MB/s) - written to stdout [692]

Commit App

$ git add server.py Dockerfile environ
$ git commit -am 'feat: First version of hello world server'
$ git tag $ver || true

$ git add server.py Dockerfile environ
$ git commit -am 'feat: First version of hello world server'                   
[master (root-commit) cf30264] feat: First version of hello world server            
 3 files changed, 50 insertions(+)        
 create mode 100644 Dockerfile            
 create mode 100644 environ               
 create mode 100755 server.py
$ git tag $ver || true

Push to (Private) Registry

$ p login "$(pass show reg/domain)" -u $(pass show reg/user) -p "$(pass show reg/passw)"
$ r="docker://$(pass show reg/domain)/docker-internal/$namespace"
$ p push --quiet --authfile=$fn_reg_auth $namespace/$app:$ver "$r/$app:$ver" && echo success # lp: assert=success

$ p login "$(pass show reg/domain)" -u $(pass show reg/user) -p "$(pass show reg/passw)"            
Login Succeeded!
$ r="docker://$(pass show reg/domain)/docker-internal/$namespace"
$ p push --quiet --authfile=$fn_reg_auth $namespace/$app:$ver "$r/$app:$ver" && echo success        
success

Cloud Deployment

Let's deploy the container using K8s' builtin weapons:

The app is stateless, so we deploy, ...well..., a "Deployment":

$ cat << EOF > frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pyhello
      tier: frontend
  template:
    metadata:
      labels:
        app: pyhello
        tier: frontend
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: py-hello1
        image: $(pass show reg/domain)/docker-internal/$namespace/$app:$ver
        imagePullPolicy: Always
        env:
        - name: FOO
          value: "BAR"
        ports:
        - containerPort: 28001
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
EOF
$ k apply -f frontend-deployment.yaml
$ sleep 1
$ k get pods -l app=pyhello -l tier=frontend

$ cat << EOF > frontend-deployment.yaml   
> apiVersion: apps/v1                     
> kind: Deployment   
> metadata:          
>   name: frontend   
> spec:              
>   replicas: 2      
>   selector:        
>     matchLabels:   
>       app: pyhello 
>       tier: frontend                    
>   template:        
>     metadata:      
>       labels:      
>         app: pyhello                    
>         tier: frontend                  
>     spec:          
>       imagePullSecrets:                 
>       - name: regcred                   
>       containers:  
>       - name: py-hello1                 
>         image: $(pass show reg/domain)/docker-internal/$namespace/$app:$ver       
>         imagePullPolicy: Always         
>         env:       
>         - name: FOO
>           value: "BAR"                  
>         ports:     
>         - containerPort: 28001          
>         resources: 
>           requests:
>             cpu: 100m                   
>             memory: 100Mi               
> EOF                
$ 
$ k apply -f frontend-deployment.yaml                     
deployment.apps/frontend unchanged
$ sleep 1
$ k get pods -l app=pyhello -l tier=frontend              
NAME                       READY   STATUS    RESTARTS   AGE                         
frontend-9d9756598-tgkxj   1/1     Running   0          14m                         
frontend-9d9756598-w2dck   1/1     Running   0          14m
$ cat << EOF > frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: pyhello
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 28001
  selector:
    app: pyhello
    tier: frontend
EOF
$ k apply -f frontend-service.yaml
$ sleep 1
$ k get service frontend

$ cat << EOF > frontend-service.yaml      
> apiVersion: v1     
> kind: Service      
> metadata:          
>   name: frontend   
>   labels:          
>     app: pyhello   
>     tier: frontend 
> spec:              
>   type: LoadBalancer                    
>   ports:           
>   - port: 28001    
>   selector:        
>     app: pyhello   
>     tier: frontend 
> EOF                
$ 
$ k apply -f frontend-service.yaml                        
service/frontend unchanged
$ sleep 1
$ k get service frontend                                  
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)           AGE   
frontend   LoadBalancer   10.245.204.64   104.248.103.121   28001:31927/TCP   14m

On DO, the load balancer needs a few minutes when you first configure one on a new k8s cluster (k describe service frontend shows the details).

Let's wait until it is up:

$ while true; do sleep 2; k get service frontend | grep pending || break; done # lp: timeout=600
$ k -o json get service frontend | jq .
$ ip=$(k -o json get service frontend | jq -r .status.loadBalancer.ingress[0].ip)

$ while true; do sleep 2; k get service frontend | grep pending || break; done
$ k -o json get service frontend | jq .                   
{                    
  "apiVersion": "v1",
  "kind": "Service", 
  "metadata": {      
    "annotations": { 
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"pyhello\",\"tier\":\"frontend\"},\"name\":\"frontend\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"port\":28001}],\"selector\":{\"app\":\"pyhello\",\"tier\":\"frontend\"},\"type\":\"LoadBalancer\"}}\n",           
      "kubernetes.digitalocean.com/load-balancer-id": "6781507a-afb4-49b0-b411-f0269483efb4"             
    },               
    "creationTimestamp": "2021-08-16T12:26:31Z",                                    
    "finalizers": [  
      "service.kubernetes.io/load-balancer-cleanup"                                 
    ],               
    "labels": {      
      "app": "pyhello",                   
      "tier": "frontend"                  
    },               
    "name": "frontend",                   
    "namespace": "default",               
    "resourceVersion": "3707",            
    "uid": "ac18b950-82cd-4b6b-8aa1-59f3dcf94af5"                                   
  },                 
  "spec": {          
    "clusterIP": "10.245.204.64",         
    "clusterIPs": [  
      "10.245.204.64"
    ],               
    "externalTrafficPolicy": "Cluster",   
    "ipFamilies": [  
      "IPv4"         
    ],               
    "ipFamilyPolicy": "SingleStack",      
    "ports": [       
      {              
        "nodePort": 31927,                
        "port": 28001,                    
        "protocol": "TCP",                
        "targetPort": 28001               
      }              
    ],               
    "selector": {    
      "app": "pyhello",                   
      "tier": "frontend"                  
    },               
    "sessionAffinity": "None",            
    "type": "LoadBalancer"                
  },                 
  "status": {        
    "loadBalancer": {
      "ingress": [   
        {            
          "ip": "104.248.103.121"         
        }            
      ]              
    }                
  }                  
}
$ ip=$(k -o json get service frontend | jq -r .status.loadBalancer.ingress[0].ip)
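
If jq is not at hand, the same extraction can be sketched in plain Python; the JSON below is a trimmed-down sample of the kubectl output above:

```python
import json

# trimmed sample of `kubectl -o json get service frontend` output
svc = json.loads('''
{
  "kind": "Service",
  "status": {"loadBalancer": {"ingress": [{"ip": "104.248.103.121"}]}}
}
''')
# same path the jq filter walks: .status.loadBalancer.ingress[0].ip
ip = svc['status']['loadBalancer']['ingress'][0]['ip']
print(ip)  # 104.248.103.121
```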

The pods are up now - we can access the service from the internet via $ip. The load balancer distributes requests over the pods:

$ wget -q http://$ip:28001/ -O - | jq .                   
{                    
  "at": 1629117652.7117255,               
  "env": {           
    "APP_ENV": "development",             
    "FOO": "BAR",    
    "FRONTEND_PORT": "tcp://10.245.204.64:28001",                                   
    "FRONTEND_PORT_28001_TCP": "tcp://10.245.204.64:28001",                         
    "FRONTEND_PORT_28001_TCP_ADDR": "10.245.204.64",                                
    "FRONTEND_PORT_28001_TCP_PORT": "28001",                                        
    "FRONTEND_PORT_28001_TCP_PROTO": "tcp",                                         
    "FRONTEND_SERVICE_HOST": "10.245.204.64",                                       
    "FRONTEND_SERVICE_PORT": "28001",     
    "GPG_KEY": "E3FF2839C048B25C084DEBE9B26995E310250568",                          
    "HOME": "/root", 
    "HOSTNAME": "frontend-9d9756598-tgkxj",                                         
    "KUBERNETES_PORT": "tcp://10.245.0.1:443",                                      
    "KUBERNETES_PORT_443_TCP": "tcp://10.245.0.1:443",                              
    "KUBERNETES_PORT_443_TCP_ADDR": "10.245.0.1",                                   
    "KUBERNETES_PORT_443_TCP_PORT": "443",
    "KUBERNETES_PORT_443_TCP_PROTO": "tcp",                                         
    "KUBERNETES_SERVICE_HOST": "10.245.0.1",                                        
    "KUBERNETES_SERVICE_PORT": "443",     
    "KUBERNETES_SERVICE_PORT_HTTPS": "443",                                         
    "LANG": "C.UTF-8",                    
    "PATH": "/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",               
    "PYTHON_GET_PIP_SHA256": "fa6f3fb93cce234cd4e8dd2beb54a51ab9c247653b52855a48dd44e6b21ff28b",         
    "PYTHON_GET_PIP_URL": "https://github.com/pypa/get-pip/raw/c20b0cfd643cd4a19246ccf204e2997af70f6b21/public/get-pip.py",   
    "PYTHON_PIP_VERSION": "21.2.3",       
    "PYTHON_VERSION": "3.8.11"            
  }                  
}
$ for i in {1..10}; do wget -q http://$ip:28001/ -O - | jq -r .env.HOSTNAME; done | sort | uniq     
frontend-9d9756598-tgkxj                  
frontend-9d9756598-w2dck
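
The sort | uniq pipeline above boils down to counting distinct pod hostnames; the same check in Python (the 6/4 split is made up for illustration):

```python
from collections import Counter

# hypothetical hostnames collected from 10 requests against the LB
seen = ['frontend-9d9756598-tgkxj'] * 6 + ['frontend-9d9756598-w2dck'] * 4
spread = Counter(seen)
print(len(spread))  # 2 distinct pods answered
```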

Dashboard says:

K8s HA

We can watch K8s respawning pods when we cause the service to crash:

$ for i in 1 2; do wget -qt1T1 http://$ip:28001/err -O /dev/null; done

$ sleep 2
$ k get pods
NAME                       READY   STATUS    RESTARTS   AGE
frontend-9d9756598-tgkxj   1/1     Running   1          14m
frontend-9d9756598-w2dck   1/1     Running   1          14m

When we crash it a few times in a row, K8s reacts reasonably by default:

$ k get pods
NAME                        READY   STATUS   RESTARTS   AGE
frontend-67bc4f75c8-26prb   0/1     Error    2          2m54s
frontend-67bc4f75c8-d57vg   0/1     Error    2          2m54s
(...)
NAME                        READY   STATUS             RESTARTS   AGE
frontend-67bc4f75c8-26prb   0/1     CrashLoopBackOff   2          2m54s
frontend-67bc4f75c8-d57vg   0/1     CrashLoopBackOff   2          2m54s
(...)
NAME                        READY   STATUS    RESTARTS   AGE
frontend-67bc4f75c8-26prb   1/1     Running   3          3m35s
frontend-67bc4f75c8-d57vg   1/1     Running   3          3m35s
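
The growing delay between those restarts is the CrashLoopBackOff at work; as a rough sketch, assuming the kubelet's documented defaults of a 10s base delay, doubled after each crash and capped at 5 minutes:

```python
# sketch of the assumed CrashLoopBackOff schedule: the restart delay
# doubles after each crash until it hits the cap
def backoff_delays(restarts, base=10, cap=300):
    return [min(base * 2 ** i, cap) for i in range(restarts)]

print(backoff_delays(6))  # [10, 20, 40, 80, 160, 300]
```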

Bring it down again:

$ k delete -f frontend-service.yaml                       
service "frontend" deleted
$ k delete -f frontend-deployment.yaml                    
deployment.apps "frontend" deleted


  1. All the code in this tutorial is run by a markdown preprocessor. 
