Linux and Nagios

Using check_http to Monitor Cloudflare Websites

posted by Tristan Self   [ updated ]

If you try to monitor a Cloudflare-fronted website with the Nagios XI check_http plugin you may get this:

[root@wtgc-nagios-01 libexec]# ./check_http -H www.mysite.com -S
HTTP WARNING: HTTP/1.1 403 Forbidden - 378 bytes in 0.029 second response time |time=0.028586s;;;0.000000 size=378B;;;0

After much fiddling I found that you need to formulate the check command as:

./check_http -H www.mysite.com -S --sni
HTTP OK: HTTP/1.1 200 OK - 2175 bytes in 0.556 second response time |time=0.555568s;;;0.000000 size=2175B;;;0

Then you'll get a 200 OK result like the above, which is what we want!

The key addition is the --sni option:

--sni
    Enable SSL/TLS hostname extension support (SNI)

SNI is Server Name Indication (you can read about it at https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/). Put simply, it is a TLS extension that lets the client indicate which hostname it is connecting to, so a server hosting many sites behind one IP address (as Cloudflare does) can present the correct SSL certificate rather than a mismatched one.
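To use this from Nagios rather than the command line, you can bake the flag into a command definition. Here is a minimal sketch, assuming the standard $USER1$ plugin path; the command and service names are hypothetical, so adjust them to your install:

```
# Hypothetical command and service definitions; adjust names, templates
# and the host to your own Nagios configuration.
define command {
    command_name    check_https_sni
    command_line    $USER1$/check_http -H $ARG1$ -S --sni
}

define service {
    use                     generic-service
    host_name               wtgc-nagios-01
    service_description     HTTPS www.mysite.com (SNI)
    check_command           check_https_sni!www.mysite.com
}
```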

MicroK8s (Kubernetes) - Raspberry Pi Ubuntu Linux Basic Setup Guide - Part 4 (Image Repositories)

posted 6 Aug 2020, 14:13 by Tristan Self

https://microk8s.io/docs/registry-images


So, images. A container is based on an image. You can pull images in from the outside world, e.g. from a public or private registry, or use MicroK8s' built-in registry.


For this, copy the "hello-python" directory we created in Part 2 to one called "hello-python3"; you can delete the .tar file in there for now, it's not needed.


Push Image into Kubernetes Image Repository


In the earlier steps we injected the image straight into the Kubernetes image cache, bypassing the built-in registry.


Now we'll do the same but this time working with the Kubernetes registry, so first let's enable it if not already done:


microk8s enable registry

Right, so the registry is now available at localhost:32000; when we upload an image we need to tag it accordingly, as localhost:32000/your-image-name.


So let's build the image we used before, but this time tag it with the registry name:


docker build . -t localhost:32000/hello-python:registry

Running the build:


root@k8s-master:/home/ubuntu/application/hello-python/app# docker build . -t localhost:32000/hello-python:registry
Sending build context to Docker daemon  99.71MB
(output continues…)


Wait for the build to complete and the image to appear in the local Docker image library.


docker images


root@k8s-master:/home/ubuntu/application/hello-python/app# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
localhost:32000/hello-python   registry            03e7eaf7b301        13 seconds ago      1.77GB
hello-python                   local               a5aba8ae8f7d        38 hours ago        874MB
python                         3.7                 42cd7c61ce0e        2 days ago          863MB


Now we want to push this into the Kubernetes registry:


docker push localhost:32000/hello-python


root@k8s-master:/home/ubuntu/application/hello-python/app# docker push localhost:32000/hello-python
The push refers to repository [localhost:32000/hello-python]
32963f3bbf70: Pushing [==================>                                ]  3.899MB/10.74MB
0059fe46108e: Pushing [>                                                  ]  9.997MB/897.8MB
8c51df724705: Pushed
fc5072a676ca: Pushing [==========================>                        ]  3.773MB/7.235MB
bfc730b5d6a1: Pushed
c2047c5c6a3d: Pushing [===>                                               ]  6.636MB/86.97MB
ad87fe9fbf86: Pushing [===============>                                   ]  5.625MB/18.08MB
de08a49275ea: Waiting
e6eb43d220d2: Waiting
a76bedae40bf: Waiting
9184b9a70c9e: Waiting
9276caf83dc1: Waiting


At this point the image is in the Kubernetes image registry and ready to be used in your application deployments.


Deploy Container using Registry Image


So let's create a cheeky application. Create a deployment.yaml and service.yaml as follows:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-python3
spec:
  selector:
    matchLabels:
      app: hello-python3
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-python3
    spec:
      containers:
      - name: hello-python3
        image: localhost:32000/hello-python:registry
        ports:
        - containerPort: 5000


And a service.yaml:


apiVersion: v1
kind: Service
metadata:
  name: hello-python3-service
spec:
  selector:
    app: hello-python3
  ports:
  - port: 8080
    targetPort: 5000
  type: LoadBalancer



Then deploy the application and service with:


kubectl apply -f deployment.yaml

kubectl apply -f service.yaml


root@k8s-master:/home/ubuntu/application/hello-python3/app# kubectl get pods -o wide
NAME                                  READY   STATUS             RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
hello-python-6bfc96894d-qhv8h         1/1     Running            1          38h   10.1.54.21   k8s-master      <none>           <none>
hello-python-redux-7849b9b844-55sxb   1/1     Running            0          19h   10.1.54.40   k8s-master      <none>           <none>
hello-python3-5fdcb55ccd-dvpnx        0/1     ImagePullBackOff   0          47s   10.1.49.2    k8s-worker-02   <none>           <none>


Oh look, we have an error, excellent. So let's look at why this happened.


So far we've deployed the image registry on the master node, but the worker nodes have no configuration to reach it. (If we had built and pushed the image back when the cluster only had one node, the only place the pod could have run would have been the master node, which can reach the registry, so we would never have hit this problem.) The "ImagePullBackOff" status basically means the pod has been scheduled onto a node which is unable to download the image; to fix it we need to complete the next section.

If we look at the log for the pod: "hello-python3-5fdcb55ccd-dvpnx" we see:


root@k8s-master:/home/ubuntu/application/hello-python3/app# kubectl logs hello-python3-7b6df7676b-sp2mf
Error from server (BadRequest): container "hello-python3" in pod "hello-python3-7b6df7676b-sp2mf" is waiting to start: image can't be pulled


So we make a change to the deployment.yaml, pointing the image at the master node's IP address rather than localhost:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-python3
spec:
  selector:
    matchLabels:
      app: hello-python3
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-python3
    spec:
      containers:
      - name: hello-python3
        image: 192.168.1.164:32000/hello-python:registry
        ports:
        - containerPort: 5000


And then deploy again with:

kubectl apply -f deployment.yaml


We're still having problems.


If we run this on the node where the pod is trying to start:


tail -f /var/log/syslog  | grep hello

We see:


Aug  2 14:48:49 k8s-worker-02 microk8s.daemon-kubelet[1324]: E0802 14:48:49.023084    1324 pod_workers.go:191] Error syncing pod 9490b872-87cd-48c9-90b6-952c7dc7b452 ("hello-python3-7b6df7676b-vt4v5_default(9490b872-87cd-48c9-90b6-952c7dc7b452)"), skipping: failed to "StartContainer" for "hello-python3" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"192.168.1.164:32000/hello-python:registry\": failed to resolve reference \"192.168.1.164:32000/hello-python:registry\": failed to do request: Head \"https://192.168.1.164:32000/v2/hello-python/manifests/registry\": http: server gave HTTP response to HTTPS client"

So this exact error is described here: https://microk8s.io/docs/registry-private under the section “Configuring MicroK8s”. We need to make some changes to the configuration of the MicroK8s nodes.


Allowing Insecure Registry


Edit the file /var/snap/microk8s/current/args/containerd-template.toml on ONLY the worker nodes, so in our case k8s-worker-01 and k8s-worker-02. Right at the bottom of the file, look for the following section:


[plugins] -> [plugins."io.containerd.grpc.v1.cri".registry] -> [plugins."io.containerd.grpc.v1.cri".registry.mirrors]:


Create a new section underneath it matching the IP address of the master node where the image registry lives, for example the below; in my case the k8s-master node is on 192.168.1.164.


[plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.1.164:32000"]
  endpoint = ["http://192.168.1.164:32000"]
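For context, after the edit the registry section at the bottom of the file should look something like the sketch below. This is based on the stock MicroK8s containerd-template.toml at the time; the existing docker.io and localhost:32000 entries may differ between MicroK8s versions, so treat them as whatever is already in your file:

```toml
# Sketch only: the docker.io and localhost:32000 mirrors come from the
# stock template; the 192.168.1.164:32000 mirror is the one we added.
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.1.164:32000"]
      endpoint = ["http://192.168.1.164:32000"]
```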


The MicroK8s documentation then talks about just restarting MicroK8s with a "microk8s stop" followed by a "microk8s start"; however I found this was not enough, so I rebooted the worker and master nodes. Once they had restarted, I attempted the deployment again:


kubectl apply -f deployment.yaml

Then a quick check on the progress:


root@k8s-master:/home/ubuntu/application/hello-python3/app# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
hello-python-6bfc96894d-qhv8h         1/1     Running   4          40h     10.1.54.63   k8s-master      <none>           <none>
hello-python-redux-7849b9b844-55sxb   1/1     Running   3          21h     10.1.54.59   k8s-master      <none>           <none>
hello-python3-7b6df7676b-qk2mq        1/1     Running   0          8m11s   10.1.81.4    k8s-worker-01   <none>           <none>


As you can see, the pod is now running on the k8s-worker-01 node. Let's have a quick look at the logs:


cat /var/log/syslog | grep hello-python3

Now this time we see all is well:


Aug  2 15:11:21 k8s-worker-01 microk8s.daemon-containerd[1303]: time="2020-08-02T15:11:21.254616306+01:00" level=info msg="CreateContainer within sandbox \"8f4012258d385007f7a6523b627ca1fa27f4ea21d01e3ac6bf3024b4ffe1fca0\" for container &ContainerMetadata{Name:hello-python3,Attempt:0,}"
Aug  2 15:11:25 k8s-worker-01 microk8s.daemon-containerd[1303]: time="2020-08-02T15:11:25.561390309+01:00" level=info msg="CreateContainer within sandbox \"8f4012258d385007f7a6523b627ca1fa27f4ea21d01e3ac6bf3024b4ffe1fca0\" for &ContainerMetadata{Name:hello-python3,Attempt:0,} returns container id \"bc93c29f081139990e98558d065e6b459fe26b34e3f17e49b7c7ecbc3768785e\""


Now let's create the service to access the application, so back on the master node we run:


kubectl apply -f service.yaml

And have a quick check:


root@k8s-master:/home/ubuntu/application/hello-python3/app# kubectl get services -o wide
NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE   SELECTOR
hello-python3-service        LoadBalancer   10.152.183.149   192.168.1.22   8080:31272/TCP   16s   app=hello-python3
kubernetes                   ClusterIP      10.152.183.1     <none>         443/TCP          40h   <none>


Yes, there it is: http://192.168.1.22:8080, so let's try to access it:


Good stuff, we’ve done it.


Doing it All Again! Well Why Not!?


Let's run this command:


microk8s kubectl create deployment hello-python4 --image=192.168.1.164:32000/hello-python:registry

That will just create a deployment of that image as a new container and then schedule it on one of the worker nodes:


root@k8s-master:/home/ubuntu/application/hello-python3/app# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
hello-python-6bfc96894d-qhv8h         1/1     Running   4          40h   10.1.54.63   k8s-master      <none>           <none>
hello-python-redux-7849b9b844-55sxb   1/1     Running   3          21h   10.1.54.59   k8s-master      <none>           <none>
hello-python3-7b6df7676b-qk2mq        1/1     Running   0          13m   10.1.81.4    k8s-worker-01   <none>           <none>
hello-python4-55bdb58666-sk8wg        1/1     Running   0          5s    10.1.81.5    k8s-worker-01   <none>           <none>


Cool, there it is: "hello-python4-55bdb58666-sk8wg" running as expected on the k8s-worker-01 node. That proves the registry is working.


Verifying the Registry Image is Used


Okay, so let's copy the existing directory:


cp -r hello-python hello-python5

Within it, if there is an image .tar file, delete it; we don't need it anymore.


Let’s edit the application to show we are getting a new image.


cd hello-python5/app

Edit "main.py" and add something to the line that says "Hello from Python!", e.g. "This is a test page using the hello-python5 application!"
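For reference, the edited main.py might look something like the sketch below. This assumes the shape of Jason Haley's hello-python Flask app cloned in Part 2, and the exact wording of the string is just an example:

```python
# Hypothetical sketch of the edited main.py, based on the shape of the
# hello-python Flask app from Part 2; your copy may differ slightly.
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    # The edited line, so we can tell the new image apart from the old one.
    return "Hello from Python! This is a test page using the hello-python5 application!"


# The original file ends by starting the dev server, e.g.:
# app.run(host="0.0.0.0", port=5000)
```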


Build the image, tagging it with the Kubernetes image registry name as it goes into the local Docker image library.


docker build . -t localhost:32000/hello-python5:registry


docker images

root@k8s-master:/home/ubuntu/application/hello-python5/app# docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
localhost:32000/hello-python5   registry            e699191159f2        6 seconds ago       874MB


Okay, so now get Docker to push it into the Kubernetes image registry.


docker push localhost:32000/hello-python5

Now edit the deployment.yaml and service.yaml, and where they say "hello-python3" change it to "hello-python5", including the image location in the deployment.yaml file, so we make sure we use our new image!


So now deploy the service and deployment with:


kubectl apply -f deployment.yaml

kubectl apply -f service.yaml


If we run a quick: “tail -f /var/log/syslog | grep hello-python5” we can see the container image being deployed and started.


Aug  2 15:44:32 k8s-worker-02 microk8s.daemon-containerd[1300]: time="2020-08-02T15:44:32.322560841+01:00" level=info msg="PullImage \"192.168.1.164:32000/hello-python5:registry\" returns image reference \"sha256:e699191159f229e8e6aeae167a32c37a830d7c238182ee3f8825e8d5000f0d3d\""

Aug  2 15:44:32 k8s-worker-02 microk8s.daemon-containerd[1300]: time="2020-08-02T15:44:32.328968944+01:00" level=info msg="CreateContainer within sandbox \"35c4ee9ebdd4b90744f276d066ba7c75abe1f856cfe6e2e795100c39411789f4\" for container &ContainerMetadata{Name:hello-python5,Attempt:0,}"

Aug  2 15:44:33 k8s-worker-02 microk8s.daemon-containerd[1300]: time="2020-08-02T15:44:33.337282018+01:00" level=info msg="CreateContainer within sandbox \"35c4ee9ebdd4b90744f276d066ba7c75abe1f856cfe6e2e795100c39411789f4\" for &ContainerMetadata{Name:hello-python5,Attempt:0,} returns container id \"59ff7a8553651222ed6f3e8a13801c2ae850f995d133fa5e79bb37da5b297db9\""


kubectl get services


Gives us:

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
hello-python5-service        LoadBalancer   10.152.183.154   192.168.1.23   8080:32278/TCP   23s


There it is, running at http://192.168.1.23:8080, so let's connect to it in a browser:


And you can see we really are running the expected image, because it's the hello-python5 one, as we wanted.


Using an External Public Registry (e.g. DockerHub)


https://microk8s.io/docs/registry-public


Create an account at Docker Hub.


You’ll also need to create a repository. On your master node run:


docker login

Enter your logon details.


Now let's create a new image and push it up to Docker Hub.


First make a copy of the existing application image definition.


cp -r hello-python5 hello-python6


cd hello-python6/app


Make a change to the image's main.py so you know it's something different when you run it. Build the image locally, then we'll push it up.


docker build . -t geekmungus/private:hello-python6

Where “geekmungus” is my DockerHub username, “private” is the repository name and “hello-python6” is the tag (in this case the name of the application).


root@k8s-master:/home/ubuntu/application/hello-python6/app# docker images

REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE

geekmungus/private              hello-python6       1dfc3b05a3ca        5 seconds ago       874MB


Okay, now we can push that image up to the repository.


docker push geekmungus/private

Cool, and there it is up in Docker Hub.



Let's delete the local copy to ensure we only ever use the pulled version when we now do a deployment.


docker images

root@k8s-master:/home/ubuntu/application/hello-python6/app# docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
geekmungus/private              hello-python6       1dfc3b05a3ca        4 minutes ago       874MB


docker image rm 1dfc3b05a3ca 

Now that image is gone and not available locally, so it should be pulled from Docker Hub when we refer to it.


Change the deployment.yaml and service.yaml in the hello-python6/app folder, so that everywhere it said "hello-python5" it now says "hello-python6".


Then ensure you put the image name in as: geekmungus/private:hello-python6


kubectl apply -f deployment.yaml


kubectl apply -f service.yaml


Watching the logs I got this error:


Aug  2 20:59:45 k8s-worker-01 microk8s.daemon-containerd[1303]: time="2020-08-02T20:59:45.554847607+01:00" level=info msg="PullImage \"geekmungus/private:hello-python6\""

Aug  2 20:59:47 k8s-worker-01 microk8s.daemon-containerd[1303]: time="2020-08-02T20:59:47.366927939+01:00" level=error msg="PullImage \"geekmungus/private:hello-python6\" failed" error="failed to pull and unpack image \"docker.io/geekmungus/private:hello-python6\": failed to resolve reference \"docker.io/geekmungus/private:hello-python \": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed"


https://stackoverflow.com/questions/56642311/microk8s-cannot-pull-from-private-registry


https://github.com/ubuntu/microk8s/issues/512


So run the following to create a secret to be used when you connect to the external public registry.


kubectl create secret docker-registry regcred --docker-username=<yourdockerhubusername> --docker-password=<password> --docker-email=<youremailaddress> --docker-server=https://index.docker.io/v1/

And then we see:

root@k8s-master:/home/ubuntu/application/hello-python6/app# kubectl create secret docker-registry regcred --docker-username=<yourdockerhubusername> --docker-password=<password> --docker-email=<youremailaddress> --docker-server=https://index.docker.io/v1/
secret/regcred created


We now have a secret called "regcred" we can use in our deployments.


Now, within the deployment.yaml, you need to refer to the secret, so your deployment.yaml will end up looking like this:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-python6
spec:
  selector:
    matchLabels:
      app: hello-python6
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-python6
    spec:
      containers:
      - name: hello-python6
        image: geekmungus/private:hello-python6
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: regcred


Now redeploy with:

kubectl apply -f deployment.yaml

It might take a little while because it will be downloading the image from Docker Hub, but then:


root@k8s-master:/home/ubuntu/application/hello-python6/app# kubectl get pods -o wide
NAME                                  READY   STATUS              RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
hello-python6-7ccb5c4bf5-4z9sf        0/1     ContainerCreating   0          8s      <none>       k8s-worker-01   <none>           <none>

root@k8s-master:/home/ubuntu/application/hello-python6/app# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
hello-python6-7ccb5c4bf5-4z9sf        1/1     Running   0          67s     10.1.81.11   k8s-worker-01   <none>           <none>


Finally deploy the service:


kubectl apply -f service.yaml

root@k8s-master:/home/ubuntu/application/hello-python6/app# kubectl get services
NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
hello-python6-service        LoadBalancer   10.152.183.229   192.168.1.24   8080:32475/TCP   6s
kubernetes                   ClusterIP      10.152.183.1     <none>         443/TCP          46h



And we are done! We can see the application running on that port in our browser and it is the image we expected to see.


MicroK8s (Kubernetes) - Raspberry Pi Ubuntu Linux Basic Setup Guide - Part 3 (Further Tasks)

posted 6 Aug 2020, 13:56 by Tristan Self

Well, Part 2 was very long, so let's have a shorter one and cover how you can adjust your application on the fly and add the other worker nodes to the cluster.

Updating an Application or Service on the Fly

Kubernetes is a system whereby you declare what you want the "world" to look like; if you make a change to the file, you are saying "I want the world to look like this now", and Kubernetes will adjust the "world" to match.

Let's change the service file so we're advertising the service on port 8080 rather than 5000. Change the service.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: hello-python-service
spec:
  selector:
    app: hello-python
  ports:
  - port: 8080
    targetPort: 5000
  type: LoadBalancer

Then apply the changes.

microk8s kubectl apply -f service.yaml

And run a:

microk8s kubectl get services

And as you can see, it's now on port 8080.

root@k8s-master:/home/ubuntu/application/hello-python/app# kubectl get services
NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
hello-python-service         LoadBalancer   10.152.183.34    192.168.1.20   8080:30740/TCP   37h
kubernetes                   ClusterIP      10.152.183.1     <none>         443/TCP          38h

Now go to http://192.168.1.20:8080, and you’ll find that is now where the site is accessible from.

Adding Worker Nodes to the Cluster

So currently we only have a single node within the cluster (the master node). We want to add some more nodes to act as worker nodes; we'll call these k8s-worker-01 and k8s-worker-02. Once you have them built and on the network, we can continue with adding them into the MicroK8s cluster.

First we need to install microk8s on each of the two new worker nodes.
 
snap install microk8s --classic
 
Once installed on all the nodes, you now need to run this command on the master node (k8s-master).
 
microk8s.add-node
 
You'll see a join token on the screen; make a note of it, as you'll need it for the worker nodes. So let's log on to each worker node and run:
 
microk8s.join <master_ip>:<port>/<token>
 
You may also need to enable your firewall rules and exceptions on the Master and all the Worker nodes to allow communication to work properly.
 
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed

Now let's take a look at our new 3-node cluster with:

microk8s kubectl get nodes

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
k8s-master      Ready      <none>   38h   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-01   NotReady   <none>   66s   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-02   NotReady   <none>   7s    v1.18.6-1+b4f4cb0b7fe3c1

After a few minutes, all being well you should see:

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master      Ready    <none>   38h   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-01   Ready    <none>   11m   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-02   Ready    <none>   10m   v1.18.6-1+b4f4cb0b7fe3c1

Our 3-node cluster is now ready for action. Now when we deploy applications we will find the pods spreading out across all the available nodes.

For now go and redeploy the application from Part 2, and see what happens when you run the below:

microk8s kubectl get pods -o wide

What do you notice about the column reporting the host node for the pod?

MicroK8s (Kubernetes) - Raspberry Pi Ubuntu Linux Basic Setup Guide - Part 2 (Build Your Own Image and Deploy It)

posted 6 Aug 2020, 13:39 by Tristan Self   [ updated 6 Aug 2020, 13:48 ]

Log onto your master node via SSH.

We're going to build an image and then deploy it. As a developer you'd probably be developing on your own machine, possibly running Docker locally to test how your containers work, before pushing the code to a repository from where it can be applied to a production Kubernetes deployment for consumption by users.

So we're going to install Docker, and Python 3 to allow us to quickly create an image, test it works, then deploy to our MicroK8s cluster.

WARNING! The Raspberry Pi uses an ARM-based processor architecture, therefore you can only run images that are compiled for ARM; you can't run x86/x64 architecture images.

Install Docker and Python

apt install docker.io

apt install python3

apt install python3-pip

Get a Sample Application

For this example we're going to use Jason Haley's hello world Python Flask application, because it's quite a neat little application for showing how requirements can be used and how you can build an application with some dependencies.

Create a directory for it first under the "ubuntu" home directory.

mkdir ~/application

cd application

Let's clone Jason's Git Repository into the directory:

git clone https://github.com/JasonHaley/hello-python.git

cd hello-python/app

So now let's install the requirements for Python as per what is in the requirements.txt file from the Git Repo.

pip3 install -r requirements.txt

Now run the application:

python3 main.py

You can either run the above as a background task by adding & to the end, or open another SSH session to your master node. Then run the following; if you get "Hello from Python!" back, the application is working!

ubuntu@k8s-master:~$ curl http://127.0.0.1:5000

Hello from Python!ubuntu@k8s-master:~$

So that looks to be working!

Create the Docker File

Okay, now we know the application works, let's create a Dockerfile to start to build our image.

cd hello-python/app

Create a file called "Dockerfile", and put this in it:

FROM python:3.7

RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN pip install -r requirements.txt

EXPOSE 5000
CMD ["python", "/app/main.py"]

Now create the image:

docker build -f Dockerfile -t hello-python:local .

Wait for it to build, once it has been built, run the following to list the Docker images:

docker image ls

As you can see there it is:

root@k8s-master:/home/ubuntu/apps/hello-python/app# docker image ls
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
hello-python          local             b3d4b07093ba        5 seconds ago       874MB

Right, a quick explanation of the above: we created a Docker image that includes the Python application wrapped up in Flask (so it's essentially a web site), and built it with the tag "hello-python:local" (more about tags in a later guide). Also notice that the Dockerfile exposes port 5000; that means when the image is deployed in Kubernetes it will present port 5000 within Kubernetes, and we can then decide what port we want to use for the outside world when presenting the application. Now run the image locally:

docker run -p 5001:5000 hello-python:local

Now its running you should see something like:

# docker run -p 5001:5000 hello-python
 * Serving Flask app "main" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

Okay so now from another terminal run:

curl http://127.0.0.1:5001

And we see:

ubuntu@k8s-master:~$ curl http://127.0.0.1:5001
Hello from Python!

So what have we done? Well, we've run the Docker image locally with host port 5001 mapped to the application container's port 5000. Now we're ready to push this image into Kubernetes.

Push Docker Image into Kubernetes Repository (Local Image Repository Method)

The image we created is known to Docker. However, Kubernetes is not aware of the newly built image. This is because your local Docker daemon is not part of the MicroK8s Kubernetes cluster. We can export the built image from the local Docker daemon and “inject” it into the MicroK8s image cache.

root@k8s-master:/home/ubuntu/apps/hello-python# docker image ls
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
hello-python          latest              b3d4b07093ba        About an hour ago   874MB

(You don't need to include the ":local", it will always take the latest otherwise)

So first let's export the docker image out:

# docker save hello-python > hello-python.tar

And inject it directly into the image cache:

root@k8s-master:/home/ubuntu/apps/hello-python/app# microk8s ctr image import hello-python.tar
unpacking docker.io/library/hello-python:local (sha256:d84775f8b2406071344ceeb6a3007705dab7f7dae4b12727d26708902d007ab7)...done

Now check the image cache with the following and you should see it there ready for use:

microk8s ctr images ls

Deploying the Application into Kubernetes

To run it on Kubernetes, create a file called deployment.yaml in the app directory that you were in earlier and put in these contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-python
spec:
  selector:
    matchLabels:
      app: hello-python
  replicas: 4
  template:
    metadata:
      labels:
        app: hello-python
    spec:
      containers:
      - name: hello-python
        image: hello-python:local
        imagePullPolicy: Never
        ports:
        - containerPort: 5000

Then create a file called service.yaml and put the following contents in:

apiVersion: v1
kind: Service
metadata:
  name: hello-python-service
spec:
  selector:
    app: hello-python
  ports:
  - port: 5000
  type: LoadBalancer

Here's another example service.yaml file for you to experiment with; see if you can work out what difference it makes!

apiVersion: v1
kind: Service
metadata:
  name: hello-python-service
spec:
  selector:
    app: hello-python
  ports:
  - port: 8080
    targetPort: 5000
  type: LoadBalancer

You can also create the service from the command line; in fact you can do the same thing with the deployment of the image if you want. You don't need these steps if you've created the yaml files; they are shown here just for reference:

microk8s kubectl expose deploy hello-python --port 8080 --target-port 5000 --type LoadBalancer
microk8s kubectl expose deployment hello-python --type=LoadBalancer --name hello-python-service

So now let's deploy our application to Kubernetes with:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

Okay, now that it is deployed, check with:

microk8s kubectl get pods

And after a few minutes we see our 4 replica pods running!

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hello-python-6bfc96894d-bt7rg   1/1     Running   0          11s
hello-python-6bfc96894d-jx7f9   1/1     Running   0          11s
hello-python-6bfc96894d-rqmjm   1/1     Running   0          11s
hello-python-6bfc96894d-zcxhg   1/1     Running   0          11s

Let's check out the service too so run:

microk8s kubectl get services

And there it is: 

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get services
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
hello-python-service   LoadBalancer   10.152.183.204   192.168.1.20   5000:30212/TCP   26s
kubernetes             ClusterIP      10.152.183.1     <none>         443/TCP          25m

What you can see is that our image/application is running and is exposed on IP address: 192.168.1.20 on port 5000.

You should now be able to access it at http://192.168.1.20:5000 from your web browser (i.e. from your workstation, not from the Raspberry Pi itself!)
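If you'd rather script that URL than read it off the screen, the EXTERNAL-IP and port can be extracted from the service listing. A sketch, using the sample line copied from the output above so it runs standalone; on the cluster you'd pipe `microk8s kubectl get services --no-headers` in instead:

```shell
# Sample service line from the "get services" output above.
line='hello-python-service   LoadBalancer   10.152.183.204   192.168.1.20   5000:30212/TCP   26s'

# Field 4 is EXTERNAL-IP; field 5 is "port:nodePort/proto", so split on ":".
echo "$line" | awk '{ split($5, p, ":"); print "http://" $4 ":" p[1] }'
```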

Remove the Application (Deployment and Service)

Let's clean up what we've deployed, so run the following:

microk8s kubectl delete -f deployment.yaml
microk8s kubectl delete -f service.yaml


MicroK8s (Kubernetes) - Raspberry Pi Ubuntu Linux Basic Setup Guide - Part 1 (Getting Started)

posted 6 Aug 2020, 13:11 by Tristan Self

Kubernetes is a platform for managing containerised workloads and services, akin to a declarative infrastructure-as-code platform. It allows developers to deploy containerised applications quickly and easily, and in such a way that infrastructure system administrators can keep the underlying infrastructure as adaptable and flexible as the applications require.

To avoid repeating what is already there read the What is Kubernetes? article on the Kubernetes website for more information.

Now, MicroK8s is a small, easy-to-deploy version of Kubernetes, and in this guide we'll be installing it onto Raspberry Pi computers. You can install it onto a single Raspberry Pi, but for this example we'll be installing onto three Raspberry Pi 4s with 2GB RAM each (4GB is recommended but not essential).

Install Ubuntu 20.04 onto Raspberry Pi

 

Firstly download the x64 version of the Ubuntu 20.04 Raspberry Pi image. Flash the image to a memory card of at least 16GB (ideally 32GB); a tool like Rufus is ideal for this.

 

Set Hostname

 

Once the device has booted, determine its IP address, normally the easiest way is to find this from your DHCP server.

 

SSH to the device (username: ubuntu, password: ubuntu); you'll be prompted to change the password and then be disconnected. Reconnect via SSH using the new password you just set.

 

Edit the hostname file:

 

vi /etc/hostname

 

Set the hostname to one of the following:

  • k8s-master

  • k8s-worker-01

  • k8s-worker-02

  • k8s-worker-03

Edit the hosts file:

 

vi /etc/hosts

 

Replace the "127.0.0.1 localhost" line with the new hostname, so "127.0.0.1 k8s-master localhost" for example.

 

Reboot the server.

 

Set Timezone and NTP

sudo dpkg-reconfigure tzdata

 

sudo timedatectl

 

timedatectl show-timesync

 

https://www.linuxuprising.com/2019/07/how-to-set-timezone-and-enable-network.html

 

Perform Updates

apt update

 

apt upgrade

 

reboot

 

Preparation for Microk8s


We need to configure the memory settings for Kubernetes to operate correctly; note this needs to be done on all the nodes that are going to run MicroK8s.

vi /boot/firmware/cmdline.txt

 

Then at the end of the line add: 


cgroup_enable=memory cgroup_memory=1
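If you're scripting the node preparation, the edit can be made idempotent so re-running it doesn't append the flags twice. A sketch, shown against a sample cmdline string (the kernel arguments here are illustrative, not your exact file; on the Pi you'd read and write /boot/firmware/cmdline.txt):

```shell
# Illustrative cmdline contents; substitute the real file's single line.
line='console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 rootwait'

# Only append the cgroup flags if they aren't already present.
case "$line" in
  *cgroup_enable=memory*) : ;;   # already there, do nothing
  *) line="$line cgroup_enable=memory cgroup_memory=1" ;;
esac
echo "$line"
```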

 

Reboot and repeat for each of your nodes if you're setting up a cluster for MicroK8s.

 

Install MicroK8s

 

Run the following on all nodes to install MicroK8s:


snap install microk8s --classic

 

Once installed on all the nodes, run on the node you want to be master:

 

microk8s.add-node

 

You'll see a join command with a token on the screen; make a note of this, as you'll need to run it on each of the worker nodes:


microk8s.join <master_ip>:<port>/<token>

 

You may also need to enable your firewall to allow communication:

 

sudo ufw allow in on cni0 && sudo ufw allow out on cni0

sudo ufw default allow routed

 

Enable Kubernetes Add-Ons


We want to enable a load of add-ons, you can see which are enabled with:


microk8s status


Let's turn some things on. The main ones to explain are "storage", which gives you persistent volumes; "metallb", which provides a load balancer for accessing your pods' services (e.g. HTTPS); and the "registry", for the storage and management of images.


microk8s enable dns 

microk8s enable storage 

microk8s enable metallb (you'll be prompted for a range of IP addresses here; pick one on the same subnet as the Kubernetes nodes)

microk8s enable ingress 

microk8s enable dashboard

microk8s enable registry

 

You may need to wait a while for all the pods to be created. You can check on progress with:


microk8s kubectl get pods --all-namespaces -o wide

 

When all are done then you are ready to continue to the next part.


Raspberry Pi Quick VPN Server

posted 25 Jul 2020, 01:22 by Tristan Self   [ updated 25 Jul 2020, 01:59 ]

I've always needed to be able to connect into my home network remotely for one reason or another, so I wanted a quick, secure and cheap VPN solution to do the trick. If you're only going to have a handful of users connected at one time, something quiet and low power like a Raspberry Pi is ideal for the job, especially as you'll be leaving it running permanently.

I've used a Raspberry Pi 3 in this case and installed Ubuntu Linux 20.04 on it; other than that, that's all you need to get started.


Much of what is shown below is based on the hwdsl2 build that can be found here: https://github.com/hwdsl2/setup-ipsec-vpn, great work. I'm summarising the below as a record of my own steps to install, but making them public so others can see how to do it, along with some additional considerations on log monitoring.

Generate the Password and Keys

The VPN server will use IPsec and L2TP, so we need two secrets. The first, a 10-character one, will be the password for the VPN credentials; the second, a 16-character one, will be the PSK (Pre-Shared Key).

So logon to the Raspberry Pi and run these two commands:

openssl rand -base64 10
openssl rand -base64 16
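If you're scripting this, you can capture both secrets straight into shell variables so they can be pasted (or expanded) into the install command in the next step:

```shell
# Generate the PSK from 16 random bytes and the password from 10,
# capturing each into a variable instead of copying off the screen.
VPN_IPSEC_PSK="$(openssl rand -base64 16)"
VPN_PASSWORD="$(openssl rand -base64 10)"

echo "PSK:      $VPN_IPSEC_PSK"
echo "Password: $VPN_PASSWORD"
```

Note that base64-encoding 16 random bytes actually gives a 24-character string, and 10 bytes gives 16 characters, so both come out longer (and stronger) than the byte counts suggest.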

Install the VPN Server

Next we run the following command on the Raspberry Pi to install the VPN, substituting in the two secrets we generated above: the 16-character one first as the PSK, then the 10-character one as the password. For VPN_USER put in whatever you feel is suitable.

wget https://git.io/vpnsetup -O vpnsetup.sh && VPN_IPSEC_PSK='<16_char_secret>' VPN_USER='<VPN_Username>' VPN_PASSWORD='<10_char_secret>' sudo sh vpnsetup.sh

Once installed, you'll be presented with the following:

IPsec VPN server is now ready for use!

Connect to your new VPN with these details:

Server IP: <Your IP Address>
IPsec PSK: <16_char_secret>
Username: <VPN_Username>
Password: <10_char_secret>

Write these down. You'll need them to connect!

Important notes:   https://git.io/vpnnotes
Setup VPN clients: https://git.io/vpnclients
IKEv2 guide:       https://git.io/ikev2

Configure Your Firewall

The configuration of your firewall to allow the traffic in from the outside world could be a complete article in itself. Essentially you need to forward the IPsec/IKE traffic through to your Raspberry Pi's internal IP address ("port forwarding" is typically what it is called): UDP port 500 (IKE) and UDP port 4500 (IPsec NAT traversal).

Connecting Your Device to VPN

So let's say you want to use an Android device to connect to your VPN, you'd do this on your phone (some steps may be slightly different), but the general idea is the same.

1. Launch the Settings application.
2. Tap "Network & internet". Or, if using Android 7 or earlier, tap More... in the Wireless & networks section.
3. Tap VPN.
4. Tap Add VPN Profile or the + icon at top-right of screen.
5. Enter anything you like in the Name field.
6. Select L2TP/IPSec PSK in the Type drop-down menu.
7. Enter Your VPN Server IP in the Server address field.
8. Leave the L2TP secret field blank.
9. Leave the IPSec identifier field blank.
10. Enter Your VPN IPsec PSK in the IPSec pre-shared key field, i.e. the <16_char_secret>
11. Tap Save.
12. Tap the new VPN connection.
13. Enter Your VPN Username in the Username field, i.e. the <VPN_Username>
14. Enter Your VPN Password in the Password field, i.e. the <10_char_secret>
15. Check the Save account information checkbox.
16. Tap Connect.

Assuming you've got connected up, you can verify it is working correctly by trying to contact something on your home network which you would otherwise not be able to reach if you were not connected to the VPN, also if you try to access the Internet from your Android device, you should find that it shows your home IP address via the VPN, rather than some mobile Internet IP of your service provider.

Log Monitoring

You can find the VPN logging in the normal places, on Ubuntu Libreswan logs into /var/log/auth.log, and xl2tpd logs into /var/log/syslog.

If you want to log the IPSec and L2TP usernames, add the line "debug" to the /etc/ppp/options.xl2tpd file, and run service xl2tpd restart, all the future connections will show with debug logging so you can see the individual username in there too.

Now, I know that I'm the only user of my home network VPN, so it would be good to know when someone is trying to connect or has connected; some sort of quick and dirty monitoring of the logs will be handy.

So something like the below will pull out the time, date and IP address of connections being made successful or otherwise:

grep l2tp-psk /var/log/auth.log | grep "responding to Main Mode from" | awk '{print $1,$2,$3,$7}'

This is useful for retrospectively looking at the connections attempted, so you know if something was trying to connect; hopefully it should just be you!
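To see which awk fields that pipeline is pulling out, here's the same extraction run against a single illustrative sample line. The exact layout depends on your Libreswan version, so treat the field positions as an assumption to verify against your own auth.log:

```shell
# One illustrative auth.log line (hypothetical, for demonstration only).
sample='Jul 25 09:01:02 raspberrypi pluto[1234]: "l2tp-psk"[1] 203.0.113.7 #1: responding to Main Mode from unknown 203.0.113.7:500'

# $1-$3 are the timestamp, $7 is the peer IP in this layout.
echo "$sample" | awk '{print $1, $2, $3, $7}'
```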

Log Monitoring (SSH and/or VPN)

Let's make it automatic, so now if you create a file called say: 52-sshlogging.conf and put it in /etc/rsyslog.d/ with the following contents:

$ModLoad ommail
$ActionMailSMTPServer yourmailserver.domain.com
$ActionMailFrom rsyslog@whatever.com
$ActionMailTo you@youremail.com
$template mailSubject,"Login Alert on %hostname%"
$template mailBody,"\n\n%msg%"
$ActionMailSubject mailSubject
$ActionExecOnlyOnceEveryInterval 1
# the if ... then ... mailBody must be on one line!
if $msg contains 'session opened for user' then :ommail:;mailBody

Then bounce the rsyslog daemon with: systemctl restart rsyslog; that will send you an email whenever someone logs onto the Raspberry Pi with SSH. For the VPN logs you can add another file (I found trying to add it to the same file didn't seem to work!):

$ModLoad ommail
$ActionMailSMTPServer yourmailserver.domain.com
$ActionMailFrom rsyslog@whatever.com
$ActionMailTo you@youremail.com
$template mailSubject,"Login Alert on %hostname%"
$template mailBody,"\n\n%msg%"
$ActionMailSubject mailSubject
$ActionExecOnlyOnceEveryInterval 1
# the if ... then ... mailBody must be on one line!
if $msg contains 'responding to Main Mode from' then :ommail:;mailBody

Now restart rsyslog again, and now when you login via VPN or SSH, you'll get an email sent to you, so you'll know something is up!



Yum show the contents of a package

posted 8 Jan 2018, 01:53 by Tristan Self   [ updated 8 Jan 2018, 01:57 ]

Let's say you've identified a dependency that is needed on your machine, but you don't know which package will provide the required file. You can use "yum search <name> | grep <more granular name>" to narrow it down (or "yum provides '*/filename'" to search the repositories for the owning package directly), but to see inside a package before installing it you can run the below. In my example I was looking for what provided the "gettext.pm" file; the package "perl-gettext" looked like it might, so to be sure I ran:

# repoquery -l perl-gettext
/usr/lib64/perl5/vendor_perl/Locale
/usr/lib64/perl5/vendor_perl/Locale/gettext.pm
/usr/lib64/perl5/vendor_perl/auto/Locale
/usr/lib64/perl5/vendor_perl/auto/Locale/gettext
/usr/lib64/perl5/vendor_perl/auto/Locale/gettext/gettext.so
/usr/share/doc/perl-gettext-1.05
/usr/share/doc/perl-gettext-1.05/README
/usr/share/man/man3/Locale::gettext.3pm.gz

And as you can see it's listed, so this is the right package to install.

Install HP LaserJet 1020 USB Driver on Centos 7

posted 9 Dec 2017, 12:58 by Tristan Self

yum install hplip-gui

hp-plugin

hp-setup

Follow the wizard through and setup the USB printer.

https://www.centos.org/forums/viewtopic.php?t=62564

"Unknown Display" on Centos 7 (Desktop)

posted 2 Dec 2017, 02:13 by Tristan Self   [ updated 2 Dec 2017, 02:32 ]

In a continued bid to build a desktop Linux machine, I was working on getting the three monitors working. One of the monitors uses the onboard graphics; the other two are plugged into a PCI-E riser card. The issue was that the monitor type is not detected by the onboard graphics, so it reports the incorrect resolution back, and changing this was proving very difficult. I'm using GNOME on CentOS 7 desktop.

To fix I did the following to get the correct modeline I needed:

cvt 1440 900 60

Modeline "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync

All my monitors will be running at 1440x900_60.00.

Now I ran xrandr to get the list of all the monitor/adapter names. In my case these were VGA-1-1 (the unknown display, on the motherboard graphics), VGA-2 and DVI-I-1 (the latter two were on the riser PCI-E card).

Finally I created a file called /etc/X11/xorg.conf.d/10-monitor.conf and added the following contents:
Section "Monitor"
    Identifier "VGA-1-1"
    Modeline "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync
    Option "PreferredMode" "1440x900_60.00"
EndSection
Section "Monitor"
    Identifier "VGA-2"
    Modeline "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync
    Option "PreferredMode" "1440x900_60.00"
EndSection
Section "Monitor"
    Identifier "DVI-I-1"
    Modeline "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync
    Option "PreferredMode" "1440x900_60.00"
EndSection


Once this was done I rebooted, but the third monitor was still not using the correct resolution.

Using "Settings"->"Display" I set the problem third monitor (VGA-1-1) to 1440x900 which was now an available option and ensured the monitors were in the correct positions.

One more reboot and all was then in order.

"Unknown Display" on Ubuntu

posted 29 Nov 2017, 12:14 by Tristan Self   [ updated 2 Dec 2017, 01:33 ]

Whilst building a new home machine I came across an issue where the third screen in my three-screen setup was only seen as an "Unknown Display". My machine has an on-board graphics card and a 2 port PCI-E riser graphics card; the monitors connected to the riser were working correctly and the OS saw them as the LG monitors, but the third was showing as "Unknown Display". The problem this caused was that the screen's resolution should be 1440x900@60Hz, but the unknown display was only giving me the option of 1024x768@60 or lower.

So to fix I used xrandr to set the monitor to a resolution I know it supported as follows:

First run cvt to generate a mode, where the 1440 is the Width, 900 the Height and 60 the frequency.
# cvt 1440 900 60

This returned the following:
# 1440x900 59.89 Hz (CVT 1.30MA) hsync: 55.93 kHz; pclk: 106.50 MHz
Modeline "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync


The modeline is what we want, so copy it and use it to create a new mode based on these settings:

# xrandr --newmode "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync

Now that it is all set, you just need to run xrandr to find out the name of the graphics output you are looking to set to this new mode.

# xrandr
... output omitted.....
VGA-1-1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1024x768      60.00*
   800x600       60.32    56.25 
   848x480       60.00 
   640x480       59.94 
....output omitted....


In my case it was VGA-1-1, so now I have this information I can apply the new mode to the monitor as follows:

# xrandr --addmode VGA-1-1 1440x900_60.00

Now if you go into the "Settings" and "Display" and click on the unknown monitor you'll notice a new resolution available from the drop down. Select it and click on "Apply" and hey presto, your monitor should now be showing the correct resolution.

The only issue is that the next time you reboot, the changes will be gone. To resolve this create a file called .xprofile in your home directory and add the following contents all on a single line:
xrandr --newmode "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -hsync +vsync && xrandr --addmode VGA-1-1 1440x900_60.00

Then finally make it executable with:

# chmod +x .xprofile

https://askubuntu.com/questions/860735/unknown-display-in-ubuntu-16-04
https://askubuntu.com/questions/754231/how-do-i-save-my-new-resolution-setting-with-xrandr
https://wiki.archlinux.org/index.php/xrandr#Adding_undetected_resolutions
https://www.centos.org/forums/viewtopic.php?t=62575
