Saturday, April 10, 2021

Installing Ubuntu on an encrypted system

 Challenge:

  • I have an encrypted system
    • 2 disks encrypted with LUKS
  • I want to install a fresh Ubuntu distro

How to proceed:

  • First boot the system with an Ubuntu live CD -> Try Ubuntu
  • Open a terminal in this session
  • Become root
    sudo su -
  • List all available disks and find the ones you need
    lsblk --all
    NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    loop0                   7:0    0   9,1M  1 loop  /snap/kubectl/655
    ...
    loop15                  7:15   0         0 loop
    sda                     8:0    0 465,8G  0 disk
    ├─sda1                  8:1    0   512M  0 part  /boot/efi
    ├─sda2                  8:2    0   732M  0 part  /boot
    └─sda3                  8:3    0 464,6G  0 part
      └─sda3_crypt        253:0    0 464,6G  0 crypt
        ├─neon--vg-root   253:1    0 463,6G  0 lvm   /
        └─neon--vg-swap_1 253:2    0   980M  0 lvm   [SWAP]
    sdb                     8:16   0 238,5G  0 disk
    ├─sdb1                  8:17   0   512M  0 part
    ├─sdb2                  8:18   0   732M  0 part
    └─sdb3                  8:19   0 237,3G  0 part
  • In my case the relevant disks were sda and sdb
  • Next, find the partitions on these disks that are encrypted
    lsblk -f /dev/sdb
    NAME   FSTYPE      LABEL UUID                                 MOUNTPOINT
    sdb
    ├─sdb1 vfat              9BCC-475E
    ├─sdb2 ext4              ad2fcd47-3725-4a8a-8ea6-90943b5914d2
    └─sdb3 crypto_LUKS       a740ff78-58b7-4ccd-87a1-92ba8715edcf
  • Open the encrypted partition
    cryptsetup open /dev/sdb3 rootdisk
    • rootdisk is an arbitrary name here; it will become important at the end of the Ubuntu installation
  • After opening both of my encrypted disks I could start the Ubuntu installation as usual
  • Before rebooting into the fresh Ubuntu installation you have to click "Continue Testing" and return to the terminal
  • First get the UUIDs of the encrypted partitions and note them
    sudo blkid </dev/DEV_ROOTFS>
    sudo blkid </dev/DEV_HOME>
  • Then mount the Ubuntu OS
    sudo mount /dev/mapper/vgroot-lvroot /mnt
    sudo mount </dev/DEV_BOOT> /mnt/boot
    sudo mount /dev/mapper/vghome-lvhome /mnt/home
    sudo mount --bind /dev /mnt/dev
    sudo chroot /mnt
    mount -t proc proc /proc
    mount -t sysfs sys /sys
    mount -t devpts devpts /dev/pts
  • Create the file /etc/crypttab
    sudo nano /etc/crypttab
  • Add the following lines
    # <target name> <source device> <key file> <options>
    rootdisk UUID=<UUID_ROOTFS> none luks,discard
    homedisk UUID=<UUID_HOME> none luks,discard
    • IMPORTANT: rootdisk and homedisk are the names you used when opening the encrypted disks at the beginning of this description
  • After editing /etc/crypttab, execute the following command
    update-initramfs -k all -c
    • Check the output here: if you used the wrong target name for a disk, you will see it at this point
  • Leave the chroot and reboot the system. During the reboot you should now be asked for the passphrases of the encrypted disks. The whole sequence is summarized in the sketch below.
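
For reference, the whole procedure condenses into the shell sketch below. The device names (/dev/sda3, /dev/sdb3), the /boot partition and the LVM volume names are assumptions based on the lsblk output shown above; adapt them to your own lsblk/blkid output.

# Sketch only: adapt device names, LV names and UUIDs to your system
sudo su -

# Open both LUKS partitions with the names later referenced in /etc/crypttab
cryptsetup open /dev/sda3 rootdisk      # assumed root disk
cryptsetup open /dev/sdb3 homedisk      # assumed home disk

# ... run the normal Ubuntu installation, then click "Continue Testing" ...

# Note the UUIDs of the LUKS partitions (not the inner LVM volumes)
blkid /dev/sda3 /dev/sdb3

# Mount the freshly installed system and chroot into it
mount /dev/mapper/vgroot-lvroot /mnt        # LV name from the new install
mount /dev/sda2 /mnt/boot                   # assumed /boot partition
mount /dev/mapper/vghome-lvhome /mnt/home   # LV name from the new install
mount --bind /dev /mnt/dev
chroot /mnt
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devpts devpts /dev/pts

# /etc/crypttab: target names must match the ones used with "cryptsetup open"
cat >> /etc/crypttab << 'EOF'
rootdisk UUID=<UUID_ROOTFS> none luks,discard
homedisk UUID=<UUID_HOME> none luks,discard
EOF

# Rebuild the initramfs for all installed kernels and check the output
update-initramfs -k all -c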

Friday, April 9, 2021

Setting up Evolution on Ubuntu MATE 20.04

  • Start the wizard for setting up a new account in Evolution
  • Enter the e-mail address
  • For incoming mail:
    • Select port 993
    • TLS on a dedicated port
    • Select OAuth2 as the authentication method
  • For outgoing mail:
    • Select port 587
    • The checkbox "Server requires authentication" must be set
    • STARTTLS after connecting
    • OAuth2 as the authentication method
I have not yet managed to get an account that uses password authentication working.

Monday, January 25, 2021

OpenShift 4: Deploying a standalone Grafana without the Grafana Operator

The challenge is to deploy a standalone Grafana in an OpenShift 4.5 cluster that has an OpenShift login and accesses the internal Prometheus of the cluster monitoring. This makes it possible to extend the existing cluster monitoring with custom dashboards.

A further challenge I faced was that not only could I not use the Grafana Operator, I also had to use the images provided by Red Hat. That ruled out the Bitnami Helm chart ;-).

After a few attempts I switched to copying the Grafana component from the "openshift-monitoring" project. In doing so I stumbled over a few peculiarities, which I explain here:

  • The ServiceAccount "grafana" is used as the OAuth client
  • How do I generate a session secret for the oauth-proxy? -> answered below
  • And the Secret "grafana-tls" is a generated secret, which is created directly via the Service "grafana".
  • Oh, and the special ConfigMap "grafana-trusted-ca-bundle"
I then found part of my solutions in the article from Red Hat.

So how did I set up the project?

(All files are handed to the project with oc apply -f <file.yaml>)

oc new-project grafana -> creates a new project

ServiceAccount "grafana"

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.grafana: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"grafana"}}'
  name: grafana
  namespace: grafana

ConfigMap "grafana-trusted-ca-bundle"

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
  name: grafana-trusted-ca-bundle
  namespace: grafana

Service "grafana", der über die Annotaion auch das Secret "grafana-tls" erzeugt

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: grafana-tls
  labels:
    app: grafana
  name: grafana
spec:
  ports:
  - name: https
    port: 3000
    protocol: TCP
    targetPort: https 
  selector:
    app: grafana
  type: ClusterIP

Create the session secret for the oauth proxy
oc create secret generic grafana-proxy --from-literal=session_secret=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c43)

Datei "grafana.ini" erzeugen, aus der dann das Secret "grafana-config" erzeugt wird

[analytics]
check_for_updates = false
reporting_enabled = false
[auth]
disable_login_form = true
disable_signout_menu = true
[auth.basic]
enabled = false
[auth.proxy]
auto_sign_up = true
enabled = true
header_name = X-Forwarded-User
[paths]
data = /var/lib/grafana
logs = /var/lib/grafana/logs
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
[security]
admin_user = admin
admin_password = password
cookie_secure = true
[server]
http_addr = 127.0.0.1
http_port = 3001

oc create secret generic grafana-config --from-file=grafana.ini

And now the centerpiece: the actual Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana
    spec:
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      securityContext: {}
      serviceAccount: grafana
      containers:
        - resources:
            requests:
              cpu: 4m
              memory: 100Mi
          terminationMessagePath: /dev/termination-log
          name: grafana
          ports:
            - name: http
              containerPort: 3001
              protocol: TCP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/lib/grafana
            - name: grafana-log
              mountPath: /var/log/grafana
#            - name: grafana-datasources
#              mountPath: /etc/grafana/provisioning/datasources
#            - name: grafana-dashboards
#              mountPath: /etc/grafana/provisioning/dashboards
#            - name: grafana-dashboard-cluster-overview 
#              mountPath: /grafana-dashboard-definitions/0/cluster-overview
            - name: grafana-config
              mountPath: /etc/grafana
          terminationMessagePolicy: File
          image: registry.redhat.io/openshift4/ose-grafana 
          args:
            - '-config=/etc/grafana/grafana.ini'
        - resources:
            requests:
              cpu: 1m
              memory: 20Mi
          readinessProbe:
            httpGet:
              path: /oauth/healthz
              port: https
              scheme: HTTPS
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          name: grafana-proxy
          env:
            - name: HTTP_PROXY
            - name: HTTPS_PROXY
            - name: NO_PROXY
          ports:
            - name: https
              containerPort: 3000
              protocol: TCP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: secret-grafana-tls
              mountPath: /etc/tls/private
            - name: secret-grafana-proxy
              mountPath: /etc/proxy/secrets
            - name: grafana-trusted-ca-bundle
              readOnly: true
              mountPath: /etc/pki/ca-trust/extracted/pem/
          terminationMessagePolicy: File
          image: registry.redhat.io/openshift4/ose-oauth-proxy
          args:
            - '-provider=openshift'
            - '-https-address=:3000'
            - '-http-address='
            - '-email-domain=*'
            - '-upstream=http://localhost:3001'
            - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
            - >-
              -openshift-delegate-urls={"/": {"resource": "namespaces", "verb":
              "get"}}
            - '-tls-cert=/etc/tls/private/tls.crt'
            - '-tls-key=/etc/tls/private/tls.key'
            - >-
              -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
            - '-cookie-secret-file=/etc/proxy/secrets/session_secret'
            - '-openshift-service-account=grafana'
            - '-openshift-ca=/etc/pki/tls/cert.pem'
            - '-openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
            - '-skip-auth-regex=^/metrics'
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-log
          emptyDir: {}
#        - name: grafana-datasources
#          secret:
#            secretName: grafana-datasources
#            defaultMode: 420
#        - name: grafana-dashboards
#          configMap:
#            name: grafana-dashboards
#            defaultMode: 420
#        - name: grafana-dashboard-cluster-overview 
#          configMap:
#            name: grafana-dashboard-cluster-overview
#            defaultMode: 420
        - name: grafana-config
          secret:
            secretName: grafana-config
            defaultMode: 420
        - name: secret-grafana-tls
          secret:
            secretName: grafana-tls
            defaultMode: 420
        - name: secret-grafana-proxy
          secret:
            secretName: grafana-proxy
            defaultMode: 420
        - name: grafana-trusted-ca-bundle
          configMap:
            name: grafana-trusted-ca-bundle
            items:
              - key: ca-bundle.crt
                path: tls-ca-bundle.pem
            defaultMode: 420
            optional: true
      dnsPolicy: ClusterFirst
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
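
A quick sanity check that the rollout actually succeeded should be possible with the usual oc commands:

oc rollout status deployment/grafana -n grafana
oc get pods -n grafana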

After the Deployment has been rolled out successfully, we still have to build a Route

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: grafana
  namespace: grafana 
spec:
#  host: <host-name>
  port:
    targetPort: https
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: reencrypt
  to:
    kind: Service
    name: grafana
    weight: 100
  wildcardPolicy: None

Now the bare Grafana should be reachable (provided we have fed all routers, hosts files and the like correctly)
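
The hostname under which the Route exposes Grafana can be looked up like this, for example:

oc get route grafana -n grafana -o jsonpath='{.spec.host}'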

If that worked, we can add the dashboards and the datasource. Since for my use case I wanted to connect the already existing Prometheus, I ran the following command:

oc get secret grafana-datasources -o yaml -n openshift-monitoring > secret-grafana-datasources.yaml

The file secret-grafana-datasources.yaml still has to be adapted to the namespace "grafana", and all the unnecessary lines have to be removed. Afterwards hand it over with oc apply -f.
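As a rough sketch, assuming the exported secret keeps its single data key (keep whatever key and value your own export contains), the trimmed-down file can look like this:

apiVersion: v1
kind: Secret
metadata:
  name: grafana-datasources
  namespace: grafana
type: Opaque
data:
  prometheus.yaml: <base64 value copied from the export>   # key name as found in my export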
For a dashboard you have to put a dashboard in JSON format into a ConfigMap. In my case I named the ConfigMap "grafana-dashboard-cluster-overview". And to get the same parent structure as in the cluster monitoring, I also copied the ConfigMap grafana-dashboards from the "openshift-monitoring" project and created it in the namespace "grafana".
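Such a dashboard ConfigMap can be created directly from the JSON file, for example (cluster-overview.json is only an illustrative file name):

oc create configmap grafana-dashboard-cluster-overview --from-file=cluster-overview.json -n grafana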
Then we remove the comments from the deployment file, hand the changes over to OpenShift, et voilà... we have a Grafana that accesses the internal Prometheus and uses its data for custom dashboards.

Saturday, April 4, 2020

Disaster Recovery for my ProLiant DL160 G6 Ubuntu 18.04

Because my server is my playground, I was looking for a way to roll the installation back to its starting point. A really easy way is REAR (Relax-and-Recover): https://github.com/rear/rear

My first try with REAR started with the version shipped with Ubuntu 18.04, following the description at https://linuxwissen.blogspot.com/2018/12/sicheres-backup-mit-rear-relax-and.html

Installation: 

First I installed REAR on the ProLiant server

$ sudo apt-get install rear

Then I created the described configuration files

/etc/rear/site.conf
TIMESYNC=NTP # since all your computers use NTP for time synchronisation
### Attention! Specify the NFS path WITHOUT the : before the / !!!! ###
BACKUP_URL=nfs://192.168.178.20//srv/nfs/rear
BACKUP=NETFS
OUTPUT=ISO
USE_CFG2HTML=y
BACKUP_PROG=tar
BACKUP_PROG_COMPRESS_OPTIONS="--gzip"


/etc/rear/local.conf
###### The following line has to be adapted individually to the system in question ####
BACKUP_PROG_EXCLUDE=( '/dev/shm/*' '/home/[username]/.cache/*' '/root/.cache/*' '/tmp/*' '/var/cache/*' '/var/tmp/*' "$VAR_DIR/output/\*" )
LOGFILE="/var/log/rear/$HOSTNAME.log"
# Keep the ISO build in $TMPDIR/rear.XXXXXXXXXXXXXXX/rootfs/
KEEP_BUILD_DIR="true"
NETFS_KEEP_OLD_BACKUP_COPY=yes

/etc/rear/os.conf
OS_VENDOR=Ubuntu 
OS_VERSION=18.04
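
To verify that REAR picks up these settings, the effective configuration can be dumped as a quick check:

$ sudo rear dump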

Backup

I store the files created by REAR on an NFS mount on my laptop. For that I have to mount the NFS share on the server:

$ sudo mount -t nfs 192.xxx.xxx.xxx:/srv/nfs/rear /mnt

After that there are two ways to create the backup:

1. Create the rescue ISO and the backup in two steps
$ rear -v -D mkrescue
$ rear -v -D mkbackuponly

2. Create the rescue ISO and the backup in one step
$ rear mkbackup

After the backup is finished you can find the created files on the laptop under /srv/rear/<name_server>/. The important files are <name_server>.iso and backup.tar.gz

Restore

For the automatic restore of the backup through REAR it is very important to mount the backup location from the laptop at the same place. It is also necessary that an active connection exists between the device running the virtual console of the iLO and the ProLiant server itself.
In my case I run the virtual console in a VM on my laptop. For that I had to create a bridge connected to the VM, so that the VM can communicate actively with the server.

1. The first step after starting the virtual console of the iLO is to mount the ISO as virtual media.

2. In the next step connect the virtual media to the server. Click on the mounted media and then click "Connect". If the connection is established successfully, the entry contains the IP of the bridge in the VM.

3. Reboot the server
4. The server now boots into the start menu of REAR. Select the automated REAR recovery


5. REAR then does the rest
6. Press 3 to reboot after the successful recovery

Challenges with REAR version 2.3

My first try to recover the backup showed that the keyboard does not work. If you don't choose "Automatic Recover <name_server>" you can't do anything, because you can't make any input.
Running the automatic recover works, but I couldn't reboot via the REAR menu shown at the end of the recovery. I had to reboot the server via the iLO. That works for me.

After one recovery I decided to update REAR to the current version 2.5.
$ wget http://download.opensuse.org/repositories/Archiving:/Backup:/Rear:/Snapshot/xUbuntu_18.04/amd64/rear_2.5-0git.0.dff3f6a.unknown_amd64.deb
$ sudo dpkg -i rear_2.5-0git.0.dff3f6a.unknown_amd64.deb
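
A quick check of which version is now installed:

$ rear -V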

Create the same config files as described above.
Create a new complete backup together with the ISO file

After this I wanted to know whether it works and directly tried the recovery as described for version 2.3. The procedure is exactly the same, with the difference that the keyboard is working.

Sunday, January 26, 2020

OpenShift 4.3 nightly on my home server

I want to install an OpenShift cluster on my HP ProLiant DL160 G6 server at home. For me the biggest challenge was to organize all the required network settings.

Based on two great posts I got the cluster running and want to write down a description as a reminder for myself.

https://itnext.io/install-openshift-4-2-on-kvm-1117162333d0

https://medium.com/@zhimin.wen/preparing-openshift-4-1-environment-dc40ecb7a763

I'm installing an OpenShift 4.3 nightly build cluster with 3 master nodes and 2 worker nodes. The VMs are built with KVM on Ubuntu 18.04.
This description also works with OpenShift 4.2. I tested it many times ;-)

The following steps are necessary:

  • KVM Setup
  • Dnsmasq set up and common configuration
  • Configure Ubuntu to use local dnsmasq 
  • dnsmasq cluster specific configuration
  • Loadbalancer and Gobetween configuration
  • Webserver for installation files
  • Create VMs with CoreOS image
  • Installation of OpenShift
  • Traefik configuration to route the API

KVM-Setup

Install required packages

$sudo apt -y install qemu qemu-kvm libvirt-bin bridge-utils virt-manager libosinfo-bin
Make sure the libvirtd service is enabled and running

Install the uvtool to create Ubuntu-VMs in an easy way

$sudo apt-get install -y uvtool
$sudo uvt-simplestreams-libvirt --verbose sync release=bionic arch=amd64
   Adding: com.ubuntu.cloud:server:18.04:amd64 20200107
$ssh-keygen -b 4096 -t rsa -f ~/.ssh/id_rsa -N ""

Create a KVM-network disabling DNS and DHCP in this network

net-ocp.xml
<network>
  <name>ocp</name>
  <forward mode='nat'/>
  <bridge name='br-ocp' stp='on' delay='0'/>
  <dns enable="no"/> 
  <ip address='192.168.10.1' netmask='255.255.255.0'>
  </ip>
</network>
 
$virsh net-define net-ocp.xml
  Network ocp defined from net-ocp.xml
$virsh net-autostart ocp
  Network ocp marked as autostarted
$virsh net-start ocp
  Network ocp started
 
$sudo systemctl restart libvirt-bin
 
The bridge is created and configured:
$ifconfig br-ocp
br-ocp: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.10.1  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 52:54:00:dc:f2:c6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
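
A quick check that the network is defined and marked as autostarted:

$virsh net-list --all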

Dnsmasq setup and common configurations

If you haven't installed dnsmasq already, do that and enable the service
$sudo apt-get install -y dnsmasq

Create a file /etc/dnsmasq.d/common.conf
# Listen on lo and br-ocp only
bind-interfaces
interface=lo,br-ocp 
 
# DHCP
dhcp-option=option:router,192.168.10.1
dhcp-option=option:dns-server,192.168.10.1
dhcp-range=192.168.10.10,192.168.10.254,12h 
 
# forward, use original DNS server
server=10.0.xxx.xxx
server=10.0.xxx.xxx
 
address=/webserver.oc.io/192.168.10.20
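
Once dnsmasq has been restarted with this configuration, the entry above can be tested directly against the local dnsmasq (assuming dig from the dnsutils package is installed):

$dig @127.0.0.1 webserver.oc.io +short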
 

Configure Ubuntu to use local dnsmasq

Rename /etc/resolv.conf (some applications check this file first)
$ sudo mv /etc/resolv.conf /etc/resolv.conf.bac
 
Backup /etc/systemd/resolved.conf and change the DNS entry to localhost
$sudo cp /etc/systemd/resolved.conf /etc/systemd/resolved.conf.bac 
/etc/systemd/resolved.conf (new): 
[Resolve]
DNS=127.0.0.1
 
$sudo systemctl restart systemd-resolved.service
$sudo systemd-resolve --status
  Global
         DNS Servers: 127.0.0.1
          DNSSEC NTA: 10.in-addr.arpa
          ...
The KVM host now uses the local dnsmasq as its DNS server
 

DNS configuration for OpenShift Installation

Create a file /etc/dnsmasq.d/test-ocp4.conf
Inside this file, define the following entries for each node of the cluster:
  1. DHCP IP address by MAC address
  2. DNS A record
  3. DNS PTR record
My test-ocp4.conf looks like this:
# Configuration for OCP Cluster *.test-ocp4.oc.io

# Bootstrap
dhcp-host=52:54:00:76:34:04,192.168.10.40
address=/bootstrap.test-ocp4.oc.io/192.168.10.40
ptr-record=40.10.168.192.in-addr.arpa,bootstrap.test-ocp4.oc.io

# Master1
dhcp-host=52:54:00:28:4e:09,192.168.10.41
address=/master1.test-ocp4.oc.io/192.168.10.41
ptr-record=41.10.168.192.in-addr.arpa,master1.test-ocp4.oc.io

# Master2
dhcp-host=52:54:00:07:45:14,192.168.10.42
address=/master2.test-ocp4.oc.io/192.168.10.42
ptr-record=42.10.168.192.in-addr.arpa,master2.test-ocp4.oc.io

# Master3
dhcp-host=52:54:00:78:05:d9,192.168.10.43
address=/master3.test-ocp4.oc.io/192.168.10.43
ptr-record=43.10.168.192.in-addr.arpa,master3.test-ocp4.oc.io

# Worker1
dhcp-host=52:54:00:e4:67:14,192.168.10.51
address=/worker1.test-ocp4.oc.io/192.168.10.51
ptr-record=51.10.168.192.in-addr.arpa,worker1.test-ocp4.oc.io

# Worker2
dhcp-host=52:54:00:e5:c5:38,192.168.10.52
address=/worker2.test-ocp4.oc.io/192.168.10.52
ptr-record=52.10.168.192.in-addr.arpa,worker2.test-ocp4.oc.io

# LoadBalancer
dhcp-host=52:54:00:41:e5:45,192.168.10.49
address=/lb.test-ocp4.oc.io/192.168.10.49
ptr-record=49.10.168.192.in-addr.arpa,lb.test-ocp4.oc.io

# LB settings for the API
address=/api.test-ocp4.oc.io/192.168.10.49
address=/api-int.test-ocp4.oc.io/192.168.10.49
address=/.apps.test-ocp4.oc.io/192.168.10.49

# ETCD instance A and SRV records
address=/etcd-0.test-ocp4.oc.io/192.168.10.41
srv-host=_etcd-server-ssl._tcp.test-ocp4.oc.io,etcd-0.test-ocp4.oc.io,2380

address=/etcd-1.test-ocp4.oc.io/192.168.10.42
srv-host=_etcd-server-ssl._tcp.test-ocp4.oc.io,etcd-1.test-ocp4.oc.io,2380

address=/etcd-2.test-ocp4.oc.io/192.168.10.43
srv-host=_etcd-server-ssl._tcp.test-ocp4.oc.io,etcd-2.test-ocp4.oc.io,2380 

 
Before restarting dnsmasq we may need to clear the lease cache, to make sure the DHCP IP allocation is not cached
$sudo rm -rf /var/lib/misc/dnsmasq.leases
$sudo touch /var/lib/misc/dnsmasq.leases
 
$sudo systemctl restart dnsmasq.service
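
After the restart, the cluster records can be verified against the local dnsmasq, for example (again assuming dig is installed):

$dig @127.0.0.1 master1.test-ocp4.oc.io +short
$dig @127.0.0.1 -t SRV _etcd-server-ssl._tcp.test-ocp4.oc.io +short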
 

 Loadbalancer and Gobetween configuration

For the load balancer we install Gobetween in a minimal Ubuntu VM. So first create the VM
$uvt-kvm create lb release=bionic --memory 4096 --cpu 4 --disk 50 --bridge br-ocp --password password 
$virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     lb                             running

$virsh dumpxml lb | grep 'mac address' | cut -d\' -f 2
52:54:00:8e:1d:32

Change the MAC address in the /etc/dnsmasq.d/test-ocp4.conf file.
Clear the lease cache and restart dnsmasq.
Restart the VM to make sure it gets the assigned IP
$virsh destroy lb
Domain lb destroyed

$virsh start lb
Domain lb started

Install the gobetween software 
$mkdir gobetween
$cd gobetween
$curl -LO https://github.com/yyyar/gobetween/releases/download/0.7.0/gobetween_0.7.0_linux_amd64.tar.gz
$tar xzvf gobetween_0.7.0_linux_amd64.tar.gz
  AUTHORS
  CHANGELOG.md
  LICENSE
  README.md
  config/
  config/gobetween.toml
  gobetween
$sudo cp gobetween /usr/local/bin/

Create a systemd service (copy the file below to /etc/systemd/system/gobetween.service)
gobetween.service
[Unit]
Description=Gobetween - modern LB for cloud era
Documentation=https://github.com/yyyar/gobetween/wiki
After=network.target
 
[Service]
Type=simple
PIDFile=/run/gobetween.pid
#ExecStartPre=prestart some command
ExecStart=/usr/local/bin/gobetween -c /etc/gobetween/gobetween.toml
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
 
[Install]
WantedBy=multi-user.target
 
Create a configuration file for gobetween
/etc/gobetween/gobetween.toml
[servers]
[servers.api]
protocol = "tcp"
bind = "0.0.0.0:6443" 

[servers.api.discovery]
  kind = "static"
  static_list = [ 
"192.168.10.40:6443","192.168.10.41:6443","192.168.10.42:6443","192.168.10.43:6443" ] 

[servers.api.healthcheck]
  kind = "ping"
  fails = 1
  passes = 1
  interval = "2s"
  timeout="1s"
  ping_timeout_duration = "500ms" 

[servers.mcs]
protocol = "tcp"
bind = "0.0.0.0:22623" 

[servers.mcs.discovery]
  kind = "static"
  static_list = [ "192.168.10.40:22623","192.168.10.41:22623","192.168.10.42:22623","192.168.10.43:22623" ] 

[servers.mcs.healthcheck]
  kind = "ping"
  fails = 1
  passes = 1
  interval = "2s"
  timeout="1s"
  ping_timeout_duration = "500ms"

[servers.http]
protocol = "tcp"
bind = "0.0.0.0:80"

[servers.http.discovery]
  kind = "static"
  static_list = [ "192.168.10.44:80","192.168.10.45:80","192.168.10.46:80" ]

[servers.http.healthcheck]
  kind = "ping"
  fails = 1
  passes = 1
  interval = "2s"
  timeout="1s"
  ping_timeout_duration = "500ms" 

[servers.https]
protocol = "tcp"
bind = "0.0.0.0:443"

[servers.https.discovery]
  kind = "static"
  static_list = [ "192.168.10.44:443","192.168.10.45:443","192.168.10.46:443" ]

[servers.https.healthcheck]
  kind = "ping"
  fails = 1
  passes = 1
  interval = "2s"
  timeout="1s"
  ping_timeout_duration = "500ms" 

Start the gobetween service
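With the unit file above, something like the following should do it (standard systemd handling, my assumption):

$sudo systemctl daemon-reload
$sudo systemctl enable --now gobetween.service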
 

Webserver for installation files

Back on my server, I have to install a webserver that will host the files needed for the installation of the CoreOS VMs
$sudo apt-get install -y apache2

Check if the webserver is online
$ curl http://playground:80

There should be an output ;-) (playground is the name of my server)

Create VMs with CoreOS Image

To create the CoreOS VMs we first need the ISO image, plus the Ignition files and an initramfs image. Everything can be obtained from the Red Hat page with a Red Hat account.
There you get the installer for the stable release and also the installer for the nightly builds.
 
Download the installer
$wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest-4.3/openshift-install-linux-4.3.0-0.nightly-2020-01-25-074537.tar.gz
 
Download your pull secret

Download the RHCOS image
$wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-43.81.202001142154.0-installer.x86_64.iso

Download the compressed metal BIOS image
$wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-43.81.202001142154.0-metal.x86_64.raw.gz

Download the oc-client
$wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest-4.3/openshift-client-linux-4.3.0-0.nightly-2020-01-25-074537.tar.gz
To get the Ignition Files we have to take some manual steps:
1. unpack the installer in a new directory "install"
    $tar xzvf openshift-install-linux-4.3.0-0.nightly-2020-01-25-074537.tar.gz
2. Create an install-config.yaml file (see also the documentation)
    apiVersion: v1
    baseDomain: oc.io
    compute:
    - hyperthreading: Enabled
      name: worker
      replicas: 0
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
    metadata:
      name: test-ocp4
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: '<pull-secret>'
    sshKey: '<ssh-key>'

3. Create the Kubernetes manifest
    $./openshift-install create manifests --dir=.
    After this command, there are a lot of manifests created in the new directory "manifests"

4. Modify the manifests/cluster-scheduler-02-config.yml to prevent pods from being scheduled on the control plane nodes. For this, change the entry "mastersSchedulable" from true to false
5. Create the Ignition files
   $./openshift-install create ignition-configs --dir=.
   Now the Ignition files for the bootstrap, master and worker nodes are present. The kubeconfig file and the kubeadmin password file are also created in the auth directory
6. Upload the Ignition-Files to the webserver
    $ sudo cp bootstrap.ign master.ign worker.ign /var/www/html/
    $ sudo chmod +r /var/www/html/*.ign
7. Upload the BIOS image to the webserver

    $ sudo cp ../rhcos-43.81.202001142154.0-metal.x86_64.raw.gz /var/www/html/rhcos-43-metal.raw.gz

For the creation of the VMs I moved the ISO to the directory /var/lib/libvirt/images/
$ sudo mv ../rhcos-43.81.202001142154.0-installer.x86_64.iso /var/lib/libvirt/images/rhcos-43.iso


Installation of OpenShift

The installation of OpenShift starts with the creation of the bootstrap VM. After the 3 control plane nodes are online, the Kube API server starts to create all required operators on the control plane nodes.

Create the Bootstrap-VM:
$virt-install --connect=qemu:///system --name bootstrap --memory 15258 --vcpus 4 --os-type=linux --os-variant=rhel7.5 --disk /var/lib/libvirt/images/bootstrap.qcow2,device=disk,bus=virtio,size=120 --disk /var/lib/libvirt/images/rhcos-43.iso,device=cdrom --network network=ocp,model=virtio,mac=52:54:00:76:34:04

Open the console in virt-manager and start the installation. The system will boot into an emergency mode, where you can also run the installer instead of passing everything as kernel parameters. It's a little bit more comfortable than the kernel command line.
In the emergency console enter this command:
#/usr/libexec/coreos-installer -d vda -i https://<webserver>/bootstrap.ign -b https://<webserver>/rhcos-43-metal.raw.gz -p qemu
Then unmount the ISO image and reboot the VM. Check if the VM is reachable via ssh:
$ssh core@bootstrap.test-ocp4.oc.io

Create 3 Control Plane Nodes
$virt-install --connect=qemu:///system --name master1 --memory 15258 --vcpus 4 --os-type=linux --os-variant=rhel7.5 --disk /var/lib/libvirt/images/master1.qcow2,device=disk,bus=virtio,size=120 --disk /var/lib/libvirt/images/rhcos-43.iso,device=cdrom --network network=ocp,model=virtio,mac=52:54:00:28:4e:09 --graphics=vnc --boot hd,cdrom

Again in the emergency console:

#/usr/libexec/coreos-installer -d vda -i https://<webserver>/master.ign -b https://<webserver>/rhcos-43-metal.raw.gz -p qemu

Repeat this for all 3 Nodes with the different MAC addresses and names.

Create 2 Worker Nodes
$virt-install --connect=qemu:///system --name worker1 --memory 15258 --vcpus 4 --os-type=linux --os-variant=rhel7.5 --disk /var/lib/libvirt/images/worker1.qcow2,device=disk,bus=virtio,size=120 --disk /var/lib/libvirt/images/rhcos-43.iso,device=cdrom --network network=ocp,model=virtio,mac=52:54:00:e4:67:14 --boot hd,cdrom
 

In the emergency console:

#/usr/libexec/coreos-installer -d vda -i https://<webserver>/worker.ign -b https://<webserver>/rhcos-43-metal.raw.gz -p qemu

If you are watching the logs on the bootstrap server, have some patience... some things need their time to become ready ;-)
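
To follow the progress, you can watch the bootkube service on the bootstrap node or let the installer wait for the bootstrap phase to finish (both assume the directory layout used above):

$ssh core@bootstrap.test-ocp4.oc.io 'journalctl -b -f -u bootkube.service'
$./openshift-install wait-for bootstrap-complete --dir=. --log-level=info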