1 - Docker

Commands

  • build: docker build . -t <tag_name>
  • connect to a container: docker exec -it <container_name> bash
  • delete
    • delete all volumes: docker volume rm $(docker volume ls -q)
    • delete all: docker stop $(docker ps -aq) && docker rm $(docker ps -aq) && docker rmi $(docker images -q)
  • list containers
    • list running containers: docker ps
    • list all containers: docker ps -a

Dockerfile

  • set variables: ARG variable=value
  • use variables example: WORKDIR /home/$variable
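The two bullets above fit together in a minimal Dockerfile sketch (`variable` and `value` are placeholders, as in the bullets):

```dockerfile
# build-time variable with a default value; override it at build time with:
#   docker build --build-arg variable=other -t <tag_name> .
ARG variable=value

# use the variable later in the Dockerfile
WORKDIR /home/$variable
```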

docker-compose

2 - Duply with Windows

Installation

  • install cygwin: https://cygwin.com/
  • first, install only the base packages
  • install the following additional packages (by just starting setup-x86_64.exe again):
    • python3
    • python36-pip
    • python3-devel
    • gcc-core
    • librsync-devel
    • gnupg2
    • nano
  • update binutils to newest (test) version (2.31.1-1)
  • update pip (optional): pip3 install --upgrade pip
  • install duplicity: pip3 install duplicity
  • create bin dir: mkdir bin
  • change to that dir: cd bin
  • create link to gpg2: ln -s /usr/bin/gpg2.exe gpg.exe
  • download duply: https://duply.net/
  • unpack duply
  • copy duply script to bin dir: cp /cygdrive/c/Users/<your_username>/Downloads/<duply_dir>/duply .
  • change back to the home dir: cd

Configuration of .bashrc

  • edit .bashrc: nano .bashrc
  • add export PATH=/home/<your_username>/bin:$PATH
  • switch language to english (optional): export LANG='en_US.UTF-8'
  • add ulimit -n 1024

Check Installation and configuration

The following commands should execute without error or warning:

  • duplicity --version
  • duply --version
  • gpg --version

Generate GPG Key

  • run gpg --full-gen-key
  • select the default values, except choose a key size of 4096 bit
  • select a password for the key
  • copy the public key ID somewhere else for later use - it is a string like 7A6E4278E2CAF3FA16240DADC94F3BEAB276F92D

Configure Duply

  • create a profile: duply <profile_name> create
  • edit config: nano .duply/<profile_name>/conf
    • enter your GPG public key ID into GPG_KEY
    • enter the key password into GPG_PW
    • enter the TARGET, e.g. a cloud storage location
    • for SOURCE just enter / - the details will be configured in another file later
    • remove the comment in front of GPG_OPTS and write GPG_OPTS='--pinentry-mode loopback'
  • edit exclude file: nano .duply/<profile_name>/exclude

This is how you can add your Cygwin home folder and your Windows Pictures folder to the backup and ignore everything else:

+ /home/<your_username>
+ /cygdrive/c/Users/<your_username>/Pictures
- **
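Putting the conf bullets above together, the relevant lines of .duply/<profile_name>/conf end up looking roughly like this (the key ID is the example from above; password and bucket values are placeholders):

```
GPG_KEY='7A6E4278E2CAF3FA16240DADC94F3BEAB276F92D'
GPG_PW='your_key_password'
GPG_OPTS='--pinentry-mode loopback'
TARGET='b2://keyID:applicationKey@bucketName'
SOURCE='/'
```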

Edit gpg-agent.conf

  • edit gpg-agent.conf: nano .gnupg/gpg-agent.conf
  • add this line: allow-loopback-pinentry

Using Backblaze

  • install client: pip3 install b2sdk
  • use this as TARGET: b2://[keyID]:[application key]@[B2 bucket name]

Start Backup

  • start the first backup: duply test backup

3 - Freifunk

  • Hardware: TP-Link TL-WR841N
  • Hardware Version: 9.2
  • Purchased: used - April 2019
  • Software: Freifunk Braunschweig installed and configured
  • Node Name: may-01
  • Geo: 52° 13,495’ N 010° 30,985’ E
  • Modification: in an outdoor enclosure
  • Map Link: https://w.freifunk-bs.de/map/#!/de/map/14cc20702702

  • Hardware: TP-Link TL-WR841N
  • Hardware Version: 9.1
  • Purchased: used - April 2019
  • Modification: none

  • Hardware: TP-Link TL-WR841N
  • Hardware Version: 9.0
  • Purchased: used - April 2019
  • Modification: none

  • Hardware: TP-Link TL-WR841N
  • Hardware Version: 11.1
  • Purchased: used - April 2019
  • Modification: none

  • Hardware: TP-Link Archer C7 (EU)
  • Hardware Version: 4.0
  • Purchased: used - April 2019
  • Software: Freifunk Braunschweig installed and configured
  • Node Name: may-parker-07
  • Modification: none

Number 8 - AVM FRITZ!Box 4020

Number 9 - AVM FRITZ!Box 4020

  • Hardware: AVM FRITZ!Box 4020
  • Hardware Version: -
  • Hardware peculiarity: 2 antennas parallel, one orthogonal - see also here: https://openwrt.org/toh/avm/fritz.box.4020?s%5C#different_antenna_layouts
  • Purchased: used - April 2019
  • Modification:
    • passive PoE conversion - see photo below
    • USB socket desoldered - see photo below
    • WPS and WLAN switches clipped off - see photo below
Photo of the hardware modification

Photo of the hardware modification

Number 11 - AVM FRITZ!Box 4020

Number 13 - Ubiquiti UniFi AC MESH - UAP-AC-M

Router

Warning: Devices with ≤4 MB flash and/or ≤32 MB RAM will work, but they will be very limited (usually they can’t install or run additional packages) because of the low RAM and flash space. Consider this when choosing a device to buy, or when deciding to flash OpenWrt on your device because it is listed as supported. Also see: https://openwrt.org/supported_devices/432_warning

Archer C7 AC1750

AVM FRITZ!Box 4020

AVM FRITZ!Box 4040

Unifi

Unifi AC Lite

UniFi AC Mesh

Outdoor Box

  • Board:
    • Depth: 17 mm
    • Width: 125 mm
    • Width (with bottom plate): 132 mm
    • Height (without connectors and with the switches removed): 98 mm
    • Height (with connectors, slightly squeezed): 130 mm

4 - GIT

Basics

  • show log
    • git log
    • show only n messages: git log -n
    • one line format: git log --pretty=oneline
    • one line format and show only n messages: git log --pretty=oneline -n
  • initial checkout: git clone <remote_repo_url>
  • clone a specific branch: git clone -b <branch_name> <remote_repo_url>
  • rename local master branch to main: git branch -m master main

Branch handling

  • create and change Branch: git checkout -b <new_branch_name>
  • show all branches: git branch -a
  • delete branch
    • delete a local branch: git branch -d <local_branch>
    • delete a remote branch: git push origin --delete <remote_branch>

Advanced

  • add remote after git init
    • add remote: git remote add origin <git_url>
    • set upstream: git branch --set-upstream-to=origin/main main
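The two steps above can be sketched end to end; here a local bare repository stands in for the <git_url> remote, and all names are placeholders:

```shell
set -e
# a local bare repo takes the place of a real remote
remote=$(mktemp -d)
git init -q --bare "$remote"

# a fresh repo that starts with git init instead of git clone
repo=$(mktemp -d)
cd "$repo"
git init -qb main
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "initial commit"

# add the remote, push, and set the upstream
git remote add origin "$remote"
git push -q origin main
git branch --set-upstream-to=origin/main main
```

After this, plain git pull and git push work without naming the remote and branch.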

Empty Commit to trigger CI

git commit --allow-empty -m "empty commit to trigger CI"
git push

Stash Usage

  • stash changes: git stash
  • list stashed changes: git stash list
    • example:
git stash list
# output:
stash@{0}: WIP on master: 049d078 Create index file
stash@{1}: WIP on master: c264051 Revert "Add file_size"
stash@{2}: WIP on master: 21d80a5 Add number to log
  • reapply stash
    • apply newest (last) stash: git stash apply
    • apply selected stash: git stash apply <number>
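The stash workflow above, run in a throwaway repository (paths and messages are placeholders):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -qb main
git config user.email you@example.com
git config user.name you
echo one > file.txt
git add file.txt
git commit -qm "create file"

# make the working tree dirty, then stash it away
echo two >> file.txt
git stash              # saves the change and resets the working tree to HEAD
git stash list         # shows stash@{0}: WIP on main: ...
git stash apply        # reapplies the newest stash
```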

Special Commands

Undo things

  • unstage files staged with git add: git reset
  • revert local uncommitted changes
    • should be executed in repo root: git checkout .
    • longer to type, but works from any subdirectory: git reset --hard HEAD
  • revert pushed commit:
git reset --hard '<commit_id>'
git clean -f -d
git push -f
  • change last commit message: git commit --amend

Work with a forked Repository

  • add original repository (has to be done once): git remote add upstream <original_repository_url>
  • fetch changes from the upstream repository:
# fetch changes
git fetch upstream

# change to local branch
git checkout master
# or
git checkout main

# merge upstream
git merge upstream/master
# or
git merge upstream/main

# push changes
git push

Rebase changes from the upstream repository into a development branch:

git checkout <dev_branch>
git rebase upstream/master
# or
git rebase upstream/main

Rebase into development branch:

git checkout <dev_branch>
git rebase master
# or
git rebase main

If a rebase hits conflicts, Git prints instructions like this:

Resolve all conflicts manually, mark them as resolved with
"git add/rm <conflicted_files>", then run "git rebase --continue".
You can instead skip this commit: run "git rebase --skip".
To abort and get back to the state before "git rebase", run "git rebase --abort".

Squash: Clean dirty commit History

To clean a dirty commit history (before doing a pull request) you can do a squash.

Warning: Do not rebase commits that exist outside of your repository - at the very least, do not rebase branches that others are working on.

Let's say you want to fix up the last 5 commits; run:

git rebase -i HEAD~5

Change first commit:

git rebase -i --root

Then an editor window opens where you make the changes. Here you can rename the top commit by writing “r” (for reword) and changing the commit text. If you want to discard all other commits, write “f” (for fixup) in front of them. Now save the file and Git works its magic.

Here is an overview of all options:

- p, pick = use commit
- r, reword = use commit, but edit the commit message
- e, edit = use commit, but stop for amending
- s, squash = use commit, but meld into previous commit
- f, fixup = like “squash”, but discard this commit’s log message
- x, exec = run command (the rest of the line) using shell
- d, drop = remove commit
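The fixup flow above can even be scripted without an interactive editor by pointing GIT_SEQUENCE_EDITOR at sed; a sketch in a throwaway repository (commit messages are placeholders):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -qb main
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "initial commit"
echo 1 > f; git add f; git commit -qm "feature"
echo 2 > f; git commit -qam "wip"
echo 3 > f; git commit -qam "fix typo"

# mark every todo line except the first as "fixup", squashing the last
# three commits into a single "feature" commit
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/fixup/"' git rebase -i HEAD~3
git log --pretty=oneline
```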

If something goes wrong after saving and you have to fix something first, you can continue the rebase with: git rebase --continue

When everything is ok you have to do a forced push: git push -f

If you have already done a pull request (on GitHub) this squash still works afterwards. The “dirty” commit history of the PR will also be changed.

Configuration

  • always rebase on pull (it is best practice): git config --global pull.rebase true
  • remember username and password: git config --global credential.helper store
  • set username
    • local (for single repository): git config user.name "<username>"
    • global: git config --global user.name "<username>"
  • global ignore settings: git config --global core.excludesFile ~/.gitignore

6 - GnuPG

Get Infos

  • list all keys: gpg --list-keys
  • list all secret keys: gpg --list-secret-keys

8 - kubectl

Display Resources

  • all: kubectl get all -A -o wide
  • custom resource definitions: kubectl get crd
  • ingressroutes (custom resource definition from Traefik): kubectl get ingressroutes -A
  • component statuses: kubectl get cs
  • list Longhorn replica: kubectl get replica -A

Create Resources

  • expose deployment: kubectl expose deploy <deployment_name> --port <port_number>

Delete Resources

  • delete all from namespace: kubectl delete all --all -n <namespace>

Special Commands

  • execute bash on a pod: kubectl exec --stdin --tty <pod_name> -- /bin/bash
  • stop / start a pod: kubectl scale --replicas=<0/1> deployment/<deployment_name>
  • schedule Pods on the control-plane: kubectl taint nodes --all node-role.kubernetes.io/master-
  • write yaml for kubectl command to file: kubectl <command> --dry-run=client -o yaml > <file>.yaml
  • convert config file to configmap: kubectl create configmap <config_map_name> --from-file=<config_file_name> --dry-run=client -o yaml > <filename>.yaml
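For the configmap conversion above, the generated YAML looks roughly like this (assuming a hypothetical app.properties file with a single key):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    greeting=hello
```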

9 - Kubernetes

Commands

  • Minikube commands: https://minikube.sigs.k8s.io/docs/commands/
  • kubectl Commands: https://kubernetes.io/docs/reference/kubectl/overview/
    • get info about the cluster: kubectl cluster-info
    • Get version of k8s: kubectl version
    • display all pods across all namespaces: kubectl get pods -A
    • display state of resource: kubectl describe service <resource_name>
    • display infos of resource: kubectl get services <resource_name>
    • delete deployment: kubectl delete deployment <deployment_name>
    • namespace commands
      • List namespaces: kubectl get namespace
      • Create namespace: kubectl create namespace <namespace_name>

Installation

The first start looks like this:

$ minikube start
😄  minikube v1.13.0 on Ubuntu 18.04
✨  Automatically selected the kvm2 driver
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2: 13.81 MiB / 13.81 MiB  100.00% 1.13 MiB p/s 1
💿  Downloading VM boot image ...
    > minikube-v1.13.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.13.0.iso: 173.73 MiB / 173.73 MiB  100.00% 1.61 MiB p/s 1m48s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.19.0 preload ...
    > preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4: 486.28 MiB
🔥  Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.12 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" by default

Helm

The package manager for Kubernetes

If an install fails with Error: cannot re-use a name that is still in use, the --replace flag can be used.

Post Setup Examples

After setup:

$ kubectl get pods -A
NAMESPACE              NAME                                        READY   STATUS    RESTARTS   AGE
kube-system            coredns-f9fd979d6-r2vhj                     1/1     Running   2          3h49m
kube-system            etcd-minikube                               1/1     Running   2          3h49m
kube-system            kube-apiserver-minikube                     1/1     Running   2          3h49m
kube-system            kube-controller-manager-minikube            1/1     Running   2          3h49m
kube-system            kube-proxy-tnk8g                            1/1     Running   2          3h49m
kube-system            kube-scheduler-minikube                     1/1     Running   2          3h49m
kube-system            storage-provisioner                         1/1     Running   5          3h49m
kubernetes-dashboard   dashboard-metrics-scraper-c95fcf479-b92v2   1/1     Running   2          3h45m
kubernetes-dashboard   kubernetes-dashboard-5c448bc4bf-tttfg       1/1     Running   2          3h45m

10 - Minecraft

Permissions

  • list all groups: lp listgroups
  • give permission to user: lp user <username> permission set <permission_name> true
  • list permissions of a user: lp user <username> permission info
  • also see LuckPerms: https://luckperms.net/wiki/Usage

11 - PostgreSQL

Config Files

  • general config (Ubuntu): /etc/postgresql/10/main/postgresql.conf
  • general config (Archlinux): /var/lib/postgres/data/postgresql.conf
    • to allow access from everywhere: listen_addresses = '*'
  • who can access what from where and how (Ubuntu): /etc/postgresql/10/main/pg_hba.conf
  • who can access what from where and how (Archlinux): /var/lib/postgres/data/pg_hba.conf
    • example: host <database> <user> 0.0.0.0/0 md5
    • example: hostssl <database> <user> 0.0.0.0/0 md5

Commands (prompt)

  • change the user from root to postgres: su -l postgres
  • init the db: initdb --locale=en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data
  • enter db tool psql: psql
  • create user: createuser --interactive
  • create database
    • createdb <db_name>
    • create and set owner: createdb <db_name> -O <role_name>
  • restart the db: systemctl restart postgresql.service

Commands (psql)

  • connect: psql -h <host_or_ip> -p <port> -d <database> -U <username>
  • set password: \password <role_name>
  • list user (roles): \du
  • list user (roles) with passwords to check if they are set: select * from pg_shadow;
  • list databases: \l
  • delete db: DROP DATABASE <db_name>;
  • delete role: DROP ROLE <role_name>;

Create User / Database
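A minimal sketch inside psql, as an alternative to the createuser/createdb prompt commands above (role, database and password are placeholders):

```sql
CREATE ROLE myuser WITH LOGIN PASSWORD 'secret';
CREATE DATABASE mydb OWNER myuser;
GRANT ALL PRIVILEGES ON DATABASE mydb TO myuser;
```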

12 - Regular Expressions

13 - Sphinx

Extensions and Themes

MyST Syntax

  • add a link to a local PDF or other file:
{download}`text <_static/reference.pdf>`

May.la Installation

  • create repo on GitHub and clone it
  • change into the repo directory
  • run sphinx-quickstart - say yes here:
You have two options for placing the build directory for Sphinx output.
Either, you use a directory "_build" within the root path, or you separate
"source" and "build" directories within the root path.
> Separate source and build directories (y/n) [n]:
  • hide the prev/next buttons by adding this to conf.py:
html_theme_options = {
    "prev_next_buttons_location": None,
}

Commands

  • convert reStructuredText to Markdown: pandoc -s -t commonmark -o <target>.md <source>.rst

14 - Tor

Commands

  • see log: journalctl -e -u tor@default
  • restart tor: systemctl restart tor@default
  • command-line Tor monitor: nyx

Config

Config is stored at /etc/tor/torrc.

Example middle / guard relay config

Nickname    my_nickname
ContactInfo mail _at_ host.com
ORPort      443
ExitRelay   0
SocksPort   0

# this does not work with AccountingMax
DirPort 9030

RelayBandwidthRate     9 MB
RelayBandwidthBurst    10 MB

MyFamily identity_key_fingerprint_01,identity_key_fingerprint_02

Bridge config

A bridge helps censored users connect to the Tor network. Do not specify MyFamily for bridge configs.
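A minimal obfs4 bridge sketch for /etc/tor/torrc (the ports and the obfs4proxy path are assumptions; adjust them to your distribution):

```
BridgeRelay 1
ORPort 9001
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:9002
ExtORPort auto
ContactInfo mail _at_ host.com
Nickname my_bridge
```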

15 - Visual Studio Code

Settings

  • disable minimap: Editor -> Minimap
  • do not open last project when VSCode is opened: Window -> Restore Windows -> "none"
  • trim trailing whitespace: Text Editor -> Files -> Trim Trailing whitespace
  • debug all code - not just your own: Extensions -> Python -> Debug Just My Code
  • change font: Editor: Font Family -> prepend "'Source Code Pro', " for example
  • change the text size of the UI: Window: Zoom Level
  • auto save: Text Editor -> Files -> Auto Save -> "onFocusChange"
  • show modified settings: open settings -> click "..." (top right) -> select "Show modified settings"
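Most of the GUI options above map to settings.json entries; a sketch (the values are examples, not recommendations):

```json
{
    "editor.minimap.enabled": false,
    "files.trimTrailingWhitespace": true,
    "files.autoSave": "onFocusChange",
    "editor.fontFamily": "'Source Code Pro', monospace",
    "window.zoomLevel": 1
}
```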

Extensions