IT Topics
- 1: Docker
- 2: Duply with MinIO
- 3: Duply with Windows
- 4: Freifunk
- 5: GIT
- 6: GitHub
- 7: GnuPG
- 8: Hugo
- 9: kubectl
- 10: Kubernetes
- 11: Minecraft
- 12: PostgreSQL
- 13: Regular Expressions
- 14: Sphinx
- 15: Tor
- 16: Visual Studio Code
- 17: YubiKey
1 - Docker
Commands
- build:
docker build . -t <tag_name>
- connect to a container:
docker exec -it <container_name> bash
- delete
- delete all volumes:
docker volume rm $(docker volume ls -q)
- delete all containers and images:
docker stop $(docker ps -aq) && docker rm $(docker ps -aq) && docker rmi $(docker images -q)
- list containers
- list running containers:
docker ps
- list all containers:
docker ps -a
- prune
- prune all images:
docker image prune --all
- system:
docker system prune
- system with images:
docker system prune --all
Dockerfile
- set variables:
ARG variable=value
- use variables example:
WORKDIR /home/$variable
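The two snippets above fit together in a minimal Dockerfile; a sketch (base image, variable name, and tag are just examples):

```dockerfile
# build-time variable with a default; override with:
#   docker build . -t demo --build-arg variable=other
FROM alpine:3.19
ARG variable=appuser
# ARG values are only available at build time
WORKDIR /home/$variable
COPY . .
CMD ["sh"]
```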
docker-compose
- https://docs.docker.com/compose/
- store config in docker-compose.yml by default
- start (build images before starting containers):
docker-compose up --build
- add -d for detached mode
- stop and remove containers:
docker-compose down
- stop and remove containers and volumes:
docker-compose down -v
- validate and view compose file:
docker-compose config
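A minimal docker-compose.yml for the commands above might look like this (service name, ports, and volume are illustrative):

```yaml
# minimal docker-compose.yml sketch
services:
  web:
    build: .          # build image from the local Dockerfile
    ports:
      - "8080:80"     # host:container
    volumes:
      - data:/var/lib/app
volumes:
  data:
```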
2 - Duply with MinIO
MinIO Configuration
Step 1: Create bucket without versioning and locking. Quota is also not needed.
Step 2: Create a policy for the backup account. Replace BUCKET-NAME with your bucket name from step 1.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::BUCKET-NAME/*"
]
}
]
}
Step 3: Create user. Same name as BUCKET-NAME. Assign a password and the policy from step 2.
Create GnuPG Key for Encryption
gpg --expert --full-generate-key
- use your password manager to generate a passphrase so you have it for the last step
- select “ECC (sign and encrypt)” (which is the default) for kind of key
- select “Curve 25519” (which is the default) for elliptic curve
- select “0” for “key does not expire”
- Real Name: “Duply Backup BUCKET-NAME”
- Mail: none
- Comment: none
- provide passphrase from first step
Install and Config of Duply
- Mac:
brew install duply
- execute
duply BUCKET-NAME create
- edit exclude
- edit conf
GPG_KEY='KEY-FINGERPRINT'
GPG_PW='KEY-PASSPHRASE'
GPG_OPTS='--pinentry-mode loopback --no-throw-keyids'
TARGET='boto3+s3:///BUCKET-NAME/'
SOURCE='/'
export AWS_ACCESS_KEY_ID='BUCKET-NAME'
export AWS_SECRET_ACCESS_KEY='MINIO-USER-PASSWORD'
MAX_FULL_BACKUPS=12
MAX_FULLS_WITH_INCRS=6
MAX_FULLBKP_AGE=1M
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "
DUPL_PARAMS="$DUPL_PARAMS --s3-endpoint-url https://s3.MINIO-URL"
- add ulimit -n 1024 to .bash_profile
- edit .gnupg/gpg-agent.conf and add allow-loopback-pinentry
Backup
- copy revoke key
cp ~/.gnupg/openpgp-revocs.d/FINGERPRINT.rev ???
- fix group
- zip or tgz and backup
3 - Duply with Windows
Installation
- install cygwin: https://cygwin.com/
- first just install base packages
- install the following additional packages (by just starting setup-x86_64.exe again):
- python3
- python36-pip
- python3-devel
- gcc-core
- librsync-devel
- gnupg2
- nano
- update binutils to newest (test) version (2.31.1-1)
- update pip (optional):
pip3 install --upgrade pip
- install duplicity:
pip3 install duplicity
- create bin dir:
mkdir bin
- change to that dir:
cd bin
- create link to gpg2:
ln -s /usr/bin/gpg2.exe gpg.exe
- download duply: https://duply.net/
- unpack duply
- copy duply script to bin dir:
cp /cygdrive/c/Users/<your_username>/Downloads/<duply_dir>/duply .
- change back to home dir
cd
Configuration of .bashrc
- edit .bashrc:
nano .bashrc
- add
export PATH=/home/<your_username>/bin:$PATH
- switch language to english (optional):
export LANG='en_US.UTF-8'
- add
ulimit -n 1024
Check Installation and configuration
The following commands should execute without error or warning:
duplicity --version
duply --version
gpg --version
Generate GPG Key
- run
gpg --full-gen-key
- select default values but 4096 Bit
- select a password for the key
- copy the public key id to somewhere else for later use - it is a string like
7A6E4278E2CAF3FA16240DADC94F3BEAB276F92D
Configure Duply
- create a profile:
duply <profile_name> create
- edit config:
nano .duply/<profile_name>/conf
- enter your gpg public key id to GPG_KEY
- enter the password to GPG_PW
- enter the TARGET like a cloud space or something else
- for SOURCE just enter / - details will be configured in another file later
- remove the comment in front of GPG_OPTS and write GPG_OPTS='--pinentry-mode loopback'
- edit exclude file:
nano .duply/<profile_name>/exclude
This is how you can add your Cygwin home folder and your Windows
Pictures folder to the backup and ignore everything else:
+ /home/<your_username>
+ /cygdrive/c/Users/<your_username>/Pictures
- **
Edit gpg-agent.conf
- edit gpg-agent.conf:
nano .gnupg/gpg-agent.conf
- add this line:
allow-loopback-pinentry
Using Backblaze
- install client:
pip3 install b2sdk
- use this as TARGET:
b2://[keyID]:[application key]@[B2 bucket name]
Start Backup
- start the first backup:
duply <profile_name> backup
4 - Freifunk
Links
- Freifunk
- Freifunk Forum
- Pico Peering Agreement
- Braunschweig
- Freifunk Braunschweig
- Freifunk Firmware Braunschweig
- Freifunk Braunschweig Karte
- GitLab Stratum 0
- Status of the current Freifunk router (only when connected):
Number 1 - TP-Link TL-WR841N
- Hardware: TP-Link TL-WR841N
- Hardware version: 9.2
- Purchased: used - April 2019
- Software: Freifunk Braunschweig installed and configured
- Node name: may-01
- Geo: 52° 13.495' N 010° 30.985' E
- Modification: in outdoor enclosure
- Map link: https://w.freifunk-bs.de/map/#!/de/map/14cc20702702
Number 2 - TP-Link TL-WR841N
- Hardware: TP-Link TL-WR841N
- Hardware version: 9.1
- Purchased: used - April 2019
- Modification: none
Number 5 - TP-Link TL-WR841N
- Hardware: TP-Link TL-WR841N
- Hardware version: 9.0
- Purchased: used - April 2019
- Modification: none
Number 6 - TP-Link TL-WR841N
- Hardware: TP-Link TL-WR841N
- Hardware version: 11.1
- Purchased: used - April 2019
- Modification: none
Number 7 - TP-Link Archer C7 (EU)
- Hardware: TP-Link Archer C7 (EU)
- Hardware version: 4.0
- Purchased: used - April 2019
- Software: Freifunk Braunschweig installed and configured
- Node name: may-parker-07
- Modification: none
Number 8 - AVM FRITZ!Box 4020
- Hardware: AVM FRITZ!Box 4020
- Hardware version: -
- Hardware peculiarity: 3 antennas parallel, none orthogonal - see also: https://openwrt.org/toh/avm/fritz.box.4020?s%5C#different_antenna_layouts
- Purchased: used - April 2019
- Software: Freifunk Braunschweig installed and configured
- Node name: may-08
- Configuration: no mesh VPN
- Geo: 52° 13.502' N 010° 30.996' E
- Modifications:
- case was opened but closed again
- rear screws were drilled out very roughly
- the grille bars at the bottom were removed
- Map link: https://w.freifunk-bs.de/map/#!/de/map/5c49793fe8f4
Number 9 - AVM FRITZ!Box 4020
- Hardware: AVM FRITZ!Box 4020
- Hardware version: -
- Hardware peculiarity: 2 antennas parallel, one orthogonal - see also: https://openwrt.org/toh/avm/fritz.box.4020?s%5C#different_antenna_layouts
- Purchased: used - April 2019
- Modifications:
- passive PoE conversion - see photo below
- USB socket desoldered - see photo below
- WPS and WLAN switches clipped off - see photo below

Photo of the hardware modification
Number 11 - AVM FRITZ!Box 4020
- Hardware: AVM FRITZ!Box 4020
- Hardware version: -
- Hardware peculiarity: 3 antennas parallel, none orthogonal - see also: https://openwrt.org/toh/avm/fritz.box.4020?s%5C#different_antenna_layouts
- Purchased: used - May 2019
- Node name: may-11
- Modification: some ribs on top of the case were broken and have been removed
Number 13 - Ubiquiti UniFi AC MESH - UAP-AC-M
- Hardware: Ubiquiti UniFi AC MESH - UAP-AC-M
- Purchased: used (like new) - September 2022
- Node name: may-13
- Map link: https://freifunk-bs.de/map/#!/de/map/68d79a0b15ce
Router
Warning: Devices with ≤4MB flash and/or ≤32MB ram will work but they will be very limited (usually they can’t install or run additional packages) because they have low RAM and flash space. Consider this when choosing a device to buy, or when deciding to flash OpenWrt on your device because it is listed as supported. Also see: https://openwrt.org/supported_devices/432_warning
TP-Link TL-WR841N/ND
- Memory: 4MB flash, 32MB RAM :!:
- Because of the relatively small 4MB flash memory, compatibility with future updates is uncertain
- Currently only supported up to version 12 - version 13 or higher does not work yet
- OpenWRT link: https://openwrt.org/toh/tp-link/tl-wr841nd
- Versions supported by FF BS: 3, 5, 7 to 12
- Original firmware download: https://www.tp-link.com/no/support/download/tl-wr841nd/
Archer C7 AC1750
- Memory
- Version 1: 8MB flash, 128MB RAM
- Versions 2 to 5: 16MB flash, 128MB RAM @ 720MHz to 775MHz
- Note: FFBS firmware only exists for versions 2, 4 and 5
- Recommendations
- OpenWRT page: https://openwrt.org/toh/tp-link/archer-c7-1750
- TP-Link support: https://www.tp-link.com/de/support/download/archer-c7/
AVM FRITZ!Box 4020
- Memory: 16MB flash, 128MB RAM @ 750MHz
- OpenWRT page: https://openwrt.org/toh/avm/fritz.box.4020
AVM FRITZ!Box 4040
- Memory: 32MB flash, 256MB RAM
- OpenWRT page: https://openwrt.org/toh/avm/avm_fritz_box_4040
TP-Link CPE210
- Note: only versions 1 and 2 - not version 3
- Memory: 8MB flash, 64MB RAM
- Recommendations
- OpenWRT page: https://openwrt.org/toh/tp-link/cpe210
Unifi
- OpenWRT Link: https://openwrt.org/toh/ubiquiti/unifiac
- Unifi comparison: https://help.ubnt.com/hc/en-us/articles/360008036574-UniFi-Access-Point-Comparison-Charts
- Flashing the router: http://www.netz39.de/wiki/freifunk:anleitungen:ubiquitigeraete
Unifi AC Lite
UniFi AC Mesh
TP-Link TL-WR1043N/ND
- Memory
- Version 1: 8MB flash, 32MB RAM :!:
- Versions 2 to 5: 8MB to 16MB flash, 64MB RAM
- Recommendations
- OpenWRT page: https://openwrt.org/toh/tp-link/tl-wr1043nd
- Maximum number of clients (empirical value): > 45 - source
- Software bug during boot - reliable separation between local and external network is not guaranteed - source
TP-Link TL-WDR4300
- Memory: 8MB flash, 128MB RAM
- OpenWRT Link: https://openwrt.org/toh/tp-link/tl-wdr4300
TP-Link TL-WDR3600
- Memory: 8MB flash, 128MB RAM
- Problems with version 1.5: https://dev.archive.openwrt.org/ticket/21593#ticket
- OpenWRT link: https://openwrt.org/toh/tp-link/tl-wdr3600
- very old - no longer in production
Outdoor Box
- https://wiki.freifunk.net/Outdoorf%C3%A4higen_Router_basteln
- https://wiki.freifunk.net/Outdoor_Box
- https://wiki.freifunk.net/DIY_Halterung
- https://wiki.darmstadt.freifunk.net/DIY_TL-WR842ND_Outdoor_Box
- https://wiki.darmstadt.freifunk.net/DIY_TL-WR841N_Outdoor_Box
- http://wiki.leipzig.freifunk.net/Gehaeuse
- https://forum.freifunk.net/t/umbau-tl-wr841nd-fuer-outdoor-einsatz/2077
- https://www.youtube.com/watch?v=v1fI3JdK8gg
TP-Link TL-WR841N & TL-WR841ND dimensions
- PCB:
- Depth: 17 mm
- Width: 125 mm
- Width (with bottom plate): 132 mm
- Height (without connectors and with the switches cut off): 98 mm
- Height (with connectors, slightly squeezed): 130 mm
5 - GIT
Links
- Pro Git book: https://git-scm.com/book/en/
- Ry’s Git Tutorial: https://www.smashwords.com/books/view/498426
Basics
- show log
git log
- show only n messages:
git log -n
- one line format:
git log --pretty=oneline
- one line format and show only n messages:
git log --pretty=oneline -n
- initial checkout:
git clone <remote_repo_url>
- clone a specific branch:
git clone -b <branch_name> <remote_repo_url>
- rename local master branch to main:
git branch -m master main
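The rename can be tried safely in a throwaway repository; a sketch (directory and identity are demo values, git >= 2.28 assumed for init -b):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b trunk demo && cd demo        # start with a branch named "trunk"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"
git branch -m main                          # rename the current branch to main
git branch --show-current                   # prints: main
```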
Ignore Things
- put directory or file into .gitignore
- for private ignores put it into .git/info/exclude
- good Python ignore template: https://github.com/github/gitignore/blob/main/Python.gitignore
Branch handling
- create and change Branch:
git checkout -b <new_branch_name>
- show all branches:
git branch -a
- delete branch
- delete a local branch:
git branch -d <local_branch>
- delete a remote branch:
git push origin --delete <remote_branch>
Advanced
- change upstream url (see here):
git remote set-url origin new.git.url/here
- add remote after git init
- add remote:
git remote add origin <git_url>
- set upstream:
git branch --set-upstream-to=origin/main main
Empty Commit to trigger CI
git commit --allow-empty -m "empty commit to trigger CI"
git push
Stash Usage
- stash changes:
git stash
- list stashed changes:
git stash list
- example:
git stash list
# output:
stash@{0}: WIP on master: 049d078 Create index file
stash@{1}: WIP on master: c264051 Revert "Add file_size"
stash@{2}: WIP on master: 21d80a5 Add number to log
- reapply stash
- apply newest (last) stash:
git stash apply
- apply selected stash:
git stash apply <number>
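A quick end-to-end stash run in a scratch repository (file name and identity are illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
echo one > notes.txt
git add notes.txt && git commit -qm "initial"
echo two >> notes.txt        # uncommitted change
git stash                    # working tree is clean again
git stash list               # shows stash@{0}: WIP on ...
git stash apply              # re-apply the stashed change
grep two notes.txt           # prints: two
```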
Special Commands
- show history of last ref updates:
git reflog
- list tracked repositories:
git remote -v
- signoff last (5) commits:
git rebase --signoff HEAD~5
- see https://git-scm.com/docs/git-commit#Documentation/git-commit.txt---signoff
Undo things
- unstage files staged with git add:
git reset
- revert local uncommitted changes
- should be executed in repo root:
git checkout .
- longer to type, but works from any subdirectory:
git reset --hard HEAD
- revert pushed commit:
git reset --hard '<commit_id>'
git clean -f -d
git push -f
- change last commit message:
git commit --amend
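The unstage/revert pair above can be sketched in a scratch repository (names are demo values):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
echo one > f.txt && git add f.txt && git commit -qm "initial"
echo two >> f.txt
git add f.txt            # change is now staged
git reset                # unstage it (change stays in the working tree)
git checkout .           # throw the working-tree change away
cat f.txt                # prints: one
```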
Work with a forked Repository
- add original repository (has to be done once):
git remote add upstream <original_repository_url>
- fetch changes from the original repository:
# fetch changes
git fetch upstream
# change to locale branch
git checkout master
# or
git checkout main
# merge upstream
git merge upstream/master
# or
git merge upstream/main
# push changes
git push
Rebase changes from the original repository into your development branch:
git checkout <dev_branch>
git rebase upstream/master
# or
git rebase upstream/main
Rebase into development branch:
git checkout <dev_branch>
git rebase master
# or
git rebase main
If there are conflicts, git prints instructions like this:
Resolve all conflicts manually, mark them as resolved with
"git add/rm <conflicted_files>", then run "git rebase --continue".
You can instead skip this commit: run "git rebase --skip".
To abort and get back to the state before "git rebase", run "git rebase --abort".
Squash: Clean dirty commit History
To clean a dirty commit history (before doing a pull request) you can do a squash.
Warning: Do not rebase commits that exist outside of your repository. At the very least, do not rebase branches that others are working on.
Let's say you want to fix up the last 5 commits - you do this:
git rebase -i HEAD~5
Change first commit:
git rebase -i --root
Then you get an editor window where you make the changes. Here you can rename the top commit by writing "r" (for reword) and changing the commit text. If you want to discard all other commits, write "f" (for fixup) in front of them. Now save the file and the Git magic happens.
Here is an overview of all options:
- p, pick = use commit
- r, reword = use commit, but edit the commit message
- e, edit = use commit, but stop for amending
- s, squash = use commit, but meld into previous commit
- f, fixup = like “squash”, but discard this commit’s log message
- x, exec = run command (the rest of the line) using shell
- d, drop = remove commit
If something bad happens after saving and you have to fix something
first, you can continue the rebase with: git rebase --continue
When everything is ok you have to do a forced push: git push -f
If you have already done a pull request (on GitHub) this squash still works afterwards. The “dirty” commit history of the PR will also be changed.
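For scripting (or just to see what the editor step does), the todo list can be rewritten non-interactively; this sketch assumes GNU sed and uses throwaway demo commits:

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"
for i in 1 2 3; do echo "$i" >> f.txt; git add f.txt; git commit -qm "wip $i"; done
# squash the three "wip" commits into one: mark all but the first as fixup
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/fixup/'" git rebase -i HEAD~3
git log --oneline            # two commits left: "wip 1" and "initial"
```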
Configuration
- always rebase on pull (this is best practice):
git config --global pull.rebase true
- remember username and password:
git config --global credential.helper store
- set username
- local (for single repository):
git config user.name "<username>"
- global:
git config --global user.name "<username>"
- set mail
- local:
git config user.email "<mail>"
- global:
git config --global user.email "<mail>"
- change editor to nano (global):
git config --global core.editor "nano"
- global ignore settings
- create ~/.gitignore_global file with ignore settings
- execute:
git config --global core.excludesfile ~/.gitignore_global
- also see: https://jayeshkawli.ghost.io/using-global-gitignore-on-mac/
- set VSCode as editor:
git config --global core.editor "code --wait"
Mac specific
- system config file is here:
/Library/Developer/CommandLineTools/usr/share/git-core/gitconfig
- credentials helper osxkeychain is enabled by default (see system config)
- add .DS_Store to global ignore settings (see above)
6 - GitHub
- Sign Commits with GnuPG
- https://docs.github.com/en/github/authenticating-to-github/managing-commit-signature-verification
- https://micropipes.com/blog/2016/08/31/signing-your-commits-on-github-with-a-gpg-key/
- Warning: the email in .gitconfig must match the mail address in the signing key
- SSH commit signature verification
- GitHub Pages
7 - GnuPG
Links
- The GNU Privacy Handbook: https://gnupg.org/gph/en/manual.html
- Man page: https://www.gnupg.org/documentation/manpage.html
- Options
- agent options: https://www.gnupg.org/documentation/manuals/gnupg/Agent-Options.html
- configuration options: https://www.gnupg.org/documentation/manuals/gnupg/GPG-Configuration-Options.html
- protocol specific options: https://www.gnupg.org/documentation/manuals/gnupg/OpenPGP-Options.html
- GnuPG files: https://www.gnupg.org/documentation/manuals/gnupg/GPG-Configuration.html
- YubiKey-Guide: https://github.com/drduh/YubiKey-Guide
Commands
Keyserver Commands
- download key from keyserver:
gpg --recv-keys <key_id>
- update all keys from keyserver:
gpg --refresh-keys
- import key with all signatures:
gpg --recv-key --verbose --keyserver-options no-import-clean --keyserver-options no-self-sigs-only <key_id>
Get Key Infos
- list all keys:
gpg --list-keys
- list all secret keys:
gpg --list-secret-keys
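To experiment with these commands without touching your real keyring, point GNUPGHOME at a scratch directory first; the key below is a throwaway demo key with no passphrase (option combination assumes GnuPG 2.1+):

```shell
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
# non-interactive demo key; "default" picks the current default algorithm
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo <demo@example.com>" default default never
gpg --list-keys          # shows the new public key
gpg --list-secret-keys   # shows the matching secret key
```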
Key Card Commands (YubiKey)
gpg --card-edit
- also see https://www.gnupg.org/howtos/card-howto/en/ch03s02.html
Keyserver Links
Mac install
- install with:
brew install gnupg pinentry-mac
- set pinentry-program /opt/homebrew/bin/pinentry-mac in ~/.gnupg/gpg-agent.conf
Important Keys
- CAcert: A31D 4F81 EF4E BD07 B456 FA04 D2BB 0D01 65D0 FD58 - http://www.cacert.org/index.php?id=3
- Pierre Schmitz (Arch Linux packager): 3E80 CA1A 8B89 F69C BA57 D98A 76A5 EF90 5444 9A5C - https://archlinux.org/download/
- Debian CD signing key: DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B - https://www.debian.org/CD/verify
- Tails: A490 D0F4 D311 A415 3E2B B7CA DBB8 02B2 58AC D84F
9 - kubectl
Links
Display Resources
- all:
kubectl get all -A -o wide
- custom resource definitions:
kubectl get crd
- ingressroutes (custom resource definition from Traefik):
kubectl get ingressroutes -A
- component statuses:
kubectl get cs
- list Longhorn replica:
kubectl get replica -A
Create Resources
- expose deployment:
kubectl expose deploy <deployment_name> --port <port_number>
- more
Delete Resources
- delete all from namespace:
kubectl delete all --all -n <namespace>
Special Commands
- execute bash on pod:
kubectl exec --stdin --tty <pod_name> -- /bin/bash
- stop / start a pod:
kubectl scale --replicas=<0/1> deployment/<deployment_name>
- schedule Pods on the control-plane:
kubectl taint nodes --all node-role.kubernetes.io/master-
- write yaml for kubectl command to file:
kubectl <command> --dry-run=client -o yaml > <file>.yaml
- convert config file to configmap:
kubectl create configmap <config_map_name> --from-file=<config_file_name> --dry-run=client -o yaml > <filename>.yaml
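For illustration, the ConfigMap YAML generated by the last command has roughly this shape (names and file content are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.conf: |
    key = value
```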
10 - Kubernetes
Links
- Kubernetes: https://kubernetes.io/
- kubectl
- Doc: https://kubernetes.io/docs/reference/kubectl/overview/
- Install and Set Up kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
- minikube
Commands
- Minikube commands: https://minikube.sigs.k8s.io/docs/commands/
- start local Kubernetes cluster:
minikube start
- stop local Kubernetes cluster:
minikube stop
- get status of local Kubernetes cluster:
minikube status
- start old version (1.15.0 for example):
minikube start -p aged --kubernetes-version=v1.15.0
- list services:
minikube service list
- get Kubernetes URL for a service:
minikube service <resource_name>
- start the dashboard:
minikube dashboard
- get minikube version:
minikube version
- kubectl Commands:
https://kubernetes.io/docs/reference/kubectl/overview/
- get info about the cluster:
kubectl cluster-info
- Get version of k8s:
kubectl version
- display all pods across all namespaces:
kubectl get pods -A
- display state of resource:
kubectl describe service <resource_name>
- display infos of resource:
kubectl get services <resource_name>
- delete deployment:
kubectl delete deployment <deployment_name>
- namespace commands
- list namespaces:
kubectl get namespace
- create namespace:
kubectl create namespace <namespace_name>
Installation
- install Minikube: https://minikube.sigs.k8s.io/docs/start/
- install KVM: https://help.ubuntu.com/community/KVM/Installation
The first start looks like this:
$ minikube start
😄 minikube v1.13.0 on Ubuntu 18.04
✨ Automatically selected the kvm2 driver
💾 Downloading driver docker-machine-driver-kvm2:
> docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 13.81 MiB / 13.81 MiB 100.00% 1.13 MiB p/s 1
💿 Downloading VM boot image ...
> minikube-v1.13.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
> minikube-v1.13.0.iso: 173.73 MiB / 173.73 MiB 100.00% 1.61 MiB p/s 1m48s
👍 Starting control plane node minikube in cluster minikube
💾 Downloading Kubernetes v1.19.0 preload ...
> preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4: 486.28 MiB
🔥 Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.19.0 on Docker 19.03.12 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" by default
Helm
The package manager for Kubernetes
- https://helm.sh/
- Install: https://docs.helm.sh/docs/intro/install/
- Commands
- add a chart repository (example):
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
- list the charts you can install:
helm search repo stable
If install fails with "Error: cannot re-use a name that is still in use", the --replace flag can be used.
Post Setup Examples
After setup:
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-r2vhj 1/1 Running 2 3h49m
kube-system etcd-minikube 1/1 Running 2 3h49m
kube-system kube-apiserver-minikube 1/1 Running 2 3h49m
kube-system kube-controller-manager-minikube 1/1 Running 2 3h49m
kube-system kube-proxy-tnk8g 1/1 Running 2 3h49m
kube-system kube-scheduler-minikube 1/1 Running 2 3h49m
kube-system storage-provisioner 1/1 Running 5 3h49m
kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-b92v2 1/1 Running 2 3h45m
kubernetes-dashboard kubernetes-dashboard-5c448bc4bf-tttfg 1/1 Running 2 3h45m
11 - Minecraft
Links
- Paper MC: https://docs.papermc.io/paper/admin
- LuckPerms: https://luckperms.net/wiki/Usage
- EssentialsX: https://essentialsx.net/wiki/Home.html
- Commands: https://essinfo.xeya.me/commands.html
- Permissions: https://essinfo.xeya.me/permissions.html
Permissions
- list all groups:
lp listgroups
- give permission to user:
lp user <username> permission set <permission_name> true
- list permissions of a user:
lp user <username> permission info
- also see LuckPerms: https://luckperms.net/wiki/Usage
Docker
- marctv/minecraft-papermc-server
- connect to console:
docker attach <container_name>
- disconnect from console:
ctrl + p then ctrl + q
12 - PostgreSQL
Config Files
- general config (Ubuntu):
/etc/postgresql/10/main/postgresql.conf
- general config (Arch Linux):
/var/lib/postgres/data/postgresql.conf
- to allow access from everywhere:
listen_addresses = '*'
- who can access what from where and how (Ubuntu):
/etc/postgresql/10/main/pg_hba.conf
- who can access what from where and how (Arch Linux):
/var/lib/postgres/data/pg_hba.conf
- example:
host <database> <user> 0.0.0.0/0 md5
- example:
hostssl <database> <user> 0.0.0.0/0 md5
Commands (prompt)
- change the user from root to postgres:
su -l postgres
- init the db:
initdb --locale=en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data
- enter db tool psql:
psql
- create user:
createuser --interactive
- create database
createdb <db_name>
- create and set owner:
createdb <db_name> -O <role_name>
- restart the db:
systemctl restart postgresql.service
Commands (psql)
- connect:
psql -h <host_or_ip> -p <port> -d <database> -U <username>
- set password:
\password <role_name>
- list user (roles):
\du
- list user (roles) with passwords to check if they are set:
select * from pg_shadow;
- list databases:
\l
- delete db:
DROP DATABASE <db_name>;
- delete role:
DROP ROLE <role_name>;
Create User / Database
CREATE USER <username> WITH PASSWORD '<password>';
- https://www.postgresql.org/docs/8.0/sql-createuser.html
CREATE DATABASE <db_name> OWNER <username>;
- https://www.postgresql.org/docs/9.0/sql-createdatabase.html
Links
14 - Sphinx
Links
- Sphinx: https://www.sphinx-doc.org/
- Sphinx GitHub: https://github.com/sphinx-doc/sphinx/
- reStructuredText (reST): https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
- reStructuredText Primer: https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html
- Configuration: https://www.sphinx-doc.org/en/master/usage/configuration.html
- Table of Contents: https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#table-of-contents
- Cross-referencing Python objects
Extensions and Themes
- ABlog: https://ablog.readthedocs.io/
- MyST - Markedly Structured Text: https://myst-parser.readthedocs.io/
- MyST-NB: https://myst-nb.readthedocs.io/
- markdown-it-py: https://markdown-it-py.readthedocs.io/
- recommonmark: https://recommonmark.readthedocs.io/
- Napoleon: https://sphinxcontrib-napoleon.readthedocs.io/
- Read the Docs Sphinx Theme: https://sphinx-rtd-theme.readthedocs.io/
- PyData Sphinx Theme: https://pydata-sphinx-theme.readthedocs.io/
MyST Syntax
- add a link to a local PDF or other file - source
{download}`text <_static/reference.pdf>`
May.la Installation
- create repo on GitHub and clone it
- change into the repo directory
- run
sphinx-quickstart
- say yes here:
You have two options for placing the build directory for Sphinx output.
Either, you use a directory "_build" within the root path, or you separate
"source" and "build" directories within the root path.
> Separate source and build directories (y/n) [n]:
- delete make.bat - we do not work with Windows
- config git with
git config pull.rebase true
- install https://sphinx-rtd-theme.readthedocs.io/en/latest/
- install https://myst-nb.readthedocs.io/en/latest/
- add it to extensions
- turn off notebook building:
jupyter_execute_notebooks = "off"
- turn off prev_next_buttons:
html_theme_options = {
"prev_next_buttons_location": None,
}
Commands
- convert reStructuredText to Markdown:
pandoc -s -t commonmark -o <target>.md <source>.rst
15 - Tor
Links
- https://www.torproject.org/
- Relay Operations: https://community.torproject.org/relay/
- Middle/Guard relay: https://community.torproject.org/relay/setup/guard/
- Relay Post-install and good practices: https://community.torproject.org/relay/setup/post-install/
- Tor @ Arch Linux: https://wiki.archlinux.org/title/Tor
Commands
- see log:
journalctl -e -u tor@default
- restart tor:
systemctl restart tor@default
- command-line Tor monitor:
nyx
Config
Config is stored at /etc/tor/torrc.
Example middle / guard relay config
Nickname my_nickname
ContactInfo mail _at_ host.com
ORPort 443
ExitRelay 0
SocksPort 0
# this does not work with AccountingMax
DirPort 9030
RelayBandwidthRate 9 MB
RelayBandwidthBurst 10 MB
MyFamily identity_key_fingerprint_01,identity_key_fingerprint_02
Bridge config
A bridge helps censored users connect to the Tor network.
Do not specify MyFamily for bridge configs.
Backup
After installation and start of the tor daemon it is a good idea to make a backup of your relay's long-term identity keys. They are located at /var/lib/tor/keys. Best would be to back up all of /var/lib/tor.
16 - Visual Studio Code
Links
Settings
- disable minimap:
Editor -> Minimap
- do not open last project when VSCode is opened:
Window -> Restore Windows: none
- also see https://stackoverflow.com/a/36797180/271118
- trim trailing whitespace:
Text Editor -> Files -> Trim Trailing whitespace
- debug all code - not just your own: disable
Extensions -> Python -> Debug Just My Code
- change font:
Editor: Font Family -> prepend "'Source Code Pro', " for example
- change text size of UI:
Window: Zoom Level
- show vertical ruler for max line length:
Editor: Rulers
- also see https://stackoverflow.com/a/29972073
- auto save:
Text Editor -> Files -> Auto Save -> "onFocusChange"
- show modified settings:
open settings -> click "..." (top right) -> select "Show modified settings"
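Most of the UI settings above correspond to entries in settings.json; a sketch with example values:

```json
{
  "editor.minimap.enabled": false,
  "window.restoreWindows": "none",
  "files.trimTrailingWhitespace": true,
  "editor.fontFamily": "'Source Code Pro', Menlo, monospace",
  "window.zoomLevel": 1,
  "editor.rulers": [79],
  "files.autoSave": "onFocusChange"
}
```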
Extensions
- Python specific
- Python: https://marketplace.visualstudio.com/items?itemName=ms-python.python
- Pylance: https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance
- Python Test Explorer for Visual Studio Code: https://marketplace.visualstudio.com/items?itemName=LittleFoxTeam.vscode-python-test-adapter
- Python Docstring Generator: https://marketplace.visualstudio.com/items?itemName=njpwerner.autodocstring
- Jupyter: https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter
- Visual Studio IntelliCode: https://marketplace.visualstudio.com/items?itemName=VisualStudioExptTeam.vscodeintellicode
- Bookmarks: https://marketplace.visualstudio.com/items?itemName=alefragnani.Bookmarks