
[Savannah-cvs] [685] fsf tools pages migration


From: iank
Subject: [Savannah-cvs] [685] fsf tools pages migration
Date: Wed, 6 Dec 2023 15:05:21 -0500 (EST)

Revision: 685
          
http://svn.savannah.gnu.org/viewvc/?view=rev&root=administration&revision=685
Author:   iank
Date:     2023-12-06 15:05:18 -0500 (Wed, 06 Dec 2023)
Log Message:
-----------
fsf tools pages migration

Modified Paths:
--------------
    trunk/sviki/fsf.mdwn

Added Paths:
-----------
    trunk/sviki/fsf/tools/Crypto-keys.mdwn
    trunk/sviki/fsf/tools/IRR.mdwn
    trunk/sviki/fsf/tools/LUKS.mdwn
    trunk/sviki/fsf/tools/OBS.mdwn
    trunk/sviki/fsf/tools/anonomize-log-ips.mdwn
    trunk/sviki/fsf/tools/apache.mdwn
    trunk/sviki/fsf/tools/auditd.mdwn
    trunk/sviki/fsf/tools/awstats.mdwn
    trunk/sviki/fsf/tools/bash.mdwn
    trunk/sviki/fsf/tools/bind.mdwn
    trunk/sviki/fsf/tools/db.mdwn
    trunk/sviki/fsf/tools/decisions.mdwn
    trunk/sviki/fsf/tools/dig.mdwn
    trunk/sviki/fsf/tools/edward.mdwn
    trunk/sviki/fsf/tools/exim.mdwn
    trunk/sviki/fsf/tools/fail2ban.mdwn
    trunk/sviki/fsf/tools/ftp.mdwn
    trunk/sviki/fsf/tools/gnupg.mdwn
    trunk/sviki/fsf/tools/journalctl.mdwn
    trunk/sviki/fsf/tools/kiwiirc.mdwn
    trunk/sviki/fsf/tools/kvm.mdwn
    trunk/sviki/fsf/tools/local-vm.mdwn
    trunk/sviki/fsf/tools/mediawiki.mdwn
    trunk/sviki/fsf/tools/member-card-builder.mdwn
    trunk/sviki/fsf/tools/munin.mdwn
    trunk/sviki/fsf/tools/mydumper-myloader.mdwn
    trunk/sviki/fsf/tools/nagios.mdwn
    trunk/sviki/fsf/tools/netcat.mdwn
    trunk/sviki/fsf/tools/onion_service.mdwn
    trunk/sviki/fsf/tools/openscap.mdwn
    trunk/sviki/fsf/tools/openssl.mdwn
    trunk/sviki/fsf/tools/postgresql.mdwn
    trunk/sviki/fsf/tools/privoxy.mdwn
    trunk/sviki/fsf/tools/prometheus.mdwn
    trunk/sviki/fsf/tools/pwgen.mdwn
    trunk/sviki/fsf/tools/rsync.mdwn
    trunk/sviki/fsf/tools/siege.mdwn
    trunk/sviki/fsf/tools/smartctl.mdwn
    trunk/sviki/fsf/tools/split.mdwn
    trunk/sviki/fsf/tools/ssh.mdwn
    trunk/sviki/fsf/tools/stress.mdwn
    trunk/sviki/fsf/tools/sysrq.mdwn
    trunk/sviki/fsf/tools/systemd.mdwn
    trunk/sviki/fsf/tools/tor.mdwn
    trunk/sviki/fsf/tools/tor_usage.mdwn
    trunk/sviki/fsf/tools/ufw.mdwn
    trunk/sviki/fsf/tools/yourls.mdwn

Added: trunk/sviki/fsf/tools/Crypto-keys.mdwn
===================================================================
--- trunk/sviki/fsf/tools/Crypto-keys.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/Crypto-keys.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,256 @@
+[[!toc  levels=2]]
+
+
+The following instructions should work for devices like Yubikey or Nitrokey. Note that some packages or settings use the name yubikey but should be vendor-neutral for U2F devices.
+
+## Yubikey specific settings
+
+First, configure the key as an OTP/U2F/CCID composite device (mode 6):
+
+    ykpersonalize -m6 -y
+
+The integration with LUKS, PAM, and Abrowser requires the device to be set in challenge/response mode. To set up yubikey configuration slot 2 for that use, run:
+
+    $ ykpersonalize -v -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible -ochal-btn-trig
+
+This setting makes the key wait for you to touch it before replying to the challenge. Save the output of the command, as the secret can be used to recreate the key if you lose it. Slot 1 is still available to be configured in any other setting you may need (by default, OTP).
+
+You should set an access code on the key; you can do so with these commands:
+
+    $ CODE=$(openssl rand -hex 6)
+    $ echo $CODE
+    $ ykpersonalize -2 -oaccess=$CODE
+
+Note: save $CODE somewhere safe. If you do this step, you should then add `-c $CODE` to any future ykpersonalize commands, or you will get a write error.
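+
+For example, a later ykpersonalize call would include the access code like this (the flags here are just an illustration based on the slot 2 setup above):
+
+    $ ykpersonalize -v -2 -ochal-resp -ochal-hmac -ohmac-lt64 -c $CODE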
+
+## Integration with LUKS
+
+Mostly taken from https://web.archive.org/web/*/https://www.howtoforge.com/ubuntu-two-factor-authentication-with-yubikey-for-harddisk-encryption-with-luks
+
+This will add a key in slot 7 that requires using a password in combination with the key. You can have other passwords in the other slots that don't require a key, but they should be very strong.
+
+    # apt-get install yubikey-luks
+    # # Edit /usr/bin/yubikey-luks-enroll and set $DISK to the appropriate path
+    # yubikey-luks-enroll
+    # update-initramfs -u
+
+If you need to wipe the key in slot 7, you can run:
+
+    # cryptsetup luksKillSlot /dev/sdXX 7
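+
+To see which key slots are currently in use (a standard cryptsetup command, noted here for convenience):
+
+    # cryptsetup luksDump /dev/sdXX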
+
+You should replace your original slot 1 password with a very strong one; write it down and keep it in a safe place, for recovery in case you lose the key. This is the command to change the password:
+
+    # cryptsetup luksChangeKey /dev/XXXX
+
+## Integration with login (PAM / lightdm / gdm)
+
+Mostly taken from:
+
+ * https://web.archive.org/web/*/https://support.yubico.com/support/solutions/articles/15000011356-ubuntu-linux-login-guide-u2f
+ * https://metebalci.com/blog/using-u2f-at-linux-login/
+
+    # apt-get install libpam-u2f pamu2fcfg
+    $ mkdir ~/.config/Yubico -p
+    $ pamu2fcfg -opam://myhostname -ipam://myhostname > ~/.config/Yubico/u2f_keys
+
+When the key begins blinking, touch to confirm.
+
+If you have a backup key, also run this with each key inserted:
+
+    pamu2fcfg -opam://myhostname -ipam://myhostname -n >> ~/.config/Yubico/u2f_keys
+
+Add the following line to these files (listed below), after `@include common-auth`:
+
+    auth required pam_u2f.so cue nouserok origin=pam://myhostname appid=pam://myhostname
+
+ * /etc/pam.d/lightdm
+ * /etc/pam.d/sudo
+ * /etc/pam.d/login
+ * /etc/pam.d/mate-screensaver
+
+Note that it is possible to just add that line to /etc/pam.d/common-auth to affect all services, but doing so would prevent remote access to the machine (ssh, cups, samba, etc.). This is good practice on a laptop that is never intended to be accessed remotely, but it would break things on servers.
+
+To be able to log in if you lose the key, you can either create a user with a very strong password and add it to the sudo group, or set a very strong password for root. Keep the password safe! If the disk is also LUKS encrypted, refer to that section.
+
+### Passwordless sudo
+
+If you want to replace your sudo password with touching the key, you can use this line in /etc/pam.d/sudo before "@include common-auth":
+
+    auth [success=done new_authtok_reqd=done default=die] pam_u2f.so cue origin=pam://myhostname appid=pam://myhostname
+
+## Integration with Abrowser
+
+Go to about:config and make sure that security.webauth.u2f is set to true. This allows the key to be used for two-factor authentication on sites like GitLab, Nextcloud, and many others.
+
+You can now go to https://www.yubico.com/genuine/ to verify the key.
+
+## Integration with GPG
+
+https://github.com/drduh/YubiKey-Guide
+https://www.esev.com/blog/post/2015-01-pgp-ssh-key-on-yubikey-neo/
+
+(Ideally you want to do this in a live Trisquel session.)
+
+    # apt-get update
+    # apt-get install -y \
+     curl gnupg2 gnupg-agent \
+     cryptsetup scdaemon pcscd \
+     yubikey-personalization \
+     dirmngr \
+     secure-delete \
+     hopenpgp-tools \
+     pwgen \
+     rng-tools
+
+Harden gpg config
+
+<pre>
+$ mkdir -p ~/.gnupg
+cat << EOF >> ~/.gnupg/gpg.conf
+# https://github.com/drduh/config/blob/master/gpg.conf
+# https://www.gnupg.org/documentation/manuals/gnupg/GPG-Configuration-Options.html
+# https://www.gnupg.org/documentation/manuals/gnupg/GPG-Esoteric-Options.html
+# Use AES256, 192, or 128 as cipher
+personal-cipher-preferences AES256 AES192 AES
+# Use SHA512, 384, or 256 as digest
+personal-digest-preferences SHA512 SHA384 SHA256
+# Use ZLIB, BZIP2, ZIP, or no compression
+personal-compress-preferences ZLIB BZIP2 ZIP Uncompressed
+# Default preferences for new keys
+default-preference-list SHA512 SHA384 SHA256 AES256 AES192 AES ZLIB BZIP2 ZIP Uncompressed
+# SHA512 as digest to sign keys
+cert-digest-algo SHA512
+# SHA512 as digest for symmetric ops
+s2k-digest-algo SHA512
+# AES256 as cipher for symmetric ops
+s2k-cipher-algo AES256
+# UTF-8 support for compatibility
+charset utf-8
+# Show Unix timestamps
+fixed-list-mode
+# No comments in signature
+no-comments
+# No version in signature
+no-emit-version
+# Long hexadecimal key format
+keyid-format 0xlong
+# Display UID validity
+list-options show-uid-validity
+verify-options show-uid-validity
+# Display all keys and their fingerprints
+with-fingerprint
+# Display key origins and updates
+#with-key-origin
+# Cross-certify subkeys are present and valid
+require-cross-certification
+# Disable putting recipient key IDs into messages
+throw-keyids
+# Enable smartcard
+use-agent
+EOF
+</pre>
+
+Preload entropy
+
+    sudo rngd -r /dev/urandom
+
+Create a private key used only for certifying subkeys, with no expiration. Use a very strong password.
+
+<pre>
+gpg2 --expert --full-gen-key
+ Select (8) RSA (set your own capabilities)
+ Toggle sign and encryption capabilities off, leave certify only
+ Set length to 4096
+ Set the key to not expire
+ Use your full real name, first and last
+ For your main identity, it may be better to use @gnu.org instead of @fsf.org as that email would be kept forever.
+</pre>
+
+Create one key each for signing, encrypting, and authenticating (ssh). Set them to expire in a year. 2048 bits is a good enough size for these, and it will have better performance on the yubikey.
+Use a password you can remember; it will be used in combination with the yubikey.
+
+    gpg2 --expert --edit-key $KEYID
+
+At the gpg> prompt, run addkey for each one. Select 4 for the signing key, 6 for encryption, and 8 for auth (for this one, toggle capabilities until it only lists authentication).
+
+Verify keys:
+
+    gpg2 --list-secret-keys
+
+Add more identities
+
+    gpg2 --expert --edit-key $KEYID
+    > adduid
+
+Export/backup your keys
+
+    gpg2 --armor --export-secret-keys $KEYID > ~/.gnupg/mastersub.key
+    gpg2 --armor --export-secret-subkeys $KEYID > ~/.gnupg/sub.key
+
+Save .gnupg to encrypted storage (LUKS)
+
+<pre>
+dd if=/dev/zero of=encrypted.img bs=1M count=100
+cryptsetup luksFormat encrypted.img
+cryptsetup luksOpen encrypted.img crypt0
+mkfs.xfs /dev/mapper/crypt0
+mount /dev/mapper/crypt0 /media
+cp -a ~/.gnupg /media
+umount /media
+cryptsetup luksClose /dev/mapper/crypt0
+</pre>
+
+Transfer the keys into the yubikey. This may ask for your subkey passphrase. Repeat this for keys 1 to 3.
+
+<pre>
+gpg --edit-key $KEYID
+key 1
+keytocard
+1
+save
+quit
+</pre>
+
+### Updating subkeys
+
+Import your secret keys from the saved mastersub.key, then change expirations or do any other operations.
+
+<pre>
+gpg --import mastersub.key
+gpg --edit-key $KEYID
+key x
+expire
+keytocard
+save
+</pre>
+
+## Integration with SSH
+
+This requires having set up the GPG integration above.
+
+Add this to your ~/.gnupg/gpg-agent.conf:
+
+    enable-ssh-support
+    pinentry-program /usr/bin/pinentry
+    default-cache-ttl 60
+    max-cache-ttl 120
+
+Launch agent with
+
+    gpgconf --launch gpg-agent
+
+Add this to your .bashrc
+
+    export SSH_AUTH_SOCK=$HOME/.gnupg/S.gpg-agent.ssh
+
+After running that command, you can print your public key with:
+
+    ssh-add -L
+
+You can now use that public key the usual way with ssh/scp/etc.
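+
+For example, one way to install it on a remote server (a sketch; the user and host are placeholders):
+
+    ssh-add -L | ssh user@example.org 'cat >> ~/.ssh/authorized_keys'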
+
+## Integration with OpenVPN
+
+TODO
+
+    openssl pkcs12 -export -out cert_key.p12 -inkey ruben3.key -in ruben3.crt -certfile ca.crt -nodes
+    yubico-piv-tool -s 9c -i cert_key.p12 -K PKCS12 -a import-key -a import-cert

Added: trunk/sviki/fsf/tools/IRR.mdwn
===================================================================
--- trunk/sviki/fsf/tools/IRR.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/IRR.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,19 @@
+# Internet Routing Registry (IRR)
+
+## Existing information
+
+We have an account with some contact IDs configured at <https://arin.net> (see
+'arin' in SPD). Also check out our <https://www.peeringdb.com/> account.
+
+To look up our `as-set` info:
+
+    whois -h rr.arin.net AS22989:AS-ALL
+
+## Guides for changing information
+
+We should prefer using the ARIN IRR because that is where our network is
+located. There are alternative DBs, but we should avoid using them, and use the
+more official database instead.
+
+* ARIN IRR: <https://www.arin.net/resources/manage/irr/userguide/>
+* AltDB (for documentation only): <https://fcix.net/whitepaper/2018/07/14/intro-to-irr-rpsl.html>

Added: trunk/sviki/fsf/tools/LUKS.mdwn
===================================================================
--- trunk/sviki/fsf/tools/LUKS.mdwn                             (rev 0)
+++ trunk/sviki/fsf/tools/LUKS.mdwn     2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,6 @@
+# Set up a self-mounting, encrypted partition
+
+These commands assume the partition name is passed in as `$1` (for example `sdb1`), as in a small helper script:
+
+    KEYFILE=/boot/key-$1
+    pwgen 128 -s -1 | xargs echo -n > $KEYFILE
+    echo YES | cryptsetup luksFormat -y --cipher aes-xts-plain64 --hash sha256 --use-urandom --key-size 256  /dev/$1 --key-file=$KEYFILE
+    echo "$HOSTNAME-crypt-$1 /dev/$1 /boot/key-$1 luks,discard,nofail" >> /etc/crypttab

Added: trunk/sviki/fsf/tools/OBS.mdwn
===================================================================
--- trunk/sviki/fsf/tools/OBS.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/OBS.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,123 @@
+# OBS Studio (formerly Open Broadcaster Software)
+
+## Installing through PPA on Trisquel
+
+[Official obs-studio PPA](https://launchpad.net/~obsproject/+archive/ubuntu/obs-studio)
+
+    sudo add-apt-repository ppa:obsproject/obs-studio
+    sudo apt-get update
+    sudo apt install -y obs-studio
+
+You may need additional dependencies.  If you need to pin additional packages, see the [relevant chromium apt pinning script on brains](https://brains.fsf.org/wiki/tools/chromium/).
+
+## Streaming to Icecast
+
+Related reference: <https://epir.at/2018/03/08/obs-icecast-streaming/>
+
+### Profile Configuration
+
+Enable `Studio Mode` button.
+
+File > Settings
+
+Below is a full list of settings on that page; if anything else appears
+in a newer version, add it to the list.
+
+#### General
+
+Check `Hide cursor over projectors`
+
+#### Output
+
+##### Recording tab
+
+Output Mode drop-down: `Advanced`
+
+* Type: `Custom Output (FFmpeg)`
+* FFmpeg Output Type: `Output to URL`
+* File path or URL: `icecast://source:PASSWORD@live-master.fsf.org:8000/stream-room-NAME.webm`
+  * Note: the password has an O as in Opal, not a zero.
+* Container format: `webm`
+* Muxer settings (if any): `content_type=video/webm cluster_time_limit=5100 cluster_size_limit=2M`
+* Video Bitrate: `1500 Kbps`
+* Keyframe interval (frames): `150`
+* Rescale Output: greyed out, not settable.
+* Show all codecs (even if potentially incompatible): checked
+* Video Encoder: `libvpx`
+* Video Encoder Settings (if any): `rt cpu-used=5 threads=2 error-resilient=1 crf=30 g=150 minrate=1.5M maxrate=1.5M`
+* Audio Bitrate: `96 Kbps`
+* Audio Track: check mark on 1.
+* Audio Encoder: `libvorbis`
+* Audio Encoder Settings (if any): empty
+
+#### Audio
+
+* Desktop Audio: (Set to your output device such that you see sound levels when there is audio playing on your system)
+  * When using the Behringer U-CONTROL UCA222 USB audio device, it comes up as `PCM2902 Audio Codec Analog Stereo`
+
+#### Video
+
+Base (Canvas) Resolution: `1280x720`
+
+#### Hotkeys
+
+* Start Recording: `F9`
+* Stop Recording: `F9`
+* Transition: `F8`
+* BBB Capture Switch to scene: `Super + 1`
+* Interlude Switch to scene: `Super + 2`
+* Technical Difficulties Switch to scene: `Super + 3`
+* Prerecord Switch to scene: `Super + 4`
+* Desktop Audio Mute: `F7`
+* Desktop Audio Unmute: `F7`
+
+### Scene Configuration
+
+Config files are in `/home/common/sysadmin/lp22/optLP-2022.tar.gz`
+
+Extract folder contents to `/opt/LP/`
+
+`Scene Collection > Import`
+
+Click on the `...` button.  Navigate to `/opt/LP/LP2022scenes.json` and click on the `Open` button.
+
+Click on the `Import` button.
+
+`Scene Collection > LP2022`
+
+In BBB Capture scene, double click on `Window Capture` and set it to
+your web browser that is on your other monitor.
+
+### Capturing non-fullscreen window
+
+In the Xcomposite settings, set top Y to 74 pixels to remove the menu bar.
+
+## Streaming
+
+Start with the Interlude scene.  Change the text in `/opt/LP/text.txt` to introduce your next talk.
+
+To stream, use the recording button, not the streaming button.  This can be set as a hotkey.
+
+Queue up your BBB capture scene and transition when the time is right.
+
+Say to the room, "I am going to count down from 5 and then you will be live.  Wait another few seconds before you begin.  5-4-3-2-1- You're live!" Unmute Desktop Audio with F7.  Transition with F8.
+
+To avoid the "You are now muted" clip and to remove the distraction of the streaming room audio stream for the speakers, mute your mic with pavucontrol.
+
+### Removing the LP logo during stream
+
+If the logo in the top-right is covering someone's face in a panel or something like that, you can remove it live.
+
+Right-click on the current scene, select `Duplicate`, keep the name the same but add "without logo" at the end, click on the eye icon beside `logo`, and then transition to the new scene.
+
+### Prerecorded videos
+
+Test prerecorded videos before starting the stream for the day.
+
+At this time, you cannot preview and correct the video dimensions until the video is playing live.  If the video has the wrong dimensions, such as only showing the top-left portion, queue up the same scene that is playing (you will get the red box that can be resized), correct the display size, and then transition to the same scene.  Luckily this only takes a few seconds and the video continues to play.
+
+
+### Things that caused problems
+
+In 2022, using the i3 wm: having a source of "Screen Capture (XSHM)" caused video chop. Instead, use "Window Capture (Xcomposite)".

Added: trunk/sviki/fsf/tools/anonomize-log-ips.mdwn
===================================================================
--- trunk/sviki/fsf/tools/anonomize-log-ips.mdwn                                (rev 0)
+++ trunk/sviki/fsf/tools/anonomize-log-ips.mdwn        2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,33 @@
+# anonymize log ips
+
+if you want to grab an ip addr from a log, hash it with a salt, then insert
+that back where the ip was, you can use a script like the following.
+
+log format:
+
+    ... client my-ip-addr#12345 ...
+
+script:
+
+    #! /usr/bin/env python3
+
+    import re
+    import sys
+    import hashlib
+
+    for line in sys.stdin:
+
+        match = re.search("(.* client )(.*)(#.*)", line)
+        if match is None:
+            # pass lines through unchanged if they don't contain a client ip
+            print(line, end='')
+            continue
+
+        start = match.groups()[0]
+        ip_addr = match.groups()[1]
+        end = match.groups()[2]
+
+        salted_addr = (ip_addr + " salt: change to some random string...").encode('utf-8')
+        hashed_addr = hashlib.sha256(salted_addr).hexdigest()
+
+        #print(ip_addr) # debug
+        print(start + hashed_addr + end)
+
+running the script:
+
+    ./anonymize.py < input.log | tee output.log

Added: trunk/sviki/fsf/tools/apache.mdwn
===================================================================
--- trunk/sviki/fsf/tools/apache.mdwn                           (rev 0)
+++ trunk/sviki/fsf/tools/apache.mdwn   2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,59 @@
+# apache guides
+
+## rewrites, redirects, aliases
+
+If you have a rewrite rule in apache for all of "/wiki", then that will take
+precedence over any redirects in place. That is because the rewrite module has
+overall precedence. So convert your redirect to a rewrite rule, like the
+following:
+
+    RewriteRule "^/wiki/En.swpat.org:About$" "/wiki/ESP:About" [R]
+
+It may also be possible to use 'Alias' instead of a rewrite rule, so you can
+make use of redirects rather than rewrites:
+
+    Alias /wiki /var/www/w/index.php
+
+vs
+
+    RewriteEngine On
+    RewriteRule ^/?wiki(/.*)?$ %{DOCUMENT_ROOT}/w/index.php [L]
+
+## redirect many HTTP domains to HTTPS while working with HSTS
+
+Background: When redirecting from HTTP to HTTPS, the domain names of both URLs
+should be the same. This allows browsers to respect our HSTS settings, which
+tell it to always go directly to the HTTPS site even if `https://` is not typed
+at the beginning of the URL.
+
+If we only redirect all port 80 connections for all virtual domains to a single
+target domain, the URLs will vary between the original and target domain names,
+so HSTS wouldn't work. Here is how to set this up in Apache:
+
+Enable the `rewrite` module:
+
+    a2enmod rewrite
+
+Add to your Apache virtual host config for port 80:
+
+        RewriteEngine on
+        RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
+
+Reload Apache:
+
+    systemctl reload apache2
+
+Example of testing:
+
+    rm .wget-hsts
+
+    wget emailselfdefense.com
+    wget emailselfdefense.com
+
+The second time, there should be a message about HSTS at the top of the output.
+Also check the flow of URLs that wget follows as it reaches the target domain.
+
+To confirm that the correct target is reached and that things are generally not
+broken, you can also clear the cache from your browser to remove any stale
+redirects from previous use, and then test out http://emailselfdefense.com in
+your browser, confirming that everything looks correct.

Added: trunk/sviki/fsf/tools/auditd.mdwn
===================================================================
--- trunk/sviki/fsf/tools/auditd.mdwn                           (rev 0)
+++ trunk/sviki/fsf/tools/auditd.mdwn   2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,32 @@
+# auditd
+
+auditd can be used to track for when a file is deleted, and provides some
+information about who or what process deleted that file. It can track other
+system or file changes as well.
+
+## Install the package:
+
+    apt install auditd
+
+## Using:
+
+Set up an audit tracker (set 'foo' to some custom string):
+
+    auditctl -w /var/www/html/some_file -p wra -k foo
+
+Search the logs for entries:
+
+    ausearch -k foo -i
+
+Get some other general stats:
+
+    aureport -ts today -i -x --summary
+
+Remove an audit tracker (notice the uppercase 'W'):
+
+    auditctl -W /var/www/html/some_file -p wra -k foo
+
+## Caveats:
+
+Some reads and writes may not be tracked. It's complicated stuff, but check the
+`-p` option in `man auditctl`.

Added: trunk/sviki/fsf/tools/awstats.mdwn
===================================================================
--- trunk/sviki/fsf/tools/awstats.mdwn                          (rev 0)
+++ trunk/sviki/fsf/tools/awstats.mdwn  2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,70 @@
+# awstats
+
+See cluestick [[/cluestick/Termite.fsf.org/]] for info on setting this up on old hosts.
+
+## Adding a machine to awstats
+
+In this example, the host is `vhost.fsf.org` and its IP address is `555.555.555.555`.
+
+1. Make a new VM and add it to either the `[apache]` or `[nginx]` category of
+   `ansible`'s `inventory` file.
+1. Login to termite. `ssh root@termite.fsf.org`
+1. Copy a known good config. `cp /etc/awstats/awstats.jshelter.org.conf /etc/awstats/awstats.vhost.fsf.org.conf`
+1. Change the host name. `sed -i 's/jshelter.org/vhost.fsf.org/g' /etc/awstats/awstats.vhost.fsf.org.conf`
+1. Change the IP address. `sed -i 's/209.51.188.122/555.555.555.555/g' /etc/awstats/awstats.vhost.fsf.org.conf`
+1. Test your config. `/usr/lib/cgi-bin/awstats.pl -config=vhost.fsf.org -update`
+1. Check that your files are in `/var/log/vhosts/555.555.555.555/`. `ls /var/log/vhosts/555.555.555.555/`
+1. Edit the webpage to point to your new server. `vim /var/www/html/index.html`
+1. Exit. `exit`
+1. Open the <https://termite.fsf.org/> page and view your new server stats! :D
+
+### On GNU Hope
+
+We don't need `logpp`, because apache and nginx automatically log to syslog, as
+long as you are using the logging configuration suggested by ansible. Note that
+changes to the web server's site config are not enforced by ansible, so make
+sure that you have the proper logging config enabled, something like the
+following.
+
+`/etc/syslog-ng/conf.d/00load-fsf-termite.conf` is automatically
+copied to apache and nginx vms.
+
+It needs to be `local4.info`.
+
+    CustomLog "|/usr/bin/logger -t apache -p local4.info" combined
+
+When migrating an old VM that uses awstats to a new machine, edit the
+`LogFile` directive on `termite.fsf.org` in
+`/etc/awstats/awstats.directory.fsf.org.conf`, for example, to point to the new
+log file.
+
+## Notes
+
+Our Awstats config ignores IPv4 requests that originate from the FSF office.
+
+## Documentation
+
+Awstats glossary: <https://awstats.sourceforge.io/docs/awstats_glossary.html>
+
+## Debugging
+
+Before running Awstats commands, ensure that Awstats isn't running. It runs
+every 10 minutes.
+
+    ps -ef | grep -i awstats
+
+If you want to know why some records are ignored by Awstats:
+
+    sudo -u www-data /usr/local/awstats-7.6/wwwroot/cgi-bin/awstats.pl -config=ryf.fsf.org -update -showdropped | less
+
+Also useful:
+
+    /usr/local/awstats-7.6/wwwroot/cgi-bin/awstats.pl -help | less
+
+Cron job:
+
+    sudo -u www-data /usr/local/bin/awstats-update.sh
+
+## Alerts
+
+    klaxon:/usr/local/bin/nagios_check_termite.sh

Added: trunk/sviki/fsf/tools/bash.mdwn
===================================================================
--- trunk/sviki/fsf/tools/bash.mdwn                             (rev 0)
+++ trunk/sviki/fsf/tools/bash.mdwn     2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,18 @@
+# BASH
+
+## tutorials / guides
+
+### General guides
+
+* <https://mywiki.wooledge.org/EnglishFrontPage>
+* `man bash`
+
+### Common Pitfalls
+
+* <https://mywiki.wooledge.org/BashPitfalls>
+
+### Ian's style guide
+
+<https://iankelling.org/git/?p=bash-template;a=tree>
+
+    git clone https://iankelling.org/git/bash-template

Added: trunk/sviki/fsf/tools/bind.mdwn
===================================================================
--- trunk/sviki/fsf/tools/bind.mdwn                             (rev 0)
+++ trunk/sviki/fsf/tools/bind.mdwn     2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,197 @@
+# bind (DNS name server)
+
+This is mostly FSF internal; a few volunteers have read access to bind.
+
+## Cloning bind
+
+    git clone root@ns1.gnu.org:/etc/bind
+
+## Install dependencies
+
+    sudo apt install -y bind9-utils
+
+## Adding a new entry
+
+This workflow is easiest to describe with a concrete example. In our
+example we will create an address for `mwikiserver2p.libreplanet.org`.
+Adjust as necessary.
+
+* To find an unused address, first look in the `masters/*.arpa` files to find
+  an address that is no longer used. Unused addresses are ones that start
+  with `;`: previously decommissioned machines, or development instances
+  that we no longer need. Some research may be required to make sure
+  that the hostname is really not used. Start with `ping` and look at
+  [[/hosts/virtual/]], and maybe search for the hostname in gluestick and
+  brains.
+  * If you have extra time, comment out unused entries with `;` to speed up
+    this step the next time.
+* Edit the chosen line in the `.arpa` file with the new address name. In this
+  example, we are using `masters/db.0-24.188.51.209.in-addr.arpa` which
+  is the address space `209.51.188.0/24` in reverse. `242` was available so
+  we used that. This means that `209.51.188.242` is the IPv4 address for
+  `mwikiserver2p.libreplanet.org`
+
+    vim masters/db.0-24.188.51.209.in-addr.arpa
+
+* Notice that there is a trailing period. The line would look like this:
+
+    242             PTR     mwikiserver2p.libreplanet.org.
+
+* Edit the associated domain file `masters/db.libreplanet.org`.
+
+    vim masters/db.libreplanet.org
+
+* Here is an example of what the `A` (IPv4) and `AAAA` (IPv6) addresses
+  would look like:
+
+    mwikiserver2p   A       209.51.188.242
+    mwikiserver2p   AAAA    2001:470:142:5::242
+
+* Now we need to update the zones. We have `update-zone` scripts to
+  make sure this process works correctly. The scripts take an abbreviation
+  of the file names that we used previously. Basically, the same argument,
+  but without `masters/db.` at the beginning. Start with the `.arpa` file.
+
+    ./update-zone 0-24.188.51.209.in-addr.arpa
+
+* Once run, there will be a git commit text entry. Edit the top line to state
+  the change you are making. Save and quit vim with `:wq` in you are
+  unfamiliar. The program ends with a feed of the DNS queries. If there
+  is no output or a stream of errors, something is wrong. Use `CTRL` + `c`
+  to exit the text stream.
+* Repeat the same workflow for the domain file.
+
+    ./update-zone libreplanet.org
+
+* Now, you can assign the machine or VM to these new addresses.
+
+## Adding a new domain name
+
+The new file in masters/ should be similar to others, but may have a lower
+starting serial number. `db.patentabsurdity.com` is a good starting point.
+
+    cp masters/db.patentabsurdity.com masters/db.example.com
+    vim masters/db.example.com
+    git add masters/db.example.com
+
+    ./update-zone example.com
+
+    # Important: this deletes all untracked files, like `*~`
+    # otherwise `fsf.org~`, etc, will get in our configs
+    git clean -fxd
+    ./update-slaves
+
+    git add -p
+    git commit
+    git push
+
+Now configure the name servers for the domain in Gandi, so they point to
+`ns(1|2|3).gnu.org`, etc.
+
+Do not forget to set up [[email delivery|/Tickets/email/mail.fsf.org-aliases]],
+especially if we are migrating a domain from another hosting provider.
+
+## Adding a new name server
+
+* Make sure it exists in Gandi in the external name servers list for each
+  domain.
+* If needed, update the ns records of each domain
+* Add the new nameserver's IP to "allow-transfer" in named.conf; git commit and
+  git push.
+* Install bind9 on the new name server.
+* Check that `./update-slaves` has the right name servers and run it.
+
+## Logging
+
+It is possible to temporarily log DNS lookups on ns1. This type of logging is
+very verbose, and can reach **over 500 MB in 12 hours**, so the files need to
+be frequently pruned.
+
+The `/var/log/named/` directory needs to exist and be owned by / writable for
+the `bind` user, otherwise the syslog will get flooded when bind tries to
+rotate files.
+
+    chown -R bind:bind /var/log/named/
+
+To make the change, clone the `/etc/bind/` repo from `ns1.gnu.org`, and edit
+the `named.conf` file:
+
+    logging {
+
+        ...
+
+        channel querylog {
+            file "/var/log/named/querylog" versions 600 size 20m;
+            print-time yes;
+            print-category yes;
+            print-severity yes;
+            severity debug 3;
+        };
+        category "queries" { "querylog"; };
+        //category "queries" { "info"; }; //normal logging level
+
+        ...
+
+    };
+
+## Updating TTL
+
+Time To Live (TTL) is the time a DNS entry will stay active before expiring.
+Once expired, other name servers will fetch again when requested. Best practice
+seems to indicate that 30 minutes or 1 hour is a good middle ground for what
+this value should be set to under normal conditions.
+
+Last time we changed TTL is documented in the [[!rt 1931475]] ticket.
+
+Example workflow changing TTL from 30 minutes to 1 hour:
+
+Change TTL for all domains at once. Note: Look at the source of this document
+for proper formatting.
+
+    cd bind
+    grep -R "\$TTL[[:blank:]]1800" masters/ | sed 's/:.*$//g' | sort -u | xargs sed -i --follow-symlinks 's/\$TTL[[:blank:]]1800/\$TTL 3600/'
+
+Explanation of parts:
+
+* `grep -R "\$TTL[[:blank:]]1800" masters/` finds all files that have $TTL 1800
+  with any number of spaces or tabs between the two values.
+* `sed 's/:.*$//g'` excludes the matching part leaving only the file name.
+* `sort -u` sorts the list and only returns unique values.
+* `xargs sed -i --follow-symlinks 's/\$TTL[[:blank:]]1800/\$TTL 3600/'` uses
+  the output as files to work on and replaces the first instance of $TTL 1800
+  with $TTL 3600 and saves in-place. `--follow-symlinks` keeps sed from
+  clobbering symlinks, which it dangerously does by default.
+
+Verify the output with git.
+
+    git status
+
+Create the list of zones that needed updating.
+
+    git status | grep modified | sed 's/^.*db.//g' | tr '\n' ' ' && echo
+
+No entry should be anything other than `modified`.
+
+I took that output and made this command to update all changed zones at once:
+
+    ./update-zone $(git status | grep modified | sed 's/^.*db.//g' | tr '\n' ' ')
+
+If the commands were not matching the remaining entries for some reason, use
+this command to find any stragglers that did not have a TTL of 1 hour:
+
+    grep -R "\$TTL" masters/ | grep -v 3600 | sed 's/:.*$//g' | xargs vim
+
+Verify the output with git status and update zones.
+
+Check that it succeeded with `dig`. This example checks the `ns1.gnu.org` name
+server for the `www.gnu.org` address.
+
+    dig www.gnu.org @ns1.gnu.org
+
+The second value is the current TTL.
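+
+The answer section of the output will look something like this (the address shown is just an illustration):
+
+    www.gnu.org.            3600    IN      A       203.0.113.1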
+
+## Resources
+
+* [Bind9 - Debian Wiki](https://wiki.debian.org/Bind9)
+* [Bind - Arch Wiki](https://wiki.archlinux.org/title/BIND)

Added: trunk/sviki/fsf/tools/db.mdwn
===================================================================
--- trunk/sviki/fsf/tools/db.mdwn                               (rev 0)
+++ trunk/sviki/fsf/tools/db.mdwn       2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,21 @@
+# Adding entries to a Berkeley DB file
+
+These instructions assume that you are working with addresses.db.
+
+    cp addresses.db addresses.db.bak
+
+    db4.8_dump -p addresses.db > addresses.txt
+
+    cp addresses.txt addresses.new.txt
+    # edit addresses.new.txt by copying lines and editing addresses.
+
+    rm -f addresses.new.db # to avoid merging changes with old changes
+    db4.8_load addresses.new.db < addresses.new.txt
+
+    db4.8_dump -p addresses.new.db # check
+    rm addresses.txt addresses.new.txt
+
+    cat addresses.new.db > addresses.db
+    rm addresses.new.db
+
+    systemctl reload exim4

Added: trunk/sviki/fsf/tools/decisions.mdwn
===================================================================
--- trunk/sviki/fsf/tools/decisions.mdwn                                (rev 0)
+++ trunk/sviki/fsf/tools/decisions.mdwn        2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,59 @@
+# Decisions
+
+Making decisions can be hard.
+
+## Research method
+
+Develop pros and cons for each choice and gather consensus.
+
+## Coin and Dice toss method
+
+Sometimes all the results are equal and we just need to pick.
+
+If two or more sysadmins need to decide by coin toss or dice rolls, we can use python.
+
+All interested parties login to the same server such as vault.
+
+    ssh root@vault.office.fsf.org
+
+The first person runs `tmux`.
+
+    tmux
+
+All others attach to the tmux session.
+
+    tmux a
+
+Start interactive python.
+
+    python
+
+Import randrange.
+
+    from random import randrange
+
+For a coin toss, enter this line in and wait for a call to be made.
+
+    print("Heads!") if randrange(0,2) else print("Tails!")  # Call it.
+
+For more complex issues, it is better for everyone to roll a 20-sided die.  The
+DnD and D20 rulesets make higher rolls win.  The CoC ruleset makes lower rolls
+win.  D20 is the most common, but make sure everyone agrees first.  `20` can be
+replaced with another number to represent a different die.
+
+    randrange(1, int(20 + 1))
+
+Coin and dice are based on
+[roll.py](https://github.com/TechnologyClassroom/dice-mechanic-sim/blob/master/roll.py).
+
+Quit interactive python.
+
+    quit()
+
+Exit tmux.
+
+    exit
+
+Exit ssh.
+
+    exit

Added: trunk/sviki/fsf/tools/dig.mdwn
===================================================================
--- trunk/sviki/fsf/tools/dig.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/dig.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,7 @@
+# DNS
+
+## dig
+
+Look at the serial number to see if the zones are being transferred:
+
+    dig @209.51.188.164 gnu.org soa +short

Added: trunk/sviki/fsf/tools/edward.mdwn
===================================================================
--- trunk/sviki/fsf/tools/edward.mdwn                           (rev 0)
+++ trunk/sviki/fsf/tools/edward.mdwn   2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,34 @@
+# EmailSelfDefense.org gpg bot 'Edward'
+
+## repos
+
+* `git@vcs.fsf.org:edward.git`
+* `https://vcs.fsf.org/git/edward.git`
+
+## adding a new language
+
+1. ask the translator to translate `en.py` (the parts after the colons). if
+   they want attribution, they should add their name to the license header. see
+   commit `c68461f1d6c20adcfd3a9acece871b2d9af928fb` for an example. the
+   short two letter name of the language is used in the new edward email
+   address below.
+1. add to `/etc/aliases` on fencepost.
+1. add to `/etc/aliases-fsf.org` on mail.fsf.org.
+1. translate the string "Edward, the GPG Bot", or use part of the string from
+   the translation of the signature.
+1. update edward's public key in the `edward` user on fencepost for the new
+   email address and translated name. you don't need a password to change it.
+   the secret key lives in `~/.gnupg` for that user.
+1. export an ascii-armored version of that public gpg key. add it to
+   <https://agpl.fsf.org/emailselfdefense.fsf.org/> and upload to popular key
+   servers.
+
+## publishing an update to source code
+
+***This info is outdated. vcs.fsf.org has a hook that automatically creates tarballs on agpl.fsf.org.***
+
+To publish new versions of the code on agpl.fsf.org:
+
+    ssh agpl.fsf.org mv /var/www/agpl.fsf.org/emailselfdefense.fsf.org/edward/CURRENT /var/www/agpl.fsf.org/emailselfdefense.fsf.org/edward/PREVIOUS-$(date +%Y%m%d)
+    ssh agpl.fsf.org mkdir /var/www/agpl.fsf.org/emailselfdefense.fsf.org/edward/CURRENT
+    scp edward.tar.gz edward.tar.gz.asc agpl.fsf.org:/var/www/agpl.fsf.org/emailselfdefense.fsf.org/edward/CURRENT

Added: trunk/sviki/fsf/tools/exim.mdwn
===================================================================
--- trunk/sviki/fsf/tools/exim.mdwn                             (rev 0)
+++ trunk/sviki/fsf/tools/exim.mdwn     2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,11 @@
+When updating a config, first run:
+
+```
+update-exim4.conf --check
+```
+
+When using an alternate config, use -d to pass the config dir.
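+
+For example, something like this (a sketch; the directory name is just a placeholder):
+
+```
+update-exim4.conf --check -d /etc/exim4-test
+```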
+
+Then do `systemctl reload exim4`. Whenever restarting an exim daemon,
+check to see if the old instance actually goes away. Sometimes it gets
+left around, which will lead to problems.
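+
+One way to check for a leftover daemon after a restart (a generic process check, nothing exim-specific):
+
+```
+ps -ef | grep '[e]xim4'
+```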

Added: trunk/sviki/fsf/tools/fail2ban.mdwn
===================================================================
--- trunk/sviki/fsf/tools/fail2ban.mdwn                         (rev 0)
+++ trunk/sviki/fsf/tools/fail2ban.mdwn 2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,46 @@
+# fail2ban
+
+[[More fail2ban docs|/Tickets/shop/fail2ban/]]
+
+## Unban an IP
+
+Login to the server in question.
+
+Check the `fail2ban.log`.
+
+    less /var/log/fail2ban.log
+
+Log looks like this:
+
+    2021-04-12 12:59:51,344 fail2ban.filter         [779]: INFO    [apache-civi-newu] Found 111.222.111.222 - 2021-04-12 12:59:50
+
+The rule name is in [brackets] followed by the IP address and a redundant timestamp.
+
+You can find more information by comparing the `syslog`.
+
+    less /var/log/syslog
+
+You can see how a rule works by checking the config for that rule.
+
+    less /etc/fail2ban/filter.d/apache-civi-newu.conf
+
+Un-ban syntax:
+
+    fail2ban-client set RULENAME unbanip IPADDRESS
+
+To un-ban an ssh IP addr:
+
+    fail2ban-client set ssh unbanip 74.94.156.211
+
+To un-ban like the above example:
+
+    fail2ban-client set apache-civi-newu unbanip 111.222.111.222
+
+## old fail2ban
+
+There may not be an unban command for older versions of fail2ban. So you can
+use commands like these:
+
+    iptables -L -v -n
+
+    iptables -D fail2ban-courierauth -s 74.94.156.218 -j DROP

Added: trunk/sviki/fsf/tools/ftp.mdwn
===================================================================
--- trunk/sviki/fsf/tools/ftp.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/ftp.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,20 @@
+# FTP
+
+You can also use the Filezilla GUI (default user name and port are 'anonymous'
+and '21' when using "Quick Connect").
+
+log in:
+
+    ftp photoupload.fsf.org 21
+    > user name: anonymous
+    > password: <empty>
+
+enter ftp passive mode:
+
+    > pass
+
+use the ftp server:
+
+    > ls
+    > cd upload-here
+    > put foo

Added: trunk/sviki/fsf/tools/gnupg.mdwn
===================================================================
--- trunk/sviki/fsf/tools/gnupg.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/gnupg.mdwn    2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,49 @@
+# GnuPG / GPG
+
+## keyservers
+
+Note that these key servers may contain keys spammed with bogus signatures that might break your `~/.gnupg` dir. It's best not to use them if you can avoid it. Before importing or refreshing keys, it's important to make a backup of your `~/.gnupg` directory for easy recovery. [More info here](https://gist.github.com/rjhansen/67ab921ffb4084c865b3618d6955275f), and a [possible recovery process](https://gist.github.com/Disasm/dc44684b1f2aa76cd5fbd25ffeea7332).
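+
+For example, a quick backup before touching keyservers (a sketch):
+
+    cp -a ~/.gnupg ~/.gnupg.bak-$(date +%F)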
+
+* keyring.debian.org
+* keys.gnupg.net
+* pool.sks-keyservers.net
+* keyserver.ubuntu.com
+* pgp.mit.edu (generally very slow)
+
+## long key ID strings by default
+
+Add this to `~/.gnupg/gpg.conf`:
+
+    keyid-format 0xlong
+
+## WKD
+
+Web Key Directory (WKD) allows Thunderbird and other email clients to fetch PGP
+keys from our servers rather than keys.openpgp.org. This allows us to work
+around a [Thunderbird
+bug](https://bugzilla.mozilla.org/show_bug.cgi?id=1721668) for now. WKD is easy
+to set up as a static directory under Nginx or Apache.
+
+See the generic guide <https://gist.github.com/kafene/0a6e259996862d35845784e6e5dbfc79>.
+
+    mkdir wkd ; cd wkd
+
+    # get the main hash that you want
+    gpg --with-wkd-hash --fingerprint edward-en@fsf.org
+
+    gpg --no-armor --export edward-en@fsf.org > eix5xw7cppcb1wwpq69rhseb31rky1uy
+
+    gpg --with-wkd-hash --fingerprint edward-en@fsf.org | grep -E -e "fsf.org$" | sed -e "s/\s*//;s/@fsf.org$//" | xargs -I {} ln -s eix5xw7cppcb1wwpq69rhseb31rky1uy {}
+
+Sync the files to the Web server under a path like
+`fsf.org:/var/www/wkd/.well-known/openpgpkey/hu/`.
+
+For Nginx:
+
+    location /.well-known/openpgpkey {
+      root /var/www/wkd;
+    }
+
+It's okay if fsf.org redirects to a URL on www.fsf.org.
+
+Test: `gpg --auto-key-locate clear,wkd --locate-keys edward-en@fsf.org`

Added: trunk/sviki/fsf/tools/journalctl.mdwn
===================================================================
--- trunk/sviki/fsf/tools/journalctl.mdwn                               (rev 0)
+++ trunk/sviki/fsf/tools/journalctl.mdwn       2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,54 @@
+# using journalctl
+
+Here is some documentation for `journalctl` that should make your life much
+easier. The fact that it uses binary logs means that you can filter and query
+those logs with commands. This allows you to show all kernel and dmesg logs
+that are 'error' or above, add time filters, or query a few service logs
+simultaneously while ignoring everything else. Using journalctl is much more
+powerful than looking at syslog with a text editor. You can also combine
+journalctl with grep, just like any other tool.
+
+## commands
+
+Print everything
+
+    journalctl
+
+Start at the end. iank: I almost always add this
+
+    journalctl -e
+
+Kernel / dmesg logs from this boot, 'err' level and above
+
+    journalctl -b 0 -k -p err
+
+Interleaved apache and php-fpm logs from previous boot, at level 'info' or
+above. Using multiple '-u' options is useful for diagnosing errors related to
+interactions between processes.
+
+    journalctl -b -1 -u apache2 -u php-fpm -p info
+
+All logs in time window, level 'err' and above
+
+    journalctl --since 2020-04-13 --until "1 hour ago" -p err
+
+All logs from pid 1024. This is not as comprehensive as filtering by service
+name, because services also include child processes.
+
+    journalctl _PID=1024
+
+Logs for commands run via bash
+
+    journalctl /bin/bash
+
+Print last 100 lines of journal
+
+    journalctl -n 100
+
+Follow journalctl output
+
+    journalctl -f
+
+There are also commands for limiting journal file sizes on disk, and for
+formatting data as json, etc. ([source for most of these
+commands](https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs))
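+
+For example, a few of those (all standard journalctl options):
+
+    journalctl --disk-usage
+    journalctl --vacuum-size=500M
+    journalctl -u apache2 -o json-pretty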

Added: trunk/sviki/fsf/tools/kiwiirc.mdwn
===================================================================
--- trunk/sviki/fsf/tools/kiwiirc.mdwn                          (rev 0)
+++ trunk/sviki/fsf/tools/kiwiirc.mdwn  2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,37 @@
+# KiwiIRC
+
+Used for IRC during LibrePlanet on libreplanet.org/20xx/live/ pages.
+
+Websites:
+
+- [site](https://kiwiirc.com/)
+- [git](https://github.com/kiwiirc/kiwiirc)
+- [Configuration options](https://github.com/kiwiirc/kiwiirc/wiki/Configuration-Options)
+
+## Start
+
+```
+tmux # If you are not already running it.
+ssh root@irc0d.libreplanet.org
+su -s /bin/bash irc
+tmux a
+# If that does not work use 'tmux' instead.
+cd ~/KiwiIRC/
+./kiwi start -c config.js -p kiwiirc.pid
+```
+
+Restart your tmux pane to disconnect and leave the tmux session open.
+
+CTRL+b `:respawn-pane -k`
+
+## Stop
+
+```
+ssh root@irc0d.libreplanet.org
+su -s /bin/bash irc
+tmux a
+cd ~/KiwiIRC/
+./kiwi stop -c config.js -p kiwiirc.pid
+exit # Exit from tmux
+exit # Exit from irc0d
+```

Added: trunk/sviki/fsf/tools/kvm.mdwn
===================================================================
--- trunk/sviki/fsf/tools/kvm.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/kvm.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,39 @@
+## Booting kvm without grub
+
+    cat << EOF > /tmp/grub.cfg
+    linux (hd0)/vmlinuz root=/dev/sda ro elevator=noop console=tty0 console=ttyS0,115200
+    initrd (hd0)/initrd.img
+    boot
+    EOF
+
+    grub-mkimage -v -C xz -c /tmp/grub.cfg  -O i386-pc  -o /var/lib/libvirt/images/grub-i386.bin  biosdisk ext2 linux xfs
+
+
+Then use grub-i386.bin as the kernel for that vm. A kernel needs to be installed in the vm, and symlinked as /vmlinuz and /initrd.img.
+No need to install grub in the vm, or to have /boot/grub/grub.cfg|menu.lst.
+
+## With LUKS automount
+
+This requires patches to grub2 from http://grub.johnlane.ie/. They are already applied to Trisquel's grub starting with 8.0.
+
+    cat << EOF > /tmp/grub.cfg
+    cryptomount -f (memdisk)/keyfile (hd0)
+    linux (hd0)/vmlinuz root=/dev/sda ro elevator=noop console=tty0 console=ttyS0,115200
+    initrd (hd0)/initrd.img
+    boot
+    EOF
+
+    dd if=/dev/zero of=/dev/shm/memdisk bs=1M count=1
+    mkdir /dev/shm/memdiskmount
+    mkfs.ext2 /dev/shm/memdisk
+    mount /dev/shm/memdisk /dev/shm/memdiskmount
+    echo -n "password" > /dev/shm/memdiskmount/keyfile
+    umount /dev/shm/memdiskmount
+
+    grub-mkimage -v -C xz -c /tmp/grub.cfg  -O i386-pc  -o /var/lib/libvirt/images/grub-i386.bin  biosdisk ext2 linux xfs normal luks help crypto cryptodisk zfscrypt gcry_sha512 echo cat memdisk -m /dev/shm/memdisk
+
+    rm /dev/shm/memdisk
+
+## Extract password from grub image
+
+If for some reason you don't have the password that's inside a grub image, use this script: https://github.com/msuhanov/grub-unlzma/blob/master/grub-guess and then run "strings" on the output. The password should be near the end of the output.

Added: trunk/sviki/fsf/tools/local-vm.mdwn
===================================================================
--- trunk/sviki/fsf/tools/local-vm.mdwn                         (rev 0)
+++ trunk/sviki/fsf/tools/local-vm.mdwn 2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,41 @@
+# running local virtual machines
+
+gnome boxes could be used to run an archived backup that is stored on monolith,
+but it seems to reject older operating system versions. ideally we can use
+virsh locally, without the added risk of running it with direct access from the
+internet.
+
+## install
+
+    apt install libvirt-clients  # provides the virsh command
+
+## prepare vm
+
+    dd if=/dev/zero of=example.img bs=4M count=5K oflag=sync status=progress  ## creates a 20 GB image file
+    mkfs.ext4 -L example-root example.img
+
+    sudo mount -o loop ~andrew/example.img /mnt/
+    sudo chown andrew:andrew /mnt/
+
+Sync over the files
+
+    sudo rsync -e "ssh -i /home/me/.ssh/id_rsa" -avhSAXP monolith.office.fsf.org:/mnt/restic/medea/snapshots/latest/ /mnt/
+
+    # or, if you're using GPG-Agent for SSH:
+    sudo SSH_AUTH_SOCK=/run/user/1000/gnupg/S.gpg-agent.ssh rsync -avhSAXP monolith.office.fsf.org:/mnt/restic/medea/snapshots/latest/ /mnt/
+
+    sudo mkdir -p /mnt/{sys,proc,tmp,dev,run,mnt}
+    sudo cp /mnt/boot/{vmlinuz,initrd}* ~/
+
+    sudo umount /mnt
+
+...
+
+Use kernel and initrd file names according to what you copied out of the root above.
+
+    qemu-system-x86_64 -m 2048 -enable-kvm -drive format=raw,file=example.img -kernel vmlinuz-3.13.0-165-generic -initrd initrd.img-3.13.0-165-generic -append root=/dev/sda
+
+You may need to enter maintenance mode, enter the root password from SPD, then
+`mount -o remount,rw /`, then edit `/etc/fstab` to use `/dev/sda` rather than
+the default encrypted target. Then update the initramfs, copy the new initrd
+out, and reboot.
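+
+A sketch of that last step (file names match the example above; adapt as needed):
+
+    # inside the VM
+    mount -o remount,rw /
+    update-initramfs -u
+
+    # then on the host, after shutting the VM down
+    sudo mount -o loop example.img /mnt/
+    cp /mnt/boot/initrd.img-3.13.0-165-generic ~/
+    sudo umount /mnt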

Added: trunk/sviki/fsf/tools/mediawiki.mdwn
===================================================================
--- trunk/sviki/fsf/tools/mediawiki.mdwn                                (rev 0)
+++ trunk/sviki/fsf/tools/mediawiki.mdwn        2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,126 @@
+# MediaWiki
+
+## Exporting pages
+
+Copy and paste the HTML table source of each listing generated by
+<http://cluestick/wiki/Special:AllPages>, except for the `Category` and `File`
+listings (put those in a separate file).
+
+Use regexes to extract page URLs from that text document. Remove the `/wiki/`
+prefix.
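+
+For example, something like this could produce the list (a sketch; it assumes the pasted table source was saved as `allpages.html`, and writes the `page-list` file used further below):
+
+    grep -o 'href="/wiki/[^"]*"' allpages.html | sed -e 's:^href="/wiki/::' -e 's:"$::' > page-list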
+
+Paste those into: <http://cluestick/wiki/Special:Export>. Download the full
+page history. Make an `xz` zip file to store as an archive. Paste the URLs into
+the same form and download, this time with no revision history. Use this format
+for extracting page data.
+
+## Extracting page data from XML
+
+    apt install python3-xmltodict pandoc
+
+Symlink the XML file (without revision history) to `cluestick.xml`.  Put
+the following code into a script:
+
+```
+#! /usr/bin/python3
+
+import os
+import re
+import json
+import string
+import xmltodict
+
+
+os.system('mkdir -p ./old-pages ./old-pages-md')
+
+with open('./cluestick.xml') as f:
+
+    parsed = xmltodict.parse(f.read())
+
+    wanted_data = parsed['mediawiki']['page']
+
+    for page in wanted_data:
+
+        title = page['title']
+        title = re.sub(r' ',  '_', title)
+        title = re.sub(r'/',  '_', title)
+        title = re.sub(r'\(', '_', title)
+        title = re.sub(r'\)', '_', title)
+
+        print(title)
+
+        try:
+            text = page['revision']['text']['#text']
+        except:
+            text = "Blank Page on Cluestick"
+
+        # fix last line on nico's user page
+        if title == "User:Ncesar":
+            text = text[:text.rfind('\n')] + "\n</pre>"
+
+        with open('./old-pages/' + title + '.wiki', 'w') as f:
+            f.write(text)
+
+        os.system('pandoc -r mediawiki ./old-pages/' + title + '.wiki -t markdown -o ./old-pages-md/' + title + '.md')
+```
+
+Run the script. It will generate markdown pages, but links will be broken,
+including special RT ticket number links and links to other wiki pages, given
+that once imported, they'll start with something like "/cluestick/".
+
+Fix URLs so they point to other pages under `/cluestick`:
+
+    for x in old-pages-md/* ; do pandoc -r markdown "$x" -t html -o old-pages-html/"$(basename -s .md "$x")".html ; done
+    for x in old-pages-html/* ; do sed -i -e "s:</a>:\n</a>:g" "$x" ; done
+    sed -i -e "/wikilink/ s:href=\":href=\"https\://gluestick.office.fsf.org/cluestick/:" *
+
+Update links that contain `/`, `(`, `)` to `_` (there was one error with this command):
+
+    grep -ri -E -e "[/()]" page-list > pages-with-special-chars
+    while read line ; do sed -i "/"$(echo "$line" | sed -e "s:/:\\\\/:")"/ s.$line.$(echo "$line" | sed -e "s:[/()]:_:g")." old-pages-html/* ; done < pages-with-special-chars
+
+Convert back to markdown:
+
+    for x in old-pages-html/* ; do pandoc -r html "$x" -t markdown -o 
old-pages-md-new/"$(basename -s .html "$x")".md ; done
+
+Convert broken code lines, etc. to indented code:
+
+    #! /usr/bin/python3
+
+    import os
+    import re
+    import json
+    import string
+    import xmltodict
+
+    os.system('mkdir -p ./old-pages-code-fixed')
+
+    for filename in os.listdir('./old-pages-md-new'):
+
+        with open('./old-pages-md-new/' + filename, 'r') as f:
+
+            text = ""
+            for line in f.readlines():
+
+                if (line.find('`') == 0) and ((line.rfind('`') == len(line) - 2) \
+                        or ((line.rfind('`') == len(line) - 3) and (line.rfind('\\') == len(line) - 2))):
+
+                    line = re.sub(r'^`', '', line)
+                    line = re.sub(r'`(\\)?$', '', line)
+                    line = "    " + line
+
+                line = re.sub(r'^\\$', '', line)
+
+                if (line.find('  ') == 0 and line[2] != ' '):
+
+                    line = re.sub(r'^  ', '    ', line)
+
+                text += line
+
+            with open('./old-pages-code-fixed/' + filename, 'w') as g:
+
+                g.write(text)
+
+Add header, copy to gluestick repo:
+
+    for x in old-pages-code-fixed/* ; do cat header.mdwn "$x" > ~/src/wikis/gluestick/cluestick/"$(basename "$x")" ; done

Added: trunk/sviki/fsf/tools/member-card-builder.mdwn
===================================================================
--- trunk/sviki/fsf/tools/member-card-builder.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/member-card-builder.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,102 @@
+# FSF member card builder
+
+## Trisquel ISO images
+
+When a new `_fsf.iso` image is made, Ruben will ping us on IRC to download the new version.
+
+The files are located at:
+<http://jenkins.trisquel.info/trisquel-images/>
+
+If you need a new image and Ruben is unavailable, the images are made with the `makeiso.sh` script found in this repository:
+
+<https://gitlab.trisquel.org/trisquel/makeiso>
+
+## git repo
+
+```
+git clone git@vcs.fsf.org:fsf-member-card-builder.git
+```
+
+## previously generated member card images
+
+Check `/home/common/sysadmin/membercard/` on tarantula. Upload completed images
+there when you're done making them. Also upload the sources tarball and the
+base fsf ISO made by Ruben.
+
+## Customizing installed media
+
+Edit the `install-extras.sh` script. You can also automatically generate a list
+of HTML tags for recent LP audio recordings with `index-generator.py`, so you
+can update the `index.html` file in the above repo.
+
+## Generating member cards
+
+Plug in a USB member card to your system. Wipe the entire disk with dd and
+/dev/zero. Follow the instructions in the readme of the git repo.
+
+```
+umount /dev/sdb*
+sudo dd if=/dev/zero of=/dev/sdb bs=4M
+
+...
+```
+
+Make sure that the newly created member card boots on LibreBoot with GRUB,
+SeaBIOS, and a proprietary BIOS. The easiest (but perhaps not fastest) way to
+check when you're trying things out is to boot to the syslinux menu with qemu:
+
+```
+qemu-system-x86_64 -enable-kvm -m 2G -hda /dev/sdb
+```
+
+Check the amount of free space on the drive, and update the HTML file that gets
+booted on the member card so the dd command says how much space to use for
+storage persistence.
+
+Check that the media URLs are correct on the booted system.
+
+If you did any experimenting to get things working, then for the final good
+version of the image, start afresh with a fully wiped member card.
+
+Once you're done creating the final version, fix the boot sector so the new
+main version is copied to the backup. (Might not be critical, but this helps
+users who fsck later).
+
+```
+sudo fsck.vfat /dev/sdb1
+```
+
+Then copy the disk image to your local filesystem before you mount it or make
+any other changes to the iso. Compute the sha256sum and compress the file:
+
+```
+dd if=/dev/sdb of=member-card-2023-08-29.img bs=4M status=progress
+sha256sum member-card-2023-08-29.img > member-card-2023-08-29.img.sha256
+pigz < member-card-2023-08-29.img > member-card-2023-08-29.img.gz
+```
+
+Then take the USB drive and test it on all of the main systems, to ensure that
+the copy you saved is a good and working version. Follow the instructions that
+pop up in the browser on boot in order to enable persistent storage. Test that
+it's working.
+
+Upload those files to tarantula, as described above. Notify the operations
+assistant about the new image.
+
+## Edit squashfs
+
+**Ruben is capable of building the base image via a build script on his end.**
+We should use that rather than try to reproduce that work with our own code.
+
+<https://sleeplessbeastie.eu/2012/05/27/how-to-modify-squashfs-image/>
+
+Remove the gimp help docs (except en and es) and libreoffice translations
+(except en_US, en_GB and es) from /usr/share. this might be best done as a
+package uninstallation step.
+
+You'll need to install the .desktop file that launches the web browser with
+the local static page, with links to the videos. Get that from the old member
+image. You'll also need the HTML and various media that lie outside of the
+squashfs image, in /fsf on the member card main partition.
+
+You should also grab the custom syslinux boot screen splash image.

Added: trunk/sviki/fsf/tools/munin.mdwn
===================================================================
--- trunk/sviki/fsf/tools/munin.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/munin.mdwn    2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,79 @@
+# munin install for client
+
+set up according to:
+
+    http://cluestick/wiki/Munin#Munin
+
+on client (using jabserver2p.fsf.org as an example):
+
+    apt-get install munin-node
+
+    vim /etc/munin/munin-node.conf
+    +host_name jabserver2p.fsf.org
+    +#bernie/sudoman: office NAT gateway for monitor.office.fsf.org
+    +allow ^74\.94\.156\.210$
+
+    cd /etc/munin/plugins
+    rm -f threads if_err_eth0 fw_packets swap users exim_* entropy open_files open_inodes
+
+    vim /etc/munin/plugin-conf.d/munin-node
+    +[diskstats]
+    +user root
+    +env.graph_width 640
+
+    service munin-node restart
+
+on monitor.office.fsf.org:
+
+    nc jabserver2p.fsf.org 4949
+    ^D
+
+    vim /etc/munin/munin-conf.d/domU_jabserver2p.fsf.org.conf
+    [domU;jabserver2p.fsf.org]
+            address jabserver2p.fsf.org
+## custom graphs
+
+SSH into the target machine and put a script at, e.g., `/etc/munin/plugins/foo_bar`,
+with permissions `750` and ownership `root:munin` if it contains a password:
+
+```
+#! /bin/bash
+
+function config() {
+        cat <<'EOM'
+graph_title CiviCRM Contact Count
+graph_vlabel contacts
+civicrm_contacts.label CiviCRM contacts
+EOM
+        exit 0
+}
+
+# don't use the leading backslash in the source code version of the next line; it's just for ikiwiki formatting
+\[[ $1 == "config" ]] && config
+
+count="$(echo "select count(*) from civicrm_contact;" | mysql -u munin -pFOO civicrm | tail -n +2)"
+
+echo "civicrm_contacts.value ${count}"
+```
+
+Give the munin user select access on the id column of the table you're
+accessing:
+
+    ...
+    GRANT SELECT (id) ON civicrm_contact TO munin@localhost;
+    ...
+
+Start using the new script:
+
+    systemctl restart munin-node.service
+    systemctl status munin-node.service
+
+    vim /var/log/munin/munin-node.log
+
+## Script
+
+There is a script in `/home/common` that can download all of the graphs with 
day, week, month, and year views.
+
+    /home/common/sysadmin/munin-year.sh
+
+This is helpful for getting a wider view or sharing data with an external 
community admin.

Added: trunk/sviki/fsf/tools/mydumper-myloader.mdwn
===================================================================
--- trunk/sviki/fsf/tools/mydumper-myloader.mdwn                                
(rev 0)
+++ trunk/sviki/fsf/tools/mydumper-myloader.mdwn        2023-12-06 20:05:18 UTC 
(rev 685)
@@ -0,0 +1,25 @@
+# mydumper and myloader
+
+## Dump
+
+We use the multi-threaded mydumper to regularly back up MySQL DBs on gnuhope.
+See `/usr/local/bin/dump-mysql` for details.
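+
+For reference, a minimal mydumper invocation looks roughly like this (the
+database name and output path are made up; the options we actually use are in
+dump-mysql):
+
+    mydumper -B shop -o /srv/backups/$(date +%Y%m%d%H) -t 4 -c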
+
+## Load
+
+If you are loading a DB backup from another host onto a dev machine, the DB
+can be loaded like so:
+
+    cd 2021052513
+
+    # you may wish to drop the old db first if it doesn't have production data
+
+    echo "CREATE DATABASE shop" | mysql
+    mysql shop < schemas/shop.sql
+
+    # we don't want to overwrite our local 'mysql' database
+    mkdir ../2021052513-bak
+    mv mysql.* ../2021052513-bak
+
+    # import the data
+    myloader -d . -B shop -s shop

Added: trunk/sviki/fsf/tools/nagios.mdwn
===================================================================
--- trunk/sviki/fsf/tools/nagios.mdwn                           (rev 0)
+++ trunk/sviki/fsf/tools/nagios.mdwn   2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,113 @@
+[[!toc  levels=4]]
+
+# Nagios
+
+Nagios is one of our monitoring solutions.
+
+## Useful commands for Nagios and check_mk
+
+### Log in to Klaxon
+
+    ssh root@klaxon.fsf.org
+
+### Refresh DNS Cache
+
+    check_mk --update-dns-cache
+    check_mk -U && service nagios3 reload
+
+### Update Nagios' expectations of the presence and value of check_mk results
+
+    # Inventory
+    check_mk -II
+    # Generate nagios config
+    check_mk -U
+    # Apply config
+    /etc/init.d/nagios3 reload
+
+### Update inventory for only one host
+
+    check_mk -II vcs1.savannah.gnu.org
+
+### View a machine's mk
+
+    cd /etc/check_mk/conf.d/machines/
+    less vcs1.savannah.gnu.org
+
+### Update email queue thresholds and other variables
+
+    vim /etc/check_mk/conf.d/custom.mk
+    check_mk -U
+    service nagios3 reload
+
+### Adding a new URL Content service check
+
+There is a custom script to generate URL checks: `/usr/local/bin/nagios-website-add.sh`.
+
+Examples:
+
+```
+/usr/local/bin/nagios-website-add.sh -s -6 -C '/irc/!The Lounge' -w irc.libreplanet.org -h irc1p.libreplanet.org
+/usr/local/bin/nagios-website-add.sh -s -6 -A -C '/!We prioritize your privacy' -w jitsi.member.fsf.org -h jitsi1p.fsf.org
+
+See `/etc/nagios3/conf.d/web_services.cfg` for some more examples.
+
+### Adding a custom service check script
+
+Add a script to `/usr/lib/check_mk_agent/local/`. See ansible for
+examples. For it to go into effect:
+
+For a single host:
+
+```
+check_mk -II FQDN; service nagios3 reload
+```
+
+Or to fully reload everything:
+
+```
+check_mk -II;check_mk -U; service nagios3 reload
+```
+
+### Add a new remote check for all hosts
+
+<https://docs.checkmk.com/latest/en/cmc_migration.html>
+
+```
+vim /etc/check_mk/main.mk
+
+custom_checks += [
+  ({
+      'command_name':        'check-down-firewall',
+      'service_description': 'Check for down firewalls',
+      'command_line':        '/usr/local/bin/check-down-firewall $HOSTNAME$ 1234',
+      'has_perfdata':        False,
+  },
+  [],
+  ALL_HOSTS )]
+```
+
+## Useful web interface tips
+
+### Disabling a notification
+
+1. Visit <https://klaxon.fsf.org/nagios3/> and login.
+1. On the left column, click on [Host 
Groups](https://klaxon.fsf.org/cgi-bin/nagios3/status.cgi?hostgroup=all&style=overview).
+1. Click on a specific VM.  In my example, I wanted to silence docker alerts 
from 
[emba-runner.gnu.org](https://klaxon.fsf.org/cgi-bin/nagios3/status.cgi?host=emba-runner.gnu.org&style=detail)
 which gave 1,482 unhelpful alerts in two months.
+1. Click on the service that is giving unhelpful alerts.  In my example, I 
wanted to silence Interfaces 
[3](https://klaxon.fsf.org/cgi-bin/nagios3/extinfo.cgi?type=2&host=emba-runner.gnu.org&service=Interface+3),
 
[4](https://klaxon.fsf.org/cgi-bin/nagios3/extinfo.cgi?type=2&host=emba-runner.gnu.org&service=Interface+4),
 and 
[5](https://klaxon.fsf.org/cgi-bin/nagios3/extinfo.cgi?type=2&host=emba-runner.gnu.org&service=Interface+5).
+1. In the right column, click on [Disable notifications for this service](https://klaxon.fsf.org/cgi-bin/nagios3/cmd.cgi?cmd_typ=22&host=emba-runner.gnu.org&service=Interface+3). To reverse the change, click on `Enable notifications for this service`.
+1. No more alert notifications for that!
+
+## Disabling an alert via Nagstamon
+
+Right-click on an active alert, and choose "acknowledge". If you want the
+acknowledgement to persist when the service recovers and then fails again
+(flaps), check the "sticky" option in the dialog. Click OK.
+
+## Disabling alerts types for a host
+
+Alerts can be disabled by type, per host, by editing the `ignored_services`
+section in `/etc/check_mk/conf.d/custom.mk` on klaxon.
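+
+The entries use check_mk's old tuple-based ruleset format; a sketch (host and
+service names are illustrative, so follow the existing entries in custom.mk
+for the exact format):
+
+```
+ignored_services += [
+  ( ["emba-runner.gnu.org"], [ "Interface 3" ] ),
+]
+```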
+
+## On-call scripts
+
+See [[/checklists/person/klaxon/]].

Added: trunk/sviki/fsf/tools/netcat.mdwn
===================================================================
--- trunk/sviki/fsf/tools/netcat.mdwn                           (rev 0)
+++ trunk/sviki/fsf/tools/netcat.mdwn   2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,13 @@
+# netcat
+
+Netcat can be used to scan a port without nmap. For instance:
+
+    # 5 second timeout on port 161
+    $ nc -vz -w 5 my.fsf.org 161
+    nc: connect to my.fsf.org (209.51.188.223) port 161 (tcp) timed out: Operation now in progress
+    nc: connect to my.fsf.org (2001:470:142:5::223) port 161 (tcp) failed: Network is unreachable
+
+Some people consider having netcat installed to be a security risk, because it
+can be used to listen for and make TCP connections. However, that is not the
+biggest part of our attack surface. It might still make sense not to install
+it in a container.
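+
+For completeness, a sketch of the listen/connect usage mentioned above (host
+and port are arbitrary; note that `-l -p` vs plain `-l` syntax varies between
+netcat implementations):
+
+    # on the receiving host: listen on TCP 9000 and save whatever arrives
+    nc -l -p 9000 > received.bin
+
+    # on the sending host
+    nc -w 3 receiver.example.org 9000 < file.bin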

Added: trunk/sviki/fsf/tools/onion_service.mdwn
===================================================================
--- trunk/sviki/fsf/tools/onion_service.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/onion_service.mdwn    2023-12-06 20:05:18 UTC (rev 
685)
@@ -0,0 +1,130 @@
+[[!meta title="Create an Onion Service"]]
+
+## Create an Onion Service
+
+[[!toc levels=3]]
+
+[problem] - [[potential duplicate page|/tools/tor/]]
+
+### How Onion Services work
+
+Below is a schema of how Onion Services work.
+
+![Onion Services overview schema](onion_service_overview.jpg)
+
+Briefly:
+
+1. You want to create an Onion Service, so you connect it to the Tor network.
+To be reachable, your Onion Service establishes connections through
+anonymized circuits (Tor relays) to **three introduction points**. The Onion
+Service itself stays hidden and protected behind the Tor network.
+
+2. So that clients can reach it, your Onion Service creates an **Onion
+Service descriptor**. This **descriptor** contains **a list of the
+introduction points** and is signed with the Onion Service's identity private
+key. The identity private key is the private half of the public key that is
+encoded in the Onion Service address. The Onion Service then publishes the
+**descriptor** to a **distributed hash table** (also called the directory),
+again over an anonymized circuit.
+
+3. Let's say the client knows your onion address. To visit your Onion
+Service, the client connects to the Tor network with Tor Browser. The client
+then contacts the **distributed hash table** to get the signed descriptor of
+your Onion Service and learn the three **introduction points**.
+
+4. When the client receives the signed descriptor, it can verify the
+descriptor's signature using the public key embedded in the onion address.
+This ensures the client is connecting to the real Onion Service and can trust
+the **introduction points**.
+
+5. Now that the client knows the **introduction points**, it first picks a
+Tor relay to act as a **rendezvous point** and sends it a secret string
+called the "one-time secret". This secret will be used later, when the client
+and your Onion Service meet. The client then sends the **rendezvous point's**
+address and the secret to your Onion Service, through the Tor network and one
+of the **three introduction points**, to request a connection.
+
+6. When an **introduction point** receives the **rendezvous point** and the
+secret string from the client, it passes them on to your Onion Service, which
+runs its verification process to decide whether it can trust the client.
+
+7. If your Onion Service trusts the client, it connects to the **rendezvous
+point** via the Tor network and sends it the client's secret string. Finally,
+the **rendezvous point** compares the secret strings from the client and the
+Onion Service; if they match, it creates a connection between the two and
+acts as a relay. The connection is composed of six relays: three from the
+Onion Service to the **rendezvous point**, and three for the client, where
+the third is the **rendezvous point** itself.
+
+
+### How to create an Onion Service
+
+There are a few steps to creating an Onion Service.
+
+**1. Open these ports on your firewall:**
+- 443/tcp out toward the Tor repository servers, to allow connecting to the
+Tor repository over HTTPS to download the **tor** package and keyring.
+- ports greater than 1024/tcp out toward anywhere, so your Onion Service can
+connect to its three introduction points, a distributed hash table
+(directory) server, and a rendezvous point.
+
+**2. Install Tor on your server.**
+Depending on your operating system, the way to install Tor can differ. Here
+are the steps for a Tor installation on Trisquel.
+- You must know your CPU architecture: ```$ dpkg --print-architecture```.
+  *Only three CPU architectures are supported: amd64, arm64, and i386.*
+- Install apt-transport-https so that package managers using the libapt-pkg
+library can access metadata and packages in sources served over HTTPS:
+```# apt install apt-transport-https```.
+- Go to ```/etc/apt/sources.list.d/``` and create a file named ```tor.list```.
+This enables new repositories, in this case the Tor repositories.
+Then fill the file with the following lines:
+
+```
+deb     [signed-by=/usr/share/keyrings/tor-archive-keyring.gpg] https://deb.torproject.org/torproject.org <DISTRIBUTION> main
+deb-src [signed-by=/usr/share/keyrings/tor-archive-keyring.gpg] https://deb.torproject.org/torproject.org <DISTRIBUTION> main
+```
+
+Replace ```<DISTRIBUTION>``` with your operating system's codename, which you
+can find with:
+
+```
+# grep -i "UBUNTU_CODENAME" /etc/os-release
+```
+
+In this example the result is **focal**, so replace ```<DISTRIBUTION>``` with
+**focal**.
+- Then add the gpg key used to sign the packages by running the following
+command:
+
+    # wget -qO- https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --dearmor | tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/null
+
+- Now install Tor and the Tor Debian keyring:
+
+```
+# apt update
+# apt install tor deb.torproject.org-keyring
+```
+
+**3. Have a working web server.**
+You can use, for example, Apache or Nginx as a web server.
+You must configure your server to run your website on **localhost:80**.
+You can test your configuration as well as your website like this:
+
+- To test the Apache configuration: ```# apache2ctl -t```
+- To test the virtual host configuration: ```# apache2ctl -t -D DUMP_VHOSTS```
+- To test your website, open a web browser and connect to 
```http://localhost/``` from your server.
+
+**4. Configure Tor.**
+To configure Tor, edit the ```torrc``` file. To find it, run: ```# find / -type f -name "torrc" 2> /dev/null```
+Then add these lines to set up an Onion Service:
+
+```
+HiddenServiceDir /var/lib/tor/my_website/
+HiddenServicePort 80 127.0.0.1:80
+```
+
+The HiddenServiceDir line specifies the directory which should contain 
information and cryptographic keys for your Onion Service.
+
+The HiddenServicePort line specifies a virtual port (that is, the port that 
people visiting your Onion Service will be using), and in the above case it 
says that any traffic incoming to port 80 of your Onion Service should be 
redirected to 127.0.0.1:80 (which is where the web server is listening).
+
+Note: it's recommended to run Onion Services over Unix sockets instead of TCP
+sockets, to avoid leaking the Onion Service to the local network.
+To do so, replace the ```HiddenServicePort``` line with
+```HiddenServicePort 80 unix:/var/run/tor-my-website.sock```
+
+**5. Restart Tor and check if it works.**
+Restart Tor, ```# systemctl restart tor```.
+If Tor starts up again, great. Otherwise, something is wrong. First look at 
your logfiles for hints.
+
+**6. Test your Onion Service.**
+
+The onion address of your Onion Service is available in this file: ```/var/lib/tor/my_website/hostname```.
+With Tor Browser, try to access your onion address.
+
+**7. What next?**
+
+Follow [this](https://riseup.net/en/security/network-security/tor/onionservices-best-practices) guide of best practices for hosting Onion Services; **you must read it and apply it for security reasons**.
+
+If you plan to keep your service available for a long time, you might want to 
make a backup copy of the private_key file (available in 
```/var/lib/tor/my_website/```).
+
+### Secure an Onion Service
+
+To secure your Onion Service as well as your server, follow these guides from
+the Tor project:
+
+- Basic recommendations, click 
[here](https://community.torproject.org/onion-services/advanced/opsec/).
+- To secure your server, click 
[here](https://gitlab.torproject.org/legacy/trac/-/wikis/doc/OperationalSecurity).
+- To improve the security of your Onion Service with Vanguards, click 
[here](https://blog.torproject.org/announcing-vanguards-add-onion-services/).
+- To scan your Onion Service to find any vulnerabilities, click 
[here](https://onionscan.org/).

Added: trunk/sviki/fsf/tools/openscap.mdwn
===================================================================
--- trunk/sviki/fsf/tools/openscap.mdwn                         (rev 0)
+++ trunk/sviki/fsf/tools/openscap.mdwn 2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,13 @@
+# OpenSCAP
+
+Security configuration scanner based around NIST.
+
+[Website](https://www.open-scap.org/)
+
+[GitHub Repos](https://github.com/OpenSCAP)
+
+[Documentation](https://www.open-scap.org/resources/documentation/)
+
+[ComplianceAsCode GitHub Repo](https://github.com/ComplianceAsCode/content) - 
Bash/Ansible automation scripts for OpenSCAP.
+
+[Compiling 
ComplianceAsCode](https://complianceascode.readthedocs.io/en/latest/manual/developer/02_building_complianceascode.html)

Added: trunk/sviki/fsf/tools/openssl.mdwn
===================================================================
--- trunk/sviki/fsf/tools/openssl.mdwn                          (rev 0)
+++ trunk/sviki/fsf/tools/openssl.mdwn  2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,111 @@
+[[!toc  levels=4]]
+
+# OpenSSL
+
+## Letsencrypt
+
+See also [[Tickets/dns/letsencrypt/]].
+
+On older servers, try [acme.sh](https://github.com/acmesh-official/acme.sh) 
which is a simple bash implementation that does not have all of the python 
dependencies.
+
+### Verify a cert
+
+With openssl:
+
+    $ ssh foo.gnu.org openssl x509 -enddate -noout -in /etc/letsencrypt/live/*/cert.pem
+    notAfter=Oct  5 14:03:00 2016 GMT
+
+With gnutls (install `gnutls-bin` package):
+
+    gnutls-cli foo.gnu.org
+
+Check for multiple hosts:
+
+    host foo.gnu.org
+
+Sometimes one of them might have an expired cert.
+
+### Fetch an active cert / use a netcat-like interface
+
+    openssl s_client -connect minetest.libreplanet.org:443
+
+For services that use StartTLS (see `man s_client`):
+
+    openssl s_client -starttls smtp -connect mail.fsf.org:587
+
+For XMPP:
+
+    openssl s_client -starttls xmpp -connect jabber.fsf.org:5222 -xmpphost fsf.org
+
+### Add domains to a letsencrypt cert
+
+To add more domains to a cert:
+
+    export HTTP_PROXY=http://serverproxy0p.fsf.org:8118; export HTTPS_PROXY=http://serverproxy0p.fsf.org:8118; export NO_PROXY=localhost,127.0.0.1
+
+    certbot certonly --apache -d shop.fsf.org -d shopserver0p.fsf.org -d gnupress.org -d www.gnupress.org -d ...
+
+While the above command is supported on Trisquel 8, due to [security 
issues](https://letsencrypt.status.io/) with LE, you might want to try:
+
+    certbot certonly --standalone -d shop.fsf.org -d shopserver0p.fsf.org -d gnupress.org -d www.gnupress.org -d ...
+
+A dialog should appear. I was able to get verification while pressing cancel
+each time it asked about a file with the proper vhost.
+
+    service apache2 reload
+
+### Remove a cert
+
+    certbot delete --cert-name olddomain.fsf.org
+
+## Create a new CSR
+* ssh -A root@nessus
+* cd /etc/ssl/, then run:
+* openssl req -nodes -newkey rsa:2048 -sha256 -keyout <site name>.key -out <site name>.csr
+* Use the <site name>.csr for Gandi.
+* scp the newly created key to root@<sitename>:/etc/ssl/private/
+
+
+## Gandi
+* Request a new SSL certificate with the newly created .csr file.
+* Use DNS zones.
+* Add the designated DNS zone data to your local /bind/masters/db.<sitename>.
+* Run ./update-zones <sitename>.
+* Wait for Gandi to send the notification of the newly created SSL certificates.
+* Retrieve the certificates and place them on the target site.
+* From http://wiki.gandi.net/en/ssl/intermediate#sha2_intermediate_certificates, download http://crt.usertrust.com/USERTrustRSAAddTrustCA.crt and place it on the target site as well.
+### Apache
+* On the target, most likely in root@<target>:/etc/ssl/, run: cat GandiStandardSSLCA2.pem USERTrustRSAAddTrustCA.crt > GandiChainFile.pem
+* Update /etc/apache2/sites-available/<target site>.
+* Make it look similar to the below:
+  * SSLCertificateFile      /etc/ssl/certs/lists.defectivebydesign.org.crt
+  * SSLCertificateKeyFile   /etc/ssl/private/lists.defectivebydesign.org.key
+  * SSLCertificateChainFile /etc/ssl/certs/GandiStandardSSLCA2.pem
+* After the Apache config is updated, run: service apache2 reload
+* Visit the site in a private browser window and verify the cert is updated.
+
+### nginx
+* On the target, most likely in root@<target>:/etc/ssl/, run: cat <name of cert>.crt GandiStandardSSLCA2.pem USERTrustRSAAddTrustCA.crt > GandiChainFileNginx.crt
+* In /etc/nginx/sites-available/<target site>, make sure the server block
+looks similar to this excerpt:
+
+        server {
+            listen 443;
+            include     /etc/nginx/mediagoblin-common.conf;
+            access_log  /var/log/nginx/media.libreplanet.org-ssl.access.log;
+            error_log   /var/log/nginx/media.libreplanet.org-ssl.error.log;
+            ssl on;
+            ssl_session_cache shared:SSL:10m;
+            ssl_session_timeout 10m;
+
+            ssl_certificate     /etc/ssl/certs/wildcard.libreplanet.org-full-chain.crt;
+            ssl_certificate_key /etc/ssl/private/wildcard.libreplanet.org.key;
+
+* Save the changes, then run: /etc/init.d/nginx configtest
+* If the test passes, run: /etc/init.d/nginx reload
+
+
+## OCSP Stapling
+
+OCSP stapling allows SSL clients (usually a web browser) to obtain the SSL
+certificate revocation status directly from our servers, which saves an extra
+connection during the SSL handshake and improves privacy for users. Since our
+web servers are firewalled off (so they cannot initiate outbound
+connections), requests to OCSP validation servers have to go through a proxy.
+We run two OCSP proxy servers:
+
+* serverproxy0p.fsf.org:8002 running [[https://github.com/philfry/ocsp_proxy]]. This process detects the OCSP validator URL automatically, and it is compatible with apache2 2.4.18-2ubuntu3.17+8.0trisquel3 or newer, using the SSLOCSPProxyURL config variable. This is a caching service.
+* serverproxy0p.fsf.org:8001 running [[https://github.com/dlecorfec/ocsp-proxy]]. **NOT RECOMMENDED** This process has a hardcoded URL for the target OCSP validator server, and it needs to be updated when letsencrypt changes providers. It is only to be used with nginx, or apache2 < 2.4.18-2ubuntu3.17+8.0trisquel3. This is a non-caching service.

Added: trunk/sviki/fsf/tools/postgresql.mdwn
===================================================================
--- trunk/sviki/fsf/tools/postgresql.mdwn                               (rev 0)
+++ trunk/sviki/fsf/tools/postgresql.mdwn       2023-12-06 20:05:18 UTC (rev 
685)
@@ -0,0 +1,8 @@
+# postgresql
+
+restore a single table from a directory-based backup:
+
+    pg_restore -l . > db.list
+
+    # edit db.list by commenting out lines with ";" to exclude tables
+    pg_restore -L db.list .
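+
+As a sketch, restoring just one table (hypothetical name "orders", into an
+existing database "mydb"; without -d, pg_restore writes SQL to stdout
+instead):
+
+    pg_restore -l . | grep 'TABLE DATA public orders' > db.list
+    pg_restore -L db.list -d mydb .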

Added: trunk/sviki/fsf/tools/privoxy.mdwn
===================================================================
--- trunk/sviki/fsf/tools/privoxy.mdwn                          (rev 0)
+++ trunk/sviki/fsf/tools/privoxy.mdwn  2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,19 @@
+# Privoxy
+
+[Website](https://www.privoxy.org/) -
+[Doc](https://www.privoxy.org/user-manual/index.html) -
+[Source](https://www.privoxy.org/gitweb/?p=privoxy.git) (GPL-2.0-or-later)
+
+"Privoxy is a non-caching web proxy with advanced filtering capabilities for
+enhancing privacy, modifying web page data and HTTP headers, controlling
+access, and removing ads and other obnoxious Internet junk. Privoxy has a
+flexible configuration and can be customized to suit individual needs and
+tastes. It has application for both stand-alone systems and multi-user
+networks."
+
+We use Privoxy on serverproxy0p.fsf.org to filter outbound traffic on our
+servers.
+
+## Configuration
+
+Configuration files are located in the `/etc/privoxy/` directory.
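+
+A quick way to check the filtering from a server is to send a request through
+the proxy with curl (the proxy address matches our standard server setup; the
+URL is arbitrary):
+
+    curl -x http://serverproxy0p.fsf.org:8118 -sSI https://www.gnu.org/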

Added: trunk/sviki/fsf/tools/prometheus.mdwn
===================================================================
--- trunk/sviki/fsf/tools/prometheus.mdwn                               (rev 0)
+++ trunk/sviki/fsf/tools/prometheus.mdwn       2023-12-06 20:05:18 UTC (rev 
685)
@@ -0,0 +1,296 @@
+# Prometheus
+
+Use cases:
+
+* Alerting (replace Nagios & Checkmk)
+* Explore data, e.g. to debug problems
+* Create Munin-like graphs, maybe eventually replace Munin
+
+The Prometheus UI is at
+<https://prom.fsf.org/graph> and
+<https://prom.office.fsf.org/graph>. When not in the office or on the VPN,
+<https://valis.gnu.org/graph> is an alternative if
+<https://prom.office.fsf.org/graph> does not work.
+
+The password is in spd. Alertmanager has the same pass.
+
+The Alertmanager UI is at <https://prom.fsf.org:9095/#/alerts>
+and <https://prom.office.fsf.org:9095/#/alerts>. When not in the office or on
+the VPN, <https://valis.gnu.org:9095/#/alerts> is an alternative if
+<https://prom.office.fsf.org:9095/#/alerts> does not work.
+
+Blackbox exporter UI is at <https://prom.fsf.org:9116>
+and <https://prom.office.fsf.org:9116>
+
+Don't silence an alert forever; we will forget about it.
+
+The Alertmanager UI is mostly useful for temporarily silencing
+alerts. The amtool CLI program is more advanced and is the main documented
+way of interacting with Alertmanager.
+
+To silence all alerts temporarily, add a matcher like `x!="1"`
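+
+For example, a sketch using the amtool setup described below:
+
+```
+amtool silence add 'x!="1"' --duration 2h --comment "planned maintenance"
+```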
+
+Host-based differentiation of alert priority: this is
+configured via the ansible inventory [priority_high:children], which is a
+list of groups, one of which is [priority_high_hosts], used to add hosts
+directly.
+
+Initial brain dump meeting notes pad: 
https://etherpad.fsf.org/p/2pRPC_jizUwnEVg5kam4
+
+## amtool configuration
+
+Good basic example usage: <https://github.com/prometheus/alertmanager>
+
+`~/.config/amtool/config.yml`:
+
+```
+alertmanager.url: "https://admin:PUT_PASSWORD_FROM_SPD_HERE@prom.fsf.org:9095"
+output: extended
+comment_required: false
+```
+
+Handy shell wrappers for querying both instances (`sedi` is presumably a `sed -i` wrapper):
+
+```
+amall() {
+  echo "$(tput setaf 5 2>/dev/null ||:)█ coresite █$(tput sgr0 2>/dev/null||:)"
+  amfsf "$@"
+  echo "$(tput setaf 5 2>/dev/null ||:)█ office █$(tput sgr0 2>/dev/null||:)"
+  amoffice "$@"
+}
+amallq() { # amall quiet
+  amfsf "$@"
+  amoffice "$@"
+}
+amfsf() {
+  sedi -r '/alertmanager.url/s/@prom.office/@prom/' ~/.config/amtool/config.yml
+  amtool "$@"
+}
+amoffice() {
+  sedi -r '/alertmanager.url/s/@prom.fsf/@prom.office.fsf/' ~/.config/amtool/config.yml
+  amtool "$@"
+}
+amls() {
+  amall silence query "$@"
+}
+amrmall() {
+  # note: not sure if quoting of this arg is correct
+  amfsf silence expire "$(amfsf silence query -q)"
+  amoffice silence expire "$(amoffice silence query -q)"
+}
+```
+
+## Good Documentation
+
+* Official docs
+* https://www.robustperception.io/blog/
+* https://utcc.utoronto.ca/~cks/space/blog/sysadmin/
+
+Useful for understanding Blackbox relabeling:
+
+* 
<https://utcc.utoronto.ca/~cks/space/blog/sysadmin/PrometheusBlackboxBulkChecks>
+* <https://utcc.utoronto.ca/~cks/space/blog/sysadmin/PrometheusBlackboxNotes>
+
+Links:
+
+* GitHub
+  * [Prometheus](https://github.com/prometheus/prometheus)
+  * [Alertmanager](https://github.com/prometheus/alertmanager)
+  * [Node exporter](https://github.com/prometheus/node_exporter)
+  * [Blackbox prober exporter](https://github.com/prometheus/blackbox_exporter)
+
+## Debugging blackbox grep failure
+
+Curl can almost always get the same response as the blackbox exporter probe
+did. Append the URL to the following:
+
+```
+curl -H "User-Agent: Blackbox Exporter/0.24.0" -4
+# or for ipv6
+curl -H "User-Agent: Blackbox Exporter/0.24.0" -6
+```
+
+You can view debug logs via <https://prom.fsf.org:9116/>
+
+
+
+You can also construct a probe URL, trigger it manually, and get the
+logs:
+
+```
+https://prom.fsf.org:9116/probe?target=https://h-node.org&module=grep4_h-node.org&debug=true
+
+# one-liner; append target and module name as args to the following:
+bburl() { echo "https://prom.fsf.org:9116/probe?target=$1&module=$2&debug=true"; }; bburl
+```
+
+## Background, future improvements
+
+In 2023, Ian estimated we would need about 120 GB of storage to store
+data on CoreSite hosts for 200 days.
+
+
+todo:
+
+* see if we can make critical alerts be black in nagstamon
+
+* Monitor the office network being down from the data center.
+
+* Update valis & office WiFi to new LibreCMC version and add node exporter
+
+* Setup alerts that are at least as comprehensive as Nagios
+
+* migrate
+  
roles/prom/files/prometheus0p.fsf.org/etc/prometheus/file_sd/blackbox_https/ansible.yml
+  to be grep tests then remove
+
+* Make a disk-filling alert that triggers when a disk is predicted to fill within 3 days
+
+* Setup other exporters like MySQL
+
+* Get more disk space for the VM, extend storage to 200 days.
+
+* Setup Munin-like graphs
+
+## Notes on manual deployment steps for machines not managed by ansible
+
+
+To check what is listening on 9100:
+
+```
+ss -lptn | grep 9100
+```
+
+Edit `/etc/default/iptables-rules` (or, depending on the machine, `/etc/default/iptables`):
+```
+# Prometheus node exporter. whole subnet to allow for ip changes
+-A INPUT -m tcp -p tcp --src 209.51.188.0/24 --dport 9100 -j ACCEPT
+-A state-check -m tcp -p tcp --src 209.51.188.0/24 --dport 9100 -j ACCEPT
+# Prometheus from office public ip
+-A INPUT -m tcp -p tcp --src 74.94.156.210 --dport 9100 -j ACCEPT
+```
+
+If the machine has an IPv6 address, node exporter will only listen on
+that by default. The chain name may vary, examples below.
+
+Edit `/etc/default/ip6tables-rules`:
+```
+-A state-check -p tcp -m tcp --src 2001:470:142:5::115 --dport 9100 -j ACCEPT
+-A input_block -p tcp -m tcp --src 2001:470:142:5::115 --dport 9100 -j ACCEPT
+```
+
+
+For office network:
+```
+# prometheus node exporter
+-A INPUT -m tcp -p tcp --src 192.168.0.56 --dport 9100 -j ACCEPT
+```
+
+
+On Savannah machines, edit `/etc/shorewall6/rules`:
+
+```
+ACCEPT  net:2001:470:142:5::115     fw     tcp     9100  # FSF Prometheus
+```
+
+```
+cd /etc
+cat shorewall6/rules
+sed -i '/Nagios/a ACCEPT net:2001:470:142:5::115     fw     tcp     9100  # FSF Prometheus' /etc/shorewall6/rules
+i diff shorewall*
+shorewall6 safe-restart
+git add shorewall*
+git commit -m 'adding FSF Prometheus'
+```
+
+### IPv4 stuff if needed
+
+```
+sed -i '/Nagios/a ACCEPT net:209.51.188.115/24   fw      tcp     9100  # FSF prometheus' /etc/shorewall/rules
+i diff shorewall*
+shorewall safe-restart
+```
+
+/etc/shorewall/rules
+
+```
+ACCEPT  net:209.51.188.115/24   fw      tcp     9100  # FSF Prometheus
+```
+
+## Exporter security background
+
+The only real risk of exposing node-exporter to the internet is that it
+could expose that we are using outdated software.
+
+The other known issue with exposing node-exporter is the potential for DOS
+by sending lots of GETs, but we expose lots of web pages that cause much
+more processing than node-exporter, so that is not a serious issue.
+
+Other exporters could have sensitive data, evaluate each one as we adopt
+it. There was a Kubernetes exporter that had sensitive info and was used
+nefariously. The Prometheus upstream strangely just says "you can
+restrict access to it if you want", but doesn't recommend you do, and
+then just implicitly recommends that you don't by not doing much to
+document anything about it. The main way we restrict it is via
+tls. Secondary is the firewall. Apache/nginx tls + basic auth may be the
+best option for exporters other than node-exporter.
+
+
+How to curl a node-exporter from the Prometheus server; alter `host=` to curl
+a different host:
+
+```
+host=localhost; curl --cert /etc/prometheus/ssl/prometheus_cert.pem --key /etc/prometheus/ssl/prometheus_key.pem --cacert /etc/prometheus/ssl/prom_node_cert.pem --resolve prom_node:9100:$host -v https://prom_node:9100/metrics
+```
+
+How certs were initially generated:
+
+```
+openssl req -x509 -newkey rsa:2048 -keyout ${keydir}prom_node_key.pem -out prom_node_cert.pem -days 29220 -nodes -subj /commonName=prom_node/ -addext "subjectAltName=DNS:prom_node"
+
+openssl req -x509 -newkey rsa:2048 -keyout ${keydir}prometheus_key.pem -out prometheus_cert.pem -days 29220 -nodes -subj /commonName=prometheus/ -addext "subjectAltName=DNS:prometheus"
+```
+
+
+## Background on why we are avoiding Grafana for graphs
+
+
+Grafana:
+
+Pros:
+
+* More popular
+
+Cons:
+
+* Graphs are set up by clicking in a web GUI and stored in an undocumented
+JSON format.
+* GUI has bugs on GNU/Linux abrowser
+* Uses more resources than Prometheus
+* Open core business model
+
+Prometheus:
+
+Pros:
+
+* Graph definitions use PromQL, which is also needed for setting up
+  alerts; this helps you learn and remember it.
+* Graph definitions are text files
+
+Cons:
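+A rough sketch of the edit cycle from that article (the squashfs path depends
+on the ISO layout, so treat these paths as examples):
+
+```
+# unpack, edit, and repack the compressed filesystem
+unsquashfs -d squashfs-root casper/filesystem.squashfs
+# ... make changes under squashfs-root/ ...
+mksquashfs squashfs-root filesystem.squashfs.new -comp xz -noappend
+```
+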
+
+* Graphs often take longer to create than in Grafana
+
+## Nagstamon
+
+I wrote documentation for configuring our Prometheus instances with Nagstamon, 
but it got lost in a gluestick malfunction.
+
+### Filter pending Attempts
+
+Prometheus announces when it is actively checking something, which is noise
+in Nagstamon.
+
+* Click on the `Filters` button.
+* Click on the `Regular expression for attempt` checkbox.
+* Write `pending` in the text box.
+* Click on the `OK` button.

Added: trunk/sviki/fsf/tools/pwgen.mdwn
===================================================================
--- trunk/sviki/fsf/tools/pwgen.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/pwgen.mdwn    2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,17 @@
+# pwgen
+
+pwgen is a password generator.
+
+To generate 5 secure 20-character passwords, one per line:
+
+    pwgen -s -1 20 5
+
+To generate an [xkcd](https://m.xkcd.com/936/) style password using your computer's dictionary, use this command:
+
+    shuf -n 6 /usr/share/dict/words | sed -e s/"'"//g | tr '\n' ' ' | sed 's/ //g' && echo
+
+Change `shuf -n 6` to adjust the number of words.
+
+Another dictionary:
+
+    shuf -n3 /usr/share/hunspell/en_US.dic | sed 's,/.*,,' | paste -sd . -

Added: trunk/sviki/fsf/tools/rsync.mdwn
===================================================================
--- trunk/sviki/fsf/tools/rsync.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/rsync.mdwn    2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,72 @@
+# rsync
+
+## useful command line options
+
+options affecting the transfer:
+
+    -S, --sparse
+    -A, --acls
+    -X, --xattrs
+    -H, --hard-links
+    -a, --archive
+    -P, --partial --progress
+
+    rsync -avhSAXPH src dest/
+    rsync -avhSAXPH --numeric-ids --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp --exclude=/run src dest/
+
+Options affecting the output:
+
+    -v, --verbose
+    -h, --human-readable
+
+
+Other flags to consider:
+
+The `-z` option adds compression.
+
+### for fat32 file systems
+
+    rsync -rtvhP --modify-window=1 --delete-after ...
+
+If you don't like that all of your files show up as executables, you could
+remount the file system with different options.
+
+### for wizbackups
+
+Use the H flag to preserve hardlinks for wizbackup dirs, since files
+that haven't changed between backups are hardlinked to save space. This
+flag slows down the overall transfer (not sure exactly how much), so it's
+probably not worth using otherwise.
+
+## caveats
+
+Be careful about trailing slashes on the source directory. If you include a
+slash, the contents of the directory will be synced into the destination,
+rather than as a directory inside the destination (see the illustration
+below).
+
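+A quick illustration:
+
+    rsync -a src  dest/   # result: dest/src/...
+    rsync -a src/ dest/   # result: contents of src land directly in dest/
+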
+Also be careful about syncing a home dir from a remote server to your local
+home, or vice versa. This will likely screw up your ssh keys and other dot
+files / directories.
+
+Note the leading `/` characters in the `--exclude=` parameters. These are
+necessary so you don't exclude paths like `/foo/tmp`.
+
+## Overall progress
+
+Here is an example of using df to track the progress of a long-running
+transfer. For long transfers, the v and P flags mostly just waste CPU
+cycles, and I (Ian) don't use them.
+
+Replace /mnt/monolith below:
+
+```
+mnt=/mnt/monolith
+m() { df -BM $mnt | tail -n1 | awk '{print $3}'| sed 's/[^0-9]//g'; }
+old=$(m)
+while true; do
+  sleep 600
+  new=$(m)
+  printf "%s %'d MB/min in last 10 minutes\n" "$(date)" $(( (new - old) / 10))
+  old=$new
+done
+```

Added: trunk/sviki/fsf/tools/siege.mdwn
===================================================================
--- trunk/sviki/fsf/tools/siege.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/siege.mdwn    2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,33 @@
+# Siege
+
+A load-testing tool for websites.
+
+[Source](https://github.com/JoeDog/siege) GPL-3.0-or-later
+
+[Tutorial](https://www.admin-magazine.com/Archive/2022/72/Load-test-your-website-with-Siege/(offset)/3)
+
+## Installation
+
+    apt install -y siege
+
+## Use
+
+Make sure you have permission before load testing a website. By default,
+`siege` acts as 25 users. Replace `https://www.fsf.org/` with the URL of your
+choice.
+
+    siege https://www.fsf.org/
+
+Change the number of users with `-c`. Change `60` to the number of users you
+want to test with.
+
+    siege -c 60 https://www.fsf.org/
+
+Test load on a number of URLs by placing them into a plaintext file and
+testing them all at once:
+
+    siege -f ~/sites.txt
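+
+The file is just one URL per line, for example (URLs arbitrary):
+
+    # ~/sites.txt
+    https://www.fsf.org/
+    https://www.gnu.org/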
+
+Set the time to test with the `-t` switch.
+
+    siege -t 30s https://www.fsf.org/

Added: trunk/sviki/fsf/tools/smartctl.mdwn
===================================================================
--- trunk/sviki/fsf/tools/smartctl.mdwn                         (rev 0)
+++ trunk/sviki/fsf/tools/smartctl.mdwn 2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,32 @@
+# smartctl
+
+To test drives with the SMART / S.M.A.R.T. capabilities:
+
+    # install smartctl
+    apt install smartmontools
+
+    # check most of the stored SMART data
+    smartctl -a /dev/sda
+
+    # start a long test (it will tell you how long it should take)
+    smartctl -t long /dev/sda
+
+If you get an error about not being able to determine if the drive is SATA or
+SCSI, etc., then you may want to look up the `-d` option in the man page, or,
+if you know it's SATA based, just add `-d sat` to the command line invocations
+of smartctl.
+
+    # check the test results (you can check during the test too)
+    smartctl -l selftest /dev/sda
+
+When using the `-l` option, you want to see a blank entry or a `-` for
+`LBA_of_first_error`, and `00%` for `Remaining`. At that point, the test is
+done. If an LBA is reported, then that test failed.
+
+Sometimes it can be helpful to check `dmesg` for certain kinds of errors, but
+the testing is performed within the disk itself.
+
+As long as the disks keep power and aren't suspended by the kernel or your
+motherboard, they should run in the background. If the tests are automatically
+interrupted by disks getting suspended due to inactivity, you can write a bash
+loop that reads a single block from the device about once every minute.
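+
+For example, a sketch of such a keep-alive loop (device path is an example):
+
+    while true; do
+        # read one 512-byte block straight from the disk, bypassing the page cache
+        dd if=/dev/sda of=/dev/null bs=512 count=1 skip=$((RANDOM % 1000000)) iflag=direct 2>/dev/null
+        sleep 60
+    done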

Added: trunk/sviki/fsf/tools/split.mdwn
===================================================================
--- trunk/sviki/fsf/tools/split.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/split.mdwn    2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,12 @@
+# split file into chunks
+
+Split a file into 16 MB chunks (useful for emails):
+
+    split -b 16M my.tgz my.tgz.part-
+
+You can also do it by number of lines with `-l`.
+
+Be careful not to pass a number like `16` to the `-b` parameter, otherwise it
+will split your file into millions of 16 byte files. To recover from that:
+
+    for x in my.tgz.part-* ; do rm "$x" ; done
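+
+To reassemble the chunks on the receiving end (this works because split's
+suffixes sort lexicographically):
+
+    cat my.tgz.part-* > my.tgz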

Added: trunk/sviki/fsf/tools/ssh.mdwn
===================================================================
--- trunk/sviki/fsf/tools/ssh.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/ssh.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,101 @@
+## key fingerprints
+
+To print the md5 fingerprint of a public key:
+
+    ssh-keygen -E md5 -l -f /tmp/t.pub
+    2048 MD5:a2:09:53:76:73:9d:bf:ef:fd:e9:91:05:f1:46:27:25 foo@bar (RSA)
+
+## ssh jump
+
+This is similar to and simpler than manually setting up SSH tunneling.
+
+To ssh into tarantula via valis (the office router):
+
+    ssh -J valis.fsf.org 192.168.0.25
+
+To ssh into tarantula via valis then via reresolver.office.fsf.org:
+
+    ssh -J valis.fsf.org,192.168.0.10 192.168.0.25
+
+## ssh tunneling
+
+To ssh into a machine via an ssh tunnel:
+
+    ssh -L 2201:endpoint.fsf.org:22 root@via.fsf.org
+
+    ssh localhost -p 2201
+
+or:
+
+    # place this in ~/.ssh/config
+    Host endpoint.fsf.org
+      ProxyCommand ssh via.fsf.org -W %h:%p
+
+ssh tunneling is more secure than agent forwarding, because the intermediary
+host never gets access to your ssh agent (with agent forwarding, root on the
+intermediary host can use your forwarded agent to authenticate as you).
+
+## sshd_config
+
+The `sshd` server config is located at `/etc/ssh/sshd_config`.
+
+### AllowUsers
+
+`AllowUsers` would be helpful during a fencepost upgrade.
+
+Edit /etc/ssh/sshd_config.
+
+```
+vim /etc/ssh/sshd_config
+```
+
+Add a line:
+
+```
+AllowUsers root andrew iank johns michael ruben
+```
+
+Test the config:
+
+```
+sshd -t
+```
+
+Reload the config:
+
+```
+systemctl reload sshd
+```
+
+After that, only those users would be allowed to log in. I am not sure if
+anyone would be kicked off, but I am less concerned about that.
+
+Once the upgrade completes, we would comment out the `AllowUsers` line, test, 
and reload again.
+
+## Hardening SSH
+
+### Audit with ssh-audit.py
+
+    git clone https://github.com/jtesta/ssh-audit
+    cd ssh-audit
+    python3 ssh-audit.py fencepost.gnu.org
+
+You can add a list of servers to a file and run ssh-audit on the list.
+
+    python3 ssh-audit.py -T servers.txt
+
+## Tor onion service
+
+[From Ludovic](https://toot.aquilenet.fr/@civodul/106377264582243612)
+
+"Protip: when installing a server, install #Tor on it and have its SSH port 
accessible as an onion service.  That way, if you mess up with network config, 
you might still be able to access the onion service.
+
+(Really! It’s saved me a few times already.)"
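+
+A sketch of that setup (paths are the Debian-family defaults; the onion
+address and the use of OpenBSD netcat on the client are assumptions):
+
+```
+# on the server, in /etc/tor/torrc:
+#   HiddenServiceDir /var/lib/tor/ssh/
+#   HiddenServicePort 22 127.0.0.1:22
+# then read the onion address from /var/lib/tor/ssh/hostname
+
+# on the client, connect through the local Tor SOCKS proxy:
+ssh -o ProxyCommand='nc -X 5 -x localhost:9050 %h %p' root@exampleonionaddress.onion
+```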
+
+## Connecting to older SSH servers
+
+To connect to nessus, you can use the following:
+
+    ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 nessus.office.fsf.org
+
+Or add the following to your `~/.ssh/config` file
+
+    Host nessus.office.fsf.org amt.fsf.org
+        KexAlgorithms +diffie-hellman-group1-sha1

Added: trunk/sviki/fsf/tools/stress.mdwn
===================================================================
--- trunk/sviki/fsf/tools/stress.mdwn                           (rev 0)
+++ trunk/sviki/fsf/tools/stress.mdwn   2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,102 @@
+# Stress Testing hardware
+
+## hwtest.sh
+
+Michael has a generic script to test components individually one at a time: 
<https://github.com/TechnologyClassroom/HardwareTest/blob/master/hwtest.sh>  
This can be used as a base to build a custom stress test.
+
+## Preparation
+
+### Disable TTY blanking
+
+    setterm -blank 0 -powerdown 0 -powersave off
+
+## Custom monitor that can be run within tmux or a TTY
+
+    watch -n .7 'sensors | grep -e fan1 -e fan3 -e fan5 -e fan6 -e temp1 -e temp7 -e power1 && echo \ && iostat -ctd | grep -e avg -e Devic -e sd'
+
+## Stress the CPU(s)
+
+### stress
+
+10 minutes with one worker per core.
+
+    stress --cpu $(cat /proc/cpuinfo | grep -e processor | wc -l) -t $((60*10)) &
+
+### stress-ng
+
+Note that this doesn't seem to stress cores to the max, even if you specify
+double the machine's cores.
+
+    stress-ng -C 0 --cpu-method all
+
+### 'yes' stress
+
+This seems to get 95% of the cores going at full speed (adjust `{0..31}` to your core count).
+
+    for x in {0..31} ; do yes > /dev/null & done
+
+    pkill -f yes
+
+## Stress the RAM
+
+### stress
+
+10 minutes using 90% of RAM.
+
+    stress --vm-bytes $(cat /proc/meminfo | grep mF | awk '{printf "%d\n", $2 * 0.9}')k --vm-keep -m 1 -t $((60*10)) &
+
+### memtester
+
+Test 90% of RAM.
+
+    memtester $(free -m | head -n 2 | tail -n 1 | awk '{print $7 * 0.9}') 1
+
+## Stress the storage
+
+### fio
+
+sd[b-e] read/write example:
+
+    fio --name=readwrite --ioengine=libaio --iodepth=128 --rw=readwrite --bs=8k --direct=1 --size=512M --numjobs=8 --filename=/dev/sdb:/dev/sdc:/dev/sdd:/dev/sde --time_based=7200 --runtime=7200 --filesize=990G --group_reporting | grep io > /tmp/storage/fiosdb &
+
+You can enter configurations into a file and tell fio to use that file.
+
+randread.fio content:
+
+```
+[global]
+bs=8k
+iodepth=128
+direct=1
+ioengine=libaio
+randrepeat=0
+group_reporting
+time_based
+runtime=90000
+filesize=990G
+
+[job1]
+rw=randread
+filename=/dev/sdc:/dev/sdd:/dev/sde:/dev/sdf:/dev/sda:/dev/sdb
+name=random-read
+```
+
+Run fio with randread.fio:
+
+    fio randread.fio
+
+## Stress the network
+
+### iperf
+
+On another machine, run an iperf server.
+
+    iperf -s
+
+On the machine to stress, run iperf clients.
+
+    iperf -c 10.12.16.5 -d -t 3600 -p 5001
+
+Run one client for each port.
+
+Configure each port with a static IP so you can tell which NIC is being 
stressed.

Added: trunk/sviki/fsf/tools/sysrq.mdwn
===================================================================
--- trunk/sviki/fsf/tools/sysrq.mdwn                            (rev 0)
+++ trunk/sviki/fsf/tools/sysrq.mdwn    2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,12 @@
+# using sysrq to reboot when systemd is broken
+
+This script comes from Bob Proulx:
+
+    #!/bin/sh
+    # enable the magic sysrq interface
+    echo 1 >/proc/sys/kernel/sysrq
+    # sync disks ("s"), then reboot immediately ("b")
+    echo s >/proc/sysrq-trigger
+    echo b >/proc/sysrq-trigger
+
+It may also be worth trying to manually kill important processes first, so they
+have a chance to write out any state to disk before the system halts and
+immediately reboots.

Added: trunk/sviki/fsf/tools/systemd.mdwn
===================================================================
--- trunk/sviki/fsf/tools/systemd.mdwn                          (rev 0)
+++ trunk/sviki/fsf/tools/systemd.mdwn  2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,41 @@
+## Auto-restart services
+
+For non-systemd-native services like apache2 or coturn, you can get systemd's
+auto-restart behavior by adding a config override:
+
+```
+cat /etc/systemd/system/coturn.service.d/override.conf
+[Service]
+Restart=always
+RestartSec=5s
+PIDFile=/var/run/turnserver.pid
+RemainAfterExit=no
+Type=forking
+```
+
+You can add that with *systemctl edit coturn.service* or apply via ansible 
like in commit *[master 454c0a8] Autorestart coturn on failure*
+
+For a normal service, where systemd correctly detects that the service has
+failed when the process dies, you only need to add:
+
+```
+[Service]
+Restart=always
+RestartSec=5s
+```
+
+Note that to restart forever, RestartSec needs to be >=3s, because
+of the defaults listed in /etc/systemd/system.conf:
+
+```
+#DefaultStartLimitIntervalSec=10s
+#DefaultStartLimitBurst=5
+```
+
+If you want to restart faster, you need to change those values, for example:
+
+```
+[Unit]
+StartLimitIntervalSec=0
+```
+
+However, you may also want to consider setting `StartLimitAction=` (man
+systemd.unit) to reboot the system.
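+
+For example (a sketch; see man systemd.unit for the available actions):
+
+```
+[Unit]
+StartLimitIntervalSec=10s
+StartLimitBurst=5
+StartLimitAction=reboot
+```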

Added: trunk/sviki/fsf/tools/tor.mdwn
===================================================================
--- trunk/sviki/fsf/tools/tor.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/tor.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,154 @@
+# Installing a Tor Service on a Trisquel Virtual Machine
+
+[problem] - [[potential duplicate page|/tools/onion_service/]]
+
+## How Onion Services work
+
+Below is a schema of how Onion Services work.
+
+![Onion Services overview schema](onion_service_overview.jpg)
+
+Briefly:
+
+1. You want to create an Onion Service, so you connect it to the Tor network.
+To be reachable, your Onion Service establishes connections through
+anonymized circuits (Tor relays) to **three introduction points**. The Onion
+Service itself stays hidden and protected behind the Tor network.
+
+2. So that clients can reach it, your Onion Service creates an **Onion
+Service descriptor**. This **descriptor** contains **a list of the
+introduction points** and is signed with the Onion Service's identity private
+key. The identity private key is the private half of the public key that is
+encoded in the Onion Service address. The Onion Service then publishes the
+**descriptor** to a **distributed hash table** (also called the directory),
+again over an anonymized circuit.
+
+3. Let's say the client knows your onion address. To visit your Onion
+Service, the client connects to the Tor network with Tor Browser. The client
+then contacts the **distributed hash table** to get the signed descriptor of
+your Onion Service and learn the three **introduction points**.
+
+4. When the client receives the signed descriptor, it can verify the
+descriptor's signature using the public key embedded in the onion address.
+This ensures the client is connecting to the real Onion Service and can trust
+the **introduction points**.
+
+5. Now that the client knows the **introduction points**, it first picks a
+Tor relay to act as a **rendezvous point** and sends it a secret string
+called the "one-time secret". This secret will be used later, when the client
+and your Onion Service meet. The client then sends the **rendezvous point's**
+address and the secret to your Onion Service, through the Tor network and one
+of the **three introduction points**, to request a connection.
+
+6. When an **introduction point** receives the **rendezvous point** and the
+secret string from the client, it passes them on to your Onion Service, which
+runs its verification process to decide whether it can trust the client.
+
+7. If your Onion Service trusts the client, it connects to the **rendezvous
+point** via the Tor network and sends it the client's secret string. Finally,
+the **rendezvous point** compares the secret strings from the client and the
+Onion Service; if they match, it creates a connection between the two and
+acts as a relay. The connection is composed of six relays: three from the
+Onion Service to the **rendezvous point**, and three for the client, where
+the third is the **rendezvous point** itself.
+
+
+## Install a webserver
+Here I outline the steps for setting up Nginx and Apache2. Either is fine.
+
+### Nginx
+
+    sudo apt-get install nginx
+
+Nginx stores the website root directory location in
+/etc/nginx/sites-available/default, so change the line
+
+    root /var/www/html;
+
+to the path to your website
+
+then
+
+    service nginx start
+
+### Apache2
+
+    sudo apt-get install apache2
+
+Apache2 stores the website root directory location in two places: in
+/etc/apache2/sites-available/000-default on the line
+
+    DocumentRoot /var/www/html
+
+and in /etc/apache2/apache2.conf on line
+
+    <Directory /var/www/html>
+
+Change both to the appropriate path, then
+
+    service apache2 start
+
+
+## Install [Tor](https://2019.www.torproject.org/docs/tor-onion-service.html.en)
+
+### With apt
+
+    sudo apt-get install tor
+
+### With Ansible
+
+Create a tor group in your inventory file. The default location is 
/etc/ansible/hosts
+
+    [tor]
+    emailselfdefense.fsf.org
+
+Add these lines to your playbook, /etc/ansible/playbook.yml
+
+    - hosts: tor
+      become: true
+      roles:
+      - tor
+
+Put this role in /etc/ansible/roles/tor/tasks/main.yml
+
+    ---
+    - name: "Installing Tor"
+      apt: pkg=tor state=installed
+
+Then run
+
+    ansible-playbook /etc/ansible/playbook.yml
+
+
+## Configuring Tor
+
+Uncomment these lines in your /etc/tor/torrc:
+
+    HiddenServiceDir /Library/Tor/var/lib/tor/hidden_service/
+    HiddenServicePort 80 127.0.0.1:8080
+
+Change the HiddenServiceDir line to a location that is readable/writable by
+the user, e.g. /home/username/hidden_service/, then run:
+
+    tor -f /etc/tor/torrc
+
+Tor will create a private_key and a hostname file in the hidden_service
+directory; the hostname file contains the .onion URL.
+
+Both a clearnet and an onion service of the same site should now be running.
+If your site has all relative links, then your onion service should work as
+expected.
+
+
+## Converting absolute links to relative links
+
+If your site has absolute links, they will take visitors out of Tor and onto
+the clearnet. You will have to convert them to relative links.
+
+To convert a single file, run:
+
+    sed -i -E -e 's,(http://|https://|http://www.|https://www.|www.)fsf.org,,g' index.html
+
+To convert the files in your entire directory tree, **carefully** run:
+
+    find ./ -type f -exec sed -i -E -e 's,(http://|https://|http://www.|https://www.|www.)fsf.org,,g' {} \;
+
+
+## Secure an Onion Service
+
+To secure your Onion Service as well as your server, follow these guides from
+the Tor project:
+- Basic recommendations, click 
[here](https://community.torproject.org/onion-services/advanced/opsec/).
+- To secure your server, click 
[here](https://gitlab.torproject.org/legacy/trac/-/wikis/doc/OperationalSecurity).
+- To improve the security of your Onion Service with Vanguards, click 
[here](https://blog.torproject.org/announcing-vanguards-add-onion-services/).
+- To scan your Onion Service to find any vulnerabilities, click 
[here](https://onionscan.org/).
+
+
+## Moving an Onion Service
+
+To move an Onion Service to another machine, copy the /var/lib/tor/my_website/
+directory from the old machine into /var/lib/tor/ on the new machine.
+
+On the old machine, stop and disable the Tor service.
+
+Make sure the torrc (/etc/tor/torrc) on the new system has the same
+configuration as the old one.
+
+Ensure that the /var/lib/tor/my_website/ directory on the new machine has the
+right owner (debian-tor).
+
+Then restart the Tor service on the new machine.
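+
+A sketch of the whole move (hostname illustrative):
+
+    # on the old machine
+    systemctl stop tor
+    systemctl disable tor
+    rsync -a /var/lib/tor/my_website/ root@new-machine:/var/lib/tor/my_website/
+
+    # on the new machine
+    chown -R debian-tor:debian-tor /var/lib/tor/my_website
+    systemctl restart tor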

Added: trunk/sviki/fsf/tools/tor_usage.mdwn
===================================================================
--- trunk/sviki/fsf/tools/tor_usage.mdwn                                (rev 0)
+++ trunk/sviki/fsf/tools/tor_usage.mdwn        2023-12-06 20:05:18 UTC (rev 
685)
@@ -0,0 +1,29 @@
+# Access git repos or other services through Tor
+
+From <https://rt.gnu.org/Ticket/Display.html?id=1037998>:
+if using the command line, torsocks should do it. Something like:
+
+    torsocks git clone blah
+
+or
+
+    torsocks cvs -z3 -d:pserver:anonymous@cvs.savannah.gnu.org:/web/www co www
+
+Note that this is probably not safe to use with UDP-based programs such as
+BitTorrent or SIP.
+
+# Tunnel VPN through Tor
+
+Add these lines to /etc/tor/torrc:
+
+    SocksPort 9150 PreferSOCKSNoAuth
+    SocksPort 9050 PreferSOCKSNoAuth
+
+Restart tor:
+
+    /etc/init.d/tor restart
+
+Add these two lines to /etc/openvpn.client.conf:
+
+    socks-proxy localhost 9150
+    socks-proxy-retry
+
+Start the vpn as usual.

Added: trunk/sviki/fsf/tools/ufw.mdwn
===================================================================
--- trunk/sviki/fsf/tools/ufw.mdwn                              (rev 0)
+++ trunk/sviki/fsf/tools/ufw.mdwn      2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,31 @@
+# ufw
+
+`ufw` (Uncomplicated Firewall) is used by some VMs in Ansible. To enable `ufw`
+for a VM, add it to the `ufw` group. Note that rules stick around even if you
+remove from Ansible the rule that added them: you have to add a rule that
+deletes it in Ansible, or delete any custom rules by hand.
+
+To see a list of firewall rules:
+
+    ufw show added
+
+The above format allows you to copy and paste a rule, then add the word
+`delete` after `ufw` to remove that rule, as in the sketch below. Make sure to
+include the `comment` keyword and its quoted value.
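+
+As a sketch (the rule and its comment are hypothetical), the round trip looks like:
+
+    # a rule as printed by `ufw show added`
+    ufw allow from 127.0.0.1 comment 'Allow Eric in.'
+
+    # the same line with `delete` added after `ufw` removes it
+    ufw delete allow from 127.0.0.1 comment 'Allow Eric in.'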
+
+You may also be interested in a shorter form:
+
+    ufw status
+
+To add an IP address, for example to give a volunteer access, replace
+`127.0.0.1` with their IP address:
+
+    ufw insert 1 allow from 127.0.0.1 comment 'Allow Eric in.'
+
+To add an IPv6 address, it needs to come after the IPv4 rules: using `insert 1` will fail with `ERROR: Invalid position '1'`. Run `ufw status`, count the rules up to the first IPv6 rule, and insert at that position. [More information](https://austinsnerdythings.com/2021/09/13/ipv6/).
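+
+For example (address hypothetical), if `ufw status` shows three IPv4 rules before the first IPv6 rule, insert at position 4:
+
+    ufw insert 4 allow from 2001:db8::1 comment 'Allow Eric in over IPv6.'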
+
+## Resources
+
+* [Arch wiki](https://wiki.archlinux.org/title/Uncomplicated_Firewall)
+* [Ubuntu wiki](https://wiki.ubuntu.com/UncomplicatedFirewall)
+  * [Ubuntu community](https://help.ubuntu.com/community/UFW)
+* [Debian wiki](https://wiki.debian.org/Uncomplicated%20Firewall%20%28ufw%29)

Added: trunk/sviki/fsf/tools/yourls.mdwn
===================================================================
--- trunk/sviki/fsf/tools/yourls.mdwn                           (rev 0)
+++ trunk/sviki/fsf/tools/yourls.mdwn   2023-12-06 20:05:18 UTC (rev 685)
@@ -0,0 +1,23 @@
+# [YOURLS](https://yourls.org/) URL Shortener
+
+Our YOURLS instance is on [u1p](https://u1p.fsf.org/) with the address <https://u.fsf.org/>.
+
+## Config
+
+Config file:
+
+`/var/www/yourls/user/config.php`
+
+[More about the config file](https://yourls.org/#Config)
+
+## Allow duplicate links
+
+<https://github.com/YOURLS/YOURLS/issues/2411>
+
+Add `define( 'YOURLS_UNIQUE_URLS', false );` to the config. Note that the value should be the PHP boolean `false`, not the quoted string `'false'`, which PHP treats as true.
+
+## Links
+
+* [Project page](https://yourls.org/)
+* [Github](https://github.com/YOURLS/YOURLS/)
+* [brains](https://brains.fsf.org/wiki/tools/yourls/)

Modified: trunk/sviki/fsf.mdwn
===================================================================
--- trunk/sviki/fsf.mdwn        2023-12-06 19:27:40 UTC (rev 684)
+++ trunk/sviki/fsf.mdwn        2023-12-06 20:05:18 UTC (rev 685)
@@ -1,12 +1,13 @@
 FSF Tech Notes Directory
 ===================
 
+Documentation useful to the FSF tech team, and hopefully the public, that
+is not strictly related to Savannah.
 
-Documentation useful the FSF & hopefully the public which is mostly not strictly related to Savannah.
 
 ## Contents
 
-* Tools    - About the tools we depend on.
+### Tools    - About the tools we depend on.
 
 [[!map pages="fsf/tools/* and ! fsf/tools/*/*"]]
 
@@ -20,6 +21,11 @@
 documentation there that was better off public since we didn't have a
 public ikiwiki.
 
-This subdirectory aims to start rectifying that situation. The pages
-here might be moved to somewhere more appropriate if we can decide where
-that is.
+This subdirectory aims to start rectifying that situation.
+
+Migration here started in December 2023. These pages contain some references to things like sshing to hosts that only the tech team has access to; be aware, use common sense, and feel free to improve them.
+
+The pages here might be moved to somewhere more appropriate if we can decide where that is.



