gnunet-svn

[taler-grid5k] 184/189: add documentation


From: gnunet
Subject: [taler-grid5k] 184/189: add documentation
Date: Thu, 28 Apr 2022 10:49:14 +0200

This is an automated email from the git hooks/post-receive script.

marco-boss pushed a commit to branch master
in repository grid5k.

commit 9ee4f87c8277a1b7b2e9dfab199a6c5b70d157ba
Author: Boss Marco <bossm8@bfh.ch>
AuthorDate: Thu Apr 21 18:44:18 2022 +0200

    add documentation
---
 README.md                                          |  11 +-
 additional/README.md                               |  20 +++-
 additional/recover/docker-compose.yaml             |   2 +-
 docker/README.md                                   |  32 ++---
 experiment/README.md                               | 131 +++++++++++++++------
 experiment/TODO                                    |   0
 experiment/env                                     |  19 +++
 experiment/experiment-specification.yml            |   2 +
 experiment/scripts/benchmark.sh                    |   4 +-
 experiment/scripts/createusers.sh                  |   5 +-
 experiment/scripts/data-backup.sh                  |   5 +-
 experiment/scripts/database-centos.sh              |  18 ++-
 experiment/scripts/database.sh                     |  15 ++-
 experiment/scripts/dns.sh                          |   5 +-
 experiment/scripts/install.sh                      |  73 +++++++-----
 experiment/scripts/monitor.sh                      |   2 +-
 experiment/scripts/ping.sh                         |   3 +-
 experiment/scripts/postgres-cluster/README.md      |  24 ++++
 .../scripts/{ => postgres-cluster}/db-cluster.sh   |  13 +-
 .../scripts/{ => postgres-cluster}/exch-cluster.sh |   9 ++
 .../scripts/postgres-cluster/proxy-cluster.sh      |  18 +++
 experiment/scripts/proxy-cluster.sh                |   7 --
 experiment/scripts/run.sh                          |   2 +
 experiment/scripts/setup.sh                        |  20 +++-
 image/README.md                                    |  26 ++--
 image/centos8/taler-centos8.yaml                   |   5 +-
 notes.txt                                          |   0
 27 files changed, 343 insertions(+), 128 deletions(-)

diff --git a/README.md b/README.md
index 3f170a0..6846a17 100644
--- a/README.md
+++ b/README.md
@@ -32,16 +32,19 @@ data of experiments.
 ### Configs
 
 Contains the configurations for the applications in the environment.
-They will be adjusted copied to '/' (some make sure to add the correct directory strucutre) 
-once an experiment is started.
+They will be adjusted and copied to '/' once an experiment is started
+(make sure that they match the image's directory structure).
+
+**NOTE**: Postgres configuration is located in `experiment/scripts/database[-centos].sh`
+          and not in `configs`.
 
 ## Quick Start
 
-To run an experiment, you must
+To run an experiment, you need to
 
 * (optionally) have a grafana instance
* Make sure the environment exists in the public directory which is configured in 
  `experiment/taler.rspec`. If it's not in the grid, use `image/README.md` or `docker/README.md`
  to see how to build such an environment.
-* Read `experiment/README.md` for instructions on how to run an experiment inside the grid 
+* Read `experiment/README.md` for instructions on how to run an experiment on Grid'5000. 
   
diff --git a/additional/README.md b/additional/README.md
index 27f0f47..7739f9e 100644
--- a/additional/README.md
+++ b/additional/README.md
@@ -5,7 +5,17 @@
 Backup Grid'5000 shares which were created by an experiment.
 Contains all logs and the node configuration.
 
-Simply use with `./persist.sh <BACKUP_NAME> <OPTIONAL_PLOT_ARCHIVE_TO_INCLUDE>`.
+Simply use with: 
+
+```bash
./persist.sh -b <BACKUP_NAME> <OPTIONAL_PLOT_ARCHIVE_TO_INCLUDE>
+```
+
+Once the backup has succeeded, it is a good idea to delete the remaining data on the NFS:
+
+```bash
+./persist.sh -d
+```
 
 ## plots
 
@@ -55,9 +65,11 @@ the dump from the experiment database. To do so run the following steps (in its
 
 ## grafana
 
+Dashboard JSON files to upload to Grafana - also needed for experiment recovery.
+
 ### Custom
 
-Contains all *custom* dashboards for the experiments. 
+Contains all *custom* (and downloaded library) dashboards for the experiments. 
+Import them via `Create->Import->Upload JSON` (plus sign) 
 
The database dashboard is a combination of the following two (plus some custom panels):
@@ -67,7 +79,7 @@ The database dashboard is a combination of the following two 
(plus some custom c
 
 ### Library
 
-Additional ones needed can be imported from the library.
+Additional ones needed can be imported from the library (or as described above).
In your Grafana instance head to `Create->Import->Import via grafana.com` 
and copy these IDs one after another:
 
@@ -82,4 +94,4 @@ can be found in `../experiment`.
 ## recover
 
Docker setup to recreate the Grafana dashboards of a past experiment. Please refer
-to `recover/README.md` for usage information
+to `recover/README.md` for usage information.
diff --git a/additional/recover/docker-compose.yaml 
b/additional/recover/docker-compose.yaml
index 4f25fa4..5375828 100644
--- a/additional/recover/docker-compose.yaml
+++ b/additional/recover/docker-compose.yaml
@@ -4,7 +4,7 @@ services:
 
   prometheus:
     hostname: prometheus
-    image: prom/prometheus:latest
+    image: prom/prometheus:v2.35.0
     restart: 'no'
     container_name: prometheus
     user: "${U_ID}:${G_ID}"
diff --git a/docker/README.md b/docker/README.md
index 325a939..0dc1573 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -1,6 +1,6 @@
-# Taler Grid5000 Build Image
+# Taler Grid'5000 Build Image
 
-This docker image can be used to build the Grid5000 image for the taler performance experiments
+This docker image can be used to build the Grid'5000 environment for the Taler performance experiments.

 ## Build
 
@@ -9,9 +9,9 @@ Or alternatively with `docker-compose up --build` **NOTE** this 
will also run th
 
 ## Run the build
 
-Running the image will build GNUnet, Taler and the Grid5000 image from the specified commits. 
+Running the image will build GNUnet, Taler and the Grid'5000 environment from the specified commits. 

-The image will then be uploaded to the specified nodes Grid5000 public directory using the certificate provided.
+The environment will then be uploaded to the specified node's Grid'5000 public directory using the certificate provided.
 
 ### docker
 
@@ -30,12 +30,12 @@ docker run -it --rm \
            taler:build <ARGUMENTS>
 ```
 
-**NOTE** about the port 5900, this one can be used for vncviewer to see whats 
happening inside the image which 
-will be created. Run `vncviewver :0`.
+**NOTE** About port 5900: it can be used with vncviewer to see what's happening inside the image while 
+kameleon is running. Run `vncviewer :0`.
 
-#### Manual Build
+#### Manual Build / Debugging
 
-To get an interactive shell into the image override the entrypoint by adding 
the following argument
+To get an interactive shell into the docker container, override the entrypoint by adding the following argument
 before the last line in the command above:
 
 ```bash
@@ -71,14 +71,16 @@ docker-compose run --entrypoint /bin/bash taler-build
 
 #### Environment Variables
 
+All variables listed below can be passed to the container either with `-e` or by adding them to the docker-compose file.
+
 **GRID5K_USER**: the user which `GRID5K_CERT` belongs to
 **GRID5K_CERT**: the certificate which is used to login to the Grid5000 nodes 
(docker-compose only)
 **GRID5K_CERT_PASSWD**: the password to decrypt `GRID5K_CERT`
 **GRID5K_DEST**: comma separated list of where to copy the image to in the 
grid (default: lille,lyon)
 **ARGUMENTS**: args to pass to entrypoint, one of 
   -r|--rebuild (rebuild the image)
-  -n|--no-copy (do not copy the generated image to Grid5000 - make sure output 
volume is mounted)
-  --centos     (build the centos8 image instead of the default debian11)
+  -n|--no-copy (do not copy the generated image to Grid5000 - make sure output 
volume is mounted - see below)
+  --centos     (build the `centos8` image instead of the default `debian11`)
 By default, running the docker command again will not clean or rebuild the image.
 
 ##### Additional
@@ -109,15 +111,15 @@ apt install -y tigervnc-viewer
 vncviewer :0
 ```
 
-#### Output
+#### Output Volume
 
-The image will be published to the Grid5000's public directory on a specified 
node.
-Additionally the generated image can also be mounted to the host by specifying 
`-v <some_path>:/root/output`
-or 
+The image will be published to Grid'5000's public directory on a specified 
node.
+Additionally the generated image can also be mounted to the host by passing 
`-v <some_path>:/root/output` to `docker run`
+or with docker-compose:
 
 ```yaml
 volumes:
   - <some_path>:/root/output
 ```
 
-respectively
+in `docker-compose.yaml`.
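The `ARGUMENTS` flag handling documented above amounts to something like the following sketch. The function name, the `*)` fallback, and the defaults are illustrative assumptions, not the repository's actual entrypoint code:

```bash
#!/bin/bash
# Illustrative parser for the documented ARGUMENTS flags
# (-r|--rebuild, -n|--no-copy, --centos). Defaults mirror the
# README: no rebuild, copy to Grid'5000, debian11 image.
parse_args() {
  REBUILD=false   # -r|--rebuild: rebuild the image
  COPY=true       # -n|--no-copy: skip the copy to Grid'5000
  DISTRO=debian11 # --centos: build centos8 instead
  while [[ $# -gt 0 ]]; do
    case "$1" in
      -r|--rebuild) REBUILD=true ;;
      -n|--no-copy) COPY=false ;;
      --centos)     DISTRO=centos8 ;;
      *)            echo "unknown flag: $1" >&2 ;;
    esac
    shift
  done
}
```

Passing `--centos -n` would thus build the centos8 image and keep it in the mounted output volume instead of copying it to the grid.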
diff --git a/experiment/README.md b/experiment/README.md
index 1a65104..8b6caab 100644
--- a/experiment/README.md
+++ b/experiment/README.md
@@ -1,77 +1,140 @@
 # Experiment Setup
 
+## Requirements
+
+### jFed
+
+jFed is needed to run an experiment with the current setup. Please see [here](https://jfed.ilabt.imec.be/).
+
 ## Files
 
* experiment-specification.yml: [ESpec](https://jfed.ilabt.imec.be/espec/) for jFed 
 * taler.rspec: Complete set of nodes to run an experiment (others contain more 
wallets
-  or shards for example). Find more in ../aditional/rpsecs
+  or shards for example). Find more in `../additional/rspecs`
* env: template file to add environment variables needed for the experiment
 * scripts: Bash scripts which will be run in the experiment
 * ssh: ssh key material which the nodes use in the ESpec phase to communicate 
with each other.
-  Safe to use here, since nodes can only be reached from inside the grid 
already
+  Safe to use here, since nodes can only be reached from inside the grid already
+  (currently used for the centos8 DB only)
   
 ## Run an Experiment
 
 To successfully run an experiment, the following steps must be taken:
 
-**NOTE** An external Grafana instance with Taler Performance Dashboards is 
needed
-         Dashboards can be found in `additional/grafana`
+**NOTE** An external Grafana instance with Taler Performance Dashboards is optional, but needed to see metrics and results.
+         The dashboards can be found in `additional/grafana`
         Install instructions can be found on [grafana.com](https://grafana.com/docs/grafana/latest/installation/)
         Once installed, two datasources must be added - Prometheus and Loki; they will be updated from the experiment
 
 * Copy the environment default configuration `env` to `.env`
-* Read through `.env` and define the missing variables
+* Read through `.env` and define the missing variables. **NOTE**: Postgres configuration is located in `scripts/database[-centos].sh`
 * Start jFed Experimenter GUI 
 * Load taler.rspec and click Run
 * Specify the experiment name and time
 * Wait until taler.rspec is allocated successfully and nodes are ready
* Click (Re)Run ESpec for the job (use the type Directory and select this directory (experiment))
-* If any error ocurrs just press (Re)Run Espec again because sometimes there are still unindentidfied errors
+* If any error occurs, just press (Re)Run ESpec again, because sometimes there are still unidentified errors in jFed / G5k
+* Start wallet processes with `taler-perf start wallet N` on any node in the experiment
 
-### Start wallets
+**NOTE** The Grid'5000 environments are copied to a public directory of the grid, so one might still be
+         available at `http://public.lille.grid5000.fr/~bfhch01/taler-debian11.dsc`.
+         If not, you must build your own (see `../image`) and specify it for each node in each rspec.
+         This can be done manually via jFed (double click the node and replace `bfhch01` with your Grid'5000 username),
+         or with sed (which is simpler and faster):
 
-* Run `talet-perf start wallets N` where N is any number
+```bash
+sed -i "s/bfhch01/YOUR_G5K_USERNAME/g" <RSPEC_FILE>
+```
 
-**NOTE** On `taler-perf`, when not using a terminal opened from jFed make sure 
to forward the ssh-agent
-         to make the script work. E.g. `ssh -A graoully-3.nancy.g5k` 
+### Hints and Bugs
 
-## Rebuild Taler Binaries
+#### Dependencies
 
-To quickly test fixes of new commits in gnunet,exchange,merchant and wallet, 
there is a script `scripts/install.sh`
-which can be run inside a running experiment rather than rebuilding the whole 
image.
+Nodes may have dependencies in `run.sh`, thus waiting for another node to finish before continuing initialization.
+An example of this is the exchange: it will only be started once `DB_USER` has remote access enabled (`wait_for_db` in `helpers.sh`).
 
-To do so copy the following snippet into the `Multi Command` window in jFed:
-Please adjust commits as needed, the ones which are not defined or empty will 
not be built.
-The ones which are a space only (" ") will build on master.
+However, most nodes which log to Promtail/Loki will wait with initialization until Promtail is running,
+so if something is stuck execute the following command on the `monitor` node:
 
 ```bash
-#!/bin/bash
+systemctl status promtail.service
+```
+or
+```bash
+grep -i promtail /var/log/syslog
+```
 
-export GNUNET_COMMIT_SHA=master
-export EXCHANGE_COMMIT_SHA=master
-export MERCHANT_COMMIT_SHA=master
-export WALLET_COMMIT_SHA=master
+#### DNS not working
 
-/bin/bash /root/taler/grid5k/experiment/scripts/install.sh
-# /bin/bash /root/scripts/install.sh
-```
+It _should_ work. If not, just (re)run the ESpec; it sometimes happened that bind was not ready when the domain names were set with dyndns.
+
+## Rebuild Taler Binaries
+
+On each experiment start, the `setup` script checks if variables like `GNUNET_COMMIT_SHA` are set;
+if they are (in `.env`), the corresponding binary is rebuilt from source at the specified commit.
+For more info please read `scripts/setup.sh` or `scripts/install.sh`.
 
-## Actions in running Experiment
+## Actions in a running Experiment
+
+### Start wallets
+
+* Run `taler-perf start wallet N` where N is any number
+
+**NOTE** On `taler-perf`, when not using a terminal opened from jFed, make sure to forward the ssh-agent
+         to make the script work, e.g. `ssh -A graoully-3.nancy.grid5000.fr` (from an access machine).
 
-To add more exchange processes run `taler-perf stop ecxchange <NUM>` on any node
+To add more exchange processes run `taler-perf start exchange <NUM>` on any node
 To add more wallet processes run `taler-perf start wallet <NUM>` on any node
 
 They can also be stopped in the same way: `taler-perf stop <KIND> <NUM>`
-To stop the wallet processes which are logging, specify `taler-perf stop 
wallet logging`, they will not 
-be stopped otherwise.
 
-### Grafana Dashboard Plotting
- 
-To persists the dashboards as png plots, there is a script in `../additional` 
which creates png
-plots based on a configuration. Please refer to the README located in the 
specified directory.
-
-### Experiment Persistance
+### Experiment Persistence
 
 The script `../additional/persist.sh` can be used to back up and clean the data in the grid5k NFS.
 The archive created can then be passed to `../additional/recover.sh`, which will run a local Grafana setup
 in which the experiment can be inspected again.
+
+For this to work, the service `taler-databackup` (using `scripts/data-backup.sh`) is run periodically.
+Loki data is written directly to the NFS. For the best results, stop Loki on the `monitor` node
+before an experiment ends - otherwise some of the Loki data might become corrupt.
+
+#### (Deprecated) Grafana Dashboard Plotting
+
+To persist the dashboards as PNG plots, there is a script in `../additional` which creates PNG
+plots based on a configuration. Please refer to the README located in the specified directory.
+
+
+### Developer Notes
+
+#### Experiment Flow
+
+1. `experiment-specification.yml` is run when running the ESpec, setting up requirements and uploading scripts
+2. `setup.sh` is run on every node, exporting necessary environment variables to `~/.env` and `/etc/environment`
+3. `run.sh` is run on every node, setting domain names and starting the correct script according to the role
+   the node has been assigned (given from `NODES` and `HOSTNAME`)
+
+#### Add more nodes
+
+Adding more nodes should be as simple as:
+
+1. Adding it in an rspec file (jFed)
+2. Adding the node name given in jFed to the `NODES` variable in `env` & `.env`
+3. Extending `run.sh` - add a section like: 
+   ```bash
+   ...
+   elif [[ "${HOSTNAME}" =~ ${<NODE_NAME_UPPERCASE>_HOSTS} ]]; then
+     ...
+     exec ~/scripts/<node-script>.sh
+   ...
+   ```
+4. Creating the `<node-script>.sh` containing node specific setup steps
+
+#### Environment
+
+So that all environment variables are always available in every shell, they are exported to `/etc/environment`.
+This file is also used by most of our custom service files in 
`/usr/lib/systemd/system`.
+
+#### Useful functions
+
+The file `scripts/helpers.sh` contains a lot of reusable functions - please read this file for more information.
+
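The role dispatch in `run.sh` described above (matching `HOSTNAME` against per-role patterns and exec'ing the node script) can be sketched as follows. The pattern values and the `exchange.sh` script name are illustrative assumptions; only `scripts/database.sh` appears in this commit:

```bash
#!/bin/bash
# Sketch of the run.sh dispatch: pick the node script from HOSTNAME.
# Patterns would normally be derived from the NODES variable in .env.
DB_HOSTS="^db"
EXCHANGE_HOSTS="^exchange-[0-9]+"

dispatch() {
  local host="$1"
  if [[ "${host}" =~ ${DB_HOSTS} ]]; then
    echo "scripts/database.sh"        # database node setup
  elif [[ "${host}" =~ ${EXCHANGE_HOSTS} ]]; then
    echo "scripts/exchange.sh"        # illustrative exchange script
  else
    echo "no role matched"
  fi
}
```

Adding a node type, as in step 3 above, is then just one more `elif` branch plus the corresponding `<node-script>.sh`.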
diff --git a/experiment/TODO b/experiment/TODO
deleted file mode 100644
index e69de29..0000000
diff --git a/experiment/env b/experiment/env
index eeed21f..ab41fc1 100644
--- a/experiment/env
+++ b/experiment/env
@@ -72,6 +72,8 @@ LOKI_G5K_PROXY_PORT=http
 
 # Which version of postgres is installed in the environment
 POSTGRES_VERSION=13
+# Setup postgres to use RAM instead of disks to store WAL and data
+POSTGRES_IN_MEMORY=false
 # Exchange database configuration
 DB_NAME=taler-exchange
 DB_USER=taler
@@ -137,3 +139,20 @@ GRAFANA_API_KEY=
 # previous experiment should be removed before starting
 # a new one
 REMOVE_PREVIOUS_EXPERIMENT_DATA=true
+
+# If *_COMMIT_SHA is not empty the corresponding
+# binary is rebuilt from source on the specified commit
+# with *_CFLAGS before starting the experiment.
+LIBMICROHTTPD_COMMIT_SHA=""
+LIBMICROHTTPD_CFLAGS=""
+
+GNUNET_COMMIT_SHA=""
+GNUNET_CFLAGS=""
+
+EXCHANGE_COMMIT_SHA=""
+EXCHANGE_CFLAGS=""
+
+MERCHANT_COMMIT_SHA=""
+MERCHANT_CFLAGS=""
+
+WALLET_COMMIT_SHA=""
diff --git a/experiment/experiment-specification.yml 
b/experiment/experiment-specification.yml
index 6a4a8c0..32ab5be 100644
--- a/experiment/experiment-specification.yml
+++ b/experiment/experiment-specification.yml
@@ -32,6 +32,8 @@ execute:
       # Add the uploaded key as trusted
       echo "" >> ~/.ssh/authorized_keys
       cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+      # Stop all 'run.sh' scripts which are still running
+      kill $(ps -aux | grep run[.sh] | awk '{print $2}') || true
     log: /dev/null
   # Setup DNS and Environment config
   - path: ~/scripts/setup.sh
diff --git a/experiment/scripts/benchmark.sh b/experiment/scripts/benchmark.sh
index 72cd87c..0802aee 100755
--- a/experiment/scripts/benchmark.sh
+++ b/experiment/scripts/benchmark.sh
@@ -13,13 +13,13 @@ source ~/scripts/helpers.sh
 
 # Start a wallet benchmark loop
 function start_wallet_bench() {
-  LOG_LEVEL=ERROR
-  PROTO=http
 
+  PROTO=http
   if [[ ${WALLET_USE_HTTPS} == "true" ]]; then
     PROTO=https
   fi
   
+  LOG_LEVEL=ERROR
   # One wallet in a hundred should log messages
   if ! (($1 % 100)) || [ $1 == "1" ]; then
     LOG_LEVEL=INFO
diff --git a/experiment/scripts/createusers.sh 
b/experiment/scripts/createusers.sh
index c53098a..3f8d8eb 100755
--- a/experiment/scripts/createusers.sh
+++ b/experiment/scripts/createusers.sh
@@ -4,8 +4,9 @@
 # (normaly done automatically when installing from packages)
 #
 # Usage: ./createusers.sh
-set -e
+set -ex
 
+export DEBIAN_FRONTEND=noninteractive
 source /usr/share/debconf/confmodule
 
 TALER_HOME="/var/lib/taler"
@@ -121,5 +122,3 @@ if ! dpkg-statoverride --list 
/etc/taler/secrets/merchant-db.secret.conf >/dev/n
     ${MERCHUSERNAME} root 460 \
     /etc/taler/secrets/merchant-db.secret.conf
 fi
-
-exit 0
diff --git a/experiment/scripts/data-backup.sh 
b/experiment/scripts/data-backup.sh
index ed31ef5..7a229c5 100755
--- a/experiment/scripts/data-backup.sh
+++ b/experiment/scripts/data-backup.sh
@@ -1,11 +1,12 @@
 #!/bin/bash
 
-# Script which creates a snapshot of a running Prometheu
-# and Loki instance
+# Script which creates a snapshot of a running Prometheus instance
 
 # This will copy the snapshot to the configured LOG_DIR
 # (setup.sh) hopefully persistent on the Grid5000 NFS
 
+# Usage: ./data-backup.sh
+
 set -eu
 
 if [[ $(ps -aux | grep "[data]-backup.sh" | wc -l) -eq 1 ]]; then
diff --git a/experiment/scripts/database-centos.sh 
b/experiment/scripts/database-centos.sh
index 492dff6..fe50885 100755
--- a/experiment/scripts/database-centos.sh
+++ b/experiment/scripts/database-centos.sh
@@ -1,10 +1,16 @@
 #!/bin/bash
 INFO_MSG="
-Setup the database node (start postgresql)
+Setup the database node (start postgresql) in a centos environment
 "
 OPT_MSG="
 init:
-  Initialize and start the taler database
+  Initialize and start the taler database 
+   (calls remote-init on a remote node and thus 
+   needs ssh access to other experiment nodes)
+
+remote-init:
+  Call taler-exchange-dbinit against the database
+  node from a remote node (since centos does not have Taler)
 "
 
 set -eux
@@ -13,6 +19,7 @@ source ~/scripts/helpers.sh
 # move to tmp to prevent change directory errors
 cd /tmp 
 
+# Create the disk and mount it if possible
 function setup_disks() {
   if [ -b /dev/disk1 ]; then
     echo 'start=2048, type=83' | sfdisk /dev/disk1 || true
@@ -35,6 +42,7 @@ Environment=PGDATA=/tmp/postgresql/13/data
  
   PGSETUP_INITDB_OPTIONS="-D /tmp/postgresql/13/data"
 
+  # Only do this if the disk has been configured
   if [ -d /mnt/disk ]; then
     mkdir /mnt/disk/pg_wal || true
     chown -R postgres:postgres /mnt/disk/pg_wal
@@ -223,6 +231,8 @@ Environment=PGDATA=/tmp/postgresql/13/data
   systemctl restart postgresql-${POSTGRES_VERSION}
 }
 
+# Allow DB_USER from remote (the Exchange will continue initialization
+# only after this has been run)
 function enable_remote_access() {
   # Enable password for taler since this is the case in real world deployments
   # For the postgres user do not enable authentication (used in metrics)
@@ -235,6 +245,7 @@ function enable_remote_access() {
   fi
 }
 
+# Create user mappings for DB_USER for each shard
 function configure_shard_access() {
 
   for i in $(seq $NUM_SHARDS); do
@@ -248,6 +259,7 @@ EOF
 }
 
 # Initialize the database for taler exchange
+# Calls remote-dbinit from a remote system - requires ssh access
 function init_db() {
 
   # Create the role taler-exchange-httpd and the database
@@ -286,11 +298,11 @@ GRANT SELECT,INSERT,UPDATE ON ALL TABLES IN SCHEMA public 
TO "${DB_USER}";
 GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "${DB_USER}";
 EOF
 
-
   enable_remote_access
   systemctl restart postgresql-13
 }
 
+# Initialize the database from a system which has GNU Taler installed
 function remote_init_db() {
 
   sed -i 
"s\<DB_URL_HERE>\postgresql://taler-exchange-httpd@db.${DNS_ZONE}:${DB_PORT}/${DB_NAME}\g"
 \
diff --git a/experiment/scripts/database.sh b/experiment/scripts/database.sh
index e25aa9b..47761fd 100755
--- a/experiment/scripts/database.sh
+++ b/experiment/scripts/database.sh
@@ -13,6 +13,8 @@ source ~/scripts/helpers.sh
 # move to tmp to prevent change directory errors
 cd /tmp 
 
+# Setup a disk if present to be used for WAL and move the WAL there
+# Don't use in combination with `setup_ram_storage`
 function setup_disks() {
   if [ -b /dev/disk1 ]; then
     echo 'start=2048, type=83' | sfdisk /dev/disk1 || true
@@ -28,8 +30,9 @@ function setup_disks() {
   fi
 }
 
+# Setup Postgres to use RAM instead of disks to store data
 function setup_ram_storage() {
-  SIZE=$(($(awk '/MemTotal/ {print $2}' /proc/meminfo) / 10))
+  SIZE=$(($(awk '/MemTotal/ {print $2}' /proc/meminfo) / 4))
   if ! df | grep -q /tmp/postgresql; then
     mv /tmp/postgresql /tmp/postgresql.bak
     mkdir /tmp/postgresql
@@ -232,6 +235,8 @@ function setup_pgbouncer() {
   fi
 }
 
+# Allow DB_USER from remote (the Exchange will continue initialization
+# only after this has been run)
 function enable_remote_access() {
   # Enable password for taler since this is the case in real world deployments
   # For the postgres user do not enable authentication (used in metrics)
@@ -244,6 +249,7 @@ function enable_remote_access() {
   fi
 }
 
+# Create user mappings for DB_USER for each shard
 function configure_shard_access() {
 
   for i in $(seq $NUM_SHARDS); do
@@ -317,8 +323,11 @@ EOF
 case ${1} in 
   init)
     setup_config
-    setup_disks
-    # setup_ram_storage
+    if [[ "${POSTGRES_IN_MEMORY}" == "true" ]]; then
+      setup_ram_storage
+    else
+      setup_disks
+    fi
     init_db
     setup_pgbouncer
     restart_rsyslog
diff --git a/experiment/scripts/dns.sh b/experiment/scripts/dns.sh
index 3ad4e72..1754526 100755
--- a/experiment/scripts/dns.sh
+++ b/experiment/scripts/dns.sh
@@ -4,7 +4,10 @@ set -eux
 # Backup used nodes for experiment
 cp ~/nodes.json ${LOG_DIR}/nodes.json
 
-if ! grep -q "# Times" ${LOG_DIR}/espec-times; then
+if [[ "$REMOVE_PREVIOUS_EXPERIMENT_DATA" == "true" ]]; then
+  rm -rf /home/${G5K_USER}/espec-times || true
+fi
+if ! grep -q "# Times" /home/${G5K_USER}/espec-times; then
   echo "# Times to use for recovery" > /home/${G5K_USER}/espec-times
 fi
 echo "$(date +%s)" >> /home/${G5K_USER}/espec-times
diff --git a/experiment/scripts/install.sh b/experiment/scripts/install.sh
index 86d8ed6..74d535c 100755
--- a/experiment/scripts/install.sh
+++ b/experiment/scripts/install.sh
@@ -1,18 +1,25 @@
 #!/bin/bash
-# Rebuild the taler binaries from source
-# Requires the following optional variables to be set,
-# if not set the corresponding repo will not be rebuilt.
-# <GUNET|EXCHANGE|MERCHANT|WALLET>_COMMIT_SHA
+echo "
+Rebuild the taler binaries from source
+The following optional variables can be set:
 
+<LIBMICROHTTPD|GNUNET|EXCHANGE|MERCHANT|WALLET>_COMMIT_SHA
+
+If not set the corresponding repo will not be rebuilt.
+
+Optionally, CFLAGS can be passed with:
+
+<LIBMICROHTTPD|GNUNET|EXCHANGE|MERCHANT>_CFLAGS
+"
 TALER_HOME=~/taler
 
 # Prepare the repository
 # $1: Git repo to clone
 # $2: Commit to checkout to
-function prepare() {
-  DIR="${TALER_HOME}/$(basename ${1%.*})"
-  test -d "${DIR}" || git clone "${1}" "${DIR}"
-  cd "${DIR}"
+function prepare_repo() {
+  SRC_DIR="${TALER_HOME}/$(basename ${1%.*})"
+  test -d "${SRC_DIR}" || git clone "${1}" "${SRC_DIR}"
+  cd "${SRC_DIR}"
   git checkout master > /dev/null && \
     (git pull > /dev/null 2>&1 || true)
   git checkout "$2" > /dev/null && \
@@ -21,24 +28,26 @@ function prepare() {
 
 # Build the binaries in the current directory with make
 # (runs ./bootstrap & ./configure)
+# $1: optional CFLAGS
 function build() {
   echo "INFO running bootstrap and configure"
   ./bootstrap
   if [ -f contrib/gana.sh ]; then
     ./contrib/gana.sh
   fi
-  ./configure --enable-logging=verbose --prefix=/usr || ./configure
+  CFLAGS="$1" ./configure --enable-logging=verbose --prefix=/usr || CFLAGS="$1" ./configure
   make
 }
 
 # Install from a git repo
 # $1: Git repo to clone
 # $2: Commit to checkout to
-function install() {
-  prepare "$1" "$2" 
-  build 
+# $3: Optional CFLAGS for ./configure
+function install_repo() {
+  prepare_repo "$1" "$2" 
+  build "$3"
   echo "INFO installing"
-  make install
+  make install
   ldconfig
 }
 
@@ -46,29 +55,37 @@ if [ ! -d "${TALER_HOME}" ]; then
   mkdir "${TALER_HOME}"
 fi
 
-# Use ! -z since -n would be false for ""
-if [ ! -z "${GNUNET_COMMIT_SHA}" ]; then
+if [[ -n ${LIBMICROHTTPD_COMMIT_SHA} ]]; then
+  echo "INFO installing libmicrohttpd"
+  install_repo "https://git.gnunet.org/libmicrohttpd.git" \
+               "${LIBMICROHTTPD_COMMIT_SHA:-master}" \
+               "${LIBMICROHTTPD_CFLAGS}"
+fi
+
+if [[ -n ${GNUNET_COMMIT_SHA} ]]; then
   echo "INFO installing GNUnet"
-  install "https://git.gnunet.org/gnunet.git" \
-          "${GNUNET_COMMIT_SHA:-master}"
+  install_repo "https://git.gnunet.org/gnunet.git" \
+               "${GNUNET_COMMIT_SHA:-master}" \
+               "${GNUNET_CFLAGS}"
 fi
 
-if [ ! -z "${EXCHANGE_COMMIT_SHA}" ]; then
+if [[ -n ${EXCHANGE_COMMIT_SHA} ]]; then
   echo "INFO installing Taler Exchange"
-  install "https://git.taler.net/exchange.git" \
-          "${EXCHANGE_COMMIT_SHA:-master}"
+  install_repo "https://git.taler.net/exchange.git" \
+               "${EXCHANGE_COMMIT_SHA:-master}" \
+               "${EXCHANGE_CFLAGS}"
 fi
 
-if [ ! -z "${MERCHANT_COMMIT_SHA}" ]; then
+if [[ -n ${MERCHANT_COMMIT_SHA} ]]; then
   echo "INFO installing Taler Merchant"
-  install "https://git.taler.net/merchant.git" \
-          "${MERCHANT_COMMIT_SHA:-master}"
+  install_repo "https://git.taler.net/merchant.git" \
+               "${MERCHANT_COMMIT_SHA:-master}" \
+               "${MERCHANT_CFLAGS}"
 fi
 
-if [ ! -z "${WALLET_COMMIT_SHA}" ]; then
+if [[ -n ${WALLET_COMMIT_SHA} ]]; then
   echo "INFO installing Taler Wallet"
-  install "https://git.taler.net/wallet-core.git" \
-          "${WALLET_COMMIT_SHA:-master}"
+  install_repo "https://git.taler.net/wallet-core.git" \
+               "${WALLET_COMMIT_SHA:-master}" \
+               ""
 fi
-
-exit 0
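The gating used throughout `install.sh` above — a component is only rebuilt when its `*_COMMIT_SHA` variable is non-empty — condenses to the following sketch. The function name and messages are illustrative, not the script's actual output:

```bash
#!/bin/bash
# Sketch of the install.sh gating: rebuild a component only when its
# commit SHA variable is set to a non-empty value (from .env).
maybe_install() {
  local name="$1" sha="$2"
  if [[ -n ${sha} ]]; then
    echo "INFO installing ${name} at ${sha}"
  else
    echo "INFO skipping ${name}"
  fi
}
```

Leaving a variable empty in `.env` thus skips that rebuild entirely, which is why the template ships all `*_COMMIT_SHA` values as `""`.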
diff --git a/experiment/scripts/monitor.sh b/experiment/scripts/monitor.sh
index a1f6eea..7c62e75 100755
--- a/experiment/scripts/monitor.sh
+++ b/experiment/scripts/monitor.sh
@@ -41,7 +41,7 @@ function update_datasource() {
 # If GRAFANA_HOST or GRAFANA_API_KEY are empty this
 # step is skipped - requires admin level api key to update data sources
 function update_grafana() {
-  if [ -z "${GRAFANA_HOST}" ] || [ -z ${GRAFANA_API_KEY} ]; then
+  if [[ -z ${GRAFANA_HOST} || -z ${GRAFANA_API_KEY} ]]; then
     return
   fi
   AUTH_HEADER="Authorization: Bearer ${GRAFANA_API_KEY}"
diff --git a/experiment/scripts/ping.sh b/experiment/scripts/ping.sh
index 267f137..421df25 100755
--- a/experiment/scripts/ping.sh
+++ b/experiment/scripts/ping.sh
@@ -10,8 +10,9 @@ OPT_MSG="
 
 source ~/scripts/helpers.sh
 
-if [ -z "${1}" ]; then
+if [[ -z $1 ]]; then
   taler_perf_help $0 "$INFO_MSG" "$OPT_MSG"
+  exit 2
 fi
 
 RTT=$(\
diff --git a/experiment/scripts/postgres-cluster/README.md 
b/experiment/scripts/postgres-cluster/README.md
new file mode 100644
index 0000000..fcd2e0d
--- /dev/null
+++ b/experiment/scripts/postgres-cluster/README.md
@@ -0,0 +1,24 @@
+## Postgres Cluster Experiment
+
+**UNMAINTAINED** Used for some experiments hosting multiple instances of postgres on the same node. 
+                 They may still work but are not actively maintained.
+
+Run in the following order after espec was run (example for two instances):
+
+**NOTE** Requires as many exchange and proxy nodes as postgres instances created.
+         Example for 2 postgres instances: `DB`, `Exchange-1`, `Exchange-2`, `Proxy-1`, `Proxy-2`
+
+1. Modify `scripts/benchmark.sh` to randomly use only one exchange for each wallet, e.g. add this line
+   before taler-wallet-cli is called:
+   ```bash
+   EXCHANGE_GW_DOMAIN="exchange-$(shuf -i 1-2 -n 1).${DNS_ZONE}"
+   ```
+2. Run the ESpec
+3. Create a second postgres instance on the DB node: `./db-cluster.sh 1`
+4. Initialize one Exchange node as a primary exchange for this db (`Exchange-2`): `./exch-cluster.sh 1`
+5. Configure all proxies to be responsible for one exchange only:
+   `Proxy-1`: `./proxy-cluster.sh 2` - deletes all exchange-2 entries from the nginx config
+   `Proxy-2`: `./proxy-cluster.sh 1` - deletes all exchange-1 entries from the nginx config
+6. Start wallets  
+
+This can be done with as many instances as required.
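Step 1 above — letting each wallet pick one of N exchanges at random — generalizes to a small helper. The function name is an illustrative assumption; the README's inline one-liner hardcodes the range in `EXCHANGE_GW_DOMAIN`:

```bash
#!/bin/bash
# Sketch: pick one of N exchange gateway domains uniformly at random,
# as the benchmark modification above does with shuf.
pick_exchange() {
  local n="$1" zone="$2"
  echo "exchange-$(shuf -i 1-"${n}" -n 1).${zone}"
}
```

With N postgres instances and N exchanges, this spreads the wallet load evenly across the independent clusters.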
diff --git a/experiment/scripts/db-cluster.sh b/experiment/scripts/postgres-cluster/db-cluster.sh
similarity index 80%
rename from experiment/scripts/db-cluster.sh
rename to experiment/scripts/postgres-cluster/db-cluster.sh
index 51226b5..d1c655f 100755
--- a/experiment/scripts/db-cluster.sh
+++ b/experiment/scripts/postgres-cluster/db-cluster.sh
@@ -1,4 +1,13 @@
 #!/bin/bash
+# Create independent postgres instances (cluster)
+
+if [[ -z ${1} ]]; then
+  echo "Usage: ./db-cluster.sh <N>"
+  echo ""
+  echo "Creates an independent Postgres instance main<N> running on port 5432+N"
+  echo "Call after espec was run - only works with partitioning"
+  exit 2
+fi
 
 PORT=$((5432 + ${1}))
 
@@ -53,4 +62,6 @@ GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "${DB_USER}";
 EOF
 
 ssh -o StrictHostKeyChecking=no monitor.${DNS_ZONE} \
-       "sed -i \"s/DATA_SOURCE_NAME.*'$//\" /etc/default/prometheus-postgres-exporter && sed -i \"s|DATA_SOURCE_NAME.*|&,postgresql://postgres@db.${DNS_ZONE}:${PORT}'|\" /etc/default/prometheus-postgres-exporter && systemctl restart prometheus-postgres-exporter"
+       "sed -i \"s/DATA_SOURCE_NAME.*'$//\" /etc/default/prometheus-postgres-exporter &&
+        sed -i \"s|DATA_SOURCE_NAME.*|&,postgresql://postgres@db.${DNS_ZONE}:${PORT}'|\" /etc/default/prometheus-postgres-exporter &&
+        systemctl restart prometheus-postgres-exporter"
diff --git a/experiment/scripts/exch-cluster.sh b/experiment/scripts/postgres-cluster/exch-cluster.sh
similarity index 86%
rename from experiment/scripts/exch-cluster.sh
rename to experiment/scripts/postgres-cluster/exch-cluster.sh
index e941c2d..9500dc4 100755
--- a/experiment/scripts/exch-cluster.sh
+++ b/experiment/scripts/postgres-cluster/exch-cluster.sh
@@ -1,4 +1,13 @@
 #!/bin/bash
+# Run an exchange against an independent db created with db-cluster.sh
+
+if [[ -z ${1} ]]; then
+  echo "Usage: ./exch-cluster.sh <N>"
+  echo ""
+  echo "Creates an independent Exchange instance running against a db on port 5432+N"
+  echo "Call after espec was run - only works with partitioning"
+  exit 2
+fi
 
 source /root/scripts/helpers.sh
 
diff --git a/experiment/scripts/postgres-cluster/proxy-cluster.sh b/experiment/scripts/postgres-cluster/proxy-cluster.sh
new file mode 100755
index 0000000..a36adf5
--- /dev/null
+++ b/experiment/scripts/postgres-cluster/proxy-cluster.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+# Setup an exchange proxy to serve only a single exchange
+
+if [[ -z ${1} ]]; then
+  echo "Usage: ./proxy-cluster.sh <N1...Nn>"
+  echo ""
+  echo "Set up an exchange proxy to stop serving the specified exchange instances"
+  echo "(deletes the entries matching the arguments N1..Nn, so the proxy serves only the rest)"
+  echo "Call after espec was run"
+  echo "Deletes all exchange-N1..Nn entries in /etc/nginx/sites-enabled/exchange"
+  exit 2
+fi
+
+for i in "$@"; do
+  sed -i "/exchange-${i}.${DNS_ZONE}/d" /etc/nginx/sites-enabled/exchange
+done
+
+systemctl reload nginx
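The sed in the new proxy-cluster.sh drops server entries by exchange hostname. Its effect can be illustrated against a fabricated stand-in for the nginx config (the real file's contents may differ):

```shell
# Illustrate the per-exchange deletion performed by proxy-cluster.sh, using a
# fabricated stand-in for /etc/nginx/sites-enabled/exchange.
DNS_ZONE=${DNS_ZONE:-perf.taler}
cfg=$(mktemp)
cat > "$cfg" <<EOF
server exchange-1.${DNS_ZONE}:80;
server exchange-2.${DNS_ZONE}:80;
EOF

# Equivalent of './proxy-cluster.sh 2': remove all exchange-2 lines
sed -i "/exchange-2.${DNS_ZONE}/d" "$cfg"

cat "$cfg"   # now contains only the exchange-1 line
```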
diff --git a/experiment/scripts/proxy-cluster.sh b/experiment/scripts/proxy-cluster.sh
deleted file mode 100755
index 85f5ab2..0000000
--- a/experiment/scripts/proxy-cluster.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-
-for i in $@; do
-  sed -i "/exchange-${i}.${DNS_ZONE}/d" /etc/nginx/sites-enabled/exchange
-done
-
-systemctl reload nginx
diff --git a/experiment/scripts/run.sh b/experiment/scripts/run.sh
index 8ea1adb..8f1dd47 100644
--- a/experiment/scripts/run.sh
+++ b/experiment/scripts/run.sh
@@ -11,6 +11,7 @@ then
   systemctl restart prometheus-node-exporter
 fi
 
+# Set the experiment domain name
 set_ddn ${NODE_NAME}.${DNS_ZONE}
 set_host ${NODE_NAME}
 
@@ -29,6 +30,7 @@ elif [[ "${HOSTNAME}" =~ ${DB_HOSTS} ]]; then
     setup_log
     enable_logrotate
     if grep -q "Red Hat" /proc/version; then 
+      # Postgres is run differently on CentOS
       exec ~/scripts/database-centos.sh init
     else
       exec ~/scripts/database.sh init
diff --git a/experiment/scripts/setup.sh b/experiment/scripts/setup.sh
index e63f5e6..6ceeeb4 100644
--- a/experiment/scripts/setup.sh
+++ b/experiment/scripts/setup.sh
@@ -2,8 +2,7 @@
 # Setup nodes for the experiment
 # This script does the following:
 
-# 0. Cleanup data from previous experiments 
-#    (prometheus and loki only when said so in .env)
+# 0. Stop previous experiments 
 # 1. Parse the experiment-info.json from jFed to get
 #    - The user which runs the experiment (used for NFS) - env: G5K_USER
#    - Which Grid5k node is which node in jFed (used to run the correct script later)
@@ -13,7 +12,7 @@
 #    or /tmp/exp-logs
 # 3. Export all environment to ~/.env and /etc/environment
# 4. Update the g5k repo from taler.net and copy the configurations (g5k-repo/configs) to /
-# 5. Configure and start the DNS server on the DNS node
+# 5. Configure the DNS and start the DNS server on the DNS node
 
 
 # Set the current user
@@ -186,12 +185,14 @@ function setup_dns() {
   fi
 }
 
+# Stop and 'unconfigure' all important services
+# to start on an 'empty' playground
 function clean_previous_setup() {
-  # Stop all important services
+  # (not all services are present on every node; '|| true' ignores the resulting errors)
   systemctl stop taler-exchange-* \
                 taler-wallet* \
                 prometheus* \
-                 postgresql \
+                 postgresql* \
                 promtail \
                 loki \
                 || true
@@ -204,12 +205,19 @@ function clean_previous_setup() {
 }
 
 clean_previous_setup
+# Check if binaries need to be rebuilt on Debian-based operating systems
+if ! grep -q "Red Hat" /proc/version; then
+  source ~/scripts/install.sh &
+fi
 parse_experiment_nodes
 setup_log_dir
 setup_environment
 setup_config
 setup_dns
 
-if ! grep -q "Red Hat" /proc/version; then 
+wait
+
+if ! grep -q "Red Hat" /proc/version; then
+  # Only works on Debian-based operating systems
   exec ~/scripts/createusers.sh
 fi
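The setup.sh change above starts the slow rebuild in the background and joins it with `wait` before the final Debian-only step. The underlying pattern, in isolation (function names here are illustrative, not the repository's):

```shell
# Background-and-join pattern as used in setup.sh: start the slow rebuild,
# do independent setup work concurrently, then block until the rebuild is done.
slow_rebuild() { sleep 1; echo "rebuild done"; }
independent_setup() { echo "setup done"; }

slow_rebuild &
rebuild_pid=$!

independent_setup      # parse nodes, export env, copy configs, ...

wait "$rebuild_pid"    # must finish before the final (Debian-only) step
```

Using `wait` without arguments, as setup.sh does, joins all background jobs instead of one specific pid.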
diff --git a/image/README.md b/image/README.md
index 22a6737..2799de6 100644
--- a/image/README.md
+++ b/image/README.md
@@ -1,9 +1,9 @@
-# Grid5000 Environment creation
+# Grid'5000 Environment creation
 
 Official documentation can be found on these links:
 
-* [Grid5000 Environment](https://www.grid5000.fr/w/Environments_creation_using_Kameleon_and_Puppet)
-* [Grid5000 Kadeploy](https://www.grid5000.fr/w/Advanced_Kadeploy)
+* [Grid'5000 Environment](https://www.grid5000.fr/w/Environments_creation_using_Kameleon_and_Puppet)
+* [Grid'5000 Kadeploy](https://www.grid5000.fr/w/Advanced_Kadeploy)
 * [Kameleon Documentation](http://kameleon.imag.fr/grid5000_tutorial.html)
 
 ## Images
@@ -11,18 +11,20 @@ Official documentation can be found on these links:
There are two variants, debian11 and centos8; please change into the respective directory
before running any commands, and replace debian11 with the corresponding name.
 
-**NOTE** Centos8 does not have nfs enabled (yet) and can only be used to run Postgresql
+**NOTE** Centos8 does not have nfs enabled and can currently only be used to run Postgresql.
 
 ## Manual Build
 
-Replace `<G5K_USER>` with your Grid5000 username.
+Replace `<G5K_USER>` with your Grid'5000 username.
This variable is required; if it is not specified, the build will fail.
 
 ```bash
 kameleon build -g g5k_user:<G5K_USER> taler-debian11
 ```
 
-**NOTE** Make sure that all dependencies listed in `Grid5000 Environment` are installed
+**NOTE** Make sure that all dependencies listed in
+[Grid'5000 Environment](https://www.grid5000.fr/w/Environments_creation_using_Kameleon_and_Puppet)
+are installed.
 
 ### Additional Variables
 
@@ -33,15 +35,19 @@ which should be built with the following variables (default `master`):
 
 `gnunet_commit_sha`, `exchange_commit_sha`, `merchant_commit_sha`, 
`wallet_commit_sha` and `grid5k_commit_sha`
 
+(All except `grid5k_commit_sha` are for the debian11 environment only)
+
 #### Build Flags
 
For each package built from source there are CFLAG variables which can be passed to the image build:
 
`libmicrohttpd_cflags`, `gnunet_cflags`, `exchange_cflags` and `merchant_cflags`
 
+(debian11 only)
+
 #### Usage
 
-To override them you must add them to the `-g` option of `kameleon build`:
+To override them you must pass them with the `-g` option of `kameleon build`:
 
 ```bash
kameleon build -g g5k_user:<G5K_USER> gnunet_commit_sha:master libmicrohttpd_cflags:"-O0 -g" taler-debian11
@@ -51,20 +57,20 @@ For more information please run `kameleon build --help`
 
 ### Deploy
 
-Copy the image to a Grid5000 site:
+Copy the image to a Grid'5000 site:
 
 ```bash
 cd build/taler-debian11
 scp taler-debian11.* <G5K_USER>@access.grid5000.fr:<G5K_SITE>/public/
 ```
 
-**NOTE** `G5K_USER` and `G5K_SITE` should match the ones in taler-debian11.dsc
+**NOTE** `G5K_USER` and `G5K_SITE` must match the ones in taler-debian11.dsc.
 `G5K_SITE` defaults to `lyon`.
 
 ## Usage
 
Place `http://public.lyon.grid5000.fr/~<G5K_USER>/taler-debian11.dsc` in the node's disk image field
-in jFed.
+in jFed, or set it directly in the `rspec` files.
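If jFed is not used, the same `.dsc` can also be deployed with Kadeploy from a Grid'5000 frontend. A hedged sketch that only prints the command: the flags follow the Advanced_Kadeploy page linked above, and the username and node file are placeholders.

```shell
# Build the kadeploy3 invocation for deploying the image from a Grid'5000
# frontend inside an OAR deploy job; printed only, run it on a real frontend.
G5K_USER=${G5K_USER:-jdoe}                 # placeholder username
NODE_FILE=${OAR_NODE_FILE:-./nodes.txt}    # provided by OAR inside a job
cmd="kadeploy3 -f $NODE_FILE -a http://public.lyon.grid5000.fr/~${G5K_USER}/taler-debian11.dsc -k"
echo "$cmd"
```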
 
 ## Automated Build
 
diff --git a/image/centos8/taler-centos8.yaml b/image/centos8/taler-centos8.yaml
index 2cc0af9..6099991 100644
--- a/image/centos8/taler-centos8.yaml
+++ b/image/centos8/taler-centos8.yaml
@@ -31,14 +31,14 @@ global:
   ## Environment postinstall path, compression, and script command
   # g5k_postinst_path: server:///grid5000/postinstalls/g5k-postinstall.tgz
   # g5k_postinst_compression: "gzip"
-  # g5k_postinst_script: g5k-postinstall --net debian
+  # g5k_postinst_script: g5k-postinstall --net redhat
   ## Environment kernel path and params
   # g5k_kernel_path: "/vmlinuz"
   # g5k_initrd_path: "/initrd.img"
   # g5k_kernel_params: ""
   ## Environment visibility
   # g5k_visibility: "shared"
-  taler_packages: "postgresql13 postgresql-contrib postgresql13-contrib curl wget jq bc sudo git bind-utils bind net-tools netcat parallel dnsmasq bash-completion pgstats_13 vim"
+  taler_packages: "postgresql13 postgresql-contrib postgresql13-contrib pgstats_13 curl wget jq bc sudo git bind-utils bind net-tools netcat parallel dnsmasq bash-completion vim"
   taler_node_exporter_version: "1.3.1"
   taler_path: /root/taler
   grid5k_commit_sha: master
@@ -64,6 +64,7 @@ setup:
          dnf -qy module disable postgresql
          yum update -y
          yum upgrade -y
+
          yum install -y $${taler_packages} 
 
          mkdir -p $${taler_path}
diff --git a/notes.txt b/notes.txt
deleted file mode 100644
index e69de29..0000000
