Commit 89523023 authored by Pierre-Alain Loizeau

[MQ] add slurm and sh scripts for 2022 mFLES online + install them

parent eea1bee7
Merge request !882: Changes from mCBM 2022 prod to mCBM MQ devices and execution
Showing 2883 additions and 0 deletions
@@ -13,6 +13,13 @@ Install(FILES ${_resolvedRichFile}
DESTINATION share/cbmroot/macro/beamtime/mcbm2022
)
# SLURM scripts, bash scripts
Install(DIRECTORY online
DESTINATION share/cbmroot/macro/beamtime/mcbm2022
FILES_MATCHING PATTERN "*.sbatch"
PATTERN "*.sh"
)
# Just the empty folder for output
Install(DIRECTORY data
DESTINATION share/cbmroot/macro/beamtime/mcbm2022
slurm-*.out
# Submit scripts starting the slurm jobs
4 scripts are provided, 3 to start topologies and 1 to stop topologies:
- start_topology.sh
- start_topology_array.sh
- start_topology_servers.sh
- stop_topology.sh
All of these scripts assume that 4 processing nodes are available in SLURM, named `en[13-16]`.
Each processing node is connected to a tsclient publisher serving half of the timeslices of one TS builder node. This
results in full processing with equal sharing in the case of 2 builder nodes, and in processing of 2/3 of the
timeslices with equal sharing in the case of 3 builder nodes.
For the case with 4 TS builder nodes, new scripts should be developed, either connecting one processing node to each
builder or running the processing topology directly on the builder nodes (paying close attention to memory
consumption).
## start_topology.sh
This starts a job based on `mq_processing_node.sbatch` on each of the 4 processing nodes.
It expects 4 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
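As an illustration (the run id and option values below are hypothetical and must match the current beamtime
conditions):
```bash
# Hypothetical example: process run 2365 with 10 branches per node,
# trigger set 7, writing to the /local/mcbm2022 disk-folder pair.
./start_topology.sh 2365 10 7 0
```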
Each process in the topology is started in the background, so to avoid the full job being killed when it reaches the
end of the startup phase, an infinite loop is started which exits only when the number of processes in the current
session drops below a predefined threshold.
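A minimal sketch of this keep-alive pattern, assuming a device started in the background and the threshold of `6`
processes used by the sbatch scripts shown further below:
```bash
# Sketch only: start a (hypothetical) device in the background,
# then keep the job alive until the session has fewer than 6 processes.
some_device --options &> device.log &

STILL_RUNNING=`ps | wc -l`
while [ 6 -lt $STILL_RUNNING ]; do
  sleep 5
  STILL_RUNNING=`ps | wc -l`
done
```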
## start_topology_array.sh
This starts a job based on `mq_processing_node_array.sbatch` on each of the 4 processing nodes.
It expects 4 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
The difference with the previous script is that these sbatch jobs try to make use of the array functionality of SLURM
to start the topology processes, instead of starting them in the background.
This would have simplified process management, as each process would then appear as a sub-job in the SLURM interface.
However, this cannot be used on the mFLES for the time being, as the SLURM server there does not have resource
allocation and management enabled.
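For reference, on a SLURM installation with resource management enabled, such a topology could be submitted as a single
array job, each sub-job deriving its role from `$SLURM_ARRAY_TASK_ID` (the values below are illustrative, giving
`4 + 2 x 10` sub-jobs for 10 branches):
```bash
# Illustrative only: 4 service sub-jobs plus 2 sub-jobs (unpacker + builder)
# per branch; this currently fails on the mFLES (no resource management).
sbatch --array=1-24 mq_processing_node_array.sbatch 2365 10 7 0 node8ib2:5561
```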
## start_topology_servers.sh
In addition to the 4 processing nodes, this script assumes that we have a 5th node available for running the common
parts of the topologies:
- parameter server
- histogram server
It then starts the main processes of the topology on the processing nodes with one job for each level (source, sink,
unpackers, event builders), making use of the `oversubscribe` sbatch option, which allows up to 4 jobs to run per node
without checking the available resources.
It also expects 4 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
Internally, two more parameters are passed to each of the SLURM jobs in order to share with them the address on which
the common services are available.
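As a sketch of how this could look (node names and run settings are hypothetical, with `en17` standing in for the 5th
node):
```bash
# Hypothetical submission: common services on en17, one level on en13,
# the server host passed twice (histogram server, then parameter server).
sbatch -w en17 mq_histoserv.sbatch 2365 10 7 0 node8ib2:5561
sbatch -w en17 mq_parserv.sbatch 2365 10 7 0 node8ib2:5561
sbatch -w en13 mq_unpackers.sbatch 2365 10 7 0 node8ib2:5561 en17 en17
```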
## stop_topology.sh
This starts a job based on `mq_shutdown.sbatch` on each of the 4 processing nodes.
This *sbatch* script sends a SIGINT signal to the processes started by a topology in the following order, thus trying
to achieve a clean shutdown:
1. RepReqTsSampler
1. Unpackers
1. Event Builders
1. Event Sink
1. Histogram server (if any)
1. Parameter server (if any)
In each case, it waits until all processes matching the expected name for a given level are gone before sending the
next signal.
It expects a single parameter:
- the `<Run Id>`, as reported by flesctl
This script is meant to be run after the `start_topology.sh` one.
It will also work in the case of `start_topology_array.sh` and `start_topology_servers.sh`, but in these cases using
the `scancel` command of SLURM is a cleaner solution.
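For example, leftover jobs can be cancelled by their SLURM job name (as defined in the `#SBATCH -J` lines of the
scripts below):
```bash
# Cancel all jobs with a given name, e.g. the array variant of the topology
scancel --name=McbmOnline
```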
# SBATCH scripts
In total, 10 SBATCH scripts are used for these various methods of starting the same topology:
- create_log_folder.sbatch
- mq_processing_node.sbatch
- mq_processing_node_array.sbatch
- mq_shutdown.sbatch
- mq_parserv.sbatch
- mq_histoserv.sbatch
- mq_source.sbatch
- mq_sink.sbatch
- mq_unpackers.sbatch
- mq_builders.sbatch
For all parameter server devices, the set of parameter files and setup files is picked based on the provided `<Run Id>`
(see lists of parameters).
## create_log_folder.sbatch
This script is used to prepare all necessary log folders on the `/local` disk, in case this was not already done,
irrespective of the target disk selected for the data, and before starting the topology itself.
It is used in all of the startup scripts.
It expects a single parameter:
- the `<Run Id>`, as reported by flesctl
## mq_processing_node.sbatch
This is the only topology script in the `start_topology.sh` case.
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
In addition, 2 optional parameters can be provided:
- the `<histogram server host>`, which allows using a common server
- the `<parameter server host>` (only if the histogram server host is provided), which allows using a common server
These two servers are started by this script only if their hosts are not overridden by these user parameters.
The script will start the following processes in the background in this order:
- histogram server device, if no hostname provided
- source device
- parameter server device, if no hostname provided
- sink device
- `N` pairs of unpacker and event builder devices
It then enters an infinite loop which exits only when the number of processes in the attached session drops below `6`.
The check is done every `5 seconds`; to monitor it, the list of processes and their total count are written to a file
in the log folder called `still_running.txt`.
## mq_processing_node_array.sbatch
This is the only topology script in the `start_topology_array.sh` case.
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
Depending on the sub-job index provided by SLURM, it will start either the histogram server (index 1), the sampler
(index 2), the parameter server (index 3), the event sink (index 4), or one of the unpacker and event builder processes
of the parallel branches (indices 5 and above).
## mq_shutdown.sbatch
It does not expect any parameters.
This script will follow this sequence:
1. Send SIGINT to all processes named `RepReqTsSampler` (Source device)
1. Wait until all such processes are gone (check every `1 second`)
1. Send SIGINT to all processes named `MqUnpack` (Unpackers)
1. Wait until all such processes are gone (check every `1 second`)
1. Send SIGINT to all processes named `BuildDig` (Event Builders)
1. Wait until all such processes are gone (check every `1 second`)
1. Send SIGINT to all processes named `DigiEventSink` (Event Sink)
1. Wait until all such processes are gone (check every `1 second`)
1. Send SIGINT to all processes named `MqHistoServer`, if any (Histogram server)
1. Wait until all such processes are gone (check every `1 second`)
1. Send SIGINT to all processes named `parmq-server`, if any (Parameter server)
1. Wait until all such processes are gone (check every `1 second`)
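Each of these steps boils down to the same send-and-wait pattern, sketched here for the sampler (the `[R]` in the grep
pattern keeps the `grep` process itself out of the count):
```bash
# Sketch of one shutdown step: signal the device by name, then poll until gone
pkill -SIGINT RepReqTsSampler
while [ 0 -lt `ps | grep "[R]epReqTsSampler" | wc -l` ]; do
  sleep 1
done
```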
## mq_histoserv.sbatch
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
The full list of parameters is kept only for compatibility reasons; in the end, only the `<Run Id>` is used to select
the right log folder and the `<Trigger set>` to name the log file.
The process could be started in the foreground, thereby blocking the SLURM job until it returns.
But in order to stay as close as possible to the "all-in-one" version, it is started in the background, followed by an
infinite check loop identical to the one used there.
## mq_parserv.sbatch
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
The full list of parameters is kept only for compatibility reasons; in the end, only the `<Run Id>` is used in this
script to select the parameter files and setup and the right log folder, and the `<Trigger set>` to name the log file.
The process could be started in the foreground, thereby blocking the SLURM job until it returns.
But in order to stay as close as possible to the "all-in-one" version, it is started in the background, followed by an
infinite check loop identical to the one used there.
## mq_source.sbatch
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
In addition, 2 optional parameters can be provided:
- the `<histogram server host>`, which allows using a common server
- the `<parameter server host>` (only if the histogram server host is provided), which allows using a common server
If they are not provided, the script expects both servers to be running on the localhost interface at `127.0.0.1`.
The parameters for `<Number of branches>` and `<Disk index>` are not used in this script.
The process could be started in the foreground, thereby blocking the SLURM job until it returns.
But in order to stay as close as possible to the "all-in-one" version, it is started in the background, followed by an
infinite check loop identical to the one used there.
## mq_sink.sbatch
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
In addition, 2 optional parameters can be provided:
- the `<histogram server host>`, which allows using a common server
- the `<parameter server host>` (only if the histogram server host is provided), which allows using a common server
If they are not provided, the script expects both servers to be running on the localhost interface at `127.0.0.1`.
The parameter for `<Nb branches>` is used to set the limit for the size of the ZMQ buffer of processed timeslices (1 per
branch).
The parameter for `<source hostname>` is not used in this script.
The process could be started in the foreground, thereby blocking the SLURM job until it returns.
But in order to stay as close as possible to the "all-in-one" version, it is started in the background, followed by an
infinite check loop identical to the one used there.
## mq_unpackers.sbatch
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
In addition, 2 optional parameters can be provided:
- the `<histogram server host>`, which allows using a common server
- the `<parameter server host>` (only if the histogram server host is provided), which allows using a common server
If they are not provided, the script expects both servers to be running on the localhost interface at `127.0.0.1`.
The parameters for `<source hostname>` and `<Disk index>` are not used in this script.
The limit for the size of the ZMQ buffer of processed timeslices at the output of each branch is set to 2.
The processes cannot be started in the foreground, as multiple ones need to be created (1 per branch).
So in order to stay as close as possible to the "all-in-one" version, they are started in the background, followed by
an infinite check loop identical to the one used there.
## mq_builders.sbatch
It expects 5 parameters in the following order:
- the `<Run Id>`, as reported by flesctl
- the `<Number of branches to be started per node>`, leading to a total parallel capability of `4 x n` timeslices
- the `<Trigger set>` in the range `[0-14]`, with `[0-6]` corresponding to the trigger settings tested by N. Herrmann
and `[7-14]` those used for development by P.-A. Loizeau
- the `<Disk index>` in the range `[0-8]`, with `0` indicating the `/local/mcbm2022` disk-folder pair and `[1-8]`
indicating the `/storage/<n>/` disks.
- the `<TS source full hostname>`, which should be a `hostname:port` combination. In order to avoid overloading the
  standard network, it is critical here to target an `ibX` interface, e.g. `node8ib2:5561`.
In addition, 2 optional parameters can be provided:
- the `<histogram server host>`, which allows using a common server
- the `<parameter server host>` (only if the histogram server host is provided), which allows using a common server
If they are not provided, the script expects both servers to be running on the localhost interface at `127.0.0.1`.
The parameters for `<source hostname>` and `<Disk index>` are not used in this script.
The limit for the size of the ZMQ buffer of processed timeslices at both the input and the output of each branch is set
to 2.
The processes cannot be started in the foreground, as multiple ones need to be created (1 per branch).
So in order to stay as close as possible to the "all-in-one" version, they are started in the background, followed by
an infinite check loop identical to the one used there.
# Known problems
1. Some memory leak in the Sink leads to a final memory usage of `~12 GB` even after the events of all TS are pushed to
disk
1. Something fishy is happening with the ZMQ buffering: even without re-ordering and missing-TS insertion, the memory
   usage of the sink increases up to `180 GB`, which is far more than expected with the HWM of 2 messages at the input
1. The plots generated by the sink for the buffer monitoring and processed TS/Event counting have messed up scales
#!/bin/bash
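# Create the log folder for the given <Run Id> ($1), if not already present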
mkdir -p /local/mcbm2022/online_logs/$1
#!/bin/bash
#SBATCH -J McbmEvts
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
_histServHost="127.0.0.1"
_parServHost="127.0.0.1"
if [ $# -ge 5 ]; then
_run_id=$1
_nbbranch=$2
_TriggSet=$3
_Disk=$4
_hostname=$5
if [ $# -ge 6 ]; then
_histServHost=$6
if [ $# -eq 7 ]; then
_parServHost=$7
fi
fi
else
echo 'Missing parameters. Only following pattern allowed:'
echo 'mq_builders.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port>'
echo 'mq_builders.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host>'
echo 'mq_builders.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host> <par. serv host>'
exit 1
fi
# Prepare log folder variables
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
_localhost=`hostname`
echo $SLURM_ARRAY_TASK_ID ${_localhost} ${_run_id} ${_nbbranch} ${_TriggSet} ${_hostname}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
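# Clean up any leftover FairMQ shared-memory segments from a previous run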
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
# Only one Processing branch is monitoring, and the full topology gets 2.5 TS/s, so with 10 branches pub may be ~10s
_pubfreqts=3
_pubminsec=1.0
_pubmaxsec=10.0
########################################################################################################################
# Apply sets of settings for different triggers
_TriggerMinNumberBmon=0
_TriggerMinNumberSts=0
_TriggerMinNumberTrd1d=0
_TriggerMinNumberTrd2d=0
_TriggerMinNumberTof=4
_TriggerMinNumberRich=0
_TriggerMaxNumberBMon=-1
_TriggerMaxNumberSts=-1
_TriggerMaxNumberTrd1d=-1
_TriggerMaxNumberTrd2d=-1
_TriggerMaxNumberTof=-1
_TriggerMaxNumberRich=-1
_TriggerMinLayersNumberTof=0
_TriggerMinLayersNumberSts=0
_TrigWinMinBMon=-10
_TrigWinMaxBMon=10
_TrigWinMinSts=-40
_TrigWinMaxSts=40
_TrigWinMinTrd1d=-50
_TrigWinMaxTrd1d=400
_TrigWinMinTrd2d=-60
_TrigWinMaxTrd2d=350
_TrigWinMinTof=-10
_TrigWinMaxTof=70
_TrigWinMinRich=-10
_TrigWinMaxRich=40
bTrigSet=true;
case ${_TriggSet} in
0)
# NH: default, any Tof hit
_TriggerMaxNumberBMon=1000
_TriggerMinNumberTof=1
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-80
_TrigWinMaxTof=120
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
1)
# NH: default, Tof - T0 concidences (pulser)
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=1000
_TriggerMinNumberTof=2
_TriggerMinLayersNumberTof=1
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-180
_TrigWinMaxTof=220
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
2)
# NH: Tof standalone track trigger (cosmic)
_TriggerMaxNumberBMon=1000
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-30
_TrigWinMaxTof=70
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
3)
# NH: Tof track trigger with T0
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=2
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
4)
# NH: mCbm track trigger Tof, T0 & STS
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=2
_TriggerMinNumberSts=2
_TriggerMinLayersNumberSts=1
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
5)
# NH: mCbm lambda trigger
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=2
_TriggerMinNumberSts=8
_TriggerMinLayersNumberSts=2
_TriggerMinNumberTof=16
_TriggerMinLayersNumberTof=8
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
6)
# NH: One hit per detector system w/ big acceptance=mCbm full track trigger
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=1;
_TriggerMinNumberSts=4
_TriggerMinLayersNumberSts=0
_TriggerMinNumberTrd1d=2
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
7)
# PAL default: T0 + STS + TOF, only digi cut
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=2
_TriggerMinNumberTof=4
;;
8)
# PAL: default, Tof - T0 concidences (pulser)
_TriggerMinNumberBmon=4
_TriggerMinNumberTof=2
_TriggerMinLayersNumberTof=1
;;
9)
# PAL: Tof standalone track trigger (cosmic)
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
;;
10)
# PAL: Tof track trigger with T0
_TriggerMinNumberBmon=1
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
;;
11)
# PAL: mCbm track trigger Tof, T0 & STS
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=2
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
;;
12)
# PAL: mCbm lambda trigger
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=8
_TriggerMinNumberTof=16
_TriggerMinLayersNumberTof=8
;;
13)
# PAL: One hit per detector system w/ big acceptance=mCbm full track trigger
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=4
_TriggerMinNumberTrd1d=2
_TriggerMinNumberTrd2d=1
_TriggerMinNumberTof=8
_TriggerMinNumberRich=1
;;
14)
# PAL: mCbm track trigger Tof, T0 & STS
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=4
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TriggerMinLayersNumberSts=2
;;
*)
bTrigSet=false;
;;
esac
echo Using MQ trigger par set: ${_TriggSet}
########################################################################################################################
_ratelog=0 # hides ZMQ messages rates and bandwidth
#_ratelog=1 # display ZMQ messages rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_${_localhost}_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
############################
# Processing branches #
############################
_iBranch=0
while (( _iBranch < _nbbranch )); do
(( _iPort = 11680 + _iBranch ))
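# One local TCP port per branch, shared with the matching unpacker's unpts channel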
##########################
# Event Builder #
##########################
EVTBUILDER_LOG="${_log_folder}build${_iBranch}_${LOGFILETAG}"
EVTBUILDER="BuildDigiEvents"
EVTBUILDER+=" --control static"
EVTBUILDER+=" --id build$_iBranch"
EVTBUILDER+=" --severity info"
# EVTBUILDER+=" --severity debug"
EVTBUILDER+=" --PubFreqTs $_pubfreqts"
EVTBUILDER+=" --PubTimeMin $_pubminsec"
EVTBUILDER+=" --PubTimeMax $_pubmaxsec"
if [ ${_iBranch} -eq 0 ]; then
EVTBUILDER+=" --FillHistos true"
else
EVTBUILDER+=" --FillHistos false"
fi
EVTBUILDER+=" --IgnTsOver false"
EVTBUILDER+=" --EvtOverMode AllowOverlap"
EVTBUILDER+=" --RefDet kT0"
EVTBUILDER+=" --DelDet kMuch"
EVTBUILDER+=" --DelDet kPsd"
EVTBUILDER+=" --SetTrigWin kT0,${_TrigWinMinBMon},${_TrigWinMaxBMon}"
EVTBUILDER+=" --SetTrigWin kSts,${_TrigWinMinSts},${_TrigWinMaxSts}"
EVTBUILDER+=" --SetTrigWin kTrd,${_TrigWinMinTrd1d},${_TrigWinMaxTrd1d}"
EVTBUILDER+=" --SetTrigWin kTrd2D,${_TrigWinMinTrd2d},${_TrigWinMaxTrd2d}"
EVTBUILDER+=" --SetTrigWin kTof,${_TrigWinMinTof},${_TrigWinMaxTof}"
EVTBUILDER+=" --SetTrigWin kRich,${_TrigWinMinRich},${_TrigWinMaxRich}"
EVTBUILDER+=" --SetTrigMinNb kT0,${_TriggerMinNumberBmon}"
EVTBUILDER+=" --SetTrigMinNb kSts,${_TriggerMinNumberSts}"
EVTBUILDER+=" --SetTrigMinNb kTrd,${_TriggerMinNumberTrd1d}"
EVTBUILDER+=" --SetTrigMinNb kTrd2D,${_TriggerMinNumberTrd2d}"
EVTBUILDER+=" --SetTrigMinNb kTof,${_TriggerMinNumberTof}"
EVTBUILDER+=" --SetTrigMinNb kRich,${_TriggerMinNumberRich}"
EVTBUILDER+=" --SetTrigMaxNb kT0,${_TriggerMaxNumberBMon}"
EVTBUILDER+=" --SetTrigMaxNb kSts,${_TriggerMaxNumberSts}"
EVTBUILDER+=" --SetTrigMaxNb kTrd,${_TriggerMaxNumberTrd1d}"
EVTBUILDER+=" --SetTrigMaxNb kTrd2D,${_TriggerMaxNumberTrd2d}"
EVTBUILDER+=" --SetTrigMaxNb kTof,${_TriggerMaxNumberTof}"
EVTBUILDER+=" --SetTrigMaxNb kRich,${_TriggerMaxNumberRich}"
EVTBUILDER+=" --SetTrigMinLayersNb kTof,${_TriggerMinLayersNumberTof}"
EVTBUILDER+=" --SetTrigMinLayersNb kSts,${_TriggerMinLayersNumberSts}"
EVTBUILDER+=" --TsNameIn unpts$_iBranch"
EVTBUILDER+=" --EvtNameOut events"
# EVTBUILDER+=" --DoNotSend true"
EVTBUILDER+=" --channel-config name=unpts$_iBranch,type=pull,method=connect,transport=zeromq,rcvBufSize=2,address=tcp://127.0.0.1:$_iPort,rateLogging=$_ratelog"
EVTBUILDER+=" --channel-config name=events,type=push,method=connect,transport=zeromq,sndBufSize=2,address=tcp://127.0.0.1:11556,rateLogging=$_ratelog"
# EVTBUILDER+=" --channel-config name=commands,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11007"
EVTBUILDER+=" --channel-config name=parameters,type=req,method=connect,transport=zeromq,address=tcp://${_parServHost}:11005,rateLogging=0"
EVTBUILDER+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://${_histServHost}:11666,rateLogging=$_ratelog"
EVTBUILDER+=" --transport zeromq"
echo ${_BuildDir}/bin/MQ/mcbm/$EVTBUILDER &> $EVTBUILDER_LOG
${_BuildDir}/bin/MQ/mcbm/$EVTBUILDER &>> $EVTBUILDER_LOG &
(( _iBranch += 1 ))
done
STILL_RUNNING=`ps | wc -l`
STILL_RUNNING_OUT="${STILL_RUNNING}\n"
STILL_RUNNING_OUT+=`ps`
echo -e `date` "${STILL_RUNNING_OUT}" > ${_log_folder}/still_running_evtbuilders.txt
while [ 6 -lt $STILL_RUNNING ]; do
sleep 5
# ps
# echo `ps | wc -l`
STILL_RUNNING=`ps | wc -l`
STILL_RUNNING_OUT="${STILL_RUNNING}\n"
STILL_RUNNING_OUT+=`ps`
echo -e `date` "${STILL_RUNNING_OUT}" > ${_log_folder}/still_running_evtbuilders.txt
done
#!/bin/bash
#SBATCH -J McbmHist
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
if [ $# -eq 5 ]; then
_run_id=$1
_nbbranch=$2
_TriggSet=$3
_Disk=$4
_hostname=$5
else
echo 'Missing parameters. Only following pattern allowed:'
echo 'mq_histoserv.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port>'
exit 1
fi
# Prepare log folder variable
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
echo $SLURM_ARRAY_TASK_ID ${_run_id} ${_nbbranch} ${_TriggSet} ${_hostname}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
########################################################################################################################
_ratelog=0 # hides ZMQ messages rates and bandwidth
#_ratelog=1 # display ZMQ messages rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_"
LOGFILETAG+=`hostname`
LOGFILETAG+="_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
############################
# Histogram server #
############################
HISTSRV_LOG="${_log_folder}server1_${LOGFILETAG}"
HISTSERVER="MqHistoServer"
HISTSERVER+=" --control static"
HISTSERVER+=" --id server1"
HISTSERVER+=" --severity info"
HISTSERVER+=" --histport 8080"
HISTSERVER+=" --channel-config name=histogram-in,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11666,rateLogging=$_ratelog"
HISTSERVER+=" --channel-config name=histo-conf,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11667,rateLogging=0"
HISTSERVER+=" --channel-config name=canvas-conf,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11668,rateLogging=0"
echo ${_BuildDir}/bin/MQ/histogramServer/$HISTSERVER &> $HISTSRV_LOG
${_BuildDir}/bin/MQ/histogramServer/$HISTSERVER &>> $HISTSRV_LOG &
STILL_RUNNING=`ps | wc -l`
STILL_RUNNING_OUT="${STILL_RUNNING}\n"
STILL_RUNNING_OUT+=`ps`
echo -e `date` "${STILL_RUNNING_OUT}" > ${_log_folder}/still_running_histoserv.txt
while [ 6 -lt $STILL_RUNNING ]; do
sleep 5
# ps
# echo `ps | wc -l`
STILL_RUNNING=`ps | wc -l`
STILL_RUNNING_OUT="${STILL_RUNNING}\n"
STILL_RUNNING_OUT+=`ps`
echo -e `date` "${STILL_RUNNING_OUT}" > ${_log_folder}/still_running_histoserv.txt
done
#!/bin/bash
#SBATCH -J McbmPars
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
if [ $# -eq 5 ]; then
_run_id=$1
_nbbranch=$2
_TriggSet=$3
_Disk=$4
_hostname=$5
else
echo 'Missing parameters. Only following pattern allowed:'
echo 'mq_parserv.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port>'
exit 1
fi
# Prepare log folder variable
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
echo $SLURM_ARRAY_TASK_ID ${_run_id} ${_nbbranch} ${_TriggSet} ${_hostname}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
########################################################################################################################
# Setup file and parameter files for parameter server
_setup_name=mcbm_beam_2022_03_22_iron
_parfileBmon=$VMCWORKDIR/macro/beamtime/mcbm2022/mBmonCriPar.par
_parfileSts=$VMCWORKDIR/macro/beamtime/mcbm2022/mStsPar.par
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriPar.par
_parfileRich=$VMCWORKDIR/macro/beamtime/mcbm2021/mRichPar_70.par
# Parameter files => Update depending on run ID!!!
if [ $_run_id -ge 2060 ]; then
if [ $_run_id -le 2065 ]; then
_setup_name=mcbm_beam_2022_03_09_carbon
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParCarbon.par
elif [ $_run_id -le 2160 ]; then # Potentially wrong setup between 2065 and 2150 but not official runs
_setup_name=mcbm_beam_2022_03_22_iron
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParIron.par
elif [ $_run_id -le 2310 ]; then # Potentially wrong setup between 2160 and 2176 but not official runs
_setup_name=mcbm_beam_2022_03_28_uranium
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
elif [ $_run_id -ge 2350 ]; then
_setup_name=mcbm_beam_2022_05_23_nickel
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
fi
fi
########################################################################################################################
_ratelog=0 # hides ZMQ messages rates and bandwidth
#_ratelog=1 # display ZMQ messages rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_"
LOGFILETAG+=`hostname`
LOGFILETAG+="_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
############################
# Parameter server #
############################
PARAMSRV_LOG="${_log_folder}parmq_${LOGFILETAG}"
PARAMETERSERVER="parmq-server"
PARAMETERSERVER+=" --control static"
PARAMETERSERVER+=" --id parmq-server"
PARAMETERSERVER+=" --severity info"
PARAMETERSERVER+=" --channel-name parameters"
PARAMETERSERVER+=" --channel-config name=parameters,type=rep,method=bind,transport=zeromq,address=tcp://127.0.0.1:11005,rateLogging=0"
PARAMETERSERVER+=" --first-input-name $_parfileSts;$_parfileTrdAsic;$_parfileTrdDigi;$_parfileTrdGas;$_parfileTrdGain;$_parfileTof;$_parfileBmon;$_parfileRich"
PARAMETERSERVER+=" --first-input-type ASCII"
PARAMETERSERVER+=" --setup $_setup_name"
echo ${_BuildDir}/bin/MQ/parmq/$PARAMETERSERVER &> $PARAMSRV_LOG
${_BuildDir}/bin/MQ/parmq/$PARAMETERSERVER &>> $PARAMSRV_LOG &
STILL_RUNNING=`ps | wc -l`
STILL_RUNNING_OUT="${STILL_RUNNING}\n"
STILL_RUNNING_OUT+=`ps`
echo -e `date` "${STILL_RUNNING_OUT}" > ${_log_folder}/still_running_parserv.txt
while [ 6 -lt $STILL_RUNNING ]; do
sleep 5
# ps
# echo `ps | wc -l`
STILL_RUNNING=`ps | wc -l`
STILL_RUNNING_OUT="${STILL_RUNNING}\n"
STILL_RUNNING_OUT+=`ps`
echo -e `date` "${STILL_RUNNING_OUT}" > ${_log_folder}/still_running_parserv.txt
done
#!/bin/bash
#SBATCH -J McbmOnline
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
if [ $# -eq 5 ]; then
_run_id=$1
_nbbranch=$2
_TriggSet=$3
_Disk=$4
_hostname=$5
else
echo 'Missing parameters. Only following pattern allowed:'
echo 'mq_processing_node_array.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port>'
exit 1
fi
# Prepare log folder variables
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
_localhost=`hostname`
echo $SLURM_ARRAY_TASK_ID ${_run_id} ${_nbbranch} ${_TriggSet} ${_hostname}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
# Only one Processing branch is monitoring, and the full topology gets 2.5 TS/s, so with 10 branches pub may be ~10s
_pubfreqts=3
_pubminsec=1.0
_pubmaxsec=10.0
########################################################################################################################
# Setup file and parameter files for parameter server
_setup_name=mcbm_beam_2022_03_22_iron
_parfileBmon=$VMCWORKDIR/macro/beamtime/mcbm2022/mBmonCriPar.par
_parfileSts=$VMCWORKDIR/macro/beamtime/mcbm2022/mStsPar.par
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriPar.par
_parfileRich=$VMCWORKDIR/macro/beamtime/mcbm2021/mRichPar_70.par
# Parameter files => Update depending on run ID!!!
if [ $_run_id -ge 2060 ]; then
if [ $_run_id -le 2065 ]; then
_setup_name=mcbm_beam_2022_03_09_carbon
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParCarbon.par
elif [ $_run_id -le 2160 ]; then # Potentially wrong setup between 2065 and 2150 but not official runs
_setup_name=mcbm_beam_2022_03_22_iron
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParIron.par
elif [ $_run_id -le 2310 ]; then # Potentially wrong setup between 2160 and 2176 but not official runs
_setup_name=mcbm_beam_2022_03_28_uranium
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
elif [ $_run_id -ge 2350 ]; then
_setup_name=mcbm_beam_2022_05_23_nickel
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
fi
fi
########################################################################################################################
# Apply sets of settings for different triggers
_UnpTimeOffsBMon=0
_UnpTimeOffsSts=-970
_UnpTimeOffsTrd1d=1225
_UnpTimeOffsTrd2d=-525
_UnpTimeOffsTof=45
_UnpTimeOffsRich=95
########################################################################################################################
# Apply sets of settings for different triggers
_TriggerMinNumberBmon=0
_TriggerMinNumberSts=0
_TriggerMinNumberTrd1d=0
_TriggerMinNumberTrd2d=0
_TriggerMinNumberTof=4
_TriggerMinNumberRich=0
_TriggerMaxNumberBMon=-1
_TriggerMaxNumberSts=-1
_TriggerMaxNumberTrd1d=-1
_TriggerMaxNumberTrd2d=-1
_TriggerMaxNumberTof=-1
_TriggerMaxNumberRich=-1
_TriggerMinLayersNumberTof=0
_TriggerMinLayersNumberSts=0
_TrigWinMinBMon=-10
_TrigWinMaxBMon=10
_TrigWinMinSts=-40
_TrigWinMaxSts=40
_TrigWinMinTrd1d=-50
_TrigWinMaxTrd1d=400
_TrigWinMinTrd2d=-60
_TrigWinMaxTrd2d=350
_TrigWinMinTof=-10
_TrigWinMaxTof=70
_TrigWinMinRich=-10
_TrigWinMaxRich=40
bTrigSet=true;
case ${_TriggSet} in
0)
# NH: default, any Tof hit
_TriggerMaxNumberBMon=1000
_TriggerMinNumberTof=1
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-80
_TrigWinMaxTof=120
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
1)
# NH: default, Tof - T0 concidences (pulser)
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=1000
_TriggerMinNumberTof=2
_TriggerMinLayersNumberTof=1
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-180
_TrigWinMaxTof=220
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
2)
# NH: Tof standalone track trigger (cosmic)
_TriggerMaxNumberBMon=1000
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-30
_TrigWinMaxTof=70
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
3)
# NH: Tof track trigger with T0
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=2
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
4)
# NH: mCbm track trigger Tof, T0 & STS
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=2
_TriggerMinNumberSts=2
_TriggerMinLayersNumberSts=1
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
5)
# NH: mCbm lambda trigger
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=2
_TriggerMinNumberSts=8
_TriggerMinLayersNumberSts=2
_TriggerMinNumberTof=16
_TriggerMinLayersNumberTof=8
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
6)
# NH: One hit per detector system w/ big acceptance=mCbm full track trigger
_TriggerMinNumberBmon=1
_TriggerMaxNumberBMon=1;
_TriggerMinNumberSts=4
_TriggerMinLayersNumberSts=0
_TriggerMinNumberTrd1d=2
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TrigWinMinBMon=-50
_TrigWinMaxBMon=50
_TrigWinMinSts=-60
_TrigWinMaxSts=60
_TrigWinMinTrd1d=-300
_TrigWinMaxTrd1d=300
_TrigWinMinTrd2d=-200
_TrigWinMaxTrd2d=200
_TrigWinMinTof=-20
_TrigWinMaxTof=60
_TrigWinMinRich=-60
_TrigWinMaxRich=60
;;
7)
# PAL default: T0 + STS + TOF, only digi cut
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=2
_TriggerMinNumberTof=4
;;
8)
# PAL: default, Tof - T0 concidences (pulser)
_TriggerMinNumberBmon=4
_TriggerMinNumberTof=2
_TriggerMinLayersNumberTof=1
;;
9)
# PAL: Tof standalone track trigger (cosmic)
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
;;
10)
# PAL: Tof track trigger with T0
_TriggerMinNumberBmon=1
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
;;
11)
# PAL: mCbm track trigger Tof, T0 & STS
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=2
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
;;
12)
# PAL: mCbm lambda trigger
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=8
_TriggerMinNumberTof=16
_TriggerMinLayersNumberTof=8
;;
13)
# PAL: One hit per detector system w/ big acceptance=mCbm full track trigger
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=4
_TriggerMinNumberTrd1d=2
_TriggerMinNumberTrd2d=1
_TriggerMinNumberTof=8
_TriggerMinNumberRich=1
;;
14)
# PAL: mCbm track trigger Tof, T0 & STS
_TriggerMinNumberBmon=1
_TriggerMinNumberSts=4
_TriggerMinNumberTof=8
_TriggerMinLayersNumberTof=4
_TriggerMinLayersNumberSts=2
;;
*)
bTrigSet=false;
;;
esac
echo Using MQ trigger par set: ${_TriggSet}
########################################################################################################################
_ratelog=0 # hides ZMQ messages rates and bandwidth
#_ratelog=1 # display ZMQ messages rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_"
LOGFILETAG+=`hostname`
LOGFILETAG+="_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
# Each slurm sub-job runs a single device instance
case $SLURM_ARRAY_TASK_ID in
1) ############################
# Histogram server #
############################
HISTSERVER="MqHistoServer"
HISTSERVER+=" --control static"
HISTSERVER+=" --id server1"
HISTSERVER+=" --severity info"
HISTSERVER+=" --histport 8081"
HISTSERVER+=" --channel-config name=histogram-in,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11666,rateLogging=$_ratelog"
HISTSERVER+=" --channel-config name=histo-conf,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11667,rateLogging=0"
HISTSERVER+=" --channel-config name=canvas-conf,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11668,rateLogging=0"
echo ${_BuildDir}/bin/MQ/histogramServer/$HISTSERVER
${_BuildDir}/bin/MQ/histogramServer/$HISTSERVER
;;
2) ############################
# Sampler #
############################
SAMPLER="RepReqTsSampler"
SAMPLER+=" --control static"
SAMPLER+=" --id sampler1"
SAMPLER+=" --max-timeslices -1"
SAMPLER+=" --severity info"
SAMPLER+=" --fles-host $_hostname"
SAMPLER+=" --high-water-mark 10"
SAMPLER+=" --no-split-ts 1"
SAMPLER+=" --ChNameMissTs missedts"
SAMPLER+=" --ChNameCmds commands"
SAMPLER+=" --PubFreqTs $_pubfreqts"
SAMPLER+=" --PubTimeMin $_pubminsec"
SAMPLER+=" --PubTimeMax $_pubmaxsec"
SAMPLER+=" --channel-config name=ts-request,type=rep,method=bind,transport=zeromq,address=tcp://127.0.0.1:11555,rateLogging=$_ratelog"
SAMPLER+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11666,rateLogging=$_ratelog"
SAMPLER+=" --channel-config name=missedts,type=pub,method=bind,address=tcp://127.0.0.1:11006,rateLogging=$_ratelog"
SAMPLER+=" --channel-config name=commands,type=pub,method=bind,address=tcp://127.0.0.1:11007,rateLogging=$_ratelog"
SAMPLER+=" --transport zeromq"
echo ${_BuildDir}/bin/MQ/source/$SAMPLER
${_BuildDir}/bin/MQ/source/$SAMPLER
;;
3) ############################
# Parameter server #
############################
PARAMETERSERVER="parmq-server"
PARAMETERSERVER+=" --control static"
PARAMETERSERVER+=" --id parmq-server"
PARAMETERSERVER+=" --severity info"
PARAMETERSERVER+=" --channel-name parameters"
PARAMETERSERVER+=" --channel-config name=parameters,type=rep,method=bind,transport=zeromq,address=tcp://127.0.0.1:11005,rateLogging=0"
PARAMETERSERVER+=" --first-input-name $_parfileSts;$_parfileMuch;$_parfileTrdAsic;$_parfileTrdDigi;$_parfileTrdGas;$_parfileTrdGain;$_parfileTof;$_parfileBmon;$_parfileRich;$_parfilePsd"
PARAMETERSERVER+=" --first-input-type ASCII"
PARAMETERSERVER+=" --setup $_setup_name"
echo ${_BuildDir}/bin/MQ/parmq/$PARAMETERSERVER
${_BuildDir}/bin/MQ/parmq/$PARAMETERSERVER
;;
4) ############################
# Event Sink #
############################
EVTSINK="DigiEventSink"
EVTSINK+=" --control static"
EVTSINK+=" --id evtsink1"
EVTSINK+=" --severity info"
# EVTSINK+=" --severity debug"
EVTSINK+=" --StoreFullTs 0"
# EVTSINK+=" --BypassConsecutiveTs 1"
# EVTSINK+=" --BypassConsecutiveTs true"
if [ ${_Disk} -eq 0 ]; then
EVTSINK+=" --OutFileName /local/mcbm2022/data/${_run_id}_${_TriggSet}_${_localhost}.digi_events.root"
else
EVTSINK+=" --OutFileName /storage/${_Disk}/data/${_run_id}_${_TriggSet}_${_localhost}.digi_events.root"
fi
EVTSINK+=" --FillHistos true"
EVTSINK+=" --PubFreqTs $_pubfreqts"
EVTSINK+=" --PubTimeMin $_pubminsec"
EVTSINK+=" --PubTimeMax $_pubmaxsec"
EVTSINK+=" --EvtNameIn events"
EVTSINK+=" --channel-config name=events,type=pull,method=bind,transport=zeromq,rcvBufSize=$_nbbranch,address=tcp://127.0.0.1:11556,rateLogging=$_ratelog"
EVTSINK+=" --channel-config name=missedts,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11006,rateLogging=$_ratelog"
EVTSINK+=" --channel-config name=commands,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11007,rateLogging=$_ratelog"
EVTSINK+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11666,rateLogging=$_ratelog"
echo ${_BuildDir}/bin/MQ/mcbm/$EVTSINK
${_BuildDir}/bin/MQ/mcbm/$EVTSINK
;;
*) # Processing branches
# Branch index: sub-jobs 5 and 6 map to branch 0, sub-jobs 7 and 8 to branch 1, etc.
(( _iBranch = (SLURM_ARRAY_TASK_ID - 5) / 2 ))
(( _iPort = 11680 + _iBranch ))
if [ $((SLURM_ARRAY_TASK_ID%2)) -eq 1 ]; then
##########################
# Unpacker #
##########################
UNPACKER="MqUnpack"
UNPACKER+=" --control static"
UNPACKER+=" --id unp$_iBranch"
# UNPACKER+=" --severity error"
UNPACKER+=" --severity info"
# UNPACKER+=" --severity debug"
UNPACKER+=" --Setup $_setup_name"
UNPACKER+=" --RunId $_run_id"
UNPACKER+=" --IgnOverMs false"
UNPACKER+=" --UnpBmon true"
UNPACKER+=" --UnpMuch false"
UNPACKER+=" --UnpPsd false"
UNPACKER+=" --SetTimeOffs kT0,${_UnpTimeOffsBMon}"
UNPACKER+=" --SetTimeOffs kSTS,${_UnpTimeOffsSts}"
UNPACKER+=" --SetTimeOffs kTRD,${_UnpTimeOffsTrd1d}"
UNPACKER+=" --SetTimeOffs kTRD2D,${_UnpTimeOffsTrd2d}"
UNPACKER+=" --SetTimeOffs kTOF,${_UnpTimeOffsTof}"
UNPACKER+=" --SetTimeOffs kRICH,${_UnpTimeOffsRich}"
UNPACKER+=" --PubFreqTs $_pubfreqts"
UNPACKER+=" --PubTimeMin $_pubminsec"
UNPACKER+=" --PubTimeMax $_pubmaxsec"
# if [ ${_iBranch} -eq 0 ]; then
# UNPACKER+=" --FillHistos true"
# else
# UNPACKER+=" --FillHistos false"
# fi
UNPACKER+=" --TsNameOut unpts$_iBranch"
UNPACKER+=" --channel-config name=ts-request,type=req,method=connect,transport=zeromq,address=tcp://127.0.0.1:11555,rateLogging=$_ratelog"
UNPACKER+=" --channel-config name=parameters,type=req,method=connect,transport=zeromq,address=tcp://127.0.0.1:11005,rateLogging=0"
UNPACKER+=" --channel-config name=unpts$_iBranch,type=push,method=bind,transport=zeromq,sndBufSize=2,address=tcp://127.0.0.1:$_iPort,rateLogging=$_ratelog"
# UNPACKER+=" --channel-config name=commands,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11007"
UNPACKER+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11666,rateLogging=$_ratelog"
UNPACKER+=" --transport zeromq"
echo ${_BuildDir}/bin/MQ/mcbm/$UNPACKER
${_BuildDir}/bin/MQ/mcbm/$UNPACKER
else
##########################
# Event Builder #
##########################
EVTBUILDER="BuildDigiEvents"
EVTBUILDER+=" --control static"
EVTBUILDER+=" --id build$_iBranch"
EVTBUILDER+=" --severity info"
# EVTBUILDER+=" --severity debug"
EVTBUILDER+=" --PubFreqTs $_pubfreqts"
EVTBUILDER+=" --PubTimeMin $_pubminsec"
EVTBUILDER+=" --PubTimeMax $_pubmaxsec"
if [ ${_iBranch} -eq 0 ]; then
EVTBUILDER+=" --FillHistos true"
else
EVTBUILDER+=" --FillHistos false"
fi
EVTBUILDER+=" --IgnTsOver false"
EVTBUILDER+=" --EvtOverMode AllowOverlap"
EVTBUILDER+=" --RefDet kT0"
EVTBUILDER+=" --DelDet kMuch"
EVTBUILDER+=" --DelDet kPsd"
EVTBUILDER+=" --SetTrigWin kT0,$_TrigWinMinBMon,$_TrigWinMaxBMon"
EVTBUILDER+=" --SetTrigWin kSts,$_TrigWinMinSts,$_TrigWinMaxSts"
EVTBUILDER+=" --SetTrigWin kTrd,$_TrigWinMinTrd1d,$_TrigWinMaxTrd1d"
EVTBUILDER+=" --SetTrigWin kTrd2D,$_TrigWinMinTrd2d,$_TrigWinMaxTrd2d"
EVTBUILDER+=" --SetTrigWin kTof,$_TrigWinMinTof,$_TrigWinMaxTof"
EVTBUILDER+=" --SetTrigWin kRich,$_TrigWinMinRich,$_TrigWinMaxRich"
EVTBUILDER+=" --SetTrigMinNb kT0,$_BmonMin"
EVTBUILDER+=" --SetTrigMinNb kSts,$_StsMin"
EVTBUILDER+=" --SetTrigMinNb kTrd,$_Trd1dMin"
EVTBUILDER+=" --SetTrigMinNb kTrd2D,$_Trd2dMin"
EVTBUILDER+=" --SetTrigMinNb kTof,$_TofMin"
EVTBUILDER+=" --SetTrigMinNb kRich,$_RichMin"
EVTBUILDER+=" --SetTrigMaxNb kT0,$_BmonMax"
EVTBUILDER+=" --SetTrigMaxNb kSts,$_StsMax"
EVTBUILDER+=" --SetTrigMaxNb kTrd,$_Trd1dMax"
EVTBUILDER+=" --SetTrigMaxNb kTrd2D,$_Trd2dMax"
EVTBUILDER+=" --SetTrigMaxNb kTof,$_TofMax"
EVTBUILDER+=" --SetTrigMaxNb kRich,$_RichMax"
EVTBUILDER+=" --SetTrigMinLayersNb kTof,$_TofMinLay"
EVTBUILDER+=" --SetTrigMinLayersNb kSts,$_StsMinLay"
EVTBUILDER+=" --TsNameIn unpts$_iBranch"
EVTBUILDER+=" --EvtNameOut events"
EVTBUILDER+=" --DoNotSend true"
EVTBUILDER+=" --channel-config name=unpts$_iBranch,type=pull,method=connect,transport=zeromq,rcvBufSize=2,address=tcp://127.0.0.1:$_iPort,rateLogging=$_ratelog"
EVTBUILDER+=" --channel-config name=events,type=push,method=connect,transport=zeromq,sndBufSize=2,address=tcp://127.0.0.1:11556,rateLogging=$_ratelog"
# EVTBUILDER+=" --channel-config name=commands,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11007"
EVTBUILDER+=" --channel-config name=parameters,type=req,method=connect,transport=zeromq,address=tcp://127.0.0.1:11005,rateLogging=0"
EVTBUILDER+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11666,rateLogging=$_ratelog"
EVTBUILDER+=" --transport zeromq"
echo ${_BuildDir}/bin/MQ/mcbm/$EVTBUILDER
${_BuildDir}/bin/MQ/mcbm/$EVTBUILDER
fi
;;
esac
sleep 10
#!/bin/bash
#SBATCH -J McbmServers
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
if [ $# -eq 4 ]; then
_run_id=$1
_nbbranch=$2
_TriggSet=$3
_Disk=$4
else
echo 'Missing parameters. Only following pattern allowed:'
echo 'mq_servers.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index>'
exit 1
fi
# Prepare log folder variable
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
echo $SLURM_ARRAY_TASK_ID ${_run_id} ${_nbbranch} ${_TriggSet}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
########################################################################################################################
# Setup file and parameter files for parameter server
_setup_name=mcbm_beam_2022_03_22_iron
_parfileBmon=$VMCWORKDIR/macro/beamtime/mcbm2022/mBmonCriPar.par
_parfileSts=$VMCWORKDIR/macro/beamtime/mcbm2022/mStsPar.par
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriPar.par
_parfileRich=$VMCWORKDIR/macro/beamtime/mcbm2021/mRichPar_70.par
# Parameter files => Update depending on run ID!!!
if [ $_run_id -ge 2060 ]; then
if [ $_run_id -le 2065 ]; then
_setup_name=mcbm_beam_2022_03_09_carbon
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParCarbon.par
elif [ $_run_id -le 2160 ]; then # Potentially wrong setup between 2065 and 2150 but not official runs
_setup_name=mcbm_beam_2022_03_22_iron
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParIron.par
elif [ $_run_id -le 2310 ]; then # Potentially wrong setup between 2160 and 2176 but not official runs
_setup_name=mcbm_beam_2022_03_28_uranium
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
elif [ $_run_id -ge 2350 ]; then
_setup_name=mcbm_beam_2022_05_23_nickel
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
fi
fi
########################################################################################################################
_ratelog=0 # hides ZMQ message rates and bandwidth
#_ratelog=1 # displays ZMQ message rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_"
LOGFILETAG+=`hostname`
LOGFILETAG+="_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
############################
# Histogram server #
############################
HISTSRV_LOG="${_log_folder}histos_${LOGFILETAG}"
HISTSERVER="MqHistoServer"
HISTSERVER+=" --control static"
HISTSERVER+=" --id histo-server"
HISTSERVER+=" --severity info"
HISTSERVER+=" --histport 8080"
HISTSERVER+=" --channel-config name=histogram-in,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11666,rateLogging=$_ratelog"
HISTSERVER+=" --channel-config name=histo-conf,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11667,rateLogging=0"
HISTSERVER+=" --channel-config name=canvas-conf,type=sub,method=bind,transport=zeromq,address=tcp://127.0.0.1:11668,rateLogging=0"
echo ${_BuildDir}/bin/MQ/histogramServer/$HISTSERVER &> $HISTSRV_LOG
${_BuildDir}/bin/MQ/histogramServer/$HISTSERVER &>> $HISTSRV_LOG &
############################
# Parameter server #
############################
PARAMSRV_LOG="${_log_folder}parmq_${LOGFILETAG}"
PARAMETERSERVER="parmq-server"
PARAMETERSERVER+=" --control static"
PARAMETERSERVER+=" --id parmq-server"
PARAMETERSERVER+=" --severity info"
PARAMETERSERVER+=" --channel-name parameters"
PARAMETERSERVER+=" --channel-config name=parameters,type=rep,method=bind,transport=zeromq,address=tcp://127.0.0.1:11005,rateLogging=0"
PARAMETERSERVER+=" --first-input-name $_parfileSts;$_parfileTrdAsic;$_parfileTrdDigi;$_parfileTrdGas;$_parfileTrdGain;$_parfileTof;$_parfileBmon;$_parfileRich"
PARAMETERSERVER+=" --first-input-type ASCII"
PARAMETERSERVER+=" --setup $_setup_name"
echo ${_BuildDir}/bin/MQ/parmq/$PARAMETERSERVER &> $PARAMSRV_LOG
${_BuildDir}/bin/MQ/parmq/$PARAMETERSERVER &>> $PARAMSRV_LOG &
# Keep the job alive until the processes launched above are done: exit once the
# ps line count of this session drops below the predefined threshold
STILL_RUNNING=$(ps | wc -l)
{ date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running.txt
while [ 6 -lt $STILL_RUNNING ]; do
  sleep 5
  STILL_RUNNING=$(ps | wc -l)
  { date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running.txt
done
#!/bin/bash
#SBATCH -J MqStop
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
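# Stops a running topology: each device type is sent SIGINT in pipeline order
# (sampler -> unpackers -> event builders -> sink -> histogram server ->
# parameter server), waiting after each pkill until the matching processes
# are gone before stopping the next stage.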
pkill -SIGINT RepReqTsSampler
STILL_RUNNING=$(pgrep -c RepReqTsSampler)
while [ 0 -lt $STILL_RUNNING ]; do
  sleep 1
  STILL_RUNNING=$(pgrep -c RepReqTsSampler)
done
pkill -SIGINT MqUnpack
STILL_RUNNING=$(pgrep -c MqUnpack)
while [ 0 -lt $STILL_RUNNING ]; do
  sleep 1
  STILL_RUNNING=$(pgrep -c MqUnpack)
done
pkill -SIGINT BuildDig
STILL_RUNNING=$(pgrep -c BuildDigiEvents)
while [ 0 -lt $STILL_RUNNING ]; do
  sleep 1
  STILL_RUNNING=$(pgrep -c BuildDigiEvents)
done
pkill -SIGINT DigiEventSink
STILL_RUNNING=$(pgrep -c DigiEventSink)
while [ 0 -lt $STILL_RUNNING ]; do
  sleep 1
  STILL_RUNNING=$(pgrep -c DigiEventSink)
done
pkill -SIGINT MqHistoServer
STILL_RUNNING=$(pgrep -c MqHistoServer)
while [ 0 -lt $STILL_RUNNING ]; do
  sleep 1
  STILL_RUNNING=$(pgrep -c MqHistoServer)
done
pkill -SIGINT parmq-server
STILL_RUNNING=$(pgrep -c parmq-server)
while [ 0 -lt $STILL_RUNNING ]; do
  sleep 1
  STILL_RUNNING=$(pgrep -c parmq-server)
done
#!/bin/bash
#SBATCH -J McbmSink
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
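# Starts the DigiEventSink device of one processing node: it pulls the built
# events from the local event-builder branches and writes them to the storage
# disk selected by <Storage disk index>.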
_histServHost="127.0.0.1"
_parServHost="127.0.0.1"
if [ $# -ge 5 ]; then
  _run_id=$1
  _nbbranch=$2
  _TriggSet=$3
  _Disk=$4
  _hostname=$5
  if [ $# -ge 6 ]; then
    _histServHost=$6
    if [ $# -eq 7 ]; then
      _parServHost=$7
    fi
  fi
else
  echo 'Missing parameters. Only the following patterns are allowed:'
  echo 'mq_sink.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port>'
  echo 'mq_sink.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host>'
  echo 'mq_sink.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host> <par. serv host>'
  exit 1
fi
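# Example invocation (illustrative values; the publisher and server hosts match
# the ones used in start_topology_servers.sh):
# sbatch mq_sink.sbatch 2391 8 7 0 node8ib2:5560 node12ib0 node12ib0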
# Prepare log folder variables
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
_localhost=`hostname`
echo $SLURM_ARRAY_TASK_ID ${_localhost} ${_run_id} ${_nbbranch} ${_TriggSet} ${_hostname}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
# Only one processing branch does the monitoring and the full topology receives ~2.5 TS/s, so with 10 branches the publish interval may reach ~10 s
_pubfreqts=3
_pubminsec=1.0
_pubmaxsec=10.0
########################################################################################################################
_ratelog=0 # hides ZMQ message rates and bandwidth
#_ratelog=1 # displays ZMQ message rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_${_localhost}_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
############################
# Event Sink #
############################
EVTSINK_LOG="${_log_folder}evtsink1_${LOGFILETAG}"
EVTSINK="DigiEventSink"
EVTSINK+=" --control static"
EVTSINK+=" --id evtsink1"
EVTSINK+=" --severity info"
# EVTSINK+=" --severity debug"
EVTSINK+=" --StoreFullTs 0"
# EVTSINK+=" --BypassConsecutiveTs true"
EVTSINK+=" --WriteMissingTs false"
EVTSINK+=" --DisableCompression true"
EVTSINK+=" --TreeFileMaxSize 4000000000"
if [ ${_Disk} -eq 0 ]; then
EVTSINK+=" --OutFileName /local/mcbm2022/data/${_run_id}_${_TriggSet}_${_localhost}.digi_events.root"
else
EVTSINK+=" --OutFileName /storage/${_Disk}/data/${_run_id}_${_TriggSet}_${_localhost}.digi_events.root"
fi
EVTSINK+=" --FillHistos true"
EVTSINK+=" --PubFreqTs $_pubfreqts"
EVTSINK+=" --PubTimeMin $_pubminsec"
EVTSINK+=" --PubTimeMax $_pubmaxsec"
EVTSINK+=" --EvtNameIn events"
EVTSINK+=" --channel-config name=events,type=pull,method=bind,transport=zeromq,rcvBufSize=$_nbbranch,address=tcp://127.0.0.1:11556,rateLogging=$_ratelog"
EVTSINK+=" --channel-config name=missedts,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11006,rateLogging=$_ratelog"
EVTSINK+=" --channel-config name=commands,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11007,rateLogging=$_ratelog"
EVTSINK+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://${_histServHost}:11666,rateLogging=$_ratelog"
echo ${_BuildDir}/bin/MQ/mcbm/$EVTSINK &> $EVTSINK_LOG
${_BuildDir}/bin/MQ/mcbm/$EVTSINK &>> $EVTSINK_LOG &
# Keep the job alive until the sink process launched above is done
STILL_RUNNING=$(ps | wc -l)
{ date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running_sink.txt
while [ 6 -lt $STILL_RUNNING ]; do
  sleep 5
  STILL_RUNNING=$(ps | wc -l)
  { date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running_sink.txt
done
#!/bin/bash
#SBATCH -J McbmSource
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
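# Starts the RepReqTsSampler device of one processing node: it fetches
# timeslices from the publisher given as <hostname:port> (--fles-host) and
# serves them to the local unpacker branches on request.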
_histServHost="127.0.0.1"
_parServHost="127.0.0.1"
if [ $# -ge 5 ]; then
  _run_id=$1
  _nbbranch=$2
  _TriggSet=$3
  _Disk=$4
  _hostname=$5
  if [ $# -ge 6 ]; then
    _histServHost=$6
    if [ $# -eq 7 ]; then
      _parServHost=$7
    fi
  fi
else
  echo 'Missing parameters. Only the following patterns are allowed:'
  echo 'mq_source.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port>'
  echo 'mq_source.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host>'
  echo 'mq_source.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host> <par. serv host>'
  exit 1
fi
# Prepare log folder variables
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
_localhost=`hostname`
echo $SLURM_ARRAY_TASK_ID ${_localhost} ${_run_id} ${_nbbranch} ${_TriggSet} ${_hostname}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
# Only one processing branch does the monitoring and the full topology receives ~2.5 TS/s, so with 10 branches the publish interval may reach ~10 s
_pubfreqts=3
_pubminsec=1.0
_pubmaxsec=10.0
########################################################################################################################
_ratelog=0 # hides ZMQ message rates and bandwidth
#_ratelog=1 # displays ZMQ message rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_${_localhost}_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
############################
# Sampler #
############################
SAMPLER_LOG="${_log_folder}sampler1_${LOGFILETAG}"
SAMPLER="RepReqTsSampler"
SAMPLER+=" --control static"
SAMPLER+=" --id sampler1"
SAMPLER+=" --max-timeslices -1"
SAMPLER+=" --severity info"
SAMPLER+=" --fles-host $_hostname"
SAMPLER+=" --high-water-mark 10"
SAMPLER+=" --no-split-ts 1"
SAMPLER+=" --ChNameMissTs missedts"
SAMPLER+=" --ChNameCmds commands"
SAMPLER+=" --PubFreqTs $_pubfreqts"
SAMPLER+=" --PubTimeMin $_pubminsec"
SAMPLER+=" --PubTimeMax $_pubmaxsec"
SAMPLER+=" --channel-config name=ts-request,type=rep,method=bind,transport=zeromq,address=tcp://127.0.0.1:11555,rateLogging=$_ratelog"
SAMPLER+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://${_histServHost}:11666,rateLogging=$_ratelog"
SAMPLER+=" --channel-config name=missedts,type=pub,method=bind,address=tcp://127.0.0.1:11006,rateLogging=$_ratelog"
SAMPLER+=" --channel-config name=commands,type=pub,method=bind,address=tcp://127.0.0.1:11007,rateLogging=$_ratelog"
SAMPLER+=" --transport zeromq"
echo ${_BuildDir}/bin/MQ/source/$SAMPLER &> $SAMPLER_LOG
${_BuildDir}/bin/MQ/source/$SAMPLER &>> $SAMPLER_LOG &
# Keep the job alive until the sampler process launched above is done
STILL_RUNNING=$(ps | wc -l)
{ date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running_source.txt
while [ 6 -lt $STILL_RUNNING ]; do
  sleep 5
  STILL_RUNNING=$(ps | wc -l)
  { date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running_source.txt
done
#!/bin/bash
#SBATCH -J McbmUnps
#SBATCH --oversubscribe
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
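# Starts the MqUnpack branches of one processing node: each branch requests
# timeslices from the local sampler, unpacks them into digis and pushes the
# result to its event builder on port 11680 + <branch index>.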
_histServHost="127.0.0.1"
_parServHost="127.0.0.1"
if [ $# -ge 5 ]; then
  _run_id=$1
  _nbbranch=$2
  _TriggSet=$3
  _Disk=$4
  _hostname=$5
  if [ $# -ge 6 ]; then
    _histServHost=$6
    if [ $# -eq 7 ]; then
      _parServHost=$7
    fi
  fi
else
  echo 'Missing parameters. Only the following patterns are allowed:'
  echo 'mq_unpackers.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port>'
  echo 'mq_unpackers.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host>'
  echo 'mq_unpackers.sbatch <Run Id> <Nb // branches> <Trigger set> <Storage disk index> <hostname:port> <hist serv host> <par. serv host>'
  exit 1
fi
# Prepare log folder variables
_log_folder="/local/mcbm2022/online_logs/${_run_id}/"
_localhost=`hostname`
echo $SLURM_ARRAY_TASK_ID ${_localhost} ${_run_id} ${_nbbranch} ${_TriggSet} ${_hostname}
# CBMROOT + FAIRMQ initialisation
_BuildDir=/scratch/loizeau/cbmroot_mcbm/build
source ${_BuildDir}/config.sh
# source /local/mcbm2022/install/config.sh
if [ -e $SIMPATH/bin/fairmq-shmmonitor ]; then
$SIMPATH/bin/fairmq-shmmonitor --cleanup
fi
# Only one processing branch does the monitoring and the full topology receives ~2.5 TS/s, so with 10 branches the publish interval may reach ~10 s
_pubfreqts=3
_pubminsec=1.0
_pubmaxsec=10.0
########################################################################################################################
# Setup file and parameter files for parameter server
_setup_name=mcbm_beam_2022_03_22_iron
_parfileBmon=$VMCWORKDIR/macro/beamtime/mcbm2022/mBmonCriPar.par
_parfileSts=$VMCWORKDIR/macro/beamtime/mcbm2022/mStsPar.par
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22d_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriPar.par
_parfileRich=$VMCWORKDIR/macro/beamtime/mcbm2021/mRichPar_70.par
# Parameter files => Update depending on run ID!!!
if [ $_run_id -ge 2060 ]; then
if [ $_run_id -le 2065 ]; then
_setup_name=mcbm_beam_2022_03_09_carbon
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParCarbon.par
elif [ $_run_id -le 2160 ]; then # Setup possibly wrong for runs 2065-2150, but those were no official runs
_setup_name=mcbm_beam_2022_03_22_iron
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParIron.par
elif [ $_run_id -le 2310 ]; then # Setup possibly wrong for runs 2160-2176, but those were no official runs
_setup_name=mcbm_beam_2022_03_28_uranium
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22g_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
elif [ $_run_id -ge 2350 ]; then
_setup_name=mcbm_beam_2022_05_23_nickel
_parfileTrdAsic=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.asic.par
_parfileTrdDigi=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.digi.par
_parfileTrdGas=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gas.par
_parfileTrdGain=$VMCWORKDIR/parameters/trd/trd_v22h_mcbm.gain.par
_parfileTof=$VMCWORKDIR/macro/beamtime/mcbm2022/mTofCriParUranium.par
fi
fi
########################################################################################################################
# Apply sets of settings for different triggers
_UnpTimeOffsBMon=0
_UnpTimeOffsSts=-970
_UnpTimeOffsTrd1d=1225
_UnpTimeOffsTrd2d=-525
_UnpTimeOffsTof=45
_UnpTimeOffsRich=95
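# The offsets above shift each subsystem in time during unpacking; the values
# are assumed to be in ns, as in the other mCBM unpacking macros.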
########################################################################################################################
_ratelog=0 # hides ZMQ message rates and bandwidth
#_ratelog=1 # displays ZMQ message rates and bandwidth
# ProcessName_runid_trigset_hostname_yyyy_mm_dd_hh_mm_ss.log
LOGFILETAG="${_run_id}_${_TriggSet}_${_localhost}_"
LOGFILETAG+=`date +%Y_%m_%d_%H_%M_%S`
LOGFILETAG+=".log"
########################################################################################################################
############################
# Processing branches #
############################
_iBranch=0
while (( _iBranch < _nbbranch )); do
(( _iPort = 11680 + _iBranch ))
##########################
# Unpacker #
##########################
UNPACKER_LOG="${_log_folder}unp${_iBranch}_${LOGFILETAG}"
UNPACKER="MqUnpack"
UNPACKER+=" --control static"
UNPACKER+=" --id unp$_iBranch"
# UNPACKER+=" --severity error"
UNPACKER+=" --severity info"
# UNPACKER+=" --severity debug"
UNPACKER+=" --Setup $_setup_name"
UNPACKER+=" --RunId $_run_id"
UNPACKER+=" --IgnOverMs false"
UNPACKER+=" --UnpBmon true"
UNPACKER+=" --UnpMuch false"
UNPACKER+=" --UnpPsd false"
UNPACKER+=" --SetTimeOffs kT0,${_UnpTimeOffsBMon}"
UNPACKER+=" --SetTimeOffs kSTS,${_UnpTimeOffsSts}"
UNPACKER+=" --SetTimeOffs kTRD,${_UnpTimeOffsTrd1d}"
UNPACKER+=" --SetTimeOffs kTRD2D,${_UnpTimeOffsTrd2d}"
UNPACKER+=" --SetTimeOffs kTOF,${_UnpTimeOffsTof}"
UNPACKER+=" --SetTimeOffs kRICH,${_UnpTimeOffsRich}"
UNPACKER+=" --PubFreqTs $_pubfreqts"
UNPACKER+=" --PubTimeMin $_pubminsec"
UNPACKER+=" --PubTimeMax $_pubmaxsec"
# if [ ${_iBranch} -eq 0 ]; then
# UNPACKER+=" --FillHistos true"
# else
# UNPACKER+=" --FillHistos false"
# fi
UNPACKER+=" --TsNameOut unpts$_iBranch"
UNPACKER+=" --channel-config name=ts-request,type=req,method=connect,transport=zeromq,address=tcp://127.0.0.1:11555,rateLogging=$_ratelog"
UNPACKER+=" --channel-config name=unpts$_iBranch,type=push,method=bind,transport=zeromq,sndBufSize=2,address=tcp://127.0.0.1:$_iPort,rateLogging=$_ratelog"
# UNPACKER+=" --channel-config name=commands,type=sub,method=connect,transport=zeromq,address=tcp://127.0.0.1:11007"
UNPACKER+=" --channel-config name=parameters,type=req,method=connect,transport=zeromq,address=tcp://${_parServHost}:11005,rateLogging=0"
UNPACKER+=" --channel-config name=histogram-in,type=pub,method=connect,transport=zeromq,address=tcp://${_histServHost}:11666,rateLogging=$_ratelog"
UNPACKER+=" --transport zeromq"
  echo ${_BuildDir}/bin/MQ/mcbm/$UNPACKER &> $UNPACKER_LOG
  ${_BuildDir}/bin/MQ/mcbm/$UNPACKER &>> $UNPACKER_LOG &
(( _iBranch += 1 ))
done
# Keep the job alive until the unpacker processes launched above are done
STILL_RUNNING=$(ps | wc -l)
{ date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running_unpackers.txt
while [ 6 -lt $STILL_RUNNING ]; do
  sleep 5
  STILL_RUNNING=$(ps | wc -l)
  { date; echo "${STILL_RUNNING}"; ps; } > ${_log_folder}still_running_unpackers.txt
done
#!/bin/bash
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
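# Example invocation (illustrative values: run 2391, 8 branches per node,
# trigger set 7, /local disk):
# ./start_topology.sh 2391 8 7 0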
if [ $# -eq 4 ]; then
  _run_id=$1
  _nbbranch=$2
  _TriggSet=$3
  _Disk=$4
  if [ ${_nbbranch} -eq 0 ]; then
    echo 'Nb branches cannot be 0! At least one branch is needed!'
    exit 1
  fi
  if [ ${_Disk} -lt 0 ] || [ ${_Disk} -gt 3 ]; then
    echo 'Disk index on the en13-16 nodes can only be in [0-3]!'
    exit 1
  fi
else
  echo 'Missing parameters. Only the following pattern is allowed:'
  echo 'start_topology.sh <Run Id> <Nb // branches> <Trigger set> <Storage disk index>'
  exit 1
fi
((_nbjobs = 4 + $_nbbranch*2 ))
_log_folder="/local/mcbm2022/online_logs/${_run_id}"
_log_config="-D ${_log_folder} -o ${_run_id}_%A_%a.out.log -e ${_run_id}_%A_%a.err.log"
# Create the log folders
sbatch -w en13 create_log_folder.sbatch ${_run_id}
sbatch -w en14 create_log_folder.sbatch ${_run_id}
sbatch -w en15 create_log_folder.sbatch ${_run_id}
sbatch -w en16 create_log_folder.sbatch ${_run_id}
# Online ports
sbatch -w en13 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5560
sbatch -w en14 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5561
sbatch -w en15 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5560
sbatch -w en16 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5561
# Replay ports
#sbatch -w en13 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5557
#sbatch -w en14 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5557
#sbatch -w en15 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5557
#sbatch -w en16 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5557
#!/bin/bash
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
if [ $# -eq 4 ]; then
  _run_id=$1
  _nbbranch=$2
  _TriggSet=$3
  _Disk=$4
  if [ ${_nbbranch} -eq 0 ]; then
    echo 'Nb branches cannot be 0! At least one branch is needed!'
    exit 1
  fi
  if [ ${_Disk} -lt 0 ] || [ ${_Disk} -gt 3 ]; then
    echo 'Disk index on the en13-16 nodes can only be in [0-3]!'
    exit 1
  fi
else
  echo 'Missing parameters. Only the following pattern is allowed:'
  echo 'start_topology_array.sh <Run Id> <Nb // branches> <Trigger set> <Storage disk index>'
  exit 1
fi
((_nbjobs = 4 + $_nbbranch*2 ))
_log_folder="/local/mcbm2022/online_logs/${_run_id}"
_log_config="-D ${_log_folder} -o ${_run_id}_%A_%a.out.log -e ${_run_id}_%A_%a.err.log"
# Create the log folders
sbatch -w en13 create_log_folder.sbatch ${_run_id}
sbatch -w en14 create_log_folder.sbatch ${_run_id}
sbatch -w en15 create_log_folder.sbatch ${_run_id}
sbatch -w en16 create_log_folder.sbatch ${_run_id}
# Online ports
sbatch -w en13 ${_log_config} --array=1-${_nbjobs} mq_processing_node_array.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5560
sbatch -w en14 ${_log_config} --array=1-${_nbjobs} mq_processing_node_array.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5561
sbatch -w en15 ${_log_config} --array=1-${_nbjobs} mq_processing_node_array.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5560
sbatch -w en16 ${_log_config} --array=1-${_nbjobs} mq_processing_node_array.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5561
#!/bin/bash
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
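# Variant of start_topology.sh with the histogram and parameter servers shared
# on a single host (node12 by default) and the source, sink, unpacker and
# event-builder devices submitted as separate jobs on each processing node.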
if [ $# -eq 4 ]; then
  _run_id=$1
  _nbbranch=$2
  _TriggSet=$3
  _Disk=$4
  if [ ${_nbbranch} -eq 0 ]; then
    echo 'Nb branches cannot be 0! At least one branch is needed!'
    exit 1
  fi
  if [ ${_Disk} -lt 0 ] || [ ${_Disk} -gt 3 ]; then
    echo 'Disk index on the en13-16 nodes can only be in [0-3]!'
    exit 1
  fi
else
  echo 'Missing parameters. Only the following pattern is allowed:'
  echo 'start_topology_servers.sh <Run Id> <Nb // branches> <Trigger set> <Storage disk index>'
  exit 1
fi
((_nbjobs = 4 + $_nbbranch*2 ))
_log_folder="/local/mcbm2022/online_logs/${_run_id}"
_log_config="-D ${_log_folder} -o ${_run_id}_%A_%a.out.log -e ${_run_id}_%A_%a.err.log"
_serversHost=node12
#_serversHost=en13
# Create the log folders
sbatch -w node12 create_log_folder.sbatch ${_run_id}
sbatch -w en13 create_log_folder.sbatch ${_run_id}
sbatch -w en14 create_log_folder.sbatch ${_run_id}
sbatch -w en15 create_log_folder.sbatch ${_run_id}
sbatch -w en16 create_log_folder.sbatch ${_run_id}
sleep 2
# Common servers
sbatch -w node12 ${_log_config} mq_histoserv.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} ""
sbatch -w node12 ${_log_config} mq_parserv.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} ""
# Online ports: Sources
sbatch -w en13 ${_log_config} mq_source.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en14 ${_log_config} mq_source.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en15 ${_log_config} mq_source.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en16 ${_log_config} mq_source.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
# Online ports: Sinks
sbatch -w en13 ${_log_config} mq_sink.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en14 ${_log_config} mq_sink.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en15 ${_log_config} mq_sink.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en16 ${_log_config} mq_sink.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
# Online ports: unpackers
sbatch -w en13 ${_log_config} mq_unpackers.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en14 ${_log_config} mq_unpackers.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en15 ${_log_config} mq_unpackers.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en16 ${_log_config} mq_unpackers.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
# Online ports: Event builders
sbatch -w en13 ${_log_config} mq_builders.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en14 ${_log_config} mq_builders.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en15 ${_log_config} mq_builders.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5560 ${_serversHost}ib0 ${_serversHost}ib0
sbatch -w en16 ${_log_config} mq_builders.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5561 ${_serversHost}ib0 ${_serversHost}ib0
# Replay ports
#sbatch -w en13 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5557
#sbatch -w en14 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node8ib2:5557
#sbatch -w en15 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5557
#sbatch -w en16 ${_log_config} mq_processing_node.sbatch ${_run_id} ${_nbbranch} ${_TriggSet} ${_Disk} node9ib2:5557
#!/bin/bash
# Copyright (C) 2022 Facility for Antiproton and Ion Research in Europe, Darmstadt
# SPDX-License-Identifier: GPL-3.0-only
# author: Pierre-Alain Loizeau [committer]
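# Submits mq_shutdown.sbatch on each processing node to stop all MQ devices of
# the given run.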
if [ $# -eq 1 ]; then
  _run_id=$1
else
  echo 'Missing parameters. Only the following pattern is allowed:'
  echo 'stop_topology.sh <Run Id>'
  exit 1
fi
_log_folder="/local/mcbm2022/online_logs/${_run_id}"
_log_config="-D ${_log_folder} -o ${_run_id}_%A_%a.out.log -e ${_run_id}_%A_%a.err.log"
# Online ports
sbatch -w en13 ${_log_config} mq_shutdown.sbatch ${_run_id}
sbatch -w en14 ${_log_config} mq_shutdown.sbatch ${_run_id}
sbatch -w en15 ${_log_config} mq_shutdown.sbatch ${_run_id}
sbatch -w en16 ${_log_config} mq_shutdown.sbatch ${_run_id}