Use CATALYST with your simulations
Let's try incorporating CATALYST in-situ visualisation with PLUTO
for the Sedov-Taylor blast problem we tested just now.
1. Download the ParaView OSMesa version and Catalyst
ParaView's website: https://www.paraview.org/download/
mkdir /scratch/<username>/modules
cd /scratch/<username>/modules
wget -O ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64.tar.gz 'https://www.paraview.org/paraview-downloads/download.php?submit=Download&version=v5.11&type=binary&os=Linux&downloadFile=ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64.tar.gz'
tar -xzvf ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64.tar.gz
ls /scratch/<username>/modules/ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64/
>> bin lib share
Catalyst Git page: https://gitlab.kitware.com/paraview/catalyst
mkdir /scratch/<username>/modules/paraCatalyst
git clone https://gitlab.kitware.com/paraview/catalyst.git
cd catalyst
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/scratch/<username>/modules/paraCatalyst -DCATALYST_USE_MPI=ON ..
make -j16 && make install
ls /scratch/<username>/modules/paraCatalyst
>> include lib64
2. Export the paths into LD_LIBRARY_PATH and LIBRARY_PATH
[NOTE: Always append the ParaView and Catalyst library paths at the end of LD_LIBRARY_PATH; otherwise their bundled libraries conflict with the HDF5 libraries.]
Here's a sample ~/.bashrc for reference.
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
shopt -s direxpand
# . /home-ext/apps/spack/share/spack/setup-env.sh
export SPACK_ROOT=/scratch/<username>/spack
. $SPACK_ROOT/share/spack/setup-env.sh
spack load hwloc@2.7.0%intel@2021.4.0
spack load zlib@1.2.11%intel@2021.4.0
spack load gsl@2.7%intel@2021.4.0
spack load gmp@6.2.1%intel@2021.4.0
spack load hdf5@1.10.8%intel@2021.4.0/ssgjscn
spack load intel-mpi@2019.10.317%intel@2021.4.0/6icwzn3
spack load intel-oneapi-compilers@2021.4.0
spack load cmake@3.22.1/ozzmsd4
export SCRATCH="/scratch/<username>"
export PLUTO_DIR="/scratch/<username>/PLUTO-4.4-updated"
export CATALYST_DIR="/scratch/<username>/modules/paraCatalyst"
export PARAVIEW_DIR="/scratch/<username>/modules/ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64"
export LIBRARY_PATH=$LD_LIBRARY_PATH
export CPATH=/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include:$CPATH
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$PARAVIEW_DIR/lib"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$CATALYST_DIR/lib64"
3. Generate CatalystAdaptor.h and Python files
Map out the fluid variables in generateCatalystAdaptor.py and accordingly create the CatalystAdaptor.h header file. A sample Python script can be found here; you may need to tweak it a little according to your simulation.
dump_vars = ['rho', 'prs', 'vx1', 'vx2', 'vx3']
code_vars = ['RHO', 'PRS', 'VX1', 'VX2', 'VX3']
user_vars = [] #['Temp', 'ndens', 'mach', 'PbykB']
#Assumption: User defined variables are upper case version of user_vars[]
ntracers = 0
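For concreteness, here is a minimal, hypothetical sketch of how a generator script of this kind could emit one Conduit field registration per variable into CatalystAdaptor.h. The variable lists follow the ones above, but the emitted C call, the mesh node, and PLUTO's d->Vc / ncells identifiers are illustrative assumptions, not the actual contents of generateCatalystAdaptor.py.

# Hypothetical sketch (not the real generateCatalystAdaptor.py): emit one
# Conduit field registration per fluid variable into CatalystAdaptor.h.
dump_vars = ['rho', 'prs', 'vx1', 'vx2', 'vx3']   # names seen by ParaView
code_vars = ['RHO', 'PRS', 'VX1', 'VX2', 'VX3']   # PLUTO's internal indices

with open('CatalystAdaptor.h', 'w') as header:
    for dump_name, code_name in zip(dump_vars, code_vars):
        # d->Vc[nv][0][0] points at PLUTO's contiguous cell data; 'mesh' and
        # 'ncells' are assumed to be defined by the surrounding adaptor code.
        header.write(
            'conduit_node_set_path_external_float64_ptr(mesh,\n'
            f'    "fields/{dump_name}/values",\n'
            f'    (double *)d->Vc[{code_name}][0][0], ncells);\n')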
4. Create the Catalyst pipeline scripts with necessary parameters (e.g., density.py here)
First, change the beginning of the pipeline from data.00* to grid. Then do all the necessary visualisation steps. Here a slice is made diagonally through the cubic box, and the density is plotted in log scale. After all the steps, the grid needs to be extracted to a PNG format (Extractors -> Image -> PNG).

Then edit the PNG extractor properties: set the Trigger to TimeValue and the Trigger Length to 0.001 code units. Also change the Background color palette to a single-color mode (e.g., black).
Then, save the Catalyst state (File -> Save Catalyst State), setting up the dump folder (output/catalyst/density in this case) and the global trigger values.
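For reference, below is a minimal, hand-written sketch of what a Catalyst state script along these lines can look like. It is an approximation, not the exact file ParaView saves: the channel name grid, the field name rho, the slice normal, and the file-name pattern are assumptions to be adapted to your own pipeline.

# script-version: 2.0
# Minimal sketch approximating density.py; ParaView's File -> Save Catalyst
# State generates a longer, more explicit version of this.
from paraview.simple import *

# The producer's registration name must match the channel the adaptor publishes.
grid = TrivialProducer(registrationName='grid')

# Diagonal slice through the cubic box.
slice1 = Slice(registrationName='Slice1', Input=grid)
slice1.SliceType = 'Plane'
slice1.SliceType.Normal = [1.0, 1.0, 0.0]

renderView1 = CreateView('RenderView')
display = Show(slice1, renderView1)
ColorBy(display, ('CELLS', 'rho'))

# Plot the density on a logarithmic colour scale.
rhoLUT = GetColorTransferFunction('rho')
rhoLUT.UseLogScale = 1

# PNG extractor (Extractors -> Image -> PNG), one image per trigger event.
pNG1 = CreateExtractor('PNG', renderView1, registrationName='PNG1')
pNG1.Trigger = 'TimeValue'
pNG1.Trigger.Length = 0.001
pNG1.Writer.FileName = 'density_{timestep:06d}.png'

from paraview import catalyst
options = catalyst.Options()
options.ExtractsOutputDirectory = 'output/catalyst/density'
options.GlobalTrigger = 'TimeValue'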

5. Add compiler flags to local_make and compile the code
Here's a sample local_make:
INCLUDE_DIRS += -I/scratch/<username>/modules/paraCatalyst/include/catalyst-2.0
LDFLAGS += -L/scratch/<username>/modules/paraCatalyst/lib64 -lcatalyst
CFLAGS += -DUSE_CATALYST # -DCATALYST_DEBUG
CFLAGS += -DPARAVIEW_IMPL_DIR=\"/scratch/<username>/modules/ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64/lib/catalyst\" -DVTK_SILENCE_GET_VOID_POINTER_WARNINGS
INCLUDE_DIRS += -I/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include
LDFLAGS += -lhdf5 -lz
CFLAGS += -DUSE_HDF5
OBJ += hdf5_io.o
Compile the code: make -j32 && make clean
6. Submit the Slurm script
An example script:
#!/bin/bash
#SBATCH --job-name="Sedov"
#SBATCH -N 3 # Number of nodes
#SBATCH --ntasks-per-node=48 # Number of cores per node
#SBATCH --mail-type=END # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=<username>@iisc.ac.in # Where to send mail. Set this to your email address
#SBATCH -p debug
#SBATCH -t 00-00:40:00 #dd-hh:mm:ss
#SBATCH --output=%x-%j.log
#SBATCH --error=%x-%j.err
export SPACK_ROOT=/scratch/<username>/spack
. $SPACK_ROOT/share/spack/setup-env.sh
spack load intel-oneapi-compilers@2021.4.0
spack load hwloc@2.7.0%intel@2021.4.0
spack load zlib@1.2.11%intel@2021.4.0
spack load gsl@2.7%intel@2021.4.0
spack load gmp@6.2.1%intel@2021.4.0
spack load fftw@3.3.10%intel@2021.4.0/4s24afk
spack load hdf5@1.10.8%intel@2021.4.0/ssgjscn
#spack load intel-mpi@2019.10.317%intel@2021.4.0/6icwzn3
export SCRATCH="/scratch/<username>/"
export PLUTO_DIR="/scratch/<username>/modules/PLUTO-4.4-updated"
export CATALYST_DIR="/scratch/<username>/modules/paraCatalyst"
export PARAVIEW_DIR="/scratch/<username>/modules/ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64"
export LIBRARY_PATH=$LD_LIBRARY_PATH
export CPATH=/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include:$CPATH
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$PARAVIEW_DIR/lib"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$CATALYST_DIR/lib64"
if [ X"$SLURM_STEP_ID" = "X" -a X"$SLURM_PROCID" = "X"0 ]
then
echo "=========================================="
echo "Date = $(date)"
echo "SLURM_JOB_ID = $SLURM_JOB_ID"
echo "Nodes Allocated = $SLURM_JOB_NUM_NODES"
echo "=========================================="
fi
echo "Working Directory = $(pwd)"
cd $SLURM_SUBMIT_DIR
export PROG="./pluto -catalyst 1 density.py"
mkdir -p ./output/Log_Files
#export I_MPI_FALLBACK=disable
#export I_MPI_FABRICS=shm:ofi
mpiexec.hydra -n $SLURM_NTASKS $PROG
Hurray!!!
(Took ~ 22 minutes on 144 processors with mpiicc)

Combine the PNG files into a video:
ffmpeg -framerate 10 -pattern_type glob -i 'density_*.png' -pix_fmt yuv420p -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" SedovDensity.mp4
Speed it up if required (scaling input timestamps by 0.25 plays it 4x faster):
ffmpeg -itsscale 0.25 -i SedovDensity.mp4 -c copy SedovSpeeded.mp4
