1. Download PLUTO to your scratch space
(Find the PLUTO code here or check out Alankar Dutta's updated version.)
Add the PLUTO directory location to your ~/.bashrc and load all the required modules, as shown on the Homepage.
export PLUTO_DIR="/scratch/<username>/PLUTO-4.4"
Here's a sample ~/.bashrc for reference.
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
shopt -s direxpand
# . /home-ext/apps/spack/share/spack/setup-env.sh
export SPACK_ROOT=/scratch/<username>/spack
. $SPACK_ROOT/share/spack/setup-env.sh
spack load hwloc@2.7.0%intel@2021.4.0
spack load zlib@1.2.11%intel@2021.4.0
spack load gsl@2.7%intel@2021.4.0
spack load gmp@6.2.1%intel@2021.4.0
spack load hdf5@1.10.8%intel@2021.4.0/ssgjscn
spack load intel-mpi@2019.10.317%intel@2021.4.0/6icwzn3
spack load intel-oneapi-compilers@2021.4.0
spack load cmake@3.22.1/ozzmsd4
export SCRATCH="/scratch/<username>"
export PLUTO_DIR="/scratch/<username>/PLUTO-4.4"
export LIBRARY_PATH=$LD_LIBRARY_PATH
export CPATH=/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include:$CPATH
The loaded modules can be checked with spack find --loaded.
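For a quick sanity check (a minimal sketch, assuming the ~/.bashrc above has been saved), reload the shell configuration and confirm the environment:
source ~/.bashrc
echo $PLUTO_DIR      # should print /scratch/<username>/PLUTO-4.4
spack find --loaded  # should list hdf5, intel-mpi, gsl, etc. as loaded above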
2. Create a Config File for the Makefile with mpiicc
Create a file named PARM.parallel.defs (say) in $PLUTO_DIR/Config with these default compiler flags (mpiicc is faster than mpicc, and USE_ASYNC_IO=TRUE enables asynchronous writing of files).
vi $PLUTO_DIR/Config/PARM.parallel.defs
########################################################################
#
#    Configuration file for mpiicc (parallel)
#
CC       = mpiicc
CFLAGS   = -c -O3 #-Wundef
#LDFLAGS = -lm
PARALLEL = TRUE
USE_HDF5 = FALSE
USE_PNG  = FALSE
#######################################
# MPI additional specifications
ifeq ($(strip $(PARALLEL)), TRUE)
 USE_ASYNC_IO = TRUE
endif
#    HDF5 library options (HDF5 flags are added via local_make in step 4)
ifeq ($(strip $(USE_HDF5)), TRUE)
 LDFLAGS += -lhdf5 -lz
endif
#    PNG library options
ifeq ($(strip $(USE_PNG)), TRUE)
 LDFLAGS += -lpng
endif
3. Choose the Problem parameters
mkdir /scratch/<username>/runs/sedov-taylor
cd /scratch/<username>/runs/sedov-taylor
cp /scratch/<username>/PLUTO-4.4/Test_Problems/HD/Sedov/definitions_04.h definitions.h
cp /scratch/<username>/PLUTO-4.4/Test_Problems/HD/Sedov/pluto_04.ini pluto.ini
cp /scratch/<username>/PLUTO-4.4/Test_Problems/HD/Sedov/init.c .
Changes for a 3-D run:
In definitions.h:
#define PHYSICS            HD
#define DIMENSIONS           3
#define GEOMETRY            CARTESIAN
In pluto.ini:
[Grid]
X1-grid  1  0.0  256  u  1.0
X2-grid  1  0.0  256  u  1.0
X3-grid  1  0.0  256  u  1.0
[Time]
tstop      1.0
[Boundary]
X3-beg    reflective
X3-end    outflow
[Static Grid Output]
output_dir  ./output/
log_dir    ./output/Log_Files
dbl    -1.0 -1  single_file
dbl.h5    0.05  -1
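For reference (following PLUTO's documented pluto.ini conventions, worth cross-checking with the user guide for your version), each [Grid] line reads:
# X1-grid  <patches>  <x_left>  <zones>  <type: u = uniform>  <x_right>
so the settings above define a uniform 256^3 box on [0,1]^3, and the dbl.h5 entry requests an HDF5 snapshot every 0.05 code-time units (the -1 disables output by step number).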
4. Add a local_make file to include the necessary directories and compile the code
Additions for enabling HDF5 I/O: add the following to the local_make file.
INCLUDE_DIRS += -I/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include
LDFLAGS      += -lhdf5 -lz
CFLAGS       += -DUSE_HDF5
OBJ          += hdf5_io.o
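Before running make, the problem directory needs a makefile, which is generated with PLUTO's interactive setup tool; a minimal sketch (the exact menu labels may differ between PLUTO versions, and PARM.parallel.defs is the config created in step 2):
cd /scratch/<username>/runs/sedov-taylor
python $PLUTO_DIR/setup.py   # walk through the menus and pick PARM.parallel.defs as the makefile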
Compile the code:   make -j32 && make clean
5. Submit the Slurm script
An example script:
#!/bin/bash
#SBATCH --job-name="Sedov"
#SBATCH -N 3                 # Number of nodes
#SBATCH --ntasks-per-node=48 # Number of cores per node
#SBATCH --mail-type=END      # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=<username>@iisc.ac.in  # Where to send mail. Set this to your email address
#SBATCH -p debug
#SBATCH -t 00-00:40:00       # dd-hh:mm:ss
#SBATCH --output=%x-%j.log
#SBATCH --error=%x-%j.err
spack load fftw@3.3.10%intel@2021.4.0/4s24afk
#spack load intel-mpi@2019.10.317%intel@2021.4.0/6icwzn3
export SCRATCH="/scratch/<username>/"
export PLUTO_DIR="/scratch/<username>/PLUTO-4.4"
if [ X"$SLURM_STEP_ID" = "X" -a X"$SLURM_PROCID" = "X"0 ]
then
  echo "=========================================="
  echo "Date              = $(date)"
  echo "SLURM_JOB_ID      = $SLURM_JOB_ID"
  echo "Nodes Allocated   = $SLURM_JOB_NUM_NODES"
  echo "Working Directory = $(pwd)"
fi
cd $SLURM_SUBMIT_DIR
export PROG="./pluto -catalyst 1 density.py"
mkdir -p ./output/Log_Files
#export I_MPI_FALLBACK=disable
#export I_MPI_FABRICS=shm:ofi
mpiexec.hydra -n $SLURM_NTASKS $PROG
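To submit the job, save the script (the file name below is just an example) and track it with the usual Slurm commands:
sbatch submit_sedov.sh     # submit the script above (hypothetical file name)
squeue -u <username>       # check the job's queue status
tail -f Sedov-<jobid>.log  # follow the log file produced by the --output pattern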
Hurray!!!
(Took ~ 22 minutes on 144 processors with mpiicc)
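Once the run finishes, the snapshots and logs end up in the directories set in pluto.ini; assuming PLUTO's default naming conventions (data.NNNN.dbl.h5 snapshots and a rank-0 log file), a quick check might look like:
ls output/                          # data.0000.dbl.h5, data.0001.dbl.h5, ...
tail output/Log_Files/pluto.0.log   # end-of-run summary from rank 0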