Debug a PLUTO Test Problem

Let's try out a simple Sedov-Taylor blast wave test problem.


(Find the PLUTO code here, or check out Alankar Dutta's updated version.)

Add your PLUTO directory location to your ~/.bashrc and load all the required modules, as shown on the Homepage.

export PLUTO_DIR="/scratch/<username>/PLUTO-4.4"

Here's a sample ~/.bashrc for reference.

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

shopt -s direxpand

# . /home-ext/apps/spack/share/spack/setup-env.sh
export SPACK_ROOT=/scratch/<username>/spack
. $SPACK_ROOT/share/spack/setup-env.sh

spack load hwloc@2.7.0%intel@2021.4.0
spack load zlib@1.2.11%intel@2021.4.0
spack load gsl@2.7%intel@2021.4.0
spack load gmp@6.2.1%intel@2021.4.0
spack load hdf5@1.10.8%intel@2021.4.0/ssgjscn
spack load intel-mpi@2019.10.317%intel@2021.4.0/6icwzn3
spack load intel-oneapi-compilers@2021.4.0
spack load cmake@3.22.1/ozzmsd4

export SCRATCH="/scratch/<username>"
export PLUTO_DIR="/scratch/<username>/PLUTO-4.4"

export LIBRARY_PATH=$LD_LIBRARY_PATH
export CPATH=/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include:$CPATH
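After editing, reload the file in your current shell so the exports and spack loads take effect (or simply log out and back in):

source ~/.bashrc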


You can verify the loaded packages with spack find --loaded; the list should include the hwloc, zlib, gsl, gmp, hdf5, intel-mpi, intel-oneapi-compilers, and cmake specs loaded above.


2. Create a Config File for the Makefile with mpiicc

Create a file named PARM.parallel.defs (say) in $PLUTO_DIR/Config with these default compiler flags (mpiicc is faster than mpicc, and USE_ASYNC_IO=TRUE enables asynchronous writing of files).


vi $PLUTO_DIR/Config/PARM.parallel.defs


########################################################################
#
#     Configuration file for mpiicc (parallel)
#
########################################################################

CC       = mpiicc
CFLAGS   = -c -O3 #-Wundef
#LDFLAGS  = -lm

PARALLEL = TRUE
USE_HDF5 = FALSE
USE_PNG  = FALSE

#######################################
# MPI additional specifications
#######################################

ifeq ($(strip $(PARALLEL)), TRUE)
 USE_ASYNC_IO=TRUE
endif

#######################################
#    HDF5 library options
#######################################

ifeq ($(strip $(USE_HDF5)), TRUE)
endif

#######################################
#     PNG library options
#######################################

ifeq ($(strip $(USE_PNG)), TRUE)
 LDFLAGS += -lpng
endif
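Before relying on mpiicc in this config, it is worth a quick check that the Intel MPI compiler wrapper is actually on your PATH after the spack loads:

which mpiicc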



3. Choose the Problem Parameters

mkdir -p /scratch/<username>/runs/sedov-taylor
cd /scratch/<username>/runs/sedov-taylor
cp /scratch/<username>/PLUTO-4.4/Test_Problems/HD/Sedov/definitions_04.h definitions.h
cp /scratch/<username>/PLUTO-4.4/Test_Problems/HD/Sedov/pluto_04.ini pluto.ini
cp /scratch/<username>/PLUTO-4.4/Test_Problems/HD/Sedov/init.c .
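The Sedov test directory ships several numbered configurations (the _04 files copied above are one of them); listing it shows what else is available:

ls $PLUTO_DIR/Test_Problems/HD/Sedov/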


Changes for a 3-D run:

In definitions.h :

#define  PHYSICS                        HD
#define  DIMENSIONS                     3
#define  GEOMETRY                       CARTESIAN

In pluto.ini :

[Grid]
X1-grid    1   0.0    256    u    1.0
X2-grid    1   0.0    256    u    1.0
X3-grid    1   0.0    256    u    1.0

[Time]
tstop            1.0

[Boundary]
X3-beg        reflective
X3-end        outflow

[Static Grid Output]
output_dir    ./output/
log_dir       ./output/Log_Files
dbl       -1.0  -1   single_file
dbl.h5     0.05  -1
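Each X*-grid line reads: number of segments, left edge, then (number of zones, spacing type, right edge) per segment, so the settings above give a uniform 256^3 box on [0, 1]^3; the negative interval on the dbl line disables plain .dbl dumps, while dbl.h5 writes an HDF5 snapshot every 0.05 code time units. A quick sanity check of the run directory before compiling (a minimal sketch; adjust if your layout differs):

ls definitions.h pluto.ini init.c
grep -A4 -F "[Grid]" pluto.ini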


4. Add a local_make File to Include the Necessary Directories and Compile the Code

Additions for enabling HDF5 I/O: add the following to the local_make file.

INCLUDE_DIRS += -I/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include

LDFLAGS      += -lhdf5 -lz
CFLAGS       += -DUSE_HDF5
OBJ          += hdf5_io.o
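The makefile that local_make extends is generated by PLUTO's interactive setup script, which is also where you pick the PARM.parallel.defs configuration created in step 2. Assuming the standard PLUTO 4.4 workflow (and that a python interpreter is available), run it once from the run directory before compiling:

cd /scratch/<username>/runs/sedov-taylor
python $PLUTO_DIR/setup.py     # walk through the menus and select PARM.parallel.defs when asked for the makefile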


Compile the code:     make -j32 && make clean
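As the && chaining suggests, make clean here only tidies up the intermediate object files while the pluto executable stays in the run directory; you can confirm it is there with:

ls -lh ./pluto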


5. Submit the SLURM Script

An example script:

#!/bin/bash

#SBATCH --job-name="Sedov"
#SBATCH -N 3                                 # Number of nodes
#SBATCH --ntasks-per-node=48                 # Number of cores per node
#SBATCH --mail-type=END                      # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=<username>@iisc.ac.in    # Where to send mail. Set this to your email address
#SBATCH -p debug
#SBATCH -t 00-00:40:00                       # dd-hh:mm:ss
#SBATCH --output=%x-%j.log
#SBATCH --error=%x-%j.err

export SPACK_ROOT=/scratch/<username>/spack
. $SPACK_ROOT/share/spack/setup-env.sh

spack load intel-oneapi-compilers@2021.4.0
spack load hwloc@2.7.0%intel@2021.4.0
spack load zlib@1.2.11%intel@2021.4.0
spack load gsl@2.7%intel@2021.4.0
spack load gmp@6.2.1%intel@2021.4.0
spack load fftw@3.3.10%intel@2021.4.0/4s24afk
spack load hdf5@1.10.8%intel@2021.4.0/ssgjscn
#spack load intel-mpi@2019.10.317%intel@2021.4.0/6icwzn3

export SCRATCH="/scratch/<username>/"
export PLUTO_DIR="/scratch/<username>/PLUTO-4.4"
export LIBRARY_PATH=$LD_LIBRARY_PATH
export CPATH=/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-2021.4.0/hdf5-1.10.8-ssgjscncg3xcw7ogodzjg3whjdmvx3bg/include:$CPATH

if [ X"$SLURM_STEP_ID" = "X" -a X"$SLURM_PROCID" = "X"0 ]
then
 echo "=========================================="
 echo "Date            = $(date)"
 echo "SLURM_JOB_ID    = $SLURM_JOB_ID"
 echo "Nodes Allocated = $SLURM_JOB_NUM_NODES"
 echo "=========================================="
fi
echo "Working Directory = $(pwd)"

cd $SLURM_SUBMIT_DIR
# The -catalyst arguments enable in-situ visualisation (see the Catalyst page below);
# use PROG="./pluto" for a plain run.
export PROG="./pluto -catalyst 1 density.py"
mkdir -p ./output/Log_Files

#export I_MPI_FALLBACK=disable
#export I_MPI_FABRICS=shm:ofi
mpiexec.hydra -n $SLURM_NTASKS $PROG
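Save the script (called sedov.sbatch below, an illustrative name) and submit it; squeue lets you watch it move through the debug queue:

sbatch sedov.sbatch
squeue -u $USER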


Hurray!!!

(Took ~22 minutes on 144 processors with mpiicc.)
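Once the job finishes, the snapshots and logs land where pluto.ini points them; the exact file names below assume PLUTO's default naming, so adjust if yours differ:

ls ./output/*.dbl.h5
tail -n 20 ./output/Log_Files/*.log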


Related pages:

Setting up the Spack Environment: Build your environment...

Using Catalyst with PLUTO: In-situ visualisation with ParaView Catalyst on your PLUTO runs at Param Pravega