
 

D-Flow Flexible Mesh

D-Flow Flexible Mesh (D-Flow FM) is the new software engine for hydrodynamic simulations on unstructured grids in 1D, 2D and 3D. Alongside the familiar curvilinear meshes from Delft3D 4, an unstructured grid can consist of triangles, pentagons and other polygonal cells, together with 1D channel networks, all in one single mesh. It combines proven technology from the hydrodynamic engines of Delft3D 4 and SOBEK 2 and adds a flexible grid administration, resulting in:

  • Easier 1D-2D-3D model coupling and a more intuitive setup of boundary conditions and meteorological forcings, amongst others.
  • More flexible 2D gridding in delta regions, river junctions, harbours, intertidal flats and more.
  • High performance by smart use of multicore architectures and grid computing clusters.
An overview of the current developments can be found here.
 
The D-Flow FM team would be delighted if you would participate in discussions on the generation of meshes, the specification of boundary conditions, the running of computations, and all kinds of other relevant topics. Feel free to share your smart questions and brilliant solutions!

 

=======================================================
We have launched a new website (still under construction, so expect continuous improvements) and a new forum dedicated to Delft3D Flexible Mesh.

Please follow this link to the new forum: 
/web/delft3dfm/forum

Post your questions, issues, suggestions and difficulties related to our Delft3D Flexible Mesh Suite on the new forum.

=======================================================

** PLEASE TAG YOUR POST! **

 

 

Sub groups

  • D-Flow Flexible Mesh
  • DELWAQ
  • Cohesive sediments & muddy systems

 


Message Boards

Compiling and running D3D-4 FLOW-WAVE in Linux with MPI on multiple nodes

Marcio Boechat Albernaz, modified 1 Year ago.

Youngling Posts: 1 Join Date: 3/8/12

Dear D3D users and developers,

I have been working on compiling and running D3D-4 (tag7545, initially) on Cartesius, the Dutch national supercomputer at SURFsara. Together with some colleagues and the Cartesius support staff we managed to successfully compile and run the FLOW and WAVE example simulations multi-threaded. However, when attempting to run example 3 with FLOW-WAVE coupled online on multiple nodes, only the FLOW module succeeds; WAVE only works when constrained to one node (OpenMP). When MPI is called the simulation does not progress. I get the following echo:

 

"  Write SWAN depth file

  Write SWAN velocity file

  Write SWAN wind file

  Deallocate input fields

  Write SWAN input

<<Run SWAN...

 

swan.sh is /home/boech001/d3d_src/tag7545/compiled/lnx64/swan/scripts/swan.sh

Using swan executable /home/boech001/d3d_src/tag7545/compiled/lnx64/swan/bin/swan_4072ABCDE_del_l64_i11_mpi.exe

SWAN batchfile executed for Delft3D

Performing computation for: r17.swn

Start of parallel computation with MPICH2 using 24 slots (NB. We are not using MPICH2 - This is just the echo)

srun: Job step creation temporarily disabled, retrying

srun: Job step aborted: Waiting up to 32 seconds for job step to finish."

 

We found this document by John Donners (together with Menno Genseberger from Deltares): http://www.prace-ri.eu/IMG/pdf/wp177.pdf

According to John, the work presents some improvements, including the modifications 'SWAN with MPI' and 'coupling through MPI'. Unfortunately, however, these improvements have not been picked up by Deltares (and may not be easily ported either).


My simulations are submitted through sbatch/srun (the SLURM job manager), as required by the Cartesius setup. Below I list the modules and software used during the compilation. I am also attaching (1) the script loading all modules, (2) the swan.sh that was modified with help from Cartesius support, and (3) a slurm.out file with the screen output of one failed attempt.

We are aware of the difficulties of combining different compilers, modules and MPI implementations; however, I (actually, we) could not derive a clear answer for this specific case from the forum or the Deltares compilation guide. Is this a common and known general limitation, or am I making a mistake somewhere between compilation and running?
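
For reference, the job is submitted with an sbatch script roughly along these lines (a simplified sketch only: the node/task counts, partition name, mdw-file name and the exact Delft3D start commands are placeholders, not the literal Cartesius script):

#!/bin/bash
#SBATCH --nodes=2                 # placeholder: number of Cartesius nodes
#SBATCH --ntasks=24               # placeholder: total number of MPI slots
#SBATCH --time=01:00:00
#SBATCH --partition=normal        # placeholder partition name

# load the same modules that were used for the compilation (listed below)
module purge
module load surfsara eb intel/2017b
module load netCDF/4.4.1.1-intel-2017b netCDF-Fortran/4.4.4-intel-2017b

# make the compiled Delft3D binaries and scripts available (placeholder path)
export D3D_HOME=$HOME/d3d_src/tag7545/compiled/lnx64
export PATH=$D3D_HOME/bin:$PATH

# online FLOW-WAVE run: start WAVE alongside FLOW; WAVE then calls swan.sh
# (illustrative start commands only)
wave.exe example3.mdw 1 &
srun d_hydro.exe config_d_hydro.xml
wait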

 

Thanks in advance,

Regards

Márcio

 

# load all required modules
module purge
module load surfsara
module load eb
module load intel/2017b
module load netCDF/4.4.1.1-intel-2017b
module load netCDF-Fortran/4.4.4-intel-2017b

# set the correct variables
export FC=mpiifort
export F77=mpiifort
export CC=mpiicc
export CXX=mpiicpc
export MPICXX=mpiicpc
export MPICC=mpiicc
export MPIFC=mpiifort
export MPIF77=mpiifort

 

If we run 'module list', we get the following:

Currently Loaded Modulefiles:
 1) bull                          8) icc/2017.4.196-GCC-6.4.0-2.28                       15) zlib/1.2.11-GCCcore-6.4.0
 2) surfsara                      9) ifort/2017.4.196-GCC-6.4.0-2.28                     16) Szip/2.1.1-GCCcore-6.4.0
 3) EasyBuild/3.8.0              10) iccifort/2017.4.196-GCC-6.4.0-2.28                  17) HDF5/1.10.1-intel-2017b
 4) compilerwrappers             11) impi/2017.3.196-iccifort-2017.4.196-GCC-6.4.0-2.28  18) cURL/7.56.1-GCCcore-6.4.0
 5) eb/3.8.0(default)            12) iimpi/2017b                                         19) netCDF/4.4.1.1-intel-2017b
 6) GCCcore/6.4.0(default)       13) imkl/2017.3.196-iimpi-2017b                         20) netCDF-Fortran/4.4.4-intel-2017b
 7) binutils/2.28-GCCcore-6.4.0  14) intel/2017b
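
As an additional sanity check of this environment (assuming the Intel MPI wrappers are on the PATH after loading intel/2017b), the wrappers can be asked what they actually wrap:

# print the underlying compiler and MPI link line used by the Intel wrappers
mpiifort -show
mpiicc -show

# check which MPI launcher and srun are picked up first in the PATH
which mpirun mpiexec srun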

 

 

Adri Mourits, modified 1 Year ago.

RE: Compiling and running D3D-4 FLOW-WAVE in Linux with MPI on multiple nodes

Yoda Posts: 1224 Join Date: 1/3/11

Hi Marcio,

This is a limitation of the MPICH library. When you compile Delft3D with OpenMPI you will be able to run both FLOW and SWAN in parallel on multiple nodes. Some remarks:

1. Use OpenMPI version 2.1.5 or older when compiling D-Flow FM; the newer versions are not compatible with the PETSc library. When compiling Delft3D-FLOW this should not be an issue, because FLOW does not use PETSc.

2. You don't have to recompile SWAN. It is enough that FLOW uses OpenMPI.

3. You have to modify the script "swan.sh", which is called by WAVE to execute a SWAN computation (a rough sketch of both edits is given below this list):

Line 49: enable "testpar=$NHOSTS" by removing the #-sign at the start of the line; otherwise the OpenMP version of SWAN will always be used.

Line 176: be sure that the MPICH variant of mpirun is used. Maybe some module load statements are needed here.

4. I got it working with the same number of partitions, identically spread over the nodes, for both FLOW and SWAN. I didn't test with different numbers of partitions.
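
To illustrate remark 3, the two edits in swan.sh could look roughly like this (the surrounding context, module name and mpirun invocation are illustrative only, not the literal script content):

# --- swan.sh, around line 49 ---
# remove the leading #-sign so the parallel (MPI) SWAN is selected
# instead of the OpenMP version:
testpar=$NHOSTS

# --- swan.sh, around line 176 ---
# make sure the mpirun used here is the MPICH variant that matches the
# precompiled SWAN executable; a module load may be needed first, e.g.:
#   module load mpich              # illustrative module name
#   mpirun -np $NHOSTS $swanexec   # illustrative invocation only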

Regards,

Adri