D-Flow Flexible Mesh

D-Flow Flexible Mesh (D-Flow FM) is the new software engine for hydrodynamic simulations on unstructured grids in 1D, 2D and 3D. Alongside the familiar curvilinear meshes from Delft3D 4, an unstructured grid can consist of triangles, pentagons (and other polygons) and 1D channel networks, all in one single mesh. It combines proven technology from the hydrodynamic engines of Delft3D 4 and SOBEK 2 and adds a flexible grid administration, resulting in:

  • Easier 1D-2D-3D model coupling, intuitive setup of boundary conditions and meteorological forcings (amongst others).
  • More flexible 2D gridding in delta regions, river junctions, harbours, intertidal flats and more.
  • High performance through smart use of multicore architectures and grid-computing clusters.
An overview of the current developments can be found here.
 
The D-Flow FM team would be delighted if you would participate in discussions on the generation of meshes, the specification of boundary conditions, the running of computations, and all kinds of other relevant topics. Feel free to share your smart questions and brilliant solutions! 

 

=======================================================
We have launched a new website (still under construction, so expect continuous improvements) and a new forum dedicated to Delft3D Flexible Mesh.

Please follow this link to the new forum: 
/web/delft3dfm/forum

Post your questions, issues, suggestions, and difficulties related to our Delft3D Flexible Mesh Suite on the new forum.

=======================================================

** PLEASE TAG YOUR POST! **
Message Boards

RE: FLOW compiled with Intel MPI won't run in more than 1 core.

João Lencart e Silva, modified 7 years ago.

FLOW compiled with Intel MPI won't run in more than 1 core.

Padawan · Posts: 70 · Join Date: 3/30/11
I moved this from an answer to another post, since it was on a different subject from that post's original one.

I managed to compile FLOW with the Intel cluster suite 2012 + Intel MPI.

It compiles and runs fine on 1 core.

However, when I assign more cores to the run with:
mpirun -n 3  $exedir/deltares_hydro.exe $argfile


d_hydro.exe runs on only 1 core and returns the following error for each of the other two cores:

Part IV   - Reading complete MD-file...                     

ERROR: forrtl: No such file or directory
forrtl: severe (28): CLOSE error, unit 33, file "Unknown"
Image              PC                Routine            Line        Source            
libirc.so          00002B3BE94C89AA  Unknown               Unknown  Unknown
libirc.so          00002B3BE94C74A6  Unknown               Unknown  Unknown
libifcore.so.5     00002B3BE88027AC  Unknown               Unknown  Unknown
libifcore.so.5     00002B3BE8775E42  Unknown               Unknown  Unknown
libifcore.so.5     00002B3BE87752E6  Unknown               Unknown  Unknown
libifcore.so.5     00002B3BE8768E36  Unknown               Unknown  Unknown
libflow2d3d.so     00002B3BE51F56AB  tdatom_                   821  tdatom.f90
libflow2d3d.so     00002B3BE4E6E071  tdatmain_                  82  tdatmain.F90
libflow2d3d.so     00002B3BE4E83389  mod_trisim_mp_tri         247  trisim_mod.F90
libflow2d3d.so     00002B3BE4E827FA  trisim_                    96  trisim.F90
libflow2d3d.so     00002B3BE4DEC433  Unknown               Unknown  Unknown
d_hydro.exe        00000000004018E0  Unknown               Unknown  Unknown
d_hydro.exe        0000000000403481  Unknown               Unknown  Unknown
libc.so.6          000000383961D994  Unknown               Unknown  Unknown
d_hydro.exe        0000000000401649  Unknown               Unknown  Unknown
       - Starting "d_hydro.exe" may give more information:
         - Run "deltares_hydro.exe <INI-inputfile> -keepXML".
         - Run "d_hydro.exe TMP_config_flow2d3d_<processId>.xml".


When I run ldd on libflow2d3d.so, MPI appears to be linked:

$ ldd  /home/jdl/software/delft3d/5.00.00.1234/bin/lnx/flow2d3d/bin/libflow2d3d.so
    linux-vdso.so.1 =>  (0x00007fff11bfd000)
    libmpi.so.4 => /opt/intel/impi/4.0.3.008/intel64/lib/libmpi.so.4 (0x00002b4ccd8f3000)
    libifport.so.5 => /opt/intel/composer_xe_2011_sp1.6.233/compiler/lib/intel64/libifport.so.5 (0x00002b4ccddbd000)
    libifcore.so.5 => /opt/intel/composer_xe_2011_sp1.6.233/compiler/lib/intel64/libifcore.so.5 (0x00002b4ccdef2000)
    libimf.so => /opt/intel/composer_xe_2011_sp1.6.233/compiler/lib/intel64/libimf.so (0x00002b4cce135000)
    libsvml.so => /opt/intel/composer_xe_2011_sp1.6.233/compiler/lib/intel64/libsvml.so (0x00002b4cce500000)
    libirc.so => /opt/intel/composer_xe_2011_sp1.6.233/compiler/lib/intel64/libirc.so (0x00002b4ccec73000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b4cceddb000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00002b4cceff6000)
    libexpat.so.0 => /lib64/libexpat.so.0 (0x00002b4ccf1fa000)
    libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002b4ccf41e000)
    libm.so.6 => /lib64/libm.so.6 (0x00002b4ccf71e000)
    libc.so.6 => /lib64/libc.so.6 (0x00002b4ccf9a1000)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b4ccfcf9000)
    librt.so.1 => /lib64/librt.so.1 (0x00002b4ccff07000)
    libintlc.so.5 => /opt/intel/composer_xe_2011_sp1.6.233/compiler/lib/intel64/libintlc.so.5 (0x00002b4cd0110000)
    /lib64/ld-linux-x86-64.so.2 (0x0000003839200000)
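
The ldd output above shows libmpi.so.4 resolving to the Intel MPI install tree. One quick sanity check (my suggestion, not from the post) is to confirm that the mpirun being used to launch the job comes from the same MPI install that the library was linked against, since launching with one MPI's mpirun while linked against another is a common cause of ranks misbehaving. A minimal sketch, assuming the Intel-style `<prefix>/lib` and `<prefix>/bin` layout shown above:

```shell
# same_mpi_prefix: compare the install prefix of the libmpi.so a shared
# library is linked against ($1, from `ldd`) with the install prefix of
# the mpirun launcher ($2, from `command -v mpirun`).
same_mpi_prefix() {
    lib_prefix=${1%%/lib*}   # strip everything from the first /lib
    run_prefix=${2%%/bin*}   # strip everything from the first /bin
    [ "$lib_prefix" = "$run_prefix" ] && echo match || echo mismatch
}

# Example with the paths from the post:
same_mpi_prefix /opt/intel/impi/4.0.3.008/intel64/lib/libmpi.so.4 \
                /opt/intel/impi/4.0.3.008/intel64/bin/mpirun
```

If this reports a mismatch, the job is being started by a launcher from a different MPI than the one the code was built with.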


The full specs of the cluster and the software versions follow:
504-core InfiniBand-interconnected cluster of dual-socket blade servers.
Each blade has two Intel Xeon 6/8-core 54xx/55xx processors.

Centos 5.7

Intel cluster suite 2012:
ifort and icc 12.1
Intel MPI 4.0.3.008
Intel MKL 10.3


Any clues on what might be happening?

João.
Christopher Esposito, modified 7 years ago.

RE: FLOW compiled with Intel MPI won't run in more than 1 core.

Padawan · Posts: 35 · Join Date: 10/16/12
We are having the same problem on our system. Has anyone found a solution to this issue?

-Chris
Qinghua Ye, modified 7 years ago.

RE: FLOW compiled with Intel MPI won't run in more than 1 core.

Jedi Council Member · Posts: 612 · Join Date: 3/2/11
Hi guys,

Could you also list which version of the code you are compiling and using?

Thanks,

Qinghua
João Lencart e Silva, modified 7 years ago.

RE: FLOW compiled with Intel MPI won't run in more than 1 core.

Padawan · Posts: 70 · Join Date: 3/30/11
Hi Qinghua,

The version I tried compiling was 5.00.00.1234.

Regards,

João.
Bert Jagers, modified 7 years ago.

RE: FLOW compiled with Intel MPI won't run in more than 1 core. (Answer)

Jedi Knight · Posts: 201 · Join Date: 12/22/10
Hi João,

You are using Intel MPI, whereas the Delft3D code was developed and tested with the MPICH2 MPI implementation.
Most of MPI is standardized, but the launchers' environment variables are not. I know that for OpenMPI you need to change the following line in dfinit.f90:

call get_environment_variable('PMI_RANK', rankstr, len)

Intel MPI probably also doesn't set PMI_RANK but some other environment variable. This check is how Delft3D verifies that the engine is running under MPI; if it returns an empty string, parallel mode is not activated.

Success,

Bert