D-Flow Flexible Mesh

D-Flow Flexible Mesh (D-Flow FM) is the new software engine for hydrodynamic simulations on unstructured grids in 1D, 2D and 3D. Alongside the familiar curvilinear meshes from Delft3D 4, an unstructured grid can consist of triangles, pentagons (and other polygons) and 1D channel networks, all in one single mesh. It combines proven technology from the hydrodynamic engines of Delft3D 4 and SOBEK 2 and adds a flexible administration, resulting in:

  • Easier 1D-2D-3D model coupling and intuitive setup of boundary conditions, meteorological forcings, and more.
  • More flexible 2D gridding in delta regions, river junctions, harbours, intertidal flats and more.
  • High performance through smart use of multicore architectures and grid-computing clusters.
An overview of the current developments can be found here.
 
The D-Flow FM team would be delighted if you would participate in discussions on the generation of meshes, the specification of boundary conditions, the running of computations, and all kinds of other relevant topics. Feel free to share your smart questions and/or brilliant solutions!

 

=======================================================
We have launched a new website (still under construction, so expect continuous improvements) and a new forum dedicated to Delft3D Flexible Mesh.

Please follow this link to the new forum: 
/web/delft3dfm/forum

Post your questions, issues, suggestions, and difficulties related to the Delft3D Flexible Mesh Suite on the new forum.

=======================================================

** PLEASE TAG YOUR POST! **

 

 

Sub groups
D-Flow Flexible Mesh
DELWAQ
Cohesive sediments & muddy systems

 


Message Boards

Error with multicore run for Engelund and Hansen transport formula

Gyan Basyal, modified 6 years ago.

Error with multicore run for Engelund and Hansen transport formula

Youngling | Posts: 7 | Join Date: 5/16/12
Hi everyone,

Using the Engelund and Hansen transport formula, I was able to run the model on a single core in both Windows and Linux environments (using MPI). But it throws an error when the number of cores is two or more.

In the Windows environment I get the following output:

MPI process number 001 has host unknown and is running on processor ab
MPI process number 000 has host unknown and is running on processor ab
--------------------------------------------------------------------------------
Deltares, FLOW2D3D Version 6.01.02.000000, Sep 20 2013, 16:54:25
flow2d3d.dll entry Flow2D3D::Run
--------------------------------------------------------------------------------

Part I - Initialisation Time Dep. Data module...
runid : gyan2
Part II - Creating intermediate files...
Part III - Initialisation of the Execution module...
Part IV - Reading complete MD-file...
Part V - Initialisation & checking input...
Part VI - Initialisation & checking second part...
Part VII - Initialisation output...
Part VIII - Start Simulation...

Time to finish 0s, 0.0% completed, time steps left 702720
unable to read the cmd header on the pmi context, generic socket failure, error stack:
MPIDU_Sock_wait(2603): The specified network name is no longer available. (errno 64).
unable to read the cmd header on the pmi context, generic socket failure, error stack:
MPIDU_Sock_wait(2603): The specified network name is no longer available. (errno 64).

job aborted:
rank: node: exit code[: error message]
0: ab: -1073741819: process 0 exited without calling finalize
1: ab-dl: -1073741819: process 1 exited without calling finalize
received suspend command for a pmi context that doesn't exist: unmatched id = 0


On a Linux cluster (Red Hat Enterprise) I am getting the following error:
[mike410:mpi_rank_0][error_sighandler] Caught error: Segmentation fault (signal 11)


I would appreciate your help. Thanks.

Gyan
Qinghua Ye, modified 6 years ago.

RE: Error with multicore run for Engelund and Hansen transport formula

Jedi Council Member | Posts: 612 | Join Date: 3/2/11
Dear Gyan,

It seems to me that the error you showed indicates that the MPI process manager is not running. Run smpd -install in the run directory first.
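
For reference, the usual sequence on Windows with MPICH2 is roughly the following (a sketch: smpd and mpiexec are the standard MPICH2 tools, but the Delft3D executable and configuration-file names below are only an example and differ per version, so check the run scripts shipped with your release):

smpd -install
mpiexec -register
mpiexec -n 2 d_hydro.exe config_d_hydro.xml

The first two commands need to be run only once (with administrator rights); the -n option sets the number of MPI processes.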

However, there is indeed a problem with bedload transport running in parallel. We will solve it soon.

Regards,

Qinghua
Gyan Basyal, modified 6 years ago.

RE: Error with multicore run for Engelund and Hansen transport formula

Youngling | Posts: 7 | Join Date: 5/16/12
Hi Qinghua,

Thanks for your reply. MPI is indeed running: I am able to run a Van Rijn transport problem on multiple cores. In fact, it even runs when I wrongly specify the sediment type as "sand" for the Engelund and Hansen formula. As soon as I switch the sediment type to "bedload", it stops working. Is there anything else I should change when changing the sediment type from sand to bedload?
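
For reference, the sediment block in my .sed file looks roughly like this (the values are illustrative rather than my actual input, and as far as I understand TraFrm = 1 selects the Engelund-Hansen formula, though where your version expects that keyword may differ):

[Sediment]
   Name         = #Sediment1#
   SedTyp       = bedload        switched from: sand
   RhoSol       = 2.65e+03       [kg/m3]  specific density
   SedDia       = 2.0e-04        [m]      median grain size (D50)
   CDryB        = 1.6e+03        [kg/m3]  dry bed density
   IniSedThick  = 0.5            [m]      initial sediment layer thickness
   TraFrm       = 1              transport formula (Engelund-Hansen)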

I have also tested with MPICH2 installed separately on the system (activated MPI using smpd -install and mpiexec -register from the installation directory). No luck yet. I am very much looking forward to the new revision or fix; I hope it comes soon.

Lastly, could you please provide a little more detail on what sort of problem you are currently facing with bedload in parallel mode?

Thanks,
Gyan
Qinghua Ye, modified 6 years ago.

RE: Error with multicore run for Engelund and Hansen transport formula

Jedi Council Member | Posts: 612 | Join Date: 3/2/11
Hi Gyan,

As I said in my email, it seems there are some deallocation problems at the finishing stage when running in parallel with bedload. This is now registered in our issue-tracking system, and we will be sure to solve it.

Regards,

Qinghua
Qinghua Ye, modified 6 years ago.

RE: Error with multicore run for Engelund and Hansen transport formula

Jedi Council Member | Posts: 612 | Join Date: 3/2/11
Dear Gyan,

I am glad to inform you that the bug has been resolved as of Rev. 3962.

In the coming release 4.01.01, bedload transport running in parallel will work correctly.
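
If you build from source, you can pick up the fix before the release; roughly (a sketch assuming the public Deltares OSS subversion repository, whose exact trunk path may differ from your setup):

svn checkout https://svn.oss.deltares.nl/repos/delft3d/trunk delft3d
cd delft3d
svn info

svn info prints the checked-out revision, which should be 3962 or higher.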

Qinghua