
 

D-Flow Flexible Mesh

D-Flow Flexible Mesh (D-Flow FM) is the new software engine for hydrodynamic simulations on unstructured grids in 1D-2D-3D. In addition to the familiar curvilinear meshes from Delft3D 4, the unstructured grid can consist of triangles, pentagons (and so on) and 1D channel networks, all in a single mesh. It combines proven technology from the hydrodynamic engines of Delft3D 4 and SOBEK 2 and adds flexible administration, resulting in:

  • Easier 1D-2D-3D model coupling and intuitive setup of boundary conditions and meteorological forcing, among others.
  • More flexible 2D gridding in delta regions, river junctions, harbours, intertidal flats and more.
  • High performance by smart use of multicore architectures, and grid computing clusters.
An overview of the current developments can be found here.
 
The D-Flow FM team would be delighted if you would participate in discussions on the generation of meshes, the specification of boundary conditions, the running of computations, and all kinds of other relevant topics. Feel free to share your smart questions and/or brilliant solutions!

 

=======================================================
We have launched a new website (still under construction so expect continuous improvements) and a new forum dedicated to Delft3D Flexible Mesh.

Please follow this link to the new forum: 
/web/delft3dfm/forum

Post your questions, issues, suggestions, difficulties related to our Delft3D Flexible Mesh Suite on the new forum.

=======================================================

** PLEASE TAG YOUR POST! **

 

 

Sub groups
D-Flow Flexible Mesh
DELWAQ
Cohesive sediments & muddy systems

 


Message Boards

Running Delft 3D in Parallel

Jonathan King, modified 4 Years ago.

Running Delft 3D in Parallel

Youngling Posts: 16 Join Date: 9/8/15
The Delft3D-FLOW user manual states that for MPI-based parallel model runs "the domain is split automatically in stripwise partitions" (p. 159, section 6.1.1.2).

Could anyone please explain this process and how it is carried out? For example, how does the model decide where to split the domain, and are these 'strips' in the m or n direction?

Alternatively, could anyone please direct me to the section of the source code where I can find the relevant subroutines?

When I try to run my model in parallel on Linux I get the error ** ERROR Non existing BC at begin row, ROW: 41 42 2 873 1. Could there be something wrong with my BCs?
The location this refers to does not have a BC, and the model runs fine in serial mode on 1 core.

Any advice / help would be appreciated.

Thanks
Jonathan
Jonathan King, modified 4 Years ago.

RE: Running Delft 3D in Parallel

Youngling Posts: 16 Join Date: 9/8/15
I have just found this *.ddb file in the outputs and it looks like it contains information on how the domain is split. Can anyone explain how to interpret it? In this case I'm trying to split an 873x305 domain across 8 processors.

csm_all-001.grd 39 1 39 873 csm_all-002.grd 3 1 3 873
csm_all-002.grd 63 1 63 873 csm_all-003.grd 3 1 3 873
csm_all-003.grd 49 1 49 873 csm_all-004.grd 3 1 3 873
csm_all-004.grd 43 1 43 873 csm_all-005.grd 3 1 3 873
csm_all-005.grd 48 1 48 873 csm_all-006.grd 3 1 3 873
csm_all-006.grd 56 1 56 873 csm_all-007.grd 3 1 3 873
csm_all-007.grd 226 1 226 873 csm_all-008.grd 3 1 3 873
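For what it's worth, each line of the listing above appears to pair an index range on one subdomain grid with the matching range on its neighbour: a grid file name followed by four indices (m1 n1 m2 n2), twice per line. A minimal Python sketch of that reading (the column interpretation is our assumption from this listing, not taken from the manual):

```python
# Parse a Delft3D .ddb coupling line: each line links an index range on one
# subdomain grid to the matching range on its neighbour.
# Assumed format: grid1 m1 n1 m2 n2  grid2 m1 n1 m2 n2

def parse_ddb_line(line):
    parts = line.split()
    left = (parts[0], tuple(int(v) for v in parts[1:5]))
    right = (parts[5], tuple(int(v) for v in parts[6:10]))
    return left, right

line = "csm_all-001.grd 39 1 39 873 csm_all-002.grd 3 1 3 873"
(g1, r1), (g2, r2) = parse_ddb_line(line)
# m1 == m2 while n spans the full 1..873 range, so the coupling runs along
# a whole column of constant m, i.e. the domain was cut into strips in m.
print(g1, r1)   # csm_all-001.grd (39, 1, 39, 873)
print(g2, r2)   # csm_all-002.grd (3, 1, 3, 873)
```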
Jonathan King, modified 4 Years ago.

RE: Running Delft 3D in Parallel

Youngling Posts: 16 Join Date: 9/8/15
A colleague and I have figured out the structure of the .ddb file created when submitting the job to run in parallel.

We think the problem is caused by gaps in the grid at the sub-domain interfaces caused by islands.

We do not want to create another grid with the island gaps substituted by elevated bathymetry. Is there any way to force Delft 3D to accept a manually written .ddb file to tell it where to split the global domain into sub-domains?

We understand this is possible when doing domain decomposition, but we don't want to do this as we are trying to carry out MPI runs on our local HPC system using a single job file.
Qinghua Ye, modified 4 Years ago.

RE: Running Delft 3D in Parallel (Answer)

Jedi Council Member Posts: 612 Join Date: 3/2/11
Hi Jonathan,

Indeed, as you described, the error indicates that the automatic partitioning happened to place a cut on an island in the domain. Unfortunately, it is not currently possible to specify the partition boundaries manually, and the partitioning algorithm is unlikely to be updated in the short term. Switching the M and N directions will not help either, since the algorithm always splits along the larger of the two.

However, there may be a workaround for practical projects: if there are not many islands in your model domain, you might be able to shift the partition boundary positions by adjusting the number of CPUs/hosts you use. For example, if 4 partitions hit the island, try 3, 5 or 6 instead. This can sometimes help.
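This workaround can be illustrated with a small sketch. The helper below splits the larger grid direction into roughly equal strips and reports which cut positions land on an island; `strip_cuts` and the island location are illustrative assumptions mimicking the behaviour described above, not the actual Delft3D partitioner (which also balances active grid points):

```python
# Sketch of a stripwise split along the larger grid direction, showing how
# the interior cut positions move with the number of partitions. Equal-size
# strips are an assumption; take the exact positions as illustrative only.

def strip_cuts(m, n, nparts):
    size = max(m, n)                  # split along the larger direction
    base, extra = divmod(size, nparts)
    cuts, pos = [], 0
    for p in range(nparts):
        pos += base + (1 if p < extra else 0)
        cuts.append(pos)
    return cuts[:-1]                  # interior cut positions only

# A hypothetical island spanning rows 435-437 of an 873x305 grid:
island = range(435, 438)
for nparts in (3, 4, 5, 6, 8):
    cuts = strip_cuts(873, 305, nparts)
    bad = [c for c in cuts if c in island]
    print(nparts, cuts, "hits island" if bad else "ok")
```

With this island, 4 and 8 partitions place a cut on it while 3, 5 and 6 do not, which is the kind of trial-and-error shift the answer suggests.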

Enjoy,

Qinghua
Jonathan King, modified 4 Years ago.

RE: Running Delft 3D in Parallel

Youngling Posts: 16 Join Date: 9/8/15
Hi Qinghua,

Thank you, I ended up settling on splitting it over 6 cores. Although not as fast as I would have liked, it still reduced the job time significantly.

However, I would add that when I used simplified BCs (a single time-invariant water level along all domain boundaries) I was able to run the job on more cores; I tested up to 8. The original BCs consisted of water level time series specified in successive sections, each covering 10 boundary cells. This suggests that the BCs also affect the model partitioning, which I was unaware of.
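One way to pre-check this interaction yourself is to compare candidate cut positions against the boundary-section layout. The sketch below uses 10-cell sections like those described above, with example cut positions; `sections_crossed` and the exact cuts are hypothetical illustrations, not read from any Delft3D file:

```python
# Check whether stripwise cut positions fall strictly inside open-boundary
# sections. The 10-cell section layout and the cut positions below are
# illustrative assumptions, not output from Delft3D.

def sections_crossed(cuts, sections):
    """Return (cut, section) pairs where a partition cut splits a BC section."""
    hits = []
    for lo, hi in sections:
        for c in cuts:
            if lo < c < hi:          # cut strictly inside the section
                hits.append((c, (lo, hi)))
    return hits

# Boundary sections of 10 cells each along an 873-cell edge:
sections = [(s, s + 9) for s in range(1, 873, 10)]
cuts = [219, 437, 655]               # example cut positions for 4 strips
print(sections_crossed(cuts, sections))
```

Because fine-grained sections tile the whole edge, almost any cut lands inside one, which would be consistent with the observation that time-series sections constrain the partitioning more than a single uniform boundary does.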

Thanks
Jonathan