
D-Flow Flexible Mesh

D-Flow Flexible Mesh (D-Flow FM) is the new software engine for hydrodynamic simulations on unstructured grids in 1D-2D-3D. Alongside the familiar curvilinear meshes from Delft3D 4, the unstructured grid can combine triangles, pentagons and other cell shapes with 1D channel networks, all in a single mesh. It combines proven technology from the hydrodynamic engines of Delft3D 4 and SOBEK 2 and adds flexible administration, resulting in:

  • Easier 1D-2D-3D model coupling and intuitive setup of boundary conditions and meteorological forcing, among other things.
  • More flexible 2D gridding in delta regions, river junctions, harbours, intertidal flats and more.
  • High performance through smart use of multicore architectures and grid computing clusters.
An overview of the current developments can be found here.
 
The D-Flow FM team would be delighted if you participated in discussions on mesh generation, the specification of boundary conditions, running computations, and all kinds of other relevant topics. Feel free to share your smart questions and/or brilliant solutions!

 

=======================================================
We have launched a new website (still under construction, so expect continuous improvements) and a new forum dedicated to Delft3D Flexible Mesh.

Please follow this link to the new forum: 
/web/delft3dfm/forum

Post your questions, issues, suggestions and difficulties related to the Delft3D Flexible Mesh Suite on the new forum.

=======================================================

** PLEASE TAG YOUR POST! **

 

 

Sub groups
D-Flow Flexible Mesh
DELWAQ
Cohesive sediments & muddy systems

 


Message Boards

MPI and automated division of domain

Simon Wulp, modified 5 Years ago.
Dear Members,

I am currently trying to run FLOW on a fairly complex domain that includes dry points and (river) discharge points. I have a Linux cluster at my disposal with a copious number of cores. I run with MPI on 4 to 8 cores, which is our optimum for computation speed. Delft3D automatically cuts the domain into the corresponding number of pieces along the M direction, and the model keeps crashing because the cuts between subdomains occur at unfortunate locations.

My question is whether there is an (easy) option to make Delft3D cut the model domain along the N direction, which in my case would give a more fortunate subdivision and (hopefully) a more stable MPI simulation. My first thought was to rotate the administration grid, but I would prefer a less time-consuming option.

Thank you in advance.
Qinghua Ye, modified 5 Years ago.

RE: MPI and automated division of domain (Answer)

Hi Simon,

I would also suggest you start by rotating the grid administration. We had planned to implement other options, but these will not be available in the short term.

Hope this helps a bit,

Cheers,

Qinghua
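
For anyone considering the same workaround: the snippet below is a minimal, unofficial sketch of what rotating the grid administration amounts to, namely swapping the M and N directions of the curvilinear grid so that the automatic strip-wise partitioning along M effectively cuts along the original N direction. The array names, shapes and the reading/writing of the grid and depth files are assumptions rather than anything provided by Delft3D itself, and any input that carries (M, N) indices (dry points, discharge locations, open boundaries) would have to be re-indexed in the same way.

import numpy as np

def rotate_administration(x, y, dep):
    """Return copies of the grid arrays with the M and N directions swapped.

    x, y : 2-D arrays of grid-point coordinates, shape (nmax, mmax)
    dep  : 2-D array of depth values on the same administration
    """
    # Transposing exchanges the roles of the M and N indices. Reversing one
    # axis afterwards avoids mirroring the grid (a plain transpose is a
    # reflection, transpose plus flip is a rotation), which matters for
    # Delft3D's staggered administration.
    x_rot = x.T[::-1, :].copy()
    y_rot = y.T[::-1, :].copy()
    dep_rot = dep.T[::-1, :].copy()
    return x_rot, y_rot, dep_rot

# Example with a small synthetic grid that is long in M and short in N:
if __name__ == "__main__":
    nmax, mmax = 4, 10
    x, y = np.meshgrid(np.arange(mmax, dtype=float), np.arange(nmax, dtype=float))
    dep = np.full((nmax, mmax), 5.0)
    x_rot, y_rot, dep_rot = rotate_administration(x, y, dep)
    print(x.shape, "->", x_rot.shape)   # (4, 10) -> (10, 4)

After the rotated arrays are written back to the grid and depth files, the automatic decomposition in strips along M corresponds to cuts along the original N direction.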