D-Flow Flexible Mesh

D-Flow Flexible Mesh (D-Flow FM) is the new software engine for hydrodynamic simulations on unstructured grids in 1D-2D-3D. Alongside the familiar curvilinear meshes from Delft3D 4, the unstructured grid can consist of triangles, quadrilaterals, pentagons and hexagons, as well as 1D channel networks, all in one single mesh. It combines proven technology from the hydrodynamic engines of Delft3D 4 and SOBEK 2 and adds a flexible grid administration, resulting in:

  • Easier 1D-2D-3D model coupling and more intuitive setup of boundary conditions and meteorological forcings, amongst others.
  • More flexible 2D gridding in delta regions, river junctions, harbours, intertidal flats and more.
  • High performance through smart use of multicore architectures and grid computing clusters.
An overview of the current developments can be found here.
 
The D-Flow FM team would be delighted if you would participate in discussions on the generation of meshes, the specification of boundary conditions, the running of computations, and all kinds of other relevant topics. Feel free to share your smart questions and/or brilliant solutions!

 

=======================================================
We have launched a new website (still under construction, so expect continuous improvements) and a new forum dedicated to Delft3D Flexible Mesh.

Please follow this link to the new forum: 
/web/delft3dfm/forum

Post your questions, issues, suggestions and difficulties related to the Delft3D Flexible Mesh Suite on the new forum.

=======================================================

** PLEASE TAG YOUR POST! **

Sub groups
D-Flow Flexible Mesh
DELWAQ
Cohesive sediments & muddy systems

Message Boards

MPI questions

Steven Ayres, modified 6 Years ago.

MPI questions

Questions for the development team:

I have a 30-cell-wide river discharging into a delta grid that is 250 cells wide. The MPI algorithm subdivides the domain along the longest grid direction, which is downriver, so in effect I end up with very long, skinny sub-domain strips in the delta portion of the grid. Is there any way the user could have more influence on the way MPI subdivides the grid? I would think square sub-domains would be the most computationally efficient, so if the modeller could direct the algorithm to subdivide the grid into square blocks, there should be a gain in computation time.
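
(To put rough numbers on that intuition, here is a purely illustrative Fortran sketch, using a made-up square grid and partition count rather than the model above, comparing the total interface length of a strip split with that of a block split:)

    ! Illustrative sketch only: total halo-interface length for a
    ! strip split versus a block split of a hypothetical square grid.
    program strip_vs_block
        implicit none
        integer, parameter :: m = 1000, n = 1000  ! hypothetical grid size
        integer, parameter :: np = 16             ! number of partitions
        integer :: strips, blocks
        ! 1 x 16 strips: 15 internal cuts, each n cells long
        strips = (np - 1) * n
        ! 4 x 4 blocks: 3 cuts in each direction, each m or n cells long
        blocks = 3*n + 3*m
        print *, 'strip interface cells:', strips   ! prints 15000
        print *, 'block interface cells:', blocks   ! prints  6000
    end program strip_vs_block

Fewer interface cells means less halo data to exchange every time step, which is where I would expect the gain.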

Also, are there any plans for the Z-model code to support MPI?

Thanks in advance,
Steve
Adri Mourits, modified 6 Years ago.

RE: MPI questions

Hi Steven,

There are plans to give the user more control over the subdivision into strips. In the meantime, the only way to work around this problem is to change the source code yourself. Please let me know if you want to do this; then I can guide you through it.

I agree that subdivision into blocks is more efficient, but implementing that would be a major effort. It is not planned yet.

The combination of Z-layers and MPI is implemented in a branch (https://svn.oss.deltares.nl/repos/delft3d/branches/research/Deltares/20130801_16494_z-model_parallel_from_tag_2703) and has been tested on a few cases. It is planned to be merged into the trunk this month.

Regards,

Adri
Tobias Rothhardt, modified 6 Years ago.

RE: MPI questions

Hi Adri,

we have the same problem with our large models (I posted about it a couple of months ago). Since we need to do more simulations in the shortest possible time, I would like to try to optimize the domains. One domain is at least 4 times larger than the others, so if you could lead me through the process of adapting the code, I would give it a try. Sorry, I discovered this post only today.

Greetings
Tobias
Adri Mourits, modified 6 Years ago.

RE: MPI questions

Hi Tobias,

In subroutine https://svn.oss.deltares.nl/repos/delft3d/trunk/src/engines_gpl/flow2d3d/packages/data/src/parallel_mpi/dfbladm.F90, lines 170 to 177 currently read:
    nfg = partition_dims(infg, inode)
    nlg = partition_dims(inlg, inode)
    mfg = partition_dims(imfg, inode)
    mlg = partition_dims(imlg, inode)
    !
    if (inode == master) then
       !
       ! Write the related ddb file. Needed as input by DDCOUPLE to prepare a WAQ calculation


If you want to change the sizes of the partitions for a specific model, you can add and modify the following lines after setting nfg/nlg/mfg/mlg and before the "if (inode == master)" part in the code above:
    nfg = partition_dims(infg, inode)
    nlg = partition_dims(inlg, inode)
    mfg = partition_dims(imfg, inode)
    mlg = partition_dims(imlg, inode)
    !
    select case(inode)
        case (1)
            nfg = 1
            nlg = nmax
            mfg = 1
            mlg = 10
        case (2)
            nfg = 1
            nlg = nmax
            mfg = 11
            mlg = 20
        case (3)
            nfg = 1
            nlg = nmax
            mfg = 21
            mlg = 30
        case default
    end select
    write(lundia,*) inode, nfg, nlg, mfg, mlg
    !
    if (inode == master) then
       !
       ! Write the related ddb file. Needed as input by DDCOUPLE to prepare a WAQ calculation

Start by adding just the write statement and check that the line appears in all the tri-diag files with the expected integers. Then add the "select case" part, with a "case" statement for each partition you want to alter. If "m" is the splitting direction, you only have to set mfg and mlg; if "n" is the splitting direction, you only have to set nfg and nlg. Double-check in the tri-diag files that the partitions have the expected sizes.
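
If you prefer not to hard-code the bounds for every partition, the "select case" block can be replaced by a computed split. The following is an untested sketch that assumes mmax holds the global m-size and nproc the number of partitions (double-check the actual variable names in dfbladm.F90):

    ! Untested sketch: divide the m-direction into nproc near-equal slices
    ! instead of hard-coding each partition's bounds.
    ! Assumes mmax is the global m-size and nproc the partition count.
    nfg = 1
    nlg = nmax
    mfg = (inode - 1) * mmax / nproc + 1
    mlg = inode * mmax / nproc
    write(lundia,*) inode, nfg, nlg, mfg, mlg

The integer division keeps the slices contiguous: partition inode covers columns (inode - 1) * mmax / nproc + 1 up to inode * mmax / nproc.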

Regards,

Adri
xia wei, modified 3 Years ago.

RE: MPI questions

Hi Adri,

Is there a version of the Delft3D open source code without the MPI code, or is there a way to disable the MPI functionality when compiling the Delft3D open source code? The MPI code inside the flow model gets mixed up with MPI code I have written myself, so I do not want the MPI code inside the Delft3D open source.

Thanks,
Xia Wei.
Qinghua Ye, modified 3 Years ago.

RE: MPI questions

Hi Wei,

On the Linux platform, when you compile the Linux version, you can add a keyword: ./build.sh -intel14 -64bit -nompi; the MPI functions inside the code might then be ignored. But I am not sure whether your own MPI will work as the alternative.
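
For context, MPI calls in a code base like this are typically wrapped in preprocessor guards, so that a no-MPI build compiles serial stubs instead. The sketch below only illustrates that common pattern; the macro name HAVE_MPI and the routine are assumptions for illustration, not taken from the Delft3D sources:

    ! Sketch of a typical guard pattern (the macro name HAVE_MPI is assumed):
    subroutine sync_partitions()
    #ifdef HAVE_MPI
        use mpi
    #endif
        implicit none
    #ifdef HAVE_MPI
        integer :: ierr
        ! parallel build: wait until all partitions reach this point
        call MPI_Barrier(MPI_COMM_WORLD, ierr)
    #else
        ! serial build: single partition, nothing to synchronize
    #endif
    end subroutine sync_partitions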

Greetings,

Qinghua
xia wei, modified 3 Years ago.

RE: MPI questions

Hi Qinghua,

Thanks for your suggestion, I will try that and give you feedback.

Best,
Xia Wei.
xia wei, modified 3 Years ago.

RE: MPI questions

Hi Qinghua,

When I tried to build the open source code with the command "./build.sh -intel14 -64bit -nompi", there was no handling for the "-nompi" argument. Should I add some settings to the build.sh file? And it should also work with the GNU compiler if I do not use the Intel compiler, right?

Best,
Xia Wei.
Adri Mourits, modified 3 Years ago.

RE: MPI questions

Hi Xia Wei,

You can try to do the configure step with:
--with-mpi=no
But the chance that it works is very small; it has never been tested.

Regards,

Adri