Running Delft3D on a cluster
Ben Williams, modified 9 Years ago.
Running Delft3D on a cluster
Jedi Knight Posts: 114 Join Date: 3/23/11
Hello,
I am sure that someone has already posted on a similar subject. However I thought I would ask anyway.
Is there any documentation for running Delft3D on a Linux cluster? And what options are available? Is it the equivalent of using a desktop (everything runs on one node), or is it possible to use multiple nodes: one node for the wave model (using multiple cores) and another node for the hydrodynamics and morphology calculation (multiple cores)? Is the speedup on a cluster (either one node with 6 cores, or two nodes with 6 cores each) significantly greater than using the equivalent CPU on a 'normal' workstation?
I appreciate that some of this is possibly advice that Deltares would provide to a paying customer, however I would like to see what is freely available 'out there'.
Best regards,
Ben
Adri Mourits, modified 9 Years ago.
RE: Running Delft3D on a cluster
Yoda Posts: 1221 Join Date: 1/3/11
Hi Ben,
There are a lot of questions about running in parallel. I have collected the information at http://oss.deltares.nl/web/opendelft3d/faq-running-a-calculation#parallel.
Please let me know if any information is missing; I will then add it there.
Regards,
Adri
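For readers looking for a concrete starting point: on Linux, the parallel approach on that FAQ page comes down to launching the FLOW engine through an MPI launcher. Below is a minimal sketch, assuming a typical open-source installation with a d_hydro executable, a config_d_hydro.xml input file produced by the standard run scripts, and mpirun on the PATH; these names and paths are illustrative assumptions, not taken from the FAQ itself.

```python
#!/usr/bin/env python3
"""Minimal sketch: start a parallel Delft3D-FLOW run on one Linux node.

Assumptions (illustrative, not from the FAQ page): the open-source build
provides a 'd_hydro' executable driven by a 'config_d_hydro.xml' file, and
'mpirun' is available on the PATH. Adjust names and paths to your setup.
"""
import subprocess

N_RANKS = 4                          # number of MPI processes (domain partitions)
CONFIG_FILE = "config_d_hydro.xml"   # run configuration written by the run scripts

# Equivalent of: mpirun -np 4 d_hydro config_d_hydro.xml
cmd = ["mpirun", "-np", str(N_RANKS), "d_hydro", CONFIG_FILE]
subprocess.run(cmd, check=True)
```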
Tobias Rothhardt, modified 9 Years ago.
RE: Running Delft3D on a cluster
Youngling Posts: 17 Join Date: 7/25/11
Hi Ben,
After quite some testing we now have a cluster of eight i7-2600K systems partly running. Our test case (hydrodynamics + morphology at the moment) runs across all systems using 4 processes on each system. I did not try to split the hydrodynamic and morphology calculations (if that is possible at all, which I can't imagine); I only controlled how many jobs were started on each system and in which order. On the "master" system I always started one job fewer than on the other systems, since I noticed that one process there uses about twice as much memory as the other jobs.
As far as I can see, the performance loss due to the MPI overhead is low, at least when using up to 4 systems in parallel. But I guess it will depend on your simulation as well. If you need more information, feel free to ask.
Greetings
Toby
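To make Tobias's process layout concrete: one MPI rank fewer on the master node, a full set of ranks on each worker node. The sketch below writes an MPI machinefile and launches the run; the hostnames, rank counts, and the d_hydro/config_d_hydro.xml names are assumptions for illustration, and the 'host:count' machinefile syntax shown is MPICH-style (Open MPI uses 'host slots=n' in a hostfile instead).

```python
#!/usr/bin/env python3
"""Sketch of Tobias's layout: 4 MPI ranks per worker node, one rank fewer on
the master node, whose rank was observed to use roughly twice the memory.

Hostnames, counts, and executable/config names are illustrative assumptions;
the machinefile syntax ('host:count') is MPICH-style and may differ for
other MPI implementations.
"""
import subprocess

MASTER = "node01"
WORKERS = ["node02", "node03", "node04"]
RANKS_PER_NODE = 4

# Master gets one rank fewer than the workers.
layout = [(MASTER, RANKS_PER_NODE - 1)] + [(w, RANKS_PER_NODE) for w in WORKERS]

# Write the machinefile consumed by mpirun.
with open("machinefile", "w") as f:
    for host, count in layout:
        f.write(f"{host}:{count}\n")

total_ranks = sum(count for _, count in layout)

# Equivalent of: mpirun -np <total> -machinefile machinefile d_hydro config_d_hydro.xml
cmd = ["mpirun", "-np", str(total_ranks),
       "-machinefile", "machinefile",
       "d_hydro", "config_d_hydro.xml"]
subprocess.run(cmd, check=True)
```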