
DELWAQ

DELWAQ is the engine of the D-Water Quality and D-Ecology programmes of the Delft3D suite. It is based on a rich library from which relevant substances and processes can be selected to quickly put water and sediment quality models together.

The processes library covers many aspects of water quality and ecology, from basic tracers, dissolved oxygen, nutrients, organic matter, inorganic suspended matter, heavy metals, bacteria and organic micro-pollutants, to complex algae and macrophyte dynamics. High performance solvers enable the simulation of long periods, often required to capture the full cycles of the processes being modelled.

The finite volume approach underlying DELWAQ allows it to be coupled to both the structured grid hydrodynamics of the current Delft3D-FLOW engine and the upcoming D-Flow Flexible Mesh engine (1D-2D-3D) of the Delft3D Flexible Mesh Suite (or even other models such as TELEMAC).

'DELWAQ in open source' is our invitation to all leading experts to collaborate in further development and research in the field of water quality, ecology and morphology using Delft3D. Feel free to post your DELWAQ-related questions or comments in this dedicated forum space. If you are new to DELWAQ, the tutorial (in the user manual) is a good place to start. A list of DELWAQ-related publications is available here.

** PLEASE TAG YOUR POST! **

Message Boards

Running Delft 3D in Parallel

Jonathan King, modified 4 Years ago.

Running Delft 3D in Parallel

The Delft3D-FLOW user manual states that for MPI-based parallel model runs "the domain is split automatically in stripwise partitions" (p. 159, section 6.1.1.2).

Could anyone explain this process and how it is carried out? For example, how does the model decide where to split the domain, and are these 'strips' in the m or the n direction?

Alternatively, could anyone direct me to the section of the source code where I can find the relevant subroutines?

When I try to run my model on Linux in parallel I get the error ** ERROR Non existing BC at begin row, ROW: 41 42 2 873 1. Could there be something wrong with my BCs?
The location this refers to does not have a BC, and the model runs fine in serial mode on one core.

Any advice / help would be appreciated.

Thanks
Jonathan
Jonathan King, modified 4 Years ago.

RE: Running Delft 3D in Parallel

I have just found this *.ddb file in the outputs, and it looks like it contains information on how the domain is split. Can anyone explain how to interpret it? In this case I'm trying to split an 873x305 domain across 8 processors.

csm_all-001.grd 39 1 39 873 csm_all-002.grd 3 1 3 873
csm_all-002.grd 63 1 63 873 csm_all-003.grd 3 1 3 873
csm_all-003.grd 49 1 49 873 csm_all-004.grd 3 1 3 873
csm_all-004.grd 43 1 43 873 csm_all-005.grd 3 1 3 873
csm_all-005.grd 48 1 48 873 csm_all-006.grd 3 1 3 873
csm_all-006.grd 56 1 56 873 csm_all-007.grd 3 1 3 873
csm_all-007.grd 226 1 226 873 csm_all-008.grd 3 1 3 873
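
For anyone else trying to decode a file like this, here is a minimal Python sketch of one way to read it. It assumes each line couples two subdomain grids in the layout grid1 m_begin n_begin m_end n_end grid2 m_begin n_begin m_end n_end (the usual Delft3D domain-decomposition coupling format, but verify against your own file); the file name below is a placeholder.

# Hedged sketch: assumes the 10-column .ddb coupling layout described above.
def parse_ddb(path):
    """Yield one subdomain coupling per line as (grid1, box1, grid2, box2)."""
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 10:
                continue  # skip blank or unexpected lines
            box1 = tuple(int(v) for v in parts[1:5])   # (m_begin, n_begin, m_end, n_end)
            box2 = tuple(int(v) for v in parts[6:10])
            yield parts[0], box1, parts[5], box2

for g1, b1, g2, b2 in parse_ddb("csm_all.ddb"):        # placeholder file name
    # A constant m index on both sides means the interface runs along n,
    # i.e. the domain was cut into strips stacked in the m direction.
    direction = "m" if b1[0] == b1[2] else "n"
    print(g1, b1, "<->", g2, b2, "| cut at constant", direction)

Under that assumed layout, every line in the listing above has m_begin == m_end with n spanning 1-873, so the interfaces run the full length of the 873-cell index and the domain is cut into strips along the other index.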
Jonathan King, modified 4 Years ago.

RE: Running Delft 3D in Parallel

A colleague and I have figured out the structure of the .ddb file created when submitting the job to run in parallel.

We think the problem is caused by gaps in the grid at the sub-domain interfaces caused by islands.

We do not want to create another grid with the island gaps substituted by elevated bathymetry. Is there any way to force Delft 3D to accept a manually written .ddb file to tell it where to split the global domain into sub-domains?

We understand this is possible when doing domain decomposition, but we don't want to do this as we are trying to carry out MPI runs on our local HPC system using a single job file.
Qinghua Ye, modified 4 Years ago.

RE: Running Delft 3D in Parallel (Answer)

Hi Jonathan,

Indeed, as you described, the error indicates that the auto-partitioning happened to hit an island in the domain. Unfortunately it is not yet possible to specify the partition boundaries manually, and the partitioning algorithm is not likely to be updated in the short term. Switching the M and N directions will not help either, since the algorithm always splits along the larger of the two.

However, there might be a workaround for practical projects. If there are not many islands in your model domain, you may be able to shift the partition boundaries by adjusting the number of CPUs/hosts you use: for example, if a run with 4 partitions hits an island, try 3, 5 or 6 instead (see the sketch after this post). This can be helpful sometimes.

Enjoy,

Qinghua
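
A minimal sketch of this workaround, assuming a Linux install where a run is launched as mpirun -np N d_hydro <config.xml> (the executable and config file names here are placeholders; adapt them to your local run script) and that a failed partitioning shows up as a non-zero exit code or the BC error in the captured output rather than only in a diagnosis file:

import subprocess

CONFIG = "config_d_hydro.xml"       # placeholder config file name
for nproc in (4, 3, 5, 6):          # preferred core count first, then neighbours
    result = subprocess.run(
        ["mpirun", "-np", str(nproc), "d_hydro", CONFIG],
        capture_output=True, text=True,
    )
    # If the auto-partitioning hit an island, shift the strip boundaries
    # by simply trying a different number of partitions.
    if result.returncode == 0 and "Non existing BC" not in result.stdout:
        print("run succeeded with", nproc, "partitions")
        break
    print(nproc, "partitions failed; trying the next count")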
Jonathan King, modified 4 Years ago.

RE: Running Delft 3D in Parallel

Hi Qinghua,

Thank you, I ended up settling on splitting it over 6 cores. Although not as fast as I would have liked, it still reduced the job time significantly.

However, I would add that when I used simplified BCs (a single time-invariant water level along all domain boundaries) I was able to run the job on more cores; I tested it with up to 8. The original BCs were water level time series specified in successive sections, each covering 10 boundary cells. This suggests that the BCs also affect the model partitioning, which I was unaware of.

Thanks
Jonathan