delwaq1 error reading block 5
I have a very large Delft3D model domain for studying water quality issues. I am currently doing some test runs to understand the coupling between FLOW and DELWAQ. I had no problem with the depth-averaged version of the model. However, after I switched to a 3D z-layer model, delwaq1 came up with an error reading block 5 of the delwaq input file *.inp. It looks like the code has a restriction on the number of boundaries: the delwaq input file lists 34725 boundaries, but the lst file stops at 33678, and the following error message appears:
ERROR reading file on unit: 39, filename: c04_EColi.inp
Line on input file was (at line no. 33769):
'33679 (Top-bottom 2)' '33679 (Top-bottom 2)' '2/25 (Top-bottom 2)'
Expected was an integer !
Detected was a character !
ERROR. Reading block 5 !! Check input file !!
The inp and lst files are attached for your reference. The error occurred when I ran delwaq1 on Windows. I have Delft3D 4.02. Any insight into what is happening?
The intriguing question, however, is how this came about. Did you change the underlying hydrodynamic model grid and then use an existing input file? That is one reason I can think of - judging from the style, the input file was created with the user interface, but then it would have kept these bits of information consistent. So something else happened.
I did not change the hydrodynamic model grid. The files were all created by ddcouple module by using
ddcouple myrunid.ddb -parallel
where myrunid.ddb was created after I ran the FLOW model on the cluster.
Is this an issue due to z-layers? I noticed that the individual hyd files created by the FLOW model run include "geometry curvilinear-grid z-layers"; however, the combined hyd file from ddcouple includes only "geometry curvilinear-grid". When I ran delwaq1, I manually added "z-layers" to this line.
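If editing the combined hyd file by hand every time is error-prone, that one-line change could be scripted. A minimal sketch of the text substitution (the function name is my own; it assumes the hyd file is plain text with a line starting with "geometry", as quoted above - verify this for your own file):

```python
# Sketch: append "z-layers" to the geometry line of a combined hyd file,
# matching what the per-domain hyd files already contain.
# Assumption: the hyd file is plain text and the geometry line starts
# with the keyword "geometry".
def add_z_layers(hyd_text):
    out = []
    for line in hyd_text.splitlines():
        # ddcouple writes "geometry curvilinear-grid"; the z-layer
        # variant should read "geometry curvilinear-grid z-layers"
        if line.strip().startswith("geometry") and "z-layers" not in line:
            line = line.rstrip() + " z-layers"
        out.append(line)
    return "\n".join(out)
```

Usage would be to read the combined hyd file, pass its text through this function, and write the result back before running delwaq1.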
I did another test run with fewer layers, and the same delwaq1 error occurred. So it is definitely not a code restriction on the number of boundaries.
WaqAgg = #full matrix#
When I ran the FLOW model, the following error occurred:
*** ERROR File not found: full matrix
Is this option not supported anymore? Or did I miss something?
I would not expect this type of difficulty from a z-layers model. I know there are/have been problems with getting that right, but those are more on the side of mass balance errors and the like.
If this works (no aggregation at all), that would be great, otherwise we will have to dig into the cause of the trouble.
So yes, please help me dig into the cause of the trouble. I really need to get this working as soon as possible.
For the same HD model, the sigma-layer and z-layer versions should have different numbers of open boundaries. But the inp files created by the WAQ GUI have the same number of open boundaries in the fifth block of model input. For the z-layer model, I already added "geometry curvilinear-grid z-layers" in the hyd file. The version of the WAQ GUI is 3.33.01.48675, which came with the service package of the Delft3D 4.02 Suite.
Do you think maybe there was a bug for this version when dealing with z-layer models?
- The hyd-file
- The lga- and cco-files
- The pointer table (.poi)
Perhaps I will need more files later, but this should suffice to make a first step in the analysis.
Thanks for helping me with this issue.
In this set-up each domain may have a different number of z-layers - that causes a different number of boundary cells, as the boundaries extend over some horizontal stretch as well as over the vertical. I assume that this is the case.
Now, since the overall hyd-file contains just a single number of layers, 10 in this case, the GUI decides that each boundary section stretches over 10 layers - independent of whether that is actually the case for the domain that section belongs to (in pathologically complicated cases, the boundary section could even belong to several domains at the same time, each with its own number of layers). This can not be helped - the GUI knows only the entire grid, not the individual domains.
The simplest solution is to make the z-layers the same for all domains. That may not be suitable for your modeling situation, though.
Another solution would be to make the number of z-layers the same for all domains - I am merely reasoning from the DELWAQ/Delft3D-WAQ point of view - I am not sure at the moment what the restrictions from Delft3D-FLOW are.
Yet another, but for that I need to examine things a bit more closely, is to fool DELWAQ into thinking there are enough boundaries, by adjusting the pointer table. It is mayhap the trickiest one, but from a modelling point of view the "best".
From my understanding of the DELWAQ manual, it can deal with 3D models with z-layers as well as sigma-layers. In my model, due to the varying bathymetry, the number of z-layers is not uniform across the model domain. Thus, the number of z-layers in each boundary segment is not uniform either.
I am a little bit confused by your comments:
"The simplest solution is to make the z-layers the same for all domains." - Did you mean layer thickness? In my z-layer HD model setup, the thickness of each z-layer is the same (e.g. 10% of the total depth for a 10-layer model). I am not using domain decomposition. I ran my FLOW model on a cluster using the MPI approach, then used "ddcouple *.ddb -parallel" to combine the HD results for DELWAQ.
"Another solution would be to make the number of z-layers the same for all domains" - This would be the case for a sigma-layer 3D model, am I right? For my study, we thought a z-layer model would be better for capturing the salinity wedge along the river.
In addition to modifying the hyd file with "geometry curvilinear-grid z-layers" after combining the HD files, is there anything else that needs to be modified? I am using ddcouple version 1.02.00.47707, Sep 20 2016, 15:46:03. Has ddcouple been updated to deal better with z-layer models?
Thank you so much for your help. Please keep helping me look for solutions - I am in a critical time frame for my study.
The GUI sees 10 layers and 1389 boundary cells per layer and therefore writes 13890 boundary cells to the input file. But according to the pointer table there are only 11112 boundary cells. Hence the mismatch.
The solution I came up with is to replace one of the boundary cell numbers in the pointer table by the value 13890. That will satisfy the input processor DELWAQ1 (though you get a lot of informational messages about boundary cells not having an actual effect). As the boundary cell that is treated in this way is the cell with number 11112 (i.e. the cell in layer 8 which would be above boundary cell 13890 in layer 10 if it were there), the effect on the calculation will be non-existent if the boundary condition is uniform over the vertical and negligible if it does vary.
I have attached the corrected pointer table. To use it in the calculations, just replace the original name in the hyd-file by "corrected.poi" and regenerate the input file via the GUI. Or edit the input file directly - replace the name of the pointer table in there.
Please let us know if this still gives problems - I could not test it accurately (I lack a few files for that).
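For anyone who needs to reproduce this pointer-table patch themselves, a rough sketch follows. It assumes the .poi file is a raw stream of 32-bit little-endian integers and that boundary cells are stored as negative numbers; both assumptions should be verified for your own files (the file may also carry Fortran record markers, which this sketch does not handle):

```python
import numpy as np

def patch_pointer_table(poi_bytes, old_bnd, new_bnd):
    """Replace every reference to boundary cell old_bnd (assumed to be
    stored as a negative number in the pointer table) by new_bnd."""
    ptr = np.frombuffer(poi_bytes, dtype="<i4").copy()
    ptr[ptr == -old_bnd] = -new_bnd
    return ptr.tobytes()

# Hypothetical usage for the numbers in this thread:
# raw = open("original.poi", "rb").read()
# open("corrected.poi", "wb").write(patch_pointer_table(raw, 11112, 13890))
```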
The corrected poi file passed delwaq1, but when I ran delwaq2, the model crashed. I have attached all the files needed to run DELWAQ here for your further testing.
- It occurs with option 21 as well as option 15 (they both use the lsolve routine, but crash at a different location)
- It also occurs if I use the original pointer table and remove the offensive lines from the input (that would make a somewhat inconsistent set-up, but for this test that does not matter)
So the conclusion is that something fishy is going on and we need to sort this out.
I think the issue is related to the handling of ddcouple for MPI model runs.
I did test runs with the "friesian_tidal_inlet" case from the tutorial. One run was conducted on my own Windows computer without MPI; the other was conducted on a Linux cluster using two cores with MPI. Both runs used the same 10 z-layers, so the setups were exactly the same. The run on my PC went through all the DELWAQ processes without any issue. However, the one on the cluster had the same delwaq1 error. All the DELWAQ files are attached for both runs. I did another test using only one subdomain from the MPI runs, and DELWAQ had no issue at all.
Has ddcouple been updated ever since 2016?
What is more interesting is the fact that DELWAQ2 crashes in routines that have been in intensive use for many years now on a wide variety of models.
As for the last update of ddcouple: if a program works fine, or seems to work fine, and there are no new features to implement, it makes no sense to change the source code. This has been the case with ddcouple, though in February 2017 there was an enhancement regarding the options you can specify.
The number of segments is much larger than the total number of exchanges in the horizontal. That means that the segments are not all connected in the horizontal direction. The same holds for the vertical: again, the segments are not all connected in that direction. It means that the matrix that is set up using the exchanges is singular. Now the next question is: where is this coming from?
I noticed there is a large number of domains and the files are going to be big - but could you provide me with the raw files anyway - just two output times will suffice, like you did the other day.
- The number of domains is 111 - that is very large; it means you have very narrow, elongated domains
- The grid cells in the river section are very narrow (down to half a meter, it seems) and elongated (10 meters or longer). That is not recommended. Such small grid cells actually violate the assumptions underlying the numerical method in the FLOW model.
While the ddcouple program ought to be capable of handling so many domains, there is a tricky issue - each domain boundary has a number of "ghost cells" connected to it to enable the transfer of information to the adjacent domain. These ghost cells should probably not lie in more than two domains. With the very narrow domains we have seen now, there is a remote chance that there is some overlap.
Could you try the hydrodynamic calculation with a smaller number of domains? Say, 20 domains instead of 111? The communication between all these domains is likely to consume quite a bit of computational time, so that 20 domains will probably not be that much less efficient than 111. With 20 domains there is certainly no risk of overlaps and we want to preclude that this is the cause of the strange issues we see now.
The same issue happened with the tutorial case of "friesian_tidal_inlet" I posted. In that case, there are only two subdomains when I run it in MPI mode. You can check on that model run. It's small and very quick to run.
I have attached the files here again for the friesian_tidal_inlet model runs. The run on my PC has no problem. However, the one that used MPI mode got the same issue as my large-domain model.
Is the ddcouple code still not open source? Is there any chance I can have it and check?
Could you be more specific about the enhancement of ddcouple in February 2017?
I used ddcouple this way: "ddcouple *.ddb -parallel". What other options for ddcouple can I try? Maybe one of them can easily solve my issue.
I can arrange for an FTP site - that may be easier to transfer large files (and access is restricted).
Can you check the tutorial case? I have the same delwaq1 issue for the MPI mode on that one.
Thank you very much, Arjen!
Hi, I am also having many issues when trying to run Delwaq on Linux after doing ddcouple on Linux. However, when doing the ddcoupling in Windows, Delwaq runs properly on Linux, but the files are too big to copy to Windows for the ddcoupling. Can you please provide me with the most recent ddcouple version, so I can check if it solves my problem?
Attached are the latest Windows and Linux ddcouple versions. I am not sure if this will help your problem. What I read from your screenshot is that the attributes file (atr) is too long. It continues to read a typical attribute value (31), while it should read the volume option (usually in the range of -1 to 4). It could be that you point to the wrong atr-file...
Hi Michelle, many thanks for sending the files. I have tried the Linux ones and you were right, it didn't solve my problem; I got exactly the same errors as before. I do not understand when you say that it could be pointing to the wrong atr file, as the atr file used in the inp file is the one created by running ddcouple. I also do not understand how I can have too much data. Do you have any suggestions I can try in order to solve my problem?
To be clearer, I am sending you the lsp file with the errors attached. The strange thing is that if I do the ddcouple in Windows, instead of Linux, then delwaq runs properly on Linux.
Thank you very much.
I'm sorry, but my remark about the wrong atr file was merely a suggestion of what might be wrong, based on the little information that I have from your post, not a statement that it was wrong. The same goes for the length of the file. With a new look at your screenshot, I think something is wrong in the inp-file, just after #2 in the file.
It would be better if you post the inp-file and the lst-file. That would give some more information. And I don't know what is the size of the atr-file, but that might be useful too.
Yesterday I wanted to send you the lst file with the errors, but by mistake I sent the lsp, sorry. Please find the inp, lst and atr files attached.
Thanks for your help,
I don't know exactly why this goes wrong, but the GUI should have recognised the keyword 'DELWAQ_COMPLETE_ATTRIBUTES' at the top of the atr file, and then have left out some of the settings that are in this type of file (as written by ddcouple) but not in single-domain (sigma-layer) atr-files. Attached is a modified inp-file in which the lines around the mention of the atr file - the lines the GUI should have left out - are commented out:
; 1 ; one time-independent contribution
; 1 ; number of items
; 2 ; only feature 2 is specified
; 1 ; input in this file
INCLUDE 'com-Long_run_discharge_60_year2010_adjusted_BC2.atr' ; attributes file
; 0 ; no time-dependent contributions
Hope this helps!
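If the input file is regenerated regularly, the commenting step could be scripted. A rough sketch (the offsets of four lines before and one line after the INCLUDE are assumptions taken from the snippet above and may differ in your own inp file):

```python
# Sketch: comment out the attribute-wrapper lines that the Linux
# ddcouple leaves around the atr INCLUDE, leaving the INCLUDE intact.
# Assumption: the wrapper spans `before` lines above and `after` lines
# below the INCLUDE statement, as in the snippet in this thread.
def comment_atr_wrapper(inp_text, before=4, after=1):
    lines = inp_text.splitlines()
    # locate the INCLUDE line that references the atr file
    idx = next(i for i, l in enumerate(lines)
               if "INCLUDE" in l and ".atr" in l)
    for i in range(max(0, idx - before), min(len(lines), idx + after + 1)):
        if i != idx and not lines[i].lstrip().startswith(";"):
            lines[i] = "; " + lines[i]
    return "\n".join(lines)
```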
Yes, the problem was definitely those lines; now Delwaq runs properly! I have been checking, and those lines do not appear in the inp file when I did the ddcouple in Windows, which is why Delwaq was running in that situation. Although I still do not understand why those lines are created when I do ddcouple on Linux, I do not mind commenting out the lines every time I run a Delwaq simulation. That will be much faster than copying all the files to my PC to do the ddcouple in Windows!!
Many thanks for your help!