Replies: 4 comments 19 replies
-
This is a bit like me asking you why my car did not start this morning. I would need an input file (one only), and I would run it on my computer.
-
Thanks for that, Jason, much appreciated. So I guess at this stage I can't really rule anything out.
With a variant of this model where the only thing I have changed is the criteria for freezing the HRR, I have had it stall at around 30-40 s of run time, so it seems independent of the sprinkler activation. I might try the nightly; I am using a third party for hardware, so I'll have to get them to sort that out. I don't personally have access to hardware that can run a model of this size in reasonable time. You say you were running it on 6.9.1; I'm assuming that is the official release, or was it a nightly with a fix for the potential memory leak? If I have particles in my model, is there memory allocated for them before they are injected into the domain? Could I still get a memory leak problem before the sprinklers are activated?
Do you mean an indefinite hang, or just a delay? In my case, when the stall occurs it lasts for days, so I assume it's indefinite. Thanks again!
-
This is the current state of my run, with 42 MPI processes and 2 OpenMP threads per process:
-
Thanks for all your help; I'll go ahead with one OpenMP thread from here on out. So neither Kevin nor Jason had the simulation hang? Whereas I've had it hang every time, but at different simulation times. I've been running a version with one OpenMP thread and it is up to 900-odd seconds out of 1200 s. One of the previous runs made it this far before stalling, though... Isn't the speed of the simulation a separate issue from the stalling? I guess it's hard for you to comment on the stalling when the issue hasn't been replicated.
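For anyone following along, switching to one OpenMP thread is done through the environment before launching FDS. A minimal sketch; the input file name "job.fds" and the rank count of 42 are hypothetical placeholders for your own job:

```shell
# Pin each MPI rank to a single OpenMP thread before launching FDS.
export OMP_NUM_THREADS=1
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
# Actual launch line (commented out so the snippet runs standalone):
# mpiexec -n 42 fds job.fds
```

On a cluster the export usually goes in the job submission script so every node picks it up.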
-
Hi all,
I have some simulations with multiple meshes being run on Sabalcore. Some of them seem to have halted: no output files are being updated. For example, one run with identical geometry except for the fire location has reached 1200 s in the time it has taken others to reach only 65 s and 185 s.
I have sprinklers with particles in the files, but the last temperatures output to the CSV do not indicate that sprinkler activation is imminent, and I did not have problems with activation in the other run.
I am using HVAC to model leakage in a warehouse; I don't know if that is relevant.
The last entry in the .out file gives the time step as about 0.008 s, which is on par with the simulation that ran successfully.
What steps should I take to identify the issue here, particularly to distinguish between my model inputs causing the problem and a hardware issue?
What outputs should I be looking at?
Would FDS hang like this if it couldn't communicate with one of the nodes, or would the run terminate with an error code?
Thanks in advance.
From the .out file:
Current Date : February 25, 2025 08:01:24
Revision : FDS-6.9.1-0-g889da6a-HEAD
Revision Date : Sun Apr 7 17:05:06 2024 -0400
Compiler : Intel(R) Fortran Compiler for applications running on Intel(R) 64, Version 2024.1.0 Build 20240308
Compilation Date : Jul 22, 2024 21:27:27
Number of MPI Processes: 42
Number of OpenMP Threads: 2
MPI version: 3.1
MPI library version: Intel(R) MPI Library 2021.12 for Linux* OS
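As a first check on whether a run has truly stalled (rather than just being slow), you can look at when its output files were last modified. A minimal sketch, assuming a hypothetical job name of "warehouse"; the `touch` line is only there so the snippet runs standalone:

```shell
JOB=warehouse                      # hypothetical CHID; substitute your own
touch "${JOB}.out"                 # placeholder so the sketch runs standalone
# An active FDS run rewrites its .out file every few time steps, so a file
# untouched for more than 10 minutes is a strong hint the run is hung.
STALE=$(find . -maxdepth 1 -name "${JOB}.out" -mmin +10)
if [ -z "$STALE" ]; then
  echo "output still updating"
else
  echo "possible stall: $STALE"
fi
```

If the file is stale but the MPI processes are still consuming CPU, that points toward a hang in communication rather than an orderly crash, which would normally leave an error message in the .out or stderr log.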