I have access to a supercomputer (and maybe more upcoming) and I’d like to try making my workflow parallel. Back when I used HTCondor, I broke my script down so it could hypothetically run that way, but I hit some snafus and never quite got it implemented on their system. Anyway, I’m ready to try again in a different environment. (File this one under lab notebook edition.) I’ve tweaked a number of things I’m trying to keep track of, lest my submission script break the workflow.
- remember your model file must end with a newline character, or tools like `wc -l` won’t count the last line
- do I need to set the MKL_NUM_THREADS variable?
- apparently only if the script isn’t running parallel code on its own; otherwise, setting it to $NSLOTS on top of the script’s own threads could oversubscribe the node
- “If INLA is going to use OpenMP then I would suggest letting it have 36 cores and then setting MKL to 2. This would put you at running (at most) 72 threads. Currently our processors have hyperthreading turned on which allows them to operate ~2 threads per core, so running 72 cores worth of work shouldn’t be an issue and will likely give you a small boost to your speed. Of course if you spend very little time in INLA code then it may make more sense to keep it at 1 and let MKL do the hyperthreading for you when it can.” – advice from my contact at the HPC center
- can I still write out and append to a file at a specific location?
- apparently yes…?
- if not, change to dynamic naming: perhaps let the array script handle the output natively, as long as all of your identifying info is written out, then combine the files afterward
- maybe I should change the R script’s output to print statements that will (hypothetically) end up in the scheduler’s auto-generated output files?
- ask a collaborator to initiate a Globus Connect space
- write out LOO dynamically
- do I need to take advantage of the parallel functionality of R-INLA, or should I run it single-core?
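A quick shell sketch of the newline gotcha in the first note — `models.txt` is a stand-in name for the model file the array job indexes into, not my actual file:

```shell
# A file WITHOUT a trailing newline: wc -l counts newline characters,
# so the final line goes uncounted and the array job would skip a model.
printf 'model_a\nmodel_b\nmodel_c' > models.txt
wc -l < models.txt    # reports 2, not 3

# Repair: append the missing newline, and the count is right again.
printf '\n' >> models.txt
wc -l < models.txt    # reports 3
```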
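Here’s a rough sketch of the thread split my HPC contact suggested, as it might sit in a submission script. This assumes an SGE-style scheduler where $NSLOTS holds the granted slot count (defaulted below so the snippet runs anywhere); the 36-core / MKL-at-2 numbers are their example, not something I’ve benchmarked:

```shell
# Default NSLOTS so this sketch runs outside the scheduler too (hypothetical value).
: "${NSLOTS:=36}"

# Per the HPC contact: give INLA's OpenMP the full core count and cap MKL at 2,
# so the job tops out around 2 threads per core, matching the hyperthreading setup.
export OMP_NUM_THREADS="$NSLOTS"
export MKL_NUM_THREADS=2

echo "OpenMP threads: $OMP_NUM_THREADS, MKL threads: $MKL_NUM_THREADS"
echo "Worst-case total: $((NSLOTS * MKL_NUM_THREADS)) threads"
```

If the R script spends little time in INLA code, the same sketch with `MKL_NUM_THREADS=1` would instead let MKL do the hyperthreading, per the quoted advice.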
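And a sketch of the dynamic-naming fallback, folding in the note about writing LOO out dynamically — each array task writes to its own file keyed by task ID, so no two tasks ever touch the same path, and a final step combines them. $SGE_TASK_ID is the SGE array index (defaulted here so the snippet runs outside the scheduler); the directory, file names, and model label are all placeholders:

```shell
# Default the array index so this runs outside the scheduler (hypothetical).
: "${SGE_TASK_ID:=1}"

outdir=results
mkdir -p "$outdir"

# Each task appends its identifying info plus its LOO value to a file that
# only it writes to; LOO_VALUE stands in for whatever the R script emits.
echo "task=$SGE_TASK_ID model=model_a loo=LOO_VALUE" \
    >> "$outdir/loo_task_${SGE_TASK_ID}.txt"

# After the whole array finishes, a single combine step gathers everything.
cat "$outdir"/loo_task_*.txt > "$outdir/loo_all.txt"
```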