Python code runs on login node but hangs on compute nodes

I work for one of my professors, and we are trying to run SU2 in parallel on a university cluster that uses Slurm as its workload manager. The problem we are running into is that when we SSH into the cluster and run the command:

parallel_computation.py -f SU2.cfg

on a node assigned by Slurm (using sbatch), the code hangs and won't run. The weird thing is that if we run the same command on the login node, it works just fine. Does anyone know what the problem could be?

Here is some additional information:

  • We talked with the IT guy in charge of the cluster and he doesn't have enough background to know what is going on.
  • In some of our output files we would get the escape sequence [!0134h; when we changed the terminal settings to get rid of it, the code behaved the same as described above.
  • We can run the code in serial (SU2_CFD "config file") just fine on both the login node and the compute nodes.
  • We have tried running an interactive session on a node (using srun); the behavior is the same.
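In case it helps, here is roughly what our submission script looks like. The job name, partition defaults, task count, and module name below are placeholders, not our exact setup:

```shell
#!/bin/bash
#SBATCH --job-name=su2_test      # job name shown in squeue
#SBATCH --nodes=1                # single node for now
#SBATCH --ntasks=8               # MPI ranks for the parallel run
#SBATCH --time=01:00:00          # walltime limit
#SBATCH --output=su2_%j.log      # capture stdout/stderr to a file

# Load whatever provides SU2 and MPI on the cluster
# (module name is a placeholder for our site's module).
module load su2

# -n should match --ntasks above
parallel_computation.py -n 8 -f SU2.cfg
```

The job log (su2_%j.log) is where we see it hang; nothing past the startup output gets written.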

Any thoughts would be appreciated! We really want to be able to run the code in-house instead of outsourcing it.