so I have the following submission script:
#!/bin/bash
#
#SBATCH --job-name=P6
#SBATCH --output=P6.txt
#SBATCH --partition=workq
#SBATCH --ntasks=512
#SBATCH --time=18:00:00
#SBATCH --mem-per-cpu=2500
#SBATCH --cpus-per-task=1
#SBATCH --array=1-512
srun ./P6 $SLURM_ARRAY_TASK_ID
What I want to do is run 512 instances of the program P6, each with an argument from 1 to 512, and as far as I know the submission above does that. However, upon inspecting squeue and sacct, SLURM seems to have assigned 512 CPUs to each task!
What did I do wrong?
You asked for 512 tasks for every job in the array. Since --array=1-512 already launches 512 independent jobs, --ntasks=512 makes each of those jobs request 512 tasks of its own. Ask for a single one (or whatever number is appropriate for your code):
#SBATCH --ntasks=1
By the way, there are a few minor problems in your submission script. All jobs in the job array will share the same name (which is not really a problem), but they will also share the stdout file, so the output of all tasks will end up mixed together in P6.txt. I would advise you to differentiate them with the job ID or the task ID using the %j, %A, and %a placeholders (%A is the array's master job ID and %a is the array task index).
Also, you don't define a destination for standard error, so if anything fails or is written to stderr, you will lose that information. My recommendation is to define the standard error too (#SBATCH --error=P6.txt.%j).
Another detail is that the working directory is not defined. The script will work as long as you submit it from the proper folder, but if you try to submit it from anywhere else, it will fail. You can pin it down with #SBATCH --chdir= (older SLURM versions call this option --workdir).
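Putting those fixes together, a revised version of your script might look like the following. The %A_%a filename suffix and the /path/to/workdir directory are illustrative placeholders — adapt them to your setup:

#!/bin/bash
#
#SBATCH --job-name=P6
#SBATCH --output=P6_%A_%a.txt   # %A = array job ID, %a = array task index
#SBATCH --error=P6_%A_%a.err    # capture stderr separately per task
#SBATCH --partition=workq
#SBATCH --ntasks=1              # one task per array element, not 512
#SBATCH --time=18:00:00
#SBATCH --mem-per-cpu=2500
#SBATCH --cpus-per-task=1
#SBATCH --array=1-512
#SBATCH --chdir=/path/to/workdir
srun ./P6 $SLURM_ARRAY_TASK_ID

With this, each of the 512 array tasks requests a single CPU and writes its own stdout/stderr pair, so nothing gets interleaved.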