SPANK plugin user guide

The SPANK plugin for the Quantum Resource Management Interface (QRMI) is used to configure access to quantum resources from user jobs in a compute environment administered by the Slurm workload manager. This guide shows plugin users how to configure QPU resource allocation when creating Slurm jobs.

The Slurm QPU resource definitions determine what physical resources can be used by Slurm jobs in high-performance compute (HPC) environments. User source code should be agnostic to specific backend instances and, whenever possible, to backend types. This keeps source code portable, while the QPU selection criteria live in the resource definition (which is considered configuration rather than source code).


Configure QPU resources in job creation

Caution

This plugin is under active development, and the exact syntax is subject to change.

Administrator scope

HPC administrators configure the SPANK plugin to specify what physical resources can be provided to Slurm jobs. This configuration contains all the information needed to have Slurm jobs access the physical resources, such as endpoints and access credentials.

Read the qrmi_config.json.example for a comprehensive example configuration.

In slurm.conf, QPU resources can be assigned to some or all nodes:

...
GresTypes=qpu,name
NodeName=node[1-5000] Gres=qpu,name:ibm_fez
...

User scope

HPC users submit jobs that request QPU resources tied to the Slurm QPU resource definitions. The name attribute references what the HPC administrator has defined. During a Slurm job's runtime, backend selection can also be based on criteria other than a predefined name referring to a specific backend (for example, capacity and error-rate qualifiers that help down-select among the defined set of backends).

There might be additional environment variables required, depending on the backend type.

SBATCH parameters point to one or more QPU resources assigned to the application as generic resources. Environment variables set through the plugin supply the necessary information to the application (see the HPC application scope section for details).

#SBATCH --time=100
#SBATCH --output=<LOGS_PATH>
#SBATCH --gres=qpu:1
#SBATCH --qpu=ibm_fez
#SBATCH --... # other options
 
srun ...

To use more QPU resources, increase the --gres=qpu count and list the additional QPUs in the --qpu parameter:

#SBATCH --time=100
#SBATCH --output=<LOGS_PATH>
#SBATCH --gres=qpu:3
#SBATCH --qpu=my_local_qpu,ibm_fez,ibm_marrakesh
#SBATCH --... # other options
 
srun ...

HPC application scope

HPC applications use the Slurm QPU resources assigned to the Slurm job.

Environment variables provide further details for use by the application; for example, SLURM_JOB_QPU_RESOURCES lists the assigned quantum resource names (comma-separated if several are provided). These variables are consumed by QRMI; see the README files in the respective QRMI directories (IBM, pasqal) for more details.
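
As a quick check, the assigned resource names can be read directly from the environment. A minimal sketch in Python (standard library only):

import os

# SLURM_JOB_QPU_RESOURCES holds the assigned quantum resource names,
# comma-separated when more than one QPU is requested.
qpu_names = [name for name in os.environ.get("SLURM_JOB_QPU_RESOURCES", "").split(",") if name]
print(qpu_names)

The following end-to-end example uses the IBM QRMI flavor to transpile and sample a circuit on the first assigned resource: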

from dotenv import load_dotenv
from qiskit import QuantumCircuit
from qiskit.transpiler import generate_preset_pass_manager

# Using an IBM QRMI flavor:
from qrmi.primitives import QRMIService
from qrmi.primitives.ibm import SamplerV2, get_backend
 
# define circuit
 
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()
 
# Instantiate the QRMI service and get a quantum resource (take the first one if several are assigned)
# Credentials needed for accessing the service are loaded from the environment at this point
load_dotenv()
service = QRMIService()
 
resources = service.resources()
qrmi = resources[0]
 
# Generate transpiler target from backend configuration & properties and transpile
backend = get_backend(qrmi)
pm = generate_preset_pass_manager(
    optimization_level=1,
    backend=backend,
)
 
isa_circuit = pm.run(circuit)
 
# Run the circuit
options = {}
sampler = SamplerV2(qrmi, options=options)
 
job = sampler.run([isa_circuit])
print(f">>> Job ID: {job.job_id()}")
 
result = job.result()
 
if job.done():
    pub_result = result[0]
    print(f"Counts for the 'meas' output register: {pub_result.data.meas.get_counts()}")
elif job.cancelled():
    print("Cancelled")
elif job.errored():
    print(qrmi.task_logs(job.job_id()))
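
If several QPUs are assigned (for example, --gres=qpu:3), service.resources() can return more than one resource. A minimal sketch, assuming each resource can be used in the same way as above, that repeats the transpile-and-sample flow per resource:

for qrmi in service.resources():
    backend = get_backend(qrmi)
    pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
    sampler = SamplerV2(qrmi, options={})
    job = sampler.run([pm.run(circuit)])
    print(f">>> Job ID: {job.job_id()}")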

See the examples directory for example files.

Backend specifics

IBM Direct Access API

Administrator scope

Configuration of Direct Access API backends (HPC admin scope) includes endpoints and credentials for the Direct Access API and its authentication services, as well as for the S3 endpoint. Specifically, this includes:

  • IBM Cloud® API key for creating bearer tokens
  • Endpoint of the Direct Access API
  • S3 bucket and access details

Access credentials should not be visible to HPC users or other non-privileged users on the system. Sensitive data can therefore be placed in separate files, which can be access-protected accordingly.

Note that Slurm has full access to the backend. This has several implications:

  • The Slurm plugin is responsible for multi-tenancy (ensuring that users don't see results of other users' jobs)
  • The HPC cluster side is responsible for vetting users (who is allowed to access the QPU) and granting access accordingly
  • The capacity and priority of QPU usage are managed solely through Slurm; no other user scheduling takes place outside of Slurm

User scope

Execution lanes are not exposed to the HPC administrator or user directly. Instead, HPC users can specify one of two modes at runtime:

  • exclusive=true specifies that no other jobs can use the resource at the same time. An exclusive mode job gets all execution lanes and cannot run at the same time as a non-exclusive job
  • exclusive=false allows other jobs to run in parallel. In this case, there can be as many jobs as there are execution lanes, all running at the same time, and the job is assigned one lane

Qiskit Runtime Service

User scope

It is expected that users specify additional access details in environment variables. Specifically, this includes the following:

  • Qiskit Runtime service instance (CRN, Cloud Resource Name)
  • Endpoint for Qiskit Runtime (unless auto-detected from the CRN)
  • API key with access to the CRN
  • S3 instance, bucket, and access token/credentials for data transfers

These details determine under which user and service instance the Qiskit Runtime service is used. Accordingly, IBM Quantum® Platform considers the user's and the service instance's capabilities when scheduling workloads.

At this time, users must provide the above details (no shared cluster-wide quantum access).
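
As an illustration only, the job script or a .env file (loaded by load_dotenv() as in the example above) can be checked for these values before the application starts. The variable names below are placeholders, not the actual names, which are documented in the QRMI README:

import os

# Placeholder variable names for illustration; use the names from the QRMI README.
required = ["EXAMPLE_QRS_CRN", "EXAMPLE_QRS_API_KEY", "EXAMPLE_S3_BUCKET"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {missing}")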

Pasqal Cloud Services

Administrator scope

There is no specific setup required from HPC admins for PCS usage.

User scope

It is expected that users specify additional access details in environment variables. Currently, this includes the following:

  • PCS resource to target (FRESNEL, EMU_FRESNEL, EMU_MPS)
  • Authorization token