Future outlook and direction
So far, we have learned about the motivation for using both high-performance computing (HPC) and quantum computing to solve scientific problems. We have defined classical and quantum compute resources, including CPUs, GPUs, and QPUs, and discussed how to scale and manage them using techniques like vertical and horizontal scaling, scheduling, and workload management. Furthermore, we have explored programming models for both QPUs (such as quantum circuits and primitives like Sampler and Estimator) and classical computers, including parallel programming with MPI, a powerful tool for quantum-classical heterogeneous computing. Finally, we have studied and practiced advanced quantum sampling-based algorithms, like Sample-based Quantum Diagonalization (SQD) and Sample-based Krylov Quantum Diagonalization (SKQD). These algorithms use a subspace method to accurately estimate the ground-state energy of molecules and materials: quantum states are prepared and sampled on the QPU, and the resulting samples define a subspace in which the Hamiltonian is diagonalized classically, a workflow that combines different programming models on a set of heterogeneous resources. With these foundational concepts of quantum and classical supercomputing, we are no longer talking about one replacing the other, but about creating a powerful, integrated system that works in synergy — a combination poised to bring about the dawn of quantum advantage.
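To make the subspace idea concrete, here is a minimal, self-contained numpy sketch, not the production SQD workflow: a handful of sampled computational basis states stand in for bitstrings measured on a QPU, and the Hamiltonian is diagonalized inside the subspace they span. The random Hermitian matrix and the randomly chosen "samples" are illustrative stand-ins only.

```python
# Minimal sketch of the subspace idea behind sampling-based diagonalization.
# The Hamiltonian and the "sampled" basis states are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(7)
n_qubits = 6
dim = 2**n_qubits

# Stand-in Hamiltonian (random Hermitian); in practice this comes from the molecule.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

# Stand-in for bitstrings sampled from a quantum state prepared on the QPU.
sampled_states = rng.choice(dim, size=12, replace=False)

# Project H into the subspace spanned by the sampled computational basis states.
H_sub = H[np.ix_(sampled_states, sampled_states)]

# Classical diagonalization of the (much smaller) subspace Hamiltonian.
subspace_energy = np.linalg.eigvalsh(H_sub)[0]
exact_energy = np.linalg.eigvalsh(H)[0]
print(f"subspace estimate: {subspace_energy:.4f}   exact: {exact_energy:.4f}")
```

Because the estimate is variational, a better-chosen set of samples (for example, one drawn from a state prepared with chemical insight) brings the subspace energy closer to the exact ground-state energy.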
Why now?
The community has already moved past the milestone of "quantum utility"—where quantum computers were first proven to be useful scientific tools capable of computations beyond classical brute-force simulation. This utility era started with the now-famous utility paper featured on the cover of Nature in 2023, and went on to include dozens of publications by partners, clients, and researchers at IBM Quantum®. Now, the focus has shifted to the next critical frontier: achieving quantum advantage. For a long time, the term "quantum advantage" suffered from imprecise definitions. A recent paper has put forth a concrete definition, which we will use here. Specifically, quantum advantage denotes the execution of an information processing task on quantum hardware that satisfies two essential criteria:
i) The correctness of the output can be rigorously validated, and
ii) It is performed with a quantum separation, demonstrably offering greater efficiency, cost-effectiveness, or accuracy than is attainable with classical computation alone.
It is anticipated that quantum advantage will begin to emerge by the end of 2026, and that it will do so by leveraging quantum and HPC resources together. This lesson outlines the core vision for this new paradigm, details the key ideas ahead, and presents a future outlook grounded in a verifiable, platform-agnostic framework for demonstrating and realizing true quantum advantage.
The big picture
For the first time, we are witnessing a significant turning point in the history of computation: the era of quantum-centric supercomputing (QCSC), an emerging paradigm that tightly integrates quantum processing units (QPUs) with classical supercomputers. The vision is not for quantum systems to replace classical ones, but to demonstrate that this heterogeneous architecture—where "quantum plus classical" can outperform classical alone—is the most powerful path forward. In this model, QPUs are envisioned as specialized co-processors, working alongside CPUs and GPUs to tackle computational problems that are intractable for classical computers.
The full potential of this new architecture can only be realized by placing these powerful tools into the hands of as many users as possible. This vision is already taking shape through the deployment of quantum systems in established high-performance computing (HPC) centers and the development of software, such as quantum Slurm plugins, that streamlines their integration into existing classical workflows. By making these heterogeneous systems more accessible to the broader research community, we foster the environment needed for innovation and discovery.
This strategy of combining integrated technology with a broad user base is how we believe the community will reach quantum advantage in the near future. Quantum advantage is not a single, definitive milestone but a process — a sequence of increasingly robust demonstrations that will be scrutinized, reproduced, and challenged by the community until a scientific consensus is reached. This is the path to demonstrating, by the end of 2026, the first credible and verifiable instances where this new way of computing solves practical problems more efficiently, cost-effectively, or accurately than what is attainable with classical computation alone.
Big ideas
To realize this vision, several critical questions and ideas must be addressed.
- Optimal workload partitioning: On the software side, the challenge lies in managing complex hybrid workflows. Orchestrating the seamless execution of tasks across both quantum and classical resources requires sophisticated tools. This includes Quantum-HPC Middleware and Runtime Infrastructure designed to handle job scheduling, resource management, and data flow in this heterogeneous environment. Furthermore, developing techniques to effectively parallelize quantum circuits, or break them down into smaller, manageable parts, is crucial for maximizing the utility of today's quantum hardware (see the first sketch after this list).
- System-level fault tolerance: The ultimate solution for protecting quantum information from noise is fault-tolerant quantum computation (FTQC), where information is encoded into robust "logical qubits". While emerging quantum low-density parity-check (qLDPC) error correction codes offer a path to reducing the immense resource overhead required, full fault tolerance is not expected to be viable in the immediate near term. In the meantime, error mitigation uses classical post-processing to reduce or eliminate the bias that noise introduces into calculations, and it is also a critical element of system-level fault-tolerant quantum systems. Powerful error mitigation methods are already being deployed as a service, demonstrating the power of the QCSC architecture (see the second sketch after this list). For example:
- Algorithmiq’s Tensor Network Error Mitigation (TEM) manages noise in software post-processing, leveraging classical HPC resources to extend the reach of current QPUs.
- Qedma’s Quantum Error Suppression and Error Mitigation (QESEM) combines hardware-level error suppression with mitigation to improve the reliability of quantum computations at scale.
- Democratizing access: Making these powerful hybrid systems broadly accessible is key to accelerating innovation. This is already being realized through the physical deployment of quantum systems in HPC centers and the release of Slurm plugins that let quantum workloads be managed with standard HPC schedulers. Furthermore, comprehensive software stacks like Qiskit provide a cloud-based runtime environment for low-latency quantum-circuit execution, orchestrating complex hybrid tasks and providing tools for compilation, optimization, and error mitigation. Open-access quantum hardware and open-source development packages will undoubtedly play a critical role.
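As a first, concrete illustration of partitioning a quantum workload across classical resources, the sketch below splits a batch of circuits across MPI ranks. It is a minimal sketch only: the round-robin splitting strategy, the toy circuits, and the local StatevectorSampler (standing in for a QPU-backed primitive) are illustrative assumptions rather than part of any particular middleware.

```python
# Minimal sketch: distribute a batch of circuits across MPI ranks.
# Assumes qiskit and mpi4py are installed; run with `mpirun -n 4 python this_script.py`.
from mpi4py import MPI
from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def make_circuit(theta):
    qc = QuantumCircuit(2)
    qc.ry(theta, 0)
    qc.cx(0, 1)
    qc.measure_all()
    return qc

# Build the full batch on every rank, then keep only this rank's share (round-robin).
thetas = [0.1 * k for k in range(64)]
my_circuits = [make_circuit(t) for i, t in enumerate(thetas) if i % size == rank]

# Each rank executes its partition; a QPU-backed sampler could be swapped in here.
result = StatevectorSampler().run(my_circuits, shots=1024).result()
my_counts = [r.data.meas.get_counts() for r in result]

# Gather the partial results back to rank 0 for classical post-processing.
all_counts = comm.gather(my_counts, root=0)
if rank == 0:
    merged = [counts for chunk in all_counts for counts in chunk]
    print(f"collected results for {len(merged)} circuits")
```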
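The second sketch shows how error mitigation is typically configured in software and applied as classical post-processing of hardware results. It uses the built-in mitigation options of the Qiskit Runtime Estimator primitive; it is not the TEM or QESEM interface, it assumes a saved IBM Quantum account, and the exact option names may differ between qiskit-ibm-runtime versions.

```python
# Minimal sketch: request built-in error mitigation with the Qiskit Runtime Estimator.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, EstimatorV2 as Estimator

service = QiskitRuntimeService()
backend = service.least_busy(operational=True, simulator=False)

# A small example circuit and observable.
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
observable = SparsePauliOp("ZZ")

# Transpile to the backend's native gate set and map the observable to the layout.
pm = generate_preset_pass_manager(backend=backend, optimization_level=1)
isa_circuit = pm.run(circuit)
isa_observable = observable.apply_layout(isa_circuit.layout)

# Request readout-error mitigation (resilience level 1) plus zero-noise extrapolation.
estimator = Estimator(mode=backend)
estimator.options.resilience_level = 1
estimator.options.resilience.zne_mitigation = True
estimator.options.resilience.zne.noise_factors = (1, 3, 5)

job = estimator.run([(isa_circuit, isa_observable)])
print(job.result()[0].data.evs)
```

The point of the sketch is the division of labor: the QPU runs the (noise-amplified) circuits, while classical resources perform the extrapolation and readout correction that produce the mitigated expectation value.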
IBM's outlook for the future
The IBM Quantum Development Roadmap is a good illustration of this big picture and these big ideas.
IBM Quantum's hardware roadmap is driven by a focus on increasing qubit scale and connectivity. The Nighthawk series (2025-2028) uses a new square lattice architecture to enhance connectivity, while the Loon processor (2025) introduces "c-couplers" to enable non-local qubit connectivity, which is critical for fault-tolerant quantum computing (FTQC). This roadmap culminates in the IBM Quantum Starling (2029) and Blue Jay (2033+) systems, which are designed to deliver large-scale, fault-tolerant computation with millions of gates and thousands of logical qubits.
The software and middleware strategy is built on four key objectives: executing accurately, orchestrating workloads, discovering new algorithms, and applying them to specific use cases. The roadmap includes ongoing improvements like utility-scale dynamic circuits (2025) and new profiling tools (2026) to ensure efficient execution. For workload orchestration, the C-API (2025) and future workflow accelerators (2027) will integrate quantum and classical high-performance computing (HPC). Furthermore, IBM® will introduce utility mapping tools (2026) and new circuit libraries (2029) to facilitate the discovery and application of new algorithms.
Summary
We have explored the big picture and big ideas behind the QCSC goal, and we have looked at IBM's roadmap for the development and innovation of quantum computing. This journey, as we have seen, is a marathon, not a sprint. While IBM is committed to delivering increasingly powerful quantum computers, our progress is only one part of the equation. It is crucial that the quantum community continues to develop new algorithms, paving the way for the applications that will truly bring useful quantum computing to the world.
To achieve this, we must work together. This means establishing standardized benchmarking problems with the help of classical experts to ensure relevance and fairness. It also requires publishing detailed methodologies and datasets to allow for reproducibility, and maintaining open-access leaderboards to track our collective progress.
There has never been a more exciting time to be part of this community. By adopting these best practices and continuing our exploration, we can work together to realize the full potential of quantum advantage.