Welcome to the Frontier

Throughout history, our scientific understanding of the world has grown rapidly when new tools became available – tools that allowed us to ask new questions, run larger experiments, and investigate new areas of research. Some of those tools include the telescope, the microscope, and the Large Hadron Collider. Each one provided access to new kinds of scientific discoveries. In the 1960s, we also saw the development of high-performance computing (HPC), which became a critical tool for solving complex computational tasks, including many important scientific challenges.

Now we have another important tool for scientific progress: quantum computers. While still an emerging technology, quantum computers have the potential to significantly change the kinds of computational problems we can solve efficiently. This course is about understanding how these technologies, working together, can expand the boundaries of what is computationally possible.

Our mission is clear but ambitious: to provide you with the conceptual and practical knowledge needed to use these technologies to address some of the world’s most difficult problems.

This video describes the goals of this course and the motivation behind combining HPC and quantum computing.

HPC

What is high-performance computing exactly? High-performance computing has become the foundation for solving modern computational problems. We no longer live in a time when advanced problems can be solved with simple tools like an abacus or pen and paper; instead, we are working with questions and datasets that require enormous computational power.

The field of high-performance computing can trace its roots to the development of the earliest supercomputers in the 1960s. These were machines specifically designed to solve large-scale scientific and engineering problems more quickly than conventional computers.

One of the first well-known supercomputers was the CDC 6600 (1964), built by Seymour Cray, often referred to as the father of supercomputing. The CDC 6600 was the fastest computer of its time, using innovative architecture that included parallel functional units and pipelining—concepts still used in HPC today.

Cray continued advancing the field with the Cray-1 (1976), which introduced vector processing—a technique that greatly increased the speed of operations on large arrays of data, making it well-suited for scientific computing.

As single-processor speeds began to level off, HPC evolved toward parallel computing—using many processors that work together on different parts of a problem. During the 1980s and 1990s, parallel architectures became common in HPC. By the early 2000s, HPC moved toward clusters of commodity hardware, which are ordinary servers connected by high-speed networks. This shift made supercomputing more affordable and gave wider access to HPC.

Throughout this evolution, IBM® has been at the forefront of HPC research and implementation. Notably, the IBM Blue Gene supercomputers formed one of the most influential supercomputer families of the 2000s and early 2010s, an era of enormous growth in massively parallel systems; Blue Gene/Q was an example, with one instance (Sequoia) having 100,000 nodes. In 2018, the IBM-built Summit at Oak Ridge became the first HPC resource to achieve exaOPS performance (1.88 exaOPS at mixed precision).

Today, we are in the exascale era, where supercomputers can perform $10^{18}$ operations per second (exaflops). The first supercomputer capable of achieving this milestone was Frontier, located at Oak Ridge National Laboratory.

So why do we need such powerful computing resources? There are problems critical to human well-being that require such extreme resources to model or solve. Examples include climate models, studying the structure and motion of the Earth's mantle, and fluid dynamics simulations.

Many problems of this type have been addressed by IBM researchers and collaborators working on IBM systems. This sustained leadership has been widely recognized; for example, IBM researchers have won the Gordon Bell Prize six times.[1]

HPC is a very active domain with boundaries being broken regularly. For one overview of modern capabilities, see this list of the Top 500 supercomputers.

Quantum computing

Quantum computing is a new computing paradigm that does not simply follow the gradual development of classical computers. It aims to leverage the quantum properties of superposition, entanglement, and quantum interference to solve problems that would be intractable for classical computers alone. We will not explore the details of what makes quantum computing unique in this course - see Fundamentals of quantum information for more on this - but will instead discuss how combining these two infrastructures could lead to breakthroughs in applied science.

A hybrid approach

It is important to emphasize that these two computational paradigms are not competitors. We are in an era where optimized workflows require the two paradigms to complement each other, placing each task where it is handled most effectively. Quantum computers will not replace classical systems; rather, the future of computational science will increasingly depend on hybrid workflows in which HPC provides high-performance classical processing and quantum computing contributes unique capabilities. As a practitioner, researcher, or technologist, understanding how to combine these tools will position you as a leader in the next era of scientific and technological advancement.

We will examine how quantum computing and HPC are positioned to enable breakthroughs across a wide range of industries, including:

  • Chemistry: Accelerating the identification of new drugs and materials.

  • Energy: Designing improved catalysts, batteries, and clean energy solutions.

  • Finance: Modeling risk, optimizing portfolios, and developing new financial instruments.

  • AI & Machine Learning: Enhancing model training, optimization, and data analysis.


Why we're going beyond classical

Humans have had considerable success in the above application areas using HPC. However, even the world’s fastest supercomputers face difficulties when problems scale factorially or exponentially with problem size. For example, listing every possible arrangement of 50 particles inside a complex molecule leads to configurations that grow at least factorially, requiring more memory than all the data centers on Earth combined could provide.

Another example is planning a delivery route for 10,000 cities: the number of possible routes becomes so large that, even if every computer ever built tested one route per microsecond, the calculation would take orders of magnitude longer than the current age of our Sun. These totals are not just large; they grow exponentially, meaning each additional particle or city multiplies the computational burden far beyond simple scaling.
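To get a feel for these scales, the short Python sketch below computes the size of the two counts quoted above: the orderings of 50 particles (50!) and the closed routes through 10,000 cities ((n-1)!/2). These are standard combinatorial formulas, not anything specific to this course.

```python
import math

# Distinct orderings of 50 particles: 50!
arrangements = math.factorial(50)
print(f"50! = {arrangements:.3e}  ({len(str(arrangements))} digits)")

# Distinct closed routes through 10,000 cities: (n - 1)! / 2
# lgamma(n) = ln((n - 1)!), so dividing by ln(10) gives the number of digits.
n_cities = 10_000
log10_routes = math.lgamma(n_cities) / math.log(10) - math.log10(2)
print(f"(9,999)!/2 has roughly {log10_routes:,.0f} digits")
```

Even written out at one digit per byte, the second number would not fit in any realistic storage system, let alone be enumerated route by route.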

We can continue adding GPUs, but manipulating such vast amounts of data already consumes megawatts of power and requires facilities the size of warehouses. At a certain point, classical hardware cannot scale further in a practical or affordable way. This is why researchers are turning to quantum processors, which store information in superpositions and can sometimes directly address these exponential growth problems, solving specific cases that classical machines cannot complete in any reasonable timeframe.

HPC eventually reaches fundamental limits dictated by combinatorics and thermodynamics. Quantum computing does not eliminate those limits, but it can sometimes bypass them in very specific scenarios.


Why not quantum alone?

If quantum computing can bypass certain limitations of classical computing, why don’t we just rely entirely on quantum computers? The first and most obvious reason is that quantum computers still require classical machines to function. Tasks such as compiling and feeding circuits into the quantum processor, storing measurement outcomes, and carrying out basic post-processing are all performed by classical computing systems.

So why do we additionally need high-performance computing? There are several reasons. Many current and anticipated applications of quantum computing address problems with extremely large search spaces. Quantum algorithms can often reduce the size of this space significantly, but in practice the remaining problem may still be large enough to benefit from HPC resources. Moreover, there are algorithms that balance the strengths of HPC and quantum computing, relegating enough of the work to HPC to make the overall algorithm more robust against the effects of quantum noise.

A concrete example is the sample-based quantum diagonalization (SQD) algorithm. This algorithm, which will be explored in Lesson 4, demonstrates how HPC and quantum computing can complement each other in practice. For additional background, see the Quantum Diagonalization Algorithms course on IBM Quantum Learning.
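The sketch below is a deliberately simplified illustration of the general idea behind subspace methods like SQD, not the actual algorithm or any particular package: a small set of computational-basis states (which, in practice, a quantum computer would supply) defines a subspace, and a classical routine diagonalizes the Hamiltonian projected into that subspace. The Hamiltonian and the "sampled" states here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n_qubits = 8
dim = 2**n_qubits

# Placeholder Hamiltonian: a random real symmetric matrix.
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2

# Pretend a quantum computer returned these distinct basis states (bitstrings).
sampled_states = rng.choice(dim, size=20, replace=False)

# Project H onto the sampled subspace and diagonalize it classically.
H_sub = H[np.ix_(sampled_states, sampled_states)]
subspace_energies = np.linalg.eigvalsh(H_sub)

# The subspace estimate upper-bounds the true ground-state energy.
print("Subspace ground-state estimate:", subspace_energies[0])
print("Exact ground-state energy:     ", np.linalg.eigvalsh(H)[0])
```

The quality of the estimate depends entirely on how well the sampled states capture the true ground state, which is exactly where the quantum processor is meant to help.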


This course is designed for professionals and students who work—or plan to work—closely with high-performance computing (HPC) infrastructure and/or quantum computing. With the rapid progress in quantum technologies, we anticipate a near future where quantum processors are integrated alongside traditional HPC resources to achieve more accurate results and enable new approaches to problem-solving. This course is intended for learners who want to understand how to build and run such hybrid workflows.

Because participants may come from different backgrounds, we expect two main types of learners: those already experienced in HPC but new to quantum computing, and those well-versed in quantum computing but new to HPC. To help everyone get the most out of this course, we provide preparation recommendations for both groups below.

For those new to HPC

This course assumes familiarity with core HPC concepts such as distributed memory programming, message passing, parallel programming models, and resource management. We will also use tools such as the Slurm workload manager. While many of these concepts will be introduced briefly as needed, having some prior exposure will make the material more accessible. Helpful resources include:

Additional resources are also provided in this GitHub repo.
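As a small taste of the message-passing model mentioned above, here is a minimal sketch using mpi4py (our choice of package for illustration; the course itself may use different tooling). It assumes an MPI runtime is installed and would typically be launched with something like `mpirun -n 4 python reduce_demo.py`.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

# Each rank computes a partial value; rank 0 collects and combines them.
local_value = rank + 1
total = comm.reduce(local_value, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of 1..{size} computed across {size} ranks:", total)
```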

For those new to quantum

This course will make use of fundamental tools and concepts from quantum computing with minimal introductory review. We recommend that participants have at least a working knowledge of Qiskit, familiarity with quantum gates and circuits, and some exposure to sampling-based algorithms. The resources listed below should provide helpful preparation.

Additional resources are also provided in this GitHub repo.
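If you want a quick self-check of the Qiskit prerequisites, the minimal sketch below (assuming Qiskit 1.x with its built-in reference primitives) builds a Bell-state circuit and samples it locally; nothing here is specific to this course.

```python
from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler

# Build a two-qubit Bell-state circuit and measure both qubits.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Sample the circuit with the local reference sampler (no hardware needed).
sampler = StatevectorSampler()
result = sampler.run([qc], shots=1000).result()
counts = result[0].data.meas.get_counts()
print(counts)  # expect roughly half '00' and half '11'
```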

Learners of all backgrounds may find this guide useful; it covers the SPANK plugin for quantum resource management and gives a brief overview of Slurm.

There are a few ways in which the uniqueness of quantum computing makes it procedurally different from classical computing resources, in ways that are material to this course. For example, there is no good quantum analog of RAM: information is stored and processed in the states of the qubits themselves. While measurements allow some features of the qubits to be recorded classically, such measurements destroy much of the richness of the quantum state, including superposition and entanglement. Further, quantum computing resources are not currently housed on the same node as other HPC resources, and users of quantum resources will often not have the same level of scheduling control that they have over classical HPC resources. These realities will be reiterated in the appropriate lessons. The takeaway here is that quantum computers are poised to change the world and must be integrated with HPC, but they are not "just another" HPC resource that can be controlled and used in the same way as CPUs, GPUs, and so on. Quantum computers change the way we can approach many computing problems.
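To make the point about measurement concrete, the small sketch below (again assuming Qiskit) compares two single-qubit states, |+⟩ and |−⟩, that differ only by a relative phase: their statevectors are different, but their computational-basis measurement statistics are identical, so sampled bitstrings alone cannot recover that phase information.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# |+> = H|0> and |-> = H|1> differ only by the sign of one amplitude.
plus = QuantumCircuit(1)
plus.h(0)

minus = QuantumCircuit(1)
minus.x(0)
minus.h(0)

sv_plus = Statevector.from_instruction(plus)
sv_minus = Statevector.from_instruction(minus)

print("Amplitudes of |+>:", sv_plus.data)    # [ 0.707,  0.707]
print("Amplitudes of |->:", sv_minus.data)   # [ 0.707, -0.707]

# Z-basis measurement probabilities are identical for both states,
# so measured bitstrings alone cannot distinguish them.
print("P(0), P(1) for |+>:", sv_plus.probabilities())
print("P(0), P(1) for |->:", sv_minus.probabilities())
```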


About this course

By the end of this course, you will be able to do more than just repeat technical terms—you will understand how to manage a modern hybrid workflow that assigns specific sub-tasks to a quantum processor while CPUs and GPUs handle the remaining work. You will learn how to write scripts for jobs that transition smoothly between classical nodes and QPUs, interpret the results with accuracy, and recognize where quantum acceleration can truly improve calculations (and where it cannot). Equally important, you will practice maintaining a growth mindset: in a new and rapidly evolving field, no one learns everything at once, and real progress comes from iterating, experimenting, and asking questions.

This course is broken down into five lessons, which cover the following topics:

Course outline

  • Lesson 1 - Background and motivation (this lesson)
  • Lesson 2 - Compute resources and their management
  • Lesson 3 - Programming models that include heterogeneous computing environments
  • Lesson 4 - Quantum algorithms for hybrid workflows, specifically SQD
  • Lesson 5 - Future outlook and direction

Think of this course as your launchpad—the place where you build the mental toolkit and the self-confidence to explore the quantum-classical frontier long after you complete the final lesson.


References

[1] https://www.hpcwire.com/off-the-wire/gordon-bell-prize-awarded-to-ibm-and-leading-university-researchers/