Simulating nature
Watch this video from Olivia Lanes on simulating nature with quantum computers.
This lesson uses content from this tutorial:
Utility-scale error mitigation with probabilistic error amplification tutorial
Introduction
One of the most compelling applications of quantum computers is their ability to simulate natural phenomena. In this lesson, we will explore how quantum computers are used to solve quantum dynamics problems—specifically, how they help us understand the time-evolution of a quantum system.
First, we will take a broad look at the general steps involved in conducting these simulations. Then, we will examine a concrete example: the experiment that IBM presented in 2023, which showcased the concept of quantum utility. This experiment serves as an excellent case study for understanding the practical steps and implications of simulating quantum dynamics with real quantum hardware. By the end, you will have a clearer picture of how researchers approach these challenges and why quantum simulation holds such promise for advancing our understanding of the natural world.
Richard Feynman gave a highly influential lecture at Caltech in 1959. It was famously titled “There’s Plenty of Room at the Bottom,” in playful allusion to the vast, unexplored possibilities at the microscopic scale. Feynman argued that much of physics at the atomic and subatomic levels had yet to be uncovered.
The significance of the talk grew in the 1980s as technology progressed. During this period, Feynman revisited these ideas in a 1981 keynote at MIT's Physics of Computation conference, later published as a paper called "Simulating Physics with Computers." There, he posed a bold question: could computers be used to perform exact simulations that replicate nature's behavior at the quantum level? Feynman suggested that, instead of relying on rough approximations to model atomic processes, we could use computers that harness the laws of quantum mechanics themselves—not merely to model nature, but to emulate it.
It is this type of physical simulation that we will examine throughout this lesson.
Recall this timeline graphic introduced in a previous episode. At one end of the spectrum, we see problems that are straightforward to solve and do not require the enhanced speed quantum computing might bring.
At the opposite end are extremely challenging problems that demand fully fault-tolerant quantum machines — technology that is not yet available. Fortunately, many simulation problems are believed to fall somewhere in the middle of this timeline, within the range where today’s quantum computers can already be effectively applied. There are many reasons to be excited and intrigued by this prospect, as simulating nature forms the foundation for a wide range of promising applications.
The following information covers the general workflow in nature simulations and then a specific instance of the workflow to replicate results from a well-known study.
General workflow
Before anyone can apply quantum computing to these exciting areas, it's important to first understand the basic steps in a typical simulation workflow:
- Identify system Hamiltonian
- Hamiltonian encoding
- State preparation
- Time-evolution of the state
- Circuit optimization
- Circuit execution
- Post-processing
The process begins by identifying a quantum system of interest. This helps determine the Hamiltonian that governs its time evolution, as well as a meaningful description of its initial properties, or its state. Next, you need to select an appropriate method to implement the time evolution of this state. Note that the first four steps in this workflow are all part of the Mapping step in the Qiskit patterns framework.
After setting up the time-evolution circuit, the subsequent stages involve performing the actual experiment. This typically includes optimizing the quantum circuit that implements the time-evolution algorithm, running the circuit on quantum hardware, and post-processing the results. These are the same as the last three steps in the Qiskit patterns framework.
Next, we'll discuss what these steps mean before we move on to coding.
1. Identify the system Hamiltonian
The first essential step in performing a simulation experiment is to identify the Hamiltonian that describes the system. In many cases, the Hamiltonian is well established. However, we often construct it by summing up the energy contributions from smaller parts of the system. This is typically expressed as a sum of terms:
$$H = \sum_{j} H_j$$

where each term $H_j$ acts on one of the local subsystems (like a single particle or a small group of particles) of the total Hamiltonian $H$. In the case of indistinguishable elementary particles, it is important to determine whether the system involves fermions or bosons. Fermions, such as electrons, obey the Pauli Exclusion Principle, meaning no two identical fermions can occupy the same quantum state. Unlike fermions, multiple bosons can exist in the same quantum state, and this difference affects the system's statistics and how it must be modeled.
In practice, people are often interested in physical systems in which the elements are presumed to be well-separated or labeled, and thus distinguishable, as in spins on a lattice.
This system consists of magnetic dipole spins arranged on a lattice, which are treated as distinguishable particles because each one can be labeled by its lattice site. This system is described by the Transverse-Field Ising Model, and its Hamiltonian is constructed from the sum of two parts:

$$H = -J \sum_{\langle i,j \rangle} Z_i Z_j + h \sum_{i} X_i$$

where the first term represents the interaction energy between neighboring spins. Here $\langle i,j \rangle$ indicates that we sum over all pairs of spins that are directly connected on the lattice, $Z_i$ and $Z_j$ are the Pauli-Z matrices, which represent the state of the spins at sites $i$ and $j$, and $J$ is the coupling constant, which defines the strength of this interaction. The second term represents the influence of an external magnetic field applied across the entire system. Here $X_i$ is the Pauli-X matrix acting on the individual spin at site $i$, and $h$ indicates the strength of this external field.
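To make this Hamiltonian concrete, here is a small NumPy sketch (illustrative only, not part of the tutorial) that builds the transverse-field Ising Hamiltonian for a three-spin chain as a dense matrix. A utility-scale experiment would instead represent the operator symbolically, for example with Qiskit's SparsePauliOp, since dense matrices grow exponentially with qubit count.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit system."""
    ops = [site_op if k == site else I2 for k in range(n)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def tfim_hamiltonian(n, J, h):
    """H = -J * sum_<i,i+1> Z_i Z_{i+1} + h * sum_i X_i on a 1D chain."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):  # nearest-neighbor ZZ interactions
        H += -J * op_on(Z, i, n) @ op_on(Z, i + 1, n)
    for i in range(n):  # transverse field on every site
        H += h * op_on(X, i, n)
    return H

H = tfim_hamiltonian(3, J=1.0, h=0.5)
print(H.shape)  # (8, 8); Hermitian by construction
```

With the field switched off ($h = 0$), the ground states are the fully aligned spin configurations, exactly as the first term of the Hamiltonian suggests.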
2. Hamiltonian encoding
The next step is to translate the Hamiltonian into a form that a quantum computer can process, which we call encoding. This encoding process depends critically on the type of particles in the system: distinguishable or indistinguishable and, if indistinguishable, fermionic or bosonic.
If you have a system with distinguishable particles, like spins fixed on a lattice, which we took a simple look at above, the Hamiltonian is often already written in a language compatible with qubits. The Pauli-Z operator, for instance, naturally describes a spin's up or down, and no special encoding is needed.
When simulating indistinguishable particles, whether fermions or bosons, it is necessary to apply an encoding transformation. These particles are described within a special mathematical framework called second quantization, which tracks the occupation number of each quantum state by introducing creation ($a_j^\dagger$) and annihilation ($a_j$) operators: the creation operator adds one particle to state $j$, while the annihilation operator removes one particle from state $j$. Within this framework, fermionic operators can be mapped to qubit operators using transformations such as Bravyi-Kitaev or Jordan-Wigner. The Jordan-Wigner transformation defines the fermionic creation operator

$$a_j^\dagger = \left( \prod_{k<j} Z_k \right) \frac{X_j - iY_j}{2},$$

which fills the $j$-th quantum state with a fermion, and the corresponding fermionic annihilation operator $a_j$, which empties the $j$-th state. You can find more details on the Jordan-Wigner transformation in our Quantum Computing in Practice, episode 5 - Mapping. Similarly, bosons require their own encoding methods, such as the Holstein-Primakoff transformation, to be represented by qubits.
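To see how the Jordan-Wigner construction behaves, the following NumPy sketch (an illustration under the standard convention in which $|0\rangle$ is the unoccupied state, not code from the tutorial) builds the Z-string operators and checks the canonical fermionic anticommutation relations:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(ops):
    """Tensor together a list of single-qubit operators."""
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def jw_annihilation(j, n):
    """Jordan-Wigner a_j: Z string on qubits 0..j-1, (X + iY)/2 on qubit j."""
    return kron_chain([Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1))

n = 3
a0, a1 = jw_annihilation(0, n), jw_annihilation(1, n)
anti = lambda A, B: A @ B + B @ A  # anticommutator {A, B}

print(np.allclose(anti(a0, a0.conj().T), np.eye(2**n)))  # {a_0, a_0^dag} = I
print(np.allclose(anti(a0, a1), 0))  # {a_0, a_1} = 0
```

The Z strings are precisely what enforce the fermionic sign structure: without them, operators on different sites would commute rather than anticommute.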
Ultimately, whether the path is direct or requires a translation, the goal is the same: to express the system's Hamiltonian in the form of Pauli spin operators that a quantum computer can understand and execute.
3. State preparation
After encoding the desired Hamiltonian into the quantum computer's gate set, the next important step is to select an appropriate initial quantum state to begin the simulation. The choice of initial state influences not only the convergence of variational algorithms such as the Variational Quantum Eigensolver (VQE) but also affects the accuracy and efficiency of time evolution and sampling. Essentially, the initial state serves as the starting point for the computation, laying the groundwork for extracting useful observables from the quantum system being modeled. Ideally, this state should represent a physically meaningful configuration of the system under study.
For many quantum chemistry simulations, the Hartree-Fock state can be a good starting point. In the language of second quantization, the Hartree-Fock state ($|\mathrm{HF}\rangle$) is created by applying creation operators ($a_j^\dagger$) for each of the $N$ lowest-energy orbitals to the vacuum state ($|\mathrm{vac}\rangle$), a state with no electrons:

$$|\mathrm{HF}\rangle = a_N^\dagger \cdots a_2^\dagger a_1^\dagger |\mathrm{vac}\rangle$$
Additionally, an easily prepared ansatz with significant overlap to the true ground state can serve as a good initial state for chemistry problems, such as finding the ground state energy.
More generally, we can write an arbitrary $n$-qubit state as a superposition of computational basis states, $|\psi\rangle = \sum_x c_x |x\rangle$, with coefficients $c_x$ satisfying the normalization condition $\sum_x |c_x|^2 = 1$. Preparing such a state can generally be approached by applying a specific operator $U$ to the initial state, which is typically the all-zero standard basis state $|0\rangle^{\otimes n}$ by convention.
However, this process often requires an exponential number of CNOT gates, making it resource intensive in general. For this reason, we often focus on preparing initial states whose implementation demands are more modest. A common and practical choice is a product state, in which the qubits are not entangled; such a state can be prepared using only single-qubit operations, significantly reducing the resource demands and complexity of state preparation.
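As a quick illustration of why product states are cheap, this NumPy sketch (with hypothetical helper names) prepares an arbitrary product state using only one single-qubit Ry rotation per qubit:

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def product_state(angles):
    """Tensor product of Ry(theta_j)|0> for each qubit: no entangling gates needed."""
    zero = np.array([1, 0], dtype=complex)
    state = np.array([1.0 + 0j])
    for th in angles:
        state = np.kron(state, ry(th) @ zero)
    return state

# |0> (x) |+> (x) |1>, built from single-qubit rotations only
psi = product_state([0.0, np.pi / 2, np.pi])
print(np.isclose(np.linalg.norm(psi), 1.0))  # normalized
```

The circuit depth here is constant (one rotation layer), in contrast to the exponential gate counts that arbitrary entangled states can require.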
4. Time-evolution of the state
Now that the initial state is set, we can finally begin the simulation itself: examining how the system's state $|\psi(0)\rangle$ evolves into $|\psi(t)\rangle$ after some time $t$. In quantum mechanics, this evolution is described by a single mathematical operation called the time-evolution operator:

$$U(t) = e^{-iHt}$$

where we have set $\hbar = 1$ by convention. Applying this operator to our initial state gives us the final state:

$$|\psi(t)\rangle = e^{-iHt} |\psi(0)\rangle$$
However, building a quantum circuit that directly implements the full operator is typically impossible when our Hamiltonian is a sum of different parts. Therefore, we need Trotterization.
In simple terms, Trotterization is a technique for approximating the exponential of a matrix (here the Hamiltonian, $H$), especially when the exponent contains non-commuting operators ($[A, B] \neq 0$). Often the Hamiltonian consists of multiple operators that do not commute. In this case, you cannot separate their exponentials:

$$e^{-i(A+B)t} \neq e^{-iAt} e^{-iBt}$$

A useful approach is to alternately apply their time-evolution exponentials over small durations, $t/n$, a total of $n$ times. In the case of these two non-commuting contributions, we would write

$$e^{-i(A+B)t} \approx \left( e^{-iAt/n} \, e^{-iBt/n} \right)^n$$

The error introduced by this approximation is called the Trotter error. We can reduce this error by increasing $n$, but this comes at a cost. More advanced, higher-order formulas (the second-order and other variants) also exist. For example, the second-order formula offers better accuracy by applying the steps in a symmetric pattern:

$$e^{-iHt} \approx \left( \prod_{k=1}^{L} e^{-iH_k t/(2n)} \prod_{k=L}^{1} e^{-iH_k t/(2n)} \right)^n$$

Here, $L$ is the number of non-commuting terms, $H_k$, in the Hamiltonian to be broken up in this way, and $n$ is the number of small time steps into which the evolution is broken. Note the reverse order of operators in the second product in the second-order treatment.
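The Trotter error can be seen directly in a small numerical experiment. This NumPy sketch (illustrative, using a toy two-qubit Hamiltonian, not the utility-scale model) compares exact evolution against the first-order product formula as the number of steps $n$ grows:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_hermitian(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition (no SciPy needed)."""
    evals, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * evals * t)) @ V.conj().T

# Two non-commuting pieces of a toy two-qubit Ising-type Hamiltonian.
A = np.kron(Z, Z)                    # ZZ interaction
B = np.kron(X, I2) + np.kron(I2, X)  # transverse field

t = 1.0
U_exact = expm_hermitian(A + B, t)

def trotter1(t, n):
    """First-order Trotter: (e^{-iAt/n} e^{-iBt/n})^n."""
    step = expm_hermitian(A, t / n) @ expm_hermitian(B, t / n)
    return np.linalg.matrix_power(step, n)

# Operator-norm distance to the exact evolution shrinks as n grows.
errors = {n: np.linalg.norm(trotter1(t, n) - U_exact, 2) for n in (1, 4, 16)}
print(errors)
```

Doubling the number of steps roughly halves the first-order error, which is exactly the trade-off described above: more steps mean less approximation error but a deeper circuit.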
See the Trotterization section in the Quantum Diagonalization Algorithms course for more details.
5. Circuit optimization
After generating the Trotterized circuit, the mapping step is complete, and we can proceed to circuit optimization. This process involves several key tasks:
- Establish a qubit layout that maps the abstract qubits of the circuit to the physical qubits on the hardware. This step is necessary because the hardware’s architecture often has specific connectivity constraints, while quantum circuit designs typically assume any qubit can interact with any other.
- Insert swap gates as needed to enable interactions between qubits that are not directly connected on the device.
- Translate the circuit’s gates into Instruction Set Architecture (ISA) instructions that the hardware can execute directly.
- Perform circuit optimizations to reduce the circuit depth and gate count. This optimization can also be applied earlier, on the virtual circuit before the qubits are assigned to specific hardware connections.
It is important to note that much of this optimization process is handled automatically by tools in Qiskit. We will explore exactly how this works later in this lesson.
6. Circuit execution
After completing the optimization step, we are ready to execute the circuit using a primitive. We are considering a simulation experiment in which the goal is to understand how certain properties of the system change over time. For this purpose, the Estimator primitive is the most appropriate choice, as it allows you to measure the expectation values of observables that correspond to these properties.
Next, we use options including error suppression and mitigation techniques, to improve the Estimator's accuracy. Finally, we run the experiment to collect the results.
7. Post-process
The final step is to post-process the collected data. This involves extracting the measured expectation values, or, if the Sampler primitive was used, the sampled probability distribution in the computational basis. When only the expectation values of the relevant observables are needed, these can be directly obtained from the Estimator primitive, available both as raw results and with error mitigation applied. Often, these measured expectation values serve as the starting point for additional calculations involving other quantities of interest. Such additional calculations typically do not require quantum computation and can be efficiently performed on a classical computer.
Replicating the "Utility" paper
This part is a high-level walk-through of Utility-scale error mitigation with probabilistic error amplification tutorial, which replicates the result of the Evidence for the Utility of Quantum Computing Before Fault Tolerance paper. We strongly suggest you open the referenced tutorial along with this session.
We will now examine a concrete example from a highly influential paper published by IBM in 2023, titled Evidence for the Utility of Quantum Computing Before Fault Tolerance, often referred to as the "Utility paper".
Upon its release, this work quickly became a landmark study within the quantum computing community. Its central thesis is that a noisy quantum computer, utilizing 127 qubits and 2,880 gates, can produce accurate expectation values for quantum circuits that lie beyond the reach of brute-force classical simulation methods, which attempt exact simulation of the same circuits.
This study was particularly significant because it demonstrated that quantum computers can be used to verify or compare results with approximate classical simulation methods, such as tensor network algorithms—especially in scenarios where the exact solution is unknown beforehand.
Another remarkable aspect of this work is that it has been widely reproduced: researchers and users now have the ability to replicate and verify the experiment using IBM’s cloud-accessible quantum systems and the Qiskit software framework. In the following, we will guide you through the steps to perform this replication yourself by reviewing IBM's tutorial step by step.
In this lesson, we discuss the specific steps required to translate the problem into inputs that a quantum device can process. We focus on simulating the dynamics of the total magnetization in a system of magnetic dipole spins arranged on a lattice, subjected to an external magnetic field. This system can be described by an Ising model with a transverse magnetic field. We represent it using a parametrized quantum circuit, where the parameters correspond to the tunable values of the spin-spin ($J$) interactions and the strength of the external, transverse magnetic field ($h$, parametrized using $\theta$).
Since this series is titled Quantum Computing in Practice, we will cover additional details of the experimental techniques used to improve the quality of the results. One important procedure involves identifying and removing "bad" qubits—those with low gate fidelities or short decoherence times—that could significantly impact the experiment's outcome. Such problematic qubits may arise from poor calibration or interactions with two-level systems (TLS). Removing these qubits alters the hardware's native topology, effectively changing the lattice on which the system is simulated.
Additionally, we will discuss how to construct the parametrized quantum circuit that implements the system’s time-evolution using Trotterization. A key part of this process is identifying entangling layers within the circuit, which play a crucial role in the main error mitigation technique.
Qiskit patterns step 1: Map
The tutorial accomplishes the mapping step similarly to the general approach described above. Specific to this problem, the tutorial does the following:
- Creates a parameterized Ising model circuit
- Creates entangling layers and removes bad qubits
- Generates a Trotterized version of the circuit
In the tutorial, we begin by creating a series of helper functions early in the notebook. These functions are designed to simplify the process as we proceed. These are not a required part of the procedure, but this is good common practice when working on similar experiments: break down the problem into manageable components. The functions include
- Remove qubit couplings
- Define qubit couplings
- Construct the layer couplings
- Construct the entangling layer
- Define the Trotterized circuit
Here, let us explore topics related to these functions a bit more.
Layer couplings
Layer couplings define how qubits interact with their neighbors during the simulation. Our quantum devices utilize a heavy-hexagonal layout, a distinctive pattern for connecting qubits. Within this layout, the connections between qubits—known as "edges"—can be divided into three distinct sets. Importantly, no two connections in the same set share a qubit. This organization addresses a key hardware constraint: on a real quantum computer, a qubit can participate in only one two-qubit gate at any given time.
By structuring all connections into three separate layers, two-qubit gates can be applied across the entire device in three successive rounds. This ensures that no qubit is involved in more than one gate per layer. These gates implement the ZZ interaction in the Ising model, and they are repeated at each time step of the simulation (each Trotter step).
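The idea of grouping couplings into parallelizable layers can be sketched as a simple greedy edge partition. This is an illustrative stand-in for the tutorial's helper functions, not their actual implementation; a heavy-hex device needs three such layers, while the small path graph below needs only two:

```python
def build_layers(edges):
    """Greedily partition couplings into layers where no qubit appears twice,
    so all two-qubit gates within a layer can be applied in parallel."""
    layers = []
    for a, b in edges:
        for layer in layers:
            if all(a not in pair and b not in pair for pair in layer):
                layer.append((a, b))
                break
        else:  # no existing layer can hold this edge without a qubit conflict
            layers.append([(a, b)])
    return layers

# A linear chain 0-1-2-3-4 partitions into two alternating layers.
layers = build_layers([(0, 1), (1, 2), (2, 3), (3, 4)])
print(layers)  # [[(0, 1), (2, 3)], [(1, 2), (3, 4)]]
```

Formally this is an edge coloring of the coupling graph; the heavy-hexagonal lattice has maximum degree three, which is why its edges split into exactly three such sets.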
Additionally, a technique called twirling is employed to modify noise characteristics in the device. Twirling transforms the noise such that even simple noise models become more accurate representations of the physical errors. This refinement enables more precise characterization of the noise, which can then be leveraged to improve error mitigation strategies.
Removing "bad" qubits
The next step involves removing the “bad” qubits from the list of physical qubits available for the experiment. A qubit can become “bad” for various reasons. Sometimes it’s simply a matter of poor calibration, which can be fixed by recalibrating. In other cases, the issue is more complex and related to what’s known as a two-level system (TLS) defect. These TLS defects cause fluctuations in qubit parameters and relaxation. Resolving this often requires warming up the entire system and then cooling it down again—a process that can take some time and isn’t feasible when accessing quantum hardware remotely via the cloud.
For now, the simplest approach is to exclude these problematic qubits from the pool of physical qubits that will be used in the experiment. IBM Quantum Platform® makes it easy to identify which qubits are underperforming on a QPU. You can either open the QPU and visualize their characteristics directly on the platform or download the data from the platform as a CSV file. Next, create a list of qubits to exclude and remove them from the total set of physical qubits on the device.
Removing unreliable qubits ensures that the system’s behavior is more predictable, which improves the accuracy of the experiment. It also allows for better noise modeling, which is essential for implementing effective error mitigation strategies.
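A minimal sketch of this filtering step might look like the following. The CSV columns and thresholds here are hypothetical; the actual calibration file downloaded from IBM Quantum Platform may use different column names and units:

```python
import csv
import io

# Hypothetical calibration snippet; real platform CSV columns may differ.
calibration_csv = """qubit,t1_us,t2_us,readout_error
0,180.2,120.5,0.010
1,25.1,10.3,0.200
2,210.0,150.7,0.008
3,95.4,30.2,0.055
"""

def good_qubit_list(csv_text, min_t1=50.0, max_readout_err=0.05):
    """Keep qubits whose T1 and readout error pass simple thresholds."""
    good = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if float(row["t1_us"]) >= min_t1 and float(row["readout_error"]) <= max_readout_err:
            good.append(int(row["qubit"]))
    return good

print(good_qubit_list(calibration_csv))  # qubits 1 and 3 are excluded
```

The resulting list plays the role of the `good_qubits` variable used when building the Trotterized circuit later in the tutorial.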
Trotterized circuit
It is now time to construct our Trotterized circuit. As discussed earlier, Trotterization breaks down the time evolution into discrete steps, so we need to choose how many steps to use. For this example, we will select six steps. Generally, the approach involves balancing the Trotter error—an approximation error introduced by the algorithm—with errors caused by decoherence. Increasing the number of Trotter steps reduces the approximation error but requires deeper quantum circuits, which are more susceptible to decoherence noise.
The circuit will be defined using several parameters: the theta parameter representing the strength of the external magnetic field, the couplings between layers, the number of steps, the number of qubits, and, of course, the choice of the device backend. Since the magnetization of the system depends on the external magnetic field’s strength, it is valuable to run the simulation at different magnetic field values. This variation corresponds to different rotation angles for the RX gate in the circuit.
```python
import numpy as np
from qiskit.circuit import Parameter

num_steps = 6  # Trotter steps
theta = Parameter("theta")
circuit = trotter_circuit(
    theta, layer_couplings, num_steps, qubits=good_qubits, backend=backend
)

num_params = 12
# 12 parameter values for Rx between [0, pi/2].
# Reshape to an outer-product broadcast with the observables.
parameter_values = np.linspace(0, np.pi / 2, num_params).reshape((num_params, 1))
num_params = parameter_values.size
```
Qiskit patterns step 2: Optimize
Now that we have generated our circuit, the next step is to optimize it. The first part of this process involves defining a pass manager. In the context of the Qiskit SDK, transpilation is the process of transforming an input circuit into a form that is suitable for execution on a quantum device. This transformation happens through a sequence of steps known as transpiler passes.
A pass manager is an object that holds a list of these transpiler passes and can apply them to a circuit. To create one, you initialize a PassManager
with the desired list of transpiler passes. Ultimately, the pass manager produces an ISA circuit—a circuit expressed in terms of the backend’s Instruction Set Architecture (ISA). This means the circuit is represented using gates that are native to the backend hardware, although it does not yet include the timing information required to run the circuit on the device.
Qiskit patterns step 3: Execute using primitives
Now, it is time to run our circuit. We will use the Estimator as our primary tool for this experiment because our goal is to measure the total magnetization of the system. The Estimator is specifically designed to estimate the expectation values of observables, making it the ideal choice here. At this stage, it is also essential to configure our error mitigation settings. We will apply Zero Noise Extrapolation (ZNE) to improve the accuracy of our results. In the tutorial, you will see that we specify two or more noise factor values at which to evaluate the extrapolated models, and we select “Probabilistic Error Amplification” (PEA) as our amplification method. PEA is preferred for this experiment because it scales significantly better than other options, which is crucial when working with systems of 100 or more qubits.
This is all that is required to run the experiment.
Error mitigation interlude
Before we proceed to post-processing, let’s take a brief moment to clarify what is meant by Zero Noise Extrapolation (ZNE). We have touched on this concept in earlier episodes, but it’s worth reviewing briefly. ZNE is an error mitigation technique designed to reduce the impact of unknown noise that occurs during the execution of quantum circuits, provided that this noise can be scaled in a controlled way. The method relies on the assumption that expectation values scale with noise according to a known function:

$$\langle O \rangle(\lambda) = f(\lambda)$$

where $\lambda$ represents the noise strength, which can be intentionally amplified.
The process of implementing ZNE consists of the following steps:
- Amplify the circuit noise for various noise factors $\lambda_1$, $\lambda_2$, ….
- Execute each noise-amplified circuit to measure the corresponding expectation values $\langle O \rangle_{\lambda_1}$, $\langle O \rangle_{\lambda_2}$, ….
- Extrapolate these results back to the zero-noise limit $\langle O \rangle_{\lambda \to 0}$.
This technique allows us to estimate what the outcome would be if there were no noise, improving the accuracy of quantum computations.
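The extrapolation step can be sketched numerically. The expectation values below are made up for illustration, and a simple linear model is used; in practice, other extrapolation models such as exponential fits are also common:

```python
import numpy as np

# Hypothetical measured expectation values at amplified noise factors.
noise_factors = np.array([1.0, 1.5, 2.0])
expvals = np.array([0.81, 0.73, 0.66])

# Fit a straight line and read off its value at zero noise.
slope, intercept = np.polyfit(noise_factors, expvals, 1)
mitigated = intercept  # extrapolated zero-noise expectation value
print(round(mitigated, 3))  # 0.958
```

Note that the mitigated estimate lies beyond all the measured points: ZNE trades increased statistical variance for reduced noise bias, which is why the quality of the fit model matters.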
The primary challenge in effectively implementing ZNE is developing an accurate noise model for the expectation value and amplifying the noise in a controlled and well-understood manner. Common techniques for error amplification in ZNE include scaling pulse duration through calibration, repeating gates using identity cycles, and adding noise via sampling Pauli channels—a method known as Probabilistic Error Amplification (PEA).
Among these, PEA is often the preferred choice for several reasons:
- Pulse stretching incurs a high computational cost.
- Gate folding, which uses identity insertions, lacks strong theoretical guarantees for preserving the noise bias.
- PEA is applicable to any circuit executed with a native noise factor, although it requires learning the noise model in advance.
PEA operates under the assumption of a layer-based noise model similar to that used in probabilistic error cancellation (PEC). However, unlike PEC, it avoids the exponential sampling overhead that typically grows with circuit noise. This efficiency makes PEA a practical and robust approach for noise amplification in ZNE, facilitating more reliable quantum error mitigation.
To characterize the noise model, we first need to identify the distinct layers of two-qubit operations within the circuit. For each of these layers, we apply a Pauli twirling procedure to the two-qubit gates, which helps ensure that the noise can be accurately described by a damping noise model. Next, we repeat pairs of identity layers at various depths, and finally, we fit the fidelity values to determine the error rates for each noise channel.
While it is beneficial to understand this method conceptually, implementing it manually in Qiskit is much simpler, as demonstrated in the accompanying tutorial.
Qiskit patterns step 4: Post-process
After the experiment finishes, you can view the results by post-processing them. The dotted gray line in the plotted data represents the results obtained using approximate classical methods, with the approximation error reduced to a low threshold. The raw data points for the various noise factors, selected at the outset, are clearly offset from this dotted line. In contrast, the solid blue line displays the data after applying our ZNE processing, which noticeably brings the results much closer to the exact values. In summary, the values obtained under normal noise conditions (noise factor nf=1.0) show significant deviation from the exact results. Meanwhile, the mitigated values align closely with the exact ones, demonstrating the effectiveness of the PEA-based noise mitigation technique.
Summary
To quickly summarize what we have learned:
- Quantum simulation is one of the most promising application areas in the short to mid term.
- It has wide-ranging applications, from pharmaceuticals to high-energy physics, materials science, and more.
- The Utility paper from IBM, published in 2023, pointed the way toward using quantum computers for scientific discovery, and we worked through the tutorial associated with that paper.
- The steps to work through a simulation problem from start to finish are relatively straightforward, and we hope that you can now use this video and tutorial as a guide for even more simulation problems.