Instances and extensions
This chapter will cover several quantum variational algorithms, including
- Variational Quantum Eigensolver (VQE)
- Subspace Search VQE (SSVQE)
- Variational Quantum Deflation (VQD)
- Quantum Sampling Regression (QSR)
By using these algorithms, we will learn about several design ideas that can be incorporated into custom variational algorithms, such as weights, penalties, over-sampling, and under-sampling. We encourage you to experiment with these concepts and share your findings with the community.
The Qiskit patterns framework applies to all of these algorithms, but we will explicitly call out the steps only in the first example.
Variational Quantum Eigensolver (VQE)
VQE is one of the most widely used variational quantum algorithms, setting up a template for other algorithms to build upon.
Step 1: Map classical inputs to a quantum problem
Theoretical layout
VQE's layout is simple:

- Prepare reference operators $U_R$
  - We start from the state $|0\rangle$ and go to the reference state $|\rho\rangle = U_R|0\rangle$
- Apply the variational form $U_V(\vec{\theta})$ to create an ansatz $U_A(\vec{\theta}) := U_V(\vec{\theta})\,U_R$
  - We go from the state $|\rho\rangle$ to $|\psi(\vec{\theta})\rangle = U_V(\vec{\theta})|\rho\rangle$
- Bootstrap at $\vec{\theta}_0$ if we have a similar problem (typically found via classical simulation or sampling)
  - Each optimizer will be bootstrapped differently, resulting in an initial set of parameter vectors (for example, from an initial point $\vec{\theta}_0$).
- Evaluate the cost function $C(\vec{\theta}) = \langle\psi(\vec{\theta})|\hat{\mathcal{O}}|\psi(\vec{\theta})\rangle$ for all prepared states on a quantum computer.
- Use a classical optimizer to select the next set of parameters $\vec{\theta}_{n+1}$.
- Repeat the process until convergence is reached.

This is a simple classical optimization loop in which we evaluate the cost function. Some optimizers may require multiple evaluations to calculate a gradient, determine the next iteration, or assess convergence.
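As a minimal, purely classical illustration of this loop (no quantum pieces; the cost function here is a hypothetical stand-in for the quantum expectation value), a finite-difference gradient descent might look like:

```python
import numpy as np


# Toy stand-in for the quantum cost function C(theta)
def cost(theta):
    return np.sum(np.cos(theta))  # minimum of -len(theta) at theta = pi


theta = np.full(4, 1.0)  # initial point theta_0
learning_rate = 0.3
eps = 1e-6

for _ in range(200):
    # Each iteration requires several cost evaluations just to
    # estimate the gradient by central finite differences
    grad = np.array(
        [
            (cost(theta + eps * e) - cost(theta - eps * e)) / (2 * eps)
            for e in np.eye(len(theta))
        ]
    )
    theta = theta - learning_rate * grad  # select the next parameters
```

A real VQE run replaces `cost` with an Estimator-based evaluation and typically delegates the update rule to an off-the-shelf optimizer such as COBYLA.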
Here is the example for the following observable:

$$\hat{\mathcal{O}} = 2\,II - 2\,XX + 3\,YY - 3\,ZZ$$
Implementation
```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.circuit.library import TwoLocal
import numpy as np

theta_list = (2 * np.pi * np.random.rand(1, 8)).tolist()

observable = SparsePauliOp.from_list([("II", 2), ("XX", -2), ("YY", 3), ("ZZ", -3)])

reference_circuit = QuantumCircuit(2)
reference_circuit.x(0)

variational_form = TwoLocal(
    2,
    rotation_blocks=["rz", "ry"],
    entanglement_blocks="cx",
    entanglement="linear",
    reps=1,
)
ansatz = reference_circuit.compose(variational_form)

ansatz.decompose().draw("mpl")
```
```python
def cost_func_vqe(parameters, ansatz, hamiltonian, estimator):
    """Return an estimate of the energy from the estimator.

    Parameters:
        parameters (ndarray): Array of ansatz parameters
        ansatz (QuantumCircuit): Parameterized ansatz circuit
        hamiltonian (SparsePauliOp): Operator representation of the Hamiltonian
        estimator (Estimator): Estimator primitive instance

    Returns:
        float: Energy estimate
    """
    estimator_job = estimator.run([(ansatz, hamiltonian, [parameters])])
    estimator_result = estimator_job.result()[0]
    cost = estimator_result.data.evs[0]
    return cost
```
```python
from qiskit.primitives import StatevectorEstimator

estimator = StatevectorEstimator()
```
We can use this cost function to calculate the optimal parameters:
```python
# SciPy minimizer routine
from scipy.optimize import minimize

x0 = np.ones(8)

result = minimize(
    cost_func_vqe, x0, args=(ansatz, observable, estimator), method="COBYLA"
)
result
```
```text
 message: Optimization terminated successfully.
 success: True
  status: 1
     fun: -5.999999982445723
       x: [ 1.741e+00  9.606e-01  1.571e+00  2.115e-05  1.899e+00
            1.243e+00  6.063e-01  6.063e-01]
    nfev: 136
   maxcv: 0.0
```
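As a sanity check, this observable acts on only two qubits, so it is small enough to diagonalize exactly with NumPy alone; the smallest eigenvalue matches the VQE estimate of roughly $-6$:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The observable 2*II - 2*XX + 3*YY - 3*ZZ as a dense 4x4 matrix
H = (
    2 * np.kron(I2, I2)
    - 2 * np.kron(X, X)
    + 3 * np.kron(Y, Y)
    - 3 * np.kron(Z, Z)
)

# eigvalsh returns the eigenvalues in ascending order
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)  # the smallest eigenvalue is -6
```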
Step 2: Optimize problem for quantum execution
We will select the least-busy backend, and import the necessary components from qiskit_ibm_runtime.
```python
from qiskit_ibm_runtime import SamplerV2 as Sampler
from qiskit_ibm_runtime import EstimatorV2 as Estimator
from qiskit_ibm_runtime import Session, EstimatorOptions
from qiskit_ibm_runtime import QiskitRuntimeService

service = QiskitRuntimeService()
backend = service.least_busy(operational=True, simulator=False)
print(backend)
```

```text
<IBMBackend('ibm_brisbane')>
```
We will transpile the circuit using the preset pass manager with optimization level 3, and we will apply the corresponding layout to the observable.
```python
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

pm = generate_preset_pass_manager(backend=backend, optimization_level=3)
isa_ansatz = pm.run(ansatz)
isa_observable = observable.apply_layout(layout=isa_ansatz.layout)
```
Step 3: Execute using Qiskit Runtime primitives
We are now ready to run our calculation on IBM Quantum® hardware. Because the cost function minimization is highly iterative, we will start a Runtime session. This way, we will only have to wait in a queue once. Once the job begins running, each iteration with updates to parameters will run immediately.
```python
x0 = np.ones(8)

estimator_options = EstimatorOptions(resilience_level=1, default_shots=10_000)

with Session(backend=backend) as session:
    estimator = Estimator(mode=session, options=estimator_options)

    result = minimize(
        cost_func_vqe,
        x0,
        args=(isa_ansatz, isa_observable, estimator),
        method="COBYLA",
        options={"maxiter": 200, "disp": True},
    )
    session.close()

print(result)
```
Step 4: Post-process, return result in classical format
We can see that the minimization routine terminated successfully, meaning we reached the default tolerance of the COBYLA classical optimizer. If we require a more precise result, we can specify a smaller tolerance. That may indeed be warranted here, since the hardware result was several percent off from the result obtained with the simulator above.
The value of x obtained is the current best guess for the parameters that minimize the cost function. If iterating to obtain a higher precision, those values should be used in place of the x0 initially used (a vector of ones).
Finally, we note that the function was evaluated 96 times in the process of optimization. That might be different from the number of optimization steps, since some optimizers require multiple function evaluations in a single step, such as when estimating a gradient.
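The restart pattern described above can be sketched with SciPy alone. Here a toy quadratic cost stands in for the quantum cost function (a hypothetical substitute; the real version would pass the Estimator-based `cost_func_vqe` and its arguments):

```python
import numpy as np
from scipy.optimize import minimize


# Toy stand-in for the cost function; its minimum is 0 at params = 0.5
def cost(params):
    return np.sum((params - 0.5) ** 2)


# First pass with COBYLA's default tolerance, starting from a vector of ones
first = minimize(cost, np.ones(8), method="COBYLA")

# Restart at the previous best guess x with a tighter tolerance
refined = minimize(cost, first.x, method="COBYLA", tol=1e-8)
print(refined.fun)  # closer to the true minimum of 0
```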
Subspace Search VQE (SSVQE)
SSVQE is a variant of VQE that allows obtaining the first $k$ eigenvalues of an observable $\hat{\mathcal{O}}$ with $N$ eigenvalues $\{\lambda_j\}_{j=0}^{N-1}$, where $k \leq N$. Without loss of generality, we assume that $\lambda_0 \leq \lambda_1 \leq \dots \leq \lambda_{N-1}$. SSVQE introduces a new idea by adding weights to help prioritize optimizing for the term with the largest weight.

To implement this algorithm, we need $k$ mutually orthogonal reference states $\{|\rho_j\rangle\}_{j=0}^{k-1}$, meaning $\langle\rho_i|\rho_j\rangle = \delta_{ij}$ for $i, j \in \{0, \dots, k-1\}$. These states can be constructed using Pauli operators. The cost function of this algorithm is then:

$$C(\vec{\theta}) = \sum_{j=0}^{k-1} w_j \langle\rho_j|\,U_A^\dagger(\vec{\theta})\,\hat{\mathcal{O}}\,U_A(\vec{\theta})\,|\rho_j\rangle$$

where each $w_j$ is an arbitrary positive number such that $w_i > w_j$ if $i < j$, and $U_A(\vec{\theta})$ is the user-defined variational form.
The SSVQE algorithm relies on the fact that eigenstates corresponding to different eigenvalues are mutually orthogonal. Specifically, the inner product of $U_A(\vec{\theta})|\rho_i\rangle$ and $U_A(\vec{\theta})|\rho_j\rangle$ can be expressed as:

$$\langle\rho_i|\,U_A^\dagger(\vec{\theta})\,U_A(\vec{\theta})\,|\rho_j\rangle = \langle\rho_i|\rho_j\rangle = \delta_{ij}$$

The first equality holds because $U_A(\vec{\theta})$ is a quantum operator and is therefore unitary, so $U_A^\dagger(\vec{\theta})\,U_A(\vec{\theta}) = I$. The last equality holds because of the orthogonality of the reference states $\{|\rho_j\rangle\}$. The fact that orthogonality is preserved through unitary transformations is deeply related to the principle of conservation of information, as expressed in quantum information science. Under this view, non-unitary transformations represent processes where information is either lost or injected.

Weights help ensure that all the states $U_A(\vec{\theta})|\rho_j\rangle$ are eigenstates. If the weights are sufficiently different, the term with the largest weight (that is, $w_0$) will be given priority during optimization over the others. As a result, the state $U_A(\vec{\theta})|\rho_0\rangle$ will become the eigenstate corresponding to $\lambda_0$. Because the $\{|\rho_j\rangle\}$ are mutually orthogonal, the remaining states will be orthogonal to it and, therefore, contained in the subspace spanned by the eigenstates corresponding to $\lambda_1, \dots, \lambda_{N-1}$.

Applying the same argument to the rest of the terms, the next priority is the term with weight $w_1$, so $U_A(\vec{\theta})|\rho_1\rangle$ will become the eigenstate corresponding to $\lambda_1$, and the remaining terms will be contained in the eigenspace of $\lambda_2, \dots, \lambda_{N-1}$.

By reasoning inductively, we deduce that $U_A(\vec{\theta})|\rho_j\rangle$ will be an approximate eigenstate of $\hat{\mathcal{O}}$ for $j = 0, \dots, k-1$.
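We can check numerically that a unitary preserves orthogonality, using a random unitary as a stand-in for $U_A(\vec{\theta})$ (a NumPy-only sketch; the seed and reference states are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Two orthogonal 2-qubit reference states: |01> and |10>
rho_0 = np.array([0, 1, 0, 0], dtype=complex)
rho_1 = np.array([0, 0, 1, 0], dtype=complex)

# A random unitary standing in for the ansatz U_A(theta), obtained
# from the QR decomposition of a random complex matrix
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

# The transformed states stay orthogonal:
# <rho_0| U† U |rho_1> = <rho_0|rho_1> = 0
overlap = np.vdot(U @ rho_0, U @ rho_1)
print(abs(overlap))  # ~0 up to floating-point error
```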
Theoretical layout
SSVQE's layout can be summarized as follows:
- Prepare several reference states by applying a unitary $U_R$ to $k$ different computational basis states
  - This algorithm requires mutually orthogonal reference states $\{|\rho_j\rangle\}_{j=0}^{k-1}$, such that $\langle\rho_i|\rho_j\rangle = \delta_{ij}$ for $i, j \in \{0, \dots, k-1\}$.
- Apply the variational form $U_V(\vec{\theta})$ to each reference state, resulting in the ansatz states $|\psi_j(\vec{\theta})\rangle = U_V(\vec{\theta})|\rho_j\rangle$.
- Bootstrap at $\vec{\theta}_0$ if a similar problem is available (usually found via classical simulation or sampling).
- Evaluate the cost function $C(\vec{\theta}) = \sum_{j} w_j \langle\psi_j(\vec{\theta})|\hat{\mathcal{O}}|\psi_j(\vec{\theta})\rangle$ for all prepared states on a quantum computer.
  - This can be separated into calculating the expectation value $\langle\psi_j(\vec{\theta})|\hat{\mathcal{O}}|\psi_j(\vec{\theta})\rangle$ for each state and multiplying that result by $w_j$.
  - Afterward, the cost function returns the sum of all weighted expectation values.
- Use a classical optimizer to determine the next set of parameters $\vec{\theta}_{n+1}$.
- Repeat the above steps until convergence is achieved.
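To make the cost-evaluation step concrete, here is a purely classical sketch that uses exact linear algebra in place of the Estimator primitive; the observable is the one from the VQE example, and the states and weights are illustrative assumptions:

```python
import numpy as np

# Pauli matrices and the observable from the VQE example
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = (
    2 * np.kron(I2, I2)
    - 2 * np.kron(X, X)
    + 3 * np.kron(Y, Y)
    - 3 * np.kron(Z, Z)
)

# Mutually orthogonal "prepared" states (computational basis states
# standing in for the ansatz applied to each reference state)
states = [
    np.array([0, 1, 0, 0], dtype=complex),
    np.array([0, 0, 1, 0], dtype=complex),
]
weights = [2, 1]  # strictly decreasing, as SSVQE requires

# Each term is an expectation value <psi_j|H|psi_j> scaled by w_j
energies = [np.real(np.vdot(s, H @ s)) for s in states]
weighted_cost = float(np.dot(energies, weights))
print(energies, weighted_cost)
```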
You will be reconstructing SSVQE's cost function in the assessment, but the following snippet should help motivate your solution:
```python
import numpy as np


def cost_func_ssvqe(
    params, initialized_ansatz_list, weights, ansatz, hamiltonian, estimator
):
    """Return a weighted estimate of the energies from the estimator.

    Parameters:
        params (ndarray): Array of ansatz parameters
        initialized_ansatz_list (list of QuantumCircuit): Ansatz composed with each reference state
        weights (list): List of weights
        ansatz (QuantumCircuit): Parameterized ansatz circuit
        hamiltonian (SparsePauliOp): Operator representation of the Hamiltonian
        estimator (Estimator): Estimator primitive instance

    Returns:
        float: Weighted energy estimate
    """
    energies = []

    # Define SSVQE here: estimate the energy of each initialized ansatz
    # and append it to `energies`

    weighted_energy_sum = np.dot(energies, weights)
    return weighted_energy_sum
```