The choice of feature map is perhaps the most important decision a QML engineer can make, as it directly dictates the size of the required circuit. There are four common encoding algorithms:
Basis Encoding
Angle Encoding
Amplitude Encoding
Arbitrary Encoding
There is a wide array of literature on each of these approaches, with each having its own family of algorithms. However, as a rule of thumb, amplitude encoders are more appropriate for large models, since the number of qubits they require scales logarithmically with the number of features to be embedded (roughly ceil(log2(N)) qubits for N features). Currently, lambeq uses IQPAnsatz, based on IQP encoding, which, like angle encoding, requires N qubits to encode N features. For now, we will ignore the depth scaling.
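To make the qubit-count claim concrete, here is a quick back-of-the-envelope comparison (plain Python, nothing lambeq-specific):

import math

# Qubits needed to embed n features:
#   angle/IQP-style encoding: one qubit per feature        -> n
#   amplitude encoding: features stored as state amplitudes -> ceil(log2(n))
for n in (4, 64, 1024):
    print(f"{n:>5} features: angle/IQP = {n:>5} qubits, "
          f"amplitude = {math.ceil(math.log2(n)):>2} qubits")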
The challenge is to create a new encoder class called AmplitudeAnsatz based on amplitude encoding, motivated by the efficiency of amplitude encoders for high-dimensional data.
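For reference, the core of amplitude encoding is just state preparation over a normalised feature vector. Below is a minimal sketch using NumPy and Qiskit's generic initialize method; the helper name amplitude_encode is ours for illustration, not a library API:

import numpy as np
from qiskit import QuantumCircuit

def amplitude_encode(features: np.ndarray) -> QuantumCircuit:
    """Prepare a state whose amplitudes are the (normalised) features."""
    # Pad to the next power of two so the vector fits a qubit register.
    n_qubits = max(1, int(np.ceil(np.log2(len(features)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(features)] = features
    # Amplitudes of a valid quantum state must have unit L2 norm.
    padded /= np.linalg.norm(padded)
    qc = QuantumCircuit(n_qubits)
    qc.initialize(padded, range(n_qubits))  # exact state preparation
    return qc

circuit = amplitude_encode(np.array([0.3, 1.2, -0.5, 0.7, 0.1]))
print(circuit.num_qubits)  # 3 qubits suffice for 5 features

Note that generic state preparation like initialize compiles to a deep circuit; dedicated amplitude-encoding ansätze trade exactness for shallower depth, which is part of the depth-scaling question set aside above.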
Notes
One can start by creating an ansatz that does amplitude encoding (e.g., RFV, FRQI); a rough sketch of where such a class could sit is given after these notes
The amplitude encoder must take into account how lambeq currently performs feature embedding (i.e., using a qubit count dedicated to each AtomicType)
The implementation must take into account how a Model is trained given the embedded circuits (going back to what a Model actually is)
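As a rough illustration of where such a class would sit, here is the current IQPAnsatz baseline next to the hypothetical AmplitudeAnsatz. The latter does not exist in lambeq; the qubit counts in its ob_map are only meant to show the 2**k feature budget that amplitude encoding would allow per atomic type:

from lambeq import AtomicType, IQPAnsatz

# Current baseline: IQPAnsatz maps each atomic type to a qubit count,
# and each embedded feature costs one qubit.
iqp = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=2)

# Hypothetical AmplitudeAnsatz (the class proposed by this challenge):
# with amplitude encoding, k qubits per type could carry up to 2**k
# features, so 2 qubits for nouns would budget 4 noun features.
# amp = AmplitudeAnsatz({AtomicType.NOUN: 2, AtomicType.SENTENCE: 1})

The extend-lambeq tutorial linked under See Also shows the documented pattern for subclassing lambeq's ansatz classes, which is the natural starting point for the actual implementation.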
See Also
https://cqcl.github.io/lambeq/tutorials/extend-lambeq.html#Creating-ans%C3%A4tze
https://arxiv.org/pdf/1804.11326.pdf
https://docs.pennylane.ai/en/stable/code/api/pennylane.IQPEmbedding.html
https://qiskit.org/documentation/stubs/qiskit.circuit.library.IQP.html
https://qiskit.org/ecosystem/machine-learning/tutorials/01_neural_networks.html