
Quantum case-based reasoning (qCBR)

Artificial Intelligence Review

Abstract

Case-Based Reasoning (CBR) is an artificial intelligence approach to problem-solving with a good record of success. This article proposes using Quantum Computing to improve some of the key processes of CBR, such that a quantum case-based reasoning (qCBR) paradigm can be defined. The focus is set on designing and implementing a qCBR based on the variational principle that improves its classical counterpart in terms of average accuracy, scalability and tolerance to overlapping. A comparative study of the proposed qCBR with a classic CBR is performed for the social workers' problem, a sample combinatorial optimization problem with overlapping. The algorithm's quantum feasibility is modelled with docplex, tested on IBMQ computers, and further experiments are run on the Qibo framework.



Acknowledgements

The authors greatly thank the IBMQ team, mainly Steve Wood. P.A. thanks Jennifer Ramírez Molino, the Qibo team, Adrian Perez-Salinas and Guillermo Alonso Alonso de Linaje for their support and comments on the manuscript.

Author information


Corresponding author

Correspondence to Parfait Atchade Adelomou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Variational quantum classifier

To date, two dominant categories are used to design quantum classifiers. Although almost all of them are inspired by classical classifiers (kernel methods or neural networks) (Abdiansah and Wardoyo 2015), there is a new category of classifiers that responds to the current era of quantum computing (NISQ): hybrid and variational classifiers.

1.1 The Ansatz

The Ansatz design is inherited from previous works (Atchade-Adelomou et al. 2020c; Atchade-Adelomou et al. 2020b; Sim et al. 2019). The way the data are loaded into the Ansatz is inspired by Pérez-Salinas et al. (2021), where the data (variable x) are entered using a weights-and-biases scheme, similar to neural networks. In this case, the single-qubit gate that serves as the building block for all the Ansatzes is given by (A1).

$$\begin{aligned} U\left( \theta ,x \right) =R_{x} \left( \theta _{1}x+ \theta _{2} \right) R_{z} \left( \theta _{3} \right) \end{aligned}$$
(A1)

Here, \(\theta \) is the vector of parameters, and \(R_{x}\) and \(R_{z}\) are the single-qubit rotation gates used to create the Ansatz. To complete the experimentation scenario, the CNOT and CRZ gates are added, which are the gates that help to achieve entanglement, as seen in Figs. 13, 14 and 15.
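As a concrete illustration of Eq. (A1), the following minimal sketch (ours, not the authors' published code) builds the single-qubit block with Qiskit; the helper name u_block and the numeric parameter values are placeholders.

```python
# Minimal sketch of the building block of Eq. (A1) with Qiskit.
# The helper name u_block and all numeric values are illustrative assumptions.
from qiskit import QuantumCircuit

def u_block(qc, qubit, theta, x):
    """Apply U(theta, x) = Rx(theta1*x + theta2) Rz(theta3), loading the data x
    through the weights-and-biases scheme."""
    qc.rx(theta[0] * x + theta[1], qubit)
    qc.rz(theta[2], qubit)

qc = QuantumCircuit(2)
u_block(qc, 0, [0.3, 0.1, 0.7], x=0.5)
u_block(qc, 1, [0.2, -0.4, 1.1], x=0.5)
qc.crz(0.9, 0, 1)  # CRZ entanglement (Fig. 14); qc.cx(0, 1) would give CNOT entanglement (Fig. 15)
```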

Fig. 13: \(R_y\) and \(R_z\) Ansatzes without entanglement used in qCBR experimentation

Fig. 14: \(R_y\) and \(R_z\) Ansatzes with CRZ entanglement used in qCBR experimentation

Fig. 15: \(R_y\) and \(R_z\) Ansatzes with CNOT entanglement used in qCBR

The variational quantum classifier structure (Figs. 16 and 17) is based on layers of trainable circuit blocks \( L(i) = \prod _{j} U(i,j) \) and data coding, as shown in (3) for 8-dimensional data or in (A1) for 2-dimensional data. Additionally, the entanglement can be achieved using the CRZ or CNOT gates.
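The layered structure can be sketched as follows; this is a hedged illustration (not the published implementation), with the parameter layout, a scalar data value x and the fixed CRZ angle taken as assumptions.

```python
# Minimal sketch of the layered classifier L(i) = prod_j U(i, j) on three qubits.
# Parameter layout, scalar data x and the fixed CRZ angle are illustrative assumptions.
from qiskit import QuantumCircuit

def classifier_circuit(params, x, n_qubits=3, n_layers=2, entangle=True):
    """params[layer][qubit] = (w, b, phi); x is the (already reduced) data value."""
    qc = QuantumCircuit(n_qubits)
    for layer in range(n_layers):
        for q in range(n_qubits):
            w, b, phi = params[layer][q]
            qc.rx(w * x + b, q)  # data re-uploading in every layer, as in Eq. (A1)
            qc.rz(phi, q)
        if entangle:
            for q in range(n_qubits - 1):
                qc.crz(0.5, q, q + 1)  # or qc.cx(q, q + 1) for CNOT entanglement
    return qc
```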

Fig. 16: Three-qubit quantum classifier circuit without entanglement

Fig. 17: Three-qubit quantum classifier circuit with entanglement by using CZ or CNOT gates

The number of parameters to be optimized in the classifier is given by (A2).

$$\begin{aligned} NumParam= n \cdot 2d \cdot L \end{aligned}$$
(A2)

Here, n is the number of qubits (\(n = 3\)), L is the number of layers (blocks), which is varied in the experiments, and d is the dimension of the problem. In other words, d varies with the choice of Ansatz and whether or not entanglement is applied. In the case of the entanglement in Fig. 14, d is increased by 1 (the CRZ gate has one parameter), which gives Eq. (A3).

$$\begin{aligned} NumParam= n \cdot 2\left( d+1 \right) \cdot L \end{aligned}$$
(A3)
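For concreteness, and assuming the reading \(n \cdot 2d \cdot L\) of Eq. (A2) (and \(n \cdot 2(d+1) \cdot L\) of Eq. (A3)), a quick parameter count can be sketched as below; the layer count used in the example is arbitrary.

```python
# Hedged sketch: parameter count under the reading n * 2d * L of Eq. (A2)
# and n * 2(d+1) * L of Eq. (A3); treat the exact factor as an assumption.
def num_params(n_qubits, dim, n_layers, crz_entanglement=False):
    d = dim + 1 if crz_entanglement else dim
    return n_qubits * 2 * d * n_layers

print(num_params(3, 2, 4))                         # 48 parameters without entanglement
print(num_params(3, 2, 4, crz_entanglement=True))  # 72 parameters with CRZ entanglement
```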

1.2 Fidelity cost function

The similarity function follows the same strategy as the data re-uploading approach; nevertheless, the Ansatz is different. It uses the definition of quantum fidelity associated with several qubits and maximizes the average fidelity between the test state and the final state corresponding to its class. Equation (A4) (Pérez-Salinas et al. 2020) defines the cost function used.

$$\begin{aligned} CF\left( \vec{\alpha },\vec{\theta },\vec{\omega } \right) = \frac{1}{2} \sum _{\mu =1}^{M} \sum _{c=1}^{C} \left( \sum _{q=1}^{Q} \alpha _{c,q}F_{c,q} \left( \vec{\theta },\vec{\omega },\vec{x}_{\mu } \right) -Y_{c} \left( \vec{x}_{\mu } \right) \right) ^{2} \end{aligned}$$
(A4)

with

$$\begin{aligned} F_{c,q} \left( \vec{\theta },\vec{\omega },\vec{x}_{\mu } \right) =\langle \psi _{c} \vert \rho _{q} \left( \vec{\theta },\vec{\omega },\vec{x}_{\mu } \right) \vert \psi _{c} \rangle \end{aligned}$$
(A5)

where \(\rho _{q}\) is the reduced density matrix of the qubit to be measured, M is the total number of training points, C is the total number of classes, \(\vec{x}_{ \mu }\) are the training points, \(\vec{ \alpha }= \left( \alpha _{1}, \ldots , \alpha _{C} \right) \) are class weights to be optimized together with the parameters \(\vec{ \theta }\) and \(\vec{ \omega }\), and Q is the number of qubits. \(Y_{c} \left( \vec{x}_{ \mu } \right) \) is the fidelity vector for a perfect classification. This cost function (A4) is weighted and averaged over all the qubits that form the classifier. To complete the hybrid system, the classical part uses the minimization methods cited above: L-BFGS-B (Byrd et al. 1995), COBYLA (The SciPy community 2021) and SPSA (Spall 2001).
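As an illustrative sketch (ours, with assumed array names and shapes), the weighted-fidelity cost (A4) can be evaluated with NumPy given the measured fidelities, the class weights and the target fidelities:

```python
# Sketch of the weighted-fidelity cost (A4); array names and shapes are assumptions:
# F[mu, c, q] measured fidelities, alpha[c, q] class weights, Y[mu, c] target fidelities.
import numpy as np

def fidelity_cost(alpha, F, Y):
    # sum over qubits q of alpha_{c,q} * F_{c,q}(x_mu), then compare with Y_c(x_mu)
    weighted = np.einsum('cq,mcq->mc', alpha, F)
    return 0.5 * np.sum((weighted - Y) ** 2)

# toy shapes: M=4 training points, C=3 classes, Q=3 qubits
rng = np.random.default_rng(0)
cost = fidelity_cost(rng.random((3, 3)), rng.random((4, 3, 3)), rng.random((4, 3)))
```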

Appendix B: The qCBR’s details

The operation of the Retrieve (prediction) block is triggered by a new case (schedule). In this experimentation, the schedule that best adapts to the new case to be solved is recovered with the predict method, which is executed in time O(log(MN)). It is worth noting that, because the SWP descriptions allow a possible schedule change, a stage of understanding or interpretation is necessary, since an adequate resolution of the new schedule cannot be carried out if it is not understood with some completeness. This stage of understanding is a simple decision algorithm with minimal intelligence.
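A minimal sketch of this Retrieve step, with hypothetical names for the classifier and the case memory (any object exposing a scikit-learn-style predict method would do):

```python
# Hedged sketch of the Retrieve step; `classifier` and `case_memory` are hypothetical names.
def retrieve(new_case, classifier, case_memory):
    """Predict the class of the new case (schedule) and return the stored
    schedule and the VQE Initial_point associated with that class."""
    label = int(classifier.predict([new_case])[0])
    return case_memory[label]  # (retrieved_schedule, initial_point)
```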

Once the predicted solution is available, the synthesis block creates a new solution (proposed solution) by combining recovered solutions. To do this, the algorithm is divided into two main branches (Fig. 4). One branch determines an acceptable degree of error (after a probabilistic study) within which the predicted solution can be considered the proposed solution. The second branch is in charge of improving the predicted solution towards a better proposed solution. To do this, the Initial_point associated with the retrieved schedule is recovered from the case memory, and the Variational Quantum Eigensolver (VQE) is executed with very few shots (k shots). The idea here is to refine the new schedule's similarity with the recovered one. Running the VQE with Initial_point provides the algorithm with parameter values as a starting point for the search for the minimum eigenvalue (the similarity between the two schedules) when the solution point of the new schedule is believed to be close to that of the recovered schedule. This is how the Re-use block works. These operations have a complexity of \(O(k\log (N)+\log (NM))\), where N is the number of social workers, M is the number of patients and k is the number of shots.
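A hedged sketch of this warm-started refinement using Qiskit's (legacy) VQE; the operator qubit_op (e.g. derived from the docplex model of the new schedule), the ansatz choice and the value of k are assumptions for illustration:

```python
# Hedged sketch of the Re-use refinement: VQE warm-started from the stored
# Initial_point and run with few shots. The operator, ansatz and k are assumptions.
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit.algorithms import VQE
from qiskit.algorithms.optimizers import COBYLA
from qiskit.circuit.library import TwoLocal

def reuse(qubit_op, initial_point, k_shots=64):
    ansatz = TwoLocal(qubit_op.num_qubits, "ry", "cz", reps=1)
    qi = QuantumInstance(Aer.get_backend("qasm_simulator"), shots=k_shots)
    vqe = VQE(ansatz=ansatz, optimizer=COBYLA(maxiter=50),
              initial_point=initial_point, quantum_instance=qi)
    return vqe.compute_minimum_eigenvalue(operator=qubit_op)
```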

The algorithm's processes to review the proposed solution are shown below the Re-use block in Fig. 4. It is essential to classify the best possible solution for the proposed prototype. The best possible solution is calculated with the VQE at maximum resolution and depth (for the variational part). Once this solution is obtained, it is compared with the proposed solution, and the solution with its characteristics is added to the new schedule before storing it (see Figs. 8 and 9). The computational complexity of the Revise block is \(O(\log (N)+ICA)\). In this work, access to data (states) is determined by O(log(MN)) due to the characteristics of the inner products and superpositions.
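One plausible (assumed) way to organize this comparison and storage step; the tolerance is a placeholder for the probabilistic acceptance criterion mentioned above, and the record layout is our own:

```python
# Hedged sketch of the Revise/store step; the tolerance criterion and the
# dictionary layout are illustrative assumptions, not the published method.
def revise_and_store(proposed_value, reference_result, schedule, case_memory, tol=1e-2):
    best_value = float(reference_result.eigenvalue.real)  # full-depth VQE reference
    accepted = abs(proposed_value - best_value) <= tol * max(abs(best_value), 1e-12)
    case_memory.append({
        "schedule": schedule,
        "value": proposed_value if accepted else best_value,
        "initial_point": reference_result.optimal_point,  # reused later as a warm start
    })
    return accepted
```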

One of the most critical blocks in this work is Retain. This block is the heart of the CBR, because it is the classifier and because it is the block that allows us to conclude that something has been learned from the previous cases. Not all instances (schedules) are saved in this work, since that would lead to an excessively slow classifier. Therefore, in this part of the algorithm, only the best cases (timetables) that summarize all the essential information are retained.

Fig. 18: Representation of the generation of the n weekly schedules of the SWP. The last plane, at the far right, represents all the SWP classes. The overlapping effect generated by the characteristics of the social workers' problem and the experimentation scenarios can be observed. This experimentation leads us to use the ICA technique to obtain a resulting dataset of independent schedules

Fig. 19: Fundamental processes to apply ICA to the SWP dataset. First, the data x are centred by subtracting the mean and whitened by removing their variance, and the unmixing matrix W is calculated. Then a new value of w is computed and normalized before checking whether the algorithm converges with that value. If it does not converge, a new w is recalculated; if it converges, the scalar product \(\langle x,y \rangle \) is computed to obtain the independent weekly schedules

The Retain process begins with the treatment of schedules, searching for the algorithm's best efficiency, which is a challenge to solve in this block. In the case of the SWP, the patient visit hours have a margin of 30 minutes. Therefore, if one schedule starts at 9:00, the next could begin at 9:30, leading to a dataset with overlap between schedules when many schedules have similar time ranges spread over different days of the week. In the case of non-linear data, an almost perfect classifier with an average accuracy greater than \( 80 \% \) would need to be combined with a data processing system and a decision tree.

In this work, we contemplate both scenarios: first, obtaining an excellent classifier and, second, applying data processing techniques to help a poor classifier. Using the standard classifier, ICA (Pratt and Wiley 1978) is applied to the original data to reduce the effects of the degree of overlap (Fig. 18) without losing the fundamental characteristics of the data. Figure 19 summarizes the processes and operations applied to reduce the overlapping effect observed in the generation of SWP schedules. The complexity of this operation is noted as O(ICA). PCA is then used to reduce the data dimension from 8 to 2 and apply it to the designed variational classifier, with complexity O(PCA). Once the best schedule is determined, we retain the knowledge acquired at the time of the case's resolution.
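A minimal sketch of this preprocessing chain with scikit-learn, where X is a hypothetical (n_schedules, 8) feature matrix standing in for the SWP schedules:

```python
# Hedged sketch of the preprocessing described above: FastICA to reduce the overlap
# between weekly schedules, then PCA from 8 to 2 dimensions before the classifier.
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(42)
X = rng.random((200, 8))  # placeholder for the (n_schedules, 8) SWP features

X_ica = FastICA(n_components=8, random_state=0).fit_transform(X)  # unmixing / de-overlapping
X_2d = PCA(n_components=2).fit_transform(X_ica)                   # 8 -> 2 dimensions
# X_2d is then fed to the variational quantum classifier of Appendix A
```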

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Adelomou, P.A., Fauli, D.C., Ribé, E.G. et al. Quantum case-based reasoning (qCBR). Artif Intell Rev 56, 2639–2665 (2023). https://doi.org/10.1007/s10462-022-10238-w

