{"id":26308,"date":"2021-05-27T06:00:00","date_gmt":"2021-05-27T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=26308"},"modified":"2021-05-28T09:20:01","modified_gmt":"2021-05-28T16:20:01","slug":"quantum-machine-learning-an-introduction-to-qgans","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/","title":{"rendered":"Quantum Machine Learning &#8211; An Introduction to QGANs"},"content":{"rendered":"\n<p><strong>Introduction<\/strong><\/p>\n\n\n\n<p>Since Alex Krizhevsky&#8217;s breakthrough in the ImageNet competition, deep learning has been transforming the way we process large-scale, complex data with computers. Deep neural networks can perform image and speech recognition with very high accuracy. One of the most exciting developments in deep learning is generative adversarial networks (GANs), which have many applications: image generation, generation of 3D objects, text generation, and generation of synthetic data for chemistry, biology and physics.<\/p>\n\n\n\n<p>Quantum GANs (QGANs), which use a quantum generator, a quantum discriminator, or both, are algorithms of similar architecture developed to run on quantum systems. The quantum advantage of various algorithms is premised on the assumption that data can be loaded into quantum states; however, this can be achieved efficiently for specific, but not generic, data.<\/p>\n\n\n\n<p>Data loading, with complexity O(2<sup>n<\/sup>) for the best known generic methods, can easily dominate the overall complexity of an otherwise advantageous quantum algorithm. A variant of QGAN with a quantum generator and a classical discriminator can be used to load data into quantum states efficiently, in O(poly(n)) time. Here we discuss GANs and QGANs, how they are similar, and the quantum processing involved. 
Let\u2019s delve into a primer on classical GANs.<\/p>\n\n\n\n<p><strong>Classical GANs<\/strong><\/p>\n\n\n\n<p><em>Generative<\/em> &#8211; Learning a generative model<\/p>\n\n\n\n<p><em>Adversarial<\/em> &#8211; Training in an adversarial setting<\/p>\n\n\n\n<p><em>Networks<\/em> &#8211; Using Deep Neural Networks<\/p>\n\n\n\n<p>A GAN is a deep generative model with two networks &#8211; a generator and a discriminator &#8211; that compete with each other in a game.<\/p>\n\n\n\n<ul><li>Generator &#8211; generates images from random noise and tries to fool the discriminator<\/li><li>Discriminator &#8211; tries to classify generated and real images correctly<\/li><\/ul>\n\n\n\n<p>Training is complete when the system achieves equilibrium.<\/p>\n\n\n\n<p><strong>GAN Architecture<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"355\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid1.png\" alt=\"\" class=\"wp-image-26309\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid1.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid1-150x76.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid1-300x152.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n\n<p><strong>Probabilistic interpretation of GANs<\/strong><\/p>\n\n\n\n<p>Suppose the real-world data comes from some fixed distribution p<sub>R<\/sub>(x), generated by some (potentially complex and unknown) process R. The generator \u2013 parameterized by a vector of real-valued parameters \u03b8 \u2013 takes as input an unstructured random variable z (typically drawn from a normal or uniform distribution). G transforms this noise source into data samples x = G(\u03b8, z), creating the generator distribution p<sub>G<\/sub>(x). 
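<\/p>\n\n\n\n<p>As a concrete illustration of these two distributions and the adversarial objectives, here is a toy NumPy sketch (the one-dimensional generator, discriminator, and parameter values are hypothetical, chosen only for illustration):<\/p>\n\n\n\n

```python
import numpy as np

rng = np.random.default_rng(0)

def G(theta, z):
    # Toy generator: an affine map from noise z to samples x = G(theta, z).
    return theta[0] * z + theta[1]

def D(phi, x):
    # Toy discriminator: logistic model returning P(x is real), in (0, 1).
    return 1.0 / (1.0 + np.exp(-(phi[0] * x + phi[1])))

theta = np.array([0.5, 2.0])   # generator parameters (hypothetical)
phi = np.array([1.0, -1.5])    # discriminator parameters (hypothetical)

z = rng.normal(size=1000)                 # unstructured noise source
x_fake = G(theta, z)                      # samples from p_G(x)
x_real = rng.normal(loc=2.0, size=1000)   # stand-in samples from p_R(x)

# Generator loss: L_G = -E_z[log D(G(z))]  (G wants D to call fakes real)
L_G = -np.mean(np.log(D(phi, x_fake)))
# Discriminator loss: L_D = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
L_D = np.mean(np.log(D(phi, x_real))) + np.mean(np.log(1.0 - D(phi, x_fake)))
print(L_G, L_D)
```

\n\n\n\n<p>Training alternates gradient updates of \u03b8 and \u03c6 on these two loss values.<\/p>\n\n\n\n<p>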
In the ideal case of a perfectly trained generator G, the discriminator would not be able to decide whether a given sample x came from p<sub>G<\/sub>(x) or from p<sub>R<\/sub>(x). Therefore, the task of training G corresponds to maximizing the probability that D misclassifies a generated sample as an element of the real data. The discriminator, on the other hand \u2013 parameterized by a vector of real-valued parameters \u03c6 \u2013 takes as input either real data samples x \u223c p<sub>R<\/sub>(x) or fake data samples x \u223c p<sub>G<\/sub>(x). D\u2019s goal is to discriminate between these two classes, outputting a binary random variable. Training D thus corresponds to maximizing the probability of successfully classifying real data while minimizing the probability of misclassifying fake data.<\/p>\n\n\n\n<p>The optimization problem turns out to be a min-max problem with \u03b8 and \u03c6 as parameters, and optimization algorithms such as ADAM and AMSGRAD are typically chosen to solve the objective.<\/p>\n\n\n\n<p>If you want to dig deeper into the math of GANs, here\u2019s the mathematical model.<\/p>\n\n\n\n<p>GAN mathematical model:<\/p>\n\n\n\n<p>X = { x<sub>0<\/sub>, . . . , x<sub>s\u22121<\/sub> } \u2282 R<sup>kout<\/sup>, sampled from p<sub>R<\/sub>(x)<\/p>\n\n\n\n<p>G<sub>\u03b8<\/sub>: R<sup>kin<\/sup> \u2192 R<sup>kout<\/sup>, with \u03b8 \u2208 R<sup>kg<\/sup><\/p>\n\n\n\n<p>D<sub>\u03c6<\/sub>: R<sup>kout<\/sup> \u2192 {0, 1}, with \u03c6 \u2208 R<sup>kd<\/sup><\/p>\n\n\n\n<p>Generator loss function:<\/p>\n\n\n\n<p>L<sub>G<\/sub>(\u03c6, \u03b8) = \u2212E<sub>z\u223cprior<\/sub>[log(D<sub>\u03c6<\/sub>(G<sub>\u03b8<\/sub>(z)))] =<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid2.png\" alt=\"\" class=\"wp-image-26310\" width=\"187\" height=\"52\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid2.png 396w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid2-150x42.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid2-300x83.png 300w\" sizes=\"(max-width: 187px) 100vw, 187px\" \/><\/figure>\n\n\n\n<p>Discriminator loss function:<\/p>\n\n\n\n<p>L<sub>D<\/sub>(\u03c6, \u03b8) = E<sub>x\u223cpreal<\/sub>[log D<sub>\u03c6<\/sub>(x)] + E<sub>z\u223cprior<\/sub>[log(1 \u2212 D<sub>\u03c6<\/sub>(G<sub>\u03b8<\/sub>(z)))] =<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid3.png\" alt=\"\" class=\"wp-image-26311\" width=\"306\" height=\"55\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid3.png 612w, 
https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid3-150x27.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid3-300x54.png 300w\" sizes=\"(max-width: 306px) 100vw, 306px\" \/><\/figure>\n\n\n\n<p>Training a GAN is equivalent to searching for a Nash equilibrium:<\/p>\n\n\n\n<p>max<sub>\u03b8<\/sub> L<sub>G<\/sub>(\u03c6, \u03b8)<\/p>\n\n\n\n<p>max<sub>\u03c6<\/sub> L<sub>D<\/sub>(\u03c6, \u03b8)<\/p>\n\n\n\n<p><strong>Quantum GANs &#8211; overview<\/strong><\/p>\n\n\n\n<p>We can generalize these ideas of the GAN to the quantum setting. QGANs can be formulated in several ways. They can be modeled with both the generator and the discriminator on quantum systems, where the data takes the form of an ensemble of quantum states. Another model of QGAN, in which the generator is classical, could theoretically generate real data by fixed measurement on a fault-tolerant system, which can be considered quantum supremacy.<\/p>\n\n\n\n<p>In this blog we discuss a variant of QGAN in which the generator is quantum and the discriminator is classical, and which captures the probability distribution of classical training samples. 
We shall look specifically at how a QGAN can be implemented to load distribution data into quantum states in polynomial time.<\/p>\n\n\n\n<p><strong>QGANs for learning and loading random distributions<\/strong><\/p>\n\n\n\n<ul><li>Quantum algorithms have the potential to outperform their classical counterparts, but loading classical data into quantum states requires O(2<sup>n<\/sup>) operations with the best known methods, impairing the potential quantum advantage in many cases.<\/li><li>A hybrid quantum-classical (HQC) algorithm &#8211; the QGAN &#8211; can facilitate efficient loading of generic probability distributions in O(poly(n)).<\/li><li>This allows classical data to be used efficiently for quantum information processing: the quantum-generated data from the QGAN can feed other quantum algorithms, such as quantum amplitude estimation (QAE), which has applications in the finance domain.<\/li><\/ul>\n\n\n\n<p>This model of QGAN uses:<\/p>\n\n\n\n<ul><li>Data (Classical)<\/li><li>Generator (Quantum)<\/li><li>Discriminator (Classical)<\/li><\/ul>\n\n\n\n<p>A variational quantum circuit is used as the generator and a neural network as the discriminator. Classical data loaded into quantum states with this architecture can then be used by other quantum algorithms such as QAE.<\/p>\n\n\n\n<p><strong>QGANs &#8211; Quantum generator<\/strong><\/p>\n\n\n\n<ul><li>A parametrized quantum channel, i.e. 
the quantum generator, is trained to transform a given n-qubit input state |\u03c8in\u27e9 to an n-qubit output state.<\/li><\/ul>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid4.png\" alt=\"\" class=\"wp-image-26312\" width=\"240\" height=\"62\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid4.png 626w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid4-150x39.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid4-300x78.png 300w\" sizes=\"(max-width: 240px) 100vw, 240px\" \/><\/figure>\n\n\n\n<ul><li>where p<sub>\u03b8<\/sub><sup>j<\/sup> are the resulting occurrence probabilities of the basis states |j\u27e9<\/li><li>The quantum generator is implemented by a variational quantum circuit consisting of alternating layers of single-qubit rotations (Pauli-Y rotations, R<sub>Y<\/sub>) and blocks of two-qubit gates (controlled-Z, or CZ, gates)<\/li><li>A carefully chosen input state can reduce the complexity of the generator and lead to faster convergence. Preparation of the input state should itself be O(poly(n)) so as not to impair the speed advantage.<\/li><\/ul>\n\n\n\n<p>Quantum circuit:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-style-default\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"529\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid5.png\" alt=\"\" class=\"wp-image-26313\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid5.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid5-150x113.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid5-300x227.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure>\n\n\n\n<p>The variational form, depicted in (a), with depth k acts on n qubits. 
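<\/p>\n\n\n\n<p>The action of such a variational form can be simulated directly. The following sketch (plain NumPy, not Qiskit; the qubit count, depth, and angles are illustrative assumptions) applies k + 1 layers of R<sub>Y<\/sub> rotations and k CZ entangling blocks to |0\u20260\u27e9 and reads off the basis-state probabilities p<sub>\u03b8<\/sub><sup>j<\/sup>:<\/p>\n\n\n\n

```python
import numpy as np

def ry(theta):
    # Single-qubit Pauli-Y rotation matrix RY(theta).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    # Embed a single-qubit gate into the n-qubit space via Kronecker products.
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def apply_cz(state, ctrl, targ, n):
    # CZ flips the sign of amplitudes where both qubits are |1>.
    state = state.copy()
    for idx in range(2 ** n):
        bits = format(idx, f"0{n}b")
        if bits[ctrl] == "1" and bits[targ] == "1":
            state[idx] *= -1.0
    return state

def generator_state(thetas, n, k):
    # Variational form: (k + 1) RY layers, k CZ entangling blocks.
    state = np.zeros(2 ** n)
    state[0] = 1.0  # |0...0> input state
    thetas = iter(thetas)
    for layer in range(k + 1):
        for q in range(n):
            state = apply_1q(state, ry(next(thetas)), q, n)
        if layer < k:
            for i in range(n):  # CZ from qubit i to (i + 1) mod n
                state = apply_cz(state, i, (i + 1) % n, n)
    return state

n, k = 3, 2  # illustrative choices
rng = np.random.default_rng(1)
psi = generator_state(rng.uniform(0, np.pi, size=n * (k + 1)), n, k)
probs = np.abs(psi) ** 2  # occurrence probabilities of basis states |j>
print(np.round(probs, 3))
```

\n\n\n\n<p>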
It is composed of k + 1 layers of single-qubit Pauli-Y rotations and k entangling blocks U<sub>ent<\/sub>. As illustrated in (b), each entangling block applies CZ gates from qubit i to qubit (i + 1) mod n, i \u2208 { 0, . . . , n \u2212 1 }, to create entanglement between the different qubits.<\/p>\n\n\n\n<p><strong>QGANs &#8211; Architecture<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"312\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid6.png\" alt=\"\" class=\"wp-image-26314\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid6.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid6-150x67.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid6-300x134.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n\n<p><strong>QGANs &#8211; Optimisation and learning<\/strong><\/p>\n\n\n\n<ul><li>The classical discriminator is a standard neural network with sigmoid activation functions<\/li><li>Given m data samples g<sup>L<\/sup> from the quantum generator and m randomly chosen samples x<sup>L<\/sup> from the real data, where L = 1, 2, &#8230;, m, the loss functions of the QGAN are<\/li><\/ul>\n\n\n\n<p>Generator loss:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid7.png\" alt=\"\" class=\"wp-image-26315\" width=\"206\" height=\"48\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid7.png 480w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid7-150x35.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid7-300x70.png 300w\" sizes=\"(max-width: 206px) 100vw, 206px\" \/><\/figure>\n\n\n\n<p>Discriminator loss:
<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid8.png\" alt=\"\" class=\"wp-image-26316\" width=\"279\" height=\"79\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid8.png 566w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid8-150x42.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid8-300x85.png 300w\" sizes=\"(max-width: 279px) 100vw, 279px\" \/><\/figure>\n\n\n\n<p>As in the classical case, the loss functions are optimized alternately with respect to \u03b8 and \u03c6.<\/p>\n\n\n\n<p><strong>QGANs &#8211; Simulation study<\/strong><\/p>\n\n\n\n<p>As explained in an earlier <a href=\"https:\/\/www.sigmoid.com\/blogs\/quantum-computing-blog-3-how-to-implement-qsvm-in-the-ibm-q-environment\/\" target=\"_blank\" rel=\"noreferrer noopener\">blog<\/a>, we use the Qiskit library on the IBM Q Experience. The QGAN is implemented with a variational quantum circuit, built in Qiskit, as the generator. We use this framework to learn a log-normal distribution using a quantum generator and a classical discriminator.<\/p>\n\n\n\n<p>We build a quantum circuit using 2 qubits and use samples from a log-normal distribution as real data. 
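<\/p>\n\n\n\n<p>The real training data can be prepared along these lines (a sketch; the log-normal parameters and the truncation bounds [0, 3] are illustrative assumptions, not values fixed by the article):<\/p>\n\n\n\n

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative choices: 2 qubits give 2^2 = 4 grid points.
num_qubits = 2
bounds = np.array([0.0, 3.0])

# Sample the log-normal "real" data and truncate it to the grid range.
real_data = rng.lognormal(mean=1.0, sigma=1.0, size=10000)
real_data = np.clip(real_data, bounds[0], bounds[1])

# Discretize onto the grid points {0, 1, 2, 3} to see the target distribution.
grid = np.arange(2 ** num_qubits)
counts = np.bincount(np.round(real_data).astype(int), minlength=len(grid))
p_real = counts / counts.sum()
print(grid, np.round(p_real, 3))
```

\n\n\n\n<p>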
Running on either a real backend or a simulator, the QGAN can be initialized in the following manner:<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"603\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid9.png\" alt=\"\" class=\"wp-image-26317\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid9.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid9-150x129.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid9-300x258.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n\n<figure class=\"wp-block-image size-large is-style-default\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"89\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid10.png\" alt=\"\" class=\"wp-image-26318\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid10.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid10-150x19.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid10-300x38.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure>\n\n\n\n<p>The NumPy discriminator is a classical 3-layer neural network that uses linear transformations, leaky ReLU in the hidden layer, and a sigmoid in the output layer. 
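<\/p>\n\n\n\n<p>The forward pass of such a discriminator can be sketched as follows (the layer sizes and random weights here are hypothetical; the actual Qiskit implementation defines its own):<\/p>\n\n\n\n

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: passes positives through, scales negatives by a small slope.
    return np.where(x > 0, x, slope * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(7)

# Hypothetical layer sizes: 1 -> 50 -> 20 -> 1.
W1, b1 = rng.normal(size=(50, 1)), np.zeros(50)
W2, b2 = rng.normal(size=(20, 50)), np.zeros(20)
W3, b3 = rng.normal(size=(1, 20)), np.zeros(1)

def discriminator(x):
    # Linear -> leaky ReLU -> linear -> leaky ReLU -> linear -> sigmoid.
    h1 = leaky_relu(W1 @ x + b1)
    h2 = leaky_relu(W2 @ h1 + b2)
    return sigmoid(W3 @ h2 + b3)  # P(sample is real), in (0, 1)

out = discriminator(np.array([1.5]))
print(out)
```

\n\n\n\n<p>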
The default method for optimizing the parameters is ADAM.<\/p>\n\n\n\n<p>We plotted the loss functions of the generator and discriminator to observe convergence.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"398\" height=\"333\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid11.png\" alt=\"\" class=\"wp-image-26319\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid11.png 398w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid11-150x126.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid11-300x251.png 300w\" sizes=\"(max-width: 398px) 100vw, 398px\" \/><\/figure><\/div>\n\n\n\n<p>Relative entropy plot after running the QGAN: if P is the actual distribution and Q is the generated distribution, the closeness of the two distributions can be measured by the relative entropy, given by:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid12.png\" alt=\"\" class=\"wp-image-26320\" width=\"261\" height=\"52\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid12.png 652w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid12-150x30.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid12-300x60.png 300w\" sizes=\"(max-width: 261px) 100vw, 261px\" \/><\/figure>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"392\" height=\"333\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid13.png\" alt=\"\" class=\"wp-image-26321\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid13.png 392w, 
https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid13-150x127.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid13-300x255.png 300w\" sizes=\"(max-width: 392px) 100vw, 392px\" \/><\/figure><\/div>\n\n\n\n<p>CDF plot of the log-normal distribution vs. the simulated points:<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"386\" height=\"333\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid14.png\" alt=\"\" class=\"wp-image-26322\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid14.png 386w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid14-150x129.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid14-300x259.png 300w\" sizes=\"(max-width: 386px) 100vw, 386px\" \/><\/figure><\/div>\n\n\n\n<p><strong>Conclusion and Outlook<\/strong><\/p>\n\n\n\n<p>Using QGANs, we can load random probability distributions into quantum states efficiently, in polynomial time, and this quantum-generated data can be used by other quantum algorithms, such as QAE, for use cases in the banking and financial industry. The probability distribution for initialization of the generator needs to be carefully selected for the QGAN to perform optimally. 
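<\/p>\n\n\n\n<p>For reference, the relative entropy used above as the closeness measure can be computed in a few lines (the two distributions shown here are hypothetical):<\/p>\n\n\n\n

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    # D_KL(P || Q) = sum_j p_j * log(p_j / q_j); eps guards against log(0).
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

p_target = [0.30, 0.40, 0.20, 0.10]  # hypothetical real distribution P
p_gen = [0.28, 0.42, 0.19, 0.11]     # hypothetical generated distribution Q
print(relative_entropy(p_target, p_gen))  # approaches 0 as Q approaches P
```

\n\n\n\n<p>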
Further studies of performance on various distributions have been published, demonstrating the efficacy of QGANs, with applications in quantum finance.<\/p>\n\n\n\n<p><strong>References<\/strong><\/p>\n\n\n\n<ul><li>Research Paper 1 &#8211; <a href=\"https:\/\/arxiv.org\/abs\/1804.08641\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/1804.08641<\/a><\/li><li>Research Paper 2 &#8211; <a href=\"https:\/\/advances.sciencemag.org\/content\/5\/1\/eaav2761\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/advances.sciencemag.org\/content\/5\/1\/eaav2761<\/a><\/li><li>QISKIT QGAN Documentation &#8211; <a href=\"https:\/\/qiskit.org\/documentation\/stubs\/qiskit.aqua.algorithms.QGAN.html\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/qiskit.org\/documentation\/stubs\/qiskit.aqua.algorithms.QGAN.html<\/a><\/li><li>QISKIT latest Github code &#8211; <a href=\"https:\/\/github.com\/Qiskit\/qiskit-aqua\/blob\/1f2c316c3a1aca1296f45241d14ad8ae5fbe2027\/qiskit\/aqua\/algorithms\/distribution_learners\/qgan.py\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/github.com\/Qiskit\/qiskit-aqua\/blob\/1f2c316c3a1aca1296f45241d14ad8ae5fbe2027\/qiskit\/aqua\/algorithms\/distribution_learners\/qgan.py<\/a><\/li><\/ul>\n\n\n\n<p><em>Note: There may have been some structural changes in the github code from when we explored it.<\/em><\/p>\n\n\n\n<p><strong>About the Authors<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignleft size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"125\" height=\"124\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid_Aniruddh-Rawat.png\" alt=\"\" class=\"wp-image-26323\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid_Aniruddh-Rawat.png 125w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid_Aniruddh-Rawat-110x110.png 110w, 
https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid_Aniruddh-Rawat-50x50.png 50w\" sizes=\"(max-width: 125px) 100vw, 125px\" \/><\/figure><\/div>\n\n\n\n<p>Aniruddh Rawat is a Data Scientist at <a href=\"https:\/\/www.sigmoid.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Sigmoid<\/a>. He works with data and application of Machine Learning algorithms. Currently, he is focusing on Quantum Machine Learning, Recommendation Systems and Big Data architecture.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignleft size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"125\" height=\"127\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid_Bhaskar-Ammu.jpeg\" alt=\"\" class=\"wp-image-26324\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid_Bhaskar-Ammu.jpeg 125w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/05\/Sigmoid_Bhaskar-Ammu-50x50.jpeg 50w\" sizes=\"(max-width: 125px) 100vw, 125px\" \/><\/figure><\/div>\n\n\n\n<p>Assisted by &#8211; Bhaskar Ammu &#8211; A Senior Data Scientist at <a href=\"https:\/\/www.sigmoid.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Sigmoid<\/a>. He leads a team of data scientists and specializes in designing data science solutions for businesses, building database architectures, and managing projects.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a rel=\"noreferrer noopener\" href=\"http:\/\/insidebigdata.com\/newsletter\/\" target=\"_blank\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;@InsideBigData1 \u2013 <a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this contributed article, data scientists from Sigmoid discuss quantum machine learning and provide an introduction to QGANs. 
Quantum GANs which use a quantum generator or discriminator or both is an algorithm of similar architecture developed to run on Quantum systems. The quantum advantage of various algorithms is impeded by the assumption that data can be loaded to quantum states. However this can be achieved for specific but not generic data.<\/p>\n","protected":false},"author":10513,"featured_media":8785,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,87,180,67,56,97,84,1],"tags":[277,634,1011,95],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Quantum Machine Learning - An Introduction to QGANs - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Quantum Machine Learning - An Introduction to QGANs - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"In this contributed article, data scientists from Sigmoid discuss quantum machine learning and provide an introduction to QGANs. Quantum GANs which use a quantum generator or discriminator or both is an algorithm of similar architecture developed to run on Quantum systems. The quantum advantage of various algorithms is impeded by the assumption that data can be loaded to quantum states. 
However this can be achieved for specific but not generic data.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2021-05-27T13:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-05-28T16:20:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2014\/04\/Quantum_ML.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"195\" \/>\n\t<meta property=\"og:image:height\" content=\"155\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/\",\"url\":\"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/\",\"name\":\"Quantum Machine Learning - An Introduction to QGANs - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2021-05-27T13:00:00+00:00\",\"dateModified\":\"2021-05-28T16:20:01+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Quantum Machine Learning &#8211; An Introduction to QGANs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Quantum Machine Learning - An Introduction to QGANs - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/","og_locale":"en_US","og_type":"article","og_title":"Quantum Machine Learning - An Introduction to QGANs - insideBIGDATA","og_description":"In this contributed article, data scientists from Sigmoid discuss quantum machine learning and provide an introduction to QGANs. Quantum GANs which use a quantum generator or discriminator or both is an algorithm of similar architecture developed to run on Quantum systems. The quantum advantage of various algorithms is impeded by the assumption that data can be loaded to quantum states. 
However this can be achieved for specific but not generic data.","og_url":"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2021-05-27T13:00:00+00:00","article_modified_time":"2021-05-28T16:20:01+00:00","og_image":[{"width":195,"height":155,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2014\/04\/Quantum_ML.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/","url":"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/","name":"Quantum Machine Learning - An Introduction to QGANs - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2021-05-27T13:00:00+00:00","dateModified":"2021-05-28T16:20:01+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2021\/05\/27\/quantum-machine-learning-an-introduction-to-qgans\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Quantum Machine Learning &#8211; An Introduction to 
QGANs"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2014\/04\/Quantum_ML.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-6Qk","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/26308"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=26308"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/26308\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/8785"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=26308"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=26308"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=26308"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}