Quantum Luck: Building Truly Random Numbers on Real Hardware

Author: Denis Avetisyan


New research rigorously tests the performance of quantum random number generators on the IQM Spark 5 quantum processing unit, offering insights into optimal circuit designs.

Statistical analysis with the NIST SP 800-22 test suite, applied across varied quantum circuit configurations (C1-C5), gate operations (including Hadamard-Hadamard (HH) and the rotations $R_xR_y$), and qubit counts, reveals the distribution of p-values for tests assessing randomness, such as Block Frequency (BF), Cumulative Sums (CS), Runs (Rns), Approximate Entropy (AE), and Linear Complexity (LC). The results demonstrate how subtle variations in quantum system parameters affect the reliability of randomness assessments.

Analysis of entanglement-based quantum circuits on the IQM Spark 5 reveals that native gate implementations and parallel qubit architectures significantly outperform transpiled gates and isolated qubit approaches in generating high-quality randomness, as validated by NIST SP 800-22 testing.

While deterministic algorithms can only produce pseudorandomness, truly random number generation remains crucial for applications like cryptography and machine learning. This need is addressed in ‘True Random Number Generators on IQM Spark’, a systematic evaluation of diverse quantum circuit designs for generating random numbers on actual superconducting hardware. The results demonstrate that circuits leveraging native gate operations and parallel qubit arrangements outperform those relying on transpiled gates and isolated qubit implementations. Could this approach pave the way for scalable and reliable quantum random number generators accessible for broader technological applications?


The Illusion of Randomness: Why We Need Quantum Solutions

The foundation of modern digital security relies heavily on random numbers, yet conventional computational methods for generating them are inherently flawed. These “random” number generators are, in actuality, deterministic algorithms – sequences meticulously crafted to appear random, but are entirely predictable if the initial conditions and the algorithm itself are known. This predictability poses a critical vulnerability, especially in cryptography, where secure communication demands truly unpredictable keys and initialization vectors. An attacker capable of reverse-engineering the algorithm or discerning the seed value can compromise the entire system, decrypting sensitive data and forging digital signatures. Unlike genuine randomness derived from physical phenomena, algorithmic generation offers only a pseudo-random sequence, making it unsuitable for applications requiring unassailable security protocols and raising significant concerns in an increasingly interconnected digital landscape.

Unlike the pseudorandomness of classical computers, which relies on deterministic algorithms and is therefore predictable, Quantum Random Number Generators (QRNGs) harness the inherent unpredictability of quantum mechanics to produce truly random numbers. These devices leverage phenomena like the superposition and measurement of quantum states – for example, the probabilistic nature of a photon’s path through a beam splitter – to generate digits that are not determined by any prior state or algorithm. This fundamental difference is crucial for cryptographic applications, where predictability compromises security; a truly random key, generated by a QRNG, offers a theoretically uncrackable defense against even the most powerful computational attacks. The security stems from the laws of physics themselves, rather than the complexity of a mathematical function, providing a level of assurance unattainable with traditional methods and paving the way for more robust data encryption and secure communications.

The pursuit of quantum random number generators (QRNGs) faces considerable hurdles within the current Noisy Intermediate-Scale Quantum (NISQ) era, largely stemming from the inherent limitations of available qubits. These qubits, the fundamental building blocks of quantum computation, are prone to decoherence – the loss of quantum information – and exhibit relatively high error rates. This fragility complicates the precise measurement of quantum phenomena, such as the superposition or entanglement, which are crucial for generating truly random numbers. Current NISQ devices typically possess a limited number of qubits, restricting the complexity of quantum circuits that can be reliably implemented for QRNGs. Furthermore, the need for extensive error correction, while vital for accurate results, adds significant overhead and further strains the capabilities of these nascent quantum systems, demanding innovative approaches to mitigate noise and maximize the efficiency of random number generation.

Circuit C3 prepares a GHZ state across all qubits using Hadamard, Rx(π/2), or Ry(π/2) gates.

Quantum States as the Source of True Randomness

Quantum Random Number Generators (QRNGs) leverage the principle of superposition to produce unpredictable outputs. Unlike classical bits which are definitively 0 or 1, a qubit, through superposition, exists as a probabilistic combination of both states, described by $ \alpha|0\rangle + \beta|1\rangle$, where $\alpha$ and $\beta$ are complex amplitudes. Measurement of this qubit collapses the superposition, yielding either 0 or 1 with probabilities determined by $|\alpha|^2$ and $|\beta|^2$ respectively. Because the outcome is fundamentally probabilistic and not determined by any pre-existing conditions, repeated measurements of qubits in superposition provide a source of true randomness, essential for cryptographic applications and simulations.
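The Born-rule collapse described above can be mimicked classically. The following plain-Python sketch (an illustration, not the paper's code) samples measurement outcomes from the amplitudes $\alpha$ and $\beta$:

```python
import random

def measure(alpha, beta, shots=10_000):
    """Sample measurements of the state alpha|0> + beta|1>.

    Outcomes are 0 with probability |alpha|^2 and 1 with
    probability |beta|^2 (the state is assumed normalized).
    """
    p0 = abs(alpha) ** 2
    return [0 if random.random() < p0 else 1 for _ in range(shots)]

# Equal superposition, the state a Hadamard gate produces from |0>:
amp = 1 / 2 ** 0.5
bits = measure(amp, amp)
print(sum(bits) / len(bits))  # close to 0.5
```

The classical `random` module only imitates this behavior; on real hardware the unpredictability comes from the measurement itself, not from a seeded algorithm.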

The Hadamard gate, denoted $H$, is a single-qubit quantum gate that transforms a qubit in the $|0\rangle$ or $|1\rangle$ state into an equal superposition of both states. Mathematically, it operates as follows: $H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ and $H|1\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$. This property is crucial for Quantum Random Number Generator (QRNG) circuits because measuring the output of a qubit after applying the Hadamard gate yields a random bit value with a 50% probability of being 0 or 1. Multiple Hadamard gates are frequently used in series or parallel within QRNG designs to generate multiple random bits, forming the foundation for many practical implementations of quantum-based randomness sources.
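Numerically, the Hadamard gate is the matrix $\frac{1}{\sqrt{2}}\bigl(\begin{smallmatrix}1 & 1\\ 1 & -1\end{smallmatrix}\bigr)$, and a few lines of plain Python (an illustrative sketch, not hardware code) confirm its action on the basis states:

```python
from math import sqrt

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit state [amp0, amp1]."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

ket0, ket1 = [1.0, 0.0], [0.0, 1.0]
print(apply(H, ket0))  # [0.707..., 0.707...]  i.e. (|0> + |1>)/sqrt(2)
print(apply(H, ket1))  # [0.707..., -0.707...] i.e. (|0> - |1>)/sqrt(2)
```

Applying $H$ twice returns the original state, since $H$ is its own inverse; only the intervening measurement produces randomness.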

Entanglement, a uniquely quantum phenomenon, correlates the quantum states of two or more qubits in a manner that their fates are intertwined regardless of the physical distance separating them. This correlation is leveraged in Quantum Random Number Generation (QRNG) by measuring the entangled qubits; the outcome of measuring one qubit instantaneously determines the possible outcomes of measuring the other. While a single qubit in superposition provides one bit of randomness, entangled qubits can, in principle, generate multiple bits of randomness per measurement, increasing throughput. Furthermore, entanglement enables the construction of more complex QRNG circuits with potentially improved statistical properties and security against prediction compared to circuits relying solely on single-qubit superposition. However, maintaining entanglement is susceptible to decoherence and requires precise control and isolation of the qubits.
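The measurement statistics of an ideal Bell pair, $(|00\rangle + |11\rangle)/\sqrt{2}$, can be sketched classically (an idealized illustration that ignores decoherence): each shot is random, but the two qubits always agree.

```python
import random

def sample_bell(shots=1_000):
    """Sample ideal measurements of (|00> + |11>)/sqrt(2).

    Each shot gives 00 or 11 with equal probability, so the pair is
    random but perfectly correlated: one bit of entropy per shot.
    """
    return [(b, b) for b in (random.randint(0, 1) for _ in range(shots))]

pairs = sample_bell()
print(all(a == b for a, b in pairs))  # True: outcomes always agree
```

Note the trade-off implicit here: maximally entangled qubits yield correlated, not independent, bits, which is why circuit design must balance entanglement against raw throughput.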

Entropy estimations from the NIST SP 800-90B test suite reveal performance differences across various quantum circuit configurations, gate types, and qubit counts, as assessed by tests including collision, Markov, compression, and entropy-based measures.

Building Blocks: QRNG Circuit Designs and Their Implementation

The C1 circuit represents a foundational Quantum Random Number Generator (QRNG) design, employing a single qubit subjected to a Hadamard gate. This gate transforms the qubit’s initial $|0\rangle$ state into an equal superposition of both basis states, creating an equal probability of measuring either $0$ or $1$. The resulting measurement outcome serves as the random bit. While offering simplicity for initial testing and validation of the QRNG implementation, the single-qubit approach inherently limits the rate of random number generation, necessitating more complex designs for higher throughput applications. The C1 circuit serves as a baseline against which the performance of more advanced circuits, such as C2 and C3, can be evaluated.

Circuits C2 and C3 employ parallel qubit architectures and the Controlled-Z (CZ) gate to enhance random number generation throughput. Unlike the single-qubit C1 circuit, these designs operate on multiple qubits simultaneously, increasing the rate at which random bits can be produced. The CZ gate, a two-qubit gate facilitating entanglement, is integral to creating correlations between qubits, which are then measured to derive random values. By leveraging entanglement and parallelization, C2 and C3 significantly improve the output rate compared to simpler designs, although this comes with increased circuit complexity and potential for correlated errors that must be mitigated through post-processing.
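A minimal statevector sketch of this kind of layout, Hadamards on two parallel qubits followed by a CZ (an assumption about the general structure, not the paper's exact circuits), shows that the entangling CZ correlates phases while leaving the measured distribution uniform:

```python
from math import sqrt

def h_on(state, q):
    """Apply a Hadamard to qubit q of a statevector (little-endian)."""
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        b = (i >> q) & 1
        out[i & ~(1 << q)] += amp / sqrt(2)                   # |...0...> part
        out[i | (1 << q)] += (-amp if b else amp) / sqrt(2)   # |...1...> part
    return out

def cz(state, a, b):
    """Controlled-Z: flip the sign of amplitudes where both bits are 1."""
    return [-amp if (i >> a) & 1 and (i >> b) & 1 else amp
            for i, amp in enumerate(state)]

state = [1 + 0j, 0j, 0j, 0j]                  # start in |00>
state = cz(h_on(h_on(state, 0), 1), 0, 1)     # H on both qubits, then CZ
probs = [abs(amp) ** 2 for amp in state]
print(probs)  # [0.25, 0.25, 0.25, 0.25]: uniform over 00, 01, 10, 11
```

In the ideal case the CZ changes only the sign of the $|11\rangle$ amplitude, so the computational-basis probabilities stay uniform; on hardware, the two-qubit gate is where correlated errors enter.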

Implementation and testing of the C1, C2, and C3 quantum random number generator (QRNG) circuits were conducted on the IQM Spark superconducting quantum computer. Results indicated that native Rx/Ry gates consistently yielded superior performance compared to Hadamard gates, which required transpilation for execution on the IQM Spark hardware. Specifically, Circuit C2, utilizing parallel qubits and a CZ gate for entanglement, achieved the highest quality randomness as measured by statistical tests performed on the generated bitstrings, demonstrating a practical implementation of a quantum-based random number generator.

Circuit C1 utilizes either a Hadamard, Rx(π/2), or Ry(π/2) gate.

Validating True Randomness: The Rigor of Statistical Tests

The National Institute of Standards and Technology (NIST) provides two widely adopted publications for evaluating random number generators: Special Publication 800-22 and Special Publication 800-90B. SP 800-22 comprises 15 statistical tests designed to detect non-randomness in a sequence of bits, examining aspects such as frequency, runs, and serial correlations. SP 800-90B focuses on the validation of entropy sources used in cryptographic applications, specifying methods for estimating the entropy a source actually delivers. Together, the two publications provide a standardized methodology for assessing the quality of randomness, allowing objective comparison of different random number generation techniques and ensuring suitability for security-critical applications. Passing these tests does not guarantee true randomness, but it indicates that the generator exhibits no easily detectable patterns or biases.
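The simplest of the 15 SP 800-22 tests, the frequency (monobit) test, illustrates the general pattern: compute a statistic over the bitstring and convert it to a p-value. Below is a sketch of the published formula (not NIST's reference implementation):

```python
import random
from math import erfc, sqrt

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Maps bits to +/-1, sums them, and converts the normalized sum to
    a p-value; p >= 0.01 is conventionally consistent with randomness.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return erfc(abs(s) / sqrt(2 * n))

fair = monobit_p_value([random.randint(0, 1) for _ in range(10_000)])
stuck = monobit_p_value([1] * 10_000)
print(fair, stuck)  # the all-ones sequence gets a p-value of essentially 0
```

The other tests follow the same recipe with more elaborate statistics (runs, spectral, compression-based), which is why results are usually reported as distributions of p-values across tests, as in the figures above.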

Min-entropy, denoted as $H_{\min}(X) = \min_x\left(-\log_2 p(x)\right)$ for a random variable $X$ with probability mass function $p(x)$, provides a measure of the worst-case unpredictability of a random source. Unlike Shannon entropy, which calculates the average uncertainty, min-entropy focuses on the minimum probability of any single outcome. A higher min-entropy value indicates a greater degree of true randomness, as it implies that no single outcome is significantly more likely than others. This metric is particularly important in cryptographic applications where an attacker aims to predict the output of a random number generator; min-entropy directly bounds the attacker’s advantage. A source with low min-entropy is susceptible to prediction, even if its average randomness, as measured by Shannon entropy, appears sufficient. Consequently, min-entropy serves as a conservative and robust indicator of randomness quality.
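Estimated empirically from observed frequencies, the definition reduces to a few lines (a sketch of the naive plug-in estimate; SP 800-90B prescribes more careful estimators):

```python
from collections import Counter
from math import log2

def min_entropy(samples):
    """Empirical min-entropy in bits: -log2(probability of the modal outcome)."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -log2(p_max)

print(min_entropy([0, 1] * 500))        # 1.0: perfectly balanced bits
print(min_entropy([0, 0, 0, 1] * 250))  # ~0.415: biased toward 0
```

A bias of 75/25 already costs more than half a bit of worst-case entropy per sample, which is why small hardware biases matter so much for cryptographic use.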

Min-entropy, a measure of unpredictability, is susceptible to degradation from sources of error inherent in quantum systems, specifically measurement error and crosstalk. While statistical tests, such as those defined in NIST SP 800-22 and SP 800-90B, generally indicated acceptable randomness (evidenced by high p-values across tested circuits and gates), variations in min-entropy were observed depending on the specific quantum configuration. Notably, the 2-qubit circuit employing the C3-Ry gate demonstrated a significantly lower min-entropy value than other tested gate configurations, suggesting greater sensitivity to these error sources and underscoring the need for error mitigation techniques to ensure high-quality random number generation.
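One classical post-processing option for recovering quality from a biased source is the von Neumann extractor, offered here as a standard technique rather than something the paper is confirmed to apply. It trades throughput for unbiased output:

```python
def von_neumann_extract(bits):
    """Von Neumann extractor: debias independent but biased bits.

    Non-overlapping pairs: 01 -> 0, 10 -> 1, 00 and 11 discarded.
    Output is unbiased if inputs are independent with constant bias,
    at the cost of discarding at least half the raw bits.
    """
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

print(von_neumann_extract([0, 1, 1, 0, 1, 1, 0, 0, 1, 0]))  # [0, 1, 1]
```

The guarantee relies on bit independence, so correlated errors such as crosstalk still require mitigation at the hardware or circuit level.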

The Road Ahead: Towards Robust and Scalable Quantum Randomness

Quantum Random Number Generators (QRNGs) strive for true randomness, yet are often constrained by the capabilities of available quantum hardware. Transpilation addresses this by rewriting abstract quantum operations, such as the Hadamard gate, into sequences of simpler gates the hardware can natively execute, making circuit designs portable across devices. But this portability is not free: each added gate introduces its own error, and the results on the IQM Spark show that circuits built directly from native Rx/Ry gates outperform their transpiled Hadamard counterparts. Understanding this trade-off is essential for designing practical QRNGs on noisy intermediate-scale quantum (NISQ) devices, where every operation counts toward the final quality of randomness, and it points toward circuit designs that embrace the native gate set rather than fight it.
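Concretely, a platform whose native single-qubit gates are rotations cannot run a Hadamard directly; the transpiler substitutes an equivalent rotation sequence. One textbook decomposition, $H = R_z(\pi/2)\,R_x(\pi/2)\,R_z(\pi/2)$ up to a global phase, can be checked numerically (an illustrative identity; the exact native gate set and decomposition on the IQM Spark may differ):

```python
import cmath
from math import pi, sqrt, cos, sin

def rz(theta):
    """Z-rotation gate as a 2x2 matrix."""
    return [[cmath.exp(-1j * theta / 2), 0], [0, cmath.exp(1j * theta / 2)]]

def rx(theta):
    """X-rotation gate as a 2x2 matrix."""
    c, s = cos(theta / 2), sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

m = matmul(rz(pi / 2), matmul(rx(pi / 2), rz(pi / 2)))
h = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
# The product equals H times the unobservable global phase -i.
ok = all(abs(m[i][j] - (-1j) * h[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```

Three native rotations per Hadamard, rather than one native gate, is exactly the kind of overhead that makes transpiled circuits more error-prone than native ones.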

Quantum Random Number Generators (QRNGs) continually benefit from circuit design innovations aimed at maximizing randomness and efficiency. Recent explorations have focused on advanced circuit variations, notably C4 and C5, which depart from standard designs to address limitations in random bit generation. These circuits employ distinct quantum mechanical principles and gate arrangements to amplify inherent quantum uncertainty. C4, for example, leverages a cascade of entangled states to create a more complex and less predictable output, while C5 investigates alternative measurement schemes to minimize bias in the generated bits. Through rigorous analysis and comparison, researchers are identifying how these novel circuit topologies can improve the overall performance of QRNGs, increasing the entropy rate and reducing vulnerabilities to side-channel attacks, ultimately paving the way for more secure cryptographic systems and simulations.

The pursuit of genuinely random numbers remains a critical endeavor, as quantum random number generators (QRNGs) promise security levels unattainable by classical methods. However, realizing the full potential of QRNGs demands sustained investigation into current limitations. Challenges persist in scaling these devices for widespread application and ensuring their robustness against environmental noise and deliberate attacks. Future research will likely focus on novel quantum phenomena exploitable for randomness generation, improved integration with existing cryptographic infrastructure, and the development of standardized verification protocols. Successfully addressing these hurdles will not only fortify secure communication networks but also enable advancements in fields reliant on unbiased data, such as scientific simulations, financial modeling, and machine learning algorithms.

Circuit C4 utilizes either Hadamard gates (HH), Rx(π/2) rotation gates, or Ry(π/2) rotation gates.

The pursuit of true randomness, as demonstrated by the evaluation of quantum circuits on the IQM Spark, feels remarkably cyclical. Researchers meticulously craft designs, optimize for performance, and then… production inevitably introduces unforeseen variables. This mirrors a fundamental truth about complex systems – elegant theory collides with messy reality. As Richard Feynman observed, “The best way to have a good idea is to have a lot of ideas.” Here, a ‘lot of ideas’ translates to various quantum circuit designs, all ultimately subjected to the unforgiving scrutiny of statistical testing, proving yet again that everything new is old again, just renamed and still broken. The fact that native gates performed better than transpiled ones simply reinforces this; a seemingly advanced technique outperformed a more basic approach, because the devil, as always, is in the implementation details.

What’s Next?

The pursuit of randomness, as this work demonstrates, isn’t about achieving perfect unpredictability; it’s about characterizing the failure modes of predictability. Each iteration of circuit design, each optimization for the IQM Spark, merely shifts the burden of bias, creating new, more subtle correlations for future tests to uncover. The observation that native gates currently outperform transpiled ones isn’t a victory, but a baseline; everything optimized will one day be optimized back, demanding a relentless reevaluation of what constitutes ‘better’.

The focus now drifts, inevitably, toward scaling. More qubits don’t simply amplify randomness; they amplify the complexity of debugging it. Current statistical suites, like NIST SP 800-22, were designed for classical sources. The question isn’t whether quantum randomness passes these tests, but whether the tests remain relevant when confronted with errors intrinsic to the quantum substrate. A more pressing task involves developing diagnostics attuned to quantum failure: signatures of control errors, decoherence, and crosstalk that manifest as subtle deviations from ideal randomness.

Architecture isn’t a diagram, it’s a compromise that survived deployment. This research doesn’t resolve the question of quantum randomness; it reframes it. It’s not about finding the perfect generator, but about building systems resilient enough to tolerate imperfect ones. The long game isn’t about minimizing bias; it’s about understanding and quantifying it, a shift from chasing an ideal to navigating a landscape of inevitable imperfection. The field doesn’t refactor code; it resuscitates hope.


Original article: https://arxiv.org/pdf/2512.09862.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2025-12-11 17:56