Speaker
Description
Achieving undeniable quantum supremacy requires the precise manipulation of many high-quality qubits. Whether such a quantum computer can actually be built remains a central question with profound implications for both physics and technology. Currently, we are living in the "noisy intermediate-scale quantum" (NISQ) era, characterized by small and imperfect quantum processors. Our immediate challenge is to improve their quality and scalability. To address this challenge, we need tools to compare different strategies and architectures for building quantum computers. This task is complex, as numerous factors, such as the number of qubits, their connectivity, the available quantum gates, and compatibility with classical software, influence a quantum computer's performance.
Quantum Volume (QV) serves as a performance metric for quantum computers. It represents the largest "square" random quantum circuit (one whose number of layers equals its number of qubits) that can be run with reliable results. In this way, it assesses a quantum computer's capability without delving into its specific details. Computing QV involves averaging, over random circuits, the fidelity between the ideal output state (resulting from a faultless computer) and the imperfect state actually produced (resulting from a faulty one).
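The fidelity-averaging idea can be sketched numerically. The snippet below is a minimal illustration, not the official QV protocol: it draws Haar-random unitaries for each layer of a square circuit, evolves both an ideal pure state and a noisy density matrix (assuming, for simplicity, a global depolarizing channel after each layer), and averages the fidelity between the two. The names `haar_unitary` and `avg_fidelity`, and the depolarizing noise model, are illustrative assumptions.

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the phases so the distribution is Haar

def avg_fidelity(n_qubits, p_depol, trials, rng):
    # Average fidelity between ideal and noisy outputs of random
    # "square" circuits (depth == n_qubits), one Haar unitary per layer.
    dim = 2 ** n_qubits
    fids = []
    for _ in range(trials):
        psi = np.zeros(dim, dtype=complex)
        psi[0] = 1.0                       # start in |0...0>
        rho = np.outer(psi, psi.conj())    # noisy copy as a density matrix
        for _ in range(n_qubits):          # square circuit: depth = width
            u = haar_unitary(dim, rng)
            psi = u @ psi                  # ideal (faultless) evolution
            rho = u @ rho @ u.conj().T     # noisy evolution ...
            rho = (1 - p_depol) * rho + p_depol * np.eye(dim) / dim
        fids.append(np.real(psi.conj() @ rho @ psi))  # fidelity <psi|rho|psi>
    return float(np.mean(fids))

rng = np.random.default_rng(0)
print(avg_fidelity(2, 0.0, 10, rng))   # no noise: fidelity is 1
print(avg_fidelity(2, 0.05, 10, rng))  # noise lowers the average fidelity
```

In the actual QV benchmark the layers are built from two-qubit gates on permuted qubit pairs and success is judged by heavy-output sampling, but the same structure, random square circuits compared against an ideal reference, underlies this toy version.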
This project aims to gain a deeper understanding of QV and explore how it depends on factors like expressibility (the range of achievable unitaries in a circuit), connectivity, and gate size. This analysis will help us decide when QV is suitable and when additional refinements are necessary for meaningful comparisons of quantum processors.