Model-Driven Quantum Code Generation Using Large Language Models and Retrieval-Augmented Generation
This program is tentative and subject to change.
This paper introduces a novel research direction for model-to-text/code transformations by leveraging Large Language Models (LLMs) enhanced with Retrieval-Augmented Generation (RAG) pipelines. The focus is on quantum and hybrid quantum-classical software systems, where model-driven approaches can help reduce the costs and mitigate the risks associated with the heterogeneous platform landscape and the shortage of developer skills. We validate one of the proposed ideas by generating code from UML model instances of software systems. The generated Python code uses Qiskit, a well-established library, to target gate-based (circuit-model) quantum computers. The RAG pipeline that we deploy incorporates sample Qiskit code from public GitHub repositories. Experimental results show that well-engineered prompts can improve CodeBLEU scores by up to a factor of four, yielding more accurate and consistent quantum code. However, the proposed research direction extends beyond this validation: future experiments could address our other research questions and ideas, such as using software system model instances as the source of information in RAG pipelines, or applying LLMs to code-to-code transformations, for instance for transpilation use cases.
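As an illustration only (not the authors' implementation), the minimal sketch below shows the two pieces the abstract describes: assembling a prompt that combines a UML model instance with retrieved Qiskit snippets, and the kind of gate-based Qiskit code such a pipeline is expected to emit. The prompt template, the build_prompt helper, and the retrieval format are assumptions for illustration; the circuit is a generic Bell-state example, assuming the qiskit and qiskit-aer packages are installed.

# Illustrative sketch; not the paper's pipeline.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator


def build_prompt(uml_model_text: str, retrieved_snippets: list[str]) -> str:
    """Combine a UML model instance with retrieved Qiskit examples
    (hypothetical prompt format, shown only to convey the RAG idea)."""
    context = "\n\n".join(retrieved_snippets)
    return (
        "You are a quantum software engineer. Using the Qiskit examples below "
        "as reference, generate Python/Qiskit code implementing the modeled system.\n\n"
        f"### Retrieved Qiskit examples\n{context}\n\n"
        f"### UML model instance\n{uml_model_text}\n\n"
        "### Generated Qiskit code:\n"
    )


# Example of the kind of gate-based circuit code such a pipeline targets:
# prepare and measure a Bell state on a local simulator.
qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard on qubit 0
qc.cx(0, 1)                  # entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])   # measure both qubits

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)                # roughly equal counts for '00' and '11'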
Wed 8 Oct · Displayed time zone: Eastern Time (US & Canada)
14:00 - 15:30 | Session 3: Large Language Models and Modeling | New Ideas and Emerging Results (NIER) / Research Papers | Main Room 1
14:00 (18m) | Talk | MCeT: Behavioral Model Correctness Evaluation using Large Language Models | Research Papers | Khaled Ahmed (Huawei Technologies Canada), Jialing Song (Huawei Technologies Canada), Boqi Chen (McGill University), Ou Wei (Huawei Technologies Canada), Bingzhou Zheng (Huawei Technologies Canada) | Pre-print
14:18 (18m) | Talk | Accurate and Consistent Graph Model Generation from Text with Large Language Models | Research Papers | Boqi Chen (McGill University), Ou Wei (Huawei Technologies Canada), Bingzhou Zheng (Huawei Technologies Canada), Gunter Mussbacher (McGill University) | Pre-print
14:36 (18m) | Talk | SHERPA: A Model-Driven Framework for Large Language Model Execution | Research Papers | Boqi Chen (McGill University), Kua Chen (McGill University), José Antonio Hernández López (Department of Computer Science and Systems, University of Murcia), Gunter Mussbacher (McGill University), Daniel Varro (Linköping University / McGill University), Amir Feizpour (Aggregate Intellect) | Pre-print
14:54 (18m) | Talk | Model-Driven Quantum Code Generation Using Large Language Models and Retrieval-Augmented Generation | New Ideas and Emerging Results (NIER) | Nazanin Siavash (University of Colorado Colorado Springs (UCCS)), Armin Moin (University of Colorado Colorado Springs)
15:12 (18m) | Talk | Towards LLM-enhanced Conflict Detection and Resolution in Model Versioning | New Ideas and Emerging Results (NIER) | Martin Eisenberg (Johannes Kepler University, Linz), Stefan Klikovits (Johannes Kepler University, Linz), Manuel Wimmer (JKU Linz), Konrad Wieland (LieberLieber Software GmbH)