This program is tentative and subject to change.

Graph model generation from natural language descriptions is an important task with many applications in software engineering. With the rise of large language models (LLMs), there is growing interest in using LLMs for graph model generation. Nevertheless, LLM-based graph model generation typically produces partially correct models that suffer from three main issues: (1) syntax violations: the generated model may not adhere to the syntax defined by its metamodel; (2) constraint inconsistencies: the structure of the model might not conform to some domain-specific constraints; and (3) inaccuracy: due to the inherent uncertainty in LLMs, the models can include inaccurate, hallucinated elements. While the first issue is often addressed through techniques such as constrained decoding or filtering, the latter two remain largely unaddressed. Motivated by recent self-consistency approaches for LLMs, we propose a novel abstraction-concretization framework that enhances the consistency and quality of generated graph models by considering multiple outputs from an LLM. Our approach first constructs a probabilistic partial model that aggregates all candidate outputs and then refines this partial model into the most appropriate concrete model that satisfies all constraints. We evaluate our framework on several popular open-source and closed-source LLMs using diverse datasets for model generation tasks. The results demonstrate that our approach significantly improves both the consistency and quality of the generated graph models.
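As a rough illustration of the abstraction-concretization idea described above (a minimal sketch, not the authors' implementation), one can aggregate several sampled models into element probabilities and then greedily concretize under a constraint check. The edge-triple representation, the threshold, and all function names below are hypothetical.

```python
from collections import Counter

def aggregate_candidates(candidate_models):
    """Abstraction step (sketch): build a probabilistic partial model
    mapping each element (here, a typed edge) to the fraction of
    candidate models that contain it."""
    counts = Counter()
    for model in candidate_models:
        counts.update(set(model))
    n = len(candidate_models)
    return {edge: c / n for edge, c in counts.items()}

def concretize(partial_model, constraint, threshold=0.5):
    """Concretization step (sketch): greedily keep high-probability
    elements, admitting each only if the domain-specific constraint
    still holds for the growing concrete model."""
    concrete = set()
    # Consider elements from most to least probable.
    for edge, p in sorted(partial_model.items(), key=lambda kv: -kv[1]):
        if p >= threshold and constraint(concrete | {edge}):
            concrete.add(edge)
    return concrete

# Toy usage: three sampled "models" as edge sets; the (hypothetical)
# domain constraint forbids self-loops.
samples = [
    {("Order", "has", "Item"), ("Item", "refines", "Item")},
    {("Order", "has", "Item"), ("Order", "placedBy", "Customer")},
    {("Order", "has", "Item"), ("Order", "placedBy", "Customer")},
]
no_self_loops = lambda m: all(s != t for s, _, t in m)
partial = aggregate_candidates(samples)
model = concretize(partial, no_self_loops)
# The unanimous and majority edges survive; the low-probability
# self-loop ("Item", "refines", "Item") is dropped.
```

This sketch only captures the high-level shape of the approach; the paper's partial models and refinement are defined over metamodel-typed graph structures rather than bare edge sets.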


Wed 8 Oct

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Session 3: Large Language Models and Modeling
Research Papers / New Ideas and Emerging Results (NIER) at DCIH 102

Hybrid

14:00
18m
Talk
MCeT: Behavioral Model Correctness Evaluation using Large Language Models (In Person)
Research Papers
Khaled Ahmed (Huawei Research Canada; University of British Columbia (UBC)), Jialing Song (Huawei Technologies Canada), Boqi Chen (McGill University), Ou Wei (Huawei Technologies Canada), Bingzhou Zheng (Huawei Technologies Canada)
Pre-print
14:18
18m
Talk
Towards LLM-enhanced Conflict Detection and Resolution in Model Versioning (In Person)
New Ideas and Emerging Results (NIER)
Martin Eisenberg (Johannes Kepler University, Linz), Stefan Klikovits (Johannes Kepler University, Linz), Manuel Wimmer (JKU Linz), Konrad Wieland (LieberLieber Software GmbH)
14:36
18m
Talk
Model-Driven Quantum Code Generation Using Large Language Models and Retrieval-Augmented Generation (In Person)
New Ideas and Emerging Results (NIER)
Nazanin Siavash (University of Colorado Colorado Springs (UCCS)), Armin Moin (University of Colorado Colorado Springs)
14:54
18m
Talk
SHERPA: A Model-Driven Framework for Large Language Model Execution (Remote)
Research Papers
Boqi Chen (McGill University), Kua Chen (McGill University), José Antonio Hernández López (Department of Computer Science and Systems, University of Murcia), Gunter Mussbacher (McGill University), Daniel Varro (Linköping University / McGill University), Amir Feizpour (Aggregate Intellect)
Pre-print
15:12
18m
Talk
Accurate and Consistent Graph Model Generation from Text with Large Language Models (Remote)
Research Papers
Boqi Chen (McGill University), Ou Wei (Huawei Technologies Canada), Bingzhou Zheng (Huawei Technologies Canada), Gunter Mussbacher (McGill University)
Pre-print