A framework for evaluating tool support for co-evolution of modeling languages, tools and models
This program is tentative and subject to change.
We present a framework for evaluating language workbenches’ capabilities for co-evolution of graphical modeling languages, modeling tools and models. As with programming, maintenance tasks such as language refinement and enhancement typically account for more work than the initial development phase. Modeling languages have the added challenge of keeping tools and existing models in step with the evolving language. As domain-specific modeling languages and tools have started to be used widely, thanks to reports of significant productivity improvements, some language workbench users have indeed reported problems with co-evolution of tools and models. Our tool-agnostic evaluation framework aims to cover changes across the whole language definition: the abstract syntax, concrete syntax, and constraints. Change impact is assessed for knock-on effects within the language definition, the modeling tools, semantics via generators, and existing models. We demonstrate the viability of the framework by evaluating MetaEdit+, EMF/Sirius and Jjodel, providing a detailed evaluation process for others to repeat with their tools. The results of the evaluation show differences among the tools: from editors not opening correctly or at all, through highlighting of items requiring manual intervention, to fully automatic updates of languages, models and editors. We call for industry to evaluate their tool choices with the framework, tool developers to extend their tool support for co-evolution, and researchers to refine the evaluation framework and evaluations presented.
Wed 8 Oct (times in Eastern Time, US & Canada)
14:00 - 15:30 | Session 4: Model Transformation, Verification, and Analysis
Research Papers / New Ideas and Emerging Results (NIER) / Journal-First
Room DCIH 507, Hybrid

14:00 (18m) Talk: Translating Behavior Trees to Petri Nets for Model Checking (Research Papers)
Matteo Palmas (Bosch Research, Robert Bosch GmbH), Michaela Klauck (Bosch Research, Robert Bosch GmbH), Ralph Lange (Bosch Research, Robert Bosch GmbH), Enrico Ghiorzi (University of Genoa), Armando Tacchella (University of Genoa)

14:18 (18m) Talk: Vision: An Extensible Methodology for Formal Software Verification in Microservice Systems (New Ideas and Emerging Results, NIER)
Connor Wojtak (University of Arizona), Darek Gajewski (University of Arizona, Tucson, Arizona, USA), Tomas Cerny (University of Arizona)

14:36 (18m) Talk: Automata Models for Effective Bug Description (Research Papers)
Tom Yaacov (Ben-Gurion University of the Negev), Gera Weiss (Ben-Gurion University of the Negev), Gal Amram (IBM Research), Avi Hayoun (Ben-Gurion University of the Negev)

14:54 (18m) Talk: Towards the Coordination and Verification of Heterogeneous Systems with Data and Time (Research Papers)
Tim Kräuter (Western Norway University of Applied Sciences), Adrian Rutle (Western Norway University of Applied Sciences), Yngve Lamo (Western Norway University of Applied Sciences), Harald König (FHDW Hannover, Western Norway University of Applied Sciences), Francisco Durán (University of Málaga, Spain)
Pre-print available

15:12 (18m) Talk: A framework for evaluating tool support for co-evolution of modeling languages, tools and models (Journal-First)
Juha-Pekka Tolvanen (MetaCase), Steven Kelly (MetaCase), Juri Di Rocco (University of L'Aquila), Alfonso Pierantonio, Giordano Tinella
DOI available