Meriem Ben Chaaben

Towards using Few-Shot Prompt Learning for Automating Model Completion

ICSE 2023 NIER track · May 14, 2023

We propose a simple yet novel approach to improving completion in domain modeling activities. Our approach harnesses large language models through few-shot prompt learning, avoiding the need to train or fine-tune those models on large datasets, which are scarce in this field. We implemented our approach and tested it on the completion of static and dynamic domain diagrams. Our initial evaluation shows that the approach is effective and can be integrated into modeling activities in different ways.


Toward Intelligent Generation of Tailored Concrete Syntax

MODELS 2024 · March 2024

Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems (MODELS 2024).

This paper presents our approach to the intelligent generation of tailored concrete syntax for domain-specific languages. It introduces a framework that enables designers to interact with the concrete syntax and customize it to their needs.

In-person presentation date: October 27, 2024, in Linz, Austria.


Software Modeling Assistance with Large Language Models

MODELS 2024 Student Research Competition · April 2024

Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems (MODELS 2024). This work discusses the use of large language models to assist in software modeling and presents the results of our research, which earned second place in the ACM Student Research Competition. It includes a detailed abstract and a poster outlining our findings.

In-person competition date: October 25, 2024, in Linz, Austria.

Award: Second Place.

On the Utility of Domain Modeling Assistance with Large Language Models

ACM Transactions on Software Engineering and Methodology · Continuous Special Section on Human-Centric Software Engineering

This paper, currently under review, explores the utility of domain modeling assistance using large language models. It discusses how LLMs can aid in complex domain modeling tasks and offers insights into how human expertise and AI-powered assistance tools can be combined in the future.