1 |
Managing Consistency of Business Process Models across Abstraction Levels. ALMEIDA CASTELO BRANCO, MOISES. January 2014
Process models support the transition from business requirements to IT
implementations. An organization that adopts process modeling often maintains
several co-existing models of the same business process. These models target different
abstraction levels and stakeholder perspectives. Maintaining consistency among
these models has become a major challenge for such an organization. For
instance, propagating changes requires identifying tacit correspondences among the models,
which may be only in the memories of their original creators or may be lost
entirely.
Although different tools target specific needs of different roles,
we lack appropriate support for checking whether related models
maintained by different groups of specialists are still consistent after independent
editing. As a result, typical consistency management tasks such as
tracing, differencing, comparing, refactoring, merging, conformance checking,
change notification, and versioning are frequently done manually, which is
time-consuming and error-prone.
This thesis presents the Shared Model, a framework designed to improve
support for consistency management and impact analysis in process modeling. The
framework is designed as a result of a comprehensive industrial study that
elicited typical correspondence patterns between Business and IT process models
and the meaning of consistency between them.
The framework encompasses three major techniques and contributions:
1) matching heuristics to automatically discover complex correspondence
patterns among the models, and to maintain traceability among model
parts---elements and fragments; 2) a generator of edit operations to compute the
difference between process models; 3) a process model synchronizer, capable of
consistently propagating changes made to any model to its counterpart.
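To make the first contribution more concrete, the following sketch shows one way a simple label-similarity matching heuristic between Business- and IT-level activities could look. The function names, the similarity measure, and the threshold are illustrative assumptions, not the thesis's actual heuristics, which also cover complex one-to-many and fragment-level correspondences.

```python
# Hypothetical sketch: a simple matching heuristic in the spirit of the
# framework described above. It pairs Business- and IT-level activities by
# normalized label similarity.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between two activity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_elements(business_labels, it_labels, threshold=0.7):
    """Return (business, it, score) triples whose similarity meets the threshold."""
    matches = []
    for b in business_labels:
        for i in it_labels:
            score = label_similarity(b, i)
            if score >= threshold:
                matches.append((b, i, round(score, 2)))
    return matches

if __name__ == "__main__":
    business = ["Check credit", "Approve loan", "Notify customer"]
    it = ["CheckCreditService", "LoanApprovalTask", "SendNotification"]
    print(match_elements(business, it, threshold=0.5))
```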
We evaluated the Shared Model experimentally. The evaluation shows that the
framework can consistently synchronize Business and IT views related by
correspondence patterns, after non-simultaneous independent editing.
|
2 |
Propagating Changes between Aligned Process Models. Weidlich, Matthias; Mendling, Jan; Weske, Mathias. 28 February 2012
There is a wide variety of drivers for business process modelling initiatives, ranging from organisational redesign to the development of information systems. Consequently, a common business process is often captured in multiple models that overlap in content due to serving different purposes. Business process management aims at flexible adaptation to changing business needs. Hence, changes of business processes occur frequently and have to be incorporated in the respective process models. Once a process model is changed, related process models have to be updated accordingly, despite the fact that those process models may only be loosely coupled. In this article, we introduce an approach that supports change propagation between related process models. Given a change in one process model, we leverage the behavioural
abstraction of behavioural profiles for corresponding activities in order to determine a change region in another model. Our approach is able to cope with changes in pairs of models that are not related by hierarchical refinement and show behavioural inconsistencies. We evaluate the applicability of our approach with two real-world process model collections. To this end, we either deduce change operations from different model revisions or rely on synthetic change operations.
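For illustration, the hedged sketch below derives behavioural-profile relations (strict order, exclusiveness, interleaving) over activity pairs from a set of example traces; it approximates the abstraction the article builds on and is not the authors' implementation, which works on the process models themselves.

```python
# Hypothetical sketch: deriving behavioural-profile relations over activity
# pairs from example traces. a -> b (strict order) if a can occur before b
# but never after it; a + b (exclusiveness) if neither order is observed;
# a || b (interleaving) if both orders are observable.
from itertools import combinations

def weak_order(traces):
    """Set of pairs (a, b) such that a occurs before b in some trace."""
    order = set()
    for trace in traces:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                order.add((a, b))
    return order

def behavioural_profile(traces):
    order = weak_order(traces)
    activities = {a for trace in traces for a in trace}
    profile = {}
    for a, b in combinations(sorted(activities), 2):
        ab, ba = (a, b) in order, (b, a) in order
        if ab and ba:
            profile[(a, b)] = "||"   # interleaving
        elif ab:
            profile[(a, b)] = "->"   # strict order
        elif ba:
            profile[(a, b)] = "<-"   # reverse strict order
        else:
            profile[(a, b)] = "+"    # exclusiveness
    return profile

if __name__ == "__main__":
    traces = [["A", "B", "C"], ["A", "C", "B"], ["A", "D"]]
    print(behavioural_profile(traces))
```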
|
3 |
AUTOMATIC IMAGE TO MODEL ALIGNMENT FOR PHOTO-REALISTIC URBAN MODEL RECONSTRUCTION. Partington, Mike. 01 January 2001
We introduce a hybrid approach in which images of an urban scene are automatically aligned with a base geometry of the scene to determine model-relative external camera parameters. The algorithm takes as input a model of the scene and images with approximate external camera parameters and aligns the images to the model by extracting the facades from the images and aligning the facades with the model by minimizing over a multivariate objective function. The resulting image-pose pairs can be used to render photo-realistic views of the model via texture mapping.

Several natural extensions to the base hybrid reconstruction technique are also introduced. These extensions, which include vanishing point based calibration refinement and video stream based reconstruction, increase the accuracy of the base algorithm, reduce the amount of data that must be provided by the user as input to the algorithm, and provide a mechanism for automatically calibrating a large set of images for post processing steps such as automatic model enhancement and fly-through model visualization.

Traditionally, photo-realistic urban reconstruction has been approached from purely image-based or model-based approaches. Recently, research has been conducted on hybrid approaches, which combine the use of images and models. Such approaches typically require user assistance for camera calibration. Our approach is an improvement over these methods because it does not require user assistance for camera calibration.
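As a rough sketch of such a pose-refinement step, the code below minimizes a generic multivariate alignment objective over the six external camera parameters with a derivative-free optimizer. The error function and the caller-supplied projection routine are placeholders, not the thesis's actual facade-alignment objective.

```python
# Hypothetical sketch: refining approximate external camera parameters by
# minimizing a multivariate alignment objective. `project` is a
# caller-supplied routine that projects 3D model points into the image
# given a pose (rx, ry, rz, tx, ty, tz).
import numpy as np
from scipy.optimize import minimize

def alignment_error(pose, facade_points_2d, model_points_3d, project):
    """Sum of squared distances between projected model points and facade points."""
    projected = project(model_points_3d, pose)
    return float(np.sum((projected - facade_points_2d) ** 2))

def refine_pose(initial_pose, facade_points_2d, model_points_3d, project):
    result = minimize(
        alignment_error,
        x0=np.asarray(initial_pose, dtype=float),
        args=(facade_points_2d, model_points_3d, project),
        method="Nelder-Mead",   # derivative-free; the objective may be non-smooth
    )
    return result.x
```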
|
4 |
Aligning language models to code : exploring efficient, temporal, and preference alignment for code generation. Weyssow, Martin. 09 1900
Pre-trained and large language models (PLMs, LLMs) have had a transformative impact on the artificial intelligence (AI) for software engineering (SE) research field.
Through large-scale pre-training on terabytes of natural and programming language data, these models excel in generative coding tasks such as program repair and code generation.
Existing approaches to align the model's behaviour with specific tasks propose using parameter-free methods like prompting or fine-tuning to improve their effectiveness.
Nevertheless, it remains unclear how to align code PLMs and LLMs to more complex scenarios that extend beyond task effectiveness.
We focus on model alignment in three overlooked scenarios for code generation, each addressing a specific objective: optimizing fine-tuning costs, aligning models with new data while retaining previous knowledge, and aligning with user coding preferences or non-functional requirements.
We explore these scenarios in three articles, which constitute the main contributions of this thesis.
In the first article, we conduct an empirical study on parameter-efficient fine-tuning techniques (PEFTs) for code LLMs in resource-constrained settings.
Our study reveals the superiority of PEFTs over few-shot learning, showing that PEFTs like LoRA and QLoRA allow fine-tuning LLMs with up to 33 billion parameters on a single 24GB GPU without compromising task effectiveness.
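A minimal sketch of such a QLoRA-style setup is shown below, assuming the Hugging Face transformers and peft libraries; the model name, target modules, and hyperparameters are placeholders rather than the study's exact configuration.

```python
# Hedged sketch of QLoRA-style parameter-efficient fine-tuning setup:
# 4-bit quantized base weights plus LoRA adapters, so only the small
# adapter matrices are trained.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "bigcode/starcoderbase"  # placeholder code LLM

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize base weights to 4 bits (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # low-rank adapter hyperparameters
    target_modules=["q_proj", "v_proj"],     # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # only the adapters are trainable
```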
In the second article, we examine the behaviour of code PLMs in a continual fine-tuning setting, where the model acquires new knowledge from sequential domain-specific datasets.
Each dataset introduces new data about third-party libraries not seen during pre-training or previous fine-tuning.
We demonstrate that sequential fine-tuning leads to catastrophic forgetting and implement replay- and regularization-based continual learning approaches, showcasing their superiority in balancing task effectiveness and knowledge retention.
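The sketch below illustrates one replay-based variant of such a continual fine-tuning loop; the replay fraction and the fine_tune callable are illustrative assumptions, not the article's exact method.

```python
# Hedged sketch of replay-based continual fine-tuning: before tuning on each
# new domain-specific dataset (e.g. a new third-party library), a small sample
# of earlier examples is mixed back in to mitigate catastrophic forgetting.
# `fine_tune` stands in for any training routine; datasets are plain lists.
import random

def continual_fine_tune(model, domain_datasets, fine_tune, replay_fraction=0.1, seed=0):
    rng = random.Random(seed)
    replay_buffer = []
    for dataset in domain_datasets:                    # sequential domains
        k = int(replay_fraction * len(dataset))
        replayed = rng.sample(replay_buffer, min(k, len(replay_buffer)))
        model = fine_tune(model, dataset + replayed)   # new data plus replayed old data
        replay_buffer.extend(dataset)                  # remember examples for later stages
    return model
```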
In our third article, we introduce CodeUltraFeedback and CODAL-Bench, a novel dataset and benchmark for aligning code LLMs to user coding preferences or non-functional requirements.
Our experiments reveal that tuning LLMs with reinforcement learning techniques like direct preference optimization (DPO) using CodeUltraFeedback results in LLMs better aligned with coding preferences and in substantial improvements in the functional correctness of LLM-generated code.
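A hedged sketch of this kind of DPO tuning with the trl library follows, assuming a trl release where DPOConfig is available and DPOTrainer accepts a tokenizer argument, and a preference dataset exposing prompt/chosen/rejected fields; the model name, dataset identifier, and hyperparameters are placeholders, not the thesis's exact setup.

```python
# Hedged sketch of direct preference optimization (DPO) tuning with trl.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "deepseek-ai/deepseek-coder-6.7b-instruct"   # placeholder code LLM
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Placeholder dataset id; the preference pairs must expose
# "prompt", "chosen", and "rejected" fields.
preference_data = load_dataset("coseal/CodeUltraFeedback", split="train")

config = DPOConfig(
    output_dir="dpo-code-llm",
    beta=0.1,                      # strength of the KL penalty toward the reference model
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,
    ref_model=None,                # trl creates a frozen reference copy when None
    args=config,
    train_dataset=preference_data,
    tokenizer=tokenizer,           # recent trl versions name this processing_class
)
trainer.train()
```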
|