31 |
An approach to architecture-centric domain-specific modelling and implementation for software development and reuse
Duan, Qing, January 2010
Model-driven development has long been expected to improve software productivity significantly, but after many years of research and application this promise has not been fulfilled. Models are still used mainly at the analysis and design stages, and they gradually drift away from the system implementation. This thesis integrates domain-specific modelling and web service techniques with model-driven development and proposes a unified approach, SODSMI (Service-Oriented executable Domain-Specific Modelling and Implementation), to build executable domain-specific models and to achieve the goal of model-driven development. The approach is organised around the domain space at the architectural level, which is the elementary unit of the domain-specific modelling and implementation framework. The research on SODSMI consists of three main parts. Firstly, xDSM (eXecutable Domain-Specific Model) is proposed as the core construct for domain-specific modelling; behaviour scenarios are adopted to build the meta-modelling framework for xDSM. Secondly, the XDML language (eXecutable Domain-specific Meta-modelling Language) is designed to describe the xDSM meta-model and its application models. Thirdly, DSMEI (Domain-Specific Model Execution Infrastructure) is designed as the execution environment for xDSM; web services are adopted as the implementation entities mapped to the core functions of xDSM, so as to realise service-oriented domain-specific applications. The thesis thus emphasises the central value of models and provides a feasible path from modelling to system implementation, making model-driven, domain-specific software development and reuse attainable in practice.
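As a general illustration only (no code or interfaces appear in the abstract itself), here is a minimal Python sketch of the idea of an executable domain-specific model whose operations are bound to service implementations; all class, operation, and service names below are hypothetical.

# Illustrative sketch (not from the thesis): an executable domain-specific model
# whose operations are bound to service implementations, in the spirit of
# mapping model-level functions onto web services. All names are hypothetical.

from typing import Callable, Dict


class ExecutableDomainModel:
    """A toy 'xDSM-like' model: named operations bound to service endpoints."""

    def __init__(self) -> None:
        self._bindings: Dict[str, Callable[..., object]] = {}

    def bind(self, operation: str, service: Callable[..., object]) -> None:
        # In a real infrastructure the binding would point to a web service
        # (e.g. a REST or SOAP endpoint) rather than a local callable.
        self._bindings[operation] = service

    def execute(self, operation: str, **kwargs: object) -> object:
        if operation not in self._bindings:
            raise KeyError(f"No service bound to operation '{operation}'")
        return self._bindings[operation](**kwargs)


# Usage: bind a domain operation to a (stubbed) service and run the model.
model = ExecutableDomainModel()
model.bind("check_inventory", lambda item, qty: {"item": item, "available": qty <= 10})
print(model.execute("check_inventory", item="valve", qty=3))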
|
32 |
Rahmenwerk zur integrativen Gestaltung von Services (Framework for the Integrative Design of Services)
Augenstein, Christoph, 05 July 2016
This thesis is concerned with model-driven service description, an approach to the integrative description and modelling of services. Starting from the premise that the nature of services is difficult to capture and that a positive definition valid across disciplines cannot be reached, the approach aims at a synthesis of the modelling approaches available so far. The goal is not yet another attempt to develop a complete model for describing services, but rather to integrate existing perspectives so that a complete picture emerges as a kind of mosaic. The core of the thesis is the Service Modeling Framework. As a framework, it comprises requirements and restrictions for working with different service models, defines methods for their integration, and offers tools with which the concepts it contains are put into practice.
The aim in constructing the framework is to enable users to merge models by means of model-driven techniques. Based on a domain-level, not merely syntactic, description of the relationships between models and model elements, interdependencies are modelled that realise an exchange of information between models. In this way, dependencies between models are stated explicitly, or the development of new models is driven forward on the basis of existing ones.
The contribution of this thesis lies in working out the necessary concepts and in providing suitable methods and tools for their implementation. In particular, the thesis provides a metamodel-based approach with which models can be related to one another via a domain-neutral base metamodel. Accompanying tools, such as editors, demonstrate prototypical feasibility.
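As a hedged illustration of relating partial models through a shared, domain-neutral anchor (not the actual Service Modeling Framework or its base metamodel), here is a minimal Python sketch with invented element and perspective names.

# Illustrative sketch (not the Service Modeling Framework itself): two partial
# service models are related through a shared, domain-neutral base element so
# that their information can be read as one mosaic. All names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class BaseElement:
    """Domain-neutral anchor that partial models refer to."""
    element_id: str
    facets: Dict[str, dict] = field(default_factory=dict)

    def attach(self, perspective: str, data: dict) -> None:
        # Each modelling perspective contributes its own facet; the base
        # element records only the relationship, not the perspective's semantics.
        self.facets[perspective] = data


# Two perspectives describe the same service independently ...
service = BaseElement("service:shipping")
service.attach("cost_model", {"price_per_kg": 2.5, "currency": "EUR"})
service.attach("capability_model", {"max_weight_kg": 30, "regions": ["EU"]})

# ... and a consumer can read the combined picture through the base element.
print(service.facets["cost_model"]["price_per_kg"])
print(service.facets["capability_model"]["regions"])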
|
33 |
Preservation of Extra-Functional Properties in Embedded Systems Development
Saadatmand, Mehrdad, January 2015
The interaction of embedded systems with their environments and their resource limitations make it important to take into account properties such as timing, security, and resource consumption in designing such systems. These so-called Extra-Functional Properties (EFPs) capture and describe the quality and characteristics of a system, and they need to be taken into account from early phases of development and throughout the system's lifecycle. An important challenge in this context is to ensure that the EFPs that are defined at early design phases are actually preserved throughout detailed design phases as well as during the execution of the system on its platform. In this thesis, we provide solutions to help with the preservation of EFPs, targeting both system design phases and system execution on the platform. Starting from requirements, which form the constraints of EFPs, we propose an approach for modeling Non-Functional Requirements (NFRs) and evaluating different design alternatives with respect to the satisfaction of the NFRs. Considering the relationship and trade-off among EFPs, an approach for balancing timing versus security properties is introduced. Our approach enables balancing in two ways: in a static way, resulting in a fixed set of components in the design model that are analyzed and thus verified to be balanced with respect to the timing and security properties, and also in a dynamic way during the execution of the system through runtime adaptation. Considering the role of the platform in the preservation of EFPs and in mitigating possible violations of them, an approach is suggested to enrich the platform with the mechanisms necessary to enable monitoring and enforcement of timing properties. In the thesis, we also identify and demonstrate issues related to accuracy in monitoring EFPs, show how accuracy can affect the decisions made based on the collected information, and propose a technique to tackle this problem. As another contribution, we also show how runtime monitoring information collected about EFPs can be used to fine-tune design models until a desired set of EFPs is achieved. We have also developed a testing framework which enables automatic generation of test cases in order to verify the actual behavior of a system against its desired behavior. On a high level, the contributions of the thesis are thus twofold: proposing methods and techniques to 1) improve maintenance of EFPs within their correct range of values during system design, and 2) identify and mitigate possible violations of EFPs at runtime. / CHESS / MBAT / ITS-EASY
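As an illustration of the general idea of runtime monitoring of a timing property (a sketch under assumed names and thresholds, not the platform mechanism developed in the thesis):

# Illustrative sketch: a minimal runtime monitor for a timing property. A task
# is given a deadline; the monitor measures its execution time and reports a
# violation so that an enforcement or adaptation step could react.
# Names and thresholds are hypothetical.

import time
from functools import wraps


def deadline_monitor(deadline_s: float):
    """Wrap a task and flag executions that exceed the deadline."""
    def decorator(task):
        @wraps(task)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = task(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > deadline_s:
                # A real platform service could trigger enforcement here,
                # e.g. degrade a quality level or reschedule the task.
                print(f"[EFP violation] {task.__name__}: {elapsed:.4f}s > {deadline_s}s")
            return result
        return wrapper
    return decorator


@deadline_monitor(deadline_s=0.01)
def sample_task(n: int) -> int:
    return sum(i * i for i in range(n))


sample_task(200_000)  # may or may not violate the 10 ms budget, depending on the host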
|
34 |
Extracting Reusable Design Decisions for UML-based Domain-specific Languages: A Multi-Method Study
Sobernig, Stefan; Hoisl, Bernhard; Strembeck, Mark, January 2016
When developing domain-specific modeling languages (DSMLs), software engineers have to make a number of important design decisions on the DSML itself, or on the software-development process that is applied to develop the DSML. Thus, making well-informed design decisions is a critical factor in developing DSMLs. To support this decision-making process, the model-driven development community has started to collect established design practices in terms of patterns, guidelines, story-telling, and procedural models. However, most of these documentation practices do not capture the details necessary to reuse the rationale behind these decisions in other DSML projects. In this paper, we report on a three-year research effort to compile and to empirically validate a catalog of structured decision descriptions (decision records) for UML-based DSMLs. This catalog is based on design decisions extracted from 90 DSML projects. These projects were identified - among others - via an extensive systematic literature review (SLR) for the years 2005 - 2012. Based on more than 8,000 candidate publications, we finally selected 84 publications for extracting design-decision data. The extracted data were evaluated quantitatively using a frequent-item-set analysis to obtain characteristic combinations of design decisions and qualitatively to document recurring documentation issues for UML-based DSMLs. We revised the collected decision records based on this evidence and made the decision-record catalog for developing UML-based DSMLs publicly available. Furthermore, our study offers insights into UML usage (e.g. diagram types) and into the adoption of UML extension techniques (e.g. metamodel extensions, profiles).
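To illustrate what a frequent-item-set analysis over per-project design decisions looks like in principle (a toy sketch with invented decision labels and support threshold, not the study's actual data or tooling):

# Illustrative sketch: count co-occurring design decisions across projects and
# report combinations that reach a minimum support. Labels are made up.

from collections import Counter
from itertools import combinations

projects = [
    {"profile", "metamodel_extension", "diagram:class"},
    {"profile", "diagram:class", "diagram:activity"},
    {"profile", "metamodel_extension", "diagram:activity"},
    {"profile", "diagram:class"},
]

min_support = 2  # minimum number of projects a combination must appear in
pair_counts = Counter()
for decisions in projects:
    for pair in combinations(sorted(decisions), 2):
        pair_counts[pair] += 1

for pair, count in pair_counts.most_common():
    if count >= min_support:
        print(f"{pair}: appears in {count} projects")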
|
35 |
A Model Driven Component Agent Framework for Domain Experts
Jayatilleke, Gaya Buddhinath (buddhinath@gmail.com), January 2007
Industrial software systems are becoming more complex, with a large number of interacting parts distributed over networks. Due to the inherent complexity of the problem domains, most such systems are modified over time to incorporate emerging requirements, making incremental development a suitable approach for building complex systems. In domain-specific systems it is the domain experts, as end users, who identify improvements that better suit their needs. Examples include meteorologists who use weather modeling software, engineers who use control systems, and business analysts who model business processes. Most domain experts are not fluent in systems programming, so changes are realised through software engineers. This process hinders the evolution of the system, making it time consuming and costly. We hypothesise that if domain experts are empowered to make some of the system changes, it would greatly ease the evolutionary process, thereby making the systems more effective. Agent Oriented Software Engineering (AOSE) is seen as a natural fit for modeling and implementing distributed complex systems. With concepts such as goals and plans, agent systems support easy extension of functionality, which facilitates incremental development. Furthermore, agents provide an intuitive metaphor that works at a higher level of abstraction than the object-oriented model. However, agent programming is not at a level accessible to domain experts, who therefore cannot capitalise on its intuitiveness and appropriateness for building complex systems. We propose a model driven development approach for domain experts that uses visual modeling and automated code generation to simplify the development and evolution of agent systems. Our approach is called the Component Agent Framework for domain-Experts (CAFnE), which builds upon concepts from Model Driven Development and the Prometheus agent software engineering methodology. CAFnE enables domain experts to work with a graphical representation of the system, which is easier to understand and work with than textual code. The model of the system, updated by domain experts, is then transformed to executable code using a transformation function. CAFnE is supported by a proof-of-concept toolkit that implements the visual modeling, model driven development and code generation. We used the CAFnE toolkit in a user study where five domain experts (weather forecasters) with no prior experience in agent programming were asked to make changes to an existing weather alerting system. Participants were able to rapidly become familiar with CAFnE concepts, comprehend the system's design, make design changes and implement them using the CAFnE toolkit.
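As a hedged illustration of the model-to-code step described above (not CAFnE's actual transformation function; the model structure and generated skeleton are invented):

# Illustrative sketch: a toy model-to-code transformation that turns a
# declarative description of an agent's goals and plans into skeleton source
# text. In a CAFnE-like workflow the domain expert would edit the model and
# the code would be regenerated; the details here are hypothetical.

agent_model = {
    "agent": "WeatherAlerter",
    "goals": ["DetectSevereWind"],
    "plans": [
        {"name": "CheckWindSpeed", "trigger": "DetectSevereWind"},
    ],
}


def generate_code(model: dict) -> str:
    """Emit a plain-Python skeleton from the agent model."""
    lines = [f"class {model['agent']}:"]
    for plan in model["plans"]:
        lines.append(f"    def {plan['name'].lower()}(self, event):")
        lines.append(f"        # handles goal: {plan['trigger']}")
        lines.append("        raise NotImplementedError")
    return "\n".join(lines)


print(generate_code(agent_model))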
|
36 |
The Meaning of UML Models
O'Keefe, Greg (gregokeefe@netspace.net.au), January 2010
The Unified Modelling Language (UML) is intended to express complex ideas in an intuitive and easily understood way. It is important because it is widely used in software engineering and other disciplines. Although an official definition document exists, there is much debate over the precise meaning of UML models.

In response, the academic community have put forward many different proposals for formalising UML, but it is not at all obvious how to decide between them. Indeed, given that UML practitioners are inclined to reject formalisms as non-intuitive, it is not even obvious that the definition should be formal at all. Rather than searching for yet another formalisation of UML, our main aim is to determine what would constitute a good definition of UML.

The first chapter sets the UML definition problem in a broad context, relating it to work in logic and the philosophy of science. More specific conclusions about the nature of model-driven development are reached at the beginning of Chapter 2. We then develop criteria for a definition of UML. Applying these criteria to the existing definition, we find that it is lacking in clarity. We then set out to test the precision of the definition. The test is to take an apparently inconsistent model, and determine whether it really is inconsistent according to the definition.

Many people have proposed that UML models are graphs, but few have justified this choice using the official definition of UML. We begin Chapter 3 by arguing from the official definition that UML models are graphs and that instantiation is a graph homomorphism into an interpretation functor. The official definition of UML defines the semantics against its abstract syntax, which is in turn defined by a UML model. Chapters 3 and 4 prepare for our test by resolving this apparent circularity. The result is a semantics for the metamodel fragment of the language.

In Chapter 5, we find, contrary to popular belief, that the official definition does provide sufficient semantics to classify the example model as inconsistent. Moreover, the sustained study of the semantics in Chapters 3 to 5 confirms our initial argument that the semantic domain is graphs. The Actions are the building blocks of UML's prescriptive dynamics. We see that they can be naturally defined as graph transformation rules. Sequence diagrams are the main example of descriptive dynamics, but we find that their official semantics are broken. The recorded history approach should be replaced, we suggest, by a graph-oriented dynamic logic.

Chapter 6 presents our early work on dynamic logic for UML sequence diagrams and further explores the proposed semantic repairs. In Chapter 7, guided by the criteria developed in Chapter 2, we critically survey the UML formalisation literature and conclude that an existing body of graph transformation based work known as dynamic metamodelling is very close to what is required.

The final chapter draws together our conclusions. It proposes a category theoretic construction to merge models of the syntax and semantic domain, yielding a type graph for the graph transformation system which defines the dynamic semantics of the language. Finally, it outlines the further work required to realise a satisfactory definition of UML.
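For readers unfamiliar with the graph-theoretic vocabulary used above, the standard definition of a graph homomorphism (general mathematical background, not text from the thesis) is:

% Standard definition: a graph homomorphism, the kind of structure-preserving
% map under which "instantiation" of a model can be read as a mapping between graphs.
\[
  G = (V_G, E_G, \mathrm{src}_G, \mathrm{tgt}_G), \qquad
  H = (V_H, E_H, \mathrm{src}_H, \mathrm{tgt}_H)
\]
A homomorphism $h = (h_V, h_E)$ with $h_V : V_G \to V_H$ and $h_E : E_G \to E_H$
must preserve incidence:
\[
  h_V(\mathrm{src}_G(e)) = \mathrm{src}_H(h_E(e))
  \quad\text{and}\quad
  h_V(\mathrm{tgt}_G(e)) = \mathrm{tgt}_H(h_E(e))
  \qquad\text{for all } e \in E_G .
\]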
|
37 |
Automatic test vector generation and coverage analysis in model-based software development
Andersson, Jonny, January 2005
Thorough testing of software is necessary to assure the quality of a product before it is released. The testing process requires substantial resources in software development. Model-based software development provides new possibilities to automate parts of the testing process. By automating tests, valuable time can be saved. This thesis focuses on different ways to utilize models for automatic generation of test vectors and on how test coverage analysis can be used to assure the quality of a test suite or to find "dead code" in a model. Different test-automation techniques have been investigated and applied to a model of an adaptive cruise control system (ACC) used at Scania. Source code has been generated automatically from the model; model coverage and code coverage have therefore been compared. The work on this thesis resulted in a new method to create test vectors for models based on a combinatorial test technique.
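As a rough illustration of a combinatorial (pairwise) test-vector generation technique in general (a sketch with invented parameters, not the specific method developed in the thesis):

# Illustrative sketch: greedily select input vectors from the full cartesian
# product until every pairwise combination of parameter values is covered.
# The parameters of the toy "cruise control" model are invented.

from itertools import combinations, product

parameters = {
    "driver_switch": ["on", "off"],
    "target_speed": [30, 70, 110],
    "lead_vehicle": ["none", "slower", "faster"],
}

names = list(parameters)
all_vectors = [dict(zip(names, values)) for values in product(*parameters.values())]

# Every pair of (parameter, value) assignments that must appear in some vector.
required_pairs = set()
for (p1, p2) in combinations(names, 2):
    for v1 in parameters[p1]:
        for v2 in parameters[p2]:
            required_pairs.add(((p1, v1), (p2, v2)))

suite = []
while required_pairs:
    # Pick the vector that covers the most still-uncovered pairs.
    best = max(all_vectors, key=lambda vec: sum(
        ((p1, vec[p1]), (p2, vec[p2])) in required_pairs
        for (p1, p2) in combinations(names, 2)))
    suite.append(best)
    for (p1, p2) in combinations(names, 2):
        required_pairs.discard(((p1, best[p1]), (p2, best[p2])))

print(f"{len(suite)} vectors cover all pairs (vs. {len(all_vectors)} exhaustive)")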
|
38 |
Model Synchronization for Software Evolution
Ivkovic, Igor, 26 August 2011
Software evolution refers to continuous change that a software system endures from inception to retirement. Each change must be efficiently and tractably propagated across models representing the system at different levels of abstraction. Model synchronization activities needed to support the systematic specification and analysis of evolution activities are still not adequately identified and formally defined.
In our research, we first introduce a formal notation for the representation of domain models and model instances to form the theoretical basis for the proposed model synchronization framework. Besides conforming to a generic MOF metamodel, we consider that each software model also relates to an application domain context (e.g., operating systems, web services). Therefore, we are addressing the problems of model synchronization by focusing on domain-specific contexts.

Secondly, we identify and formally define model dependencies that are needed to trace and propagate changes across system models at different levels of abstraction, such as from design to source code. The approach for extraction of these dependencies is based on Formal Concept Analysis (FCA) algorithms. We further model identified dependencies using Unified Modeling Language (UML) profiles and constraints, and utilize the extracted dependency relations in the context of coarse-grained model synchronization.

Thirdly, we introduce modeling semantics that allow for more complex profile-based dependencies using Triple Graph Grammar (TGG) rules with corresponding Object Constraint Language (OCL) constraints. The TGG semantics provide for fine-grained model synchronization, and enable compliance with the Query/View/Transformation (QVT) standards.

The introduced framework is assessed on a large, industrial case study of the IBM Commerce system. The dependency extraction framework is applied to repositories of business process models and related source code. The extracted dependencies were evaluated by IBM developers, and the corresponding precision and recall values calculated, with results that match the scope and goals of the research. The grammar-based model synchronization and dependency modelling using profiles has also been applied to the IBM Commerce system, and evaluated by the developers and architects involved in development of the system. The results of this experiment have been found to be valuable by stakeholders, and a patent codifying the results has been filed by the IBM organization and has been granted. Finally, the results of this experiment have been formalized as TGG rules, and used in the context of fine-grained model synchronization.
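As background on the Formal Concept Analysis machinery mentioned above (a toy sketch with an invented incidence table, not the thesis' extraction algorithm):

# Illustrative sketch: the two derivation operators of Formal Concept Analysis,
# used here to relate design-level artifacts (objects) to the code-level
# artifacts they touch (attributes). Closing a set of objects yields a formal
# concept (extent, intent), readable as a traceable dependency cluster.

incidence = {
    "ProcessOrder": {"OrderService.java", "OrderDAO.java"},
    "CancelOrder":  {"OrderService.java", "OrderDAO.java", "AuditLog.java"},
    "ListCatalog":  {"CatalogService.java"},
}


def all_attributes() -> set:
    return set().union(*incidence.values())


def common_attributes(objects: set) -> set:
    """A' : attributes shared by every object in A."""
    sets = [incidence[o] for o in objects]
    return set.intersection(*sets) if sets else all_attributes()


def objects_with(attributes: set) -> set:
    """B' : objects that have every attribute in B."""
    return {o for o, attrs in incidence.items() if attributes <= attrs}


extent = objects_with(common_attributes({"ProcessOrder"}))
intent = common_attributes(extent)
print("extent:", sorted(extent))
print("intent:", sorted(intent))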
|
39 |
Model and tool integration in high level design of embedded systems
Shi, Jianlin, January 2007
The development of advanced embedded systems requires a systematic approach as well as advanced tool support in dealing with their increasing complexity. This complexity is due to the increasing functionality that is implemented in embedded systems and the stringent (and conflicting) requirements placed upon such systems by various stakeholders. The corresponding system development involves several specialists employing different modeling languages and tools. Integrating their work and the results thereof then becomes a challenge. In order to facilitate system architecting and the design integration of different models, an approach is required that provides dedicated workspaces/views supported by structured information management and information exchange between domain models and tools.

This work is delimited to the context of embedded systems design and takes a model-based approach. The goal of the work is to study possible technical solutions for integrating different models and tools, and to develop knowledge, support methods and a prototype tool platform.

To this end, this thesis examines a number of approaches that focus on the integration of multiple models and tools. Selected approaches are compared and characterized, and the basic mechanisms for integration are identified. Several scenarios are identified and further investigated in case studies. Two case studies have been performed with model transformations as the focus. In the first one, the integration of Matlab/Simulink® and UML2 is discussed with respect to the motivations, technical possibilities, and challenges. A preliminary mapping strategy, connecting a subset of concepts and constructs of Matlab/Simulink® and UML2, is presented together with a prototype implementation in the Eclipse environment. The second case study aims to enable safety analysis based on system design models in a UML description. A safety analysis tool, HiP-HOPS (Hierarchically Performed Hazard Origin and Propagation Studies), is partially integrated with a UML tool in which an EAST-ADL2 based architecture model is developed. The experience and lessons learned from the experiments are reported in this thesis.

Multiple specific views are involved in the development of embedded systems. This thesis has studied the integration between system architecture design, function development and safety analysis through the use of UML tools, Matlab/Simulink, and HiP-HOPS. The results indicate that model transformations provide a feasible and promising solution for integrating multiple models and tools. The contributions are believed to be valid for a large class of advanced embedded systems. However, the developed transformations are so far not really scalable. A systematic approach for efficient development of model transformations is desired to standardize the design process and reuse developed transformations. To this end, future studies will be carried out to develop guidelines for model and tool integration and to provide support for structured information at both the meta level and the instance level.
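As a hedged illustration of what a concept mapping between a Simulink-style model and a UML2 composite-structure view can look like (invented, simplified concepts; not the mapping strategy defined in the thesis):

# Illustrative sketch: a toy transformation that rewrites a Simulink-style
# block diagram into a UML2-composite-structure-style description. Both the
# source model and the concept mapping are simplified, hypothetical examples.

simulink_model = {
    "blocks": [
        {"name": "SpeedSensor", "outports": ["speed"]},
        {"name": "Controller", "inports": ["speed"], "outports": ["throttle"]},
    ],
    "lines": [("SpeedSensor.speed", "Controller.speed")],
}


def to_uml_parts(model: dict) -> dict:
    """Map blocks to parts with ports, and signal lines to connectors."""
    parts = []
    for block in model["blocks"]:
        parts.append({
            "part": block["name"],
            "ports": [{"name": p, "direction": "out"} for p in block.get("outports", [])]
                   + [{"name": p, "direction": "in"} for p in block.get("inports", [])],
        })
    connectors = [{"from": src, "to": dst} for src, dst in model["lines"]]
    return {"parts": parts, "connectors": connectors}


uml_view = to_uml_parts(simulink_model)
print(uml_view["connectors"])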
|
40 |
Model Driven Development and Maintenance of Business Logic for Information Systems
Brückmann, Tobias, 20 January 2011
As information systems become more and more important in today's society, business firms, organizations, and individuals rely on these systems to manage their daily business and social activities. The dependency of possibly critical business processes on complex IT systems requires a strategy that supports IT departments in reducing the time needed to implement changed or new domain requirements of functional departments. In this context, software models help to manage a system's complexity and provide a tool for communication and documentation purposes. Moreover, software engineers tend to use automated software model processing, such as code generation, to improve development and maintenance processes. Particularly in the context of web-based information systems, a number of model-driven approaches have been developed. However, we believe that, compared to the user interface layer and the persistency layer, the business logic layer lacks consistent approaches that provide a suitable architecture for its model-driven development.
To ameliorate this situation, we developed an architectural blueprint consisting of meta models, tools, and method support for model driven development and maintenance of business logic from analysis through system maintenance. This blueprint, which we call the Amabulo infrastructure, consists of five layers and provides concepts and tools to set up and apply concrete infrastructures for model driven development projects. Modeling languages can be applied as needed. In this thesis we focus on business logic layers of J2EE applications. However, concrete code generation rules can be adapted easily for different target platforms.
After providing a high-level overview of our Amabulo infrastructure, we describe its layers in detail: The Visual Model Layer is responsible for all visual modeling tasks. For this purpose, we discuss requirements for visual software models for business logic, analyze several visual modeling languages concerning their usefulness, and provide an UML profile for business logic models.
The Abstract Model Layer provides an abstract view on the business logic model in the form of a domain specific model, which we call Amabulo model. An Amabulo model is reduced to pure logical information concerning business logic aspects. It focuses on information that is relevant for the code generation. For this purpose, an Amabulo model integrates model elements for process modeling, state modeling, and structural modeling. It is used as a common interface between visual modeling languages and code generators. Visual models of the Visual Model Layer are automatically transformed into an Amabulo model.
The Abstract System Layer provides a formal view onto the system in the form of a Coloured Petri Net (CPN). A Coloured Petri Net representation of the modeled business logic is a formal structure and independent of the actual business logic implementation. After an Amabulo model is automatically transformed into a CPN, it can be analyzed and simulated before any line of code is generated.
The Code Generation Layer is responsible for code generation. To support the design and implementation of project-specific code generators, we discuss several aspects of code integration issues and provide object-oriented design approaches to tackle the issues. Then, we provide a conceptual mapping of Amabulo model elements into architectural elements of a J2EE infrastructure. This mapping explicitly considers robustness features, which support a later manual integration of generated critical code artifacts and external systems. The Application Layer is the target layer of an Amabulo infrastructure and comprises generated code artifacts. These artifacts are instances of a specific target platform specification, and they can be modified for integration purposes with development tools.
Through the contributions in this thesis, we aim to provide an integrated set of solutions to support an efficient model driven development and maintenance process for the business logic of information systems. Therefore, we provide a consistent infrastructure blueprint that considers modeling tasks, model analysis tasks, and code generation tasks. As a result, we see potential for reducing the development and maintenance efforts for changed domain requirements and simultaneously guaranteeing robustness and maintainability even after several changes.
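As a hedged illustration of the kind of pre-code-generation analysis enabled by a Petri-net view of the business logic (a plain place/transition net with invented places and transitions, not the Amabulo toolchain's Coloured Petri Net semantics):

# Illustrative sketch: a business-logic process rendered as a plain
# place/transition net and simulated before any code is generated. A real
# Coloured Petri Net carries typed tokens and guards; this stripped-down
# version only checks that the end place is reachable.

transitions = {
    # transition: (input places, output places)
    "approve_order": ({"order_received"}, {"order_approved"}),
    "ship_order":    ({"order_approved"}, {"order_shipped"}),
}

marking = {"order_received"}  # initial marking: a token in the start place
goal = "order_shipped"

fired = True
while fired:
    fired = False
    for name, (inputs, outputs) in transitions.items():
        if inputs <= marking:  # transition is enabled
            marking = (marking - inputs) | outputs
            print(f"fired {name} -> marking {sorted(marking)}")
            fired = True

print("end state reachable:", goal in marking)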
|