111

Reducing the Distance Between Requirements Engineering and Verification

Abdeen, Waleed January 2022
Background: Requirements engineering and verification (REV) processes play essential roles in software product development. There are physical and non-physical distances between entities (actors, artifacts, and activities) in these processes. Current practices that reduce the distances, such as automated testing and alignment of document structure and tracing, only partially close the above-mentioned gap. Objective: The aim of this thesis is to investigate solutions with respect to their ability to reduce the distances between requirements engineering and verification. The two techniques explored in this thesis are automated testing (model-based testing, MBT) and alignment of document structure and tracing (traceability). Method: The research methods used in this thesis are systematic mapping, software requirements mining, case study, literature survey, validation study, and design science. Results: MBT and traceability are effective in reducing the distance between requirements and verification. However, both activities have shortcomings that need to be addressed when used for that purpose. Current MBT techniques in the context of software performance do not attain all the goals of MBT: 1) requirements validation, 2) checking the testability of requirements, and 3) the generation of an efficient test suite. These goals are essential to reduce the distance. We developed and assessed a performance requirements verification and test environment generation approach to tackle these shortcomings. Also, traceability between requirements and verification suffers from the low granularity of trace links and does not support the verification of all requirements. We propose the use of taxonomic trace links to trace and align the structure of requirements specifications and verification artifacts. The results from the validation study show that the solution is feasible in practice; however, this comes with challenges that need to be addressed. Conclusion: MBT and improved traceability reduce multiple distances between actors, artifacts, and activities in the requirements engineering and verification process. MBT is most effective in reducing the distances when the model used is built from the requirements. Traceability is essential in easing access to relevant information when needed and should not be seen as overhead. When creating trace links, we need to consider the differences in abstraction, structure, and time between the linked artifacts. / Chapters 3 and 4 are papers submitted to journals and have therefore been removed from the full-text file.
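As a rough illustration of the taxonomic trace links proposed above, the following sketch links requirements to verification artifacts through shared taxonomy terms. It is a minimal, hypothetical example; the taxonomy terms, requirement IDs, and test-case IDs are invented and do not come from the thesis.

```python
# Minimal sketch of taxonomic trace links: requirements and verification
# artifacts are tagged with terms from a shared taxonomy, and a link is
# created wherever their term sets overlap. All IDs and terms are invented.
REQUIREMENTS = {
    "REQ-12": {"performance", "throughput"},
    "REQ-27": {"security", "authentication"},
}

TEST_CASES = {
    "TC-03": {"throughput"},
    "TC-08": {"authentication", "session"},
    "TC-11": {"usability"},
}

def taxonomic_links(requirements, test_cases):
    """Return (requirement, test case, shared terms) triples."""
    links = []
    for req_id, req_terms in requirements.items():
        for tc_id, tc_terms in test_cases.items():
            shared = req_terms & tc_terms
            if shared:
                links.append((req_id, tc_id, sorted(shared)))
    return links

for req, tc, terms in taxonomic_links(REQUIREMENTS, TEST_CASES):
    print(f"{req} <-> {tc} via {terms}")
```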
112

Model-based design of haptic devices

Aftab, Ahmad January 2012
Efficient engineering design and development of high-precision and reliable surgical simulators, such as haptic devices for surgical training, benefits from model-based and simulation-driven design. The complexity of the design space, the multi-domain, multi-criteria requirements, and the multi-physics character of the behavior of such a product call for a model-based, systematic approach to creating and validating compact and computationally efficient simulation models to be used in the design process. The research presented in this thesis describes a model-based design approach for haptic devices for the simulation of surgical procedures involving hard tissues, such as bone or tooth milling. The proposed approach is applied to a new haptic device based on the TAU configuration. The main contributions of this thesis are: development and verification of kinematic and dynamic models of the TAU haptic device; a multi-objective optimization (MOO) approach for the optimum design of the TAU haptic device by optimizing kinematic performance indices such as workspace volume, kinematic isotropy, and the torque requirement of the actuators; and a methodology for creating an analytical and compact model of the quasi-static stiffness of haptic devices, which considers the stiffness of the actuation system, flexible links, and passive joints. / QC 20120611
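Kinematic isotropy, one of the performance indices named above, is commonly assessed through the singular values of the manipulator Jacobian. The sketch below shows that computation for a generic placeholder Jacobian; it is not the TAU device model, and the numbers are illustrative only.

```python
# Sketch: evaluating a kinematic-isotropy index from a manipulator
# Jacobian, as one of the performance indices used in multi-objective
# design optimization. The Jacobian here is a generic placeholder,
# not the TAU haptic device model.
import numpy as np

def isotropy_index(jacobian: np.ndarray) -> float:
    """Ratio of smallest to largest singular value (1.0 = perfectly isotropic)."""
    sigma = np.linalg.svd(jacobian, compute_uv=False)
    return float(sigma.min() / sigma.max())

# Example: Jacobian at one pose of a hypothetical 3-DOF mechanism.
J = np.array([[0.9, 0.1, 0.0],
              [0.2, 1.1, 0.1],
              [0.0, 0.3, 0.8]])
print(f"isotropy at this pose: {isotropy_index(J):.3f}")
```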
113

Implementation relations and testing for cyclic systems: adding probabilities

Nunez, M., Hierons, R.M., Lefticaru, Raluca 17 April 2023
This paper concerns the systematic testing of robotic control software based on state-based models. We focus on cyclic systems that typically receive inputs (values from sensors), perform computations, produce outputs (sent to actuators) and possibly change state. We provide a testing theory for such cyclic systems where time can be represented and probabilities are used to quantify non-deterministic choices, making it possible to model probabilistic algorithms. In addition, refusals, the inability of a system to perform a set of actions, are taken into account. We consider several possible testing scenarios. For example, a tester might only be able to passively observe a sequence of events and so cannot check probabilities, while in another scenario a tester might be able to repeatedly apply a test case and so estimate the probabilities of sequences of events. These different testing scenarios lead to a range of implementation relations (notions of correctness). As a consequence, this paper provides formal definitions of implementation relations that can form the basis of sound automated testing in a range of testing scenarios. We also validate the implementation relations by showing how observers can be used to provide an alternative but equivalent characterisation. / This work has been supported by EPSRC, United Kingdom grant EP/R025134/2 RoboTest: Systematic Model-Based Testing and Simulation of Mobile Autonomous Robots, the Spanish MINECO-FEDER grant PID2021-122215NB-C31 (AwESOMe) and the Region of Madrid grant S2018/TCS-4314 (FORTE-CM) co-funded by EIE Funds of the European Union.
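The "repeated application" scenario described above can be pictured with a small sketch: the same test case is applied many times to a probabilistic system, and the relative frequencies of the observed event sequences are used as probability estimates. The toy system and its probabilities are invented for illustration and are not the paper's models.

```python
# Sketch: estimating the probability of observed output sequences by
# repeatedly applying the same test case to a probabilistic system,
# as in the "repeated application" testing scenario described above.
# The system under test here is a toy stand-in, not the paper's models.
import random
from collections import Counter

def toy_system_under_test() -> tuple:
    """Toy cyclic system: reads a sensor value, probabilistically chooses an output."""
    return ("read", "actuate_a") if random.random() < 0.7 else ("read", "actuate_b")

def estimate_sequence_probabilities(runs: int = 10_000) -> dict:
    counts = Counter(toy_system_under_test() for _ in range(runs))
    return {seq: n / runs for seq, n in counts.items()}

if __name__ == "__main__":
    for seq, p in estimate_sequence_probabilities().items():
        print(seq, round(p, 3))
```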
114

Clustering Response-Stressor Relationships in Ecological Studies

Gao, Feng 31 July 2008
This research is motivated by an issue frequently encountered in water quality monitoring and ecological assessment. One concern for researchers and watershed resource managers is how the biological community in a watershed is affected by human activities. The conventional single-model approach based on regression and logistic regression usually fails to adequately model the relationship between biological responses and environmental stressors, since the study samples are collected over a large spatial region and the response-stressor relationships are usually weak in this situation. In this dissertation, we propose two alternative modeling approaches that partition the whole region of study into disjoint subregions and model the response-stressor relationships within subregions simultaneously. In our examples, these modeling approaches found stronger relationships within subregions and should help resource managers improve impairment assessment and decision making. The first approach is an adjusted Bayesian classification and regression tree (ABCART). It is based on the Bayesian classification and regression tree approach (BCART) and is modified to accommodate spatial partitions in ecological studies. The second approach is a Voronoi-diagram-based partition approach. This approach uses the Voronoi diagram technique to randomly partition the whole region into subregions with a predetermined minimum sample size. The optimal partition/cluster is selected by Monte Carlo simulation. We propose several model selection criteria for optimal partitioning and modeling according to the nature of the study, and extend the approach to multivariate analysis to find the underlying structure of response-stressor relationships. We also propose a multivariate hotspot detection approach (MHDM) to find the region where the response-stressor relationship is the strongest according to an R-square-like criterion. Several sets of ecological data are studied in this dissertation to illustrate the implementation of the above partition modeling approaches. The findings from these studies are consistent with those of other studies. / Ph. D.
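The Voronoi-based partition idea can be sketched roughly as follows: choose random seed sites, assign samples to the nearest seed, fit a separate response-stressor regression in each subregion, and keep the partition with the best overall R-square-like fit. The data, minimum sample size, and criterion below are simplified stand-ins, not the dissertation's ecological data sets or exact selection criteria.

```python
# Sketch of the Voronoi-based partition idea: pick random seed sites,
# assign monitoring sites to the nearest seed, fit a separate
# response-stressor regression in each subregion, and keep the partition
# with the best overall fit. Data and criterion are simplified placeholders.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))        # site locations
stressor = rng.normal(size=200)                    # environmental stressor
response = 0.8 * stressor + rng.normal(0, 1, 200)  # biological response

def partition_fit(n_seeds=3, min_size=20):
    seeds = coords[rng.choice(len(coords), n_seeds, replace=False)]
    labels = np.argmin(((coords[:, None, :] - seeds[None]) ** 2).sum(-1), axis=1)
    if min(np.bincount(labels, minlength=n_seeds)) < min_size:
        return None  # reject partitions with undersized subregions
    sse, sst = 0.0, ((response - response.mean()) ** 2).sum()
    for k in range(n_seeds):
        y, x = response[labels == k], stressor[labels == k]
        slope, intercept = np.polyfit(x, y, 1)     # per-subregion regression
        sse += ((y - (slope * x + intercept)) ** 2).sum()
    return 1 - sse / sst  # R-square-like criterion over the whole region

best = max(filter(None, (partition_fit() for _ in range(500))))
print(f"best overall fit across random partitions: {best:.3f}")
```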
115

Exploring the Adoption Process of MBSE: A Closer Look at Contributing Organizational Structure Factors

Henderson, Kaitlin Anne 07 October 2022
Over the past few decades, not only have systems continued to increase in complexity, but they are expected to be delivered in the same timeframe and cost range. Technology has advanced us into what some refer to as the 4th Industrial Revolution. Digital is becoming the expectation in all areas of people's lives. Model-Based Systems Engineering (MBSE) represents the transition of systems into this new digital age, promising many improvements over the previous Document-Based Systems Engineering. This transition, however, is not simple. MBSE is a major paradigm shift for systems engineers, especially for those who have been in this field for many years. In order to work as intended, MBSE requires the participation of many different disciplines and functionalities in an organization. Gaining this level of organizational collaboration, however, is no easy task. Organizational structure and culture have intuitively been believed to be critical barriers to the successful adoption of MBSE, but little work has been done to discover what the impacts of these organizational factors are. The purpose of this research is to further explore the MBSE adoption process in the context of the organization. Three research objectives were designed to address the research question: how does organizational structure influence the adoption and implementation of MBSE? Research objective one was to relate organizational structure characteristics to MBSE adoption and implementation measures. Research objective two was to discover how organizational factors contribute to decisions made and other aspects of the MBSE adoption process. Research objective three was to connect different organizational structure and adoption variables together to derive critical variables in the adoption process. Research objective one was carried out using a survey as the instrument. The objective of the survey was to examine the effects of organizational structure on MBSE adoption and implementation. Organizational structure was represented by seven variables: Size, Formalization, Centralization, Specialization, Vertical Differentiation, Flexibility, and Interconnectedness. These are different characteristics of organizational structure that can be measured on a scale. MBSE adoption and implementation were represented by one adoption and three implementation variables: Adoption Process, Maturity of MBSE, Use of MBSE, and Influence on organizational outcomes. A total of 51 survey responses were received that met the inclusion criteria. Factor analysis was done for variables with multi-item measures. The factors were then analyzed using pairwise correlations to determine which relationships were significant. Formalization, Flexibility, and Interconnectedness were found to have positive correlations with adoption and implementation variables. Size and Vertical Differentiation had a negative correlation with Use of MBSE (implementation). Centralization was found to have negative correlations with adoption and implementation. Specialization did not have any significant correlations. Research objective two utilized semi-structured interviews as the main instrument. Survey participants had the opportunity to provide more detailed explanations of their organizations' experiences in the form of follow-up interviews. Eighteen survey participants agreed to this follow-up interview focused on MBSE adoption. Two of the participants shared failed adoption experiences, while the rest were at various stages of the adoption process.
One of the most prominent themes to emerge from the interviews was the idea of integration. Integration needs to occur at both the organizational level and the technical level. The technical level refers to the fact that tools, models, and/or data repositories need to be linked together in some way. Integration also has to occur at the organizational level, because many different functional areas need to come together for MBSE. The way that organizations can address the issue of integration is through coordination mechanisms. The ultimate goal is to achieve implicit coordination through the use of connected models, but getting to that point will require coordination between different subunits. Interview responses were evaluated for coordination mechanisms, or situations that showed a distinct lack of a coordination mechanism. The lack of coordination mechanisms largely took the form of a lack of standardization, a lack of communication between subunits, and issues of authority. The final research objective of this work was carried out through a causal analysis using the data obtained from the survey and interviews. The purpose of this analysis was to visualize and better understand the adoption process. According to the calculated measures of centrality, the important nodes in this model are Improved organizational outcomes, Coordination between subunits, Projects use tools/methods, and People willing to use tools. Improved organizational outcomes is part of a key loop in the causal model: improved organizational outcomes contribute to leaders' and employees' willingness to support and use MBSE methods and tools, which contributes to actual use of tools and methods, which in turn creates more improved organizational outcomes, completing the loop. The survey results showed that Formalization, Decentralization, Flexibility, and Interconnectedness all have positive correlations with the Influence on organizational outcomes. These organizational structure components are therefore external factors that can be used to positively influence the adoption loop. Overall, this work provided several contributions to the field regarding the MBSE adoption process in an organizational setting. Organizational structure was shown to have significant correlations with adoption and implementation of MBSE. Coordination mechanisms were identified as a method to achieve integration across different functional areas of the organization. Improved organizational outcomes was shown to be a critical variable in the adoption process and an avenue through which organizational structure factors can have a positive effect on adoption. / Doctor of Philosophy / Over the past few decades, not only have systems continued to increase in complexity, but they are expected to be delivered in the same timeframe and cost range. Technology has advanced us into what some refer to as the 4th Industrial Revolution. Digital is becoming the expectation in all areas of people's lives. Model-Based Systems Engineering (MBSE) represents the transition of systems into this new digital age, promising many improvements over the previous Document-Based Systems Engineering. This transition, however, is not simple. MBSE is a major mindset change for systems engineers, especially for those who have been in this field for many years. In order to work as intended, MBSE requires the participation of many different disciplines and functionalities in an organization. Gaining this level of organizational collaboration, however, is no easy task.
Organizational structure and culture have intuitively been believed to be critical barriers to the successful adoption of MBSE, but little work has been done to discover what the impacts of these organizational factors are. This research looks into how organizational structure may have an impact on MBSE adoption and implementation. This research was carried out with the use of three different methods: an online survey, semi-structured interviews, and a causal analysis. The data obtained from the survey and interviews was used to construct a causal model depicting the MBSE adoption process. Overall, this work provided several contributions to the field regarding the MBSE adoption process in an organizational setting. Organizational structural variables were shown to have significant correlations with adoption and implementation of MBSE. Formalization, Flexibility, and Interconnectedness were found to have positive correlations with adoption and implementation variables, while Centralization had negative correlations with adoption and implementation. Coordination mechanisms were identified as a method to achieve integration across different functional areas of the organization. Interview responses were evaluated for coordination mechanisms, or situations that showed a distinct lack of a coordination mechanism. The lack of coordination mechanisms largely consists of a lack of standardization, lack of communication between subunits, and issues of authority. The causal analysis showed that Improved organizational outcomes, Coordination between subunits, Projects use tools/methods, and People willing to use tools were the critical variables in the MBSE adoption process.
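The centrality analysis mentioned above can be illustrated with a toy computation: rank the nodes of a small causal graph by degree. The edges below are a hypothetical subset inspired by the variables named in the abstract, not the study's actual causal model.

```python
# Sketch: ranking nodes of a small causal model by degree centrality,
# in the spirit of the centrality analysis described above. The edges
# below are a simplified, hypothetical subset, not the study's full model.
CAUSAL_EDGES = [
    ("Coordination between subunits", "Projects use tools/methods"),
    ("People willing to use tools", "Projects use tools/methods"),
    ("Projects use tools/methods", "Improved organizational outcomes"),
    ("Improved organizational outcomes", "People willing to use tools"),  # feedback loop
]

def degree_centrality(edges):
    counts = {}
    for src, dst in edges:
        counts[src] = counts.get(src, 0) + 1
        counts[dst] = counts.get(dst, 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

for node, degree in degree_centrality(CAUSAL_EDGES):
    print(f"{degree}  {node}")
```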
116

Incremental Design Migration Support in Industrial Control Systems Development

Balasubramanian, Harish 04 December 2014
Industrial control systems (ICS) play an extremely important role in the world around us. They have helped in reducing human effort and contributed to the automation of processes in oil refining, power generation, food and beverage, and production lines. With advancements in technology, embedded platforms have emerged as ideal platforms for the implementation of such ICSes. Traditional approaches in ICS design involve switching from a model or modeling environment directly to a real-world implementation. Errors have the potential to go unnoticed in the modeling environment and have a tendency to affect real control systems. Current models for error identification are complex and affect the design process of ICS appreciably. This thesis adds an additional layer to ICS design: an Interface Abstraction Process (IAP). The IAP helps in incremental migration from a modeling environment to a real physical environment by supporting intermediate design versions. Implementation of the IAP is simple and independent of control system complexity. Early error identification is possible since intermediate versions are supported. Existing control system designs can be modified minimally to facilitate the addition of an extra layer. The overhead of adding the IAP is measured and analysed. With early validation, the actual behavior of the ICS in the real physical setting matches the expected behavior in the modeling environment. This approach adds a significant amount of latency to existing ICSes without affecting the design process significantly. Since the IAP helps in early design validation, it can be removed before deployment in the real world. / Master of Science
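The general idea of an interface abstraction layer for incremental migration can be sketched as follows: the control logic talks to a single interface, and each signal can be bound to either a simulated or a real device, so intermediate design versions can mix the two. This is a generic adapter-style sketch under that assumption, not the thesis's IAP implementation.

```python
# Sketch of the general idea behind an interface abstraction layer for
# incremental migration: control logic talks to one interface, and each
# signal can be bound to either a simulated or a real device, so
# intermediate design versions mix the two. Generic sketch, not the IAP.
from abc import ABC, abstractmethod

class ActuatorInterface(ABC):
    @abstractmethod
    def set_output(self, value: float) -> None: ...

class SimulatedActuator(ActuatorInterface):
    def set_output(self, value: float) -> None:
        print(f"[model] actuator commanded to {value:.2f}")

class RealActuator(ActuatorInterface):
    def __init__(self, channel: int) -> None:
        self.channel = channel  # placeholder for a hardware driver handle
    def set_output(self, value: float) -> None:
        print(f"[hardware] writing {value:.2f} to channel {self.channel}")

def control_step(actuator: ActuatorInterface, setpoint: float, measured: float) -> None:
    """Toy proportional controller; unaware of whether the actuator is real or simulated."""
    actuator.set_output(0.5 * (setpoint - measured))

# Intermediate design version: one loop still in the model, one on hardware.
control_step(SimulatedActuator(), setpoint=10.0, measured=7.0)
control_step(RealActuator(channel=3), setpoint=10.0, measured=7.0)
```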
117

Model-Based Design of a Fork Control System in Very Narrow Aisle Forklifts

Bodin, Erik, Davidsson, Henric January 2017
This thesis describes the model-based design of a fork control system in a turret-head-operated Very Narrow Aisle forklift, with the aim of evaluating and pushing the limits of the current hardware architecture. The turret head movement consists of two separate motions, traversing and rotation, which are both hydraulically actuated. The plant is thoroughly modeled in the MathWorks software Simulink/Simscape to assist in the design of the control system. The control system is designed in Simulink/Stateflow and code-generated so that it can be evaluated in the actual forklift. Optimal control theory is used to generate a minimum-jerk trajectory for auto-rotation, that is, simultaneous traversing and rotation with the load kept centred. The new control system is able to control the system within the positioning requirements of +/- 10 mm and +/- 9 mrad for traversing and rotation, respectively. It also shows good overall performance in terms of robustness, since it has been tested and validated with different loads and on different versions of the forklift. However, the study also shows that the non-linearities of the system, especially in the hydraulic proportional valves, cause problems in a closed-loop control system. The work serves as a proof of concept for model-based development at the company, since the development time of the new control system was significantly lower than that of the original control system.
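A minimum-jerk trajectory of the kind used for the auto-rotation motion is classically the fifth-order polynomial with zero velocity and acceleration at both endpoints, x(t) = x0 + (xf - x0)(10*tau^3 - 15*tau^4 + 6*tau^5) with tau = t/T. The sketch below evaluates that profile; the amplitude and duration are illustrative, not the forklift's actual motion limits.

```python
# Sketch of a minimum-jerk trajectory of the kind used for the
# auto-rotation motion: the classic fifth-order polynomial with zero
# velocity and acceleration at both ends. Parameters are illustrative,
# not the forklift's actual motion limits.
import numpy as np

def minimum_jerk(x0: float, xf: float, duration: float, t: np.ndarray) -> np.ndarray:
    """Minimum-jerk position profile from x0 to xf over `duration` seconds."""
    tau = np.clip(t / duration, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

t = np.linspace(0.0, 2.0, 5)
print(minimum_jerk(x0=0.0, xf=1.57, duration=2.0, t=t))  # e.g. 90-degree rotation in rad
```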
118

Fastest: improving and supporting the Test Template Framework

Cristia, Maximiliano 13 April 2012
The most resource-consuming phase of software production is the verification of its qualities, including functional correctness. However, due to its costs, the software industry seldom performs a thorough verification of its products. One of the most promising strategies to reduce the costs of software verification is making it as automatic as possible. Currently, this industry relies essentially on testing alone to verify functional correctness. Therefore, we sought to automate the functional testing of software systems by providing tool support for the test case generation step. Model-based testing (MBT) is a testing theory that has achieved impressive successes in automating the testing process. Any MBT method analyses a formal model or specification of the system under test to generate test cases that are later executed on the system. Almost all methods work with some form of finite state machine (FSM), which limits their application since FSMs cannot describe general systems. However, a great advantage of these methods is that they reach a high degree of automation. In this thesis we show, in contrast, the degree of automation we have achieved by applying an MBT method, known as the Test Template Framework (TTF), to Z specifications. Since Z is based on first-order logic and set theory, it is far more expressive than FSMs, thus making our results applicable to a wider range of programs. During this thesis we have improved the TTF and developed a tool, called Fastest, that implements all of our ideas.
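The Test Template Framework idea can be pictured with a toy example: the input space of a set-based operation is divided into test classes by crossing conditions derived from its predicates, unsatisfiable classes are pruned, and one abstract test case is picked from each remaining class. The operation, conditions, and domain below are invented and are not output of Fastest.

```python
# Sketch of the Test Template Framework idea behind Fastest: the input
# space of an operation on sets is divided into test classes by crossing
# standard partitions of its predicates, unsatisfiable classes are pruned,
# and one abstract test case is chosen from each remaining class.
from itertools import product

# Toy operation Add(member?, club) over sets, as in a small Z-style spec.
MEMBER_CONDITIONS = {
    "member in club": lambda m, c: m in c,
    "member not in club": lambda m, c: m not in c,
}
CLUB_CONDITIONS = {
    "club empty": lambda m, c: len(c) == 0,
    "club non-empty": lambda m, c: len(c) > 0,
}

def find_witness(predicate):
    """Search a tiny domain for an input satisfying the test class predicate."""
    for member, club in product(["alice", "bob"], [frozenset(), frozenset({"bob"})]):
        if predicate(member, club):
            return member, club
    return None  # class is unsatisfiable over this domain -> pruned

for (m_label, m_cond), (c_label, c_cond) in product(
        MEMBER_CONDITIONS.items(), CLUB_CONDITIONS.items()):
    witness = find_witness(lambda m, c: m_cond(m, c) and c_cond(m, c))
    label = f"{m_label} AND {c_label}"
    if witness is None:
        print(f"pruned unsatisfiable class: {label}")
    else:
        member, club = witness
        print(f"test case for [{label}]: Add({member!r}, {set(club)})")
```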
119

Model-based unit testing for software-intensive, technical systems using Real-Time Object-Oriented Modeling

Robinson-Mallett, Christopher January 2005
In consequence of the increasing complexity of technical software systems, the demand for highly productive methods and tools is growing even in the field of safety-critical systems. In particular, object-oriented and model-based approaches to software development provide excellent abilities for developing large and highly complex systems. Therefore, it can be expected that in the near future these methods will find application even in the safety-critical area. The Unified Modeling Language Real-Time (UML-RT) is a software development method for technical systems propagated by the Object Management Group (OMG). For practical application in the field of technical and safety-critical systems, this method not only has to provide certain technical qualities, e.g. applicability of temporal analyses, but also needs to be integrated into the existing quality assurance process. An important aspect of the integration of UML-RT into a quality-oriented process model, e.g. the V-Model, is the availability of sophisticated concepts and methods for systematic unit testing.

Unit testing is the first quality assurance phase after implementation; it serves to reveal faults and to demonstrate the quality of each independently testable software component of a system. During this phase the systematic execution of test cases is the most important quality assurance task. Although many sophisticated, commercial methods and tools for model-based software development are available today, few convincing solutions exist for systematic model-based unit testing.

The consistent use of executable models and code generation are essential concepts of model-based software development. They enable the constructive reduction of faults through the automation of otherwise error-prone manual tasks. Within a model-based quality assurance approach, these concepts should consequently be carried into the later quality assurance phases. A major requirement of a model-based unit testing method is therefore a high degree of automation, in the best case fully automatic test case generation.

For the generation of test cases from state machines, model checking has proven to be an efficient method that can be adapted to a wide variety of testing problems. The model checking approach was originally developed for the design of communication protocols and has already been applied successfully to various problems in the modelling of technical software. The application of model checking requires a formal, state-based specification of the system; an executable, state-based model usually fulfils this requirement. For these reasons, choosing a model checking approach for test case generation within a model-based unit testing method is a logical consequence.

Although the current specification of UML-RT makes no definite statements about the formalism to be used for behavioural descriptions, it is likely that UML-RT will be compatible with Real-Time Object-Oriented Modeling (ROOM). All methods and results presented in this work are therefore transferable to UML-RT and of very current relevance.

For these reasons, this work aims to improve analytical quality assurance in model-based software development by means of a model-based method for unit testing. To this end, a novel testing method is presented that is based on state-machine behavioural models and CTL model checking. Test case generation can be performed largely automatically in order to exclude faults caused by human influence. The developed unit testing method can be integrated into the technical concepts of Model Driven Architecture and ROOM, respectively UML-RT, as well as into the organizational concepts of a quality-oriented process model such as the V-Model.
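The counterexample-as-test-case idea behind model-checking-based test generation can be sketched in miniature: to cover a target state, check the claim that it is never reached; the witness path refuting that claim is the test sequence. The state machine below is a toy, not a ROOM/UML-RT model, and a plain breadth-first search stands in for a CTL model checker.

```python
# Sketch of the counterexample-as-test-case idea used in model-checking-based
# test generation: to cover a target state, check the (negated) claim
# "the target is never reached"; the witness path refuting it is the test
# sequence. Toy state machine; BFS stands in for a CTL model checker.
from collections import deque

# Toy state machine: state -> list of (input_event, next_state)
TRANSITIONS = {
    "Idle":    [("start", "Running")],
    "Running": [("pause", "Paused"), ("stop", "Idle")],
    "Paused":  [("resume", "Running"), ("stop", "Idle")],
}

def witness_path(initial: str, target_state: str):
    """Breadth-first search for a shortest input sequence reaching target_state."""
    queue, seen = deque([(initial, [])]), {initial}
    while queue:
        state, inputs = queue.popleft()
        if state == target_state:
            return inputs  # refutes "target_state is unreachable" -> test case
        for event, nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, inputs + [event]))
    return None

print(witness_path("Idle", "Paused"))  # e.g. ['start', 'pause'] as an abstract test case
```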
120

Roles: dynamic mediators between system models and simulation models

Schneider, Jean-Philippe 25 November 2015
Current systems tend to become integrated with each other, but this integration is not necessarily planned for when a system is originally designed. This trend gives rise to systems of systems. A system of systems is a system made of constituent systems that are managed by independent teams, functionally independent, collaborating, evolving, and geographically distributed. Communication among the different teams eases the design of a system of systems, and this communication can be supported by the use of models and simulation. However, system-of-systems models and simulation models do not rely on the same modeling languages. To ensure coherence between the two types of models, simulation models should be derived from the system models, while taking into account the constraints arising from the properties of systems of systems: system models written in different modeling languages must be handled, simulations of different kinds must be generated, and the evolution of both modeling languages and simulation tools must be managed. To tackle these issues, we defined the Role4AII environment for manipulating system models written in heterogeneous modeling languages. Role4AII is based on the concept of roles. Roles make it possible to create simulations by accessing the information stored in model elements regardless of differences in their types. Role4AII can take as input models serialized by different tools thanks to the use of parser combinators, which bring modularity and extensibility to the import features. Role4AII has been used on a system-of-systems example: the MeDON seafloor observatory.
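Parser combinators, credited above with making the import features modular and extensible, can be sketched briefly: each small parser is an ordinary function, and larger parsers are built by combining them, so supporting a new serialization detail means composing another combinator rather than rewriting a monolithic parser. The toy grammar below is illustrative and unrelated to the actual Role4AII import format.

```python
# Sketch of parser combinators: each small parser is a function from input
# text to (value, remaining text) or None, and bigger parsers are composed
# from smaller ones. The toy "model element" grammar is purely illustrative.
def literal(text):
    def parse(s):
        return (text, s[len(text):]) if s.startswith(text) else None
    return parse

def identifier(s):
    i = 0
    while i < len(s) and (s[i].isalnum() or s[i] == "_"):
        i += 1
    return (s[:i], s[i:]) if i > 0 else None

def sequence(*parsers):
    def parse(s):
        values, rest = [], s
        for p in parsers:
            result = p(rest)
            if result is None:
                return None  # the whole sequence fails if any part fails
            value, rest = result
            values.append(value)
        return values, rest
    return parse

# Adding support for a new construct means composing another parser,
# not rewriting the importer.
element = sequence(literal("element "), identifier, literal(" : "), identifier)
print(element("element pump : Component"))
# -> (['element ', 'pump', ' : ', 'Component'], '')
```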
