1 |
A representação social da teoria de Piaget no Brasil: implicações para as pesquisas acadêmicas / The social representation of Piaget's theory in Brazil: implications for academic research
Marçal, Vicente Eduardo Ribeiro 07 June 2019 (has links)
O objetivo desta Tese foi o de demonstrar que a Teoria do biólogo e epistemólogo suíço Jean Piaget foi vítima de sua própria Representação Social (na acepção de Serge Moscovici, como explicitaremos já em nossa Introdução). Vítima no sentido de que suas descobertas na área da Biologia e sua criação no campo da construção de modelos formais na mesma ciência, feito inédito até então, caíram no ostracismo na História da ontogênese epigenética do ser humano, tanto nos aspectos biológicos, quanto na área da aquisição do conhecimento científico e lógico-matemático. Esses fatos nos mostram Zelia Ramozzi-Chiarottino (cuja análise e interpretação da Teoria de Jean Piaget constituir-se-á no referencial teórico desta Tese) ao lado de seus colaboradores, no artigo "Jean Piaget's unrecognized epigenetic ontogenesis of the logical mathematical thought" (2017). Neste trabalho, restringimo-nos ao Brasil e à produção de Dissertações e Teses de Doutorado sobre a Teoria de Piaget e sua representação social, aqui realizadas nos últimos dez anos. Fizemos um levantamento das Dissertações e Teses a partir do Catálogo de Teses e Dissertações da CAPES. O método estatístico que utilizamos foi o do para analisar os dados coletados. Esta análise confirmou nossa conjectura. / Our aim in this Ph.D. thesis was to demonstrate that the theory of the Swiss biologist and epistemologist Jean Piaget fell victim to its own Social Representation (in Serge Moscovici's sense, as we explain in our Introduction). Victim in the sense that his findings in the field of Biology, and his creation of formal models in that same science, an unprecedented feat at the time, fell into obscurity in the history of the epigenetic ontogenesis of the human being, both in its biological aspects and in the field of the acquisition of scientific and logical-mathematical knowledge.
These facts are shown to us by Zelia Ramozzi-Chiarottino (whose analysis and interpretation of Piaget's theory constitutes the theoretical framework of this thesis), together with her co-workers, in the article "Jean Piaget's unrecognized epigenetic ontogenesis of the logical mathematical thought" (2017). In the present work, we limited ourselves to Brazil and to the Dissertations and Doctoral Theses on Piaget's theory and its Social Representation produced here over the last ten years. We surveyed the Dissertations and Theses in the CAPES Catalogue of Theses and Dissertations. The statistical method used was the to analyze the collected data. This analysis confirmed our conjecture.
|
2 |
An approach to measuring software systems using new combined metrics of complex test / Une approche pour mesurer les systèmes logiciels utilisant de nouvelles métriques de test complexe combinées
Dahab, Sarah 13 September 2019 (has links)
La plupart des métriques de qualité logicielle mesurables sont actuellement basées sur des mesures bas niveau, telles que la complexité cyclomatique, le nombre de lignes de commentaires ou le nombre de blocs dupliqués. De même, la qualité de l'ingénierie logicielle est davantage liée à des facteurs techniques ou de gestion, et devrait fournir des indicateurs utiles pour les exigences de qualité. Actuellement, l'évaluation de ces exigences de qualité n'est pas automatisée, elle n'est pas validée empiriquement dans des contextes réels et l'évaluation est définie sans tenir compte des principes de la théorie de la mesure. Par conséquent, il est difficile de comprendre où et comment améliorer le logiciel suivant le résultat obtenu. Dans ce domaine, les principaux défis consistent à définir des métriques adéquates et utiles pour les exigences de qualité, les documents de conception de logiciels et autres artefacts logiciels, y compris les activités de test. Les principales problématiques scientifiques abordées dans cette thèse sont les suivantes : définir des mesures et des outils de support pour mesurer les activités d'ingénierie logicielle modernes en termes d'efficacité et de qualité. La seconde consiste à analyser les résultats de mesure pour identifier quoi et comment s'améliorer automatiquement. La dernière consiste en l'automatisation du processus de mesure afin de réduire le temps de développement. Une telle solution hautement automatisée et facile à déployer constituera une solution révolutionnaire, car les outils actuels ne le prennent pas en charge, sauf pour une portée très limitée. / Most of the measurable software quality metrics are currently based on low-level metrics, such as cyclomatic complexity, number of comment lines or number of duplicated blocks. Likewise, software engineering quality is more closely related to technical or management factors, and should provide useful metrics for quality requirements.
Currently, the assessment of these quality requirements is not automated, not empirically validated in real contexts, and is defined without considering the principles of measurement theory. Therefore, it is difficult to understand where and how to improve the software from the obtained results. In this domain, the main challenges are to define adequate and useful metrics for quality requirements, software design documents and other software artifacts, including testing activities. The main scientific problems tackled in this thesis are the following: defining metrics, and their supporting tools, for measuring modern software engineering activities with respect to efficiency and quality; analyzing measurement results to identify what to improve, and how, automatically; and automating the measurement process in order to reduce development time. Such a highly automated and easy-to-deploy solution would be a breakthrough, as current tools do not support this except within a very limited scope.
|
3 |
Formal models for safety analysis of a Data Center system / Modèles formels pour l’analyse de la sûreté de fonctionnement d’un Data center
Bennaceur, Mokhtar Walid 21 November 2019 (has links)
Un Data Center (DC) est un bâtiment dont le but est d'héberger des équipements informatiques pour fournir différents services Internet. Pour assurer un fonctionnement constant de ces équipements, le système électrique fournit de l'énergie, et pour les maintenir à une température constante, un système de refroidissement est nécessaire. Chacun de ces besoins doit être assuré en permanence, car la panne de l'un d'eux entraîne une indisponibilité de l'ensemble du système du DC, ce qui peut être fatal pour une entreprise. À notre connaissance, il n'existe pas de travaux d'étude sur l'analyse de sûreté de fonctionnement et de performance prenant en compte l'ensemble du système du DC avec les différentes interactions entre ses sous-systèmes. Les études d'analyse existantes sont partielles et se concentrent sur un seul sous-système, parfois deux. L'objectif principal de cette thèse est de contribuer à l'analyse de sûreté de fonctionnement d'un Data Center. Pour cela, nous étudions, dans un premier temps, chaque sous-système (électrique, thermique et réseau) séparément, afin d'en définir les caractéristiques. Chaque sous-système du DC est un système de production qui transforme les alimentations d'entrée (énergie pour le système électrique, flux d'air pour le système thermique, et paquets pour le réseau) en sorties, qui peuvent être des services Internet. Actuellement, les méthodes d'analyse de sûreté de fonctionnement existantes pour ce type de systèmes sont inadéquates, car l'analyse de sûreté doit tenir compte non seulement de l'état interne de chaque composant du système, mais également des différents flux de production qui circulent entre ces composants.
Dans cette thèse, nous considérons une nouvelle technique de modélisation appelée Arbres de Production (AP), qui permet de modéliser la relation entre les composants d'un système avec une attention particulière aux flux circulant entre ces composants. La technique de modélisation en AP permet de traiter un seul type de flux à la fois. Son application au sous-système électrique est donc appropriée, car il n'y a qu'un seul type de flux (le courant électrique). Toutefois, lorsqu'il existe des dépendances entre les sous-systèmes, comme c'est le cas pour les sous-systèmes thermique et réseau, différents types de flux doivent être pris en compte, ce qui rend l'application de la technique des AP inadéquate. Par conséquent, nous étendons cette technique pour traiter les dépendances entre les différents types de flux qui circulent dans le DC. Il devient ainsi facile d'évaluer les différents indicateurs de sûreté de fonctionnement du système global du DC, en tenant compte des interactions entre ses sous-systèmes. De plus, nous calculons quelques statistiques de performance. Nous validons les résultats de notre approche en les comparant à ceux obtenus par un outil de simulation que nous avons implémenté, basé sur la théorie des files d'attente. Jusqu'à présent, les modèles d'Arbres de Production ne disposaient d'aucun outil de résolution. C'est pourquoi nous proposons une méthode de résolution basée sur la Distribution de Probabilité de Capacité (Probability Distribution of Capacity, PDC) des flux circulant dans le système du DC. Nous implémentons également le modèle d'AP en utilisant le langage de modélisation AltaRica 3.0, et nous utilisons son simulateur stochastique dédié pour estimer les indicateurs de fiabilité du système. Ceci est très important pour comparer et valider les résultats obtenus avec notre méthode d'évaluation.
En parallèle, nous développons un outil qui implémente l'algorithme de résolution des AP, avec une interface graphique qui permet de créer, éditer et analyser des modèles d'AP. L'outil permet également d'afficher les résultats et génère un code AltaRica, qui peut être analysé ultérieurement à l'aide du simulateur stochastique de l'outil AltaRica 3.0. / A Data Center (DC) is a building whose purpose is to host IT devices that provide different internet services. To ensure the constant operation of these devices, energy is provided by the electrical system, and a cooling system is necessary to keep them at a constant temperature. Each of these needs must be ensured continuously, because the breakdown of any one of them leads to the unavailability of the whole DC system, which can be fatal for a company. To our knowledge, no safety and performance studies exist that take into account the whole DC system with the different interactions between its sub-systems. The existing analysis studies are partial and focus on only one sub-system, sometimes two. The main objective of this thesis is to contribute to the safety analysis of a DC system. To achieve this purpose, we first study each DC sub-system (electrical, thermal and network) separately, in order to define its characteristics. Each DC sub-system is a production system consisting of combinations of components that transform input supplies (energy for the electrical system, air flow for the thermal one, and packets for the network one) into outputs, which can be internet services. Currently, the existing safety analysis methods for these kinds of systems are inadequate, because the safety analysis must take into account not only the internal state of each component, but also the different production flows circulating between components.
In this thesis, we consider a new modeling methodology called Production Trees (PT), which allows modeling the relationships between the components of a system with particular attention to the flows circulating between these components. The PT modeling technique deals with one kind of flow at a time. Its application to the electrical sub-system is thus suitable, because there is only one kind of flow (the electric current). However, when there are dependencies between sub-systems, as with the thermal and network sub-systems, different kinds of flows need to be taken into account, making the PT modeling technique inadequate as it stands. Therefore, we extend this technique to deal with dependencies between the different kinds of flows in the DC. Accordingly, it becomes easy to assess the different safety indicators of the global DC system, taking into account the interactions between its sub-systems. Moreover, we compute some performance statistics. We validate the results of our approach by comparing them to those obtained by a simulation tool that we implemented based on Queuing Network theory. So far, Production Tree models have had no tool support. We therefore propose a solution method based on the Probability Distribution of Capacity (PDC) of the flows circulating in the DC system. We also implement the PT model in the AltaRica 3.0 modeling language, and use its dedicated stochastic simulator to estimate the reliability indices of the system. This is very important for comparing and validating the results obtained with our assessment method. In parallel, we developed a tool that implements the PT solution algorithm with an interactive graphical interface for creating, editing and analyzing PT models. The tool also displays the results and generates AltaRica code, which can subsequently be analyzed using the stochastic simulator of the AltaRica 3.0 toolset.
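The abstract describes the PDC-based solution method only at a high level. As a rough, hypothetical illustration of the underlying idea (not the thesis's actual algorithm), a capacity distribution can be propagated through series (bottleneck) and parallel (additive) combinations of independent two-state components; all capacities and availabilities below are invented for the example:

```python
import itertools

def component_pdc(capacity, availability):
    """PDC of a single two-state component: nominal capacity when up, zero when down."""
    return {capacity: availability, 0: 1.0 - availability}

def combine(pdc_a, pdc_b, op):
    """Combine two independent PDCs under a flow-composition operator."""
    out = {}
    for (ca, pa), (cb, pb) in itertools.product(pdc_a.items(), pdc_b.items()):
        c = op(ca, cb)
        out[c] = out.get(c, 0.0) + pa * pb
    return out

def series(a, b):
    # Flow must traverse both components: the bottleneck sets the capacity.
    return combine(a, b, min)

def parallel(a, b):
    # Flow splits across redundant branches: capacities add.
    return combine(a, b, lambda x, y: x + y)

# Hypothetical electrical path: two redundant 10-unit feeds, then a 15-unit distribution unit.
feeds = parallel(component_pdc(10, 0.9), component_pdc(10, 0.9))
system = series(feeds, component_pdc(15, 0.95))
```

Under these assumed numbers, the resulting distribution gives the probability of delivering at least 10 units of capacity as 0.7695 + 0.171 = 0.9405.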
|
4 |
Shepherding Network Security Protocols as They Transition to New Atmospheres: A New Paradigm in Network Protocol Analysis
Talkington, Gregory Joshua 12 1900 (has links)
The solutions presented in this dissertation describe a new paradigm in which we shepherd these network security protocols through atmosphere transitions, offering new ways to analyze and monitor the state of the protocol. The approach involves identifying a protocol's transitional weaknesses through the adaptation of formal models, measuring those weaknesses as they exist in the wild by statically analyzing applications, and showing how network traffic analysis can monitor protocol implementations going forward. Throughout the effort, we follow the popular Open Authorization (OAuth) protocol in its attempts to apply its web-based roots to a mobile atmosphere. To pinpoint protocol deficiencies, we first adapt a well-regarded formal analysis and show it to be insufficient for characterizing mobile applications, tying the protocol's transitional weaknesses to implementation issues and delivering a reanalysis of the proof. We then measure the prevalence of this weakness by statically analyzing over 11,000 Android applications. While looking through source code, we develop new methods to find sensitive protocol information, overcome hurdles like obfuscation, and provide interfaces for later modeling, all while achieving a false-positive rate below 10 percent. We then use network analysis to detect and verify application implementations. By collecting network traffic from Android applications that use OAuth, we produce a set of metrics that, when fed into machine learning classifiers, can identify whether the OAuth implementation is correct. The challenges include encrypted network communication, heterogeneous device types, and the labeling of training data.
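The dissertation's actual traffic metrics and classifiers are not specified in this abstract. As a purely illustrative sketch, a minimal nearest-centroid classifier over invented per-app features (token exposed in a URL query, redirect count, plaintext HTTP use) could look like this; the feature set and labels are assumptions, not the author's:

```python
def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs. Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def classify(centroids, vec):
    """Assign vec to the label whose centroid is closest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], vec))

# Invented per-session features: [token_in_url_query, redirect_count, plaintext_http]
TRAINING = [
    ([0, 1, 0], "correct"), ([0, 2, 0], "correct"),
    ([1, 3, 1], "broken"),  ([1, 2, 1], "broken"),
]
CENTROIDS = train_centroids(TRAINING)
```

A real pipeline would of course use richer traffic metrics and an off-the-shelf learner; the point here is only the shape of "metrics in, correctness label out".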
|
5 |
Analyse et influence des paramètres d’affaires sur la qualité d’expérience des services Over-The-Top / Analysis and influence of business parameters on quality of experience for Over-The-Top services
Rivera Villagra, Diego 28 February 2017 (has links)
À l'époque où Internet est devenu la plateforme par défaut pour offrir de la valeur ajoutée, de nouveaux fournisseurs de services multimédia ont saisi cette opportunité en définissant les services Over-The-Top (OTT). Cependant, Internet n'étant pas un réseau de distribution fiable, il est nécessaire de garantir un haut niveau de Qualité d'Expérience (QoE), ainsi que les revenus des Fournisseurs de Services Internet (ISP) et des OTT. Le travail présenté dans ce document va au-delà de l'état de l'art en proposant une solution qui prend en compte cet objectif. Les principaux apports qui y sont présentés peuvent être synthétisés en quatre contributions. En premier lieu, l'inclusion des paramètres liés aux modèles d'affaires dans l'analyse de la QoE a demandé un nouveau cadre pour calculer la QoE d'un service OTT. Ce cadre est basé sur le formalisme mathématique des Machines Étendues à États Finis (EFSM), ce qui tire parti de deux avantages des EFSM : les traces des machines, qui suivent les décisions de l'utilisateur, et les variables de contexte, utilisées comme indicateurs de qualité, qui serviront ultérieurement à calculer la QoE. La deuxième contribution consiste à mettre en œuvre deux algorithmes. Le premier calcule une forme équivalente, sous forme d'arbre, qui représente les traces de la machine. Le deuxième utilise les traces et calcule la QoE pour les états terminaux de chaque trace. Les deux algorithmes peuvent être utilisés comme base d'un outil de monitorage capable de prévoir la valeur de la QoE d'un utilisateur. De plus, une mise en œuvre concrète de ces deux algorithmes, comme une extension de l'Outil de Monitorage de Montimage (MMT), est aussi présentée. La troisième contribution présente la validation de l'approche, avec un double objectif. D'une part, l'inclusion des paramètres du modèle d'affaires est validée et l'on détermine leur impact sur la QoE.
D'autre part, le modèle de QoE proposé est validé par la mise en œuvre d'une plateforme d'émulation d'un service OTT qui montre des vidéos perturbées. Cette implémentation est utilisée pour obtenir des valeurs estimées par des utilisateurs réels, qui sont utilisées pour dériver un modèle approprié de la QoE. La dernière contribution se base sur le cadre donné et fournit une analyse statique d'un service OTT. Cette procédure est réalisée par un troisième algorithme qui calcule le nombre de configurations contenues dans le modèle. En analysant à l'avance tous les scénarios possibles qu'un utilisateur peut rencontrer, le fournisseur de services OTT peut détecter des défauts dans le modèle et le service à un stade précoce du développement. / At a time when the Internet has become the de facto platform for delivering value, new multimedia providers took advantage of this opportunity to define Over-The-Top (OTT) services. Considering that the Internet is not a reliable distribution network, it is necessary to ensure high levels of Quality of Experience (QoE) and revenues for Internet Service Providers (ISP) and OTTs. The work presented in this dissertation goes beyond the state of the art by providing a solution with this goal in mind. The main contributions presented here can be summarized in four main points. First, the inclusion of business-model-related parameters in the QoE analysis required a new framework for calculating the QoE of an OTT service. This framework is based on the Extended Finite State Machine (EFSM) mathematical formalism, which takes advantage of two features of EFSMs: (1) the machine traces, which keep track of the user's decisions; and (2) the context variables, used as quality indicators and later correlated with the QoE. The second contribution is the design and implementation of two algorithms. The first computes the l-equivalent, a tree-shaped version of the model that exposes the traces of the machine.
The second uses the traces and computes the QoE at the final states of each trace. Both algorithms can be used to design a monitoring tool that can predict a user's QoE value. In addition, a concrete implementation is given as an extension of the Montimage Monitoring Tool (MMT). The third contribution presents the validation of the approach, with two objectives in mind. On the one hand, the inclusion of business-model-related parameters was validated by determining the impact of such variables on the QoE. On the other hand, the proposed QoE model is validated by the implementation of an OTT emulation platform showing disrupted videos. This implementation is used to obtain QoE values evaluated by real users, which are then used to derive an appropriate QoE model. The last contribution uses the framework to perform a static analysis of an OTT service. This is done by a third algorithm that computes the number of configurations contained in the model. By analyzing in advance all the possible scenarios a user can face, and their respective QoE, the OTT provider can detect flaws in the model and the service from the early stages of development.
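The EFSM framework is defined formally in the dissertation itself. As a much-simplified, hypothetical sketch of the idea (a state machine whose context variables serve as quality indicators, with QoE computed at a terminal state), consider the following; the events, transitions and the QoE formula are invented for illustration:

```python
class EFSM:
    """Minimal extended finite-state machine: states plus context variables,
    recording the trace of visited states."""
    def __init__(self, transitions, initial_state, context):
        self.transitions = transitions
        self.state = initial_state
        self.context = dict(context)
        self.trace = [initial_state]

    def fire(self, event):
        target, update = self.transitions[(self.state, event)]
        update(self.context)  # context variables act as quality indicators
        self.state = target
        self.trace.append(target)

def inc(key):
    return lambda ctx: ctx.__setitem__(key, ctx[key] + 1)

noop = lambda ctx: None

# Invented OTT video-session model (not the thesis's actual EFSM).
TRANSITIONS = {
    ("idle", "play"):      ("playing", noop),
    ("playing", "stall"):  ("stalled", inc("stalls")),
    ("stalled", "resume"): ("playing", noop),
    ("playing", "stop"):   ("done", noop),
}

def qoe(context):
    """Toy QoE estimate at a terminal state: each stall costs 1.5 MOS points."""
    return max(1.0, 5.0 - 1.5 * context["stalls"])

session = EFSM(TRANSITIONS, "idle", {"stalls": 0})
for event in ["play", "stall", "resume", "stop"]:
    session.fire(event)
```

After this trace (one rebuffering stall), the toy formula yields a QoE of 3.5; the thesis's actual model is derived from real user evaluations rather than a fixed formula.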
|
6 |
Modelagem e verificação automática de um protocolo de controle de fluxo adaptativo usando traços de execução / Automated modeling and verification of an adaptive flow control protocol using execution traces
MOREIRA, Anne Lorayne Gerônimo Silva Augusto. 22 May 2018 (has links)
O desenvolvimento de sistemas embarcados possibilitou uma forte expansão no número de aplicações dependentes de dispositivos programáveis em áreas tão distintas como a automobilística, sistemas financeiros e sistemas médicos. Uma eventual falha em algum desses sistemas pode provocar diferentes graus de danos e prejuízos e, por isso, exige-se um alto grau de confiabilidade em seu funcionamento. O aumento da complexidade dos novos sistemas computacionais, a pressão econômica e a busca de novos mercados concorrem para a redução nos prazos de entrega dos dispositivos programáveis e de seus softwares e sistemas embarcados. Este trabalho apresenta um estudo de caso para a utilização de um método de verificação formal de software aplicado a um sistema computacional de controle de fluxo adaptativo para Gateways Bluetooth Low Energy utilizados em sistemas de monitoramento remoto de pacientes. Os resultados obtidos neste trabalho confirmam a viabilidade da aplicação do método na verificação formal do software proposto. / The development of embedded systems has driven a strong expansion in the number of applications dependent on programmable devices in areas as distinct as the automotive industry, financial services, and medical systems. A failure in any of these systems can cause losses and damages on many levels; therefore, embedded systems require a high level of reliability in operation. The increasing complexity of new computational systems, economic pressure, and the search for new markets all push toward shorter delivery deadlines for programmable devices, their software, and embedded systems. This research presents a case study evaluating the use of a formal software verification method applied to an adaptive flow control system for Bluetooth Low Energy gateways used in remote patient monitoring systems.
The results obtained in this study confirm the feasibility of applying the formal verification method to the proposed software.
|
7 |
On Scalable Reconfigurable Component Models for High-Performance Computing / Modèles à composants reconfigurables et passant à l'échelle pour le calcul haute performance
Lanore, Vincent 10 December 2015 (has links)
La programmation à base de composants est un paradigme de programmation qui facilite la réutilisation de code et la séparation des préoccupations. Les modèles à composants dits « reconfigurables » permettent de modifier en cours d'exécution la structure d'une application. Toutefois, ces modèles ne sont pas adaptés au calcul haute performance (HPC) car ils reposent sur des mécanismes ne passant pas à l'échelle. L'objectif de cette thèse est de fournir des modèles, des algorithmes et des outils pour faciliter le développement d'applications HPC reconfigurables à base de composants. La principale contribution de la thèse est le modèle à composants formel DirectMOD, qui facilite l'écriture et la réutilisation de code de transformation distribuée. Afin de faciliter l'utilisation de ce premier modèle, nous avons également proposé :
• le modèle formel SpecMOD, qui permet la spécialisation automatique d'assemblages de composants afin de fournir des fonctionnalités de génie logiciel de haut niveau ;
• des mécanismes de reconfiguration performants à grain fin pour les applications AMR, une classe d'applications importante en HPC.
Une implémentation de DirectMOD, appelée DirectL2C, a été réalisée et a permis d'implémenter une série de benchmarks basés sur l'AMR pour évaluer notre approche. Des expériences sur grappes de calcul et supercalculateur montrent que notre approche passe à l'échelle. De plus, une analyse quantitative du code produit montre que notre approche est compacte et facilite la réutilisation. / Component-based programming is a programming paradigm which eases code reuse and separation of concerns. Some component models, which are said to be "reconfigurable", allow the modification at runtime of an application's structure.
However, these models are not suited to High-Performance Computing (HPC) as they rely on non-scalable mechanisms. The goal of this thesis is to provide models, algorithms and tools to ease the development of component-based reconfigurable HPC applications. The main contribution of the thesis is the DirectMOD component model, which eases the development and reuse of distributed transformation code. In order to improve on this core model in other directions, we have also proposed:
• the SpecMOD formal component model, which allows automatic specialization of hierarchical component assemblies and provides high-level software engineering features;
• mechanisms for efficient fine-grain reconfiguration of AMR applications, an important application class in HPC.
An implementation of DirectMOD, called DirectL2C, has been developed and used to implement a series of AMR-based benchmarks to evaluate our approach. Experiments on HPC architectures show that our approach scales. Moreover, a quantitative analysis of the benchmarks' code shows that our approach is compact and eases reuse.
|
8 |
Automated Modeling of Human-in-the-Loop Systems
Noah M Marquand (11587612) 22 November 2021 (has links)
Safety in human-in-the-loop systems (systems that change behavior with human input) is difficult to achieve. This difficulty can cost lives. As desired system capability grows, so too does the requisite complexity of the system. This complexity can result in designers not accounting for every use case of the system and unintentionally designing in unsafe behavior. Furthermore, complexity of operation and control can result in operators becoming confused during use or receiving insufficient training in the first place. All these cases can result in unsafe operations. One method of improving safety is implementing the use of formal models during the design process. These formal models can be analyzed mathematically to detect dangerous conditions, but can be difficult to produce without time, money, and expertise.<br> This document details the study of potential methods for constructing formal models autonomously from recorded observations of system use, minimizing the need for system expertise and saving time, money, and personnel in this safety-critical process. I first discuss how different system characteristics affect system modeling, isolating specific traits that most clearly affect the modeling process. Then, I develop a technique for modeling a simple, digital, menu-based system from a record of user inputs. This technique measures the availability of different inputs to the user, and then distinguishes states by comparing input availabilities. From there, I compare paths between states and check for shared behaviors. I then expand the general procedure to capture the behavior of a flight simulator. This system more closely resembles real-world safety-critical systems and can therefore be used to approximate a real use case of the method outlined. I use machine learning tools for statistical analysis, comparing patterns in system behavior and user behaviors.
Last, I discuss general conclusions on how the modeling approaches outlined in this document can be improved and expanded upon.<br> For simple systems, we find that inputs alone can produce state machines, but without corresponding system information, they are less helpful for determining the relative safety of different use cases than is needed. Through machine learning, we find that records of complex system use can be decomposed into sets of nominal and anomalous states, but determining the causal link between user inputs and transitions between these conditions is not simple and requires further research.
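The state-distinguishing technique is only summarized above. A bare-bones, hypothetical sketch of the grouping idea (one inferred state per distinct set of available inputs, with transitions recorded per chosen input) might look like this; the menu log is made up for the example:

```python
def infer_state_machine(log):
    """Group observations by the set of inputs available at each step
    (one inferred state per distinct set) and record the transition
    taken by each chosen input.

    log: list of (available_inputs, chosen_input) pairs, in time order."""
    states = {}       # frozenset of available inputs -> state id
    transitions = {}  # (state id, chosen input) -> next state id
    prev = None
    for available, chosen in log:
        sid = states.setdefault(frozenset(available), len(states))
        if prev is not None:
            transitions[prev] = sid
        prev = (sid, chosen)
    return states, transitions

# Made-up log of a two-screen menu system.
LOG = [
    ({"play", "settings"}, "settings"),  # main menu
    ({"back", "volume"}, "back"),        # settings menu
    ({"play", "settings"}, "play"),      # back at the main menu
]
STATES, TRANSITIONS = infer_state_machine(LOG)
```

On this toy log the procedure recovers two states (main menu and settings menu) and the transitions between them; the dissertation's actual technique additionally compares paths between states for shared behaviors.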
|
9 |
Timetrees: a branching-time structure for modeling activity and state in the human-computer interface
Brandenburg, Jeffrey Lynn 06 June 2008 (has links)
The design and construction of interactive systems with high usability requires a user-centered approach to system development. In order to support such an approach, it is necessary to provide tools and representations reflecting a behavioral view of the interface—a view centered on user activities and the system activities and states perceived by the user. While behavioral representations exist, there is no behavioral model of interaction between a user and a system. Such a model is necessary for formalization and extension of existing behavioral representations.
This dissertation presents a model of interactive behavior based on the timetree, a novel tree-based structure representing tasks, user actions, system activity, and system and interface state, all within a framework of branching sequential timelines. The model supports formal definitions, operations and abstraction techniques. Three application areas—a formal definition of an existing behavioral notation, connection between a behavioral representation and a formal model of input devices, and techniques for analysis of behavioral specifications—provide examples of the model's utility. / Ph. D.
|
10 |
XFM: An Incremental Methodology for Developing Formal Models
Suhaib, Syed Mohammed 13 May 2004 (has links)
We present the methodology of an agile formal method, eXtreme Formal Modeling (XFM), which we recently developed based on Extreme Programming concepts, for constructing abstract models from a natural-language specification of a complex system. In particular, we focus on Prescriptive Formal Models (PFMs), which capture the specification of the system under design in a mathematically precise manner. Such models can be used as golden reference models for formal verification, test generation, etc. This methodology for incrementally building PFMs works by adding user stories (expressed as LTL formulae), gleaned from the natural-language specifications, one by one into the model. XFM builds the models, retaining correctness with respect to incrementally added properties, by regressively model checking all the LTL properties captured so far in the model. We illustrate XFM with a graded set of examples, including a traffic light controller, a DLX pipeline, and a Smart Building control system. To make the regressive model checking steps feasible with current model checking tools, we need to keep the model size increments under control. We therefore analyze the effects of ordering LTL properties in XFM. We compare three different property-ordering methodologies: arbitrary ordering, property-based ordering and predicate-based ordering. We experiment on models of the ISA bus monitor and the arbitration phase of the Pentium Pro bus. We show experimentally, and argue mathematically, that predicate-based ordering is the best among these orderings. Finally, we present a GUI-based toolbox for users to build PFMs using XFM. / Master of Science
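XFM's regressive checks are performed with real LTL model checkers on real models. As a loose sketch of the regression loop only, with simple trace predicates standing in for LTL formulae and hand-written traces standing in for the model under construction, one increment step could be checked like this (the traffic-light properties are purely illustrative):

```python
def regressively_check(traces, properties):
    """Re-check every property captured so far against all traces of the
    current model; return the names of the properties that fail."""
    return [name for name, holds in properties
            if not all(holds(trace) for trace in traces)]

# Trace predicates standing in for LTL formulae (illustration only).
PROPERTIES = [
    ("red only after yellow",
     lambda t: all(t[i + 1] != "red" or t[i] == "yellow"
                   for i in range(len(t) - 1))),
    ("yellow always followed by red",
     lambda t: all(t[i] != "yellow" or t[i + 1] == "red"
                   for i in range(len(t) - 1))),
]

# Hand-written traces standing in for the model after an increment.
GOOD_MODEL = [["green", "yellow", "red", "green"]]
BAD_MODEL = [["green", "red", "green"]]
```

An increment that preserves all captured properties (GOOD_MODEL) passes with no failures, while one that violates an earlier user story (BAD_MODEL, which jumps from green straight to red) is flagged, which is the essence of XFM's regressive step.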
|