281 |
Internet operation of aero gas turbines
Diakostefanis, Michail 10 1900 (has links)
Internet applications have been extended to various aspects of everyday life and offer
services of high reliability and security. In academia, Internet applications offer
useful tools for the remote creation of simulation models and the real-time conduct of
control experiments. The aim of this study was the design of a reliable, safe and secure
software system for real-time operation of a remote aero gas turbine, with the use of
standard Internet technology at very low cost.
The gas turbine used in this application was an AMT Netherlands Olympus micro gas
turbine. The project presented three prototypes: operation from an adjacent
computer station, operation within the Local Area Network (LAN) of Cranfield
University and, finally, remote operation through the Internet. The gas turbine is a
safety-critical component, so the project was driven by risk assessment at every stage
of the software process, which adhered to the Spiral Model, with elements of
safety-critical systems design applied in every round.
For the implementation, various software tools were used, most of them open-source
APIs. LabVIEW, with compatible hardware from National Instruments, was
used to interface the gas turbine with an adjacent computer work station. The main
interaction has been established between the computer and the ECU of the engine,
with additional instrumentation installed, wherever required. The Internet user
interface web page implements AJAX technology in order to facilitate asynchronous
update of the individual fields that present the indications of the operating gas turbine.
The parameters of the gas turbine were acquired with high accuracy, with most
attention given to the most critical indications, exhaust gas temperature (EGT) and
rotational speed (RPM). These are provided to a designed real-time monitoring
application, which automatically triggers actions when necessary.
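As an illustration of the kind of automatic action described above, the following sketch shows a threshold check that a monitoring loop might apply to each EGT/RPM sample. The limit values, the 95% warning margin, and the function name are assumptions for the example, not values from the study.

```python
# Hypothetical limits for the example; the actual engine limits differ.
EGT_LIMIT_C = 750      # assumed exhaust gas temperature limit (deg C)
RPM_LIMIT = 108_000    # assumed rotational speed limit

def check_reading(egt_c: float, rpm: int) -> str:
    """Return the action a monitoring loop would trigger for one sample."""
    if egt_c > EGT_LIMIT_C or rpm > RPM_LIMIT:
        return "shutdown"      # critical limit exceeded: stop the engine
    if egt_c > 0.95 * EGT_LIMIT_C or rpm > 0.95 * RPM_LIMIT:
        return "warn"          # approaching a limit: alert the remote user
    return "ok"
```

A real monitor would call such a check on every acquired sample and route the resulting action back to the engine's ECU.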
The acceptance validation was accomplished with a formal validation method – Model
Checking. The final web application was inspired by the RESTful architecture and
allows the user to operate the remote gas turbine through a standard browser,
without requiring any additional downloading or local data processing.
The web application was designed with provisions for generic applications. It can be
configured to function with multiple different gas turbines and also integrated with
external performance simulation or diagnostics Internet platforms. An analytical
proposal is also presented for integrating this application with the TURBOMATCH
WebEngine web application for gas turbine performance simulation, developed by
Cranfield University.
|
282 |
A Compositional Approach to Asynchronous Design Verification with Automated State Space Reduction
Ahrens, Jared 23 February 2007 (has links)
Model checking is the most effective means of verifying the correctness of asynchronous designs, and state space exploration is central to model checking. Although model checking can achieve very high verification coverage, the high degree of concurrency in asynchronous designs often leads to state explosion during state space exploration. To inhibit this explosion, our approach builds on the ideas of compositional verification. In our approach, a design modeled in a high-level description is partitioned into a set of parallel components. Before state space exploration, each component is paired with an over-approximated environment to decouple it from the rest of the design. Then, a global state transition graph is constructed by reducing and incrementally composing component state transition graphs. We take great care during reduction and composition to preserve all failures found during the initial state space exploration of each component. To further reduce complexity, interface constraints are automatically derived for the over-approximated environment of each component. We prove that our approach is conservative in that false positive results are never produced. The effectiveness of our approach is demonstrated by the experimental results of several case studies, showing that our approach can verify designs that cannot be handled by traditional flat approaches. The experiments also show that constraints can reduce the size of the global state transition graph and prevent some false failures.
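The incremental composition step described above can be illustrated with a minimal sketch: a synchronous product of two labeled transition systems that interleaves local actions and synchronizes on shared ones. The dictionary representation and the function name are assumptions for the example; the actual tool composes component state transition graphs while preserving failures.

```python
def compose(lts1, lts2, sync):
    """Compose two labeled transition systems.

    Each LTS is (initial_state, {(state, action): {next_states}}).
    Actions in `sync` must be taken jointly; all others interleave.
    Returns the initial pair state and the reachable transition relation.
    """
    init = (lts1[0], lts2[0])
    t1, t2 = lts1[1], lts2[1]
    trans, frontier, seen = {}, [init], {init}
    while frontier:
        s1, s2 = s = frontier.pop()
        moves = []
        for (p, a), dests in t1.items():
            if p != s1:
                continue
            if a in sync:
                # joint move: possible only if the second LTS also offers `a`
                for q in t2.get((s2, a), ()):
                    moves += [(a, (d, q)) for d in dests]
            else:
                moves += [(a, (d, s2)) for d in dests]
        for (p, a), dests in t2.items():
            if p != s2 or a in sync:
                continue
            moves += [(a, (s1, d)) for d in dests]
        for a, ns in moves:
            trans.setdefault((s, a), set()).add(ns)
            if ns not in seen:
                seen.add(ns)
                frontier.append(ns)
    return init, trans
```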
|
283 |
Methods and Algorithms for Scalable Verification of Asynchronous Designs
Yao, Haiqiong 01 January 2012 (has links)
Concurrent systems are getting more complex with the advent of multi-core processors and the support of concurrent programs. However, errors in concurrent systems are too subtle to detect with traditional testing and simulation. Model checking is an effective method for verifying concurrent systems by exhaustively searching the complete state space exhibited by a system. The main challenge for model checking, however, is state explosion: the state space of a concurrent system grows exponentially with the number of components of the system. The state space explosion problem prevents model checking from being applied to systems of realistic size.
After decades of intensive research, a large number of methods have been developed to attack this well-known problem. Compositional verification is one of the promising methods that can scale to large, complex concurrent systems. In compositional verification, the task of verifying an entire system is divided into smaller tasks of verifying each component of the system individually. The correctness of properties on the entire system can then be derived from the results of the local verification on individual components. This method avoids building the global state space for the entire system and accordingly alleviates the state space explosion problem. To facilitate the application of compositional verification, several issues need to be addressed. The generation of over-approximate yet accurate environments for the local verification of components is a major focus of automated compositional verification.
This dissertation addresses this issue by proposing two abstraction refinement methods that iteratively refine the state space of each component with an over-approximate environment. The basic idea of both methods is to examine the interface interactions among different components and remove from each component's state space the behaviors that are not allowed on its interfaces. After the extra behaviors introduced by the over-approximate environment are removed by abstraction refinement, the initial coarse environments become more accurate. The two methods differ in how they identify and remove the illegal behaviors generated by the over-approximate environments.
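A minimal sketch of the refinement idea, under the simplifying assumption that a transition on a shared interface action is illegal when no peer component sharing that action can ever perform it. The dissertation's methods are finer-grained; all names and the data layout here are illustrative.

```python
def refine(components, alphabets):
    """Iteratively prune transitions on shared actions no peer can offer.

    components: list of sets of (src, action, dst) transitions.
    alphabets:  list of action sets, one per component.
    Repeats until a fixpoint is reached, then returns the pruned components.
    """
    comps = [set(c) for c in components]
    changed = True
    while changed:
        changed = False
        # actions each component can still perform somewhere
        offered = [{a for (_, a, _) in c} for c in comps]
        for i, c in enumerate(comps):
            for t in list(c):
                a = t[1]
                peers = [j for j in range(len(comps))
                         if j != i and a in alphabets[j]]
                if peers and not any(a in offered[j] for j in peers):
                    c.remove(t)      # no peer can ever synchronize on `a`
                    changed = True
    return comps
```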
For local properties that can be verified on individual components, compositional reasoning can be scaled to large systems by leveraging the proposed abstraction refinement methods. For global properties that cannot be checked locally, however, the state space of the whole system needs to be constructed. To alleviate the state explosion problem when generating the global state space by composing the local state spaces of the individual components, this dissertation also proposes several state space reduction techniques that simplify the state space of each component and help the compositional minimization method generate a much smaller global state space for the entire system. These state space reduction techniques are sound and complete in that they keep all behaviors on the interface without introducing any extra ones; therefore, verification results derived from the reduced global state space are also valid for the original state space of the entire system.
An automated compositional verification framework integrating all the abstraction refinement methods and state space reduction techniques presented in this dissertation has been implemented in the explicit-state model checker Platu. It has been applied to several non-trivial asynchronous circuit designs to demonstrate its scalability. The experimental results show that the framework is effective on examples that are too complex for monolithic model checking methods to handle.
|
284 |
Model Checking Parameterized Timed Systems
Mahata, Pritha January 2005 (has links)
In recent years, there has been much advancement in the area of verification of infinite-state systems. A system can have an infinite state space due to unbounded data structures such as counters, clocks, stacks, queues, etc. It may also be infinite-state due to parameterization, i.e., the possibility of having an arbitrary number of components in the system. For parameterized systems, we are interested in checking correctness of all instances in one verification step. In this thesis, we consider systems which contain both sources of infiniteness, namely (a) real-valued clocks and (b) parameterization. More precisely, we consider two models: (a) the timed Petri net (TPN) model, an extension of the classical Petri net model, and (b) the timed network (TN) model, in which an arbitrary number of timed automata run in parallel. We consider verification of safety properties for timed Petri nets using forward analysis. Since forward analysis is necessarily incomplete, we provide a semi-algorithm augmented with an acceleration technique in order to make it terminate more often on practical examples. Then we consider a number of problems which generalise the corresponding ones for timed automata and Petri nets. For instance, we consider zenoness, where we check the existence of an infinite computation with a finite duration. We also consider two variants of the boundedness problem: syntactic boundedness, in which both live and dead tokens are considered, and semantic boundedness, where only live tokens are considered. We show that the former problem is decidable while the latter is not. Finally, we show undecidability of LTL model checking for both dense and discrete timed Petri nets. Next we consider timed networks. We show undecidability of safety properties in case each component is equipped with two or more clocks. This result contrasts with the previous decidability result for the case where each component has a single clock. Also, we show that the problem is decidable when clocks range over the discrete time domain; this decidability result holds when the processes have any finite number of clocks. Furthermore, we outline the border between decidability and undecidability of safety for TNs by considering several syntactic and semantic variants.
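The forward analysis with acceleration mentioned above can be illustrated with a toy, untimed sketch: explore markings depth-first and report unboundedness when a marking strictly covers one of its ancestors, the classic acceleration test behind coverability-tree constructions. Markings are tuples over a fixed place order; this is a teaching sketch, not the thesis's semi-algorithm for timed Petri nets.

```python
def explore(transitions, m0, limit=10_000):
    """Forward exploration of an untimed Petri net.

    transitions: list of (consume, produce) tuples aligned with the
    marking tuple m0. Reports 'unbounded' when a reachable marking
    strictly covers an ancestor on its own firing path.
    """
    stack = [(m0, (m0,))]          # (marking, path of ancestors)
    seen = set()
    while stack and len(seen) < limit:
        m, path = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        for take, give in transitions:
            if any(x < t for x, t in zip(m, take)):
                continue           # transition not enabled
            m2 = tuple(x - t + g for x, t, g in zip(m, take, give))
            if any(all(a >= b for a, b in zip(m2, anc)) and m2 != anc
                   for anc in path):
                return "unbounded"  # strict cover of an ancestor: can pump
            stack.append((m2, path + (m2,)))
    return "no unbounded branch found"
```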
|
285 |
Parameterized Systems : Generalizing and Simplifying Automatic Verification
Rezine, Ahmed January 2008 (links)
In this thesis we propose general and simple methods for automatic verification of parameterized systems. These are systems consisting of an arbitrary number of identical processes or components. The number of processes defines the size of the system. A parameterized system may be regarded as an infinite family of instances, namely one for each size. The aim is to perform a parameterized verification, i.e. to verify that behaviors produced by all instances, regardless of their size, comply with some safety or liveness property. In this work, we describe three approaches to parameterized verification. First, we extend the Regular Model Checking framework to systems where components are organized in tree-like structures. For such systems, we give a methodology for computing the set of reachable configurations (used to verify safety properties) and the transitive closure (used to verify liveness properties). Next, we introduce a methodology allowing the verification of safety properties for a large class of parameterized systems. We focus on systems where components are organized in linear arrays and manipulate variables or arrays of variables ranging over bounded or numerical domains. We perform backwards reachability analysis on a uniform over-approximation of the parameterized system at hand. Finally, we suggest a new approach that enables us to reduce the verification of termination under weak fairness conditions to a reachability analysis for systems with simple commutativity properties. The idea is that reachability calculations (associated with safety) are usually less expensive than termination calculations (associated with liveness). This idea can also be used for other transition systems, not only those induced by parameterized systems.
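The backwards reachability analysis mentioned above can be sketched for a simple counter abstraction: the bad states form an upward-closed set represented by its minimal elements, predecessors are saturated until a fixpoint, and safety holds if the initial marking is not covered. The example system in the test (idle/critical/lock counters) and all names are illustrative assumptions, not from the thesis.

```python
def backward_reach(transitions, bad, init):
    """Backward reachability over upward-closed sets of counter tuples.

    transitions: list of (take, give) tuples (Petri-net-like counter moves).
    bad: minimal marking generating the upward-closed set of bad states.
    Returns True iff `init` covers some minimal backward-reachable marking,
    i.e. a bad state is coverable from `init`.
    """
    def pre(u, take, give):
        # least marking that can fire (take, give) and land in the up-set of u
        return tuple(max(x - g, 0) + t for x, t, g in zip(u, take, give))

    def covers(small, big):          # is big in the upward closure of small?
        return all(a <= b for a, b in zip(small, big))

    basis, frontier = {bad}, [bad]
    while frontier:
        u = frontier.pop()
        for take, give in transitions:
            v = pre(u, take, give)
            if any(covers(w, v) for w in basis):
                continue             # v already subsumed by the basis
            basis = {w for w in basis if not covers(v, w)}  # drop subsumed
            basis.add(v)
            frontier.append(v)
    return any(covers(w, init) for w in basis)
```

Termination follows from well-quasi-ordering of counter tuples; the subsumption step keeps only minimal elements.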
|
286 |
Model Based Development of Embedded Systems using Logical Clock Constraints and Timed Automata
Suryadevara, Jagadish January 2013 (links)
In modern times, human life intrinsically depends on real-time embedded systems (RTES) with increasingly safety-critical and mission-critical features, for instance in domains such as automotive and avionics. These systems are characterized by stringent functional requirements and require predictable timing behavior. However, the complexity of RTES has been ever increasing, requiring systematic development methods. To address these concerns, model-based frameworks and component-based design methodologies have emerged as a feasible solution. Further, system artifacts such as requirements/specifications and architectural designs, as well as behavioral models like state-machine views, are integrated within the development process. However, several challenges remain to be addressed, of which two are especially important: expressiveness, to represent real-time and causality behavior, and analyzability, to support verification of functional and timing behavior. As the main research contribution, this thesis presents design and verification techniques for model-based development of RTES, addressing expressiveness and analyzability for architectural and behavioral models. To begin with, we have proposed a systematic design process to support component-based development. Next, we have provided a real-time semantic basis to support expressiveness and verification for structural and behavioral models. This is achieved by defining an intuitive formal semantics for real-time component models, using ProCom, a component model developed at our research centre, and the CCSL (Clock Constraint Specification Language), an expressive language for the specification of timed causality behavior. This paves the way for formal verification of both architectural and behavioral models using model checking, as we show in this work, by transforming the models into timed automata and performing verification using UPPAAL, a model checking tool based on timed automata.
Finally, the research contributions are validated using representative examples of RTES as well as an industrial case study. / ARROWS
|
287 |
Type-safe Computation with Heterogeneous Data
Huang, Freeman Yufei 14 September 2007 (links)
Computation with large-scale heterogeneous data typically requires universal traversal to search for all occurrences of a substructure that matches a possibly complex search pattern, whose context may differ in different places within the data. Both aspects cause difficulty for existing general-purpose programming languages, because these languages are designed for homogeneous data and have problems typing the different substructures in heterogeneous data and the complex patterns matched against those substructures. Programmers either have to hard-code the structures and search patterns, preventing programs from being reusable and scalable, or have to resort to low-level untyped programming or special-purpose query languages, opening the door to type mismatches that pose a high risk of program correctness and security problems.
This thesis invents the concept of pattern structures and proposes a general solution to the above problems: a programming technique using pattern structures. In this solution, well-typed pattern structures are defined to represent complex search patterns, and pattern searching over heterogeneous data is programmed with pattern parameters, in a statically-typed language that supports first-class typing of structures and patterns. The resulting programs are statically typed, highly reusable for different data structures and different patterns, and highly scalable in terms of the complexity of data structures and patterns. Adding new kinds of patterns for an application no longer requires changing the language in use or creating new ones; it is only a programming task. The thesis demonstrates the application of this approach to, and its advantages in, two important examples of computation with heterogeneous data: XML data processing and Java bytecode analysis. / Thesis (Ph.D, Computing) -- Queen's University, 2007-08-27 09:43:38.888
|
288 |
RULES BASED MODELING OF DISCRETE EVENT SYSTEMS WITH FAULTS AND THEIR DIAGNOSIS
Huang, Zhongdong 01 January 2003 (links)
Failure diagnosis in large and complex systems is a critical task. In the realm of discrete event systems, Sampath et al. proposed a language-based failure diagnosis approach. They introduced diagnosability for discrete event systems and gave a method for testing it by first constructing a diagnoser for the system. The complexity of this method of testing diagnosability is exponential in the number of states of the system and doubly exponential in the number of failure types. In this thesis, we give an algorithm for testing diagnosability that does not construct a diagnoser for the system; its complexity is of 4th order in the number of states of the system and linear in the number of failure types. In this dissertation we also study diagnosis of discrete event systems (DESs) modeled in the rule-based modeling formalism introduced in [12] to model failure-prone systems. The results have been presented in [43]. An attractive feature of the rule-based model is its compactness (size is polynomial in the number of signals). A motivation for the work presented is to develop failure diagnosis techniques that are able to exploit this compactness. In this regard, we develop symbolic techniques for testing diagnosability and computing a diagnoser. The diagnosability test is shown to be an instance of first-order temporal logic model checking. An on-line algorithm for diagnoser synthesis is obtained by using predicates and predicate transformers. We demonstrate our approach by applying it to the modeling and diagnosis of a part of an assembly line. When the system is found to be not diagnosable, we use sensor refinement and sensor augmentation to make it diagnosable. Finally, a controller is extracted from the maximally permissive supervisor for the purpose of implementing the control by selecting, when possible, only one controllable event from among the ones allowed by the supervisor for the assembly line in automaton models.
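A diagnoser-free diagnosability test in the spirit described above can be sketched with the well-known twin-plant construction: two copies of the automaton synchronize on observable events and move freely on unobservable ones, and a reachable cycle of "ambiguous" pair states (one copy has seen a fault, the other has not) witnesses non-diagnosability. This is a standard polynomial test, not the thesis's specific algorithm; the encoding is an assumption for the example.

```python
from collections import defaultdict

def diagnosable(transitions, observable, faults, init):
    """Twin-plant diagnosability test.

    transitions: iterable of (src, event, dst); faults are unobservable.
    Returns True iff no reachable ambiguous cycle exists.
    """
    delta = defaultdict(list)
    for s, e, d in transitions:
        delta[s].append((e, d))
    start = ((init, False), (init, False))   # (state, has_seen_fault) pairs
    edges = defaultdict(set)
    stack, seen = [start], {start}
    while stack:
        cur = stack.pop()
        (s1, f1), (s2, f2) = cur
        succs = []
        for e, d in delta[s1]:
            if e in observable:
                for e2, d2 in delta[s2]:     # both copies must match it
                    if e2 == e:
                        succs.append(((d, f1), (d2, f2)))
            else:
                succs.append(((d, f1 or e in faults), (s2, f2)))
        for e, d in delta[s2]:
            if e not in observable:
                succs.append(((s1, f1), (d, f2 or e in faults)))
        for nxt in succs:
            edges[cur].add(nxt)
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    ambiguous = {v for v in seen if v[0][1] != v[1][1]}
    color = {}
    def on_amb_cycle(v):                     # DFS cycle detection
        color[v] = 1
        for w in edges[v]:
            if w in ambiguous and (color.get(w) == 1
                                   or (w not in color and on_amb_cycle(w))):
                return True
        color[v] = 2
        return False
    return not any(v not in color and on_amb_cycle(v) for v in ambiguous)
```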
|
289 |
Software for architectures based on multi-core processors
Frati, Fernando Emmanuel January 2015
All processors available on the market (including those used in mobile devices) have a typical multicore architecture. Consequently, the shared-memory programming model has displaced the sequential programming model as the model of choice for extracting maximum performance from these architectures.
In this programming model, the assumptions about instruction execution order and atomicity of variable accesses inherited from the sequential model are no longer valid. The nondeterminism implicit in the execution of concurrent programs forces the programmer to use some synchronization mechanism to guarantee those properties.
Programmers frequently make mistakes when synchronizing processes, giving rise to new classes of programming errors such as deadlocks, race conditions, order violations, single-variable atomicity violations, and multi-variable atomicity violations. Traditional debugging methods do not apply to concurrent programs, so debugging tools are needed that can help the programmer detect this class of errors.
Of these errors, deadlocks and race conditions have received the most attention in the scientific community. However, only 29.5% of the errors are deadlocks; of the remaining 70.5%, atomicity violations account for more than 65% of the errors, 96% occur between two threads, and 66% involve a single variable. For this reason, single-variable atomicity violations have come to be regarded in recent years as the most general class of concurrency error and have received great attention from numerous research groups.
The first proposals using dynamic instrumentation methods to detect atomicity violations appeared in 2005, notably improving detection capability over earlier approaches. Among them, AVIO (Lu, Tucek, Qin, and Zhou, 2006) stands out for its performance and detection capability. To detect an atomicity violation, AVIO monitors the memory accesses of the concurrent processes during execution, recording which processes access each variable, in search of unserializable interleavings. Although AVIO is superior to previous proposals, the overhead it introduces (25× on average) is too high for use in production environments.
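The serializability test at the heart of AVIO can be sketched as a table lookup: for two consecutive accesses by one thread to a shared variable, with one remote access interleaved between them, four of the eight read/write combinations are unserializable (per the AVIO paper). Function and constant names here are illustrative, not AVIO's actual code.

```python
# The four unserializable (local, remote, local) access interleavings
# identified in the AVIO paper; the other four R/W combinations are
# equivalent to some serial order.
UNSERIALIZABLE = {("R", "W", "R"), ("W", "W", "R"),
                  ("W", "R", "W"), ("R", "W", "W")}

def serializable(first_local, remote, second_local):
    """Classify one interleaving of accesses to a shared variable.

    Each argument is "R" or "W"; the middle access comes from another
    thread. Returns True iff the interleaving is serializable.
    """
    return (first_local, remote, second_local) not in UNSERIALIZABLE
```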
Many proposals reduce the overhead of the detection algorithms by implementing them directly in hardware through extensions (changes to the processor, cache memory, etc.), achieving excellent results. However, this approach requires processor manufacturers to incorporate those modifications into their designs (which so far has not happened), so such solutions can be expected to take a long time to reach the market, and even longer to replace the platforms currently in production.
Software implementations, on the other hand, rely on program instrumentation. Because they must add a call to an analysis routine at every instruction that accesses memory, error-detection methods use instruction-level instrumentation. Unfortunately, this granularity of instrumentation is slow, penalizing execution time by more than an order of magnitude.
However, an error is only possible if at least two threads access shared data simultaneously. This means that if only a small percentage of the operations of the monitored application access shared data, much of the time spent instrumenting every memory access is wasted.
To reduce the overhead of instruction-level instrumentation by restricting it to shared-memory accesses, the precise moment at which those accesses occur must be detected. The best opportunity to detect that moment is when a change occurs in the cache memory shared between the cores running the processes.
A very useful tool for this task is hardware counters: a set of special registers available in all current processors. These registers can be programmed to count the number of times an event occurs inside the processor during the execution of an application. Events provide information about different aspects of a program's execution (for example, the number of instructions executed, the number of L1 cache misses, or the number of floating-point operations executed).
The strategy adopted was to find an event that detects the occurrence of unserializable interleavings and to activate/deactivate AVIO accordingly. Unfortunately, no single event indicates the occurrence of the interleaving cases directly. It is, however, possible to represent the cases through memory access patterns.
A search for events associated with state changes in the cache coherence protocol revealed that, for the test architecture, there is an event whose description indicates that it occurs with one of the access patterns present in the interleaving cases.
The pattern associated with the event is present in three of the four unserializable interleaving cases that AVIO must detect. The experiments performed to validate the event showed that it does indeed occur precisely with that access pattern, and can therefore detect the occurrence of unserializable interleavings.
After establishing the viability of the selected event, experiments were carried out with the counters in an operating mode called sampling, which allows the counters to be configured to deliver signals to a process upon the occurrence of events. In this mode the programmer specifies how many events must occur before the signal is generated, allowing this feature to be tuned to the application's requirements.
This operating mode was used to decide when to activate the analysis routine of the detection tools and thereby reduce code instrumentation.
Deactivation, on the other hand, is somewhat more complex. Since a counter cannot be configured to send a signal upon the non-occurrence of events, a timer is used to check at regular intervals whether it is safe to deactivate the analysis routine (for example, because no atomicity violations were detected in the last interval).
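The activation/deactivation policy just described can be sketched as a small state machine: a counter-overflow signal switches the analysis routine on, and a periodic timer switches it off again after a quiet interval. The default interval and all names are illustrative assumptions, not the actual implementation.

```python
class SamplingMonitor:
    """Sketch of the sampling-driven enable/disable policy.

    `on_counter_overflow` models the signal delivered by the hardware
    counter in sampling mode; `on_timer_tick` models the periodic timer
    that checks whether the last interval was quiet.
    """

    def __init__(self, interval_s=0.005):    # illustrative 5 ms interval
        self.interval_s = interval_s
        self.analysis_on = False
        self.violations_in_interval = 0

    def on_counter_overflow(self):
        self.analysis_on = True              # suspect pattern seen: analyze

    def on_violation(self):
        if self.analysis_on:
            self.violations_in_interval += 1

    def on_timer_tick(self):
        if self.analysis_on and self.violations_in_interval == 0:
            self.analysis_on = False         # quiet interval: stop analyzing
        self.violations_in_interval = 0
```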
The proposed model was used to implement a new version called AVIO-SA, which starts executing the monitored application with the analysis routine deactivated. When an event is detected, the routine is activated and behaves for a while like the original version of AVIO. Eventually AVIO stops detecting interleavings and the analysis routine is deactivated again.
Because the optimal sampling-interval length cannot be estimated analytically, experiments were carried out to find it empirically. An interval of 5 ms allows AVIO-SA to detect approximately the same number of interleavings as AVIO, but with a significantly shorter execution time.
To complete the performance evaluation, the experiments were extended with HELGRIND, a free race-condition detection tool, and the overhead of each tool was estimated for each application. On average, HELGRIND showed an overhead of 223×, AVIO 32×, and AVIO-SA 9×.
Besides performance, the error-detection capability of AVIO-SA was evaluated through three experiments:
- Detection test with kernels of known bugs.
- Detection test on real applications (Apache).
- Comparison of the bugs reported by AVIO and AVIO-SA (using SPLASH-2).
AVIO-SA passed all three tests satisfactorily. The results show that the proposed model does not adversely affect the tool's detection capability, while using less than 30% of the time required by AVIO. Because AVIO-SA perturbs the execution history of the monitored application less, it is a better option for use in production environments.
|
290 |
Contribution to the modeling and verification of workflow processes
Sbaï, Zohra 13 November 2010 (links) (PDF)
Workflow technology, which aims to automate business processes and provide support for their management, is today an active research area. This thesis fits in that context, covering both the modeling of workflow processes and their verification. These processes, which may be constrained by shared resources or by processing durations, must be verified before being handed to the workflow management systems that will execute them. We focus on verifying the soundness property of workflow nets (WF-nets): subclasses of Petri nets that model workflow processes. In this setting, by exploring the structural theory of Petri nets, we identified subclasses of WF-nets for which soundness can be checked and characterized efficiently. We further extended these subclasses to account for shared resources, and studied soundness in the presence of an arbitrary number of instances ready to execute. In this part, we had to automate the computation of minimal siphons in a Petri net; to do so, we selected an algorithm from the literature and improved it by searching for and contracting alternating circuits. We then addressed the modeling and verification of workflow processes with timing constraints. We first proposed a TWF-net (Timed WF-net) model, for which we defined the timed soundness property and gave a necessary and sufficient condition for checking it. We then relaxed the timing constraints by proposing a timed model for processes whose timing constraints vary within time intervals.
We formally defined the ITWF-net (Interval Timed WF-net) model and gave its semantics, and we developed and tested a prototype for modeling and simulating ITWF-nets. The last part of this thesis concerned the formal verification of workflow processes with the SPIN model checker. We first had to translate the adopted workflow specification into Promela, the description language of the models SPIN verifies. Second, we expressed the soundness properties in Linear Temporal Logic (LTL) and used SPIN to test whether each property is satisfied by the Promela model of the WF-net in question. Finally, we expressed k-soundness properties for WF-nets modeling several instances, and (k,R)-soundness for concurrent workflow processes with shared resources.
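The classical soundness property checked here can be illustrated with a small explicit-state sketch for bounded, 1-safe WF-nets: soundness requires that the final marking is reachable from every reachable marking, that no reachable marking improperly contains a token in the sink alongside other tokens, and that no transition is dead. The set-based marking representation and all names are simplifying assumptions; the thesis works with richer WF-net classes and SPIN/LTL.

```python
def sound(transitions, source, sink, limit=10_000):
    """Check classical soundness of a 1-safe WF-net by exploration.

    transitions: dict name -> (consumed_places, produced_places),
    both frozensets; markings are frozensets of marked places.
    """
    init, final = frozenset([source]), frozenset([sink])
    succs, fired = {}, set()
    stack, seen = [init], {init}
    while stack and len(seen) < limit:
        m = stack.pop()
        outs = []
        for name, (take, give) in transitions.items():
            if take <= m:                      # transition enabled
                fired.add(name)
                outs.append((m - take) | give)
        succs[m] = outs
        for m2 in outs:
            if m2 not in seen:
                seen.add(m2)
                stack.append(m2)
    if any(sink in m and m != final for m in seen):
        return False                           # improper completion
    if fired != set(transitions):
        return False                           # dead transition
    finishing = {final} if final in seen else set()
    changed = True
    while changed:                             # backward closure to `final`
        changed = False
        for m in seen:
            if m not in finishing and any(n in finishing
                                          for n in succs.get(m, [])):
                finishing.add(m)
                changed = True
    return finishing == seen                   # option to complete everywhere
```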
|