291 |
Performance and Correctness Evaluation of Distributed Systems
Rosa, Cristian 24 October 2011 (has links) (PDF)
Distributed systems are at the heart of information technology. It has become commonplace to rely on multiple distributed units to improve an application's performance or fault tolerance, or to tackle problems that exceed the capacity of a single processing unit. Designing algorithms for the distributed setting is particularly difficult because of the asynchrony and non-determinism that characterize these systems. Simulation makes it possible to study the performance of distributed applications without the complexity and cost of real execution platforms, while model checking allows the correctness of these systems to be assessed fully automatically. In this thesis, we explore the idea of integrating a model checker and a simulator of distributed systems within a single tool, so that both the performance and the correctness of distributed applications can be evaluated. To cope with the combinatorial state-explosion problem, we present a dynamic partial-order reduction (DPOR) algorithm that performs the exploration over a reduced set of networking primitives. This approach makes it possible to verify programs written against any of the communication APIs offered by the simulator. To that end, we developed a complete formal specification of the semantics of these networking primitives, which allows reasoning about the independence of communication actions, as required by DPOR. Experimental results show that our approach can handle non-trivial, unmodified C programs written for the SimGrid simulator. We also propose a solution to the scalability problem of CPU-bound simulations, making it feasible to simulate peer-to-peer applications with several million nodes.
Unlike classical parallelization approaches, we parallelize the internal steps of the simulation while keeping the overall process sequential. We present a complexity analysis of the parallel simulation algorithm and compare it to the classical sequential algorithm, obtaining a criterion that characterizes the situations in which our approach can be expected to yield a performance gain. An important result is the observed relationship between the numerical precision of the models used to simulate the hardware resources and the degree of parallelism attainable with this approach. We present several case studies that benefit from parallel simulation, and we detail the results of a simulation of the Chord peer-to-peer protocol at an unprecedented scale: two million nodes, executed on a single machine with a precise network model.
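The intuition behind DPOR can be shown with a toy sketch (the mailbox names and the independence rule below are illustrative simplifications, not SimGrid's actual semantics): two communication actions that target different mailboxes commute, so interleavings differing only in the order of independent actions belong to the same equivalence class (Mazurkiewicz trace), and only one representative per class needs exploring.

```python
def interleavings(procs):
    """Enumerate all interleavings that respect each process's program order."""
    if all(len(p) == 0 for p in procs):
        yield ()
        return
    for i, p in enumerate(procs):
        if p:
            rest = procs[:i] + [p[1:]] + procs[i + 1:]
            for tail in interleavings(rest):
                yield (p[0],) + tail

def independent(a, b):
    # Hypothetical rule: actions on different mailboxes are independent.
    return a[1] != b[1]

def trace_key(schedule):
    """Canonical representative of an equivalence class: bubble adjacent
    independent actions into sorted order until a fixpoint is reached."""
    s = list(schedule)
    changed = True
    while changed:
        changed = False
        for i in range(len(s) - 1):
            if independent(s[i], s[i + 1]) and s[i + 1] < s[i]:
                s[i], s[i + 1] = s[i + 1], s[i]
                changed = True
    return tuple(s)

# Two processes sending on distinct mailboxes: both interleavings are equivalent,
# so a reduced exploration visits only one of them.
p1 = [("P1", "mbox_a")]
p2 = [("P2", "mbox_b")]
all_scheds = list(interleavings([p1, p2]))
reduced = {trace_key(s) for s in all_scheds}
print(len(all_scheds), len(reduced))  # 2 interleavings, 1 trace class
```

A real DPOR implementation computes these reductions on the fly during a depth-first exploration, using the independence relation to decide where backtracking points are needed, rather than enumerating all interleavings first.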
|
292 |
Formalizing and Enforcing Purpose Restrictions
Tschantz, Michael Carl 09 May 2012 (has links)
Privacy policies often place restrictions on the purposes for which a governed entity may use personal information. For example, regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require that hospital employees use medical information only for certain purposes, such as treatment, and not for others, such as gossip. Thus, using formal or automated methods for enforcing privacy policies requires a semantics of purpose restrictions to determine whether an action is for a purpose. We provide such a semantics using a formalism based on planning. We model planning using a modified version of Markov Decision Processes (MDPs), which excludes redundant actions under a formal definition of redundancy. We argue that an action is for a purpose if and only if the action is part of a plan for optimizing the satisfaction of that purpose under the MDP model. We use this formalization to define when a sequence of actions is only for, or not for, a purpose. This semantics enables us to create and implement an algorithm for automating auditing, and to formally describe and rigorously compare previous enforcement methods. We extend this formalization to Partially Observable Markov Decision Processes (POMDPs) to answer when information is used for a purpose. To validate our semantics, we provide an example application and conduct a survey to compare our semantics to how people commonly understand the word “purpose”.
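The planning-based semantics can be made concrete with a hedged toy sketch (the states, actions, rewards, and the `is_for_purpose` helper below are all invented for illustration; the thesis's actual MDP model is far richer): an action counts as "for" a purpose exactly when it belongs to a plan that optimizes the purpose's satisfaction, which value iteration can decide for a small MDP.

```python
# Hypothetical tiny MDP: deterministic transitions, reward = purpose satisfaction.
states = ["record", "done"]
actions = {"record": ["use_for_treatment", "use_for_gossip"], "done": []}
transition = {("record", "use_for_treatment"): "done",
              ("record", "use_for_gossip"): "done"}
reward = {("record", "use_for_treatment"): 1.0,   # advances the purpose
          ("record", "use_for_gossip"): 0.0}      # does not

def optimal_actions(gamma=0.9, iters=50):
    """Value iteration, then keep the actions achieving the optimal value."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max((reward[(s, a)] + gamma * V[transition[(s, a)]]
                     for a in actions[s]), default=0.0)
             for s in states}
    return {s: [a for a in actions[s]
                if abs(reward[(s, a)] + gamma * V[transition[(s, a)]] - V[s]) < 1e-9]
            for s in states}

def is_for_purpose(state, action):
    """An action is 'for' the purpose iff it is part of an optimal plan."""
    return action in optimal_actions()[state]

print(is_for_purpose("record", "use_for_treatment"))  # True
print(is_for_purpose("record", "use_for_gossip"))     # False
```

The thesis's modification excluding redundant actions matters because a plain MDP can tolerate wasted steps; the sketch sidesteps this by making every action terminate the episode.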
|
293 |
Understanding repeated actions: Examining factors beyond anxiety in the persistence of compulsions
Bucarelli, Bianca 28 January 2014 (has links)
Two decades of research on obsessive-compulsive disorder (OCD) have helped us develop a strong understanding of why obsessions are often followed by the performance of a compulsive act. What we have understood less well is why that act is repeated, even though repetition often results in an increase, rather than a decrease, in discomfort. Emergent research on compulsive checking implicates a number of beliefs—including perceived responsibility, perceived harm, need for certainty, and beliefs about one’s memory—that may influence behavioural parameters (e.g., check duration) of checking episodes. Furthermore, it has been suggested that compulsive checking may recur in part because of a self-perpetuating mechanism in which checking has paradoxical effects on these beliefs. Finally, some researchers have proposed that attentional focus (e.g., focus on threat) during checking may be related to these paradoxical outcomes. At present, these ideas are mostly speculative, in part because there have been so few detailed studies of the actual phenomenology of compulsive rituals. The purpose of the present research was to gather phenomenological data on compulsions as performed by a clinical sample under ecologically valid conditions.
Study 1 extended emergent research suggesting that compulsions may persist because the act of checking has a number of ironic effects on beliefs. Individuals with a diagnosis of obsessive-compulsive disorder (OCD) and anxious controls (AC) completed a naturalistic stove task in our laboratory kitchen. Participants were fitted with portable eyetracking equipment and left on their own to boil a kettle, turn the stove off, and check to ensure that the stove was safe before leaving the kitchen. Surrounding the stove were household items that were either “threatening” (e.g., matches) or “non-threatening” (e.g., mugs). Ratings of mood, responsibility, harm (severity, probability), and memory confidence were taken pre- and post-task, and the portable eyetracker was used to monitor attention throughout the stove task. We examined the relations between behavioural indices (check duration, attentional focus) and pre- and post-task ratings of responsibility, perceived harm, mood, and memory confidence. Although we found that OCD (as compared to AC) participants took significantly longer to leave the kitchen after using the stove, we found no evidence that stronger pre-task ratings of responsibility, perceived harm, or memory confidence were associated with longer check duration. However, we found some evidence of an ironic effect whereby greater check duration was associated with greater perceived harm and decreased certainty about having properly ensured the stove was off. Of note, these ironic effects were not unique to participants with OCD, but were also observed in the AC group. With respect to the eyetracking data, we found minimal evidence linking threat fixations and beliefs in participants with OCD. In contrast, a number of interesting relations emerged in the eyetracking data of our anxious control participants.
For AC participants, a greater proportion of time spent looking at the stove was associated with greater post-task sense of responsibility for preventing harm, greater post-task harm estimates, decreased certainty (about having ensured the stove was off), and decreased confidence in memory for the task.
In Study 2, individuals with a diagnosis of OCD completed a structured diary of their compulsions as they occurred naturally over a three-day period. Participants recorded the circumstances leading to each compulsion and reported on the acts involved in the compulsive ritual, the duration and repetitiveness of the ritual, and the criteria used to determine completeness of the ritual. The findings of this study suggest that unsuccessful compulsions (i.e., compulsions in which certainty was not achieved) were associated with a longer duration (trend), more repetitions, and a higher standard of evidence, and offered little in the way of distress reduction. These findings are discussed within the theoretical context of the cognitive-behavioural model of obsessive-compulsive disorder, and clinical implications are offered.
|
294 |
Evaluation of Error-Detection Techniques in Concurrent Programs
Frati, Fernando Emmanuel 24 June 2014 (has links)
A fundamental characteristic of software systems is that they are built from the outset in the knowledge that they will have to incorporate changes throughout their life cycle. Every software engineering textbook agrees that systems are evolutionary. Even when estimating the effort to be invested in a software project, roughly 20% is considered to go to development and 80% to maintenance (Pfleeger & Atlee, 2009). Ian Sommerville estimates that 17% of maintenance effort is spent locating and removing program defects (Sommerville, 2006). Producing error-free programs is therefore one of the main goals a developer sets (or should set) for any software project.
At the same time, limits on integration imposed by physical factors such as temperature and power consumption have led to the integration of multiple computing units on a single chip, giving rise to multicore processors. Obtaining maximum efficiency from these architectures requires the development of concurrent programs (Grama, Gupta, Karypis, & Kumar, 2003). Unlike a sequential program, a concurrent program has multiple threads of execution accessing shared data. The order in which these memory accesses occur can vary between runs, making errors harder to detect and correct.
In high-performance computing, where application run times can range from a couple of hours to several days, an error that goes undetected during development becomes all the more serious. It is therefore essential to have tools that help the programmer verify concurrent algorithms, and to develop robust technology for tolerating undetected errors. In this context, the efficiency of monitored programs is compromised by the overhead introduced by the monitoring process.
This work is part of the author's doctoral research on the topic "Software for architectures based on multicore processors: automatic detection of concurrency errors". As such, its contribution is a survey of the techniques and methods currently used in the scientific community to detect and correct programming errors in concurrent programs.
The following sections introduce the process of detecting, locating, and correcting software errors in sequential programs, and explain the complications introduced by concurrent programs. Chapter 2 covers the different errors that can manifest in concurrent programs.
Chapter 3 summarizes prior work on techniques for detecting and correcting concurrency errors, and justifies the choice of atomicity violations as the most general class of error. Chapter 4 explains the characteristics of an atomicity-violation detection algorithm and gives details of its implementation. Chapter 5 describes the experimental platform and the methodology employed. Chapter 6 presents the results of the experimental work. Finally, conclusions are drawn and future lines of research are proposed.
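As a rough illustration of what an atomicity-violation detector looks for (this sketch follows the general access-interleaving idea; the thread names, trace format, and pattern table are illustrative, not the thesis's specific algorithm): two consecutive accesses by one thread to a shared variable, with a remote access interleaved between them, form a violation when the three-access pattern is unserializable.

```python
# Unserializable three-access patterns over one variable:
# (first local op, interleaved remote op, second local op)
UNSERIALIZABLE = {("R", "W", "R"), ("W", "W", "R"),
                  ("R", "W", "W"), ("W", "R", "W")}

def atomicity_violations(trace):
    """Scan a trace of (thread, op, var) events. For each pair of consecutive
    accesses by one thread to a variable, flag any interleaved remote access
    that makes the triple unserializable."""
    violations = []
    for i, (t1, op1, v) in enumerate(trace):
        # Find this thread's next access to the same variable.
        for j in range(i + 1, len(trace)):
            t2, op2, v2 = trace[j]
            if t2 == t1 and v2 == v:
                # Check remote accesses interleaved between i and j.
                for k in range(i + 1, j):
                    tr, opr, vr = trace[k]
                    if tr != t1 and vr == v and (op1, opr, op2) in UNSERIALIZABLE:
                        violations.append((i, k, j))
                break
    return violations

# Classic lost-update pattern: T1 reads x, T2 writes x, T1 writes x.
trace = [("T1", "R", "x"), ("T2", "W", "x"), ("T1", "W", "x")]
print(atomicity_violations(trace))  # [(0, 1, 2)]
```

Real detectors work on instrumented binaries rather than explicit traces, which is exactly where the monitoring overhead discussed above comes from.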
|
295 |
Mixed Methods Analysis of Injury in Youth Ice Hockey: Putting Injury into Context
Davey, Matthew 28 April 2014 (has links)
This thesis discusses the results of a two-year, 90-game study of the role that violence and aggression play in competitive minor hockey and their role as a mechanism for injury. The second objective of this thesis was to determine the contextual factors that lead to injury on the ice. Using a mixed methods approach, the study followed three minor hockey teams from the Ottawa-Gatineau region over two sporting seasons. The study found that players are not being injured due to aggressive or violent play; rather, they are being hurt within the rules of the game. The contextual factors shown to lead to injury included: (1) body-checking, (2) time of the game, (3) player’s body mass, (4) position played, and (5) legal plays. Injuries were also broken down by anatomical site (head/neck, upper body, and lower body); the upper body was affected by injury most often.
|
296 |
Verification of Security Properties Using Formal Techniques
Al-Shadly, Saleh 09 April 2013 (has links)
No description available.
|
297 |
Detection of Feature Interactions in Automotive Active Safety Features
Juarez Dominguez, Alma L. January 2012 (has links)
With the introduction of software into cars, many functions are now realized with reduced cost, weight, and energy. The development of these software systems is done in a distributed manner, independently by suppliers, following the traditional approach of the automotive industry, while the car maker takes care of the integration. However, the integration can lead to unexpected and unintended interactions among software systems, a phenomenon known as feature interaction. This dissertation addresses the problem of automatically detecting feature interactions in automotive active safety features.
Active safety features control the vehicle's motion control systems independently of the driver's request, with the intention of increasing passengers' safety (e.g., by applying hard braking in the case of an identified imminent collision), but their unintended interactions could instead endanger the passengers (e.g., simultaneous throttle increase and sharp narrow steering, causing the vehicle to roll over).
My method decomposes the problem into three parts: (I) creation of a definition of feature interactions based on the set of actuators and domain expert knowledge; (II) translation of automotive active safety features designed using a subset of Matlab's Stateflow into the input language of the model checker SMV; (III) analysis using model checking at design time to detect a representation of all feature interactions, based on partitioning the counterexamples into equivalence classes.
The key novel characteristic of my work is exploiting domain-specific information about the feature interaction problem and the structure of the model to produce a method that finds a representation of all different feature interactions for automotive active safety features at design time.
My method is validated by a case study with a set of non-proprietary automotive feature design models that I created. The method generates a set of counterexamples that represents the whole set of feature interactions in the case study. By showing only a set of representative feature interaction cases, the information is concise and useful for feature designers. Moreover, because these results are generated from feature models designed in Matlab's Stateflow and translated into SMV models, feature designers can trace the counterexamples generated by SMV and understand the results in terms of the Stateflow model. I believe that my results and techniques are relevant to the solution of the feature interaction problem in other cyber-physical systems, and have a direct impact on assessing the safety of automotive systems.
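A minimal sketch of the actuator-based interaction definition in part (I) above (the feature names, commands, and conflict rule are invented for illustration; the dissertation's definition is built from domain-expert knowledge and checked symbolically with SMV, not by runtime inspection): two features interact when, in the same step, they issue conflicting commands to a shared actuator.

```python
def detect_interactions(feature_outputs):
    """feature_outputs: {feature_name: {actuator: command}} for one step.
    Returns actuators receiving more than one distinct command."""
    commands = {}  # actuator -> set of (feature, command)
    for feat, outs in feature_outputs.items():
        for act, cmd in outs.items():
            commands.setdefault(act, set()).add((feat, cmd))
    return {act: pairs for act, pairs in commands.items()
            if len({cmd for _, cmd in pairs}) > 1}

# Hypothetical step: cruise control and collision avoidance both drive throttle.
step = {"cruise_control":      {"throttle": "increase"},
        "collision_avoidance": {"throttle": "cut", "brake": "apply"}}
conflicts = detect_interactions(step)
print(sorted(conflicts))  # ['throttle']
```

A model checker generalizes this check from one observed step to all reachable states, which is what makes the design-time guarantee possible.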
|
298 |
Monitoring And Checking Of Discrete Event Simulations
Ulu, Buket 01 January 2003 (links) (PDF)
Discrete event simulation is a widely used technique for decision support. For critical decision-making problems, the results of the simulation must be reliable; therefore, much research has concentrated on the verification and validation of simulations. In this thesis, we apply assertion checking, a well-known dynamic verification technique, as a validation technique. Our aim is to validate particular runs of the simulation model, rather than the model itself.
As a case study, the operations of a manufacturing cell have been simulated. The cell, the METUCIM Laboratory at the Mechanical Engineering Department of METU, has a robot and a conveyor to carry materials, two machines to manufacture items, and a quality-control station to measure the correctness of the manufactured items.
This simulation is monitored and checked using the Monitoring and Checking (MaC) tool, a prototype developed at the University of Pennsylvania. The separation of low-level implementation details (pertaining to the code) from the high-level requirement specifications (pertaining to the simuland) keeps the monitoring and checking of simulations at an abstract level.
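The MaC-style separation of abstract requirements from implementation events can be sketched as follows (the event names and the inspection requirement are invented for illustration; MaC itself uses dedicated specification languages and an event recognizer, not Python dictionaries): the checker sees only abstract events, never simulator code, and the requirement is a small state machine over those events.

```python
def monitor(events, requirement):
    """Replay abstract simulation events through a requirement checker.
    The requirement is stated over events, not over the simulator's code."""
    state = requirement["init"]()
    for ev in events:
        state = requirement["step"](state, ev)
    return requirement["ok"](state)

# Hypothetical requirement: every machined item must be inspected by the end.
req = {
    "init": lambda: set(),                      # items awaiting inspection
    "step": lambda pending, ev: (
        pending | {ev[1]} if ev[0] == "machined" else
        pending - {ev[1]} if ev[0] == "inspected" else
        pending),
    "ok":   lambda pending: len(pending) == 0,  # nothing left uninspected
}

good = [("machined", "item1"), ("inspected", "item1")]
bad  = [("machined", "item2")]
print(monitor(good, req), monitor(bad, req))  # True False
```

Because the checker only consumes events, the same requirement can validate runs of the simulation or, in principle, traces of the real manufacturing cell.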
|
299 |
Development Of A Library For Automated Verification Of UML Models
Celik, Makbule Filiz 01 April 2006 (links) (PDF)
Where object-oriented software development is concerned, software designs are mostly modeled as Unified Modeling Language (UML) diagrams. Popular industry topics such as Model Driven Development, automatic generation of test cases in the early phases of software development, and automated generation of code from design models all build on UML designs, and all share a common need: the design must be accurate with respect to the meta-model, so that problems do not surface in later phases of the development process. Full checking of design models is therefore necessary to detect design inconsistencies.
This thesis presents an approach for automated verification of UML design models and explains the implementation of the library called UMLChecker.
|
300 |
Model checking compositional Markov systems
Johr, Sven January 2007 (has links)
Also published as: Saarbrücken, University, doctoral dissertation, 2007.
|