  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Détermination de propriétés de flot de données pour améliorer les estimations de temps d'exécution pire-cas / Lookup of data flow properties to improve worst-case execution time estimations

Ruiz, Jordy 21 December 2017 (has links)
The search for an upper bound of the execution time of a program is an essential part of the verification of real-time critical systems. The execution times of the programs of such systems generally vary, and it is difficult, or even impossible, to predict the full range of possible times. Instead, it is better to look for an approximation of the Worst-Case Execution Time (WCET). A crucial requirement of this estimate is that it must be safe, that is, it must be guaranteed to be above the real WCET. Because we are looking to prove that the system in question terminates in a reasonable time, an overapproximation is the only acceptable form of approximation. Guaranteeing such a safety property cannot sensibly be done without static analysis, as a result based on a battery of tests cannot be safe without exhaustive handling of the execution cases. Furthermore, in the absence of a certified compiler (and a technique for the safe transfer of properties to the binary), the extraction of properties must be done directly on binary code to warrant their soundness. However, this approximation comes at a cost: significant pessimism, the gap between the estimated WCET and the real WCET, leads to superfluous extra hardware costs in order for the system to respect the imposed timing requirements. It is therefore important, while maintaining the safety of the WCET estimation, to improve its precision by reducing this gap so that it is low enough not to entail immoderate costs. A major cause of overestimation is the inclusion of semantically impossible paths, called infeasible paths, in the WCET computation. This is due to the Implicit Path Enumeration Technique (IPET), which reasons over a superset of the possible execution paths. When the Worst-Case Execution Path (WCEP), corresponding to the estimated WCET, is infeasible, the precision of that estimation is negatively affected. To counter this loss of precision, this thesis proposes an infeasible-path detection technique that improves the precision of static analyses (notably WCET estimation) by informing them of the infeasibility of some paths of the program. This information is passed as data flow properties, formatted in the portable FFX annotation language, allowing the results of our infeasible-path analysis to be communicated to other analyses. The methods presented in this thesis are included in the OTAWA framework, developed within the TRACES team at IRIT. They themselves use approximations to represent the possible machine states at different points of the program.
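The effect of infeasible-path information on a WCET bound can be illustrated with a toy sketch (block times, the tiny control-flow graph, and the infeasible pair are all invented for the example; the thesis itself works on binaries with IPET/ILP):

```python
# Toy CFG: two successive branches, each picking one block.
# IPET-style estimation reasons over a superset of paths; declaring
# one combination infeasible tightens the bound while staying safe.
from itertools import product

times = {"A": 10, "B": 3, "C": 8, "D": 20}  # invented block times
paths = [(b1, b2) for b1, b2 in product(["A", "B"], ["C", "D"])]

def wcet(paths, infeasible=()):
    # Drop paths known to be semantically impossible, then take the max.
    banned = [set(p) for p in infeasible]
    feasible = [p for p in paths if set(p) not in banned]
    return max(sum(times[b] for b in p) for p in feasible)

print(wcet(paths))                           # 30: worst path is (A, D)
print(wcet(paths, infeasible=[("A", "D")]))  # 23: tighter, still an upper bound
```

If the estimated WCEP (here, A followed by D) is infeasible, excluding it lowers the bound without compromising safety, which is exactly the precision gain the thesis targets.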
42

Collaboration de techniques formelles pour la vérification de propriétés de sûreté sur des systèmes de transition / Collaboration of formal techniques for the verification of safety properties over transition systems

Champion, Adrien 07 January 2014 (has links)
This work studies the verification of software components in critical avionics embedded systems. As the failure of such systems can have catastrophic consequences, it is mandatory to make sure they are consistent with their specification. Formal verification consists in proving that a system respects its specification if it does, or producing a counterexample if it does not. Current methods are unable to handle the verification problems stemming from realistic systems. Discovering additional information (invariants) on the system can however restrict the search space enough to strengthen the proof objective: the discovered information allows a conclusion to be reached "easily". We define a parallel architecture allowing invariant discovery methods to collaborate around a k-induction engine. In this context we propose HullQe, a new heuristic for the generation of potential invariants that combines an iterated preimage calculus by quantifier elimination with convex hull computations. We show that HullQe is able to automatically strengthen proof objectives corresponding to safety properties on design patterns widespread in avionics. To the best of our knowledge, these problems elude current techniques. We also detail our improvements to the quantifier elimination algorithm published by David Monniaux in 2008, so that it scales to computing preimages on our systems. Our formal framework Stuff is an implementation of the proposed parallel architecture, in which we implemented not only HullQe, but also a template-based invariant discovery technique and a generalisation of Property Directed Reachability (PDR) to linear real arithmetic and integer octagons.
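The k-induction principle the engine is built around can be shown on a tiny finite-state system. The thesis uses SMT solvers over infinite-state systems; this sketch instead enumerates an 8-state toy system (transition, property, and state space are all invented) to show the base case and the strengthening effect of increasing k:

```python
# Brute-force k-induction on a finite-state toy system.
# Base case: the property holds on all prefixes of length <= k from init.
# Step case: no path of k property-satisfying states is followed by a violation.
def k_induction(init, step, prop, max_k=10, states=range(8)):
    for k in range(1, max_k + 1):
        # Base case: bounded reachability check up to depth k.
        frontier, seen = set(init), set(init)
        for _ in range(k + 1):
            if any(not prop(s) for s in frontier):
                return ("cex", k)
            new = {step(s) for s in frontier} - seen
            seen |= new
            frontier = new
        # Step case: extend every P-path to length k, then step once more.
        paths = [[s] for s in states if prop(s)]
        for _ in range(k - 1):
            paths = [p + [step(p[-1])] for p in paths if prop(step(p[-1]))]
        if all(prop(step(p[-1])) for p in paths):
            return ("proved", k)
    return ("unknown", max_k)

# x' = (x + 2) mod 8 from x = 0; "x != 5" is not 1-inductive but
# becomes provable once k is large enough.
print(k_induction([0], lambda x: (x + 2) % 8, lambda x: x != 5))
```

Invariant discovery methods like the ones described above aim to strengthen the property so that a small k suffices; here, increasing k plays that role for a system too small to need invariants.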
43

Fracionamento químico de fósforo em testemunho de sedimento do Reservatório Macela, Itabaiana-Sergipe / Chemical fractionation of phosphorus in a sediment core from the Macela Reservoir, Itabaiana, Sergipe

Canuto, Fabiana Alves Bezerra 26 July 2013 (has links)
This work concerns the fractionation of phosphorus present in sediments from the Macela Reservoir, located in the city of Itabaiana (Sergipe State, Brazil). Two sediment cores were obtained, each to a depth of approximately 30 cm, which were divided into 5 cm sections. The analytical method employed a Standards, Measurements, and Testing (SMT) protocol, in which the phosphorus was split into the fractions: total (PT), inorganic (PI), organic (PO), non-apatite (PNAP), and apatite (PAP). The technique was validated in terms of the limits of detection (DL) and quantification (QL) for each fraction. No significant contamination was observed. The accuracy was in the range 99-101%, and the relative standard deviation (RSD) was better than 2%. The measured phosphorus concentrations were in the ranges 441.60-1335.47 µg g-1 (PT), 409.54-1209.86 µg g-1 (PI), and 21.35-195.87 µg g-1 (PO). For the inorganic forms, the concentrations of PNAP and PAP were in the ranges 106.82-541.09 µg g-1 and 238.56-698.01 µg g-1, respectively. The concentrations of the phosphorus fractions were highest in Core 2. The contents of Fe, Al, Ca, and Corg were 3.45-4.95%, 4.85-7.73%, 1.02-1.89%, and 1.88-8.55%, respectively. Correlation analysis using the Spearman test identified iron and aluminum as the most important controlling factors for P in the sediments studied. The application of principal components analysis (PCA) and hierarchical clustering analysis (HCA) to the measured parameters divided the sediment samples into two groups, according to their similarities. This was also confirmed using analysis of variance (ANOVA, p<0.05).
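A routine sanity check in this kind of sequential fractionation is that the fractions sum to their parent pool (PT ≈ PI + PO, and PI ≈ PNAP + PAP). A sketch with hypothetical section values chosen to fall within the reported ranges (not the thesis data):

```python
# Hypothetical per-section concentrations in µg g-1, consistent with the
# reported ranges. In the SMT protocol the mass balances should close:
# PT ~ PI + PO and PI ~ PNAP + PAP.
sections = [
    {"PT": 1335.5, "PI": 1209.9, "PO": 121.4, "PNAP": 541.1, "PAP": 668.8},
    {"PT": 441.6,  "PI": 409.5,  "PO": 30.2,  "PNAP": 106.8, "PAP": 302.7},
]

def balance_ok(s, tol=0.05):
    """Check both mass balances to within a relative tolerance."""
    return (abs(s["PT"] - (s["PI"] + s["PO"])) / s["PT"] < tol and
            abs(s["PI"] - (s["PNAP"] + s["PAP"])) / s["PI"] < tol)

print([balance_ok(s) for s in sections])  # [True, True]
```

A section failing this check would point to a loss or contamination step in the extraction sequence, which is what the DL/QL validation above guards against.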
44

Konstrukce dopravníkové nástavby mobilního robotu MiR / Construction of conveyor superstructure of mobile robot MiR

Adámek, Tomáš January 2021 (has links)
The subject of this diploma thesis is the design of a conveyor superstructure for the collaborative mobile robot MiR. The first part is a theoretical survey of mobile collaborative technologies and of information related to Mobile Industrial Robots. It is followed by a practical part built on the previously acquired knowledge. The key issue is the logistics transport of PCB magazines in the SMT industry. Two structural design variants of the superstructure arrangement for the transport of two magazines are created, and the most suitable variant is selected using a basic multi-criteria method. The selected conveyor top-module alternative is then subjected to design calculations. To give a comprehensive overview of prototype production, the final section contains drawing documentation with relevant comments, an economic cost estimate, and a risk analysis.
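A basic multi-criteria selection of the kind mentioned above is typically a weighted-sum scoring of variants. The criteria, weights, variant names, and scores below are invented for illustration; the thesis defines its own:

```python
# Weighted-sum multi-criteria selection between design variants.
# Higher score = better; weights sum to 1.
weights = {"cost": 0.3, "capacity": 0.4, "complexity": 0.3}
variants = {
    "gravity_conveyor": {"cost": 8, "capacity": 6, "complexity": 9},
    "powered_conveyor": {"cost": 5, "capacity": 9, "complexity": 6},
}

def score(name):
    return sum(weights[c] * variants[name][c] for c in weights)

best = max(variants, key=score)
print(best, round(score(best), 2))  # gravity_conveyor 7.5
```

The method's simplicity is also its main caveat: the ranking can flip with small changes in weights, so a sensitivity check on the weights is usually worthwhile before committing to a variant.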
45

Role of corticospinal influences in post-stroke spasticity

Hernandez, Alejandro 06 1900 (has links)
In post-stroke patients, spasticity, weakness and abnormal coactivation result from limitations in the range of regulation of stretch reflex thresholds. We investigated whether deficits in residual corticospinal influences contribute to the limited regulation of those thresholds and, as a result, to spasticity in post-stroke subjects. Single-pulse transcranial magnetic stimulation (TMS) was applied to the site of the motor cortex projecting to motoneurons of elbow flexors and extensors. Responses to TMS (motor evoked potentials, or MEPs) were recorded at a flexion and an extension position of the elbow joint. To exclude the influence of background motoneuronal excitability on the evaluation of corticospinal influences, MEPs were elicited during the electromyographic (EMG) silent period produced by a brief muscle shortening just prior to TMS. In control subjects, corticospinal facilitation of flexor motoneurons was usually larger, whereas that of extensor motoneurons was smaller, during actively maintained flexion than when the extension position was maintained (a reciprocal pattern of position-related changes in flexor and extensor MEPs). In most post-stroke subjects with high clinical spasticity scores, corticospinal facilitation of both flexor and extensor motoneurons was greater at the actively established flexion position (a co-facilitation pattern). The results show that spasticity is associated with substantial changes in the corticospinal influences on flexor and extensor motoneurons. Corticospinal co-facilitation of the two groups of motoneurons may be related to the necessity of overcoming the resistance of spastic muscles during active changes in the elbow joint angle.
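The reciprocal-versus-co-facilitation distinction described above amounts to comparing the direction of position-related MEP change in each muscle group. A sketch with hypothetical amplitudes (mV values invented; the thesis reports its own measurements):

```python
# Classify position-dependent MEP modulation. Each argument is a dict of
# MEP amplitudes (mV) recorded at the flexion and extension positions.
def pattern(flexor_mep, extensor_mep):
    flex_up = flexor_mep["flexion"] > flexor_mep["extension"]
    ext_up = extensor_mep["flexion"] > extensor_mep["extension"]
    if flex_up and ext_up:
        return "co-facilitation"   # both groups facilitated in flexion
    if flex_up != ext_up:
        return "reciprocal"        # opposite position-related changes
    return "co-depression"

# Hypothetical control-like and spastic-like amplitude sets:
control = pattern({"flexion": 1.2, "extension": 0.6},
                  {"flexion": 0.5, "extension": 1.1})
spastic = pattern({"flexion": 1.4, "extension": 0.7},
                  {"flexion": 0.9, "extension": 0.4})
print(control, spastic)  # reciprocal co-facilitation
```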
46

Modelling and simulation of paradigms for printed circuit board assembly to support the UK's competency in high reliability electronics

Wilson, Antony R. January 2012 (has links)
The fundamental requirement of the research reported in this thesis is the provision of physical models to enable model-based simulation of mainstream printed circuit assembly (PCA) process discrete events, for use within software tools (developed or under development) that codify cause-and-effect knowledge for product and process design optimisation. To support a national competitive advantage in high-reliability electronics, UK-based producers of aircraft electronic subsystems require advanced simulation tools that offer model-based guidance. In turn, maximising manufacturability and minimising uncontrolled rework must enhance in-service sustainability for 'power-by-the-hour' commercial aircraft operation business models.
47

An Analyzer for Message Passing Programs

Huang, Yu 01 May 2016 (has links)
Asynchronous message passing systems are fast becoming a common means of communication between devices. Two problems in message passing programs are difficult to solve. The first, intended or otherwise, is message race, where a receive may match more than one send in the runtime system. This non-determinism often leads to intermittent and unexpected behavior depending on how the race is resolved. The other problem is deadlock: a situation in which each member process of a group is waiting for some member process to communicate with it, but no member is attempting to communicate with it. Detecting whether message races or deadlocks exist in a message passing program is NP-complete in either case. The difficulty of solving the two problems also comes from three factors that complicate the semantics: asynchronous communication, synchronous barriers, and buffering settings, including infinite buffering (the system can buffer messages) and zero buffering (the system has no internal buffering). To solve these problems, this research provides a novel predictive analysis that starts from a concrete execution and then predicts the behavior of other executions that arise from it. The research starts with Satisfiability Modulo Theories (SMT) based model checking, which provides a precise analysis of program behavior. Unfortunately, a precise analysis using SMT does not scale to large programs. As such, SMT-based model checking is combined with a heuristic search for witnessing program properties. The heuristic search identifies efficiently how sends may match with receives at runtime, as it initially looks for match relations in only a small search space; the space is enlarged only if the program property is not witnessed, until all possible match relations for sends and receives reflected in message non-determinism are found. This research also gives a static analysis approach that is scalable, as it does not need to analyze the full set of program behaviors; rather, it uses only polynomial-time algorithms to identify all potential deadlocks in send-receive templates, given a set of pre-defined deadlock patterns. With the predictive analysis, consisting of SMT-based model checking with heuristic search, and the static analysis, this research is able to solve the two problems above. The work in this dissertation also demonstrates that the predictive analysis is more efficient than existing tools for verifying message passing programs.
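One of the simplest deadlock patterns a polynomial-time template check can flag is a receive with no potential matching send. A toy sketch in that spirit (the program encoding and the pattern are invented; the dissertation's analysis covers many more patterns and the full matching semantics):

```python
# Flag receives for which no send could ever match: a simple static
# deadlock pattern over send-receive templates.
from collections import Counter

def potential_orphans(program):
    """program: {proc: [("send", dst) or ("recv", src_or_"ANY"), ...]}"""
    # Count sends by destination, across all processes.
    sends = Counter(dst for ops in program.values()
                    for op, dst in ops if op == "send")
    orphans = []
    for proc, ops in program.items():
        for op, src in ops:
            if op == "recv":
                if src == "ANY":
                    candidates = sends[proc]   # wildcard: any send to proc
                else:
                    candidates = sum(1 for o, d in program.get(src, [])
                                     if o == "send" and d == proc)
                if candidates == 0:
                    orphans.append((proc, src))
    return orphans

prog = {"P0": [("send", "P1"), ("recv", "ANY")],
        "P1": [("recv", "P0"), ("send", "P0"), ("recv", "P2")],
        "P2": []}                      # P2 never sends: P1's recv is orphaned
print(potential_orphans(prog))         # [('P1', 'P2')]
```

This check is deliberately over-approximate: it catches guaranteed orphans cheaply, leaving the precise race- and order-sensitive cases to the SMT-based predictive analysis.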
48

Fragments de l'arithmétique dans une combinaison de procédures de décision / Fragments of arithmetic in a combination of decision procedures

Caminha Barbosa De Oliveira, Diego 14 March 2011 (has links) (PDF)
Formal methods for software and hardware design often generate formulas that must be validated, either interactively or automatically. Among automatic tools, SMT (Satisfiability Modulo Theories) solvers are particularly well suited to discharging these proof obligations, since their input language is equational logic with symbols drawn from various useful decidable fragments, such as uninterpreted symbols, linear arithmetic, and common data structures like arrays or lists. In this thesis, we present an approach for combining decision procedures and propositional solvers within an SMT solver. This approach is based not only on the exchange of deducible equalities between decision procedures, but also on the generation of model equalities by decision procedures. It smoothly extends the classical Nelson-Oppen combination procedure into a simple framework for combining convex and non-convex theories. Second, we present an original algorithm for a fragment of arithmetic called difference logic, with details on how to implement a decision procedure based on it. Difference logic is modeled using graph theory. The deductions and consistency checks performed by the algorithm are carried out through incremental negative-cycle searches and shortest-path computations. The last part of the thesis presents an original incremental variation of the simplex method, which we use to build a decision procedure for linear arithmetic. As for difference logic, we detail the design of the decision procedure that makes it suitable for our combination framework used by SMT solvers. The methods and techniques described in this thesis have been implemented and are available in our open-source SMT solver, veriT.
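The graph-theoretic view of difference logic mentioned above can be sketched compactly: each constraint x - y <= c becomes an edge y -> x with weight c, and the conjunction is unsatisfiable exactly when the graph has a negative cycle. A non-incremental Bellman-Ford sketch (the thesis's procedure is incremental and integrated with the combination framework; this only shows the core idea):

```python
# Difference-logic satisfiability via negative-cycle detection.
# Constraint x - y <= c  =>  edge y -> x with weight c.
def satisfiable(variables, constraints):
    """constraints: list of (x, y, c) meaning x - y <= c."""
    dist = {v: 0 for v in variables}   # implicit source at distance 0
    for _ in range(len(variables)):    # |V| relaxation rounds
        changed = False
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
                changed = True
        if not changed:
            return True                # distances stabilized: consistent
    return False                       # still relaxing: negative cycle

# a - b <= 1, b - c <= -2, c - a <= -1 sums to a cycle of weight -2 < 0.
cons = [("a", "b", 1), ("b", "c", -2), ("c", "a", -1)]
print(satisfiable("abc", cons))       # False
print(satisfiable("abc", cons[:2]))   # True
```

When the system is satisfiable, the stabilized `dist` values themselves form a model, which is the kind of information the model-equality generation described above can exploit.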
49

Kartläggning av truckflöden på Primary Products : Ur ett Lean Production perspektiv / Mapping of truck flows at Primary Products: from a Lean Production perspective

Wallin, Kristina, Westblom, Maria January 2008 (has links)
The Transport division is responsible for the internal transports at Sandvik Material Technology AB. At the moment there is no clear picture of how the straddle carriers move within the Primary Products division, or whether the transport works in a satisfying way. It is important to see the whole picture of the transportation flow so that improvements can be made. The purpose of this essay is to survey and analyse the truck flow from a Lean Production perspective. The essay is delimited to the straddle carriers' transportation routes within the Primary Products division. To survey the truck flow we used a method called Walkthrough: we walked through the whole production flow and interviewed some of the employees along the way to get the necessary information. We also made an extensive literature study to gather the theories relevant to this essay. Since 2003 Sandvik Material Technology has been running a project called "ledtidsprojektet", whose purpose is to create a more efficient production. The project is based on the Japanese philosophy of Lean Production, adapted to suit Sandvik's organisation. Today most of the producing divisions within SMT work according to "ledtidsprogrammet"; unfortunately it has not yet been introduced in the Transport division. Our study of straddle-carrier usage showed that 40% of the time was spent with cargo, 35% without cargo, and 25% was remaining time, in which breaks are included. The basis of the study is a GPS transmitter placed on one of the straddle carriers. We also made our own study in which we rode along on the straddle carrier and measured the time for the different activities. Since these studies were made during a limited time, a more extensive study would be needed for statistically reliable results. On the basis of the truck flow survey, some suggestions for improvement have been identified. Today's key measurements have to be reviewed to give a fair picture of the situation. The planning of the transports also has to improve, which can be achieved through better communication with the producing divisions. For the straddle-carrier operation to become more Lean, the producing divisions first have to work according to the Lean philosophy. It is up to them to reduce stock levels and achieve a more even production.
