  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Automatic Synthesis of QDI Asynchronous Circuits

DINH DUC, Anh Vu 14 March 2003 (has links) (PDF)
Unlike synchronous circuits, asynchronous circuits operate with a local synchronization mechanism (no clock signal). For many years they have demonstrated their relevance compared with synchronous circuits thanks to their robustness, low power consumption, low noise, and modularity. However, the current lack of design methods and tools hinders their adoption. This thesis defines a design methodology for quasi-delay-insensitive (QDI) asynchronous integrated circuits. QDI circuits belong to the most robust class of asynchronous circuits, an advantageous property for future technologies.

The proposed design method supports, on the one hand, modeling in a high-level language and, on the other, the generation of circuits built from elementary logic gates and Muller gates. The method was prototyped in TAST ("TIMA Asynchronous Synthesis Tools"), an automatic design tool for asynchronous circuits. TAST is a design environment consisting mainly of a compiler and a synthesizer able to generate QDI asynchronous circuits for several target circuit templates (sequential, WCHB, PCHB, and PCFB) from high-level descriptions written in the CHP language. The synthesizer produces a gate-level VHDL description that can target either a technology specific to asynchronous design or a standard cell library (precharacterized circuits or FPGAs).
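The Muller gates mentioned above are the basic state-holding primitive of QDI circuits: a C-element copies its inputs when they agree and holds its previous output when they disagree. A minimal behavioral sketch (an illustrative model only, not output of the TAST flow):

```python
def c_element(a: int, b: int, prev: int) -> int:
    """Muller C-element: output follows the inputs when they agree,
    otherwise it holds its previous value."""
    if a == b:
        return a
    return prev

# The output rises only after both inputs are 1, and falls only
# after both return to 0 - the hold behaviour QDI handshakes rely on.
state = 0
trace = []
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    state = c_element(a, b, state)
    trace.append(state)
print(trace)  # [0, 0, 1, 1, 0]
```

This hysteresis is what lets QDI handshake protocols acknowledge every transition regardless of gate and wire delays.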
192

Design and Evaluation of a Single Instruction Processor / Design och utveckling av en eninstruktions processor

Mu, Rongzeng January 2003 (has links)
This thesis describes a new path for DSP processor design through an example: the design of an FFT processor. It is an innovative concept for DSP processor design developed by the Electronic Systems Division of the Department of Electrical Engineering at Linköping University.

The project described in this thesis is the step-by-step design of a Sande-Tukey FFT processor, going through all steps from the simplest MATLAB specification to the final synthesizable VHDL specification. The steps should be as small as possible in order to avoid errors, and MATLAB should be used as far as possible.
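The Sande-Tukey algorithm is the decimation-in-frequency form of the radix-2 FFT: each stage combines the two halves of the input with a butterfly, and the twiddled differences feed the odd-indexed outputs. A minimal recursive sketch (the thesis itself works in MATLAB and VHDL; this Python version only illustrates the dataflow):

```python
import cmath

def fft_dif(x):
    """Radix-2 decimation-in-frequency (Sande-Tukey) FFT.
    The length of x must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    # Butterfly stage: sums feed the even-indexed outputs,
    # twiddled differences feed the odd-indexed outputs.
    a = [x[i] + x[i + half] for i in range(half)]
    b = [(x[i] - x[i + half]) * cmath.exp(-2j * cmath.pi * i / n)
         for i in range(half)]
    even, odd = fft_dif(a), fft_dif(b)
    out = [0j] * n
    out[0::2], out[1::2] = even, odd
    return out

# Agrees with the direct DFT of the same input
print(fft_dif([1, 2, 3, 4]))
```

In hardware, this structure maps naturally onto a pipeline of butterfly stages, which is what makes it attractive for a dedicated FFT processor.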
193

Analysis of Intent Specification and System Upgrade Traceability / Analys av Intent Specification och spårbarhet vid systemuppgradering

Elmqvist, Jonas January 2003 (has links)
The impact of computer programs in safety-critical systems has increased in the past decades, and computer-based systems can now be found in a wide range of applications.

One approach to developing dependable systems is to document all design rationale that affects safety using Intent Specifications. A more recent approach to developing complex systems efficiently and in an upgradeable manner is Component-Based System Engineering (CBSE). In this thesis, the two approaches are combined in the development and upgrade of a Remote Robot Control Unit using the tool SpecTRM.

The case study showed that Intent Specifications and CBSE can be successfully combined, and that traceability is of great importance during system upgrades. Since SpecTRM does not support formal verification of safety properties in design models, the SpecTRM-RL models were translated to Esterel code. Further analysis showed that functional verification can be efficient and useful with the automatic model-checking tool available in Esterel Studio. This work also proposes a practical guideline for system upgrades in component-based systems, presented as the Sigma model.

The work also revealed some shortcomings. First, the tool SpecTRM is at an early development stage and not yet mature for industrial use. Second, neither SpecTRM-RL nor Esterel was expressive enough for some of the numerical computations and data-exchange structures needed for navigation in the system. Finally, the verifier could not prove any data properties, since valued signals are abstracted into pure signals during verification in Esterel Studio.
194

Breaking the Customer Code: A Model to Translate Customer Expectations into Specification Limits

Gregorio, Ruben January 2010 (has links)
Today, firms compete with services rather than goods, and large service organizations are beginning to use Six Sigma as a continuous-improvement tool. An important part of the Six Sigma methodology is calculating the number of defects in the process, i.e. points outside the specification limits. Unlike the quality of goods, which can be measured objectively by counting defects, in services the setting of specification limits is complicated because quality depends on the use and expectations of different customers. As Six Sigma was originally created for manufacturing, this crucial fact is not addressed in the Six Sigma roadmap Define-Measure-Analyze-Improve-Control (DMAIC).

The aim of this thesis is to develop a new model to help the Service Division of Siemens Industrial Turbomachinery AB set specification limits according to customer expectations.

A review of the relevant literature is used to develop a new integrated model combining ideas from the Kano model, SERVQUAL, the Taguchi loss function, Importance-Performance Analysis (IPA), and a new model, the "Trade-Off Importance". A survey was carried out with 18 external customers and internal stakeholders.

The model has demonstrated its robustness and credibility for setting specification limits; it is also a powerful tool for setting strategic directions and for measuring service quality. As far as we know, this thesis is the first attempt to create a roadmap for setting specification limits in services, so researchers should find that the proposed model fills a research gap. From a managerial standpoint, the practical benefits at Siemens Industrial Turbomachinery AB suggest a new way of communicating with customers.
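Once specification limits are set, the defect count the abstract refers to follows directly from the assumed process distribution. A small sketch, assuming a normally distributed characteristic (the mean, spread, and limits below are hypothetical, not Siemens data):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def dpmo(mean, std, lsl, usl):
    """Defects per million opportunities: probability mass falling
    outside the lower/upper specification limits, scaled to 1e6."""
    p_defect = norm_cdf((lsl - mean) / std) + (1.0 - norm_cdf((usl - mean) / std))
    return p_defect * 1_000_000

# A centred process with a 3-sigma margin to each limit
print(round(dpmo(mean=50, std=1, lsl=47, usl=53)))  # 2700
```

This is why the placement of the limits matters so much: moving either limit by one standard deviation changes the reported defect rate by orders of magnitude.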
195

Techniques for Automatic Generation of Tests from Programs and Specifications

Edvardsson, Jon January 2006 (has links)
Software testing is complex and time consuming. One way to reduce the effort associated with testing is to generate test data automatically. This thesis is divided into three parts.

In the first part, a mixed-integer constraint solver developed by Gupta et al. is studied. The solver, referred to as the Unified Numerical Approach (UNA), is an important part of their generator: it is responsible for solving the equation systems that correspond to the program path currently under test. This thesis shows that, in contrast to traditional optimization methods, the UNA is not bounded by the size of the solved equation system; instead, its behavior depends on how the system is composed. Even very simple systems with a single variable can easily require more than a thousand iterations. It is also shown that the UNA is not complete, that is, it does not always find a mixed-integer solution when one exists. A better approach is to use a traditional optimization method, such as the simplex method in combination with branch-and-bound and/or a cutting-plane algorithm, as the constraint solver.

The second part explores a specification-based approach for generating tests developed by Meudec. Tests are generated by partitioning the specification input domain into a set of subdomains using a rule-based automatic partitioning strategy. An important step of Meudec's method is to reduce the number of generated subdomains and find a minimal partition. This thesis shows that Meudec's minimal partition algorithm is incorrect, and develops two new efficient alternative algorithms. An algorithm for finding the upper and lower bounds on the number of subdomains in a partition is also presented.

Finally, the third part studies two different designs of automatic testing tools: one that uses a specification as an oracle, and one that uses a reference program. The fault-detection effectiveness of the tools is evaluated using both randomly and systematically generated inputs.
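The branch-and-bound idea recommended in the first part can be shown in miniature: a cheap relaxation bounds each subproblem, and boxes whose relaxation is infeasible are pruned before any integer values are fixed. This toy version uses interval reasoning as the relaxation instead of the simplex method, and the constraint system is invented for illustration:

```python
def feasible(box, constraints):
    """Interval relaxation: can each constraint sum(c*x) <= rhs
    possibly hold somewhere inside the box of variable intervals?"""
    for coeffs, rhs in constraints:
        # Minimise each term over its interval; if even the minimum
        # exceeds rhs, the whole box is infeasible.
        lo = sum(c * (b[0] if c > 0 else b[1]) for c, b in zip(coeffs, box))
        if lo > rhs:
            return False
    return True

def branch_and_bound(box, constraints):
    """Find one integer point in `box` satisfying all constraints,
    pruning subproblems whose relaxation is infeasible."""
    if not feasible(box, constraints):
        return None
    if all(lo == hi for lo, hi in box):
        return [lo for lo, _ in box]
    i = next(k for k, (lo, hi) in enumerate(box) if lo < hi)
    lo, hi = box[i]
    mid = (lo + hi) // 2
    for part in ((lo, mid), (mid + 1, hi)):
        sol = branch_and_bound(box[:i] + [part] + box[i + 1:], constraints)
        if sol is not None:
            return sol
    return None

# x + y <= 5, x >= 2, y >= 2 (written as -x <= -2, -y <= -2)
cons = [([1, 1], 5), ([-1, 0], -2), ([0, -1], -2)]
print(branch_and_bound([(0, 10), (0, 10)], cons))  # [2, 2]
```

A production solver would bound each box with an LP relaxation (simplex) rather than intervals, but the prune/split skeleton is the same.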
196

Reasoning about Agents in Goal-Oriented Requirements Engineering

Letier, Emmanuel 22 May 2002 (has links)
The thesis proposes a number of techniques for elaborating requirements constructively from high-level goals. The techniques are based on the KAOS goal-oriented method for requirements engineering. This method consists of identifying goals and refining them into subgoals until the latter can be assigned as responsibilities of single agents such as humans, devices, and software. Domain properties and assumptions about the software environment are also used during the goal refinement process. The method supports the exploration of alternative goal refinements and alternative responsibility assignments of goals to agents. It also supports the identification and resolution of conflicts between goals, and the identification and resolution of exceptional agent behaviors, called obstacles, that violate goals and assumptions produced during the goal refinement process. The thesis enriches the KAOS framework through three kinds of techniques: (a) techniques for identifying agents, goal refinements, and alternative responsibility assignments, and for deriving agent interfaces from such responsibility assignments; (b) techniques for deriving operational requirements from goal specifications; (c) techniques for generating obstacles to the satisfaction of idealized goals and assumptions, and for generating alternative obstacle resolutions. The result is a coherent body of systematic techniques for requirements elaboration that are both theoretically well-founded (a formal model of agents is defined) and effective in practice (the techniques are validated on two real case studies of significant size: the London ambulance despatching system and the Bay Area Rapid Transit train system).
197

Design, implementation and evaluation of MPVS: a tool to support the teaching of a programming method

Dony, Isabelle 14 September 2007 (has links)
Teaching formal methods is notoriously difficult and suffers from motivation problems among students; we think that formal methods need to be supported by adequate tools to gain better acceptance from students. One goal of the thesis is to build a practical tool that helps students deeply understand the classical programming methodology, based on specifications, loop invariants, and decomposition into subproblems, advocated by Dijkstra, Gries, and Hoare, to name only a few famous computer scientists. Our motivation for building this tool is twofold. On the one hand, we demonstrate that existing verification tools (e.g., ESC/Java, Spark, SMV) are too complex to be used in a pedagogical context; moreover, they often lack completeness (and sometimes even soundness). On the other hand, teaching formal (i.e., rigorous) program construction with pen and paper does not motivate students at all. Since students love to use tools, providing them with a tool that checks not only their programs but also their specifications and the structure of their reasoning seemed appealing to us. Obviously, building such a system is far from an easy task; experts in the field might even consider it completely infeasible. Our approach is to restrict our ambition to a very simple programming language with simple types (limited to finite domains) and arrays. In this context, it is possible to specify problems and subproblems, both clearly and formally, using a specific assertion language based on mathematical logic. It turns out that constraint programming over finite domains is especially convenient for checking the kind of verification conditions needed to express the correctness of imperative programs.

However, to conveniently generate the constraint problems equivalent to a given verification condition, we need a powerful language that allows us to interleave constraint generation and constraint solving, and to specify a distribution strategy to overcome the incompleteness of the usual consistency techniques used in finite-domain constraint programming. We show in this thesis that the Oz language includes all the programming mechanisms needed to reach our goals. The tool has been fully implemented and provides feedback to students learning the programming method: it detects programming and/or reasoning errors and produces typical counter-examples. We argue that our system is adapted to our pedagogical context and report on experiments using the tool with students in a third-year programming course.
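Because the types are limited to finite domains, a verification condition can in principle be checked by exhaustive enumeration; that brute-force view (the thesis instead uses Oz constraint propagation, which prunes rather than enumerates) can be sketched as follows. The swap program and domains are illustrative:

```python
from itertools import product

def check_vc(domains, condition):
    """Exhaustively check a verification condition over finite domains.
    Returns None if it holds everywhere, else a counter-example env."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        env = dict(zip(names, values))
        if not condition(env):
            return env
    return None

# VC for the arithmetic swap: {x=a, y=b} x:=x+y; y:=x-y; x:=x-y {x=b, y=a}
def swap_vc(env):
    x, y = env["a"], env["b"]
    x = x + y
    y = x - y
    x = x - y
    return x == env["b"] and y == env["a"]

doms = {"a": range(4), "b": range(4)}
print(check_vc(doms, swap_vc))  # None: the condition holds on the whole domain
```

When the condition fails, the returned assignment is exactly the kind of "typical counter-example" the tool shows to students.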
198

Comparison of Two Methods for the Assessment of Chloride Ion Penetration in Concrete: A Field Study

Ryan, Eric William 01 August 2011 (has links)
The currently accepted method for assessing the resistance of concrete to chloride ion penetration is ASTM C1202, commonly known as the "Rapid Chloride Permeability (RCP) Test." The test is time-consuming and laborious, has rather high variability, and is, to a degree, user-sensitive, making it problematic for inclusion in a performance-based specification. A potential alternative to the RCP test is the "Surface Resistivity (SR) Test," a method which is dramatically easier and faster and has less variability than the RCP test. The research reported herein was directed toward determining a correlation between RCP and SR test measurements for Tennessee bridge deck concrete, based on cylinders collected from concrete bridge decks under construction across the state, and toward evaluating the appropriateness of the SR test as an alternative to the RCP test in a performance-based specification. The testing showed a clear correlation between the two methods and identified potential limits for possible inclusion in a performance-based specification.
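The correlation analysis described can be sketched as a Pearson coefficient over paired RCP/SR measurements. The readings below are invented for illustration; since charge passed (RCP) and surface resistivity (SR) move in opposite directions, a strong negative correlation is expected:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired readings for five cylinders:
# RCP charge passed (coulombs) vs SR surface resistivity (kOhm*cm)
rcp = [5800, 4100, 2700, 1500, 900]
sr = [8, 12, 18, 29, 41]
print(round(pearson_r(rcp, sr), 2))  # strongly negative
```

In practice such studies often fit the relation on a log scale, since the RCP/SR relationship is closer to inverse-proportional than linear.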
200

Development of an Accelerated Fatigue Testing Methodology for Mechanical Parts on a Shaker Table

Marin, Frederic 22 February 2010 (has links)
Under real operating conditions, mechanical parts exposed to a long-duration vibratory environment suffer dynamic fatigue damage that can lead to a loss of performance, or even of structural integrity. Accelerated fatigue tests aim to subject a mechanical part to a vibration test of reduced duration, in the laboratory, such that the damage sustained is equivalent to the damage the part experiences in service. In general, the application sectors potentially concerned by accelerated fatigue testing cover the mechanical industry in the broad sense (automotive, aeronautics, space, railway, ...). The R-Tech company, the design office of the Schréder group, a manufacturer of public lighting equipment, initiated the research project that led to this work, carried out under a First-Doctorat Entreprise agreement co-funded by the Walloon Region and the industrial partner. R-Tech's interest in the results stems from the fact that all the luminaires it designs are dimensioned by static finite element calculations and fatigue-tested on a shaker table before being marketed. Despite these precautions, breakage problems occasionally occur in the field; in other cases, the finite element calculations lead to over-dimensioned luminaire parts. It is therefore important for R-Tech to have design and design-validation tools for developing and economically producing luminaires that withstand the loads to which they are normally exposed. By allowing R-Tech/Schréder to certify the structural design of their products and guarantee their service life from the development phase onward, this research should improve their competitiveness.
Consequently, the methodology developed, although general, was applied primarily to the test cases supplied by the industrial partner. When the mechanical part that must undergo the vibration test is not known or not fully defined, the proposed methodology relies on an analytical approach based on a single-degree-of-freedom reference system, leading to the definition of severity criteria: the Extreme Response Spectrum (ERS), the Fatigue Damage Spectrum (FDS), and the Dissipated Energy Spectrum. This approach, being independent of the specimen to be tested and therefore particularly suited to drawing up a specification, can only provide qualitative results. Precisely estimating the maximum stress in the tested part, and the damage it accumulates over time, is possible with such an approach only for rare, relatively simple applications in which the relation between stress and relative displacement can be established; otherwise, a finite element model of the specimen is required. In the particular case of public lighting equipment, the vibratory environment is essentially due to wind loading. Meteorological data collected by the RMI in Uccle fed a wind model that accounts for the random loads caused by wind turbulence and by possible Von Karman vortex shedding. These excitations are then applied to a simplified finite element model of the pole/luminaire system. Other vibratory environments that luminaires may encounter during their life were also considered in this work, such as the seismic loads induced at the base of a pole installed on a bridge or viaduct, and the impact applied when a vehicle maneuvering in a parking lot collides with the pole.
Once the real vibratory environment of the part to be tested has been determined, either experimentally or analytically, the proposed methodology produces an equivalent specification through an optimization process developed within the BOSS Quattro software (Samtech), which minimizes an objective function representing the gap between the severity criterion characterizing the reference environment and that of the equivalent environment. The random test equivalent to wind loading and the pole impact test are now an integral part of the qualification procedure for R-Tech/Schréder luminaires. Once the geometry of the part to be tested and the test specification to be reproduced on the shaker table are fixed, a finite element model of the specimen makes it possible, if not to eliminate entirely the risk of breakage during experimental qualification, at least to reduce it considerably. For a luminaire, whose structure is made complex by the number of parts and the assemblies between them, simulation results must be confronted with experiment: modal analyses with an impact hammer and on the shaker table, adding one component at a time so as to update the model; comparison of simulated and measured acceleration levels during the test to calibrate the modal damping coefficients; and validation of the model against stresses measured with strain gauges and/or rosettes. Once the critical zones have been identified, the model can then be used to predict the service life of the specimen subjected to its vibratory environment. The lifetime is finally obtained using the concepts of Von Mises equivalent stress, a bilinear approximation of the Wöhler curve, Neuber plastic stress correction, and linear or nonlinear damage evolution.
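The Fatigue Damage Spectrum approach rests on cumulative damage accounting: each block of stress cycles consumes a fraction of the part's life given by its cycle count over the cycles-to-failure from the S-N (Wöhler) curve. A minimal sketch of Miner's rule with a Basquin-type curve (all material constants and load blocks here are hypothetical, not Schréder data):

```python
def miner_damage(blocks, basquin_c, basquin_b):
    """Cumulative fatigue damage by Miner's rule for a list of
    (stress_amplitude, n_cycles) blocks, with the S-N curve
    N_f = C * S**(-b). Failure is predicted when damage reaches 1."""
    damage = 0.0
    for s, n in blocks:
        n_to_failure = basquin_c * s ** (-basquin_b)
        damage += n / n_to_failure
    return damage

# Hypothetical load spectrum: three stress levels and their cycle counts
blocks = [(100.0, 2.0e4), (150.0, 5.0e3), (200.0, 1.0e3)]
d = miner_damage(blocks, basquin_c=1e17, basquin_b=6.0)
print(d)  # > 1, so failure is predicted before the spectrum ends
```

An accelerated test exploits the steepness of the S-N curve: raising the stress amplitude lets a short test accumulate the same damage as years of low-level service vibration.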
