291

Reasoning on words and trees with data

Figueira, Diego 06 December 2010 (has links) (PDF)
A data word (resp. a data tree) is a finite word (resp. tree) in which every position carries a letter from a finite alphabet and a datum from an infinite domain. In this thesis we investigate automata and logics for data words and data trees with decidable reasoning problems: we focus on the emptiness problem for automata and on the satisfiability problem for logics. On data words, we present a decidable extension of the model of alternating register automata studied by Demri and Lazić. Further, we show the decidability of the satisfiability problem for the linear-time temporal logic on data words LTL↓(X, F, U) (studied by Demri and Lazić) with quantification over data values. We also prove that the non-primitive recursive lower bounds shown by Demri and Lazić for LTL↓(X, F) carry over to LTL↓(F). On data trees, we consider three decidable automata models with different characteristics. We first introduce the downward data automaton (DD automaton). Its execution consists in a transduction of the finite labeling of the tree, followed by a verification of data properties for every subtree of the transduced tree. This model is closed under boolean operations, but the tests it can make on the order of the siblings are very limited. Its emptiness problem is in 2ExpTime. By contrast, the other two automata models we introduce have emptiness problems of non-primitive recursive complexity, and are closed under intersection and union, but not under complementation. They are both alternating automata with one register to store and compare data values. The class ATRA(guess, spread) extends the top-down automata ATRA of Jurdziński and Lazić. We exhibit decidable extensions similar to the ones shown in the case of data words. This class can test for any regular tree language, in contrast to DD automata.
Finally, we consider a bottom-up alternating tree automaton with one register (called BUDA). Although the BUDA class is one-way, it has features that allow it to test data properties by navigating the tree in both directions, upward and downward. In contrast to ATRA(guess, spread), this automaton cannot test properties of the sequence of siblings (for example, the order in which labels appear). All three models have connections with the logic XPath, a logic conceived for XML documents, which can be seen as data trees. Through the aforementioned automata we show that the satisfiability of three natural fragments of XPath is decidable. These fragments are: downward XPath, where navigation can only be done through the child and descendant axes; forward XPath, where navigation also includes the next-sibling axis and its transitive closure; and vertical XPath, whose navigation consists of the child, descendant, parent and ancestor axes. Whereas downward XPath is ExpTime-complete, forward and vertical XPath have non-primitive recursive lower bounds.
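As an illustration of the kind of property expressible in LTL↓ with freeze quantification, the sketch below checks, over a data word, that every a-position is eventually followed by a b-position carrying the same datum, roughly G(a → store datum; F(b with same datum)). The encoding of data words as (letter, datum) pairs and the example property are ours, not the thesis's.

```python
# A data word as a list of (letter, datum) pairs; data values come from an
# unbounded domain (here: Python ints). The property is our own example of a
# freeze-quantified formula, not one taken from the thesis.

def every_a_matched(word):
    """True iff every 'a' position is followed by a 'b' carrying the
    same data value, a typical LTL-with-freeze style property."""
    for i, (letter, datum) in enumerate(word):
        if letter == "a":
            if not any(l == "b" and d == datum for l, d in word[i + 1:]):
                return False
    return True
```

Checking a given word against such a formula is easy; a satisfiability checker must instead decide whether some data word satisfies it, which is where the register automata of the thesis come in.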
292

Integration of thermal expansion effects into tolerancing

Benichou, Sami 05 July 2012 (has links) (PDF)
Functional dimensioning must guarantee the assemblability and correct operation of a mechanism by imposing the functional specifications that the parts have to meet. These specifications are expressed with the ISO dimensioning standards and must be verified at 20°C. For mechanisms subjected to high temperatures, the influence of tolerances and of thermal expansion must be accumulated over the various thermal regimes. After formulating behavioral hypotheses for joints with contact or with clearances affected by thermal deformations, and accounting for the influence of temperature uncertainties, the proposed methodology separates the thermal computation from the tolerancing. The thermal analysis office determines the temperature fields and the displacements of the mesh nodes by the finite element method, starting from the nominal models of the parts. The accumulation of tolerances and expansions is based on the analysis line method. For each requirement, the terminal surface is discretized into several analysis points. In each joint, transfer relations determine the contact points and the influence, at these points, of the expansions and thermal deviations on the requirement. An application to an industrial mechanism demonstrates the benefit of optimizing the nominal dimensions of the models so as to maximize the tolerances while meeting all the requirements.
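The core idea of accumulating tolerances with thermal expansion (ΔL = α·L·ΔT) can be caricatured on a one-dimensional worst-case clearance stack-up. This is a toy sketch, not the analysis line method of the thesis; the expansion coefficients and dimensions are assumed for illustration.

```python
# Toy worst-case stack-up combining tolerances and thermal expansion.
# Coefficients and dimensions below are assumed, not taken from the thesis.

ALPHA_STEEL = 12e-6  # 1/K, typical linear expansion coefficient of steel
ALPHA_ALU = 23e-6    # 1/K, typical value for aluminium

def expanded(length_mm, alpha, delta_t):
    """Length after uniform heating by delta_t kelvin: L * (1 + alpha*dT)."""
    return length_mm * (1.0 + alpha * delta_t)

def worst_case_gap(shaft_mm, bore_mm, tol_shaft, tol_bore, delta_t):
    """Minimum clearance between a steel shaft and an aluminium bore when
    both dilate from the 20 degC reference, with worst-case tolerances."""
    shaft_max = expanded(shaft_mm + tol_shaft, ALPHA_STEEL, delta_t)
    bore_min = expanded(bore_mm - tol_bore, ALPHA_ALU, delta_t)
    return bore_min - shaft_max
```

Because aluminium expands more than steel, the clearance in this example grows with temperature; with the materials swapped the trend would reverse, which is exactly why the thermal regime must enter the tolerance chain.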
293

Numerical and statistical approaches for model checking of stochastic processes

Djafri, Hilal 19 June 2012 (has links) (PDF)
We propose in this thesis several contributions related to the quantitative verification of systems. This discipline aims to evaluate the functional and performance properties of a system. Such a verification requires two ingredients: a formal model to represent the system and a temporal logic to express the desired property. The evaluation is then carried out with a statistical or numerical method. The spatial complexity of numerical methods, which is proportional to the size of the state space of the model, makes them impractical when the state space is very large. The method of stochastic comparison with censored Markov chains is one of the methods that reduce memory requirements by restricting the analysis to a subset of the states of the original Markov chain. In this thesis we provide new bounds that depend on the information available about the chain. We introduce a new quantitative temporal logic named Hybrid Automata Stochastic Logic (HASL) for the verification of discrete event stochastic processes (DESP). HASL employs Linear Hybrid Automata (LHA) to select prefixes of relevant execution paths of a DESP. LHA allow rather elaborate information to be collected on the fly during path selection, providing the user with a powerful means to express sophisticated measures. In essence, HASL provides a unifying verification framework in which temporal reasoning is naturally blended with elaborate reward-based analysis. We have also developed COSMOS, a tool that implements statistical verification of HASL formulas over stochastic Petri nets. Flexible manufacturing systems (FMS) have often been modeled with Petri nets; however, the modeler needs a good knowledge of this formalism. To facilitate such modeling, we propose an application-oriented methodology of compositional modeling that does not require any knowledge of Petri nets on the modeler's part.
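Statistical verification in the COSMOS spirit estimates, by sampling simulated trajectories, the probability that an execution satisfies a path property. A minimal sketch in that spirit follows, using a plain M/M/1 queue and a threshold property; the model and property are our own toy example, not the HASL/LHA machinery.

```python
import random

def simulate_mm1(lam, mu, horizon, threshold, rng):
    """Simulate one trajectory of an M/M/1 queue up to time `horizon`;
    return True iff the queue length ever reaches `threshold`."""
    t, n = 0.0, 0
    while True:
        rate = lam + (mu if n > 0 else 0.0)  # total event rate in state n
        t += rng.expovariate(rate)
        if t >= horizon:
            return False
        if rng.random() < lam / rate:        # next event is an arrival
            n += 1
            if n >= threshold:
                return True
        else:                                # next event is a departure
            n -= 1

def estimate(lam, mu, horizon, threshold, runs=2000, seed=1):
    """Monte Carlo estimate of P(queue reaches threshold before horizon)."""
    rng = random.Random(seed)
    hits = sum(simulate_mm1(lam, mu, horizon, threshold, rng)
               for _ in range(runs))
    return hits / runs
```

In the HASL setting, the role of the threshold test is played by an LHA that accepts or rejects each sampled path prefix while accumulating reward variables along the way.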
294

Computational methods for de novo assembly of next-generation genome sequencing data

Chikhi, Rayan 02 July 2012 (has links) (PDF)
In this thesis, we discuss computational methods (theoretical models and algorithms) to perform the reconstruction (de novo assembly) of DNA sequences produced by high-throughput sequencers. This problem is challenging, both theoretically and practically. The theoretical difficulty arises from the complex structure of genomes: the assembly process has to deal with reconstruction ambiguities, as the output of sequencing admits up to an exponential number of reconstructions, of which only one is correct. To deal with this problem, only a fragmented approximation of the genome is returned. The practical difficulty stems from the huge, highly redundant volume of data produced by sequencers, which requires significant computing power to process. As larger genomes and metagenomes are being sequenced, the need for efficient computational methods for de novo assembly is increasing rapidly. This thesis introduces novel contributions to genome assembly, both by incorporating more information to improve the quality of results and by processing data efficiently to reduce the computational complexity. Specifically, we propose a novel algorithm to quantify the maximum theoretical genome coverage achievable by sequencing data (paired reads), and apply this algorithm to several model genomes. We formulate a set of computational problems that take pairing information into account in assembly, and study their complexity. Then, two novel concepts covering practical aspects of assembly are proposed: localized assembly and memory-efficient read indexing. Localized assembly consists in constructing and traversing a partial assembly graph. These ingredients are implemented in a complete de novo assembly software package, the Monument assembler, which is compared with other state-of-the-art assembly methods. Finally, we conclude with a series of smaller projects exploring concepts beyond classical de novo assembly.
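A common formalization of graph-based assembly is the de Bruijn graph, whose unambiguous paths yield contigs. The following minimal sketch is illustrative only, nothing like the Monument assembler's localized, memory-efficient indexing:

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, and each k-mer in a
    read contributes an edge from its prefix to its suffix."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def extend_contig(graph, start):
    """Greedily walk unambiguous (out-degree 1) edges to grow a contig;
    branching nodes are where assembly ambiguity fragments the output."""
    contig, node, seen = start, start, {start}
    while len(graph[node]) == 1:
        (nxt,) = graph[node]
        if nxt in seen:          # stop on cycles (repeats in the genome)
            break
        contig += nxt[-1]
        seen.add(nxt)
        node = nxt
    return contig
```

Repeats longer than k-1 create branching nodes, which is the source of the "exponential number of reconstructions" mentioned above and the reason real assemblers return fragmented contigs.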
295

Towards a virtual material for the qualitative optimization of a new family of CMCs

Tranquart, Bastien 23 March 2012 (has links) (PDF)
This work addresses the development of a virtual material for the simulation and optimization of materials with heterogeneous microstructures, in particular new-generation ceramic matrix composites. To this end, a model of the yarn is set up through an integrated approach that accounts for the complexity of the microstructure and for its variability arising from the manufacturing process. The proposed approach relies on two steps: (i) the construction of a synthetic morphology of the yarn, based on the study of micrographs, and (ii) a multiscale simulation method inspired by the generalized finite element method. The originality of the latter lies in the use of patterns, i.e. elementary physical or topological situations, to describe both the microstructure and the local kinematics. The approach is validated and applied to various 2D synthetic yarn sections, for which the choice of patterns is discussed. The extension to 3D yarn segments, as well as to the simulation of cracking using a discrete method, is discussed and first elements of an answer are provided.
296

Reproducible research, software quality, online interfaces and publishing for image processing

Limare, Nicolas 21 June 2012 (has links) (PDF)
This thesis is based on a study of reproducibility issues in image processing research. We designed, created and developed a scientific journal, Image Processing On Line (IPOL), in which articles are published together with a complete implementation of the algorithms described, validated by the referees. An attached web demonstration service allows the algorithms to be tested on freely submitted data and archives previous experiments. We also propose a copyright and license policy suitable for manuscripts and research software, and guidelines for the evaluation of software. The IPOL scientific project appears very beneficial to research in image processing. With the detailed examination of the implementations and extensive testing via the demonstration web service, we publish articles of better quality. IPOL usage shows that this journal is useful beyond the community of its authors, who are generally satisfied with their experience and appreciate the benefits in terms of understanding of the algorithms, quality of the software produced, exposure of their work, and opportunities for collaboration. With clear definitions of objects and methods, and validated implementations, complex image processing chains become possible.
297

Flexible tolerancing of assemblies of large aeronautical structures

Stricher, Alain 08 February 2013 (has links) (PDF)
As its name suggests, flexible tolerancing aims to account for part compliance in a tolerancing process. It makes it possible to evaluate admissible geometric defects by both geometric and mechanical criteria. This work first addresses the construction of suitable models for predicting the mechanical behavior of an assembly of large, relatively compliant parts subject to geometric defects arising from the manufacturing process. A method is then proposed to introduce into these models random geometric variations that are realistic with respect to these hypothetical defects. To simulate assembly operations, unilateral contact and the stiffness variations caused by geometric variability are taken into account. Depending on these hypotheses, tolerance analysis strategies based on Monte Carlo simulation or on the method of influence coefficients are compared, in order to select the one that minimizes computational cost while preserving the accuracy of the results. Finally, this work concludes with an industrial case study: a lattice structure supporting equipment under the fuselage floor of an Airbus A350.
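The two tolerance-analysis strategies mentioned above can be caricatured on a linearized one-dimensional stack-up: the method of influence coefficients propagates standard deviations analytically through sensitivities c_i, while Monte Carlo samples the same response. The coefficients and deviations below are invented for the sketch; the thesis applies these ideas to full compliant finite element assemblies.

```python
import math
import random

def influence_coefficients(coeffs, sigmas):
    """Linearized tolerance analysis: for Y = sum(c_i * X_i) with independent
    deviations X_i ~ N(0, sigma_i), sigma_Y = sqrt(sum (c_i*sigma_i)^2)."""
    return math.sqrt(sum((c * s) ** 2 for c, s in zip(coeffs, sigmas)))

def monte_carlo(coeffs, sigmas, runs=20000, seed=0):
    """Estimate sigma_Y by direct sampling of the same linear response."""
    rng = random.Random(seed)
    samples = [sum(c * rng.gauss(0.0, s) for c, s in zip(coeffs, sigmas))
               for _ in range(runs)]
    mean = sum(samples) / runs
    return math.sqrt(sum((y - mean) ** 2 for y in samples) / runs)
```

For a linear(ized) response the two agree, and the coefficient method is vastly cheaper; Monte Carlo earns its cost only when contact or stiffness variations make the response nonlinear, which is the trade-off studied in the thesis.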
298

A multiparametric strategy for robust design in fatigue

Relun, Nicolas 12 December 2011 (has links) (PDF)
Robust design of mechanical parts consists in taking sources of uncertainty into account in the model. The model then becomes representative enough of reality to allow the safety margins, which guarantee that the part will not fail in service, to be reduced. For aerospace parts, reducing safety margins is a major economic issue because it reduces the weight of the parts. The probability of failure is one of the critical quantities in robust design. It quantifies the risk of failure of the part by comparing the probability distribution of the material strength (characterized from specimen tests) with the probability distribution of the material loading, which is determined from the external loads on the part and the material characteristics. This latter problem is the subject of this thesis. For nonlinear material behavior, determining the loading distribution requires running many analyses of the part for different values of the boundary conditions and of the material behavior parameters. This quickly becomes intractable without a suitable strategy, since a single analysis can take up to 12 hours. A strategy dedicated to solving this whole set of computations is proposed in this work. It takes advantage of the similarity between the computations to reduce the total time required. Speed-ups of up to 30 are achieved on simple industrial parts in quasi-statics with an elasto-viscoplastic behavior.
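The probability of failure described above is classically estimated by stress-strength interference. Below is a minimal Monte Carlo sketch under an assumed Gaussian model, a crude stand-in for the expensive elasto-viscoplastic finite element computations the thesis accelerates:

```python
import random

def failure_probability(load_mean, load_sd, strength_mean, strength_sd,
                        runs=50000, seed=7):
    """Stress-strength interference: estimate P(load > strength) by sampling
    both as independent Gaussians (an assumed, simplified model)."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(load_mean, load_sd) > rng.gauss(strength_mean,
                                                          strength_sd)
                for _ in range(runs))
    return fails / runs
```

Each sample here is instantaneous; when every "load" sample instead requires a 12-hour nonlinear analysis, reusing information across the similar computations, as the multiparametric strategy does, is what makes the estimate affordable.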
299

Formal verification of secured routing protocols

Arnaud, Mathilde 13 December 2011 (has links) (PDF)
With the development of digital networks such as the Internet, communication protocols are omnipresent. Digital devices have to interact with each other in order to perform the numerous and complex tasks we have come to expect as commonplace, such as using a mobile phone, sending or receiving electronic mail, or making purchases online. In such applications, security is important. For instance, in the case of an online purchase, the right amount of money has to be paid without leaking the buyer's personal information to outside parties. Communication protocols are the rules that govern these interactions. In order to make sure that they guarantee a certain level of security, it is desirable to analyze them. Doing so manually or by testing is not enough, as attacks can be quite subtle; some protocols were used for years before an attack was discovered. Because of their increasing ubiquity in many important applications, e.g. electronic commerce, a very important research challenge consists in developing methods and verification tools to increase our trust in security protocols, and thus in the applications that rely on them. For example, more than 28 billion euros were spent in France through Internet transactions, and the number is growing. Moreover, new types of protocols continuously appear in order to face new technological and societal challenges, e.g. electronic voting and electronic passports, to name a few.
300

Adaptive modeling of plate structures

Bohinc, Uroš 05 May 2011 (has links) (PDF)
The primary goal of the thesis is to provide answers to questions related to the key steps in the process of adaptive modeling of plates. Since adaptivity depends on reliable error estimates, a large part of the thesis is devoted to the derivation of computational procedures for discretization error estimates as well as model error estimates. A practical comparison of some of the established discretization error estimates is made. Special attention is paid to the so-called equilibrated residual method, which has the potential to be used both for discretization error and for model error estimates. It should be emphasized that model error estimates are quite hard to obtain, in contrast to discretization error estimates. In this work, the concept of model adaptivity for plates is implemented on the basis of the equilibrated residual method and a hierarchic family of plate finite element models. The finite elements used in the thesis range from thin plate to thick plate elements; the latter are based on a newly derived higher-order plate theory that includes through-the-thickness stretching. The model error is estimated by local element-wise computations. Since all the finite elements representing the chosen plate mathematical models are re-derived so as to share the same interpolation bases, the difference between the local computations can be attributed mainly to the model error. This choice of finite elements enables effective computation of the model error estimate and improves the robustness of the adaptive modeling; the discretization error can thus be computed by an independent procedure. Many numerical examples are provided to illustrate the performance of the derived plate elements, the discretization error procedures and the modeling error procedure.
Since the basic goal of modeling in engineering is to produce an effective model, one that yields the most accurate results from the minimum input data, the need for adaptive modeling will always be present. In this view, the present work is a contribution to the final goal of finite element modeling of plate structures: a fully automatic adaptive procedure for the construction of an optimal computational model (an optimal finite element mesh and an optimal choice of plate model for each element of the mesh) for a given plate structure.
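The idea of a local error indicator driving refinement can be shown on a one-dimensional caricature: bisect wherever the indicator is largest until every element is below tolerance. This is a loose analogue of element-wise error estimation, entirely illustrative and unrelated to the plate elements themselves.

```python
import math

def adaptive_interpolation(f, a, b, tol, max_iter=200):
    """Piecewise-linear interpolation of f on [a, b], refined adaptively:
    the interval whose midpoint error (the local error indicator) is
    largest gets bisected, until every indicator is below tol."""
    xs = [a, b]
    for _ in range(max_iter):
        indicators = []
        for x0, x1 in zip(xs, xs[1:]):
            mid = 0.5 * (x0 + x1)
            linear = 0.5 * (f(x0) + f(x1))   # interpolant value at midpoint
            indicators.append((abs(f(mid) - linear), mid))
        worst, where = max(indicators)
        if worst < tol:
            break
        xs.append(where)
        xs.sort()
    return xs

# sqrt has unbounded curvature at 0, so the grid ends up graded toward 0,
# just as an adaptive mesh concentrates elements where the estimate is large.
grid = adaptive_interpolation(math.sqrt, 0.0, 1.0, 1e-3)
```

In the thesis the same loop runs in two directions at once: the discretization error estimate refines the mesh, while the model error estimate upgrades the plate model element by element.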
