
Interference Effects On The Collapse Loads For Footings And Anchors Using An Upper Bound Finite Element Limit Analysis

Kouzer, K M 04 1900
The present thesis investigates the interference effects on the magnitudes of the ultimate failure loads for a group of closely spaced strip footings and strip plate anchors. With the growing number of civil engineering structures, footings and anchors often need to be placed very close to each other. In such a situation, the ultimate bearing capacity/pullout capacity of an interfering footing/anchor becomes significantly different from that of a single isolated footing/anchor. The effect of interference on the magnitude of the failure load is usually expressed in terms of an efficiency factor (ξγ), defined as the ratio of the magnitude of the failure load for a strip footing/anchor of a given width in the presence of other footings/anchors to the magnitude of the failure load for an isolated single strip footing/anchor of exactly the same width. No rigorous analysis seems to have been carried out so far in the literature to investigate the interference effect for a group of footings and anchors. The present study uses rigorous numerical upper bound limit analysis in combination with finite elements and linear programming to determine the collapse loads for both isolated footings and anchors and for groups of them. Three-noded triangular elements are used throughout the thesis for carrying out the analysis of the different problems. Velocity discontinuities are employed along the interfaces of all the elements. Plastic strains within the elements are incorporated by means of an associated flow rule. The Mohr-Coulomb yield surface is linearised by an exterior regular polygon circumscribing the actual failure surface, so that the finite element formulation leads to a linear programming problem.
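The upper-bound formulation described above reduces the collapse-load computation to a linear program: minimize the internal plastic dissipation subject to compatibility/flow-rule constraints. A minimal sketch of that generic LP shape, with toy numbers rather than an actual finite element mesh, and with SciPy's `linprog` standing in for the MATLAB 'LINPROG' routine used in the thesis:

```python
import numpy as np
from scipy.optimize import linprog

# Generic shape of an upper-bound limit-analysis LP (toy data, not a real mesh):
#   minimize internal plastic dissipation      c^T x
#   subject to compatibility/flow-rule rows    A_eq x = b_eq
#   and non-negative plastic multiplier rates  x >= 0
c = np.array([2.0, 3.0, 1.0])        # dissipation coefficient per collapse mechanism
A_eq = np.array([[1.0, 1.0, 1.0]])   # e.g. normalize the external work rate to unity
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
upper_bound = res.fun                # least dissipation = best (lowest) upper bound
```

Here the optimizer puts all weight on the cheapest mechanism, so the bound equals the smallest dissipation coefficient; in the real formulation `c`, `A_eq`, and `b_eq` are assembled from the triangular elements and velocity discontinuities.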
In solving the different problems in this thesis, computer programs were developed in MATLAB, using 'LINPROG', a library subprogram, for the necessary optimization. The bearing capacity factor Nγ for an isolated single rigid strip footing placed on a cohesionless ground surface has been computed, and its variation with the footing-soil roughness angle (δ) has been examined in detail. It is clearly noted that an increase in δ leads to a continuous increase in Nγ. The solution is also obtained for a perfectly rough footing without considering any velocity discontinuity surface along the footing-soil interface. With δ = φ, the magnitude of Nγ becomes almost the same as that for a perfectly rough footing. The size of the plastic zone increases with increasing values of δ and φ. The obtained values of Nγ for δ = 0 and δ = φ compare quite favorably with solutions reported earlier in the literature. The ultimate bearing capacity for groups of two and of an infinite number of multiple interfering rough strip footings placed on a cohesionless medium has been computed; all the footings are assumed to be perfectly rigid. The footings are specified to be loaded simultaneously to failure at exactly the same magnitude of the failure load. For different clear spacings (S) between adjacent footings, the magnitude of the efficiency factor (ξγ) is determined. In the case of two footings, the value of ξγ at S/B = 0 becomes exactly equal to 2.0, and the maximum ξγ occurs at a critical spacing (Scr). For S/B < Scr/B, the ultimate bearing pressure for a footing becomes equal to that of an isolated footing of width (2B + S), and the ground mass encompassed between the two footings deforms mainly in the downward direction. In contrast, for S/B > Scr/B, ground heave is noticed along both sides of the footing.
As compared to the available theories in the literature, the analysis presented in this thesis generally provides lower values of ξγ for S/B > Scr/B. In the case of a group of multiple strip footings, the value of ξγ is found to increase continuously with a decrease in S/B. The effect of the variation of spacing on ξγ is very pronounced for small values of S/B; the magnitude of ξγ approaches infinity at S/B = 0. For all values of S/B, ground heave is invariably observed on both sides of the footings. The magnitudes of ξγ for given values of S/B and φ for the two-footings case are found to be smaller than for the multiple-footings case. The vertical uplift capacity of an isolated strip anchor embedded horizontally at shallow depths in sand has also been examined; the anchor plate is assumed to be perfectly rigid and rough. The collapse load is expressed in terms of a non-dimensional uplift factor Fγ, the value of which needs to be known before calculating the failure load for an interfering anchor. The magnitude of Fγ increases continuously with increases in both the embedment ratio (λ) and the friction angle (φ) of the sand. Even though the analysis allows the development of plastic strain within all elements, at collapse the soil mass just above the anchor is found to move as a single rigid block bounded by planar rupture surfaces; the rupture surfaces emerging from the anchor edges make approximately an angle φ with the vertical. The vertical uplift capacity of groups of two and of an infinite number of multiple interfering rigid rough strip anchors embedded horizontally in sand at shallow depths has been examined. At collapse, all the anchors in the group are specified to be loaded to failure simultaneously at exactly the same magnitude of the failure load. For different clear spacings (S) between the anchors, the magnitude of the efficiency factor (ξγ) is determined.
On account of interference, the magnitude of ξγ is found to reduce continuously with a decrease in the spacing between the anchors. For all values of λ and φ, the magnitude of ξγ for the multiple-anchors case is found to be always smaller than that for the two-anchors case. In contrast to a group of footings under compression, the magnitude of ξγ for a group of anchors is found to decrease invariably with an increase in φ for a given value of S/B. For S ≥ 2d·tanφ, where d is the depth of the anchor, the uplift resistance of anchors in the group becomes equal to that of an isolated anchor, and no interference is seen to exist. By examining the nodal velocity patterns, it was noted that at collapse a wedge of soil mass just above the anchors, encompassed within linear rupture surfaces, moves vertically upward almost as a single rigid unit with the same velocity as the anchor plate itself. On this basis, a closed-form solution of the problem has been developed. The results from the closed-form solution for the group of two anchors as well as for multiple anchors agree very closely with the rigorous upper bound numerical solution, especially for values of φ greater than or equal to about 35°. For all the problems in this study, an upper bound limit analysis in combination with finite elements and linear programming has proved to be a very useful numerical tool for determining the magnitudes of collapse loads.
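The two quantities at the heart of this abstract can be sketched in a few lines. The function names are illustrative, not from the thesis; the spacing formula simply restates the reported rupture-surface geometry (planar surfaces inclined at φ to the vertical from the anchor edges):

```python
import math

def efficiency_factor(q_interfering, q_isolated):
    # xi_gamma: failure load of a footing/anchor of a given width in a group,
    # divided by the failure load of an isolated one of the same width.
    return q_interfering / q_isolated

def no_interference_spacing(d, phi_deg):
    # Rupture surfaces rising from the anchor edges at angle phi to the
    # vertical reach the ground surface d*tan(phi) to either side, so two
    # anchors at depth d stop interacting once the clear spacing
    # S >= 2 * d * tan(phi).
    return 2.0 * d * math.tan(math.radians(phi_deg))
```

For example, two footings touching each other (S/B = 0) carry twice the isolated failure load, giving `efficiency_factor(...) == 2.0`, matching the abstract's ξγ = 2.0 at S/B = 0.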

Quantum coin flipping and bit commitment: optimal bounds, practical constructions and computational security

Chailloux, André 24 June 2011
Quantum computing allows us to revisit the study of cryptographic primitives with information-theoretic security, that is, security even against all-powerful adversaries. In 1984, Bennett and Brassard presented a protocol for quantum key distribution. In this protocol, two players, Alice and Bob, cooperate in order to share a common secret key k, which has to remain unknown to a third party, Eve, who has access to the communication channel. They showed how to perform this task quantumly with information-theoretic security, which is impossible classically.

In my thesis, I study cryptographic primitives between two players who do not trust each other, mainly quantum coin flipping and quantum bit commitment. Classically, these primitives can be realized only under computational assumptions, that is, by assuming the hardness of a given problem. Quantum protocols have been constructed for these primitives in which a player can cheat with constant probability strictly smaller than 1, which remains impossible classically. However, Lo, Chau and Mayers showed that these primitives are impossible to achieve perfectly, even quantumly, if one requires information-theoretic security. It therefore remains to determine the physical limits of these primitives, i.e., to what extent imperfect protocols can be achieved in this setting.

In the first part, I construct a quantum coin flipping protocol with cheating probability of 1/√2 + ε for any ε > 0. This complements a result by Kitaev, who showed that in any quantum coin flipping protocol one of the players can cheat with probability at least 1/√2. I also constructed a quantum bit commitment protocol with cheating probability 0.739 + ε for any ε > 0 and showed that this protocol is essentially optimal. Finally, I derived upper and lower bounds for another primitive, quantum oblivious transfer, which is a universal cryptographic primitive.

In the second part, I study some practical aspects of these protocols. I take into account the losses that can occur when measuring a quantum state, and construct quantum coin flipping and quantum bit commitment protocols that are loss-tolerant and have cheating probabilities of 0.859. I also construct these primitives in the device-independent model, where the players do not trust their quantum devices.

Finally, in the third part, I study these cryptographic primitives with computational security. In particular, I study the relationship between computational quantum bit commitment and quantum zero-knowledge protocols.
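The cheating probabilities quoted in this abstract sit in a strict order, which is easy to verify numerically (values taken from the abstract; the ε terms are omitted, and the variable names are illustrative):

```python
import math

# Kitaev's lower bound: in any quantum coin flipping protocol, one player
# can always cheat with probability at least 1/sqrt(2) ~ 0.7071.
kitaev_lower_bound = 1 / math.sqrt(2)

coin_flipping = kitaev_lower_bound   # achievable up to any eps > 0 (optimal)
bit_commitment = 0.739               # optimal quantum bit commitment (up to eps)
loss_tolerant = 0.859                # loss-tolerant protocols from the second part

# Each relaxation of the model costs the honest players some security,
# but every bound stays strictly below the trivial value 1.
assert coin_flipping < bit_commitment < loss_tolerant < 1.0
```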

Assessing the robustness of genetic codes and genomes

Sautié Castellanos, Miguel 06 1900
There are two main approaches to assessing the robustness of genetic codes and coding sequences. The statistical approach is based on empirical estimates of probabilities computed from random samples of permutations representing assignments of amino acids to codons, whereas the optimization-based approach relies on the optimization percentage, frequently computed using metaheuristics. We propose a method based on the first two moments of the distribution of robustness values over all possible genetic codes. Based on a polynomially solvable instance of the Quadratic Assignment Problem, we also propose an exact greedy algorithm to find the minimum value of the genome robustness. To reduce the number of operations for computing the scores and Cantelli's upper bound, we developed methods based on the genetic code neighborhood structure and on pairwise comparisons between genetic codes, among others. For assessing the robustness of natural genetic codes and genomes, we chose 23 natural genetic codes, 235 amino acid properties, as well as 324 thermophilic and 418 non-thermophilic prokaryotes. Among our results, we found that although the standard genetic code is more robust than most genetic codes, some mitochondrial and nuclear genetic codes are more robust than the standard code at the third and first codon positions, respectively. We also observed that synonymous codon usage tends to be highly optimized to buffer the impact of single-base changes, mainly in thermophilic prokaryotes.
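The moment-based method above rests on Cantelli's one-sided inequality, which bounds a tail probability using only a mean and a variance. A minimal sketch (the function name is illustrative, not from the thesis):

```python
def cantelli_upper_bound(mean, variance, t):
    """One-sided Chebyshev (Cantelli) inequality:
    P(X >= mean + t) <= variance / (variance + t**2), for t > 0."""
    if t <= 0 or variance < 0:
        raise ValueError("require t > 0 and variance >= 0")
    return variance / (variance + t * t)

# Given only the first two moments of the robustness distribution over all
# possible genetic codes, this bounds the fraction of codes that can exceed
# a given robustness score, e.g. a score two standard units above the mean:
p = cantelli_upper_bound(mean=0.0, variance=1.0, t=2.0)   # 1 / (1 + 4) = 0.2
```

The appeal for this application is that no distributional assumption is needed: the bound holds for any distribution with the given mean and variance, so the full set of permuted codes never has to be enumerated.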
