61

Aspects of viscous shocks

Siklos, Malin January 2004 (has links)
<p>This thesis consists of an introduction and five papers concerning different numerical and mathematical aspects of viscous shocks.</p><p>Hyperbolic conservation laws are used to model wave motion and advective transport in a variety of physical applications. Solutions of hyperbolic conservation laws may become discontinuous, even in cases where initial and boundary data are smooth. Shock waves are one important type of discontinuity. It is also interesting to study the corresponding slightly viscous system, i.e., the system obtained when a small viscous term is added to the hyperbolic system of equations. By a viscous shock we denote a thin transition layer which appears in the solution of the slightly viscous system in place of a shock in the corresponding purely hyperbolic problem.</p><p>A slightly viscous system, a so-called modified equation, is often used to model numerical solutions of hyperbolic conservation laws and their behavior in the vicinity of shocks. Computations presented elsewhere show that numerical solutions of hyperbolic conservation laws obtained by higher-order accurate shock-capturing methods are in many cases only first-order accurate downstream of shocks. We use a modified equation to model numerical solutions obtained by a generic second-order shock-capturing scheme for a time-dependent system in one space dimension. We present analysis that shows how the first-order error term is related to the viscous terms, and show that it is possible to eliminate the first-order downstream error by choosing a special viscosity term. This is verified in computations. We also extend the analysis to a stationary problem in two space dimensions.</p><p>Though the technique of modified equations is widely used, rather little is known about when (i.e., for which methods) it is applicable. The use of a modified equation as a model for a numerical solution is only relevant if the numerical solution behaves as a continuous function. We have experimentally investigated a range of high-resolution shock-capturing methods. Our experiments indicate that many of the methods produce a continuous shock profile; for some of the methods, however, this is not the case. In general, the behavior in the shock region is very complicated.</p><p>Systems of hyperbolic conservation laws with solutions containing shock waves, and the corresponding slightly viscous equations, are examples where the available theoretical results on existence and uniqueness of solutions are very limited, though it is often straightforward to find approximate numerical solutions. We present a computer-assisted technique for proving existence of solutions of nonlinear boundary value ODEs, based on the use of an approximate numerical solution. The technique is applied to stationary solutions of the viscous Burgers' equation. We also study a related method suggested by Yamamoto in SIAM J. Numer. Anal. 35(5), 1998, and apply this method to the viscous Burgers' equation as well.</p>
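The stationary viscous Burgers' equation studied in the abstract above has a well-known closed-form shock profile, which makes it a convenient test case. The following sketch is illustrative only (it is not taken from the thesis): it uses the standard tanh profile for the stationary equation u·u_x = ε·u_xx with states u = 1 and u = -1, and checks numerically that advection balances viscosity across the layer.

```python
import numpy as np

def viscous_shock_profile(x, eps):
    """Exact stationary viscous-shock solution of Burgers' equation
    u*u_x = eps*u_xx connecting u = 1 (left) to u = -1 (right)."""
    return -np.tanh(x / (2.0 * eps))

# Verify the profile satisfies the ODE via finite differences.
eps = 0.05
x = np.linspace(-1.0, 1.0, 2001)
h = x[1] - x[0]
u = viscous_shock_profile(x, eps)
ux = np.gradient(u, h)
uxx = np.gradient(ux, h)
# Residual of u*u_x - eps*u_xx, away from the boundary points.
residual = np.max(np.abs(u[2:-2] * ux[2:-2] - eps * uxx[2:-2]))
print(residual)  # small: the exact profile balances advection and viscosity
```

The transition layer has width of order ε, which is why, in the purely hyperbolic limit ε → 0, the profile collapses to a shock discontinuity.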
62

Proofs and "Puzzles"

Abramovitz, Buma, Berezina, Miryam, Berman, Abraham, Shvartsman, Ludmila 10 April 2012 (has links) (PDF)
It is well known that mathematics students have to be able to understand and prove theorems. From our experience we know that engineering students should be able to do the same, since a good theoretical knowledge of mathematics is essential for solving practical problems and constructing models. Proving theorems gives students a much better understanding of the subject and helps them develop mathematical thinking. The proof of a theorem consists of a logical chain of steps. Students should understand the need for, and the legitimacy of, every step. Moreover, they have to comprehend the reasoning behind the order of the chain's steps. For our research, students were provided with proofs whose steps were either written in a random order or had missing parts. Students were asked to solve the "puzzle": find the correct logical chain or complete the proof. These "puzzles" were meant to discourage students from simply memorizing the proof of a theorem. By using our examples, students were encouraged to think independently and came to improve their understanding of the subject.
63

Markov chain Analysis of Evolution Strategies / Analyse Markovienne des Stratégies d'Evolution

Chotard, Alexandre 24 September 2015 (has links)
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e., information on the function to be optimized is limited to the values it associates to points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of Markov chains underlying these algorithms. The proofs of log-linear convergence and of divergence obtained in this thesis, in the context of a linear function with or without constraint, are essential components of proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. First, general mathematical tools that can be applied to a wider range of problems are developed. These tools make it easy to prove specific Markov chain properties (irreducibility, aperiodicity, and the fact that compact sets are small sets for the Markov chain) for the Markov chains studied. Obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult. Second, different ESs are analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size, and one with cumulative step-size adaptation, on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES. Finally, we study an ES with a sampling distribution that can be non-Gaussian, and with constant step-size, on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up and discussed, and perspectives for future work are explored.
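To make the (1,\lambda) selection scheme concrete, here is a minimal one-dimensional sketch with constant step-size, written for this listing and not taken from the thesis (in particular it omits the cumulative step-size adaptation that the thesis analyzes). On a linear function there is no optimum, so the "successful" behavior the abstract describes is divergence: selection pushes the parent further every iteration.

```python
import random

def one_lambda_es(f, x0, sigma, lam, iters, rng):
    """Minimal (1,lambda)-ES with constant step-size, maximizing f:
    each iteration samples lam offspring around the parent and keeps the best."""
    x = x0
    for _ in range(iters):
        offspring = [x + sigma * rng.gauss(0.0, 1.0) for _ in range(lam)]
        x = max(offspring, key=f)  # comma selection: parent is discarded
    return x

rng = random.Random(1)
# On the linear function f(x) = x, selection pushes x upward at every step,
# so the algorithm "solves" the problem by diverging.
final = one_lambda_es(lambda x: x, 0.0, 1.0, 6, 200, rng)
print(final)  # drifts far from the starting point
```

With \lambda offspring, the expected progress per step is sigma times the expectation of the maximum of \lambda standard normals, which is the kind of quantity the explicit divergence rates in the thesis make precise.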
64

On the infinitary proof theory of logics with fixed points / Théorie de la preuve infinitaire pour les logiques à points fixes

Doumane, Amina 27 June 2017 (has links)
The subject of this thesis is the proof theory of logics with fixed points, such as the μ-calculus, linear logic with fixed points, etc. These logics are usually equipped with finitary deductive systems that rely on Park's rules for induction. Other proof systems for these logics exist, which rely on infinitary proofs, but they are much less developed. This thesis contributes to reducing this deficiency by developing the infinitary proof theory of logics with fixed points, with two domains of application in mind: programming languages with (co)inductive data types and the verification of reactive systems. This thesis contains three parts. In the first part, we recall the two main approaches to the proof theory of logics with fixed points, the finitary and the infinitary one, then we show their relationships. In the second part, we argue that infinitary proofs have a true proof-theoretical status by showing that multiplicative additive linear logic with fixed points admits focalization and cut-elimination. In the third part, we apply our proof-theoretical investigations to obtain a constructive proof of completeness for the linear-time μ-calculus with respect to Kozen's axiomatization.
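For orientation, the Park-style induction rules mentioned above can be stated as follows. This is one common sequent-style formulation (shown here for context; it is not necessarily the exact system used in the thesis): to prove that the least fixed point entails S, it suffices to show that S is a prefixed point of F, and dually for the greatest fixed point.

```latex
% Park induction for the least fixed point, and its dual coinduction rule.
\[
\frac{F[S/X] \vdash S}{\mu X.\, F \vdash S}\;(\mu\text{-ind})
\qquad
\frac{S \vdash F[S/X]}{S \vdash \nu X.\, F}\;(\nu\text{-coind})
\]
```

Infinitary systems replace such explicit induction rules by non-wellfounded derivations subject to a validity condition on their infinite branches.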
65

Visualizing Algorithm Analysis Topics

Farghally, Mohammed Fawzi Seddik 30 November 2016 (has links)
Data Structures and Algorithms (DSA) courses are critical for any computer science curriculum. DSA courses emphasize concepts related to procedural dynamics and Algorithm Analysis (AA). These concepts are hard for students to grasp when conveyed using traditional textbook material relying on text and static images. Algorithm Visualizations (AVs) emerged as a technique for conveying DSA concepts using interactive visual representations. Historically, AVs have dealt with portraying algorithm dynamics, and the AV developer community has decades of successful experience with this. But few visualizations exist that present algorithm analysis concepts; this content is typically still conveyed using text and static images. We have devised an approach that we term Algorithm Analysis Visualizations (AAVs), capable of conveying AA concepts visually. In AAVs, analysis is presented as a series of slides where each statement of the explanation is connected to visuals that support the sentence. We developed a pool of AAVs targeting the basic concepts of AA. We also developed AAVs for basic sorting algorithms, providing a concrete depiction of how the running-time analysis of these algorithms can be calculated. To evaluate AAVs, we conducted a quasi-experiment across two offerings of CS3114 at Virginia Tech. By analyzing OpenDSA student interaction logs, we found that intervention-group students spent significantly more time viewing the material than control-group students who used traditional textual content. Intervention-group students gave positive feedback regarding the usefulness of AAVs in helping them understand the AA concepts presented in the course. In addition, intervention-group students demonstrated better performance than control-group students on the AA part of the final exam.
The final exam taken by both the control and intervention groups was based on a pilot version of the Algorithm Analysis Concept Inventory (AACI) that was developed to target fundamental AA concepts and probe students' misconceptions about these concepts. The pilot AACI was developed using a Delphi process involving a group of DSA instructors, and was shown to be a valid and reliable instrument to gauge students' understanding of the basic AA topics. / Ph. D.
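As a plain-code illustration (not one of the AAVs themselves) of the kind of running-time analysis the slides convey, instrumenting a sorting algorithm to count its key comparisons exposes the quadratic worst case directly:

```python
def insertion_sort_comparisons(a):
    """Insertion sort that counts key comparisons, the quantity whose
    growth rate a running-time analysis characterizes."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1           # one comparison per loop test
            if a[j] > key:
                a[j + 1] = a[j]        # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

# Worst case (reversed input): every pair is compared, n*(n-1)/2 comparisons.
sorted_a, c = insertion_sort_comparisons(list(range(10, 0, -1)))
print(sorted_a == list(range(1, 11)), c)  # True 45
```

Running the counter on best-case (sorted) versus worst-case (reversed) inputs is exactly the sort of concrete calculation the AAV slides walk through visually.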
66

Hardness of Constraint Satisfaction and Hypergraph Coloring : Constructions of Probabilistically Checkable Proofs with Perfect Completeness

Huang, Sangxia January 2015 (has links)
A Probabilistically Checkable Proof (PCP) of a mathematical statement is a proof written in a special manner that allows for efficient probabilistic verification. The celebrated PCP Theorem states that for every family of statements in NP, there is a probabilistic verification procedure that checks the validity of a PCP proof by reading only 3 bits from it. This landmark theorem, and the works leading up to it, laid the foundation for many subsequent works in computational complexity theory, the most prominent among them being the study of inapproximability of combinatorial optimization problems. This thesis focuses on a broad class of combinatorial optimization problems called Constraint Satisfaction Problems (CSPs). In an instance of a CSP of arity k, we are given a set of variables taking values from some finite domain, and a set of constraints, each involving a subset of at most k variables. The goal is to find an assignment that simultaneously satisfies as many constraints as possible. An alternative formulation of the goal that is commonly used is Gap-CSP, where the goal is to decide whether a CSP instance is satisfiable or far from satisfiable; the exact meaning of being far from satisfiable varies depending on the problem. We first study Boolean CSPs, where the domain of the variables is {0,1}. The main question we study is the hardness of distinguishing satisfiable Boolean CSP instances from those for which no assignment satisfies more than some epsilon fraction of the constraints. Intuitively, as the arity increases, the CSP gets more complex and thus the hardness parameter epsilon should decrease. We show that for Boolean CSPs of arity k, it is NP-hard to distinguish satisfiable instances from those that are at most 2^{~O(k^{1/3})}/2^k-satisfiable. We also study coloring of graphs and hypergraphs. Given a graph or a hypergraph, a coloring is an assignment of colors to vertices such that all edges or hyperedges are non-monochromatic. The gap problem is to distinguish instances that are colorable with a small number of colors from those that require a large number of colors. For graphs, we prove that there exists a constant K_0 > 0 such that for any K >= K_0, it is NP-hard to distinguish K-colorable graphs from those that require 2^{Omega(K^{1/3})} colors. For hypergraphs, we prove that it is quasi-NP-hard to distinguish 2-colorable 8-uniform hypergraphs of size N from those that require 2^{(log N)^{1/4-o(1)}} colors. In terms of techniques, all these results are based on constructions of PCPs with perfect completeness, that is, PCPs where the probabilistic proof verification procedure always accepts a correct proof. Not only is this a very natural property for proofs, but it can also be an essential requirement in many applications. It has always been particularly challenging to construct PCPs with perfect completeness for NP statements due to limitations in techniques. Our improved hardness results build on and extend many of the current approaches. Our Boolean CSP and graph coloring results were proved by adapting the Direct Sum of PCPs idea of Siu On Chan to the perfect-completeness setting. Our proof of hypergraph coloring hardness improves and simplifies recent work by Khot and Saket, in which they proposed the notion of superposition complexity of CSPs.
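To make the "fraction of constraints satisfied" measure concrete, here is a tiny brute-force sketch (illustrative Python, not part of the thesis). Exhaustive search over all assignments is of course infeasible beyond toy sizes, which is exactly why the hardness of the gap problem matters.

```python
from itertools import product

def max_sat_fraction(n_vars, constraints):
    """Exhaustively find the largest fraction of constraints that any single
    Boolean assignment satisfies (feasible only for tiny instances)."""
    best = 0
    for bits in product([0, 1], repeat=n_vars):
        satisfied = sum(1 for c in constraints if c(bits))
        best = max(best, satisfied)
    return best / len(constraints)

# Arity-3 Boolean constraints over 4 variables (a toy Max-E3-SAT-style instance).
cons = [
    lambda v: v[0] or v[1] or v[2],
    lambda v: (not v[0]) or v[2] or v[3],
    lambda v: v[1] or (not v[2]) or (not v[3]),
    lambda v: (not v[1]) or (not v[0]) or v[3],
]
print(max_sat_fraction(4, cons))  # 1.0 -- this instance is satisfiable
```

Gap-CSP asks the decision question: is this value 1 (satisfiable), or below some threshold such as the 2^{~O(k^{1/3})}/2^k bound from the abstract?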
67

Preuves interactives quantiques / Quantum Interactive Proofs

Blier, Hugue 07 1900 (has links)
This thesis is devoted to complexity theory based on the interactive proof paradigm. All classes defined in this way involve one or many infinitely powerful provers attempting to convince a verifier of limited power that a string belongs to a certain language. We consider the classical model, in which the participants are Turing machines, as well as the quantum model, in which they are quantum circuits. The literature review included in this thesis assumes that the reader is familiar with the basics of complexity theory and quantum computing. This thesis presents the original result that the class NP can be characterized by a class of quantum interactive proofs of logarithmic size. The various classes are presented in an order that facilitates the treatment of the interactive classes. The first chapter is devoted to the basic complexity classes; these will be useful points of comparison for the classes presented subsequently. Chapters two and three respectively present classes with one and with many provers. The presentation of the result mentioned above is the object of chapter four.
68

L'enseignement des mathématiques en anglais langue seconde. Etude didactique de l’articulation des apprentissages linguistiques et mathématiques, à travers l’expérimentation de situations intégrées de type CLIL / Teaching Mathematics in English as a Second Language

Larue, Christian 24 November 2015 (has links)
The purpose of this thesis is to investigate the learning and teaching conditions of mathematics in English as a second language by closely examining how mathematical and language knowledge fit together. This study deals with the specific case of CLIL teaching, and the related experimental situations were performed in European classes in a French high school. The situations have a common topic, namely that of visual and multimodal proof. The Theory of Didactical Situations (TSD) is the central theoretical framework, notably for the construction of the experimental situations, but our study has proven to be compatible with task-based pedagogy. Besides, phraseodidactics provided a useful and adequate auxiliary framework by shedding some light on the essential role played by phraseology in integrated teaching. We focused particularly on mathematical objects and processes of abstraction, but also on some specific language features. The concept of representation is central to our research and thus had to be precisely defined. The success of our experimental situations owes a lot to the use of adidacticity, a central concept in TSD, and to our focus on the crucial part played by active perception during processes of conceptualisation. The purpose of one of the experimental situations (conducted in a second language) was to ensure that pupils devised, by themselves, a visual proof of an arithmetic property previously conjectured, carried out an explicit generalisation at the very level of schematisation, and used real cubes to perform another type of proof, thus making the inductive step of the induction explicit.
69

A Machine-Checked Proof of Correctness of Pastry / Une preuve certifiée par la machine de la correction du protocole Pastry

Azmy, Noran 24 November 2016 (has links)
Les réseaux pair-à-pair (P2P) constituent un modèle de plus en plus populaire pour la programmation d’applications Internet car ils favorisent la décentralisation, le passage à l’échelle, la tolérance aux pannes et l’auto-organisation. à la différence du modèle traditionnel client-serveur, un réseau P2P est un système réparti décentralisé dans lequel tous les nœuds interagissent directement entre eux et jouent à la fois les rôles de fournisseur et d’utilisateur de services et de ressources. Une table de hachage distribuée (DHT) est réalisée par un réseauP2P et offre les mêmes services qu’une table de hachage classique, hormis le fait que les différents couples (clef, valeur) sont stockés dans différents nœuds du réseau. La fonction principale d’une DHT est la recherche d’une valeur associée à une clef donnée. Parmi les protocoles réalisant une DHT on peut nommer Chord, Pastry, Kademlia et Tapestry. Ces protocoles promettent de garantir certaines propriétés de correction et de performance ; or, les tentatives de démontrer formellement de telles propriétés se heurtent invariablement à des cas limites dans lesquels certaines propriétés sont violées. Tian-xiang Lu a ainsi décrit des problèmes de correction dans des versions publiées de Pastry. Il a conçu un modèle, appelé LuPastry, pour lequel il a fourni une preuve partielle, mécanisée dans l’assistant à la preuve TLA+ Proof System, démontrant que les messages de recherche de clef sont acheminés au bon nœud du réseau dans le cas sans départ de nœuds. En analysant la preuve de Lu j’ai découvert qu’elle contenait beaucoup d’hypothèses pour lesquelles aucune preuve n’avait été fournie, et j’ai pu trouver des contre-exemples à plusieurs de ces hypothèses. La présente thèse apporte trois contributions. Premièrement, je présente LuPastry+, une spécification TLA+ revue de LuPastry. 
Au-delà des corrections nécessaires d’erreurs, LuPastry+ améliore LuPastry en introduisant de nouveaux opérateurs et définitions, conduisant à une spécification plus modulaire et isolant la complexité de raisonnement à des parties circonscrites de la preuve, contribuant ainsi à automatiser davantage la preuve. Deuxièmement, je présente une preuve TLA+ complète de l’acheminement correct dans LuPastry+. Enfin, je démontre que l’étape finale du processus d’intégration de nœuds dans LuPastry (et LuPastry+) n’est pas nécessaire pour garantir la cohérence du protocole. Concrètement, j’exhibe une nouvelle spécification avec un processus simplifié d’intégration de nœuds, que j’appelle Simplified LuPastry+, et je démontre qu’elle garantit le bon acheminement de messages de recherche de clefs. La preuve de correction pour Simplified LuPastry+ est obtenue en réutilisant la preuve pour LuPastry+, et ceci représente un bon succès pour la réutilisation de preuves, en particulier considérant la taille de ces preuves. Chacune des deux preuves requiert plus de 30000 étapes interactives ; à ma connaissance, ces preuves constituent les preuves les plus longues écrites dans le langage TLA+ à ce jour, et les seuls exemples d’application de preuves mécanisées de théorèmes pour la vérification de protocoles DHT / A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. 
In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called {\LP}, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the {\TLA} Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present {\LPP}, a revised {\TLA} specification of {\LP}. Aside from needed bug fixes, {\LPP} contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete {\TLA} proof of correct delivery for {\LPP}. Third, I prove that the final step of the node join process of {\LP}/{\LPP} is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node join process, which I denote by {\SLP}, and prove correct delivery of lookup messages for this new specification. The proof of correctness of {\SLP} is written by reusing the proof for {\LPP}, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the {\TLA} language, and---together with Lu's proof---the only examples of applying full theorem proving for the verification of DHT protocols.
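The key-lookup function described in the abstract can be sketched with a toy consistent-hashing ring, where node and key identifiers share one hash space and a key is owned by the first node at or after its hash position. This is an illustrative model only, not the routing logic of Pastry or the LuPastry+ specification; the names (ToyDHT, ring_id, RING_BITS) are hypothetical.

```python
import hashlib
from bisect import bisect_right

RING_BITS = 16  # small identifier space, for illustration only

def ring_id(name: str) -> int:
    """Map a node name or key to a position on the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << RING_BITS)

class ToyDHT:
    def __init__(self, node_names):
        # A sorted list of node positions models the full ring; real
        # protocols such as Chord or Pastry keep only partial routing
        # state per node and locate the owner in O(log n) hops.
        self.ring = sorted((ring_id(n), n) for n in node_names)

    def successor(self, key: str) -> str:
        """Return the node responsible for `key` (first node >= hash)."""
        h = ring_id(key)
        idx = bisect_right([pos for pos, _ in self.ring], h)
        # Wrap around the ring when the hash exceeds all node positions.
        return self.ring[idx % len(self.ring)][1]

dht = ToyDHT(["node-A", "node-B", "node-C", "node-D"])
owner = dht.successor("some-key")
```

Correct delivery, in this simplified setting, amounts to every lookup for a key deterministically reaching the unique node returned by `successor`; the thesis proves the analogous property for the much richer LuPastry+ model, where routing state is partial and nodes join concurrently.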
70

O Processo penal e a busca pela verdade

Ferreira, Rosana Miranda 29 March 2006 (has links)
Made available in DSpace on 2016-04-26 (GMT). Previous issue date: 2006-03-29 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this dissertation we present the criminal proceeding as an instrument in the search for truth. To ground our knowledge of truth, we take a philosophical approach, beginning in Greece with Socrates and ending on native soil with Miguel Reale, briefly describing how each formulates the knowledge of truth. From there, we present truth within the process. We point out that real truth is unattainable and impossible to reach, even for the presiding judge of the criminal prosecution, since a factual situation and its circumstances, exactly as they occurred, can never be reproduced. We define the formal, material, and procedural truths, approximation, and probability, noting the more modern trend of seeking a certainty close to judicial truth, the latter arising not from evidence but from a judgment, demarcated above all by justice. We stress, however, that although attaining the truth is improbable for the criminal proceeding, the persistence in seeking a true reconstitution of the facts is a value that legitimizes criminal prosecution itself. From the historical synthesis presented, we examine how truth has been ascertained, from the most violent methods of the Inquisition to the present day, when a citizen may wait years for the State's reply. To illustrate the idea we present Franz Kafka, whose work portrays someone "Before the Law". In discussing the fundamental right of access to justice, we point out the supremacy of the principle of human dignity, which must also be reflected in the process, given the State's duty to administer justice. 
We describe some notions of proof, the allegations, the burdens, and some of the obstacles within the proceeding itself that stand as barriers to the search for truth. We discuss the role of the judge, invested with the power and duty to weigh all the evidence raised, and even other evidence he deems necessary to produce. The decision, finally, issued from free conviction with argumentative support and transparency in its explanations, represents the truth longed for and pursued, and likewise serves a social function in the sense of effectuating the law, exercising ethics, appeasing society, and seeking the common good. / In this dissertation we present the criminal proceeding as an instrument in the search for truth. To ground our knowledge of truth, we take a philosophical approach, beginning in Greece with Socrates and ending on native soil with Miguel Reale, and briefly describe how each formulates the knowledge of truth. From there, we present truth within the process. We highlight real truth as unattainable and impossible to reach, even for the presiding judge of the criminal prosecution, since a factual situation and its circumstances, exactly as they occurred, can never be reproduced. We define the truths: formal, material, procedural, approximate, and verisimilar, pointing to the more modern trend of seeking a certainty close to judicial truth, the latter arising not from evidence but from a judgment, demarcated by justice as its foundation. We stress that although attaining the truth is improbable, the effort to truly reconstitute the facts is a value that legitimizes criminal prosecution itself. From the historical synthesis presented, we seek to assess how truth has been ascertained, from the most violent methods of the Inquisition to the present day, when a citizen may wait years for the State's reply. 
To illustrate the idea we present Franz Kafka, whose work portrays someone "Before the Law". In discussing the fundamental right of access to justice, we point out the supremacy of the principle of human dignity, which must also be reflected in the process, given the State's duty to declare the law. We describe some notions of proof, the allegations, the burdens, and some of the obstacles within the process itself that stand as barriers to the search for truth. We address the role of the judge, invested with the power and duty to weigh all the evidence raised, and even other evidence which, in his view, must be produced. The decision, finally, issued from free conviction with argumentative support and transparency in its explanations, represents the truth longed for and pursued, which also serves a social function in the sense of effectuating the law, exercising ethics, appeasing society, and seeking the common good.
