  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Error isolation in distributed systems

Behrens, Diogo 14 January 2016
In distributed systems, if a hardware fault corrupts the state of a process, the error may propagate as a corrupt message and contaminate other processes in the system, causing severe outages. Recently, state corruptions of this nature have been observed surprisingly often in large computer populations, e.g., in large-scale data centers. Moreover, since the resilience of processors is expected to decline in the near future, the likelihood of state corruptions will increase even further. In this work, we argue that preventing the propagation of state corruption should be a first-class requirement for large-scale fault-tolerant distributed systems. In particular, we propose that developers target error isolation, the property that each correct process ignores any corrupt message it receives. Typically, a process cannot decide whether a received message is corrupt or not. Therefore, we introduce hardening, a class of principled approaches to implementing error isolation in distributed systems. Hardening techniques are (semi-)automatic transformations that force each process to append evidence of good behavior, in the form of error codes, to every message it sends. The techniques “virtualize” state corruptions into more benign failures such as crashes and message omissions: if a faulty process fails to detect its state corruption and abort, then hardening guarantees that any corrupt message the process sends carries invalid error codes. Correct processes can then inspect received messages and drop those that are corrupt. With this dissertation, we contribute theoretically and practically to the state of the art in fault-tolerant distributed systems. To show that hardening is possible, we design, formalize, and prove correct several hardening techniques that enable existing crash-tolerant designs to handle state corruption with minimal developer intervention.
To show that hardening is practical, we implement and evaluate these techniques, analyzing their effect on the system performance and their ability to detect state corruptions in practice.
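The receiver-side check that hardening enables can be sketched as follows. This is a minimal illustration that uses a CRC-32 as the appended error code; it is not the authors' actual hardening transformation, and the function names are hypothetical:

```python
import zlib

def harden(payload: bytes) -> bytes:
    """Append an error code (here CRC-32) as 'evidence of good behavior'."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def inspect(message: bytes):
    """Receiver-side error isolation: return the payload only if the
    error code checks out; drop the message (return None) otherwise."""
    if len(message) < 4:
        return None
    payload, tag = message[:-4], message[-4:]
    if zlib.crc32(payload) != int.from_bytes(tag, "big"):
        return None  # corrupt message is ignored, "virtualized" as an omission
    return payload
```

A state corruption that slips into an outgoing message then looks to the receiver like a message omission rather than a contaminating input.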
102

Scalable error isolation for distributed systems: modeling, correctness proofs, and additional experiments

Behrens, Diogo, Serafini, Marco, Arnautov, Sergei, Junqueira, Flavio, Fetzer, Christof 01 June 2016
This technical report complements the paper entitled “Scalable error isolation for distributed systems” published at USENIX NSDI '15.
103

U-RANS Simulation of fluid forces exerted upon an oscillating tube array

Divaret, Lise January 2011
The aim of this master thesis is to characterize the fluid forces applied to a fuel assembly in the core of a nuclear power plant in the event of an earthquake. The forces are studied with a simplified two-dimensional model consisting of an array of 3 by 3 infinite cylinders oscillating in a closed box. The axial flow of water, which convects the heat in the core of a nuclear power plant, is also taken into account. The velocity of the axial flow reaches 4 m/s in the middle of the assembly and modifies the force characteristics when the cylinders move laterally. The earthquake is modeled as a lateral displacement with high amplitude (several cylinder diameters) and low frequency (below 20 Hz). In order to study the effects of the amplitude and frequency of the displacement, the displacement is taken as a sine function with controlled amplitude and frequency. Four degrees of freedom of the system are studied: the amplitude of the displacement, its frequency, the axial velocity amplitude, and the confinement (due to the closed box). The fluid forces exerted on the cylinders can be seen as a combination of three terms: an added mass, related to the acceleration of the cylinders; a drag force, related to the damping of the fluid; and a force due to the interaction of the cylinder with residual vortices. The first two components are characterized through the Morison expansion, and their evolution with the degrees of freedom of the system is quantified. The effect of the interaction with the residual vortices is observed in the plots of the forces versus time, but also in the velocity and vorticity maps of the fluid. The fluid forces are computed with the CFD code Code_Saturne, which uses a second-order accurate finite volume method. Unsteady Reynolds-Averaged Navier-Stokes simulations are run with a k-epsilon turbulence model. The Arbitrary Lagrangian-Eulerian method is used to describe the structure displacement. The domain is meshed with hexahedra using the software Gmsh [1] and the flow is visualized with ParaView [2]. The modeling techniques used for the simulations are described in the first part of this master thesis.
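The Morison-style decomposition described above, an added-mass term proportional to acceleration plus a damping (drag) term, can be illustrated with a small least-squares fit on a synthetic force signal. All constants and the quadratic drag form below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Synthetic lateral motion x(t) = A*sin(w*t), mimicking the thesis'
# controlled sine displacement; A and f are illustrative.
A, f = 0.01, 5.0
w = 2 * np.pi * f
t = np.linspace(0, 1, 2000)
u = A * w * np.cos(w * t)        # cylinder velocity
a = -A * w**2 * np.sin(w * t)    # cylinder acceleration

# "Measured" force: added-mass term + quadratic damping term + small noise
ca_true, cd_true = 1.5, 0.8
rng = np.random.default_rng(0)
F = -ca_true * a - cd_true * np.abs(u) * u + 0.001 * rng.normal(size=t.size)

# Morison expansion: regress the force on the two basis signals to
# recover the added-mass and damping coefficients
basis = np.column_stack([-a, -np.abs(u) * u])
(ca_fit, cd_fit), *_ = np.linalg.lstsq(basis, F, rcond=None)
```

Over full oscillation periods the two basis signals are nearly orthogonal, which is what makes this two-term identification well posed.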
104

PLPrepare: A Grammar Checker for Challenging Cases

Hoyos, Jacob 01 May 2021
This study investigates one of the Polish language’s most arbitrary cases: the genitive masculine inanimate singular. It collects and ranks several guidelines to help language learners discern its proper usage and also introduces a framework to provide detailed feedback regarding arbitrary cases. The study tests this framework by implementing and evaluating a hybrid grammar checker called PLPrepare. PLPrepare performs similarly to other grammar checkers and is able to detect genitive case usages and provide feedback based on a number of error classifications.
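A guideline-driven feedback step of the kind described might be sketched as follows. The guidelines, word lists, and function names are illustrative placeholders, not PLPrepare's actual ranked guidelines:

```python
# Toy guideline table for the genitive masculine inanimate singular,
# where the ending is (somewhat arbitrarily) -a or -u. The rules and
# example words are illustrative placeholders only.
GUIDELINES = [
    ("tool/measure nouns often take -a", {"nóż", "metr"}, "a"),
    ("abstract/mass nouns often take -u", {"czas", "cukier"}, "u"),
]

def feedback(noun: str):
    """Return (suggested ending, matched guideline) or None if no rule fires."""
    for name, examples, ending in GUIDELINES:
        if noun in examples:
            return ending, name
    return None
```

Returning the matched guideline alongside the suggested ending is what allows the checker to give the learner an explanation rather than a bare correction.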
105

The Reliability Assessment and Optimization of Arbitrary-State Monotone Systems under Epistemic Uncertainty

Sun, Muxia 03 July 2019
In this work, we study the reliability assessment, modeling, and optimization of arbitrary-state systems under epistemic uncertainty. First, a universal arbitrary-state modeling approach is proposed in order to effectively study modern industrial systems with increasingly complicated structures, operation mechanisms, and reliability demands. Simple implementations of traditional binary, continuous, or multi-state reliability models have shown their lack of generality when modeling such complex modern industrial structures, systems, networks, and systems-of-systems. In this work, we are also particularly interested in monotone systems, not only because monotonicity appears in most standard reliability models, but also because such a simple mathematical property allows an enormous simplification of many extremely complex problems. Then, for arbitrary-state monotone reliability systems, we address the following challenges, which arise in the very fundamentals of their mathematical modeling: 1. reliability assessment in an epistemically uncertain environment with hierarchical structures; 2. reliability/maintenance optimization for large reliability systems under epistemic uncertainty.
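One reason monotonicity simplifies epistemic uncertainty so much can be sketched concretely: for a monotone structure function, interval-valued (epistemic) component reliabilities propagate to system-level bounds simply by evaluating the structure at the componentwise lower and upper values. The series-parallel structure below is an illustrative example, not one from the thesis:

```python
def series(rs):
    """Reliability of components in series: all must work."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(rs):
    """Reliability of components in parallel: at least one must work."""
    q = 1.0
    for r in rs:
        q *= (1 - r)
    return 1 - q

def interval_reliability(struct, lo, hi):
    """For a monotone structure function, evaluating it at the componentwise
    lower/upper reliabilities yields bounds on system reliability."""
    return struct(lo), struct(hi)
```

For non-monotone structures no such shortcut exists, and the bounds would require optimizing over the whole uncertainty box.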
106

Analysis of General Polygon Boolean Operation Algorithms

Daněk, Tomáš January 2008
This thesis deals with general polygon boolean operation algorithms. Boolean operations are, e.g., intersection, union, or difference. A general polygon can be, e.g., a self-intersecting polygon with inner holes. Clipping of polygons against a rectangular window is probably the most familiar boolean operation on polygons. First, basic definitions are given. Then the principles of a selected set of boolean operation algorithms are reviewed. Finally, a thorough comparison of the algorithms is undertaken: performance as well as the ability to handle degenerate cases are tested. The output of this thesis is an overall evaluation of algorithm properties and a dynamic library containing implementations of all of the tested algorithms.
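The rectangular-window clipping mentioned above as the most familiar boolean operation can be sketched with the classic Sutherland-Hodgman algorithm. This minimal version handles only that special case, not the general self-intersecting polygons the thesis targets:

```python
def clip_to_window(poly, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman clipping against an axis-aligned window:
    clip the subject polygon against each window edge in turn."""
    def clip_edge(pts, inside, intersect):
        out = []
        for i, p in enumerate(pts):
            q = pts[i - 1]  # previous vertex (wraps around)
            if inside(p):
                if not inside(q):
                    out.append(intersect(q, p))  # entering the half-plane
                out.append(p)
            elif inside(q):
                out.append(intersect(q, p))      # leaving the half-plane
        return out

    def x_cross(q, p, x):  # intersection with vertical line x = const
        t = (x - q[0]) / (p[0] - q[0])
        return (x, q[1] + t * (p[1] - q[1]))

    def y_cross(q, p, y):  # intersection with horizontal line y = const
        t = (y - q[1]) / (p[1] - q[1])
        return (q[0] + t * (p[0] - q[0]), y)

    pts = list(poly)
    pts = clip_edge(pts, lambda p: p[0] >= xmin, lambda q, p: x_cross(q, p, xmin))
    pts = clip_edge(pts, lambda p: p[0] <= xmax, lambda q, p: x_cross(q, p, xmax))
    pts = clip_edge(pts, lambda p: p[1] >= ymin, lambda q, p: y_cross(q, p, ymin))
    pts = clip_edge(pts, lambda p: p[1] <= ymax, lambda q, p: y_cross(q, p, ymax))
    return pts
```

Vertices landing exactly on a clip edge can produce duplicate output points, a small example of the degenerate cases the thesis evaluates the general algorithms on.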
107

Identification Of Genes Involved In The Production Of Novel Antimicrobial Products Capable Of Inhibiting Multi-Drug Resistant Pathogens

Harris, Ryan A. 12 August 2019
No description available.
108

The Existence of a Discontinuous Homomorphism Requires a Strong Axiom of Choice

Andersen, Michael Steven 01 December 2014
Conner and Spencer used ultrafilters to construct homomorphisms between fundamental groups that could not be induced by continuous functions between the underlying spaces. We use methods from Shelah and Pawlikowski to prove that Conner and Spencer could not have constructed these homomorphisms with a weak version of the Axiom of Choice. This led us to define and examine a class of pathological objects that cannot be constructed without a strong version of the Axiom of Choice, which we call the class of inscrutable objects. Objects that do not need a strong version of the Axiom of Choice are scrutable. We show that the scrutable homomorphisms from the fundamental group of a Peano continuum are exactly the homomorphisms induced by a continuous function. We suspect that any proposed theorem whose proof does not use a strong Axiom of Choice cannot have an inscrutable counterexample.
109

Deep Morin Singularities of the McKean-Scovel Operator

Gomez Ardila, Luis Antonio 04 November 2021
The McKean-Scovel operator is the simplest nonlinear Sturm-Liouville operator acting on functions satisfying Dirichlet boundary conditions: its nonlinearity is simply squaring the incoming function. This text proves a conjecture dating from the late 1980s: its critical set consists only of Morin singularities, which can attain arbitrary depth.
110

Design of Polynomial-Based Filters for Continuously Variable Sample Rate Conversion with Applications in Synthetic Instrumentation

Hunter, Matthew 01 January 2008
In this work, the design and application of Polynomial-Based Filters (PBFs) for continuously variable Sample Rate Conversion (SRC) is studied. The major contributions of this work are summarized as follows. First, an explicit formula for the Fourier transform of both a symmetric and a nonsymmetric PBF impulse response with variable basis-function coefficients is derived. In the literature only one explicit formula is given, and that for a symmetric, even-length filter with fixed basis-function coefficients. The frequency-domain optimization of PBFs via linear programming has been proposed in the literature; however, the algorithm was not detailed, nor were explicit formulas derived. In this contribution, a minimax optimization procedure is derived for the frequency-domain optimization of a PBF with time-domain constraints. Explicit formulas are given for direct input to a linear-programming routine. Additionally, accompanying Matlab code implementing this optimization in terms of the derived formulas is given in the appendix.

In the literature, it has been pointed out that the frequency response of the Continuous-Time (CT) filter decays as frequency goes to infinity. It has also been observed that when implemented for SRC, the CT filter is sampled, resulting in CT frequency-response aliasing. Thus, for example, the stopband sidelobes of the Discrete-Time (DT) implementation rise above the CT designed level. Building on these observations, it is shown how the rolloff rate of the frequency response of a PBF can be adjusted by adding continuous derivatives to the impulse response. This is a great advantage, especially when the PBF is used for decimation, as the aliasing-band attenuation can be made to increase with frequency. It is shown how this technique can be used to dramatically reduce the effect of alias build-up in the passband. In addition, it is shown that as the number of continuous derivatives of the PBF increases, the resulting DT implementation more closely matches the CT design.

When implemented for SRC, samples from a PBF impulse response are computed by evaluating the polynomials at a so-called fractional interval, µ. In the literature, the effect of quantizing µ on the frequency response of the PBF has been studied, and formulas have been derived to determine the number of bits required to keep frequency-response distortion below prescribed bounds. Elsewhere, a formula has been given for the number of bits required to represent µ to obtain a given SRC accuracy for rational-factor SRC. In this contribution, it is shown that these two apparently competing requirements are in fact independent: the wordlength required for SRC accuracy need only be kept in the µ generator, which is a single accumulator, and the output of the µ generator may then be truncated prior to polynomial evaluation. This results in significant computational savings, as polynomial evaluation can require several multiplications and additions.

Under the heading of applications, a new Wideband Digital Downconverter (WDDC) for Synthetic Instruments (SI) is introduced. DDCs first tune to a signal's center frequency using a numerically controlled oscillator and mixer, and then zoom in to the bandwidth of interest using SRC. The SRC is required to produce continuously variable output sample rates from a fixed input sample rate over a large range. Current implementations accomplish this using a pre-filter, an arbitrary-factor resampler, and integer decimation filters. In this contribution, the SRC of the WDDC is simplified, reducing the computational requirements by a factor of three or more. In addition, it is shown how this system can be used to develop a novel, computationally efficient FFT-based spectrum analyzer with continuously variable frequency spans.

Finally, after the theoretical foundation is given, a real Field Programmable Gate Array (FPGA) implementation of a novel Arbitrary Waveform Generator (AWG) is presented. The new approach uses a fixed Digital-to-Analog Converter (DAC) sample clock in combination with an arbitrary-factor interpolator: waveforms created at any sample rate are interpolated to the fixed DAC sample rate in real time. As a result, the additional lower-performance analog hardware required in current approaches, namely multiple reconstruction filters and/or additional sample clocks, is avoided. Measured results are given confirming the performance of the system predicted by the theoretical design and simulation.
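The role of the fractional interval µ in polynomial-based resampling can be sketched with a cubic-Lagrange Farrow structure. This is a generic textbook interpolator, not one of the optimized PBF designs of the dissertation, and all parameters are illustrative:

```python
import numpy as np

def farrow_resample(x, ratio):
    """Resample x by an arbitrary factor using a cubic-Lagrange Farrow
    structure: the fractional interval mu selects the evaluation point of
    a piecewise polynomial fitted through four neighboring samples."""
    out = []
    step = 1.0 / ratio
    t = 1.0                      # start where a full 4-tap window exists
    while t < len(x) - 2:
        n = int(t)
        mu = t - n               # fractional interval in [0, 1)
        xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
        # Farrow branch outputs: coefficients of the polynomial in mu
        c0 = x0
        c1 = x1 - (xm1 / 3 + x0 / 2 + x2 / 6)
        c2 = (xm1 + x1) / 2 - x0
        c3 = (x2 - xm1) / 6 + (x0 - x1) / 2
        out.append(c0 + mu * (c1 + mu * (c2 + mu * c3)))  # Horner evaluation
        t += step
    return np.array(out)
```

Only the accumulator `t` needs the full SRC-accuracy wordlength; in a fixed-point implementation µ could be truncated before the Horner evaluation, which is the independence the abstract points out.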
