231 |
Incorporating Functionally Graded Materials and Precipitation Hardening into Microstructure Sensitive Design / Lyon, Mark Edward, 07 August 2003 (has links) (PDF)
The methods of MSD are applied to the design of functionally graded materials. Analysis models are presented to allow the design of a compliant derailleur for a case study, and constraints are placed on the design. Several methods are presented for relating elements of the microstructure to the properties of the material, including Taylor yield theory, Hill elastic bounds, and precipitation hardening. Applying n-point statistics to the MSD framework is also discussed. Some results are presented on the information content of the 2-point correlation statistics that follow from the methods used to integrate functionally graded materials into MSD. For the compliant beam case study, the best design (98%Al-2%Li) was a 97% improvement over the worst (100%Al). The improvements were primarily due to precipitation hardening, although anisotropy also significantly impacted the design. Under the design constraints, allowing the beam to be functionally graded had little effect on the overall design unless significant stiffening occurred along with particulate formation.
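As background for the property linkage mentioned above, here is a minimal sketch of first-order elastic bounds of the Voigt-Reuss-Hill type, one standard reading of "Hill elastic bounds"; the notation is illustrative and not taken from the thesis:

\[
C^{V} = \sum_i v_i\, C_i, \qquad
C^{R} = \Big(\sum_i v_i\, C_i^{-1}\Big)^{-1}, \qquad
C^{R} \preceq C^{\text{eff}} \preceq C^{V},
\]

where \(v_i\) and \(C_i\) are the volume fraction and stiffness tensor of constituent (or crystal orientation) \(i\), \(C^{\text{eff}}\) is the effective stiffness, and the Hill estimate takes the average \((C^{V}+C^{R})/2\).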
|
232 |
Fractional calculus operator and its applications to certain classes of analytic functions. A study on fractional derivative operator in analytic and multivalent functions. / Amsheri, Somia M.A., January 2013 (has links)
The main object of this thesis is to obtain numerous applications of the fractional derivative operator concerning analytic and p-valent (or multivalent) functions in the open unit disk by introducing new classes and deriving new properties. Our findings provide interesting new results and indicate extensions of a number of known results. In this thesis we investigate a wide class of problems. First, by making use of a certain fractional derivative operator, we define various new classes of p-valent functions with negative coefficients in the open unit disk, such as classes of p-valent starlike functions involving results of (Owa, 1985a), classes of p-valent starlike and convex functions involving the Hadamard product (or convolution), and classes of k-uniformly p-valent starlike and convex functions, obtaining coefficient estimates, distortion properties, extreme points, closure theorems, modified Hadamard products and inclusion properties. Also, we obtain radii of convexity, starlikeness and close-to-convexity for functions belonging to those classes. Moreover, we derive several new sufficient conditions for starlikeness and convexity of the fractional derivative operator by using certain results of (Owa, 1985a), convolution, Jack's lemma and Nunokawa's lemma. In addition, we obtain coefficient bounds for certain functionals of functions belonging to certain classes of p-valent functions of complex order which generalize the concepts of starlike, Bazilevič and non-Bazilevič functions. We use the method of differential subordination and superordination for analytic functions in the open unit disk in order to derive various new subordination, superordination and sandwich results involving the fractional derivative operator. Finally, we obtain some new strong differential subordination, superordination and sandwich results for p-valent functions associated with the fractional derivative operator by investigating appropriate classes of admissible functions. First-order linear strong differential subordination properties are studied. Further results are also studied, including strong differential subordination and superordination based on the fact that the coefficients of the functions associated with the fractional derivative operator are not constants but complex-valued functions.
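For reference, the fractional derivative operator in this line of work is typically the Owa operator; the normalization below is the standard one (Owa, 1978) and is supplied as background rather than quoted from the thesis:

\[
D_z^{\lambda} f(z) \;=\; \frac{1}{\Gamma(1-\lambda)}\,\frac{d}{dz}\int_0^{z} \frac{f(\zeta)}{(z-\zeta)^{\lambda}}\, d\zeta, \qquad 0 \le \lambda < 1,
\]

where \(f\) is analytic in the open unit disk and the multiplicity of \((z-\zeta)^{-\lambda}\) is removed by requiring \(\log(z-\zeta)\) to be real when \(z-\zeta>0\). In particular, \(D_z^{\lambda} z^{p} = \tfrac{\Gamma(p+1)}{\Gamma(p+1-\lambda)}\, z^{p-\lambda}\), the identity on which many of the coefficient estimates rest.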
|
233 |
PROBABLY APPROXIMATELY CORRECT BOUNDS FOR ESTIMATING MARKOV TRANSITION KERNELS / Imon Banerjee (17555685), 06 December 2023
<p dir="ltr">This thesis presents probably approximately correct (PAC) bounds on estimates of the transition kernels of Controlled Markov chains (CMC’s). CMC’s are a natural choice for modelling various industrial and medical processes, and are also relevant to reinforcement learning (RL). Learning the transition dynamics of CMC’s in a sample efficient manner is an important question that is open. This thesis aims to close this gap in knowledge in literature.</p><p dir="ltr">In Chapter 2, we lay the groundwork for later chapters by introducing the relevant concepts and defining the requisite terms. The two subsequent chapters focus on non-parametric estimation. </p><p dir="ltr">In Chapter 3, we restrict ourselves to a finitely supported CMC with d states and k controls and produce a general theory for minimax sample complexity of estimating the transition matrices.</p><p dir="ltr">In Chapter 4 we demonstrate the applicability of this theory by using it to recover the sample complexities of various controlled Markov chains, as well as RL problems.</p><p dir="ltr">In Chapter 5 we move to a continuous state and action spaces with compact supports. We produce a robust, non-parametric test to find the best histogram based estimator of the transition density; effectively reducing the problem into one of model selection based on empricial processes.</p><p dir="ltr">Finally, in Chapter 6 we move to a parametric and Bayesian regime, and restrict ourselves to Markov chains. Under this setting we provide a PAC-Bayes bound for estimating model parameters under tempered posteriors.</p>
|
234 |
Sensor Networks: Studies on the Variance of Estimation, Improving Event/Anomaly Detection, and Sensor Reduction Techniques Using Probabilistic Models / Chin, Philip Allen, 19 July 2012 (has links)
Sensor network performance is governed by the physical placement of sensors and their geometric relationship to the events they measure. To illustrate this, this thesis covers the following interconnected subjects: 1) graphical analysis of the variance of the estimation error caused by the physical characteristics of an acoustic target source and its geometric location relative to sensor arrays, 2) an event/anomaly detection method for time-aggregated point sensor data using a parametric Poisson distribution data model, 3) a sensor reduction or placement technique using Bellman optimal estimates of target agent dynamics and probabilistic training data (Goode, Chin, & Roan, 2011), and 4) transforming event-monitoring point sensor data into event detection and classification of the direction of travel using a contextual, joint-probability, causal-relationship, sliding-window, and geospatial intelligence (GEOINT) method. / Master of Science
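A minimal sketch of the Poisson-based detection idea in item 2, assuming a baseline window for fitting the rate and a fixed tail-probability threshold (both assumptions for illustration, not the thesis's exact procedure):

```python
import numpy as np
from scipy.stats import poisson

def detect_events(counts, train_window, alpha=0.001):
    """Flag time-aggregated point sensor counts that are improbably
    large under a Poisson model fit to a baseline window."""
    lam = np.mean(counts[:train_window])            # MLE of the Poisson rate
    tail = poisson.sf(np.asarray(counts) - 1, lam)  # P(X >= observed count)
    return [t for t, p in enumerate(tail) if p < alpha]

counts = [3, 5, 4, 2, 6, 4, 3, 5, 21, 4]  # hypothetical hourly counts
print(detect_events(counts, train_window=8))  # -> [8], the anomalous hour
```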
|
235 |
New Theoretical Techniques For Analyzing And Mitigating Password Cracking Attacks / Peiyuan Liu (18431811), 26 April 2024 (has links)
<p dir="ltr">Brute force guessing attacks continue to pose a significant threat to user passwords. To protect user passwords against brute force attacks, many organizations impose restrictions aimed at forcing users to select stronger passwords. Organizations may also adopt stronger hashing functions in an effort to deter offline brute force guessing attacks. However, these defenses induce trade-offs between security, usability, and the resources an organization is willing to investigate to protect passwords. In order to make informed password policy decisions, it is crucial to understand the distribution over user passwords and how policy updates will impact this password distribution and/or the strategy of a brute force attacker.</p><p dir="ltr">This first part of this thesis focuses on developing rigorous statistical tools to analyze user password distributions and the behavior of brute force password attackers. In particular, we first develop several rigorous statistical techniques to upper and lower bound the guessing curve of an optimal attacker who knows the user password distribution and can order guesses accordingly. We apply these techniques to analyze eight password datasets and two PIN datasets. Our empirical analysis demonstrates that our statistical techniques can be used to evaluate password composition policies, compare the strength of different password distributions, quantify the impact of applying PIN blocklists, and help tune hash cost parameters. A real world attacker may not have perfect knowledge of the password distribution. Prior work introduced an efficient Monte Carlo technique to estimate the guessing number of a password under a particular password cracking model, i.e., the number of guesses an attacker would check before this particular password. This tool can also be used to generate password guessing curves, but there is no absolute guarantee that the guessing number and the resulting guessing curves are accurate. Thus, we propose a tool called Confident Monte Carlo that uses rigorous statistical techniques to upper and lower bound the guessing number of a particular password as well as the attacker's entire guessing curve. Our empirical analysis also demonstrate that this tool can be used to help inform password policy decisions, e.g., identifying and warning users with weaker passwords, or tuning hash cost parameters.</p><p dir="ltr">The second part of this thesis focuses on developing stronger password hashing algorithms to protect user passwords against offline brute force attacks. In particular, we establish that the memory hard function Scrypt, which has been widely deployed as password hash function, is maximally bandwidth hard. We also present new techniques to construct and analyze depth robust graph with improved concrete parameters. Depth robust graph play an essential rule in the design and analysis of memory hard functions.</p>
|
236 |
The exponent of Hölder calmness for polynomial systems / Heerda, Jan, 27 April 2012 (has links)
This thesis is concerned with the analysis of Hölder calmness, a stability property obtained as a generalization of the concept of calmness. On the basis of its characterization for (sub)level sets of functions, we analyze, under a Hölder calmness assumption, procedures for determining points in such sets. Sufficient conditions for Hölder calmness of (sub)level sets and of inequality systems are also given and examined. Further, since Hölder calmness of (nonempty) solution sets of finite inequality systems may be described in terms of (local) error bounds, we extend the local results to global ones. As an application we investigate the case of (sub)level sets of polynomials and of general solution sets of polynomial equations and inequalities. A concrete question we want to answer here is how the maximal degree of the involved polynomials is connected to the exponent of Hölder calmness, i.e., the exponent occurring in the error bound, for the system in question.
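For reference, one standard formalization of the property studied here (notation assumed, not quoted from the thesis): a set-valued map \(S\colon P \rightrightarrows X\) is Hölder calm of order \(q > 0\) at \((\bar p, \bar x) \in \operatorname{gph} S\) if there exist \(L, \varepsilon > 0\) such that

\[
S(p) \cap B(\bar x, \varepsilon) \;\subseteq\; S(\bar p) + L\, d(p, \bar p)^{q}\, B_X \qquad \text{for all } p \text{ near } \bar p,
\]

where \(B_X\) is the closed unit ball; \(q = 1\) recovers ordinary calmness. The concluding question then asks how the exponent \(q\) achievable for solution sets of polynomial systems depends on the maximal degree of the polynomials involved.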
|
237 |
Fine-Grained Parameterized Algorithms on Width Parameters and Beyond / Hegerfeld, Falko, 25 October 2023
The question at the heart of parameterized complexity is how input structure governs the complexity of a problem. We investigate this question from a fine-grained perspective and study problem-parameter combinations with single-exponential running time, i.e., time a^k n^c, where n is the input size, k the parameter value, and a and c are positive constants. Our goal is to determine the optimal base a for a given combination. For many connectivity problems such as Connected Vertex Cover or Connected Dominating Set, the optimal base is known relative to treewidth. Treewidth belongs to the class of width parameters, which naturally admit dynamic programming algorithms.
In the first part of this thesis, we study how the optimal base changes for these connectivity problems when moving to more expressive width parameters. We provide new parameterized dynamic programming algorithms and (conditional) lower bounds to determine the optimal base; in particular, for the parameter sequence treewidth, modular-treewidth, clique-width, we obtain that the optimal base for Connected Vertex Cover starts at 3, increases to 5, and then to 6, whereas the optimal base for Connected Dominating Set starts at 4, stays at 4, and then increases to 5.
In the second part, we go beyond width parameters and study more restrictive parameterizations like depth parameters and modulators. For treedepth, we design space-efficient branching algorithms. The lower bound techniques for width parameterizations do not carry over to these more restrictive parameterizations and as a result, only a few optimal bases are known. To remedy this, we study standard vertex-deletion problems. In particular, we show that the optimal base of Odd Cycle Transversal parameterized by a modulator to treewidth 2 is 3. Additionally, we show that similar lower bounds can be obtained in the realm of dense graphs by considering modulators consisting of so-called twinclasses.
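To fix what "optimal base" means (our illustrative rendering, consistent with the abstract): a problem-parameter combination has optimal base \(a\) if it admits an algorithm with running time

\[
a^{k} \cdot n^{\mathcal{O}(1)},
\]

while, for every \(\varepsilon > 0\), running time \((a - \varepsilon)^{k} \cdot n^{\mathcal{O}(1)}\) is ruled out under a standard hardness hypothesis such as SETH. The sequence reported above for Connected Vertex Cover thus asserts algorithms with bases 3, 5, and 6 relative to treewidth, modular-treewidth, and clique-width respectively, together with matching conditional lower bounds.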
|
238 |
A framework for estimating risk / Kroon, Rodney Stephen, 03 1900 (has links)
Thesis (PhD (Statistics and Actuarial Sciences))--Stellenbosch University, 2008. / We consider the problem of model assessment by risk estimation. Various approaches to risk estimation are considered in a unified framework. This framework is an extension of a decision-theoretic framework proposed by David Haussler. Point and interval estimation based on test samples and training samples is discussed, with interval estimators being classified based on the measure of deviation they attempt to bound.

The main contribution of this thesis is in the realm of training sample interval estimators, particularly covering number-based and PAC-Bayesian interval estimators. The thesis discusses a number of approaches to obtaining such estimators. The first type of training sample interval estimator to receive attention is estimators based on classical covering number arguments. A number of these estimators were generalized in various directions. Typical generalizations included: extension of results from misclassification loss to other loss functions; extending results to allow arbitrary ghost sample size; extending results to allow arbitrary scale in the relevant covering numbers; and extending results to allow arbitrary choice in the use of symmetrization lemmas.

These extensions were applied to covering number-based estimators for various measures of deviation, as well as for the special cases of misclassification loss estimators, realizable case estimators, and margin bounds. Extended results were also provided for stratification by (algorithm- and data-dependent) complexity of the decision class. In order to facilitate application of these covering number-based bounds, a discussion of various complexity dimensions and approaches to obtaining bounds on covering numbers is also presented.

The second type of training sample interval estimator discussed in the thesis is Rademacher bounds. These bounds use advanced concentration inequalities, so a chapter discussing such inequalities is provided. Our discussion of Rademacher bounds leads to the presentation of an alternative, slightly stronger, form of the core result used for deriving local Rademacher bounds, by avoiding a few unnecessary relaxations.

Next, we turn to a discussion of PAC-Bayesian bounds. Using an approach developed by Olivier Catoni, we develop new PAC-Bayesian bounds based on results underlying Hoeffding's inequality. By utilizing Catoni's concept of "exchangeable priors", these results allowed the extension of a covering number-based result to averaging classifiers, as well as its corresponding algorithm- and data-dependent result.

The last contribution of the thesis is the development of a more flexible shell decomposition bound: by using Hoeffding's tail inequality rather than Hoeffding's relative entropy inequality, we extended the bound to general loss functions, allowed the use of an arbitrary number of bins, and introduced between-bin and within-bin "priors".

Finally, to illustrate the calculation of these bounds, we applied some of them to the UCI spam classification problem, using decision trees and boosted stumps.
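Since the shell decomposition discussion contrasts Hoeffding's tail inequality with his relative entropy inequality, the two standard forms for i.i.d. [0,1]-valued variables with mean \(p\) and empirical mean \(\hat p_n\) are recorded here for reference (standard statements, not results of the thesis):

\[
\Pr\big(\hat p_n \ge p + t\big) \le e^{-2nt^{2}},
\qquad
\Pr\big(\hat p_n \ge q\big) \le e^{-n\,\mathrm{kl}(q\,\|\,p)} \quad (q > p),
\]

where \(\mathrm{kl}(q\,\|\,p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}\). The tail form applies verbatim to general bounded losses, which is what permits the extension to general loss functions described above.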
|
239 |
Extreme bounds analysis and firearm control: the effect of Bill C-68 on homicides in Quebec / Linteau, Isabelle, 12 1900 (has links)
Context and objectives. Laws with extensive background checks that make the registration of all guns mandatory have been adopted by some governments to prevent firearm-related homicides. However, methodological flaws in previous evaluations call into question the potential of such laws to prevent gun homicides. Taking these limitations into account, the main objective of this study is to estimate the effect of Bill C-68 on homicides committed in the Province of Quebec, Canada, between 1974 and 2006. Methodology. Using extreme bounds analysis, we assess the effect of Bill C-68 on homicides. Estimates of the immediate and gradual effects of the law are based on a total of 372 equations. More precisely, interrupted time series analyses were conducted using all possible combinations of independent variables, in order to overcome biases related to arbitrary model specification. Results. We found that Bill C-68 is associated with a significant and gradual decline in homicides committed with a long gun (either a rifle or a shotgun). The substitution effects are not robust across different model specifications. Patterns observed in homicides involving restricted or prohibited firearms suggest that they are influenced by different factors not considered in our analyses. Conclusion. The results suggest that enhanced firearm control laws are an effective tool for preventing homicides. The lack of tactical displacement supports the concept of the firearm as a crime facilitator and suggests that not all homicides are carefully planned. Other studies are, however, needed to pinpoint the law's provisions accountable for the decrease in homicides.
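A minimal sketch of the extreme bounds logic described above: regress the outcome on the focus variable over every subset of doubtful controls and record the extreme coefficient bounds. Variable names, the simulated data, and the robustness rule are illustrative assumptions, not the study's specification.

```python
import itertools
import numpy as np

def extreme_bounds(y, focus, controls):
    """OLS of y on [1, focus] + each subset of controls; return the
    extreme bounds beta_hat +/- 2*SE of the focus coefficient."""
    n, lo, hi = len(y), np.inf, -np.inf
    for r in range(len(controls) + 1):
        for subset in itertools.combinations(controls, r):
            X = np.column_stack([np.ones(n), focus, *subset])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            sigma2 = resid @ resid / (n - X.shape[1])
            se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
            lo, hi = min(lo, beta[1] - 2 * se), max(hi, beta[1] + 2 * se)
    return lo, hi  # the effect is deemed robust if the bounds share a sign

rng = np.random.default_rng(1)
n = 200
law = (np.arange(n) > 120).astype(float)            # post-intervention dummy
z1, z2 = rng.normal(size=n), rng.normal(size=n)     # doubtful controls
y = 10 - 1.5 * law + 0.5 * z1 + rng.normal(size=n)  # homicide-like series
print(extreme_bounds(y, law, [z1, z2]))             # both bounds < 0 -> robust
```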
|
240 |
Communication complexity over a channel with delay / Lapointe, Rébecca, 02 1900 (has links)
We introduce a new communication complexity model in which we want to determine how much communication time two players need in order to execute arbitrary tasks over a channel with delay d. We establish a few basic lower and upper bounds and compare this new model to existing models such as the classical and quantum two-party models of communication. We show that the standard communication complexity of a function, up to a multiplicative factor of d/lg d, constitutes an upper bound on its communication complexity over a delayed channel. We introduce a few examples in which a clever strategy that exploits the delay procures a significant advantage over the naive implementation of an optimal communication protocol. We then show that a delayed channel can be used to implement a cryptographic bit swap, but is insufficient on its own to implement an oblivious transfer scheme.
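The simulation result stated above can be summarized as follows, with \(T_d\) our shorthand (not the thesis's notation) for the time complexity over a channel with delay \(d\) and \(D(f)\) the standard two-party communication complexity:

\[
T_d(f) \;\le\; \mathcal{O}\!\left(\frac{d}{\lg d}\right) \cdot D(f),
\]

improving on the naive simulation, which waits out the delay for each transmitted bit and costs on the order of \(d \cdot D(f)\).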
|