231

Fractional calculus operator and its applications to certain classes of analytic functions. A study on fractional derivative operator in analytic and multivalent functions.

Amsheri, Somia M.A. January 2013
The main object of this thesis is to obtain numerous applications of the fractional derivative operator concerning analytic and p-valent (or multivalent) functions in the open unit disk by introducing new classes and deriving new properties. Our findings provide interesting new results and indicate extensions of a number of known results. In this thesis we investigate a wide class of problems. First, by making use of a certain fractional derivative operator, we define various new classes of p-valent functions with negative coefficients in the open unit disk, such as classes of p-valent starlike functions involving results of (Owa, 1985a), classes of p-valent starlike and convex functions involving the Hadamard product (or convolution), and classes of k-uniformly p-valent starlike and convex functions, obtaining coefficient estimates, distortion properties, extreme points, closure theorems, modified Hadamard products and inclusion properties. Also, we obtain radii of convexity, starlikeness and close-to-convexity for functions belonging to those classes. Moreover, we derive several new sufficient conditions for starlikeness and convexity of the fractional derivative operator by using certain results of (Owa, 1985a), convolution, Jack's lemma and Nunokawa's lemma. In addition, we obtain coefficient bounds for a certain functional of functions belonging to classes of p-valent functions of complex order which generalize the concepts of starlike, Bazilevič and non-Bazilevič functions. We use the method of differential subordination and superordination for analytic functions in the open unit disk in order to derive various new subordination, superordination and sandwich results involving the fractional derivative operator. Finally, we obtain some new strong differential subordination, superordination and sandwich results for p-valent functions associated with the fractional derivative operator by investigating appropriate classes of admissible functions. First-order linear strong differential subordination properties are studied. Further results on strong differential subordination and superordination, based on the fact that the coefficients of the functions associated with the fractional derivative operator are not constants but complex-valued functions, are also studied.
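The abstract does not reproduce the operator itself. For orientation, work in this area typically builds on Owa's fractional derivative of order λ; the following is that standard background definition, not a quotation from the thesis:

```latex
% Owa's fractional derivative of order \lambda for f analytic in the open
% unit disk (the multiplicity of (z-\zeta)^{-\lambda} is removed by
% requiring \log(z-\zeta) to be real when z-\zeta > 0):
\[
  D_z^{\lambda} f(z) = \frac{1}{\Gamma(1-\lambda)}\,\frac{d}{dz}
  \int_0^{z} \frac{f(\zeta)}{(z-\zeta)^{\lambda}}\,d\zeta,
  \qquad 0 \le \lambda < 1 .
\]
% In particular, on power functions it acts as
\[
  D_z^{\lambda} z^{p} = \frac{\Gamma(p+1)}{\Gamma(p+1-\lambda)}\, z^{p-\lambda} .
\]
```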
232

PROBABLY APPROXIMATELY CORRECT BOUNDS FOR ESTIMATING MARKOV TRANSITION KERNELS

Imon Banerjee (17555685) 06 December 2023
<p dir="ltr">This thesis presents probably approximately correct (PAC) bounds on estimates of the transition kernels of Controlled Markov chains (CMC’s). CMC’s are a natural choice for modelling various industrial and medical processes, and are also relevant to reinforcement learning (RL). Learning the transition dynamics of CMC’s in a sample efficient manner is an important question that is open. This thesis aims to close this gap in knowledge in literature.</p><p dir="ltr">In Chapter 2, we lay the groundwork for later chapters by introducing the relevant concepts and defining the requisite terms. The two subsequent chapters focus on non-parametric estimation. </p><p dir="ltr">In Chapter 3, we restrict ourselves to a finitely supported CMC with d states and k controls and produce a general theory for minimax sample complexity of estimating the transition matrices.</p><p dir="ltr">In Chapter 4 we demonstrate the applicability of this theory by using it to recover the sample complexities of various controlled Markov chains, as well as RL problems.</p><p dir="ltr">In Chapter 5 we move to a continuous state and action spaces with compact supports. We produce a robust, non-parametric test to find the best histogram based estimator of the transition density; effectively reducing the problem into one of model selection based on empricial processes.</p><p dir="ltr">Finally, in Chapter 6 we move to a parametric and Bayesian regime, and restrict ourselves to Markov chains. Under this setting we provide a PAC-Bayes bound for estimating model parameters under tempered posteriors.</p>
233

Fine-Grained Parameterized Algorithms on Width Parameters and Beyond

Hegerfeld, Falko 25 October 2023
The question at the heart of parameterized complexity is how input structure governs the complexity of a problem. We investigate this question from a fine-grained perspective and study problem-parameter combinations with single-exponential running time, i.e., time a^k n^c, where n is the input size, k the parameter value, and a and c are positive constants. Our goal is to determine the optimal base a for a given combination. For many connectivity problems, such as Connected Vertex Cover or Connected Dominating Set, the optimal base is known relative to treewidth. Treewidth belongs to the class of width parameters, which naturally admit dynamic programming algorithms. In the first part of this thesis, we study how the optimal base changes for these connectivity problems when going to more expressive width parameters. We provide new parameterized dynamic programming algorithms and (conditional) lower bounds to determine the optimal base. In particular, we obtain for the parameter sequence treewidth, modular-treewidth, clique-width that the optimal base for Connected Vertex Cover starts at 3, increases to 5, and then to 6, whereas the optimal base for Connected Dominating Set starts at 4, stays at 4, and then increases to 5. In the second part, we go beyond width parameters and study more restrictive parameterizations like depth parameters and modulators. For treedepth, we design space-efficient branching algorithms. The lower bound techniques for width parameterizations do not carry over to these more restrictive parameterizations, and as a result only a few optimal bases are known. To remedy this, we study standard vertex-deletion problems. In particular, we show that the optimal base of Odd Cycle Transversal parameterized by a modulator to treewidth 2 is 3. Additionally, we show that similar lower bounds can be obtained in the realm of dense graphs by considering modulators consisting of so-called twinclasses.
234

Sensor Networks: Studies on the Variance of Estimation, Improving Event/Anomaly Detection, and Sensor Reduction Techniques Using Probabilistic Models

Chin, Philip Allen 19 July 2012
Sensor network performance is governed by the physical placement of sensors and their geometric relationship to the events they measure. To illustrate this, this thesis covers the following interconnected subjects: 1) graphical analysis of the variance of the estimation error caused by the physical characteristics of an acoustic target source and its geometric location relative to sensor arrays, 2) an event/anomaly detection method for time-aggregated point sensor data using a parametric Poisson distribution data model, 3) a sensor reduction or placement technique using Bellman optimal estimates of target agent dynamics and probabilistic training data (Goode, Chin, & Roan, 2011), and 4) transforming event-monitoring point sensor data into event detection and classification of the direction of travel using a contextual, joint probability, causal relationship, sliding window, and geospatial intelligence (GEOINT) method. / Master of Science
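A minimal sketch of the second item under the stated Poisson data model; the thresholding rule here is an illustrative assumption, not the thesis's exact detector:

```python
import numpy as np
from scipy.stats import poisson

def flag_anomalies(counts, alpha=0.01):
    """Flag time-aggregated point-sensor counts that are improbable
    under a Poisson model fitted to the data."""
    counts = np.asarray(counts)
    lam = counts.mean()                       # MLE of the Poisson rate
    p_high = poisson.sf(counts - 1, lam)      # P(X >= count)
    p_low = poisson.cdf(counts, lam)          # P(X <= count)
    p = np.minimum(1.0, 2 * np.minimum(p_high, p_low))  # two-sided
    return p < alpha                          # True where count is anomalous

# e.g. flag_anomalies([3, 4, 2, 5, 3, 19, 4]) flags the 19 at alpha = 0.01
```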
235

New Theoretical Techniques For Analyzing And Mitigating Password Cracking Attacks

Peiyuan Liu (18431811) 26 April 2024
<p dir="ltr">Brute force guessing attacks continue to pose a significant threat to user passwords. To protect user passwords against brute force attacks, many organizations impose restrictions aimed at forcing users to select stronger passwords. Organizations may also adopt stronger hashing functions in an effort to deter offline brute force guessing attacks. However, these defenses induce trade-offs between security, usability, and the resources an organization is willing to investigate to protect passwords. In order to make informed password policy decisions, it is crucial to understand the distribution over user passwords and how policy updates will impact this password distribution and/or the strategy of a brute force attacker.</p><p dir="ltr">This first part of this thesis focuses on developing rigorous statistical tools to analyze user password distributions and the behavior of brute force password attackers. In particular, we first develop several rigorous statistical techniques to upper and lower bound the guessing curve of an optimal attacker who knows the user password distribution and can order guesses accordingly. We apply these techniques to analyze eight password datasets and two PIN datasets. Our empirical analysis demonstrates that our statistical techniques can be used to evaluate password composition policies, compare the strength of different password distributions, quantify the impact of applying PIN blocklists, and help tune hash cost parameters. A real world attacker may not have perfect knowledge of the password distribution. Prior work introduced an efficient Monte Carlo technique to estimate the guessing number of a password under a particular password cracking model, i.e., the number of guesses an attacker would check before this particular password. This tool can also be used to generate password guessing curves, but there is no absolute guarantee that the guessing number and the resulting guessing curves are accurate. Thus, we propose a tool called Confident Monte Carlo that uses rigorous statistical techniques to upper and lower bound the guessing number of a particular password as well as the attacker's entire guessing curve. Our empirical analysis also demonstrate that this tool can be used to help inform password policy decisions, e.g., identifying and warning users with weaker passwords, or tuning hash cost parameters.</p><p dir="ltr">The second part of this thesis focuses on developing stronger password hashing algorithms to protect user passwords against offline brute force attacks. In particular, we establish that the memory hard function Scrypt, which has been widely deployed as password hash function, is maximally bandwidth hard. We also present new techniques to construct and analyze depth robust graph with improved concrete parameters. Depth robust graph play an essential rule in the design and analysis of memory hard functions.</p>
236

A framework for estimating risk

Kroon, Rodney Stephen 03 1900
Thesis (PhD (Statistics and Actuarial Sciences))--Stellenbosch University, 2008. / We consider the problem of model assessment by risk estimation. Various approaches to risk estimation are considered in a unified framework. This framework is an extension of a decision-theoretic framework proposed by David Haussler. Point and interval estimation based on test samples and training samples is discussed, with interval estimators being classified based on the measure of deviation they attempt to bound. The main contribution of this thesis is in the realm of training sample interval estimators, particularly covering number-based and PAC-Bayesian interval estimators. The thesis discusses a number of approaches to obtaining such estimators. The first type of training sample interval estimator to receive attention is estimators based on classical covering number arguments. A number of these estimators were generalized in various directions. Typical generalizations included: extension of results from misclassification loss to other loss functions; extending results to allow arbitrary ghost sample size; extending results to allow arbitrary scale in the relevant covering numbers; and extending results to allow arbitrary parameter choices in the use of symmetrization lemmas. These extensions were applied to covering number-based estimators for various measures of deviation, as well as for the special cases of misclassification loss estimators, realizable case estimators, and margin bounds. Extended results were also provided for stratification by (algorithm- and data-dependent) complexity of the decision class. In order to facilitate application of these covering number-based bounds, a discussion of various complexity dimensions and approaches to obtaining bounds on covering numbers is also presented. The second type of training sample interval estimator discussed in the thesis is Rademacher bounds. These bounds use advanced concentration inequalities, so a chapter discussing such inequalities is provided. Our discussion of Rademacher bounds leads to the presentation of an alternative, slightly stronger, form of the core result used for deriving local Rademacher bounds, by avoiding a few unnecessary relaxations. Next, we turn to a discussion of PAC-Bayesian bounds. Using an approach developed by Olivier Catoni, we develop new PAC-Bayesian bounds based on results underlying Hoeffding's inequality. By utilizing Catoni's concept of "exchangeable priors", these results allowed the extension of a covering number-based result to averaging classifiers, as well as its corresponding algorithm- and data-dependent result. The last contribution of the thesis is the development of a more flexible shell decomposition bound: by using Hoeffding's tail inequality rather than Hoeffding's relative entropy inequality, we extended the bound to general loss functions, allowed the use of an arbitrary number of bins, and introduced between-bin and within-bin "priors". Finally, to illustrate the calculation of these bounds, we applied some of them to the UCI spam classification problem, using decision trees and boosted stumps.
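For orientation, the simplest member of the family discussed above is the classical Hoeffding test-sample bound, sketched here; it is a textbook building block that the thesis generalizes, not one of its new bounds:

```python
import math

def hoeffding_risk_bound(empirical_risk, n, delta=0.05):
    """With prob. >= 1 - delta, true risk <= empirical_risk + slack,
    for a [0, 1]-valued loss on an i.i.d. test sample of size n."""
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return empirical_risk + slack

# e.g. 0.08 empirical error on n = 2000 held-out points:
# hoeffding_risk_bound(0.08, 2000) -> about 0.107 at 95% confidence
```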
237

Analyse des bornes extrêmes et le contrôle des armes à feu : l’effet de la Loi C-68 sur les homicides au Québec / Extreme bounds analysis and gun control: the effect of Bill C-68 on homicides in Quebec

Linteau, Isabelle 12 1900
Context and objectives. Laws with extensive background checks that make the registration of all guns mandatory have been adopted by some governments to prevent firearm-related homicides. On the other hand, methodological flaws in previous evaluations call into question the potential of such laws to prevent gun homicides. Taking those limitations into account, the main objective of this study is to estimate the effect of Bill C-68 on homicides committed in the Province of Quebec, Canada, between 1974 and 2006. Methodology. Using extreme bounds analysis, we assess the effect of Bill C-68 on homicides. Estimates of the immediate and gradual effects of the law are based on a total of 372 equations. More precisely, interrupted time series analyses were conducted using all possible variable combinations, in order to overcome biases related to model specification. Results. We found that Bill C-68 is associated with a significant and gradual decline in homicides committed with a long gun (either a rifle or a shotgun). The substitution effects are not robust with respect to different model specifications. Patterns observed in homicides involving restricted or prohibited firearms suggest that they are influenced by different factors not considered in our analyses. Conclusion. The results suggest that enhanced firearm control laws are an effective tool to prevent homicides. The lack of tactical displacement supports the concept of the firearm as a crime facilitator and suggests that not all homicides are carefully planned. Other studies are nevertheless needed to pinpoint the law provisions accountable for the decrease in homicides.
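A minimal sketch of the extreme-bounds idea described in the methodology, with illustrative variable names rather than the study's actual covariates: the law effect is kept in every model while all subsets of candidate controls are toggled, and the extreme bounds are the smallest and largest estimated effect across models.

```python
from itertools import combinations
import numpy as np

def extreme_bounds(y, law_dummy, controls):
    """y: (T,) homicide series; law_dummy: (T,) 0/1 post-law indicator;
    controls: dict name -> (T,) candidate control series."""
    names, effects = list(controls), []
    for r in range(len(names) + 1):
        for subset in combinations(names, r):      # every model specification
            X = np.column_stack([np.ones_like(y, dtype=float), law_dummy]
                                + [controls[n] for n in subset])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            effects.append(beta[1])                # coefficient on the law dummy
    return min(effects), max(effects)              # the extreme bounds
```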
238

Complexité de la communication sur un canal avec délai / Communication complexity over a channel with delay

Lapointe, Rébecca 02 1900
We introduce a new communication complexity model in which we want to determine how much communication time two players need in order to execute arbitrary tasks over a channel with delay d. We establish a few basic lower and upper bounds and compare this new model to existing models such as the classical and quantum two-party models of communication. We show that the standard communication complexity of a function, modulo a factor of d/lg d, constitutes an upper bound on its communication complexity over a delayed channel. We introduce a few examples in which a clever strategy exploiting the dead time procures a significant advantage over the naïve implementation of an optimal communication protocol. We then show that a delayed channel can be used to implement a cryptographic bit swap, but is insufficient on its own to implement an oblivious transfer scheme.
239

Homogénéisation des composites linéaires : Etude des comportements apparents et effectif / Homogenization of linear elastic matrix-inclusion composites : a study of their apparent and effective behaviors

Salmi, Moncef 02 July 2012
This work is devoted to the derivation of improved bounds on the effective behavior of random linear elastic matrix-inclusion composites. In order to bound the effective behavior, we present a new numerical approach, inspired by the work of Huet (J. Mech. Phys. Solids 1990; 38:813-41), which relies on the computation of the apparent behaviors associated with non-square (or non-cubic) volume elements (VEs) composed of Voronoï cell assemblages, each cell consisting of a single inclusion surrounded by matrix. Such non-square VEs avoid any direct application of boundary conditions to the particles, which is responsible for the artificial overestimation of the apparent behaviors observed for square VEs. By making use of the classical bounding theorems of linear elasticity and appropriate averaging procedures, new bounds are derived from ensemble averages of the apparent behaviors associated with non-square VEs. Their application to a two-phase composite composed of an isotropic matrix and aligned identical fibers, randomly and isotropically distributed in the transverse plane, leads to sharper bounds than those obtained by Huet. Then, using this new numerical approach, a statistical study of the apparent behavior is carried out by means of Monte Carlo simulations. Finally, relying on the trends derived from this study, some proposals for defining RVE criteria are presented.
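A highly schematic sketch of the ensemble-averaging idea: here simple Voigt/Reuss mixture estimates stand in for the finite-element computations of apparent behavior on Voronoï-cell volume elements, so the moduli and statistics are placeholders only.

```python
import numpy as np

rng = np.random.default_rng(0)
E_matrix, E_fiber = 3.0, 80.0        # phase Young's moduli (GPa), assumed

def apparent_bounds(volume_fraction_fiber):
    """Stand-in for the apparent behavior of one volume element."""
    f = volume_fraction_fiber
    upper = f * E_fiber + (1 - f) * E_matrix          # uniform-strain (Voigt)
    lower = 1.0 / (f / E_fiber + (1 - f) / E_matrix)  # uniform-stress (Reuss)
    return lower, upper

# Ensemble (Monte Carlo) average over random volume elements, modelled here
# as a random local fiber fraction per realization:
fractions = rng.normal(0.4, 0.05, size=1000).clip(0, 1)
lowers, uppers = np.transpose([apparent_bounds(f) for f in fractions])
print(f"effective E bounded in [{lowers.mean():.2f}, {uppers.mean():.2f}] GPa")
```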
240

Advanced Signal Processing Methods for GNSS Positioning with NLOS/Multipath Signals / Approches avancées de traitement de signal pour la navigation GNSS en présence des signaux multi-trajets ou sans ligne de vue directe (NLOS)

Kbayer, Nabil 09 October 2018
Recent trends in Global Navigation Satellite System (GNSS) applications in urban environments have led to a proliferation of studies in this field that seek to mitigate the adverse effects of non-line-of-sight (NLOS) reception. For such harsh urban settings, this dissertation proposes an original methodology for the constructive use of degraded MP/NLOS signals, instead of their elimination, by applying advanced signal processing techniques or by using additional information from a 3D GNSS simulator. First, we studied different signal processing frameworks, namely robust estimation and regularized estimation, to tackle this GNSS problem without using external information. Then, we established the maximum achievable level (lower bounds) of GNSS stand-alone positioning accuracy in the presence of MP/NLOS conditions. To further enhance this accuracy level, we proposed to compensate for the MP/NLOS errors using a 3D GNSS signal propagation simulator to predict the biases and integrate them as observations in the estimation method, either by correcting degraded measurements or by scoring an array of candidate positions. In addition, new metrics on the maximum acceptable errors in MP/NLOS error predictions obtained from GNSS simulations have been established. Experimental results using real GNSS data in a deep urban environment show that using this additional information provides a good positioning performance enhancement, despite the intensive computational load of 3D GNSS simulation.
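A minimal sketch of the measurement-correction route described above: simulator-predicted MP/NLOS pseudorange biases are subtracted from the measurements before a standard iterative least-squares position fix. The bias vector and satellite geometry are assumed inputs, not the dissertation's actual interface.

```python
import numpy as np

def ls_position(sat_pos, pseudoranges, predicted_bias, iters=10):
    """sat_pos: (m, 3) satellite ECEF positions; pseudoranges: (m,);
    predicted_bias: (m,) simulator-predicted MP/NLOS biases (meters).
    Solves for receiver position and clock bias via Gauss-Newton."""
    x = np.zeros(4)                              # [x, y, z, clock_bias]
    pr = pseudoranges - predicted_bias           # corrected measurements
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residual = pr - (ranges + x[3])
        # Jacobian: unit line-of-sight vectors plus a clock-bias column
        H = np.column_stack([(x[:3] - sat_pos) / ranges[:, None],
                             np.ones(len(sat_pos))])
        x += np.linalg.lstsq(H, residual, rcond=None)[0]
    return x[:3]
```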
