31

Smartphone User Privacy Preserving through Crowdsourcing

Rashidi, Bahman 01 January 2018 (has links)
In the current Android architecture, users must decide for themselves whether an app is safe to use. Expert users can make savvy decisions and avoid unnecessary private data breaches, but the majority of regular users are not technically capable of weighing the privacy implications, or do not care to, and so cannot make safe decisions. To assist this crowd, we propose a permission control framework based on crowdsourcing. At its core, our framework runs new apps under a probation mode without granting their permission requests up front, and it recommends whether to accept each permission request based on decisions from peer expert users. To identify expert users, we propose an expertise rating algorithm using a transitional Bayesian inference model; the recommendation is then based on aggregated expert responses and their confidence levels. To complete the framework design, this thesis also includes a solution for estimating the risk of Android apps based on behavior analysis. To eliminate the negative impact of dishonest app owners, we also propose a bot-user detection method that makes it harder to skew the overall recommendations by feeding false responses through bot users. This work further covers a multi-view permission notification design that customizes the app safety notification interface to users' needs, and an app recommendation method that suggests safe and usable alternative apps to users.
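
The aggregation step lends itself to a brief illustration. The sketch below is a minimal, hypothetical reading of the idea: each expert's accept/deny response is weighted by an expertise rating and a self-reported confidence, and the framework recommends whichever decision carries more weight. The field names, weighting scheme, and threshold are assumptions for illustration, not the thesis's actual design.

```python
from dataclasses import dataclass

@dataclass
class ExpertResponse:
    accept: bool      # expert's verdict on the permission request
    expertise: float  # rating from a (hypothetical) Bayesian rating model, in [0, 1]
    confidence: float # expert's self-reported confidence, in [0, 1]

def recommend(responses: list[ExpertResponse], threshold: float = 0.5) -> bool:
    """Recommend granting a permission request if the expertise- and
    confidence-weighted vote for 'accept' exceeds the threshold."""
    total = sum(r.expertise * r.confidence for r in responses)
    if total == 0:
        return False  # no informative responses: stay in probation mode
    accept_weight = sum(r.expertise * r.confidence for r in responses if r.accept)
    return accept_weight / total > threshold

# Example: two strong experts deny, one weak expert accepts.
votes = [ExpertResponse(False, 0.9, 0.8),
         ExpertResponse(False, 0.8, 0.9),
         ExpertResponse(True, 0.3, 0.5)]
print(recommend(votes))  # False -> keep the permission ungranted
```
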
32

Perfect Sampling of Vervaat Perpetuities

Williams, Robert Tristan 01 January 2013 (has links)
This paper focuses on sampling directly from the stationary distribution of Vervaat perpetuities. It improves upon the perfect sampling algorithm first presented by Fill & Huber by implementing both a faster multigamma coupler and a moving value of Xmax to increase the chance of unification. For beta = 1 we are able to reduce the expected number of steps per sample by 22%, and at just beta = 3 we lower the expected time by over 80%. These improvements allow us to sample in reasonable time from perpetuities with much higher values of beta than was previously possible.
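
For context, a Vervaat perpetuity is the limit in distribution of the recursion X_{n+1} = U^(1/beta) * (X_n + 1), where the U are i.i.d. Uniform(0,1). The sketch below runs that forward recursion as a naive approximate sampler; it is not the perfect sampling algorithm the paper improves, only the defining iteration, with an arbitrarily chosen burn-in length.

```python
import random

def approx_vervaat_sample(beta: float, burn_in: int = 10_000) -> float:
    """Approximate draw from the stationary distribution of the Vervaat
    perpetuity X_{n+1} = U^(1/beta) * (X_n + 1), U ~ Uniform(0, 1).

    This forward iteration only converges in distribution; unlike perfect
    sampling, it carries initialization bias that shrinks with burn_in
    but never fully vanishes.
    """
    x = 0.0
    for _ in range(burn_in):
        x = random.random() ** (1.0 / beta) * (x + 1.0)
    return x

samples = [approx_vervaat_sample(beta=1.0) for _ in range(1000)]
print(sum(samples) / len(samples))  # for beta = 1 the stationary mean is 1
```
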
33

Efficient algorithm to construct phi function in vector space secret sharing scheme and application of secret sharing scheme in Visual Cryptography

Potay, Sunny 01 May 2012 (has links)
Secret sharing refers to a method through which a secret key K can be shared among a group of authorized participants, such that when they come together later they can recover K and decrypt the encrypted message, while no unauthorized group can determine anything about K. Important secret sharing schemes include the Shamir Threshold Scheme, the Monotone Circuit Scheme, and the Brickell Vector Space Scheme. Brickell's vector space construction requires the existence of a function phi from the set of participants P into the vector space Z_p^d, where p is a prime and d is a positive integer. No algorithm is known for constructing such a function in general. We develop an efficient algorithm to construct phi for some special secret sharing schemes. We also give an algorithm demonstrating how a secret sharing scheme can be used in visual cryptography.
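
As background for the threshold schemes mentioned above, here is a minimal sketch of Shamir's (t, n) scheme over a prime field: the dealer hides the secret as the constant term of a random degree-(t-1) polynomial, and any t shares recover it by Lagrange interpolation at zero. The small prime is for illustration only; real deployments use larger fields.

```python
import random

P = 2**31 - 1  # a small Mersenne prime, chosen here purely for illustration

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares -> 123456789
```
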
34

A systematic approach to the design and analysis of linear algebra algorithms

Gunnels, John Andrew. January 2001 (has links)
Thesis (Ph.D.)--University of Texas at Austin, 2001. Vita. Includes bibliographical references. Also available from UMI/Dissertation Abstracts International.
35

Evaluation of Data-Path Topologies for Self-Timed Conditional Statements

Jamadagni, Navaneeth Prasannakumar 10 August 2015 (has links)
This research presents a methodology to evaluate data-path topologies that implement a conditional statement, targeting average-case performance better than the worst case. A conditional statement executes one of many alternatives depending on how its Boolean conditions evaluate; alternatives with simple computations take less time to execute. Self-timed designs can exploit the faster-executing alternatives and deliver average-case behavior, where the average depends on the frequency of simple versus complex computations and on the difference in their completion times. The frequency is set by the workload; the difference in completion times is set by the choice of data-path topology. Conventional wisdom suggests that a fully speculative data path, independent of design style, yields the best performance: it executes all the alternatives of a conditional statement in parallel and then chooses the correct result. Using a division algorithm as an example of an instruction that embodies a conditional statement, the proposed methodology shows that a fully speculative design suits a synchronous implementation, but a less speculative design suits a self-timed one. SPICE simulations of the extracted netlists show that, on average, the self-timed divider is approximately 10% faster, consumes 42% less energy per division, and occupies 20% less area than the synchronous divider. In addition to the evaluation methodology, this research derives four new radix-2 division algorithms that offer simpler quotient selection logic than existing radix-2 division algorithms. A circuit optimization technique called Glissando is also presented; it exploits the simple idea that non-critical bits can arrive late at the register inputs, reducing data-path delay. Finally, the research analyzes the effect of manufacturing variations on the functionality of the divider designs.
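
The average-case argument reduces to a simple expectation. The sketch below, with made-up latencies and workload mix, shows how a self-timed design's mean latency falls below the worst case once fast alternatives dominate; the numbers are illustrative, not taken from the thesis.

```python
def mean_latency(p_simple: float, t_simple: float, t_complex: float) -> float:
    """Expected per-operation latency of a self-timed conditional:
    E[t] = p_simple * t_simple + (1 - p_simple) * t_complex."""
    return p_simple * t_simple + (1.0 - p_simple) * t_complex

# Hypothetical numbers: the simple alternative finishes in 1 ns, the
# complex one in 3 ns, and the workload is 70% simple computations.
t_sync = 3.0                               # synchronous design: always worst case
t_self = mean_latency(0.7, 1.0, 3.0)       # self-timed design: 1.6 ns on average
print(f"speedup: {t_sync / t_self:.2f}x")  # ~1.88x for this workload
```
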
36

Formalizing Combinatorial Matrix Theory

Fernandez, Ariel German G. 10 1900 (has links)
In this thesis we are concerned with the complexity of formalizing reasoning in Combinatorial Matrix Theory (CMT). We are interested in the strength of the bounded arithmetic theories necessary to prove the fundamental results of this field. Bounded arithmetic can be seen as the uniform counterpart of propositional proof complexity.

Perhaps the most famous and fundamental theorem in CMT is König's Min-Max Theorem (KMM), which arises naturally in all areas of combinatorial algorithms. As far as we know, this thesis gives the first feasible proof of KMM. Our results show that Min-Max reasoning can be formalized with uniform Extended Frege.

We show, by introducing new proof techniques, that the first-order theory LA with induction restricted to Σ^B_1 formulas (i.e., restricted to bounded existential matrix quantification) is sufficient to formalize a large portion of CMT, in particular KMM. Σ^B_1-LA corresponds to polynomial-time reasoning, also known as ∃LA.

While we consider matrices over {0,1}, the underlying ring is Z, since we require that ΣA compute the number of 1s in the matrix A (which for a 0-1 matrix is simply the sum of all entries). Thus, over Z, LA translates to TC^0-Frege, while, as mentioned before, ∃LA translates into Extended Frege.

In order to prove KMM in ∃LA, we need to restrict induction to Σ^B_1 formulas. The main technical contribution is presented in Claim 4.3.4, Section 4.3.3: we introduce a polynomial-time procedure, whose correctness can be proven in ∃LA, that works as follows: given an e × f matrix with e ≤ f whose minimum cover has size e, it computes a maximum selection of size e, row by row.

Furthermore, we show that Menger's Theorem, Hall's Theorem, and Dilworth's Theorem, all related to KMM, can also be proven feasibly; in fact, all of these theorems are equivalent to KMM, and the equivalences can be shown in LA. We believe this captures the proof complexity of Min-Max reasoning rather completely.

We also present a new permutation-based algorithm for computing a minimum vertex cover from a maximum matching in a bipartite graph. The algorithm is linear-time and computationally very simple: it permutes the rows and columns of the matrix representation of the bipartite graph to extract the vertex cover from a maximum matching in a recursive fashion. It uses properties of the KMM Theorem and is interesting for providing a new permutation (and CMT) perspective on a well-known problem.

Doctor of Philosophy (PhD)
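
König's theorem is easy to exercise in code. The sketch below computes a maximum matching with the standard augmenting-path method (Kuhn's algorithm); by KMM, the returned size also equals the size of a minimum vertex cover of the bipartite graph. This is a generic textbook routine, not the thesis's permutation-based cover-extraction algorithm.

```python
def max_bipartite_matching(adj: list[list[int]], n_right: int) -> int:
    """Size of a maximum matching, via augmenting paths (Kuhn's algorithm).
    adj[u] lists the right-side vertices adjacent to left vertex u.
    By Konig's Min-Max Theorem this also equals the size of a minimum
    vertex cover of the bipartite graph."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u: int, seen: list[bool]) -> bool:
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(len(adj)))

# 0-1 matrix view: rows are left vertices, columns right; each 1 is an edge.
M = [[1, 1, 0],
     [1, 0, 0],
     [0, 1, 1]]
adj = [[j for j, bit in enumerate(row) if bit] for row in M]
print(max_bipartite_matching(adj, n_right=3))  # 3 = min cover size as well
```
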
37

Dynamic Game-Theoretic Models to Determine the Value of Intrusion Detection Systems in the Face of Uncertainty

Moured, David Paul 27 January 2015 (has links)
Firms lose millions of dollars every year to cyber-attacks, and the risk to these companies is growing exponentially. The threat to monetary and intellectual property has made Information Technology (IT) security management a critical challenge for firms. Security devices, including Intrusion Detection Systems (IDS), are commonly used to protect firms from malicious users by identifying the presence of malicious network traffic. However, the actual value of these devices remains uncertain in the IT security community because of the costs of the monitoring strategies that determine when to inspect potentially malicious traffic and the costs of false-positive and false-negative errors. Game-theoretic models, in which the firm and the users are the players, have proven effective for determining the value of these devices under several conditions. However, these models assume that both the firm and the attacker have complete information about their opponent, and they cannot account for the more realistic situation in which players have incomplete information about their opponent's payoffs. The proposed research develops an enhanced model for strategic decision making in IT security management when the firm is uncertain about the user's utility of intrusion. Using Harsanyi transformation analysis, the model gives the IT security research community valuable insight into the value of IDS when the firm is uncertain of the incentives and payoffs available to users who choose to hack. Specifically, this dissertation considers two possible user types with different utilities of intrusion to gain further insight into the players' strategies. The firm's optimal strategy is to start the game using the expected value of the user's utility as an estimate; under this strategy the firm can determine the user's utility with certainty within one iteration of the game, after which the game may be analyzed as a game of perfect information.
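
The Harsanyi move, reduced to its simplest form, replaces an unknown opponent type with a prior over types and plays against the expectation. The toy sketch below, with invented payoffs and a prior over two hacker types, computes the firm's expected value of monitoring; it is a caricature of the dissertation's model, not a reproduction of it.

```python
# Hypothetical two-type game: the firm either monitors traffic or not,
# and each user type attacks with a (made-up) fixed probability.
prior = {"casual": 0.7, "determined": 0.3}   # firm's belief over user types
p_hack = {"casual": 0.1, "determined": 0.6}  # each type's chance of attacking

LOSS_BREACH = -100.0   # cost of an undetected intrusion
COST_MONITOR = -5.0    # per-period inspection cost (incl. false positives)
DETECT_RATE = 0.9      # IDS true-positive rate

def firm_value(monitor: bool) -> float:
    """Firm's expected payoff, averaging over user types (Harsanyi-style)."""
    value = 0.0
    for user_type, p_type in prior.items():
        p_attack = p_hack[user_type]
        if monitor:
            # attacks slip through only when detection fails
            value += p_type * (COST_MONITOR
                               + p_attack * (1 - DETECT_RATE) * LOSS_BREACH)
        else:
            value += p_type * p_attack * LOSS_BREACH
    return value

print(f"monitor:       {firm_value(True):7.2f}")   # -7.50
print(f"don't monitor: {firm_value(False):7.2f}")  # -25.00
```
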
38

Aspect Mining Using Multiobjective Genetic Clustering Algorithms

Bethelmy, David G. 01 January 2016 (has links)
In legacy software, non-functional concerns tend to cut across the system and manifest themselves as tangled or scattered code. If these crosscutting concerns could be modularized and the system refactored, the system would become easier to understand, modify, and maintain. Modularized crosscutting concerns are known as aspects, and the process of identifying aspect candidates in legacy software is called aspect mining. One technique used in aspect mining is clustering, and there are many clustering algorithms. Current aspect mining clustering algorithms form clusters by optimizing a single objective function, but the chosen objective tends to bias cluster formation toward the data model implicitly defined by that function. One solution is to use algorithms that optimize more than one objective. Such multiobjective algorithms have been used successfully in data mining but, as far as this author knows, have not been applied to aspect mining. This study investigated the feasibility of using multiobjective evolutionary algorithms, in particular multiobjective genetic algorithms, in aspect mining. It utilized an existing multiobjective genetic algorithm, MOCK, which had already been tested against several popular single-objective clustering algorithms and shown to be, on average, as good as, and sometimes better than, those algorithms. Since some of those data mining algorithms have counterparts in aspect mining, it was reasonable to expect MOCK to perform at least as well in an aspect mining context. Because MOCK's objective functions do not directly optimize aspect mining metrics, the study also implemented another multiobjective genetic algorithm, AMMOC, based on MOCK but tailored to those metrics: since the goal was to determine whether a clustering method optimizes these quality metrics, it made sense to optimize them directly rather than a posteriori. The study determined that these multiobjective algorithms performed at least as well as two popular aspect mining algorithms, k-means and hierarchical agglomerative clustering. As a result, this study contributes to the theoretical body of knowledge in aspect mining and provides a practical tool for the field.
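
The core multiobjective idea can be shown compactly: rather than ranking candidate clusterings by one score, keep every clustering that no other candidate beats on all objectives at once. The sketch below extracts that Pareto front for two hypothetical aspect mining objectives (both to be maximized); the objective names and scores are placeholders, not MOCK's or AMMOC's actual functions.

```python
def dominates(a: tuple[float, ...], b: tuple[float, ...]) -> bool:
    """a dominates b if a is >= b on every objective and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates: dict[str, tuple[float, ...]]) -> list[str]:
    """Keep the clusterings that no other candidate dominates."""
    return [name for name, score in candidates.items()
            if not any(dominates(other, score)
                       for o, other in candidates.items() if o != name)]

# Hypothetical (cohesion, scattering-coverage) scores for four clusterings.
scores = {"C1": (0.80, 0.40), "C2": (0.60, 0.70),
          "C3": (0.75, 0.45), "C4": (0.50, 0.50)}
print(pareto_front(scores))  # ['C1', 'C2', 'C3'] -- C4 is dominated by C2
```
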
39

Topics on Register Synthesis Problems

Liu, Weihua 01 January 2016 (has links)
Pseudo-random sequences are ubiquitous in modern electronics and information technology, and high-speed generators of such sequences play essential roles in engineering applications such as stream ciphers, radar systems, multiple-access systems, and quasi-Monte-Carlo simulation. Given a short prefix of a sequence, there should be no efficient algorithm that can synthesize a generator predicting the whole sequence; otherwise a cryptanalytic attack can be launched against the system from that prefix. Linear feedback shift registers (LFSRs) are the most widely studied pseudorandom sequence generators. The LFSR synthesis problem can be solved by the Berlekamp-Massey algorithm, by constructing a system of linear equations, by the extended Euclidean algorithm, or by the continued fraction algorithm. Linear complexity has been shown to be an important security measure in pseudorandom sequence design, so we investigate lower bounds on the linear complexity of several kinds of pseudorandom sequences. Feedback with carry shift registers (FCSRs), first described by Goresky and Klapper, have many good algebraic properties similar to those of LFSRs and are good candidates as building blocks of stream ciphers. The FCSR synthesis problem has been studied extensively, but no FCSR synthesis algorithms exist for multi-sequences; one of the main contributions of this dissertation is therefore to adapt an interleaving technique and develop two algorithms that solve the FCSR synthesis problem for multi-sequences. Algebraic feedback shift registers (AFSRs) generalize both LFSRs and FCSRs: given a choice of an integral domain R and π ∈ R, an AFSR produces sequences whose elements can be thought of as elements of the quotient ring R/(π). Xu's algorithm, a modification of the Berlekamp-Massey algorithm, solves the synthesis problem for AFSRs over a pair (R, π) with certain algebraic properties. We propose two register synthesis algorithms for the AFSR synthesis problem: one extends the lattice approximation approach using lattice basis reduction, and the other is based on the extended Euclidean algorithm.
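
Since the Berlekamp-Massey algorithm anchors the discussion, here is a compact sketch of its binary form: given a bit sequence, it returns the linear complexity L and a feedback polynomial for the shortest LFSR generating the sequence. This is the textbook GF(2) version, not the dissertation's FCSR or AFSR variants.

```python
def berlekamp_massey(bits: list[int]) -> tuple[int, list[int]]:
    """Return (L, C): the linear complexity L of the 0/1 sequence `bits`
    and the feedback polynomial C as coefficients c_0..c_L over GF(2),
    i.e. the shortest LFSR with s[i] = c_1*s[i-1] ^ ... ^ c_L*s[i-L]."""
    n = len(bits)
    C = [1] + [0] * n  # current connection polynomial
    B = [1] + [0] * n  # copy of C from before the last length change
    L, m = 0, -1       # m = position of the discrepancy that changed L
    for i in range(n):
        # discrepancy between bits[i] and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= C[j] & bits[i - j]
        if d:
            T = C[:]
            shift = i - m
            for j in range(n + 1 - shift):
                C[j + shift] ^= B[j]  # C(x) += x^shift * B(x)
            if 2 * L <= i:
                L, m, B = i + 1 - L, i, T
    return L, C[:L + 1]

# s[i] = s[i-1] ^ s[i-2] generates 1,1,0,1,1,0,...; complexity should be 2.
print(berlekamp_massey([1, 1, 0, 1, 1, 0, 1, 1, 0]))  # (2, [1, 1, 1])
```
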
40

An Introduction to the Theory and Applications of Bayesian Networks

Jaitha, Anant 01 January 2017 (has links)
Bayesian networks are a means of studying data. A Bayesian network gives structure to data through a graphical model: the variables of the problem space become nodes in a directed graph, and a probability distribution is developed over each variable. Statistical inference over those distributions then draws meaning from the data. Bayesian networks are an efficient way to explore large data sets and make inferences, and a number of real-world applications already exist and are being actively researched. This paper discusses the theory and applications of Bayesian networks.
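
A two-node example makes the machinery concrete. The sketch below encodes the classic rain/wet-grass network and inverts it with Bayes' rule to ask how likely rain is once wet grass is observed; the probabilities are the usual textbook placeholders, not values from the paper.

```python
# Classic two-node network: Rain -> WetGrass, with illustrative numbers.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = 0.9
P_WET_GIVEN_DRY = 0.1

def p_rain_given_wet() -> float:
    """Infer P(Rain | WetGrass) by Bayes' rule over the network's two
    distributions: the prior P(Rain) and the conditional P(Wet | Rain)."""
    p_wet = P_WET_GIVEN_RAIN * P_RAIN + P_WET_GIVEN_DRY * (1 - P_RAIN)
    return P_WET_GIVEN_RAIN * P_RAIN / p_wet

print(f"P(Rain | WetGrass) = {p_rain_given_wet():.3f}")  # 0.692
```
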
