About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1. Optimization Frameworks for Graph Clustering

Luke N Veldt (6636218) 15 May 2019
In graph theory and network analysis, communities or clusters are sets of nodes in a graph that share many internal connections with each other, but are only sparsely connected to nodes outside the set. Graph clustering, the computational task of detecting these communities, has been studied extensively due to its widespread applications and its theoretical richness as a mathematical problem. This thesis presents novel optimization tools for addressing two major challenges associated with graph clustering.

The first major challenge is that there already exists a plethora of algorithms and objective functions for graph clustering. The relationship between different methods is often unclear, and it can be very difficult to determine in practice which approach is best for a specific application. To address this challenge, we introduce a generalized discrete optimization framework for graph clustering called LambdaCC, which relies on a single tunable parameter. The value of this parameter controls the balance between the internal density and external sparsity of clusters that are formed by optimizing an underlying objective function. LambdaCC unifies the landscape of graph clustering techniques, as a large number of previously developed approaches can be recovered as special cases for a fixed value of the LambdaCC input parameter.

The second major challenge of graph clustering is the computational intractability of detecting the best way to cluster a graph with respect to a given NP-hard objective function. To address this intractability, we present new optimization tools and results that apply to LambdaCC as well as a broader class of graph clustering problems. In particular, we develop polynomial-time approximation algorithms for LambdaCC and other, more general clustering objectives. Among other results, we show how to obtain a polynomial-time 2-approximation for cluster deletion, which improves upon the previous best approximation factor of 3. We also present a new optimization framework for solving convex relaxations of NP-hard graph clustering problems, which are frequently used in the design of approximation algorithms. Finally, we develop a new framework for efficiently setting tunable parameters for graph clustering objective functions, so that practitioners can work with graph clustering techniques that are especially well suited to their application.
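The parameterized objective described in this abstract can be illustrated with a small sketch. The penalty structure below (cutting an existing edge costs 1 - λ, placing a non-adjacent pair in the same cluster costs λ) follows the standard unweighted LambdaCC formulation, but the function name and uniform-weight setting are illustrative assumptions, not code from the thesis.

```python
from itertools import combinations

def lambda_cc_cost(nodes, edges, clusters, lam):
    """Cost of a clustering under a LambdaCC-style objective (sketch).

    Cutting an existing edge costs (1 - lam); clustering a non-adjacent
    pair together costs lam.  The parameter lam in (0, 1) trades off
    internal density against external sparsity.
    """
    edge_set = {frozenset(e) for e in edges}
    label = {v: i for i, cluster in enumerate(clusters) for v in cluster}
    cost = 0.0
    for u, v in combinations(nodes, 2):
        same_cluster = label[u] == label[v]
        if frozenset((u, v)) in edge_set:
            if not same_cluster:
                cost += 1 - lam      # penalty for a cut edge
        elif same_cluster:
            cost += lam              # penalty for a clustered non-edge
    return cost

# Two disjoint edges, clustered as two pairs: no penalties at all.
print(lambda_cc_cost([1, 2, 3, 4], [(1, 2), (3, 4)],
                     [[1, 2], [3, 4]], 0.5))   # 0.0
```

Sweeping λ toward 0 favors one large cluster (non-edge penalties vanish), while λ near 1 favors many small dense clusters, which is the tunable density/sparsity balance the abstract describes.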
2. Postbuckling Analysis of Functionally Graded Beams

Soncco, K, Jorge, X, Arciniega, R.A. 26 February 2019
This paper studies the geometrically nonlinear bending behavior of functionally graded beams subjected to buckling loads using the finite element method. The computational model is based on an improved first-order shear deformation theory for beams with five independent variables. The abstract finite element formulation is derived by means of the principle of virtual work. High-order nodal-spectral interpolation functions are used to approximate the field variables, which minimizes the locking problem. An incremental/iterative solution technique of Newton's type is implemented to solve the nonlinear equations. The model is verified against benchmark problems available in the literature. The objective is to investigate the effect of volume fraction variation on the response of functionally graded beams made of ceramics and metals. As expected, the results show that transverse deflections vary significantly depending on the ceramic and metal combination. / Peer reviewed
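The incremental/iterative Newton-type procedure mentioned above can be sketched in miniature. The one-degree-of-freedom "stiffening spring" residual below is a hypothetical stand-in for the beam's discretized equilibrium equations, chosen only to show the load-stepping and Newton iteration pattern, not the paper's actual formulation.

```python
def newton_solve(residual, tangent, u0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: u <- u - r(u) / K_t(u)."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            break
        u -= r / tangent(u)
    return u

# Hypothetical 1-DOF stiffening spring: internal force u + u^3 under load p.
u = 0.0
for p in (0.5, 1.0, 1.5, 2.0):                 # incremental load steps
    u = newton_solve(lambda x: x + x**3 - p,   # residual r(u)
                     lambda x: 1 + 3 * x**2,   # tangent stiffness K_t(u)
                     u)                        # warm start from previous step
print(round(u, 6))  # at p = 2, u + u^3 = 2 gives u = 1
```

Warm-starting each load step from the previous converged state is what makes the incremental scheme robust for strongly nonlinear (e.g. postbuckling) response paths.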
3. The development of a method of digital computer simulation of the flotation process by means of a mathematical model

Bull, W. R. Unknown Date
No abstract available
4. Modeling Rational Adversaries: Predicting Behavior and Developing Deterrents

Benjamin D Harsha (11186139) 26 July 2021
In the field of cybersecurity, it is often not possible to construct systems that are resistant to all attacks. For example, even a well-designed password authentication system will be vulnerable to password cracking attacks because users tend to select low-entropy passwords. In the field of cryptography, we often model attackers as powerful and malicious and say that a system is broken if any such attacker can violate the desired security properties. While this approach is useful in some settings, such a high bar is unachievable in many security applications, e.g., password authentication. However, even when a system is imperfectly secure, it may be possible to deter a rational attacker who seeks to maximize their utility. In particular, if a rational adversary finds that the cost of running an attack is higher than their expected reward, they will not run that particular attack. In this dissertation, we argue in support of the following statement: modeling adversaries as rational actors can be used to better model the security of imperfect systems and to develop stronger defenses. We present several results in support of this thesis. First, we develop models for the behavior of rational adversaries in the context of password cracking and quantum key-recovery attacks. These models allow us to quantify the damage caused by password breaches, quantify the damage caused by (widespread) password length leakage, and identify imperfectly secure settings where a rational adversary is unlikely to run any attacks, e.g., quantum key-recovery attacks. Second, we develop several tools to deter rational attackers by ensuring that the utility-optimizing attack is either less severe or nonexistent. Specifically, we develop tools that increase the cost of offline password cracking attacks by strengthening password hashing algorithms, strategically signaling user password strength, and using dedicated Application-Specific Integrated Circuits (ASICs) to store passwords.
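The cost-benefit condition described above (a rational adversary attacks only while expected reward exceeds cost) can be sketched for offline password cracking. The function name and all numbers below are illustrative assumptions, not the dissertation's actual model.

```python
def rational_guessing_cutoff(probs, value_per_crack, cost_per_guess):
    """How many guesses a utility-maximizing offline attacker makes (sketch).

    probs: per-password probabilities, sorted in decreasing order.
    The attacker keeps guessing while the marginal expected reward
    p_i * value exceeds the marginal cost of one more guess
    (e.g. one evaluation of the password hash).
    """
    cutoff = 0
    for p in probs:
        if p * value_per_crack <= cost_per_guess:
            break                 # further guesses have negative utility
        cutoff += 1
    return cutoff

# Raising the per-guess cost (e.g. a memory-hard hash) shrinks the attack:
probs = [0.10, 0.05, 0.01, 0.001]
print(rational_guessing_cutoff(probs, 100, 0.5))   # cheap hashing: 3 guesses
print(rational_guessing_cutoff(probs, 100, 2.0))   # hardened hashing: 2 guesses
```

This is the sense in which strengthening password hashing deters a rational attacker even though the system remains imperfectly secure: the attack is not impossible, merely unprofitable beyond a smaller cutoff.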
5. Fault Tolerance in Linear Algebraic Methods using Erasure Coded Computations

Xuejiao Kang (5929862) 16 January 2019
As parallel and distributed systems scale to hundreds of thousands of cores and beyond, fault tolerance becomes increasingly important, particularly on systems with limited I/O capacity and bandwidth. Error correcting codes (ECCs) are used in communication systems, where errors arise when bits are corrupted silently in a message. Error correcting codes can detect and correct erroneous bits. Erasure codes, an instance of error correcting codes that deal with data erasures, are widely used in storage systems. An erasure code adds redundancy to the data to tolerate erasures.

In this thesis, erasure coded computations are proposed as a novel approach to dealing with processor faults in parallel and distributed systems. We first give a brief review of traditional fault tolerance methods, error correcting codes, and erasure coded storage. The benefits and challenges of erasure coded computations with respect to coding schemes, fault models, and system support are also presented.

In the first part of my thesis, I demonstrate the novel concept of erasure coded computations for linear system solvers. Erasure coding augments a given problem instance with redundant data. This augmented problem is executed in a fault-oblivious manner in a faulty parallel environment. In the event of faults, we show how we can compute the true solution from potentially fault-prone solutions using a computationally inexpensive procedure. Results on diverse linear systems show that our technique has several important advantages: (i) as the hardware platform scales in size and in number of faults, our scheme yields increasing improvement in resource utilization compared to traditional schemes; (ii) the proposed scheme is easy to code, as the core algorithm remains the same; (iii) the general scheme is flexible enough to accommodate a range of computation and communication trade-offs.

We propose a new coding scheme for augmenting the input matrix that satisfies the recovery equations of erasure coding with high probability in the event of random failures. This coding scheme also minimizes fill (non-zero elements introduced by the coding block), while being amenable to efficient partitioning across processing nodes. Our experimental results show that the scheme adds minimal overhead for fault tolerance, yields excellent parallel efficiency and scalability, and is robust to different fault arrival models and fault rates.

Building on these results, we show how we can minimize, to optimality, the overhead associated with our problem augmentation techniques for linear system solvers. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point during execution, we only solve a system of the same size as the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, as well as in adding only the minimal amount of computation needed to tolerate the observed faults. We present, in detail, the augmentation process, the parallel formulation, and the performance of our method. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal overhead in terms of FLOP counts relative to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance.

Based on the promising results for linear system solvers, we apply the concept of erasure coded computation to eigenvalue problems, which arise in many applications including machine learning and scientific simulations. Erasure coded computation is used to design a fault tolerant eigenvalue solver. The original eigenvalue problem is reformulated into a generalized eigenvalue problem defined on appropriate augmented matrices. We present the augmentation scheme, the necessary conditions for the augmentation blocks, and proofs of equivalence of the original eigenvalue problem and the reformulated generalized eigenvalue problem. Finally, we show how the eigenvalues can be derived from the augmented system in the event of faults.

We present detailed experiments, which demonstrate the excellent convergence properties of our fault tolerant TraceMin eigensolver in the average case. In the worst case, where the row-column pairs that have the most impact on the eigenvalues are erased, we present a novel scheme that computes the augmentation blocks as the computation proceeds, using estimates of the leverage scores of row-column pairs as they are computed by the iterative process. We demonstrate low overhead, excellent scalability in terms of the number of faults, and robustness to different fault arrival models and fault rates for our method.

In summary, this thesis presents a novel approach to fault tolerance based on erasure coded computations, demonstrates it in the context of important linear algebra kernels, and validates its performance on a diverse set of problems on scalable parallel computing platforms. As parallel systems scale to hundreds of thousands of processing cores and beyond, these techniques present the most scalable fault tolerant mechanisms currently available.
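The recovery idea behind erasure coded redundancy can be illustrated with a deliberately tiny sketch: solve the system, keep one coded (parity) value alongside the solution, and reconstruct any single erased solution entry from it. This toy parity scheme and the Cramer's-rule solver are assumptions made purely for illustration; the thesis's actual augmentation of the input matrix is far more sophisticated.

```python
def solve2(A, b):
    """Solve a 2x2 system by Cramer's rule (stand-in for a real solver)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x1 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return [x0, x1]

def solve_with_parity(A, b):
    """Solve Ax = b and keep one parity value sum(x) as coded redundancy."""
    x = solve2(A, b)
    return x, sum(x)

def recover_entry(x, erased_index, parity):
    """Reconstruct one erased entry of x from the surviving entries + parity."""
    surviving = sum(v for i, v in enumerate(x) if i != erased_index)
    return parity - surviving

A = [[2.0, 0.0], [0.0, 4.0]]
b = [2.0, 8.0]
x, parity = solve_with_parity(A, b)   # x = [1.0, 2.0], parity = 3.0
print(recover_entry(x, 1, parity))    # recovers the erased x[1] = 2.0
```

One parity value tolerates one erasure; tolerating f simultaneous faults requires f independent coded values, which is the role played by the coding blocks appended to the augmented matrix.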
6. Transparent and Mutual Restraining Electronic Voting

Huian Li (6012225) 17 January 2019
Many e-voting techniques have been proposed but are not widely used in practice. One of the problems associated with most existing e-voting techniques is a lack of transparency, leading to a failure to deliver voter assurance. In this work, we propose a transparent, auditable, end-to-end verifiable, mutual restraining e-voting protocol that exploits existing multi-party political dynamics such as those in the US. The new e-voting protocol consists of three original technical contributions -- a universal verifiable voting vector, forward and backward mutual lock voting, and in-process check and enforcement -- that, along with a public real-time bulletin board, resolve the apparent conflicts in voting such as anonymity vs. accountability and privacy vs. verifiability. In particular, trust is split equally among tallying authorities who have conflicting interests and will technically restrain each other. The voting and tallying processes are transparent to voters and any third party, which allows any voter to verify that his or her vote is indeed counted and allows any third party to audit the tally. For environments requiring receipt-freeness and coercion-resistance, we introduce additional approaches to counter vote-selling and voter-coercion. Our interactive voting protocol is suitable for a small number of voters, as in boardroom voting, where interaction between voters is encouraged and self-tallying is necessary, while our non-interactive protocol is for scenarios with a large number of voters, where interaction is prohibitively expensive. Equipped with a hierarchical voting structure, our protocols can enable open and fair elections at any scale.
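The "universal verifiable voting vector" idea can be sketched in a drastically simplified form: each voter owns a block of positions in a shared vector, and the published sum is self-tallying and auditable by anyone. All cryptographic protections the protocol actually relies on (anonymity, mutual lock voting, in-process enforcement) are omitted here; the layout and names are illustrative assumptions.

```python
def cast_vote(num_voters, num_candidates, voter_index, candidate):
    """Encode one vote as a 0/1 vector; the voter's block records the choice."""
    v = [0] * (num_voters * num_candidates)
    v[voter_index * num_candidates + candidate] = 1
    return v

def tally(vote_vectors, num_candidates):
    """Anyone can recompute the tally from the published vote vectors."""
    summed = [sum(col) for col in zip(*vote_vectors)]
    counts = [0] * num_candidates
    for pos, bit in enumerate(summed):
        counts[pos % num_candidates] += bit   # fold positions per candidate
    return counts

votes = [cast_vote(3, 2, 0, 0),   # voter 0 chooses candidate 0
         cast_vote(3, 2, 1, 1),   # voter 1 chooses candidate 1
         cast_vote(3, 2, 2, 1)]   # voter 2 chooses candidate 1
print(tally(votes, 2))            # [1, 2]
```

In this plaintext form each voter can check their own block in the sum (verifiability), but votes are not private; the protocol's contribution is achieving the same self-tallying property while keeping individual vote vectors hidden.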
7. Expressibility of higher-order logics on relational databases : proper hierarchies : a dissertation presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Wellington, New Zealand

Ferrarotti, Flavio Antonio Unknown Date
We investigate the expressive power of different fragments of higher-order logics over finite relational structures (or, equivalently, relational databases), with special emphasis on higher-order logics of order at least three. Our main results concern the effect on the expressive power of higher-order logics of simultaneously bounding the arity of the higher-order variables and the alternation of quantifiers.
8. Modelling avian influenza in bird-human systems : this thesis is presented in the partial fulfillment of the requirement for the degree of Masters of Information Science in Mathematics at Massey University, Albany, New Zealand

Zhao, Yue January 2009
In 1997, the first human case of avian influenza infection was reported in Hong Kong. Since then, avian influenza has become more and more hazardous to both animal and human health. Scientists believed that it would not take long for the virus to mutate and become contagious from human to human. In this thesis, we construct avian influenza models for bird-human systems that account for possible mutation scenarios. Possible control measures for humans are also introduced into the systems. We compare analytical and numerical results and try to find the most efficient control measures to prevent the disease.
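A minimal numerical sketch of such a bird-human system is below, assuming a pre-mutation scenario in which humans are infected only through contact with infected birds. The compartment structure, parameter values, and simple forward-Euler integration are illustrative assumptions, not the models analysed in the thesis.

```python
def simulate(beta_bird, beta_bird_human, gamma, days, dt=0.01):
    """Forward-Euler integration of a toy coupled bird-human model.

    Birds follow an SI process; humans follow an SIR process driven
    only by the infected-bird fraction (no human-to-human spread).
    """
    S_b, I_b = 0.99, 0.01          # bird population fractions
    S_h, I_h, R_h = 1.0, 0.0, 0.0  # human population fractions
    for _ in range(int(days / dt)):
        new_bird  = beta_bird * S_b * I_b * dt          # bird-to-bird
        new_human = beta_bird_human * S_h * I_b * dt    # bird-to-human
        recovered = gamma * I_h * dt
        S_b -= new_bird;  I_b += new_bird
        S_h -= new_human; I_h += new_human - recovered
        R_h += recovered
    return I_b, I_h, R_h

I_b, I_h, R_h = simulate(beta_bird=0.5, beta_bird_human=0.1, gamma=0.2, days=60)
```

A mutation scenario would add a human-to-human transmission term `beta_h * S_h * I_h`, and control measures (e.g. culling or reduced contact) would lower the transmission coefficients, which is the kind of comparison the thesis carries out analytically and numerically.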
9. A Generic Proof Checker

Watson, Geoffrey Norman Unknown Date
The use of formal methods in software development seeks to increase our confidence in the resultant system. Their use often requires tool support, so the integrity of a development using formal methods depends on the integrity of the tool-set used. Specifically, its integrity depends on the theorem prover, since in a typical formal development system the theorem prover is used to establish the validity of the proof obligations incurred by all the steps in the design and refinement process.

In this thesis we are concerned with tool-based formal development systems that are used to develop high-integrity software. Since the theorem prover program is a critical part of such a system, it should ideally have itself been formally verified. Unfortunately, most theorem provers are too complex to be verified formally using currently available techniques. An alternative approach, which has many advantages, is to include a proof checker as an extra component in the system, and to certify this. A proof checker is a program which reads and checks the proofs produced by a theorem prover. Proof checkers are inherently simpler than theorem provers, since they only process actual proofs, whereas much of the code of a theorem prover is concerned with searching the space of possible proofs to find the required one. They are also free from all but the simplest user interface concerns, since their input is a proof produced by another program, and their output may be as simple as a 'yes/no' reply to the question "Is this a valid proof?", plus a list of assumptions on which this judgement is based.

When included in a formal development system, a stand-alone proof checker is, in one sense, superfluous, since it does not produce any proofs -- the theorem prover does this. Instead, its importance lies in establishing the integrity of the results of the system -- it provides extra assurance. A proof checker provides extra assurance simply by checking the proofs, since all proofs have then been validated by two independent programs. However, a proof checker can provide an extra, and higher, level of assurance if it has been formally verified. For formal verification to be feasible, the proof checker must be as simple as possible. In turn, the simplicity of a proof checker depends on the complexity of the data it processes, that is, the representation of the proofs that it checks.

This thesis develops a representation of proofs that is simple and generic. The aim is to produce a generic representation that is applicable to the proofs produced by a variety of theorem provers. Simplicity facilitates verification, while genericity maximises the return on the effort of verification. Using a generic representation places obligations on the theorem provers to produce a proof record in this format. A flexible recorder/translator architecture is proposed which allows proofs to be recorded by existing theorem provers with minimal changes to the original code. The prover is extended with a recorder module whose output is then, if necessary, converted to the generic format by a separate translator program.

A formal specification of a checker for proofs recorded in this representation is given. The specification could be used to formally develop a proof checker, although this step is not taken in this thesis. In addition, the characteristics of both the specification and possible implementations are investigated, in order to assess the size and feasibility of the verification task and to confirm that the design is not over-sensitive to the size of proofs. This investigation shows that a checker developed from the specification will scale to handle large proofs. To investigate the feasibility of a system based on this architecture, prototype proof recorders were developed for the Ergo 5 and Isabelle 98 theorem provers. In addition, a prototype checker was written to check proofs in the proposed format. This prototype is compatible with the formal specification. The combined system was tested successfully using existing proofs for both the Ergo 5 and Isabelle 98 theorem provers.
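To make the simplicity argument concrete, here is a minimal sketch of a checker for proofs recorded as a list of steps, each justified either as an axiom or by modus ponens from two earlier steps. The record format and rule names are invented for illustration and are far simpler than the generic representation the thesis develops; note the checker performs no search at all, it only validates the recorded justifications.

```python
def check_proof(proof, axioms):
    """Check a recorded proof: returns True iff every step is justified.

    A step is (formula, rule, refs).  Implications are ('->', p, q).
    'axiom' steps must appear in the axiom set; 'mp' steps must cite
    earlier lines i (an implication) and j (its antecedent).
    """
    derived = []
    for formula, rule, refs in proof:
        if rule == "axiom":
            if formula not in axioms:
                return False
        elif rule == "mp":
            i, j = refs
            if not (0 <= i < len(derived) and 0 <= j < len(derived)):
                return False               # refs must point backwards
            if derived[i] != ("->", derived[j], formula):
                return False               # cited lines do not justify step
        else:
            return False                   # unknown justification rule
        derived.append(formula)
    return True

axioms = {"p", ("->", "p", "q")}
proof = [("p", "axiom", None),
         (("->", "p", "q"), "axiom", None),
         ("q", "mp", (1, 0))]              # q from lines 1 and 0
print(check_proof(proof, axioms))                    # True
print(check_proof([("q", "axiom", None)], axioms))   # False: q not an axiom
```

A theorem prover's recorder module would emit this kind of step list as a by-product of proof search, and the checker's output can be extended with the list of axioms actually used, matching the 'yes/no plus assumptions' interface described above.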
10. Betti numbers of deterministic and random sets in semi-algebraic and o-minimal geometry

Abhiram Natarajan (8802785) 06 May 2020
Studying properties of random polynomials has marked a shift in algebraic geometry. Instead of worst-case analysis, which often leads to overly pessimistic perspectives, randomness helps perform average-case analysis, and thus obtain a more realistic view. Also, via Erdős' astonishing 'probabilistic method', one can potentially obtain deterministic results by introducing randomness into a question that a priori had nothing to do with randomness.

In this thesis, we study topological questions in real algebraic geometry, o-minimal geometry, and random algebraic geometry, with motivation from incidence combinatorics. Specifically, we prove results along two different threads:

1. Topology of semi-algebraic and definable (over any o-minimal structure over R) sets, in both deterministic and random settings.
2. Topology of random hypersurface arrangements. In this case, we also prove a result that could be of independent interest in random graph theory.

Towards the first thread, motivated by applications in o-minimal incidence combinatorics, we prove bounds (both deterministic and random) on the topological complexity (as measured by the Betti numbers) of general definable hypersurfaces restricted to algebraic sets. Given any sequence of hypersurfaces, we show that there exists a definable hypersurface G, and a sequence of polynomials, such that each manifold in the sequence of hypersurfaces appears as a component of G restricted to the zero set of some polynomial in the sequence. This shows that the topology of the intersection of a definable hypersurface and an algebraic set can be made arbitrarily pathological. On the other hand, we show that for random polynomials, the Betti numbers of the restriction of the zero set of a random polynomial to any definable set deviate from a Bezout-type bound with bounded probability.

Progress in o-minimal incidence combinatorics has lagged behind developments in incidence combinatorics in the algebraic case due to the absence of an o-minimal version of the Guth-Katz polynomial partitioning theorem, and the first part of our work explains why this is so difficult. However, our average-case result shows that if we can prove that the measure of the set of polynomials satisfying a certain property necessary for polynomial partitioning is suitably bounded from below, then by the probabilistic method we obtain an o-minimal polynomial partitioning theorem. This would be a tremendous breakthrough and would enable progress on multiple fronts in model-theoretic combinatorics.

Along the second thread, we study the average Betti numbers of random hypersurface arrangements. Specifically, we study how the average Betti numbers of a finite arrangement of random hypersurfaces grow in terms of the degrees of the polynomials in the arrangement, as well as the number of polynomials. This is proved using a random Mayer-Vietoris spectral sequence argument. We supplement this result with a better bound on the average Betti numbers when one considers an arrangement of quadrics. This question turns out to be equivalent to studying the expected number of connected components of a certain random graph model which has not been studied before, and thus could be of independent interest. While our motivation once again was incidence combinatorics, we obtain the first bounds on the topology of arrangements of random hypersurfaces, with an unexpected bonus of a result in random graphs.
