161 |
Modularity and Structure in Matroids. Kapadia, Rohan. January 2013.
This thesis concerns sufficient conditions for a matroid to admit one of two types of structural characterization: a representation over a finite field or a description as a frame matroid.
We call a restriction N of a matroid M modular if, for every flat F of M,
r_M(F) + r(N) = r_M(F ∩ E(N)) + r_M(F ∪ E(N)).
A consequence of a theorem of Seymour is that any 3-connected matroid with a modular U_{2,3}-restriction is binary.
We extend this fact to arbitrary finite fields, showing that if N is a modular rank-3 restriction of a vertically 4-connected matroid M, then any representation of N over a finite field extends to a representation of M.
We also look at a more general notion of modularity that applies to minors of a matroid, and use it to present conditions for a matroid with a large projective geometry minor to be representable over a finite field.
In particular, we show that a 3-connected, representable matroid with a sufficiently large projective geometry over a finite field GF(q) as a minor is either representable over GF(q) or has a U_{2,q^2+1}-minor.
A second result of Seymour is that any vertically 4-connected matroid with a modular M(K_4)-restriction is graphic.
Geelen, Gerards, and Whittle partially generalized this from M(K_4) to larger frame matroids, showing that any vertically 5-connected, representable matroid with a rank-4 Dowling geometry as a modular restriction is a frame matroid.
As with projective geometries, we prove a version of this result for matroids with large Dowling geometries as minors, providing conditions which imply that they are frame matroids.
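The modularity condition above can be made concrete on a small example. The following sketch (illustrative only, not code from the thesis) represents the graphic matroid M(K_4) over GF(2), takes a triangle as a U_{2,3}-restriction N, and verifies by brute force that r_M(F) + r(N) = r_M(F ∩ E(N)) + r_M(F ∪ E(N)) for every flat F:

```python
from itertools import combinations

# Columns of a GF(2) representation of M(K_4): edge ij -> e_i + e_j.
EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
VECS = [tuple(1 if k in e else 0 for k in range(4)) for e in EDGES]

def rank(subset):
    """Rank of a set of edge indices = GF(2) rank of their columns."""
    rows = [list(VECS[i]) for i in subset]
    r = 0
    for col in range(4):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

ground = range(len(EDGES))

def is_flat(F):
    """F is a flat iff adding any outside element raises the rank."""
    return all(rank(F | {e}) > rank(F) for e in ground if e not in F)

# N = triangle on vertices {0,1,2}: edges 01, 02, 12 -> indices 0, 1, 3.
N = {0, 1, 3}

flats = [set(F) for k in range(7) for F in combinations(ground, k) if is_flat(set(F))]
modular = all(rank(F) + rank(N) == rank(F & N) + rank(F | N) for F in flats)
print(modular)  # True: the triangle is a modular restriction of M(K_4)
```

The 15 flats found correspond, as expected, to the 15 partitions of the four vertices of K_4.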
|
162 |
Optimization of Monte Carlo simulations. Bryskhe, Henrik. January 2009.
This thesis considers several techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope, but most of the techniques are applicable to other systems. The two major techniques are the use of the graphics card for geometry calculations, and ray tracing. The graphics card provides a very efficient way to perform fast ray–triangle intersection tests. Ray tracing provides an approximation of the Monte Carlo simulation but is much faster to perform. A program was also written to serve as a platform for Monte Carlo simulations, in which the different techniques were implemented and tested. The program also provides an overview of the simulation setup, where the user can easily verify that everything has been set up correctly. The thesis also covers an attempt to rewrite Penelope from FORTRAN to C. The new version is significantly faster and can be used on more systems. A distribution package was also added to the new Penelope version. Since Monte Carlo simulations are easily distributed, running this type of simulation on ten computers yields roughly a tenfold speedup. Combining the different techniques in the platform provides an easy-to-use and at the same time efficient way of performing Monte Carlo simulations.
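The ray–triangle intersection test mentioned above is the geometric core that the thesis offloads to the graphics card. As an illustration only (a CPU sketch, not code from the thesis), the standard Möller–Trumbore algorithm computes the hit distance along a ray:

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return distance t along the ray, or None if no hit."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv     # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) * inv         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# Ray from z = -1 straight along +z hits the unit triangle in the z = 0 plane.
print(ray_triangle([0.2, 0.2, -1.0], [0, 0, 1],
                   [0, 0, 0], [1, 0, 0], [0, 1, 0]))  # 1.0
```

On a GPU the same arithmetic runs for many rays and triangles in parallel, which is where the speedup described in the abstract comes from.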
|
163 |
The combinatorics of the Jack parameter and the genus series for topological maps. La Croix, Michael Andrew. January 2009.
Informally, a rooted map is a topologically pointed embedding of a graph in a surface. This thesis examines two problems in the enumerative theory of rooted maps.
The b-Conjecture, due to Goulden and Jackson, predicts that structural similarities between the generating series for rooted orientable maps with respect to vertex-degree sequence, face-degree sequence, and number of edges, and the corresponding generating series for rooted locally orientable maps, can be explained by a unified enumerative theory. Both series are specializations of M(x,y,z;b), a series defined algebraically in terms of Jack symmetric functions, and the unified theory should be based on the existence of an appropriate integer-valued invariant of rooted maps with respect to which M(x,y,z;b) is the generating series for locally orientable maps. The conjectured invariant should take the value zero when evaluated on orientable maps, and should take positive values when evaluated on non-orientable maps, but since it must also depend on rooting, it cannot be directly related to genus.
A new family of candidate invariants, η, is described recursively in terms of root-edge deletion. Both the generating series for rooted maps with respect to η and an appropriate specialization of M satisfy the same differential equation, which has a unique solution. This shows that η gives the appropriate enumerative theory when vertex degrees are ignored, which is precisely the setting required by Goulden, Harer, and Jackson for an application to algebraic geometry. A functional equation satisfied by M, together with the existence of a bijection between rooted maps on the torus and a restricted set of rooted maps on the Klein bottle, shows that η has additional structural properties required of the conjectured invariant.
The q-Conjecture, due to Jackson and Visentin, posits a natural combinatorial explanation for a functional relationship between a generating series for rooted orientable maps and the corresponding generating series for 4-regular rooted orientable maps. The explanation should take the form of a bijection, ϕ, between appropriately decorated rooted orientable maps and 4-regular rooted orientable maps, and its restriction to undecorated maps is expected to be related to the medial construction.
Previous attempts to identify ϕ have suffered from the fact that the existing derivations of the functional relationship involve inherently non-combinatorial steps, but the techniques used to analyze η suggest the possibility of a new derivation of the relationship that may be more amenable to combinatorial analysis. An examination of automorphisms that must be induced by ϕ gives evidence for a refinement of the functional relationship, and this leads to a more combinatorially refined conjecture. The refined conjecture is then reformulated algebraically so that its predictions can be tested numerically.
|
164 |
Modeling and Optimization of Desalting Process in Oil Industry. Alshehri, Ali. January 2009.
Crude oil in Saudi Arabia travels through a very long piping network to a Gas Oil Separation Plant (GOSP). The main objectives of the GOSP are:
- Separation of the associated gas through pressure reduction in two stages in series, one to 120 psig and the other to 50 psig.
- Separation of water by gravity separators, namely the High Pressure Production Trap (HPPT), the Dehydrator, the Desalter, and the Water Oil Separator (WOSEP).
- Reduction of the salt concentration to less than 10 PTB using wash water and demulsifier.
During the desalting process, the challenge is to overcome the emulsion layer that forms at the interface between oil and water. The emulsions normally encountered in the petroleum industry consist of water droplets dispersed in a continuous oil phase. In crude oil emulsions, emulsifying agents are present at the oil-water interface and hinder the coalescence process. Such agents include scale and clay particles, added chemicals, and indigenous crude oil components such as asphaltenes, resins, waxes, and naphthenic acids.
Several techniques are available to GOSP operators to minimize the effect of tight emulsions, including demulsifier injection, raising the oil temperature, gravity separation in large vessels with long retention times, and electrostatic fields. Experience and previous studies have already optimized these variables to a good extent; nevertheless, this study targets further enhancement of demulsifier control and optimization of the wash water rate.
The objective of this study is to design an Artificial Neural Network (ANN) trained on a data set covering a wide operating range of all parameters affecting demulsifier dosage. The network serves as a black-box model inside the controller, with all relevant parameters as inputs and the demulsifier dosage as the controller output. Testing this control scheme showed an effective reduction in demulsifier consumption compared with the existing linear method. The results also showed that the existing control strategy is highly conservative in preventing the salt content from exceeding the limit. The function generated by the ANN was also used to optimize the amount of fresh water added to wash the salty crude. Finally, a second ANN was developed to provide an online estimate of the salt content of the produced oil.
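The abstract does not give the network's architecture, inputs, or trained weights, so the following is only a structural sketch: a tiny feedforward ANN used as a black-box map from (hypothetical, normalized) operating parameters to a demulsifier dosage. Every name, size, and weight below is an assumption for illustration.

```python
import math

def mlp_dosage(inputs, w_hidden, b_hidden, w_out, b_out):
    """Toy feedforward ANN: inputs -> tanh hidden layer -> scalar dosage.
    The weights are illustrative placeholders, not trained values."""
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical inputs: [oil flow rate, water cut, temperature, inlet salt],
# each normalized to [0, 1].
inputs = [0.6, 0.3, 0.5, 0.4]
w_hidden = [[0.5, -0.2, 0.1, 0.7], [-0.3, 0.8, 0.4, -0.1]]  # 2 hidden units
b_hidden = [0.0, 0.1]
w_out = [1.2, 0.9]
b_out = 5.0  # baseline dosage, e.g. in ppm
print(round(mlp_dosage(inputs, w_hidden, b_hidden, w_out, b_out), 3))
```

In the control scheme described above, this forward pass would run inside the controller each cycle, replacing the existing linear dosage rule.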
|
165 |
Quantum Information Processing with Adversarial Devices. McKague, Matthew. 20 May 2010.
We consider several applications in black-box quantum computation in which untrusted physical quantum devices are connected together to produce an experiment. By examining the outcome statistics of such an experiment, and comparing them against the desired experiment, we may hope to certify that the physical experiment is implementing the desired experiment. This is useful in order to verify that a calculation has been performed correctly, that measurement outcomes are secure, or that the devices are producing the desired state.
First, we introduce constructions for a family of simulations, which duplicate the outcome statistics of an experiment but are not exactly the same as the desired experiment. This places limitations on how strict we may be with the requirements we place on the physical devices. We identify many simulations, and consider their implications for quantum foundations as well as security-related applications.
The most general application of black-box quantum computing is self-testing circuits, in which a generic physical circuit may be tested against a given circuit. Earlier results were restricted to circuits described on a real Hilbert space. We give new proofs for earlier results and begin work extending them to circuits on a complex Hilbert space with a test that verifies complex measurements.
For security applications of black-box quantum computing, we consider device independent quantum key distribution (DIQKD). We may consider DIQKD as an extension of QKD (quantum key distribution) in which the model of the physical measurement devices is replaced with an adversarial model. This introduces many technical problems, such as unbounded dimension, but promises increased security since the many complexities hidden by traditional models are implicitly considered. We extend earlier work by proving security with fewer assumptions.
Finally, we consider the case of black-box state characterization. Here the emphasis is placed on providing robust results with operationally meaningful measures. The goal is to certify that a black box device is producing high quality maximally entangled pairs of qubits using only untrusted measurements and a single statistic, the CHSH value, defined using correlations of outcomes from the two parts of the system. We present several measures of quality and prove bounds for them.
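The CHSH statistic mentioned above has a simple numeric form. As an illustration (not taken from the thesis), using the standard singlet-state correlation E(a, b) = -cos(a - b) for measurement directions at angles a and b, the optimal measurement angles attain Tsirelson's bound of 2√2, which is what certifies a high-quality maximally entangled pair:

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along directions
    at angles a and b: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

def chsh(a, ap, b, bp):
    """CHSH combination of correlations for settings a, a' and b, b'."""
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

# The standard angles a=0, a'=pi/2, b=pi/4, b'=3pi/4 attain Tsirelson's bound.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(S)  # 2.828427... = 2*sqrt(2)
```

Any local-hidden-variable model is limited to S ≤ 2, so observing S close to 2√2 from untrusted devices is the single statistic the characterization result builds on.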
|
166 |
Design Optimization of a Porous Radiant Burner. Horsman, Adam. January 2010.
The design of combustion devices is very important to society today. They need to be highly efficient, while reducing emissions in order to meet strict environmental standards. These devices, however, are currently not being designed effectively. The most common method of improving them is through parametric studies, where the design parameters are altered one at a time to try to find the best operating point. While this method does work, it is not very enlightening as it neglects the non-linear interactions between the design parameters, requires a large amount of time, and does not guarantee that the best operating point is found. As the environmental standards continue to become stricter, a more robust method of optimizing combustion devices will be required.
In this work a robust design optimization algorithm is presented that is capable of mathematically accounting for all of the interactions between the parameters and can find the best operating point of a combustion device. The algorithm uses response surface modeling to model the objective function, thereby reducing computational expense and time as compared to traditional optimization algorithms.
The algorithm is tested on three case studies, with the goal of improving the radiant efficiency of a two-stage porous radiant burner. The first case studied was one-dimensional and involved adjusting the pore diameter of the second stage of the burner. The second case, also one-dimensional, involved altering the second-stage porosity. The third, and final, case study required that both of the above parameters be altered to improve the radiant efficiency. All three case studies resulted in statistically significant changes in the efficiency of the burner.
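Response surface modeling replaces the expensive objective with a cheap fitted surrogate. A minimal one-dimensional sketch (the burner simulation is replaced by a hypothetical stand-in function; none of this is the thesis's actual model): sample the objective at a few design points, fit a quadratic by least squares, and take the vertex of the fitted surface as the candidate optimum.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the 3x3 normal equations."""
    rows = [[1.0, x, x * x] for x in xs]           # design matrix rows [1, x, x^2]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the augmented system.
    M = [row + [b] for row, b in zip(ata, aty)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Stand-in for an expensive burner simulation: efficiency vs. pore diameter.
# (Hypothetical objective chosen for illustration only.)
def efficiency(d):
    return 0.30 - 0.5 * (d - 0.8) ** 2

samples = [0.2, 0.5, 0.8, 1.1, 1.4]
c0, c1, c2 = fit_quadratic(samples, [efficiency(d) for d in samples])
d_opt = -c1 / (2 * c2)   # vertex of the fitted response surface
print(round(d_opt, 6))   # 0.8 for this quadratic test function
```

In a real run, the algorithm would evaluate the true simulation at d_opt, add that point to the sample set, and refit, so that the surrogate is only trusted locally.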
|
167 |
On the orientation of hypergraphs. Ruiz-Vargas, Andres J. 12 1900.
This is an expository thesis. In this thesis we study out-orientations of hypergraphs, where every hyperarc has one tail vertex. We study hypergraphs that admit out-orientations covering supermodular-type connectivity requirements. For this, we follow a paper of Frank.
We also study the Steiner rooted orientation problem. Given a hypergraph and a subset of vertices S ⊆ V, the goal is to give necessary and sufficient conditions for an orientation such that the connectivity between a root vertex and each vertex of S is at least k, for a positive integer k. We follow a paper by Kiraly and Lau, where they prove that every 2k-hyperedge connected hypergraph has such an orientation.
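For intuition only (a toy check, not from the papers surveyed), a Steiner rooted orientation with k = 1 can be found by brute force on a tiny hypergraph: choose one tail vertex per hyperedge, treat each resulting hyperarc as reaching every other vertex of its hyperedge from the tail, and test that every terminal is reachable from the root.

```python
from itertools import product

def steiner_rooted_orientation(vertices, hyperedges, root, terminals):
    """Try every choice of tail per hyperedge; return the first tail assignment
    (one tail vertex per hyperedge) making all terminals reachable from root."""
    assert root in vertices and terminals <= set(vertices)
    for tails in product(*hyperedges):
        # Hyperarc semantics: once its tail is reached, a hyperarc
        # reaches every other vertex of its hyperedge.
        reached = {root}
        changed = True
        while changed:
            changed = False
            for edge, tail in zip(hyperedges, tails):
                if tail in reached:
                    new = set(edge) - reached
                    if new:
                        reached |= new
                        changed = True
        if terminals <= reached:
            return tails
    return None

V = {1, 2, 3, 4}
H = [(1, 2, 3), (3, 4)]          # two hyperedges
print(steiner_rooted_orientation(V, H, root=1, terminals={2, 3, 4}))  # (1, 3)
```

The Kiraly-Lau theorem cited above says when such an orientation is guaranteed to exist for general k (2k-hyperedge-connectivity suffices); the brute force here merely exhibits one in a small instance.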
|
168 |
Algebraic Methods and Monotone Hurwitz Numbers. Guay-Paquet, Mathieu. January 2012.
We develop algebraic methods to solve join-cut equations, which are partial differential equations that arise in the study of permutation factorizations. Using these techniques, we give a detailed study of the recently introduced monotone Hurwitz numbers, which count factorizations of a given permutation into a fixed number of transpositions, subject to some technical conditions known as transitivity and monotonicity.
Part of the interest in monotone Hurwitz numbers comes from the fact that they have been identified as the coefficients in a certain asymptotic expansion related to the Harish-Chandra-Itzykson-Zuber integral, which comes from the theory of random matrices and has applications in mathematical physics. The connection between random matrices and permutation factorizations goes through representation theory, with symmetric functions in the Jucys-Murphy elements playing a key role.
As the name implies, monotone Hurwitz numbers are related to the more classical Hurwitz numbers, which count permutation factorizations regardless of monotonicity, and for which there is a significant body of work. Our results for monotone Hurwitz numbers are inspired by similar results for Hurwitz numbers; we obtain a genus expansion for the related generating functions, which yields explicit formulas and a polynomiality result for monotone Hurwitz numbers. A significant difference between the two cases is that our methods are purely algebraic, whereas the theory of Hurwitz numbers relies on some fairly deep results in algebraic geometry.
Despite our methods being algebraic, it seems that there should be a connection between monotone Hurwitz numbers and geometry, although this is currently missing. We give some evidence for this connection by identifying some of the coefficients in the monotone Hurwitz genus expansion with coefficients in the classical Hurwitz genus expansion known to be Hodge integrals over the moduli space of curves.
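The defining conditions can be checked by brute force in a tiny case (an illustrative check, not the algebraic methods of the thesis): count factorizations of a full cycle in S_3 into 2 transpositions, with and without the monotonicity condition that the larger elements of successive transpositions weakly increase. Transitivity is automatic here, since two distinct transpositions already generate all of S_3.

```python
from itertools import product, combinations

def compose(p, q):
    """Function composition (p after q): permutations as 0-indexed tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def transpositions(n):
    """All transpositions of {0, ..., n-1} as ((a, b), permutation) with a < b."""
    out = []
    for a, b in combinations(range(n), 2):
        t = list(range(n))
        t[a], t[b] = t[b], t[a]
        out.append(((a, b), tuple(t)))
    return out

def count_factorizations(target, r, monotone):
    n, total = len(target), 0
    for seq in product(transpositions(n), repeat=r):
        prod = tuple(range(n))
        for _, t in seq:
            prod = compose(prod, t)
        if prod != target:
            continue
        # Monotonicity: larger elements b_i of the transpositions weakly increase.
        if monotone and any(seq[i][0][1] > seq[i + 1][0][1] for i in range(r - 1)):
            continue
        total += 1
    return total

cycle = (1, 2, 0)  # the 3-cycle 0 -> 1 -> 2 -> 0
print(count_factorizations(cycle, 2, monotone=False))  # 3 ordinary factorizations
print(count_factorizations(cycle, 2, monotone=True))   # 2 monotone ones
```

The monotone count being strictly smaller already at this size is the combinatorial content that the Jucys-Murphy elements encode algebraically.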
|
169 |
On the Efficiency and Security of Cryptographic Pairings. Knapp, Edward. 04 December 2012.
Pairing-based cryptography has been employed to obtain several advantageous cryptographic protocols. In particular, there exist several identity-based variants of common cryptographic schemes. The computation of a single pairing is a comparatively expensive operation, since it often requires many operations in the underlying elliptic curve. In this thesis, we explore the efficient computation of pairings.
Computation of the Tate pairing is done in two steps. First, a Miller function is computed, followed by the final exponentiation. We discuss the state-of-the-art optimizations for Miller function computation under various conditions. We are able to shave off a fixed number of operations in the final exponentiation. We consider methods to effectively parallelize the computation of pairings in a multi-core setting and discover that the Weil pairing may provide some advantage under certain conditions. This work is extended to the 192-bit security level and some unlikely candidate curves for such a setting are discovered.
Electronic Toll Pricing (ETP) aims to improve road tolling by collecting toll fares electronically and without the need to slow down vehicles. In most ETP schemes, drivers are charged periodically based on the locations, times, distances or durations travelled. Many ETP schemes are currently deployed and although these systems are efficient, they require a great deal of knowledge regarding driving habits in order to operate correctly. We present an ETP scheme where pairing-based BLS signatures play an important role.
Finally, we discuss the security of pairings in the presence of an efficient algorithm to invert the pairing. We generalize previous results to the setting of asymmetric pairings as well as give a simplified proof in the symmetric setting.
|
170 |
Using optimized computer simulation to facilitate the learning process of the free throw in wheelchair basketball. Hamilton, Brianne Nicole. 05 January 2006.
A computer simulation program was previously developed by the researcher which determines a theoretically optimal movement pattern for the free throw in wheelchair basketball. The purpose of this study was to evaluate the external validity of the optimization program by examining whether knowledge of the optimal movement pattern facilitates performance of the free throw in wheelchair basketball.
In a pilot study, four able-bodied players from the Saskatchewan Wheelchair Basketball Men's Team were invited to participate on one occasion. These participants were videotaped shooting free throws to provide knowledge of an expert wheelchair free throw movement pattern. Using video analysis, it was found that the release conditions used by this group were very similar to those predicted to be optimal. This lent support to the predicted optimal movement pattern being an actual optimal movement pattern for the free throw in wheelchair basketball.
In the primary study, thirty-three able-bodied male participants were randomly assigned to three groups: a no-feedback group, a video-feedback group, and an optimal-pattern-feedback group. The participants performed wheelchair basketball free throw training for three days over one week. The no-feedback group simply shot free throws from a wheelchair, whereas the video-feedback group viewed video of their previous free throws, and the optimal-pattern group viewed video of their previous free throws with an optimal free throw pattern superimposed. The participants also completed a pretest one week before and a retention test one week after the training period.
A repeated-measures ANOVA was used to test for significant differences between the three training groups in free throw success in wheelchair basketball over each testing occasion. The statistical analyses indicated that there were no significant differences in free throw success between the group that had knowledge of their personalized optimal movement pattern and the groups that received either no feedback or video feedback (p<0.05).
Video analysis revealed that the wheelchair free throw movement pattern of participants in the optimal-pattern group changed substantially from the pretest to the post-test. This suggests that the participants in the optimal-pattern group were making progress towards their optimal movement patterns but had not yet mastered them.
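A theoretically optimal release can be illustrated with basic projectile mechanics (a sketch with hypothetical geometry, not the thesis's actual simulation model): for a hoop a horizontal distance d ahead and a height h above the release point, the speed required at release angle θ follows from the trajectory equation, and scanning angles finds the minimum-speed release.

```python
import math

G = 9.81  # m/s^2

def required_speed(theta, d, h):
    """Speed needed to pass through a point d meters ahead and h meters above
    the release, at release angle theta (radians); None if unreachable."""
    denom = 2 * math.cos(theta) ** 2 * (d * math.tan(theta) - h)
    if denom <= 0:
        return None
    return math.sqrt(G * d * d / denom)

def min_speed_release(d, h, steps=20000):
    """Scan release angles in (0, pi/2) for the minimum-speed trajectory."""
    best = None
    for i in range(1, steps):
        theta = (math.pi / 2) * i / steps
        v = required_speed(theta, d, h)
        if v is not None and (best is None or v < best[1]):
            best = (theta, v)
    return best

# Hypothetical seated free throw: hoop 4.09 m ahead and 1.85 m above the
# release point (release height ~1.20 m, rim at 3.05 m).
theta, v = min_speed_release(4.09, 1.85)
print(round(math.degrees(theta), 1), round(v, 2))  # 57.2 7.89
```

The scan reproduces the classical result that the minimum-speed angle is 45° plus half the elevation angle of the target; a full movement-pattern optimization like the thesis's would add joint kinematics and accuracy constraints on top of such release conditions.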
|