71.
Small zeros of quadratic congruences to a prime power modulus / Hakami, Ali Hafiz Mawdah
Doctor of Philosophy / Department of Mathematics / Todd E. Cochrane / Let $m$ be a positive integer, $p$ an odd prime, and $\mathbb{Z}_{p^m} = \mathbb{Z}/(p^m)$ the ring of integers modulo $p^m$. Let
$$Q(\mathbf{x}) = Q(x_1, x_2, \dots, x_n) = \sum_{1 \leqslant i \leqslant j \leqslant n} a_{ij} x_i x_j$$
be a quadratic form with integer coefficients, and let $A_Q$ denote its associated matrix. Suppose that $n$ is even and $\det A_Q \not\equiv 0 \pmod{p}$. Set $\Delta = ((-1)^{n/2} \det A_Q / p)$, where $(\,\cdot\,/p)$ is the Legendre symbol, and write $\|\mathbf{x}\| = \max_i |x_i|$. Let $V$ be the set of solutions of the congruence
$$Q(\mathbf{x}) \equiv 0 \pmod{p^m} \quad (1)$$
contained in $\mathbb{Z}^n$, and let $B$ be any box of points in $\mathbb{Z}^n$ of the type
$$B = \left\{ \mathbf{x} \in \mathbb{Z}^n \mid a_i \leqslant x_i < a_i + m_i,\ 1 \leqslant i \leqslant n \right\},$$
where $a_i, m_i \in \mathbb{Z}$ and $1 \leqslant m_i \leqslant p^m$.
In this dissertation we use the method of exponential sums to investigate how large the cardinality of the box $B$ must be in order to guarantee that there exists a solution $\mathbf{x}$ of (1) in $B$. In particular we focus on cubes (all $m_i$ equal) centered at the origin in order to obtain primitive solutions with $\|\mathbf{x}\|$ small. For $m = 2$ and $n \geqslant 4$ we obtain a primitive solution with $\|\mathbf{x}\| \leqslant \max\{2^5 p,\, 2^{18}\}$. For $m = 3$, $n \geqslant 6$, and $\Delta = +1$, we get $\|\mathbf{x}\| \leqslant \max\{2^{2/n} p^{(3/2)+(3/n)},\, 2^{(2n+4)/(n-2)}\}$. Finally, for any $m \geqslant 2$, $n \geqslant m$, and any nonsingular quadratic form we obtain $\|\mathbf{x}\| \leqslant \max\{6^{1/n} p^{m[(1/2)+(1/n)]},\, 2^{2(n+1)/(n-2)} 3^{2/(n-2)}\}$.
Other results are obtained for boxes $B$ with sides of arbitrary lengths.
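For readers who want to experiment, the following is a minimal brute-force sketch in Python, not the exponential-sum method of the dissertation: it searches a small cube centered at the origin for a primitive solution of (1). The form, prime, and search radius are illustrative choices.

```python
from itertools import product

def Q(x, A):
    """Evaluate the quadratic form sum over i <= j of a_ij * x_i * x_j."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

def smallest_primitive_solution(A, p, m, radius):
    """Search the cube ||x|| <= radius for a primitive solution of
    Q(x) = 0 (mod p^m); a solution is primitive if not all of its
    coordinates are divisible by p. Returns (norm, x) or None."""
    mod = p ** m
    n = len(A)
    best = None
    for x in product(range(-radius, radius + 1), repeat=n):
        if all(v % p == 0 for v in x):   # skips the zero vector too
            continue
        if Q(x, A) % mod == 0:
            norm = max(abs(v) for v in x)
            if best is None or norm < best[0]:
                best = (norm, x)
    return best

# Example: Q(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2, p = 5, m = 2.
# (3, 4, 0, 0) is primitive and 9 + 16 = 25 = 0 (mod 25).
A = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(smallest_primitive_solution(A, p=5, m=2, radius=5))
```

Enumeration of this kind is exponential in $n$; the point of the exponential-sum bounds above is to guarantee solutions in boxes whose size can be certified without any search.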
72.
Cliqued holes and other graphic structures for the node packing polytope / Conley, Clark Logan
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Graph theory is a widely studied topic. A graph is defined by two features: nodes and edges. Nodes can represent people, cities, variables, resources, or products, while edges represent relationships between pairs of nodes. Using graphs to solve problems has played a major role in a diverse set of industries for many years.
Integer programs (IPs) are mathematical models used to optimize a problem, often by maximizing the utilization of resources or minimizing waste. IPs are most notably used when the quantities involved must take integer values, that is, when resources cannot be split. IPs have been utilized by many companies for resource distribution, scheduling, and conflict management.
The node packing or independent set problem is a common combinatorial optimization problem. The objective is to select the maximum number of nodes in a graph such that no two selected nodes are adjacent. Node packing has been applied to a wide variety of problems, including vehicle routing and machine scheduling.
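For illustration (an example added here, not taken from the thesis), node packing on a graph G = (V, E) can be written as the binary IP: maximize the number of selected nodes subject to x_u + x_v <= 1 for every edge uv, with each x binary. The sketch below solves a tiny instance by enumeration, so no solver is required.

```python
from itertools import combinations

def max_node_packing(nodes, edges):
    """Find a maximum independent set by enumeration: the largest subset
    of nodes containing no edge. Exponential time; illustration only."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return subset
    return ()

# 5-cycle: the odd hole inequality x1 + ... + x5 <= 2 is tight here.
nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(max_node_packing(nodes, edges))  # e.g., (1, 3): any 2 nonadjacent nodes
```

The 5-cycle is the smallest odd hole, and odd holes induce a classical family of valid inequalities for the node packing polyhedron; the hole-based structures introduced in this thesis are in that spirit.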
This thesis introduces several new graph structures (cliqued holes, odd bipartite holes, and odd k-partite holes) and their corresponding valid inequalities for the node packing polyhedron. These inequalities are shown to be new, and conditions are provided under which they are facet defining; facet defining inequalities are known to be the strongest class of valid inequalities. These new valid inequalities can be used by practitioners to help solve node packing instances and integer programs.
73.
Synchronized simultaneous lifting in binary knapsack polyhedra / Bolton, Jennifer Elaine
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are used by companies and organizations across the world to reach financial and time-related goals, most often through optimal resource allocation and scheduling. Unfortunately, integer programs are computationally difficult to solve, and in some cases the optimal solutions are unknown even with today's advanced computing machines.
Lifting is a technique that is often used to decrease the time required to solve an IP to optimality. Lifting begins with a valid inequality and strengthens it by changing the coefficients of the variables in the inequality. Often, this technique results in facet defining inequalities, which are the theoretically strongest inequalities.
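For intuition, here is a standard example of exact sequential up-lifting of a cover inequality for a binary knapsack, not the SSL procedure itself: each new coefficient comes from a small knapsack subproblem over the variables already in the inequality. The instance is an illustrative choice.

```python
from itertools import product

def sequential_uplift(a, b, cover, order):
    """Exact sequential up-lifting of the cover inequality
    sum_{j in C} x_j <= |C| - 1 for the knapsack sum a_j x_j <= b.
    Each new coefficient is alpha_k = rhs - z*, where z* maximizes the
    current LHS over the residual knapsack with capacity b - a_k.
    Brute-force enumeration, so tiny instances only."""
    coeffs = {j: 1 for j in cover}          # current inequality LHS
    rhs = len(cover) - 1
    for k in order:
        vars_ = list(coeffs)
        residual = b - a[k]
        z = 0
        for x in product((0, 1), repeat=len(vars_)):
            weight = sum(xi * a[j] for xi, j in zip(x, vars_))
            if weight <= residual:
                z = max(z, sum(xi * coeffs[j] for xi, j in zip(x, vars_)))
        coeffs[k] = rhs - z
    return coeffs, rhs

# Knapsack 5x0 + 5x1 + 5x2 + 8x3 + 3x4 <= 11; C = {0, 1, 2} is a cover,
# giving x0 + x1 + x2 <= 2. Lift x3, then x4.
coeffs, rhs = sequential_uplift([5, 5, 5, 8, 3], 11, [0, 1, 2], [3, 4])
print(coeffs, "<=", rhs)   # {0: 1, 1: 1, 2: 1, 3: 2, 4: 0} <= 2
```

Rerunning the example with order=[4, 3] yields different coefficients (1 and 1 rather than 2 and 0), illustrating the order dependence that simultaneous lifting schemes aim to remove.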
This thesis introduces a new type of lifting called synchronized simultaneous lifting (SSL). SSL allows multiple sets of variables to be lifted simultaneously, which generates a new class of inequalities that previously would have required an oracle to find. Additionally, this thesis describes an algorithm, the Synchronized Simultaneous Lifting Algorithm (SSLA), that performs synchronized simultaneous lifting on a binary knapsack inequality. SSLA is a quadratic-time algorithm that exactly and simultaneously lifts two sets of simultaneously lifted variables.
Short computational studies show that SSLA can sometimes solve IPs to optimality that CPLEX, an advanced integer programming solver, cannot solve alone. Specifically, the SSL cuts allowed a 76 percent improvement over CPLEX alone.
74.
A bandlimited step function for use in discrete periodic extension / Pathmanathan, Sureka
Master of Science / Department of Mathematics / Nathan Albin / A new methodology is introduced for use in the discrete periodic extension of non-periodic functions. The methodology is based on a band-limited step function and utilizes the computational efficiency of the FC-Gram (Fourier continuation based on an orthonormal Gram polynomial basis on the extension stage) extension database. The discrete periodic extension is a technique for augmenting a set of uniformly-spaced samples of a smooth function with auxiliary values in an extension region. If a suitable extension is constructed, the interpolating trigonometric polynomial found via an FFT (fast Fourier transform) will accurately approximate the original function on its original interval. The discrete periodic extension is a key construction in the FC-Gram algorithm, which has been successfully implemented in several recent efficient, high-order PDE solvers. This thesis focuses on a new, flexible discrete periodic extension procedure that performs at least as well as the FC-Gram method, but with a somewhat simpler implementation and significantly decreased setup time.
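To see the basic mechanism, the following Python sketch appends a smooth return-to-start segment to non-periodic samples and then measures the accuracy of FFT-based trigonometric interpolation back on the original interval. The crude cosine blend here stands in for the band-limited step function and FC-Gram machinery of the thesis, and all parameter choices are illustrative.

```python
import numpy as np

def extension_error(f, n=64, n_ext=16, upsample=8):
    """Append a smooth segment returning f's samples to their starting
    value, FFT-interpolate the extended (now periodic) data, and report
    the max interpolation error on the original interval [0, 1)."""
    h = 1.0 / n
    x = np.arange(n) * h
    fx = f(x)
    # cosine blend from the last sample value back to the first
    t = np.arange(1, n_ext + 1) / (n_ext + 1.0)
    blend = 0.5 * (1.0 + np.cos(np.pi * t))      # decays 1 -> 0
    g = np.concatenate([fx, blend * fx[-1] + (1 - blend) * fx[0]])
    # trigonometric interpolation via zero-padding in frequency space
    N = g.size
    G = np.fft.fft(g)
    M = upsample * N
    Gpad = np.zeros(M, dtype=complex)
    Gpad[:N // 2] = G[:N // 2]
    Gpad[-(N - N // 2):] = G[N // 2:]
    fine = np.fft.ifft(Gpad).real * (M / N)
    xf = np.arange(M) / M * (N * h)              # fine grid, extended interval
    mask = xf <= 1.0 - h                         # original interval only
    return np.max(np.abs(fine[mask] - f(xf[mask])))

f = lambda x: np.exp(x) * np.sin(3.0 * x)        # smooth but non-periodic
print("error with extension:   ", extension_error(f))
print("error without extension:", extension_error(f, n_ext=0))
```

Even this simple blend sharply reduces the Gibbs error caused by the periodicity mismatch; the band-limited step function studied in the thesis plays the same role far more accurately.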
75.
Methods for handling missing data due to a limit of detection in longitudinal lognormal data / Dick, Nicole Marie
Master of Science / Department of Statistics / Suzanne Dubnicka / In animal science, challenge model studies often produce longitudinal data. Many times
the lognormal distribution is useful in modeling the data at each time point. Escherichia coli
O157 (E. coli O157) studies measure and record the concentration of colonies of the bacteria.
There are times when the concentration of colonies present is too low, falling below a limit of
detection. In these cases a zero is recorded for the concentration. Researchers employ a method
of enrichment to determine if E. coli O157 was truly not present. This enrichment process
searches for bacteria colony concentrations a second time to confirm or refute the previous
measurement. If enrichment comes back without evidence of any bacteria colonies present, a
zero remains as the observed concentration. If enrichment comes back with presence of bacteria
colonies, a minimum value is imputed for the concentration. At the conclusion of the study the
data are log10-transformed. One problem with the transformation is that the log of zero is mathematically undefined, so any observed concentrations still recorded as a zero after enrichment cannot be log-transformed. Current practice carries the zero value from the lognormal data over to the log-transformed data. The purpose of this report is to evaluate methods for handling
missing data due to a limit of detection and to provide results for various analyses of the
longitudinal data. Multiple methods of imputing a value for the missing data are compared.
Each method is analyzed by fitting three different models using SAS. To determine which method most accurately explains the data, a simulation study was conducted.
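As a small sketch of the preprocessing step being compared, the snippet below imputes below-detection values with several substitution rules that are common conventions in limit-of-detection work (not necessarily the exact methods evaluated in this report); the LOD and data are invented for illustration.

```python
import numpy as np

def impute_below_lod(conc, lod, method="half"):
    """Replace below-detection concentrations (recorded as 0) before the
    log10 transform. Common substitution rules; illustration only."""
    conc = np.asarray(conc, dtype=float)
    if method == "half":
        fill = lod / 2.0                 # LOD/2 substitution
    elif method == "sqrt2":
        fill = lod / np.sqrt(2.0)        # LOD/sqrt(2) substitution
    elif method == "lod":
        fill = lod                       # substitute the LOD itself
    else:
        raise ValueError(f"unknown method: {method}")
    return np.where(conc > 0, conc, fill)

# Concentrations (CFU/g) with zeros recorded below a LOD of 100
conc = [0.0, 250.0, 1.3e4, 0.0, 8.0e2]
for method in ("half", "sqrt2", "lod"):
    log10_conc = np.log10(impute_below_lod(conc, lod=100.0, method=method))
    print(method, np.round(log10_conc, 3))
```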
76.
Cluster-based lack of fit tests for nonlinear regression models / Munasinghe, Wijith Prasantha
Doctor of Philosophy / Department of Statistics / James W. Neill / Checking the adequacy of a proposed parametric nonlinear regression model is important
in order to obtain useful predictions and reliable parameter inferences. Lack of fit is said to
exist when the regression function does not adequately describe the mean of the response
vector. This dissertation considers the asymptotics, implementation, and comparative performance of the likelihood ratio tests suggested by Neill and Miller (2003). These tests use
constructed alternative models determined by decomposing the lack of fit space according to
clusterings of the observations. Clusterings are selected by a maximum power strategy and a
sequence of statistical experiments is developed in the sense of Le Cam. L2 differentiability
of the parametric array of probability measures associated with the sequence of experiments
is established in this dissertation, leading to local asymptotic normality. Utilizing contiguity,
the limit noncentral chi-square distribution under local parameter alternatives is then
derived. For implementation purposes, standard linear model projection algorithms are
used to approximate the likelihood ratio tests, after using the convexity of a class of fuzzy
clusterings to form a smooth alternative model which is necessarily used to approximate the
corresponding maximum optimal statistical experiment. It is demonstrated empirically that
good power can result by allowing cluster selection to vary according to different points along
the expectation surface of the proposed nonlinear regression model. However, in some cases,
a single maximum clustering suffices, leading to the development of a Bonferroni adjusted
multiple testing procedure. In addition, the maximin clustering based likelihood ratio tests
were observed to possess markedly better simulated power than the generalized likelihood
ratio test with a semiparametric alternative model presented by Crainiceanu and Ruppert (2004).
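To convey the flavor of cluster-based lack of fit testing in a runnable form, here is a simple F-type construction in Python: compare the nonlinear fit against an alternative that adds a free mean shift per cluster of observations. This is an illustrative stand-in, not the Neill and Miller likelihood ratio tests studied in the dissertation, and the model, clusters, and data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def cluster_lack_of_fit(x, y, model, p0, clusters):
    """F-type lack-of-fit check: does allowing a separate mean shift in
    each cluster of observations significantly reduce the residual sum
    of squares of the fitted nonlinear model?"""
    popt, _ = curve_fit(model, x, y, p0=p0)
    resid = y - model(x, *popt)
    sse_null = np.sum(resid ** 2)
    labels = np.unique(clusters)
    sse_alt = sum(np.sum((resid[clusters == c] - resid[clusters == c].mean()) ** 2)
                  for c in labels)
    df1 = len(labels)                       # extra cluster-shift parameters
    df2 = len(y) - len(p0) - df1
    F = ((sse_null - sse_alt) / df1) / (sse_alt / df2)
    return F, f_dist.sf(F, df1, df2)        # statistic and p-value

# Toy data from an exponential-decay model; clusters split the design in half
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-0.8 * x) + rng.normal(0.0, 0.05, x.size)
clusters = (x > 2.0).astype(int)
model = lambda t, a, b: a * np.exp(-b * t)
F, p = cluster_lack_of_fit(x, y, model, p0=[1.0, 1.0], clusters=clusters)
print(f"F = {F:.3f}, p-value = {p:.3f}")
```

The dissertation's tests are considerably more refined: the clusterings themselves are chosen by a maximum power strategy, and inference rests on the likelihood ratio rather than this crude residual decomposition.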