51

On the Defining Ideals of Rees Rings for Determinantal and Pfaffian Ideals of Generic Height

Edward F Price (9188318) 04 August 2020 (has links)
<div>This dissertation is based on joint work with Monte Cooper and is broken into two main parts, both of which study the defining ideals of the Rees rings of determinantal and Pfaffian ideals of generic height. In both parts, we attempt to place degree bounds on the defining equations.</div><div> </div><div> The first part of the dissertation consists of Chapters 3 to 5. Let $R = K[x_{1},\ldots,x_{d}]$ be a standard graded polynomial ring over a field $K$, and let $I$ be a homogeneous $R$-ideal generated by $s$ elements. Then there exists a polynomial ring $\mathcal{S} = R[T_{1},\ldots,T_{s}]$, which is also equal to $K[x_{1},\ldots,x_{d},T_{1},\ldots,T_{s}]$, of which the defining ideal of $\mathcal{R}(I)$ is an ideal. The polynomial ring $\mathcal{S}$ comes equipped with a natural bigrading given by $\deg x_{i} = (1,0)$ and $\deg T_{j} = (0,1)$. Here, we attempt to use specialization techniques to place bounds on the $x$-degrees (first component of the bidegrees) of the defining equations, i.e., the minimal generators of the defining ideal of $\mathcal{R}(I)$. We obtain degree bounds by using known results in the generic case and specializing. The key tools are the methods developed by Kustin, Polini, and Ulrich to obtain degree bounds from approximate resolutions. We recover known degree bounds for ideals of maximal minors and submaximal Pfaffians of an alternating matrix. Additionally, we obtain $x$-degree bounds for sufficiently large $T$-degrees in other cases of determinantal ideals of a matrix and Pfaffian ideals of an alternating matrix. 
We are unable to obtain degree bounds for determinantal ideals of symmetric matrices due to a lack of results in the generic case; however, we develop the tools necessary to obtain degree bounds once similar results are proven for generic symmetric matrices.</div><div> </div><div> The second part of this dissertation is Chapter 6, where we attempt to find a bound on the $T$-degrees of the defining equations of $\mathcal{R}(I)$ when $I$ is a nonlinearly presented homogeneous perfect Gorenstein ideal of grade three having second analytic deviation one that is of linear type on the punctured spectrum. We restrict to the case where $\mathcal{R}(I)$ is not Cohen-Macaulay. This is a natural next step following the work of Morey, Johnson, and Kustin-Polini-Ulrich. Based on extensive computation in Macaulay2, we give a conjecture for the relation type of $I$ and provide some evidence for the conjecture. In an attempt to prove the conjecture, we obtain results about the defining ideals of general fibers of rational maps, which may be of independent interest. We end with some examples where the bidegrees of the defining equations exhibit unusual behavior.</div>
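For readers outside commutative algebra, the central objects in this abstract can be summarized as follows; this is the standard setup, stated here as background rather than taken from the thesis itself:

```latex
% Standard definitions (background, not specific to this dissertation).
% The Rees ring (Rees algebra) of an ideal I = (f_1, ..., f_s) in R:
\[
  \mathcal{R}(I) \;=\; R[It] \;=\; \bigoplus_{n \ge 0} I^{n} t^{n} \;\subseteq\; R[t].
\]
% The defining ideal J of R(I) is the kernel of the natural surjection
\[
  \mathcal{S} = R[T_{1},\ldots,T_{s}] \longrightarrow \mathcal{R}(I),
  \qquad T_{j} \longmapsto f_{j} t,
\]
% so that R(I) is isomorphic to S/J; the "defining equations" whose degrees are
% bounded in the dissertation are the minimal generators of J.
```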
52

Accuracy and Monotonicity of Spectral Element Method on Structured Meshes

Hao Li (10731936) 03 May 2021 (has links)
<div>On rectangular meshes, the simplest spectral element method for elliptic equations is the classical Lagrangian <i>Q</i><sup>k</sup> finite element method with only (<i>k</i>+1)-point Gauss-Lobatto quadrature, which can also be regarded as a finite difference scheme on all Gauss-Lobatto points. We prove that this finite difference scheme is (<i>k</i> + 2)-th order accurate for <i>k</i> ≥ 2, whereas the <i>Q</i><sup><i>k</i></sup> spectral element method is usually considered as a (<i>k</i> + 1)-th order accurate scheme in the <i>L<sup>2</sup></i>-norm. This result can be extended to linear wave, parabolic and linear Schrödinger equations.</div><div><br></div><div><div>Additionally, the <i>Q<sup>k</sup></i> finite element method for elliptic problems can also be viewed as a finite difference scheme on all Gauss-Lobatto points if the variable coefficients are replaced by their piecewise <i>Q<sup>k</sup> </i>Lagrange interpolants at the Gauss-Lobatto points in each rectangular cell, which is also proven to be (<i>k</i> + 2)-th order accurate.</div></div><div><br></div><div><div>Moreover, the monotonicity and discrete maximum principle can be proven for the fourth order accurate <i>Q</i><sup>2</sup> scheme for solving a variable coefficient Poisson equation, which is the first monotone and high order accurate scheme for a variable coefficient elliptic operator.</div></div><div><br></div><div><div>Last but not least, we prove that certain high order accurate compact finite difference methods for convection diffusion problems satisfy weak monotonicity. Then a simple limiter can be designed to enforce the bound-preserving property when solving convection diffusion equations without losing conservation and high order accuracy.</div><div><br></div></div>
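As a concrete illustration of the quadrature underlying the method (standard background, not code from the thesis): for <i>k</i> = 2, the (<i>k</i>+1) = 3 Gauss-Lobatto points on [-1, 1] are {-1, 0, 1} with weights {1/3, 4/3, 1/3}, and the rule is exact for polynomials of degree up to 2(<i>k</i>+1) - 3 = 3.

```python
import numpy as np

# 3-point Gauss-Lobatto rule on [-1, 1] (the k = 2 case):
# unlike Gauss-Legendre rules, the nodes include both endpoints.
nodes = np.array([-1.0, 0.0, 1.0])
weights = np.array([1.0 / 3.0, 4.0 / 3.0, 1.0 / 3.0])

def gl_quad(f):
    """Approximate the integral of f over [-1, 1] with the 3-point rule."""
    return float(np.dot(weights, f(nodes)))

# Exact for polynomial degree <= 2*3 - 3 = 3:
print(gl_quad(lambda x: x**2))  # 2/3, the exact integral
print(gl_quad(lambda x: x**3))  # 0, the exact integral
# ...but not for degree 4: the exact value is 2/5, the rule gives 2/3.
print(gl_quad(lambda x: x**4))
```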
53

Resonance Varieties and Free Resolutions Over an Exterior Algebra

Michael J Kaminski (10703067) 06 May 2021 (has links)
If <i>E</i> is an exterior algebra on a finite dimensional vector space and <i>M</i> is a graded <i>E</i>-module, the relationship between the resonance varieties of <i>M</i> and the minimal free resolution of <i>M</i> is studied. In the context of Orlik–Solomon algebras, we give a condition under which elements of the second resonance variety can be obtained. We show that the resonance varieties of a general <i>M</i> are invariant under taking syzygy modules up to a shift. As a corollary, it is shown that certain points in the resonance varieties of <i>M</i> can be determined from minimal syzygies of a special form, and we define syzygetic resonance varieties to be the subvarieties consisting of such points. The (depth one) syzygetic resonance varieties of a square-free module <i>M</i> over <i>E</i> are shown to be subspace arrangements whose components correspond to graded shifts in the minimal free resolution of <i><sub>S</sub>M</i>, the square-free module over a commutative polynomial ring <i>S</i> corresponding to <i>M</i>. Using this, a lower bound for the graded Betti numbers of the square-free module <i>M</i> is given. As another application, it is shown that the minimality of certain syzygies of Orlik–Solomon algebras yields linear subspaces of their (syzygetic) resonance varieties and lower bounds for their graded Betti numbers.
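For context, the standard definition of resonance varieties (conventions vary by author; this statement is background, not taken verbatim from the thesis): they record the degree-one elements of <i>E</i> for which the associated cochain complex fails to be exact.

```latex
% Standard background definition (conventions vary by author).
% For a in E_1 we have a^2 = 0, so multiplication by a turns M into a
% cochain complex:
\[
  \cdots \longrightarrow M_{i-1} \xrightarrow{\;\cdot a\;} M_{i}
         \xrightarrow{\;\cdot a\;} M_{i+1} \longrightarrow \cdots
\]
% The i-th resonance variety collects those a for which this complex has
% nontrivial homology in cohomological degree i:
\[
  \mathcal{R}^{i}(M) \;=\; \{\, a \in E_{1} \;:\; H^{i}(M, \cdot a) \neq 0 \,\}.
\]
```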
54

Two Problems in Applied Topology

Nathanael D Cox (11008509) 23 July 2021 (has links)
<div>In this thesis, we present two main results in applied topology.</div><div> In our first result, we describe an algorithm for computing a semi-algebraic description of the quotient map of a proper semi-algebraic equivalence relation given as input. The complexity of the algorithm is doubly exponential in terms of the size of the polynomials describing the semi-algebraic set and equivalence relation.</div><div> In our second result, we use the fact that homology groups of a simplicial complex are isomorphic to the space of harmonic chains of that complex to obtain a representative cycle for each homology class. We then establish stability results on the harmonic chain groups.</div>
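The homology–harmonic chain correspondence behind the second result can be seen in a minimal example (standard discrete Hodge theory, not the thesis's algorithm): for a hollow triangle, the harmonic 1-chains, i.e. the kernel of the 1-Laplacian, form a one-dimensional space spanned by the boundary cycle.

```python
import numpy as np

# Hollow triangle: vertices {0, 1, 2}, oriented edges (0,1), (1,2), (0,2),
# and no 2-simplices.  Vertex-edge boundary matrix d1 (rows = vertices).
d1 = np.array([[-1.0,  0.0, -1.0],
               [ 1.0, -1.0,  0.0],
               [ 0.0,  1.0,  1.0]])

# With no 2-simplices, the 1-Laplacian reduces to L1 = d1^T d1.
L1 = d1.T @ d1

# Harmonic 1-chains = ker L1; by discrete Hodge theory this space is
# isomorphic to H_1, which has rank 1 for the hollow triangle.
eigvals, eigvecs = np.linalg.eigh(L1)
harmonic = eigvecs[:, np.isclose(eigvals, 0.0)]
print(harmonic.shape[1])      # 1: a single harmonic representative
print(d1 @ harmonic[:, 0])    # ~0: the representative is a cycle
```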
55

Applications of One-Point Quadrature Domains

Leah Elaine McNabb (18387690) 16 April 2024 (has links)
<p dir="ltr">This thesis presents applications of one-point quadrature domains to encryption and decryption as well as a method for estimating average temperature. In addition, it investigates methods for finding explicit formulas for certain functions and introduces results regarding quadrature domains for harmonic functions and for double quadrature domains. We use properties of quadrature domains to encrypt and decrypt locations in two dimensions. Results by Bell, Gustafsson, and Sylvan are used to encrypt a planar location as a point in a quadrature domain. A decryption method using properties of quadrature domains is then presented to uncover the location. We further demonstrate how to use data from the decryption algorithm to find an explicit formula for the Schwarz function for a one-point area quadrature domain. Given a double quadrature domain, we show that the fixed points within the area and arc length quadrature identities must be the same, but that the orders at each point may differ between these identities. In the realm of harmonic functions, we demonstrate how to uncover a one-point quadrature identity for harmonic functions from the quadrature identity for a simply-connected one-point quadrature domain for holomorphic functions. We use this result to state theorems for the density of one-point quadrature domains for harmonic functions within the class of domains with $C^{\infty}$-smooth boundary. These density theorems then lead us to discuss applications of quadrature domains for harmonic functions to estimating average temperature. We end by illustrating examples of the encryption process and discussing the building blocks for future work.</p>
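The prototypical one-point quadrature domain is a disk; the following is the classical mean value property, stated as background rather than as a result of the thesis:

```latex
% Classical background: the disk D(z_0, r) is a one-point quadrature domain,
% since the area mean value property gives, for every holomorphic, integrable f,
\[
  \int_{D(z_0, r)} f \, dA \;=\; \pi r^{2} \, f(z_0),
\]
% i.e., a quadrature identity with a single node z_0 and coefficient pi r^2.
% More generally, a one-point quadrature domain Omega satisfies an identity
% of the form
\[
  \int_{\Omega} f \, dA \;=\; \sum_{j=0}^{n-1} c_{j} \, f^{(j)}(z_0),
\]
% allowing derivatives of f at the single node z_0.
```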
56

Clinical Analytics and Personalized Medicine

Chih-Hao Fang (13978917) 19 October 2022 (has links)
<p>The increasing volume and availability of Electronic Health Records (EHRs) open up opportunities for computational models to improve patient care. Key factors in improving patient outcomes include identifying patient sub-groups with distinct patient characteristics and providing personalized treatment actions with expected improved outcomes. This thesis investigates how well-formulated matrix decomposition and causal inference techniques can be leveraged to tackle the problem of disease sub-typing and inferring treatment recommendations in healthcare. In particular, the research resulted in computational techniques based on archetypal analysis to identify and analyze disease sub-types and a causal reinforcement learning method for learning treatment recommendations. Our work on these techniques is divided into four parts in this thesis:</p> <p><br></p> <p>In the first part of the thesis, we present a retrospective study of Sepsis patients in intensive care environments using patient data. Sepsis accounts for more than 50% of hospital deaths, and the associated cost ranks the highest among hospital admissions in the US. Sepsis may be misdiagnosed because the patient is not thoroughly assessed or the symptoms are misinterpreted, which can lead to serious health complications or even death. An improved understanding of disease states, progression, severity, and clinical markers can significantly improve patient outcomes and reduce costs. We have developed a computational framework based on archetypal analysis that identifies disease states in sepsis using clinical variables and samples in the MIMIC-III database. Each identified state is associated with different manifestations of organ dysfunction. Patients in different states are observed to form statistically significantly distinct populations with disparate demographic and comorbidity profiles. We furthermore model disease progression using a Markov chain. 
Our progression model accurately characterizes the severity level of each pathological trajectory and identifies significant changes in clinical variables and treatment actions during sepsis state transitions. Collectively, our framework provides a holistic view of sepsis, and our findings provide the basis for the future development of clinical trials and therapeutic strategies for sepsis. These results have significant implications for a large number of hospitalizations.</p> <p><br></p> <p>In the second part, we focus on the problem of recommending optimal personalized treatment policies from observational data. Treatment policies are typically based on randomized controlled trials (RCTs); these policies are often sub-optimal, inconsistent, and have potential biases. Using observational data, we formulate suitable objective functions that encode causal reasoning in a reinforcement learning (RL) framework and present efficient algorithms for learning optimal treatment policies using interventional and counterfactual reasoning. We demonstrate the efficacy of our method on two observational datasets: (i) observational data to study the effectiveness of right heart catheterization (RHC) in the initial care of 5735 critically ill patients, and (ii) data from the Infant Health and Development Program (IHDP), aimed at estimating the effect of the intervention on neonatal health for 985 low-birth-weight, premature infants. For the RHC dataset, our method's policy prescribes RHC for 11.5% of the patients compared to the best current method that prescribes RHC for 38% of the patients. Even with this significantly reduced intervention, our policy yields a 1.5% improvement in the 180-day survival rate and a 2.2% improvement in the 30-day survival rate. 
For the IHDP dataset, we observe a 3.16% improvement in the rate of improvement of neonatal health using our method's policy.</p> <p><br></p> <p>In the third part, we consider the Supervised Archetypal Analysis (SAA) problem, which incorporates label information to compute archetypes. We formulate a new constrained optimization problem incorporating Laplacian regularization to guide archetypes towards groupings of similar data points, resulting in label-coherent archetypes and label-consistent soft assignments. We first use the MNIST dataset to show that SAA can yield better cluster quality over baselines for any chosen number of archetypes. We then use the CelebFaces Attributes dataset to demonstrate the superiority of SAA in terms of cluster quality and interpretability over competing supervised and unsupervised methods. We also demonstrate the interpretability of SAA decompositions in the context of a movie rating application. We show that the archetypes from SAA can be directly interpreted as user ratings and encode class-specific movie preferences. Finally, we demonstrate how the SAA archetypes can be used for personalized movie recommendations. </p> <p><br></p> <p>In the last part of this thesis, we apply our SAA technique to clinical settings. We study the problem of developing methods for ventilation recommendations for Sepsis patients. Mechanical ventilation is an essential and commonly prescribed intervention for Sepsis patients. However, studies have shown that mechanical ventilation is associated with higher mortality rates on average; it is generally believed that this is a consequence of broad use of ventilation, and that a more targeted use can significantly improve the average treatment effect (ATE) and, consequently, survival rates. We develop a computational framework using Supervised Archetypal Analysis to stratify our cohort to identify groups that benefit from ventilators. 
We use SAA to group patients based on pre-treatment variables as well as treatment outcomes by constructing a Laplacian regularizer from treatment response (label) information and incorporating it into the objective function of AA. Using our Sepsis cohort, we demonstrate that our method can effectively stratify our cohort into sub-cohorts that have positive and negative ATEs, corresponding to groups of patients that should and should not receive mechanical ventilation, respectively. </p> <p>We then train a classifier to identify patient sub-cohorts with positive and negative treatment effects. We show that our treatment recommender, on average, has a high positive ATE for patients that are recommended ventilator support and a slightly negative ATE for those not recommended ventilator support. We use SHAP (Shapley Additive exPlanations) techniques for generating clinical explanations for our classifier and demonstrate their use in the generation of patient-specific classification and explanation. Our framework provides a powerful new tool to assist in the clinical assessment of Sepsis patients for ventilator use.</p>
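The average treatment effect (ATE) estimation that parts two and four build on can be illustrated with the textbook inverse-propensity-weighting estimator on synthetic data. This is a standard baseline shown for orientation, not the thesis's causal RL or SAA machinery, and all data below is simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Synthetic observational data: confounder x affects both treatment and outcome.
x = rng.normal(size=n)
propensity = 1.0 / (1.0 + np.exp(-x))        # P(T = 1 | x)
t = rng.binomial(1, propensity)
y = 2.0 * t + x + rng.normal(size=n)          # true ATE = 2

# Naive difference of means is biased upward by confounding through x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse propensity weighting removes the bias (using the true propensities).
ipw = np.mean(t * y / propensity) - np.mean((1 - t) * y / (1 - propensity))

print(f"naive: {naive:.2f}, IPW: {ipw:.2f}")  # IPW lands close to 2
```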
57

<b>FAST ALGORITHMS FOR MATRIX COMPUTATION AND APPLICATIONS</b>

Qiyuan Pang (17565405) 10 December 2023 (has links)
<p dir="ltr">Matrix decompositions play a pivotal role in matrix computation and applications. While general dense matrix-vector multiplications and linear equation solvers are prohibitively expensive, matrix decompositions offer fast alternatives for matrices meeting specific properties. This dissertation delves into my contributions to two fast matrix multiplication algorithms and one fast linear equation solver algorithm tailored for certain matrices and applications, all based on efficient matrix decompositions. Fast dimensionality reduction methods in spectral clustering, based on efficient eigen-decompositions, are also explored.</p><p dir="ltr">The first matrix decomposition introduced is the "kernel-independent" interpolative decomposition butterfly factorization (IDBF), acting as a data-sparse approximation for matrices adhering to a complementary low-rank property. Constructible in $O(N\log N)$ operations for an $N \times N$ matrix via hierarchical interpolative decompositions (IDs), the IDBF results in a product of $O(\log N)$ sparse matrices, each with $O(N)$ non-zero entries. This factorization facilitates rapid matrix-vector multiplication in $O(N \log N)$ operations, making it a versatile framework applicable to various scenarios like special function transformation, Fourier integral operators, and high-frequency wave computation.</p><p dir="ltr">The second matrix decomposition accelerates matrix-vector multiplication for computing multi-dimensional Jacobi polynomial transforms. Leveraging the observation that solutions to Jacobi's differential equation can be represented through non-oscillatory phase and amplitude functions, the corresponding matrix is expressed as the Hadamard product of a numerically low-rank matrix and a multi-dimensional discrete Fourier transform (DFT) matrix. 
This approach utilizes $r^d$ fast Fourier transforms (FFTs), where $r = O(\log n / \log \log n)$ and $d$ is the dimension, resulting in an almost optimal algorithm for computing the multidimensional Jacobi polynomial transform.</p><p dir="ltr">An efficient numerical method is developed based on a matrix decomposition, Hierarchical Interpolative Factorization, for solving modified Poisson-Boltzmann (MPB) equations. Addressing the computational bottleneck of evaluating Green's function in the MPB solver, the proposed method achieves linear scaling by combining selected inversion and hierarchical interpolative factorization. This innovation significantly reduces the computational cost associated with solving MPB equations, particularly in the evaluation of Green's function.</p><p dir="ltr"><br></p><p dir="ltr">Finally, eigen-decomposition methods, including the block Chebyshev-Davidson method and Orthogonalization-Free methods (OFM), are proposed for dimensionality reduction in spectral clustering. By leveraging well-known spectrum bounds of a Laplacian matrix, the Chebyshev-Davidson methods allow dimensionality reduction without the need for spectrum bounds estimation. Rather than the vanilla Chebyshev-Davidson method, it is preferable to use the block Chebyshev-Davidson method with an inner-outer restart technique to reduce total CPU time, together with a progressive polynomial filter that takes advantage of suitable initial vectors when available, for example in the streaming graph scenario. Theoretically, the Orthogonalization-Free method constructs a unitary isomorphic space to the eigenspace or a space weighting the eigenspace, solving optimization problems through Gradient Descent with Momentum Acceleration based on Conjugate Gradient and Line Search for optimal step sizes. 
Numerical results indicate that the eigenspace and the weighted eigenspace are equivalent in clustering performance, and scalable parallel versions of the block Chebyshev-Davidson method and OFM are developed to enhance efficiency in parallel computing.</p>
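The structure exploited in the Jacobi transform above can be checked in a few lines. This is a generic illustration with a random low-rank factor, not the thesis's phase/amplitude construction: if $M = (UV^{T}) \circ F$ with $F$ the DFT matrix and the factor of rank $r$, then $Mx$ can be applied with $r$ FFTs instead of a dense matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 256, 4

# Rank-r factor L = U V^T and the n x n DFT matrix F; M = L * F (Hadamard).
U = rng.normal(size=(n, r))
V = rng.normal(size=(n, r))
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
M = (U @ V.T) * F

x = rng.normal(size=n)

# Fast apply: M x = sum_k u_k * FFT(v_k * x), i.e. r FFTs, O(r n log n) work,
# instead of the O(n^2) dense product.
fast = sum(U[:, k] * np.fft.fft(V[:, k] * x) for k in range(r))

direct = M @ x
print(np.allclose(fast, direct))  # True
```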
58

MAJORIZED MULTI-AGENT CONSENSUS EQUILIBRIUM FOR 3D COHERENT LIDAR IMAGING

Tony Allen (18502518) 06 May 2024 (has links)
<pre>Coherent lidar uses a chirped laser pulse for 3D imaging of distant targets. However, existing coherent lidar image reconstruction methods do not account for the system's aperture, resulting in sub-optimal resolution. Moreover, these methods use majorization-minimization for computational efficiency, but do so without a theoretical treatment of convergence.<br> <br>In this work, we present Coherent Lidar Aperture Modeled Plug-and-Play (CLAMP) for multi-look coherent lidar image reconstruction. CLAMP uses multi-agent consensus equilibrium (a form of PnP) to combine a neural network denoiser with an accurate physics-based forward model. CLAMP introduces an FFT-based method to account for the effects of the aperture and uses majorization of the forward model for computational efficiency. We also formalize the use of majorization-minimization in consensus optimization problems and prove convergence to the exact consensus equilibrium solution. Finally, we apply CLAMP to synthetic and measured data to demonstrate its effectiveness in producing high-resolution, speckle-free, 3D imagery.</pre>
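The majorization-minimization idea the abstract formalizes can be sketched on a toy problem (illustrative only, unrelated to the lidar forward model): minimize f(x) = sqrt(1 + x^2) by repeatedly minimizing a quadratic surrogate that lies above f and touches it at the current iterate.

```python
import math

def f(x):
    return math.sqrt(1.0 + x * x)

def f_prime(x):
    return x / math.sqrt(1.0 + x * x)

# Since f''(x) <= 1 everywhere, the quadratic
#   q_t(x) = f(x_t) + f'(x_t) * (x - x_t) + 0.5 * (x - x_t)**2
# majorizes f and touches it at x_t; minimizing q_t gives the MM update.
x = 3.0
values = [f(x)]
for _ in range(50):
    x = x - f_prime(x)          # argmin of the quadratic majorizer
    values.append(f(x))

# MM guarantees monotone descent toward the minimizer x* = 0.
print(all(b <= a for a, b in zip(values, values[1:])))  # True
print(abs(x) < 1e-3)                                    # True
```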
59

Modeling a Dynamic System Using Fractional Order Calculus

Jordan D.F. Petty (9216107) 06 August 2020 (has links)
<p>Fractional calculus is the integration and differentiation to an arbitrary or fractional order. The techniques of fractional calculus are not commonly taught in engineering curricula since physical laws are expressed in integer order notation. Dr. Richard Magin (2006) notes how engineers occasionally encounter dynamic systems in which the integer order methods do not properly model the physical characteristics and lead to numerous mathematical operations. In the following study, the application of fractional order calculus to approximate the angular position of a disk oscillating in a Newtonian fluid was experimentally validated. This experimental study was conducted to model the nonlinear response of an oscillating system using fractional order calculus. The integer and fractional order mathematical models solved the differential equation of motion specific to the experiment. The experimental results were compared to the integer order and the fractional order analytical solutions. The fractional order mathematical model in this study approximated the nonlinear response of the designed system by using the Bagley and Torvik fractional derivative. The analytical results of the experiment indicate that either the integer or fractional order methods can be used to approximate the angular position of the disk oscillating in the homogeneous solution. The following research was conducted in collaboration with Dr. Richard Mark French, Dr. Garcia Bravo, and Rajarshi Choudhuri, and the experimental design was derived from previous experiments conducted in 2018.</p>
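For reference, the fractional derivative and the Bagley-Torvik form mentioned above are standard; they are stated here in the usual notation, with generic coefficients rather than the values from this experiment:

```latex
% Riemann–Liouville fractional derivative of order alpha, with n-1 < alpha < n:
\[
  D^{\alpha} y(t) \;=\; \frac{1}{\Gamma(n - \alpha)}
  \frac{d^{n}}{dt^{n}} \int_{0}^{t}
  \frac{y(\tau)}{(t - \tau)^{\alpha - n + 1}} \, d\tau .
\]
% The Bagley–Torvik equation, which models a rigid plate (or disk) immersed in
% a Newtonian fluid, couples integer-order terms with half-order damping:
\[
  A \, y''(t) \;+\; B \, D^{3/2} y(t) \;+\; C \, y(t) \;=\; f(t),
\]
% where A, B, and C are coefficients determined by the physical system.
```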
60

An implementation of the parallelism, distribution and nondeterminism of membrane computing models on reconfigurable hardware

Nguyen, Van-Tuong January 2010 (has links)
Membrane computing investigates models of computation inspired by certain features of biological cells, especially features arising because of the presence of membranes. Because of their inherent large-scale parallelism, membrane computing models (called P systems) can be fully exploited only through the use of a parallel computing platform. However, it is an open question whether it is feasible to develop an efficient and useful parallel computing platform for membrane computing applications. Such a computing platform would significantly outperform equivalent sequential computing platforms while still achieving acceptable scalability, flexibility and extensibility. To move closer to an answer to this question, I have investigated a novel approach to the development of a parallel computing platform for membrane computing applications that has the potential to deliver a good balance between performance, flexibility, scalability and extensibility. This approach involves the use of reconfigurable hardware and an intelligent software component that is able to configure the hardware to suit the specific properties of the P system to be executed. As part of my investigations, I have created a prototype computing platform called Reconfig-P based on the proposed development approach. Reconfig-P is the only existing computing platform for membrane computing applications able to support both system-level and region-level parallelism. Using an intelligent hardware source code generator called P Builder, Reconfig-P is able to realise an input P system as a hardware circuit in various ways, depending on which aspects of P systems the user wishes to emphasise at the implementation level. For example, Reconfig-P can realise a P system in a rule-oriented manner or in a region-oriented manner. P Builder provides a unified implementation framework within which the various implementation strategies can be supported. 
The basic principles of this framework conform to a novel design pattern called Content-Form-Strategy. The framework seamlessly integrates the currently supported implementation approaches, and facilitates the inclusion of additional implementation strategies and additional P system features. Theoretical and empirical results regarding the execution time performance and hardware resource consumption of Reconfig-P suggest that the proposed development approach is a viable means of attaining a good balance between performance, scalability, flexibility and extensibility. Most of the existing computing platforms for membrane computing applications fail to support nondeterministic object distribution, a key aspect of P systems that presents several interesting implementation challenges. I have devised an efficient algorithm for nondeterministic object distribution that is suitable for implementation in hardware. Experimental results suggest that this algorithm could be incorporated into Reconfig-P without significantly reducing its performance or efficiency. / Thesis (PhDInformationTechnology)--University of South Australia, 2010
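A maximally parallel, nondeterministic evolution step — the behavior Reconfig-P realizes in hardware — can be sketched in software for a single region. This is a simplified software model under the usual multiset semantics, not the hardware algorithm from the thesis:

```python
import random
from collections import Counter

def applicable(rules, objects):
    """Rules whose left-hand-side multiset is contained in the region."""
    return [(lhs, rhs) for lhs, rhs in rules
            if all(objects[o] >= k for o, k in lhs.items())]

def step(rules, objects, rng=random):
    """One maximally parallel step in a single region: keep applying
    nondeterministically chosen rules until none is applicable, then
    release all products at once."""
    objects = Counter(objects)
    produced = Counter()
    while True:
        choices = applicable(rules, objects)
        if not choices:
            break
        lhs, rhs = rng.choice(choices)   # nondeterministic rule choice
        objects -= lhs                    # reactants are consumed immediately
        produced += rhs                   # products appear only after the step
    return objects + produced

# Two competing rules, a -> b and a -> c.  Maximal parallelism consumes
# every copy of 'a'; nondeterminism decides the b/c split.
rules = [(Counter({"a": 1}), Counter({"b": 1})),
         (Counter({"a": 1}), Counter({"c": 1}))]
result = step(rules, Counter({"a": 5}))
print(result["b"] + result["c"])  # 5: all objects consumed, total conserved
```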
