31

Inequalities related to Lech's conjecture and other problems in local and graded algebra

Cheng Meng (17591913) 07 December 2023
This thesis consists of four parts that study different topics in commutative algebra. The main results of the first part are in Chapter 3, which is based on the author's paper [1]. Let R be a commutative Noetherian ring graded by a torsion-free abelian group G. We introduce the notion of G-graded irreducibility and prove that it is equivalent to irreducibility in the usual sense; this generalizes a result of Chen and Kim in the Z-graded case. We also discuss the index of reducibility and give an inequality relating the indices of reducibility of a radical non-graded ideal and its largest graded subideal. The second topic is developed in Chapter 4, which is based on the author's paper [2]. In this chapter, we prove that if P is a prime ideal inside a polynomial ring S in n variables with dim S/P = r, and adjoining s general linear forms to P (where s ≤ r) changes the (r − s)-th Hilbert coefficient of the quotient ring by 1 while leaving the 0th through (r − s − 1)-th Hilbert coefficients unchanged, then the depth of S/P is n − s − 1. This criterion also yields restrictions on the generic initial ideal of a prime ideal inside a polynomial ring. The third part of the thesis is Chapter 5, which is based on the author's paper [3]. Let R be a polynomial ring over a field. We introduce the concept of sequentially almost Cohen-Macaulay modules, describe the extremal rays of the cone of local cohomology tables of finitely generated graded R-modules that are sequentially almost Cohen-Macaulay, and describe some cases in which the local cohomology table of a module of dimension 3 has a nontrivial decomposition. The last part is Chapter 6, which is based on the author's paper [4]. We introduce the notion of strongly Lech-independent ideals as a generalization of the Lech-independent ideals defined by Lech and Hanes, and use this notion to derive inequalities on multiplicities of ideals. In particular, we prove a new case of Lech's conjecture: if (R, m) → (S, n) is a flat local extension of local rings with dim R = dim S, the completion of S is the completion of a standard graded ring over a field k with respect to the homogeneous maximal ideal, and the completion of mS is the completion of a homogeneous ideal, then e(R) ≤ e(S).
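For reference, the multiplicity here is the Hilbert–Samuel multiplicity, and Lech's conjecture in its general form reads as follows (standard statements, not quoted from the thesis):

$$
e(R) \;=\; \lim_{n\to\infty} \frac{d!}{n^{d}}\,\ell_R\!\big(R/\mathfrak{m}^{n}\big), \qquad d = \dim R,
$$

and Lech's conjecture asserts that $e(R) \le e(S)$ for every flat local extension $(R,\mathfrak{m}) \to (S,\mathfrak{n})$ of Noetherian local rings. The result above establishes this inequality under the additional graded hypotheses on S and mS.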
32

NUMERICAL METHOD BASED NEURAL NETWORK AND ITS APPLICATION IN SCIENTIFIC COMPUTING, OPERATOR LEARNING AND OPTIMIZATION PROBLEM

Jiahao Zhang (13140363) 22 July 2022
In this work, we develop several special computational structures for neural networks based on existing approaches such as the auto-encoder and DeepONet. Combined with classic numerical methods from scientific computing, namely finite differences and the SAV method, our model is able to solve operator learning tasks for partial differential equations accurately in both data-driven and non-data-driven settings. High-dimensional problems normally require a large number of samples for neural network training. The proposed model, equipped with an auto-encoder, performs dimension reduction on the input operator, discovering its intrinsic hidden features and reducing the number of samples needed for training. In addition, nonlinear bases of the hidden variables are constructed for both the operator variable and the solution of the equation, leading to a concise representation of the solution. In the non-data-driven setting, with the assistance of the SAV method, our approach derives the solution of the equation from only the initial and boundary conditions, which a standard network cannot do. It also preserves the advantages of DeepONet: it performs operator learning with various initial conditions or parametric equations. A modified energy is defined to estimate the true energy of the system and decreases monotonically; it also serves as an indicator of a suitable time step, allowing the model to adjust the step size. Finally, optimization is a key procedure of network training. We propose a new optimization method based on SAV that allows a much larger learning rate than SGD and Adam, the most popular methods in use today, and also allows an adaptive learning rate for faster convergence to a critical point.
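To illustrate the last point, the sketch below shows one SAV-style gradient step, based on a common reading of the scalar auxiliary variable idea rather than the thesis's exact scheme; the objective f, the constant C, and the step size dt are illustrative assumptions. The auxiliary variable r tracks sqrt(f + C), and the "modified energy" r^2 is non-increasing for any step size, which is what permits learning rates far larger than SGD or Adam tolerate.

```python
# A minimal SAV-style optimizer sketch (illustrative; not the thesis's exact scheme).
import numpy as np

def sav_step(theta, r, f, grad_f, dt, C=1.0):
    """One SAV-style gradient step; r approximates sqrt(f(theta) + C)."""
    g = grad_f(theta)
    denom = np.sqrt(f(theta) + C)
    # the semi-implicit update of the auxiliary variable has a closed form
    r_new = r / (1.0 + dt * np.dot(g, g) / (2.0 * denom**2))
    theta_new = theta - dt * (r_new / denom) * g
    return theta_new, r_new

# usage on a simple ill-conditioned quadratic, f(x) = 0.5 * ||A x||^2
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * float((A @ x) @ (A @ x))
grad_f = lambda x: A.T @ (A @ x)

theta = np.array([1.0, 1.0])
r = np.sqrt(f(theta) + 1.0)
for _ in range(200):
    theta, r = sav_step(theta, r, f, grad_f, dt=0.5)
print(theta, r**2)  # the modified energy r**2 never increases from step to step
```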
33

The Evolving Neural Network Method for Scalar Hyperbolic Conservation Laws

Brooke E Hejnal (18340839) 10 April 2024
This thesis introduces the evolving neural network method for solving scalar hyperbolic conservation laws. The method uses neural networks to compute solutions on an optimal moving mesh that evolves with the solution over time. The motivation is to produce solutions with high accuracy near shocks while reducing the overall computational cost. The evolving neural network method first approximates the initial data with a neural network, producing a continuous piecewise linear approximation. The neural network representation is then evolved in time according to a combination of characteristics and a finite volume-type method.

It is shown numerically and theoretically that the evolving neural network method outperforms traditional fixed-mesh methods with respect to computational cost. Numerical results for benchmark test problems, including Burgers' equation and the Buckley-Leverett equation, demonstrate that the method can accurately capture shocks and rarefaction waves with a minimal number of mesh points.
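As background for the characteristic step, here is a toy illustration (not the author's method) of transporting a continuous piecewise linear profile along the characteristics of Burgers' equation; it is valid only before characteristics cross, which is exactly where a finite volume-type correction of the kind used in the thesis must take over.

```python
# Toy sketch: pre-shock transport of piecewise-linear data along Burgers characteristics.
import numpy as np

def evolve_pwl_by_characteristics(x, u, t):
    """Move the breakpoints x of a piecewise-linear profile u along characteristics."""
    x_t = x + t * u                  # for Burgers' equation the characteristic speed is u
    if np.any(np.diff(x_t) <= 0):    # breakpoints crossed: a shock has formed
        raise ValueError("characteristics cross before time t; shock handling is needed")
    return x_t, u                    # values are constant along characteristics

# usage: a profile whose decreasing ramp steepens toward a shock
x0 = np.linspace(-1.0, 1.0, 21)
u0 = np.where(x0 < 0.0, 1.0, np.maximum(1.0 - x0, 0.0))
xt, ut = evolve_pwl_by_characteristics(x0, u0, 0.4)
print(np.min(np.diff(xt)))           # mesh spacing shrinks where the solution steepens
```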
34

A-Hypergeometric Systems and D-Module Functors

Avram W Steiner (6598226) 15 May 2019
Let A be a d by n integer matrix. Gel'fand et al. proved that most A-hypergeometric systems have an interpretation as a Fourier–Laplace transform of a direct image. The set of parameters for which this happens was later identified by Schulze and Walther as the set of not strongly resonant parameters of A. A similar statement relating A-hypergeometric systems to exceptional direct images was proved by Reichelt. In the first part of this thesis, we consider a hybrid approach involving neighborhoods U of the torus of A and compositions of direct and exceptional direct images. Our main results characterize the parameters for which the associated A-hypergeometric system is the inverse Fourier–Laplace transform of such a "mixed Gauss–Manin system". If the semigroup ring of A is normal, we show that every A-hypergeometric system is mixed Gauss–Manin.

In the second part of this thesis, we use our notion of mixed Gauss–Manin systems to show that the projection and restriction of a normal A-hypergeometric system to the coordinate subspace corresponding to a face are isomorphic up to cohomological shift; moreover, they are essentially hypergeometric. We also show that if A is in addition homogeneous, the holonomic dual of an A-hypergeometric system is itself A-hypergeometric. This extends a result of Uli Walther, proving a conjecture of Nobuki Takayama in the normal homogeneous case.
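For reference, the A-hypergeometric (GKZ) system attached to A and a parameter $\beta \in \mathbb{C}^d$ is the standard object below (included for context, not quoted from the thesis): the left ideal of the Weyl algebra $D_n$

$$
H_A(\beta) \;=\; D_n\Big(\{\partial^{u_+} - \partial^{u_-} : u \in \ker_{\mathbb{Z}} A\} \;\cup\; \{\textstyle\sum_{j=1}^{n} a_{ij} x_j \partial_j - \beta_i : 1 \le i \le d\}\Big),
\qquad
M_A(\beta) \;=\; D_n / H_A(\beta),
$$

where $u = u_+ - u_-$ is the decomposition of $u \in \mathbb{Z}^n$ into its positive and negative parts.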
35

Microlocal Analysis and Applications to Medical Imaging

Chase O Mathison (9179663) 28 July 2020
This thesis is a collection of the three projects I have worked on at Purdue. The first is a paper on thermoacoustic tomography with circular integrating detectors, published in Inverse Problems and Imaging. Results from this paper include showing that the measurement operators involved are Fourier integral operators, as well as proving microlocal uniqueness in certain cases, and stability. The second paper, submitted to the Journal of Inverse and Ill-Posed Problems, applies sampling theory to the specific case of thermoacoustic tomography. Results include resolution limits imposed by sampling rates, and a demonstration that aliasing artifacts appear in predictable locations in an image when the measurement operator is undersampled in either the time or space variables. We also show an application of a basic anti-aliasing scheme based on averaging of data. The last project moves slightly away from microlocal analysis and considers uniqueness for the restricted Radon transform in even dimensions, a problem arising in medical imaging. This is the classical interior problem; we give a characterization of the range of the Radon transform and, from this, obtain a characterization of the kernel of the restricted Radon transform. Figures are included throughout to illustrate the results.
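For reference, the Radon transform underlying the last project is the standard one (notation given for context only):

$$
\mathcal{R}f(\omega, s) \;=\; \int_{x\cdot\omega = s} f(x)\, d\sigma(x), \qquad \omega \in S^{n-1},\ s \in \mathbb{R},
$$

and the interior problem asks to recover f on a region of interest from $\mathcal{R}f(\omega, s)$ only for those hyperplanes $\{x\cdot\omega = s\}$ that meet the region of interest.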
36

EFFICIENT NUMERICAL METHODS FOR KINETIC EQUATIONS WITH HIGH DIMENSIONS AND UNCERTAINTIES

Yubo Wang (11792576) 19 December 2021
In this thesis, we focus on two challenges arising in kinetic equations: high dimensions and uncertainties. To reduce the dimension, we propose efficient methods for the linear Boltzmann and full Boltzmann equations based on dynamic low-rank frameworks. For the linear Boltzmann equation, we propose a method based on a macro-micro decomposition of the equation; the low-rank approximation is used only for the micro part of the solution. The time and spatial discretizations are chosen so that the overall scheme is second-order accurate (in both the fully kinetic and the limit regime) and asymptotic-preserving (AP); that is, in the diffusive regime the scheme becomes a macroscopic solver for the limiting diffusion equation that automatically captures the low-rank structure of the solution. Moreover, the method can be implemented in a fully explicit way and is thus significantly more efficient than the previous state of the art. We demonstrate the accuracy and efficiency of the proposed low-rank method with a number of four-dimensional (two dimensions in physical space and two in velocity space) simulations.

We further study the adaptivity of low-rank methods for the full Boltzmann equation and propose a highly efficient adaptive low-rank method for computing steady-state solutions. The main novelties of this approach are twofold. First, to the best of our knowledge, the dynamic low-rank integrator has not previously been applied to the full Boltzmann equation. The full collision operator is local in the spatial variable while the convection part is local in the velocity variable; this separated structure is well suited to low-rank methods. Compared with full-grid methods (finite difference, finite volume, etc.), the dynamic low-rank method avoids the full computation of the collision operator in each spatial grid cell/element, and consequently achieves much better efficiency, especially for low-rank flows (e.g., a normal shock wave). Second, our adaptive low-rank method uses a novel dynamic thresholding strategy to adaptively control the computational rank, yielding better efficiency, especially for steady-state solutions. We demonstrate the accuracy and efficiency of the proposed adaptive low-rank method with a number of 1D/2D Maxwell-molecule benchmark tests.

For kinetic equations with uncertainties, we focus on non-intrusive sampling methods, which inherit good properties (AP, positivity preservation) from existing deterministic solvers. We propose a control variate multilevel Monte Carlo method for the kinetic BGK model of the Boltzmann equation subject to random inputs. The method combines a multilevel Monte Carlo technique with the computation of optimal control variate multipliers derived from local or global variance minimization problems. Consistency and convergence analysis for the method, equipped with a second-order positivity-preserving and asymptotic-preserving scheme in space and time, is also performed. Various numerical examples confirm that the optimized multilevel Monte Carlo method outperforms the classical multilevel Monte Carlo method, especially for problems with discontinuities.
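For reference, the kinetic BGK model named above and the macro-micro decomposition used for the linear equation take the following standard forms (generic notation that may differ from the thesis):

$$
\partial_t f + v\cdot\nabla_x f = \frac{1}{\tau}\big(M[f] - f\big),
\qquad
M[f](v) = \frac{\rho}{(2\pi T)^{d/2}} \exp\!\Big(-\frac{|v-u|^2}{2T}\Big),
$$

where $(\rho, u, T)$ are the density, bulk velocity, and temperature computed from the moments of $f$. For the linear equation in the diffusive scaling, the macro-micro decomposition writes $f = \rho(t,x)\,M(v) + \varepsilon\, g(t,x,v)$ with $\langle g\rangle = 0$, and the low-rank approximation is applied only to the micro part $g$.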
37

Local Langlands Correspondence for Asai L and Epsilon Factors

Daniel J Shankman (8797034) 05 May 2020
Let E/F be a quadratic extension of p-adic fields. The local Langlands correspondence establishes a bijection between n-dimensional Frobenius semisimple representations of the Weil-Deligne group of E and smooth, irreducible representations of GL(n, E). We reinterpret this bijection in the setting of the Weil restriction of scalars Res(GL(n), E/F), and show that the Asai L-function and epsilon factor on the analytic side match up with the expected Artin L-function and epsilon factor on the Galois side.
38

Modeling Temporal Patterns of Neural Synchronization: Synaptic Plasticity and Stochastic Mechanisms

Joel A Zirkle (9178547) 05 August 2020
Neural synchrony in the brain at rest is usually variable and intermittent: intervals of predominantly synchronized activity are interrupted by intervals of desynchronized activity. Prior studies have suggested that this temporal structure of weakly synchronous activity may be functionally significant: many short desynchronizations may be functionally different from a few long desynchronizations, even if the average synchrony level is the same. In this thesis, we use computational neuroscience methods to investigate the effects of (i) spike-timing dependent plasticity (STDP) and (ii) noise on the temporal patterns of synchronization in a simple model. The model is composed of two conductance-based neurons connected via excitatory unidirectional synapses. In (i), these excitatory synapses are made plastic; in (ii), two different implementations of noise modeling the stochasticity of membrane ion channels are considered. The plasticity results are taken from our recently published article, while the noise results are currently being compiled into a manuscript.

The dynamics of this network is subjected to the time-series analysis methods used in prior experimental studies. We provide numerical evidence that both STDP and channel noise can alter the synchronized dynamics of the network in several ways, depending on the time scale on which plasticity acts and on the intensity of the noise. In general, however, the action of STDP and noise in the simple network considered here is to promote dynamics with short desynchronizations (i.e., dynamics reminiscent of that observed in experimental studies) over dynamics with longer desynchronizations.
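For context, a standard pair-based additive STDP rule (a common form; the thesis's exact rule and parameters may differ) updates a synaptic weight $w$ according to the spike-timing difference $\Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}}$:

$$
\Delta w \;=\;
\begin{cases}
A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad (\text{potentiation}),\\
-A_{-}\, e^{\Delta t/\tau_{-}}, & \Delta t < 0 \quad (\text{depression}),
\end{cases}
$$

where $A_{\pm} > 0$ set the magnitudes and $\tau_{\pm}$ the time scales of potentiation and depression.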
39

Homological Representatives in Topological Persistence

Tao Hou (12422845) 20 April 2022
Harnessing the power of data has been a driving force for computing in recent years. However, the non-vectorized or even non-Euclidean nature of certain data with complex structures also poses new challenges to the data science community. Topological data analysis (TDA) has proven effective in several scenarios for alleviating these challenges by providing techniques that can reveal hidden structures and high-order connectivity in data. A central technique in TDA is persistent homology, which provides intervals tracking the birth and death of topological features in a growing sequence of topological spaces. In this dissertation, we study the representative problem for persistent homology, motivated by the observation that persistent homology does not pinpoint a specific homology class or cycle born and dying with the persistence intervals. Studying representatives also leads us to new findings for related problems such as persistence computation.

First, we look into the representative problem for (standard) persistent homology and term the representatives persistent cycles. We define persistent cycles as cycles born and dying with given persistence intervals and connect the definition to interval decompositions of persistence modules. We also study the computation of optimal (minimum) persistent cycles, which have guaranteed quality. We prove that it is NP-hard to compute minimum persistent p-cycles for the two types of intervals in persistent homology in general dimensions (p > 1). In view of the NP-hardness results, we then identify a special but important class of inputs called weak (p+1)-pseudomanifolds whose minimum persistent p-cycles can be computed in polynomial time. The algorithms are based on a reduction to minimum (s,t)-cuts on dual graphs.

Second, we propose alternative persistent cycles capturing the dynamic changes of homological features born and dying with persistence intervals, which the previous persistent cycles do not reveal. We focus on persistent homology generated by piecewise linear (PL) functions and base our definition on an extension of persistence called levelset zigzag persistence. We define a sequence of cycles called levelset persistent cycles, containing a cycle between each pair of consecutive critical points within the persistence interval. In light of the NP-hardness results proven previously, we propose polynomial-time algorithms computing optimal sequences of levelset persistent p-cycles for weak (p+1)-pseudomanifolds. Our algorithms draw upon the idea of relating optimal cycles to min-cuts in a graph that we exploited earlier for standard persistent cycles. Note that levelset zigzag poses non-trivial challenges for this approach because a sequence of optimal cycles, instead of a single one, needs to be computed.

Third, we investigate the computation of zigzag persistence on graph inputs, motivated by the fact that graphs model real-world circumstances in many applications where they may constantly change to capture dynamic behaviors of phenomena. Zigzag persistence, an extension of standard persistence incorporating both insertions and deletions of simplices, is an appropriate instrument for analyzing such changing graph data. However, unlike standard persistence, which admits nearly linear-time algorithms for graphs, such results for the zigzag version improving the general $O(m^\omega)$ time complexity are not known, where $\omega < 2.37286$ is the matrix multiplication exponent. We propose algorithms for zigzag persistence on graphs which run in near-linear time. Specifically, given a filtration of length m on a graph of size n, the algorithm for dimension 0 runs in $O(m\log^2 n+m\log m)$ time and the algorithm for dimension 1 runs in $O(m\log^4 n)$ time. The algorithm for dimension 0 draws upon another algorithm designed originally for pairing critical points of Morse functions on 2-manifolds. The correctness proof of the algorithm, which is a major contribution, is achieved with the help of representatives. The algorithm for dimension 1 pairs a negative edge with the earliest positive edge so that a representative 1-cycle containing both edges resides in all intermediate graphs.
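As a point of comparison for the standard (non-zigzag) setting, the sketch below computes 0-dimensional persistence of an insertion-only graph filtration with union-find and the elder rule; it is illustrative background, not the near-linear-time zigzag algorithms of the thesis.

```python
# 0-dimensional persistence of an insertion-only graph filtration (illustrative sketch).

def zero_dim_persistence(vertex_times, edges):
    """vertex_times: dict vertex -> birth time; edges: list of (u, v, time)."""
    parent = {v: v for v in vertex_times}
    birth = dict(vertex_times)              # birth time of each component's oldest vertex

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    bars = []
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # edge creates a cycle: no 0-dimensional event
        if birth[ru] > birth[rv]:
            ru, rv = rv, ru                 # elder rule: the younger component dies at t
        bars.append((birth[rv], t))
        parent[rv] = ru
    roots = {find(v) for v in vertex_times}
    bars.extend((birth[r], float("inf")) for r in roots)   # essential classes
    return bars

# usage: three vertices born at different times, two edges added later
print(zero_dim_persistence({0: 0.0, 1: 0.1, 2: 0.2}, [(0, 1, 0.5), (1, 2, 0.4)]))
```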
40

LONG TIME BEHAVIOR OF SURFACE DIFFUSION OF ANISOTROPIC SURFACE ENERGY

Hanan Ussif Gadi (17592987) 09 December 2023
We investigate the surface diffusion flow of smooth curves with anisotropic surface energy. This geometric flow is the $H^{-1}$-gradient flow of an energy functional: it preserves the area enclosed by the evolving curve while decreasing its energy. We show the existence of a unique local-in-time solution for the flow, as well as the existence of a global-in-time solution if the initial curve is close to the Wulff shape. In addition, we prove that the global solution converges to the Wulff shape as t → ∞. In the current setting the anisotropy is not too strong, so that the Wulff shape is given by a smooth curve. In the last section, we formulate the corresponding problem when the Wulff shape exhibits corners.
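For context, with an anisotropy density $\gamma(\theta)$ of the normal angle $\theta$, and up to the sign and orientation conventions chosen in the thesis, the objects involved take the following standard form (included for reference, not quoted from the thesis):

$$
E_\gamma(\Gamma) = \int_\Gamma \gamma(\theta)\, ds, \qquad
V = \partial_{ss}\,\kappa_\gamma, \qquad
\kappa_\gamma = \big(\gamma(\theta) + \gamma''(\theta)\big)\,\kappa,
$$

where $\kappa$ is the curvature, $s$ the arclength, and $V$ the normal velocity. Since $\int_\Gamma \partial_{ss}\kappa_\gamma\, ds = 0$ on a closed curve, the enclosed area is conserved, while $E_\gamma$ decreases along the flow. The Wulff shape is the convex region $W_\gamma = \bigcap_{\theta}\{x \in \mathbb{R}^2 : x\cdot\nu(\theta) \le \gamma(\theta)\}$, which is bounded by a smooth curve precisely when the anisotropy is not too strong.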
