11

Sampling from the Hardcore Process

Dodds, William C 01 January 2013 (has links)
Partially Recursive Acceptance Rejection (PRAR) and bounding chains used in conjunction with coupling from the past (CFTP) are two perfect simulation protocols which can be used to sample from a variety of unnormalized target distributions. This paper first examines and then implements these two protocols to sample from the hardcore gas process. We empirically determine the subset of the hardcore process's parameters for which these two algorithms run in polynomial time. Comparing the efficiency of these two algorithms, we find that PRAR runs much faster for small values of the hardcore process's parameter whereas the bounding chain approach is vastly superior for large values of the process's parameter.
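A minimal sketch of the simplest exact sampler for this target, plain acceptance-rejection on a small grid, may help fix ideas. It is not the PRAR or bounding-chain CFTP protocols the paper implements, and the grid size and activity parameter below are illustrative only.

```python
import random

def sample_hardcore_ar(n, lam, max_tries=100_000):
    """Naive acceptance-rejection sampler for the hardcore process on an
    n-by-n grid with activity lam: propose each site occupied independently
    with probability lam/(1+lam), accept only configurations with no two
    adjacent occupied sites. Accepted draws are exact, but the rejection
    cost grows quickly with n and lam (the regime the thesis quantifies)."""
    p = lam / (1.0 + lam)  # marginal occupation probability under the proposal
    for _ in range(max_tries):
        config = [[1 if random.random() < p else 0 for _ in range(n)] for _ in range(n)]
        ok = True
        for i in range(n):
            for j in range(n):
                if config[i][j]:
                    if (i + 1 < n and config[i + 1][j]) or (j + 1 < n and config[i][j + 1]):
                        ok = False
                        break
            if not ok:
                break
        if ok:
            return config  # exact draw from the hardcore distribution
    raise RuntimeError("no acceptance within max_tries; lam or n too large")

print(sample_hardcore_ar(4, 0.5))
```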
12

OPTIMAL GEOMETRY IN A SIMPLE MODEL OF TWO-DIMENSIONAL HEAT TRANSFER

Peng, Xiaohui 10 1900 (has links)
<p>This investigation is motivated by the problem of optimal design of cooling elements in modern battery systems used in hybrid/electric vehicles. We consider a simple model of two-dimensional steady-state heat conduction generated by a prescribed distribution of heat sources and involving a one-dimensional cooling element represented by a closed contour. The problem consists in finding an optimal shape of the cooling element which will ensure that the temperature in a given region is close (in the least squares sense) to some prescribed distribution. We formulate this problem as PDE-constrained optimization and use methods of the shape-differential calculus to obtain the first-order optimality conditions characterizing the locally optimal shapes of the contour. These optimal shapes are then found numerically using the conjugate gradient method where the shape gradients are conveniently computed based on adjoint equations. A number of computational aspects of the proposed approach is discussed and optimization results obtained in several test problems are presented.</p> / Master of Science (MSc)
13

Paving the Randomized Gauss-Seidel

Wu, Wei 01 January 2017 (has links)
The Randomized Gauss-Seidel Method (RGS) is an iterative algorithm that solves overdetermined systems of linear equations Ax = b. This paper studies an extension of the RGS method, the Randomized Block Gauss-Seidel Method (RBGS). At each step, the algorithm greedily minimizes the objective function L(x) = ‖Ax − b‖² with respect to a subset of coordinates, using a randomized control method to choose the subset at each step. This algorithm is the first block RGS method with an expected linear convergence rate that can be described by the properties of the matrix A and its column submatrices. The analysis demonstrates that RBGS improves on RGS most when given an appropriate column paving of the matrix, a partition of the columns into well-conditioned blocks. The main result yields an RBGS method that is more efficient than the simple RGS method.
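As a rough illustration of the block update described above, the following sketch performs randomized block coordinate descent on L(x) = ‖Ax − b‖², solving exactly over one randomly chosen block of columns per step. The uniform block-selection rule and the example paving are placeholders, not the paper's randomized control method or a genuine column paving.

```python
import numpy as np

def randomized_block_gauss_seidel(A, b, blocks, iters=500, rng=None):
    """Block coordinate descent on L(x) = ||Ax - b||^2: at each step pick one
    block of columns at random and solve exactly for those coordinates,
    holding the rest fixed. `blocks` is a list of column-index arrays."""
    rng = np.random.default_rng(rng)
    x = np.zeros(A.shape[1])
    r = b - A @ x                                  # current residual
    for _ in range(iters):
        S = blocks[rng.integers(len(blocks))]      # randomly chosen block
        A_S = A[:, S]
        delta, *_ = np.linalg.lstsq(A_S, r, rcond=None)  # exact block solve
        x[S] += delta
        r -= A_S @ delta
    return x

# usage: an overdetermined system with two hypothetical column blocks
A = np.random.randn(50, 6)
b = np.random.randn(50)
x = randomized_block_gauss_seidel(A, b, [np.arange(0, 3), np.arange(3, 6)])
print(np.linalg.norm(A @ x - b))
```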
14

Survival Model and Estimation for Lung Cancer Patients.

Yuan, Xingchen 07 May 2005 (has links)
Lung cancer is the most frequent fatal cancer in the United States. Following standard actuarial analysis, we assume an exponential form for the baseline hazard function and combine it with Cox proportional hazards regression to study the survival of a group of lung cancer patients. The covariates in the hazard function are estimated by maximum likelihood following the proportional hazards regression analysis. Although the proportional hazards model does not give an explicit baseline hazard function, the baseline hazard can be estimated by fitting the data with a non-linear least squares technique. The survival model is then examined by a neural network simulation. The neural network learns the survival pattern from available hospital data and gives survival predictions for random covariate combinations. The simulation results support the covariate estimation in the survival model.
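To make the parametric setup concrete, here is a minimal maximum-likelihood sketch of a proportional-hazards model with an exponential (constant) baseline hazard and right censoring. The data and the single covariate are hypothetical, and the neural-network stage of the study is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def fit_exponential_ph(times, events, X):
    """Fit h(t | x) = exp(b0) * exp(x @ beta) by maximum likelihood with
    right censoring. `times` are follow-up times, `events` are 1 for an
    observed death and 0 for censoring, X is the covariate matrix."""
    times, events, X = np.asarray(times, float), np.asarray(events, float), np.asarray(X, float)

    def neg_log_lik(params):
        b0, beta = params[0], params[1:]
        log_rate = b0 + X @ beta                     # log hazard per subject
        # censored exponential log-likelihood: delta*log(rate) - rate*t
        return -np.sum(events * log_rate - np.exp(log_rate) * times)

    res = minimize(neg_log_lik, np.zeros(X.shape[1] + 1), method="BFGS")
    return res.x  # [log baseline rate, beta_1, ..., beta_p]

# usage with simulated data: one hypothetical covariate, true beta = 0.5
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
t = rng.exponential(1.0 / np.exp(0.5 * X[:, 0]))
print(fit_exponential_ph(t, np.ones(200), X))
```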
15

Electrodynamical Modeling for Light Transport Simulation

Saunders, Michael G 01 May 2017 (has links)
Modernity in the computer graphics community is characterized by a burgeoning interest in physically based rendering techniques. That is to say that mathematical reasoning from first principles is widely preferred to ad hoc, approximate reasoning in blind pursuit of photorealism. Accordingly, the purpose of our research is to investigate the efficacy of explicit electrodynamical modeling by means of the generalized Jones vector given by Azzam [1] and the generalized Jones matrix given by Ortega-Quijano & Arce-Diego [2] in the context of stochastic light transport simulation for computer graphics. Augmenting the status quo path tracing framework with such a modeling technique would permit a plethora of complex optical effects, including dispersion, birefringence, dichroism, and thin-film interference, as well as the physical optical elements associated with these effects, to become naturally supported, fully integrated features in physically based rendering software.
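For readers unfamiliar with this kind of electrodynamical bookkeeping, the toy example below applies classical 2x2 Jones calculus to a polarization state passing through two ideal elements. It is only a stand-in for the generalized (three-dimensional) Jones vectors and matrices of Azzam and Ortega-Quijano & Arce-Diego that the thesis actually employs.

```python
import numpy as np

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]], dtype=complex)

def quarter_wave_plate(theta):
    """Jones matrix of a quarter-wave plate with fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    retarder = np.diag([1.0, 1j])          # quarter-wave retardance
    return R @ retarder @ R.T

# horizontally polarized light through a QWP at 45 degrees, then a vertical polarizer
E_in = np.array([1.0, 0.0], dtype=complex)
E_out = linear_polarizer(np.pi / 2) @ quarter_wave_plate(np.pi / 4) @ E_in
print("transmitted intensity:", float(np.abs(E_out) @ np.abs(E_out)))  # expect 0.5
```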
16

Swarm Stability: Distinguishing between Clumps and Lattices

Barth, Quentin 01 January 2019 (has links)
Swarms are groups of agents, which we model as point particles, whose collective behavior emerges from individual interactions. We study a first-order swarming model in a periodic coordinate system with pairwise social forces, investigating its stable configurations for differing numbers of agents relative to the periodic width. Two states emerge from numerical simulations in one dimension: even spacing throughout the period, or clumping within a certain portion of the period. A mathematical analysis of the energy of the system allows us to determine the stability of these configurations. We also perform numerical simulations for evolution to equilibrium over time, and find results in agreement with our mathematical analysis. For certain values of the periodic width relative to the number of agents, our numerical simulations show that either clumping or even spacing can be a stable equilibrium, and which equilibrium is reached depends on starting conditions, indicating hysteresis.
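A minimal sketch of a first-order swarm of this kind on a periodic interval appears below. The Morse-type attraction/repulsion kernel, agent count, and period width are stand-ins rather than the thesis's specific social force, but the structure (velocity equals the sum of pairwise forces under periodic distances) matches the model class described above.

```python
import numpy as np

def simulate_swarm(n_agents=20, width=10.0, steps=2000, dt=0.01, seed=0):
    """First-order swarm on the periodic interval [0, width): each agent's
    velocity is the sum of pairwise social forces, integrated by forward
    Euler. The kernel is repulsive at short range, attractive at long range."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, width, n_agents)
    for _ in range(steps):
        d = x[:, None] - x[None, :]
        d -= width * np.round(d / width)            # wrap separations into (-w/2, w/2]
        r = np.abs(d)
        np.fill_diagonal(r, np.inf)                 # no self-interaction
        mag = np.exp(-r) - 0.5 * np.exp(-r / 2.0)   # Morse-type force magnitude
        force = np.sum(np.sign(d) * mag, axis=1)
        x = (x + dt * force) % width                # forward Euler step
    return np.sort(x)

# inspect the final configuration: roughly even spacing vs. a clump
print(simulate_swarm())
```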
17

Iterative Methods to Solve Systems of Nonlinear Algebraic Equations

Alam, Md Shafiful 01 April 2018 (has links)
Iterative methods have been an important area of study in numerical analysis since the inception of computational science. Their use ranges from solving algebraic equations to systems of differential equations and beyond. In this thesis, we discuss several iterative methods; however, our main focus is Newton's method. We present a detailed study of Newton's method, its order of convergence, and the asymptotic error constant when solving problems of various types, and we analyze several pitfalls that can affect convergence. We also pose necessary and sufficient conditions on the function f for higher orders of convergence. Different acceleration techniques are discussed, with analysis of the asymptotic behavior of the iterates. Analogies between single-variable and multivariable problems are detailed. We also explore some interesting phenomena in analyzing Newton's method for complex variables.
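As a concrete reference point for the multivariable case discussed above, here is a minimal sketch of Newton's method for a system of nonlinear algebraic equations with an analytic Jacobian; the 2x2 test system is hypothetical.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: each step solves the linear system
    J(x_k) dx = -F(x_k) and updates x_{k+1} = x_k + dx. Convergence is
    quadratic near a simple root where J is nonsingular."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(J(x), -fx)
        x = x + dx
    return x

# usage on a hypothetical 2x2 system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))
```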
18

Analysis of a Partial Differential Equation Model of Surface Electromigration

Cinar, Selahittin 01 May 2014 (has links)
A Partial Differential Equation (PDE) based model combining surface electromigration and wetting is developed for the analysis of the morphological instability of mono-crystalline metal films in a high-temperature environment typical of the operational conditions of microelectronic interconnects. The atomic mobility and surface energy of such films are anisotropic, and the model accounts for these material properties. The goal of the modeling is to describe and understand the time-evolution of the shape of the film surface. I will present the formulation of a nonlinear parabolic PDE problem for the height function h(x,t) of the film in a horizontal electric field, followed by the results of the linear stability analyses and computations of the fully nonlinear evolution equation.
19

On the Role of Ill-conditioning: Biharmonic Eigenvalue Problem and Multigrid Algorithms

Bray, Kasey 01 January 2019 (has links)
Very fine discretizations of differential operators often lead to large, sparse matrices A, where the condition number of A is large. Such ill-conditioning has well known effects on both solving linear systems and eigenvalue computations, and, in general, computing solutions with relative accuracy independent of the condition number is highly desirable. This dissertation is divided into two parts. In the first part, we discuss a method of preconditioning, developed by Ye, which allows solutions of Ax=b to be computed accurately. This, in turn, allows for accurate eigenvalue computations. We then use this method to develop discretizations that yield accurate computations of the smallest eigenvalue of the biharmonic operator across several domains. Numerical results from the various schemes are provided to demonstrate the performance of the methods. In the second part we address the role of the condition number of A in the context of multigrid algorithms. Under various assumptions, we use rigorous Fourier analysis on 2- and 3-grid iteration operators to analyze round-off errors in floating point arithmetic. For better understanding of the general results, we provide detailed bounds for a particular algorithm applied to the 1-dimensional Poisson equation. Numerical results are provided and compared with those obtained by the schemes discussed in the first part.
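For orientation, the following sketch runs a generic two-grid correction cycle (damped-Jacobi smoothing, full-weighting restriction, exact coarse solve, linear interpolation) on the 1-dimensional Poisson equation. It does not reproduce the dissertation's preconditioning scheme or its round-off analysis of the 2- and 3-grid iteration operators; it only shows the iteration operator being analyzed.

```python
import numpy as np

def poisson_matrix(n):
    """1-D Poisson operator -u'' on n interior points of (0,1), h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def weighted_jacobi(A, b, x, sweeps=3, omega=2.0 / 3.0):
    """Damped Jacobi smoother."""
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, n):
    """One two-grid cycle: pre-smooth, restrict the residual by full
    weighting, solve the coarse problem exactly, interpolate the
    correction linearly, then post-smooth. Assumes n = 2^k - 1."""
    nc = (n - 1) // 2
    x = weighted_jacobi(A, b, x)
    r = b - A @ x
    rc = np.array([0.25 * r[2*i] + 0.5 * r[2*i + 1] + 0.25 * r[2*i + 2]
                   for i in range(nc)])          # full-weighting restriction
    ec = np.linalg.solve(poisson_matrix(nc), rc)  # exact coarse-grid solve
    e = np.zeros(n)                               # linear interpolation to fine grid
    for i in range(nc):
        e[2*i + 1] += ec[i]
        e[2*i] += 0.5 * ec[i]
        e[2*i + 2] += 0.5 * ec[i]
    return weighted_jacobi(A, b, x + e)

# usage: a few cycles on a hypothetical right-hand side
n = 63
A, b, x = poisson_matrix(n), np.ones(n), np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x, n)
print(np.linalg.norm(b - A @ x))
```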
20

A DETECTION AND DATA ACQUISITION SYSTEM FOR PRECISION BETA DECAY SPECTROSCOPY

Jezghani, Aaron P. 01 January 2019 (has links)
Free neutron and nuclear beta decay spectroscopy serves as a robust laboratory for investigations of the Standard Model of Particle Physics. Observables such as decay product angular correlations and energy spectra overconstrain the Standard Model and serve as a sensitive probe for Beyond the Standard Model physics. Improved measurement of these quantities is necessary to complement the TeV-scale physics being conducted at the Large Hadron Collider. The UCNB, 45Ca, and Nab experiments aim to improve upon existing measurements of free neutron decay angular correlations and set new limits in the search for exotic couplings in beta decay. To achieve these experimental goals, a highly pixelated, thick silicon detector with a 100 nm entrance window has been developed for precision beta spectroscopy and the direct detection of 30 keV beta decay protons. The detector has been characterized for its performance in energy reconstruction and particle arrival time determination. A Monte Carlo simulation of signal formation in the silicon detector and propagation through the electronics chain has been written to develop optimal signal analysis algorithms for minimally biased energy and timing extraction. A tagged-electron timing test has been proposed and investigated as a means to assess the validity of these Monte Carlo efforts. A universal platform for data acquisition (DAQ) has been designed and implemented in National Instruments' PXIe-5171R digitizer/FPGA hardware. The DAQ retains a ring buffer of the most recent 400 ms of data in all 256 channels, so that a waveform trace can be returned from any combination of pixels and resolution for complete energy reconstruction. Low-threshold triggers on individual channels were implemented in the FPGA as a generic piecewise-polynomial filter for universal, real-time digital signal processing, which allows for arbitrary filter implementation on a pixel-by-pixel basis. The system is universal in the sense that it offers completely flexible, complex, and debuggable triggering at both the pixel and global level without recompiling the firmware. The culmination of this work is a system capable of a 10 keV trigger threshold, 3 keV resolution, and a maximum arrival-time systematic of 300 ps, even in the presence of large-amplitude noise components.
