1

Comparison of Two Methods for Developing Aggregate Population-Based Models

Oyero, Oyebola E 01 December 2016
Aggregate models incorporate the variation between individual parameters of individual-based models to construct a population-based model. This thesis focuses on the comparison of two different methods for creating these population-based models. The first method, the individual parameter distribution (IPD) technique, focuses on the similarities and variation of parameters in an individual-based model as calculated using individual data sets [4]. The second method we consider is the nonlinear mixed effects (NLME) method, which is primarily used in modeling repeated-measurement data. In the NLME approach, both the fixed effects and random effects of the parameter values are estimated in the model by assuming a normal distribution for the parameter values across individuals [2]. Both methods were implemented on a one-compartment pharmacokinetic concentration model. Using the variation in parameters estimated by the two approaches, a population model was generated and then compared to the dynamics seen in the individual data sets. We compare three features of the concentration data to the simulated population models. Both methods captured the values of all three features; the biggest difference observed is that the distribution for the population model developed using NLME has a longer tail than the dynamics in the original data.
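The population-simulation step described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis code: the dose, the lognormal spread of the parameters, and the sampling grid are all assumed values, and lognormal inter-individual variability stands in for whichever distributions the thesis actually estimated.

```python
import numpy as np

def one_compartment(t, dose, V, k):
    """Concentration for a one-compartment model with first-order elimination."""
    return (dose / V) * np.exp(-k * t)

rng = np.random.default_rng(0)
n_individuals = 500
t = np.linspace(0.0, 24.0, 49)    # hours (assumed sampling grid)
dose = 100.0                      # mg (assumed)

# Population with inter-individual variation: parameters vary across
# individuals around typical (fixed-effect) values.
V = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n_individuals)  # volume, L
k = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=n_individuals)   # rate, 1/h

# Simulate every individual and summarize the population distribution.
curves = one_compartment(t[None, :], dose, V[:, None], k[:, None])
median = np.median(curves, axis=0)
p5, p95 = np.percentile(curves, [5, 95], axis=0)

print(f"median peak concentration: {median[0]:.2f} mg/L")
print(f"90% band at t = 12 h: [{p5[24]:.3f}, {p95[24]:.3f}] mg/L")
```

Comparing such simulated percentile bands against the individual data sets is one way to check which parameter-variation method better reproduces the observed dynamics, including the tail behavior mentioned above.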
2

The Material Distribution Method : Analysis and Acoustics applications

Kasolis, Fotios January 2014
For the purpose of numerically simulating continuum mechanical structures, different types of material may be represented by the extreme values {ε, 1}, where 0 < ε ≪ 1, of a varying coefficient α in the governing equations. The parameter ε is not allowed to vanish in order for the equations to be solvable, which means that the exact conditions are approximated. For example, for linear elasticity problems, presence of material is represented by the value α = 1, while α = ε provides an approximation of void, meaning that material-free regions are approximated with a weak material. For acoustics applications, the value α = 1 corresponds to air and α = ε to an approximation of sound-hard material using a dense fluid. Here we analyze the convergence properties of such material approximations as ε → 0, and we employ this type of approximation to perform design optimization. In Paper I, we carry out boundary shape optimization of an acoustic horn. We suggest a shape parameterization based on a local, discrete curvature combined with a fixed mesh that does not conform to the generated shapes.
The values of the coefficient α, which enters the governing equation, are obtained by projecting the generated shapes onto the underlying computational mesh. The optimized horns are smooth and exhibit good transmission properties. Due to the choice of parameterization, the smoothness of the designs is achieved without imposing severe restrictions on the design variables. In Paper II, we analyze the convergence properties of a linear elasticity problem in which void is approximated by a weak material. We show that the error introduced by the weak material approximation, after a finite element discretization, is bounded by terms that scale as ε and ε^{1/2} h^s, where h is the mesh size and s depends on the order of the finite element basis functions. In addition, we show that the condition number of the system matrix scales inversely proportionally to ε, and we construct a left preconditioner that yields a system matrix with a condition number independent of ε. In Paper III, we observe that the standard sound-hard material approximation with α = ε gives rise to ill-conditioned system matrices at certain wavenumbers due to resonances within the approximated sound-hard material. To cure this defect, we propose a stabilization scheme that makes the condition number of the system matrix independent of the wavenumber. In addition, we demonstrate that the stabilized formulation performs well in the context of design optimization of an acoustic waveguide transmission device.
In Paper IV, we analyze the convergence properties of a wave propagation problem in which sound-hard material is approximated by a dense fluid. To avoid the occurrence of internal resonances, we generalize the stabilization scheme presented in Paper III. We show that the error between the solution obtained using the stabilized sound-hard material approximation and the solution to the problem with exactly modeled sound-hard material is bounded proportionally to ε.
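The ill-conditioning analyzed in Paper II can be illustrated with a toy sketch, not taken from the thesis: a 1D Poisson-type problem on a uniform mesh where half the domain carries the coefficient α = 1 and the other half the weak value α = ε. The mesh size and the ε values below are assumed for illustration; the assembly is the standard linear-element stiffness matrix.

```python
import numpy as np

def stiffness_matrix(alpha):
    """1D linear-FE stiffness matrix for -(alpha u')' = f with homogeneous
    Dirichlet ends; alpha holds one constant coefficient per element.
    The uniform element size is dropped since it only rescales the matrix."""
    n = len(alpha)                    # number of elements
    K = np.zeros((n - 1, n - 1))      # unknowns at interior nodes only
    for e in range(n):
        # element matrix alpha_e * [[1, -1], [-1, 1]] scattered to the
        # interior-node indices e-1 and e (boundary nodes are eliminated)
        for i in (e - 1, e):
            for j in (e - 1, e):
                if 0 <= i < n - 1 and 0 <= j < n - 1:
                    K[i, j] += alpha[e] * (1.0 if i == j else -1.0)
    return K

n_elem = 40
for eps in (1e-1, 1e-3, 1e-5):
    alpha = np.ones(n_elem)
    alpha[n_elem // 2:] = eps         # weak material on half the domain
    cond = np.linalg.cond(stiffness_matrix(alpha))
    print(f"eps = {eps:.0e}  cond(K) = {cond:.2e}")
```

The printed condition numbers grow roughly like 1/ε as ε shrinks, consistent with the inverse-proportional scaling stated in the abstract and with the motivation for the preconditioner constructed in Paper II.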
3

Vieno kintamojo funkcijų minimizavimo algoritmų analizė / Analysis of one-variable function minimization algorithms

Bernotas, Simonas 22 June 2005
This paper investigates three methods for minimizing functions of one variable, compares their efficiency, and generalizes the results of this research. It first reviews the historical aspects of optimization theory, defines the concept of optimization, and introduces the task formulation, presenting the importance of optimization and the role of the objective function in the optimization process, followed by a classification of optimization tasks and of optimizing various systems. The paper analyzes three optimization methods: interval bisection ("half distribution"), golden-section search ("golden cut"), and Powell's method. A program was created for running and comparing the selected optimization methods. The investigation determined that at low precision (0.1; 0.01), the computed minimum point and the function value at that point vary considerably, while at higher precision the variation becomes very small: at precision values of about 0.0001 to 0.000001 the results differ only in the 6th to 9th decimal place. Powell's method required the fewest calculation steps, and the bisection method the most. In about 80% of the calculations the shortest search interval was obtained with Powell's method, and in 20% with the golden-section method... [to full text]
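As a sketch of one of the methods compared above, here is a minimal golden-section ("golden cut") search for a unimodal function of one variable. This is an illustration, not the thesis's program; the test function, bracket, and tolerance are assumed.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ≈ 0.618
    c = b - inv_phi * (b - a)                # interior probe points
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

# Example: f(x) = (x - 2)^2 + 1 attains its minimum at x = 2.
x_min = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(f"x_min ≈ {x_min:.6f}")
```

Each iteration shrinks the bracket by the constant factor 1/φ ≈ 0.618 and reuses one interior point, so only one new function evaluation is needed per step, which is the basis for the step-count comparisons reported above.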
4

The K-distribution method for calculating thermal infrared radiative transfer in the atmosphere : A two-stage numerical procedure based on Gauss-Legendre quadrature

Nerman, Karl January 2022
The K-distribution method is a fast approximate method for calculating thermal infrared radiative transfer in the atmosphere, as opposed to the traditional line-by-line method, which is precise but very time-consuming. Here we consider the atmosphere to consist of homogeneous, plane-parallel layers in local thermal equilibrium. This lets us use efficient upward recursion for calculating the thermal infrared radiative transfer and ultimately the outgoing irradiance at the top of the atmosphere. Our specific implementation of the K-distribution method revolves around changing the integration space from the wavenumber domain to the g domain by employing Gauss-Legendre quadrature in two steps. The method is implemented in MATLAB and is shown to be several thousand times faster than the traditional line-by-line method, with a relative error of only 3% for the outgoing irradiance at the top of the atmosphere.
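The change of integration variable described above can be illustrated with a toy sketch, not the thesis's MATLAB implementation: for a synthetic absorption spectrum k(ν), the band-averaged transmission of exp(-k(ν)u) is recomputed in the g domain, where g is the cumulative distribution of absorption-coefficient values, using a single Gauss-Legendre rule with a handful of nodes (the thesis uses a two-stage procedure). The line spectrum, absorber amount, and node count are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
nu = np.linspace(0.0, 1.0, 20000)          # normalized wavenumber grid
# Synthetic spectrum: weak continuum plus narrow Lorentzian lines (assumed).
k = 0.1 * np.ones_like(nu)
for center in rng.uniform(0.0, 1.0, 10):
    k += 1e-4 / ((nu - center) ** 2 + 1e-6)

u = 1.0                                     # absorber amount (assumed)

# "Line-by-line" reference: average transmission over the wavenumber grid.
T_lbl = np.mean(np.exp(-k * u))

# K-distribution: sorting k gives the smooth function k(g); integrate over
# g in [0, 1] with a Gauss-Legendre rule mapped from [-1, 1] to [0, 1].
k_of_g = np.sort(k)
g_grid = (np.arange(k.size) + 0.5) / k.size
nodes, weights = np.polynomial.legendre.leggauss(16)
g_nodes = 0.5 * (nodes + 1.0)
w = 0.5 * weights
T_kdist = np.sum(w * np.exp(-np.interp(g_nodes, g_grid, k_of_g) * u))

print(f"line-by-line: {T_lbl:.5f}   k-distribution (16 nodes): {T_kdist:.5f}")
```

Because exp(-ku) depends on k monotonically, reordering the jagged spectrum into the smooth function k(g) loses nothing for band-averaged transmission, which is why a few quadrature nodes in g can replace tens of thousands of wavenumber points.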
5

Performance Analysis of Detection System Design Algorithms

Nyberg, Karl-Johan 11 April 2003
Detection systems are widely used in industry. Designers, operators, and users of these systems need to choose an appropriate design based on the intended usage and the operating environment. The purpose of this research is to analyze the effect of various system design variables (controllable) and system parameters (uncontrollable) on the performance of detection systems. To optimize system performance one must manage the tradeoff between two errors that can occur: a false alarm occurs if the detection system falsely indicates a target is present, and a false clear occurs if the detection system falsely fails to indicate a target is present. Given a particular detection system and a pre-specified false clear (or false alarm) rate, there is a minimal false alarm (or false clear) rate that can be achieved. Earlier research has developed methods that address this false alarm/false clear tradeoff problem (FAFCT) by formulating a Neyman-Pearson hypothesis testing problem, which can be solved as a knapsack problem. The objective of this research is to develop guidelines that can help in designing detection systems. For example, which system design variables must be implemented to achieve a certain false clear standard for a parallel 2-sensor detection system for Salmonella detection? To meet this objective, an experimental design is constructed and an analysis of variance is performed. Computational results are obtained using the FAFCT methodology and are presented and analyzed using ROC (Receiver Operating Characteristic) curves and an analysis of variance. The research shows that sample size (i.e., the size of the test data set used to estimate the distribution of sensor responses) has very little effect on the FAFCT compared to other factors. The analysis clearly shows that correlation has the most influence on the FAFCT.
Negatively correlated sensor responses outperform uncorrelated and positively correlated sensor responses by large margins, especially for strict FC-standards (an FC-standard is defined as the maximum allowed false clear rate). The FC-standard is the second most influential design variable, followed by grid size. Suggestions for future research are also included. / Master of Science
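The effect of sensor-response correlation can be sketched with a small Monte-Carlo example, which is not the thesis's Neyman-Pearson/knapsack formulation: two sensors with bivariate-normal responses are fused by an OR rule (a parallel system alarms when either sensor exceeds its threshold), and the false-alarm/false-clear tradeoff is traced for negatively, un-, and positively correlated responses. All distribution parameters below are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def or_rule_rates(rho, thresholds):
    """False-alarm and false-clear rates of a parallel (OR) 2-sensor system."""
    cov = [[1.0, rho], [rho, 1.0]]
    noise  = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # target absent
    signal = rng.multivariate_normal([1.5, 1.5], cov, size=n)  # target present
    fa, fc = [], []
    for t in thresholds:
        fa.append(np.any(noise > t, axis=1).mean())            # false alarm
        fc.append(1.0 - np.any(signal > t, axis=1).mean())     # false clear
    return np.array(fa), np.array(fc)

thresholds = np.linspace(-1.0, 4.0, 26)
for rho in (-0.8, 0.0, 0.8):
    fa, fc = or_rule_rates(rho, thresholds)
    # Interpolate the false-clear rate at a 5% false-alarm standard
    # (fa decreases with the threshold, so reverse it for np.interp).
    fc_at_5 = np.interp(0.05, fa[::-1], fc[::-1])
    print(f"rho = {rho:+.1f}  FC at 5% FA: {fc_at_5:.4f}")
```

In this sketch, negative correlation gives the lowest false-clear rate at the fixed false-alarm standard, mirroring the finding above: under the OR rule, a target is missed only when both sensors read low at once, and negative correlation makes that joint event less likely.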
