  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Sensitivity Analysis and Parameter Estimation for the APEX Model on Runoff, Sediments and Phosphorus

Jiang, Yi 09 December 2016 (has links)
Sensitivity analysis is essential for hydrologic models: it helps gain insight into a model's behavior and assess its structure and conceptualization. Parameter estimation in distributed hydrologic models is difficult because of their high-dimensional parameter spaces. Sensitivity analysis identifies the influential and non-influential parameters in the modeling process and thereby benefits calibration. This study identified, applied, and evaluated two sensitivity analysis methods for the APEX model. Two screening methods, the Morris method and the LH-OAT method, were implemented at an experimental site in North Carolina for modeling runoff, sediment loss, and TP and DP losses. First, a run-number evaluation was conducted for the Morris method; the result suggested that 2760 runs were sufficient to obtain reliable sensitivity results for 45 input parameters. Sensitivity results for the five management scenarios at the study site indicated that the Morris and LH-OAT methods ranked the input parameters similarly, except for differences in the importance of PARM2, PARM8, PARM12, PARM15, PARM20, PARM49, PARM76, PARM81, PARM84, and PARM85. Across the five management scenarios, the most influential parameters, such as PARM23, PARM34, and PARM84, were consistent in most cases, and the sets of "sensitive" parameters overlapped well between scenarios. In addition, little variation was observed in the importance of the sensitive parameters across scenarios (e.g., PARM26). Optimization with the most influential parameters from the sensitivity analysis greatly improved APEX modeling performance in all scenarios as measured by the objective functions PI1, NSE, and GLUE.
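The elementary-effects screening at the heart of the Morris method can be sketched generically. The toy response function, parameter count, and step size below are illustrative stand-ins, not the APEX parameters or settings from the thesis:

```python
import numpy as np

def elementary_effects(model, k, delta=0.1, n_traj=50, seed=0):
    """Morris-style one-at-a-time screening on the unit hypercube:
    for each random base point, perturb one parameter at a time and
    record the scaled change in the model output."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, k))
    for t in range(n_traj):
        base = rng.uniform(0.0, 1.0 - delta, size=k)
        f0 = model(base)
        for i in range(k):
            x = base.copy()
            x[i] += delta
            effects[t, i] = (model(x) - f0) / delta
    # mu* (mean absolute effect) ranks overall influence;
    # sigma flags nonlinearity and parameter interactions.
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy response: parameter 0 dominates, parameter 2 is inert.
toy = lambda p: 5.0 * p[0] + 0.5 * p[1] ** 2
mu_star, sigma = elementary_effects(toy, k=3)
```

Ranking parameters by `mu_star` is what lets a calibration step restrict itself to the influential subset, as the abstract describes.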
2

Informing the use of Hyper-Parameter Optimization Through Meta-Learning

Sanders, Samantha Corinne 01 June 2017 (has links)
One of the challenges of data mining is finding hyper-parameters for a learning algorithm that will produce the best model for a given dataset. Hyper-parameter optimization automates this process, but it can still take significant time. It has been found that hyper-parameter optimization does not always result in induced models with significant improvement over default hyper-parameters, yet no systematic analysis of the role of hyper-parameter optimization in machine learning has been conducted. We propose the use of meta-learning to inform the decision to optimize hyper-parameters, based on whether default hyper-parameter performance can be surpassed in a given amount of time. Through a series of experiments, we will build a base of meta-knowledge from which to train predictive models that assist in this decision.
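A budgeted random search is one simple form of the hyper-parameter optimization being weighed against defaults here. The loss surface, search space, and default configuration below are made-up stand-ins for a real validation loss:

```python
import numpy as np

def random_search(loss, space, budget, default, rng):
    """Try `budget` random configurations; return the best found,
    falling back to the default configuration if nothing beats it."""
    best_cfg, best_loss = default, loss(default)
    for _ in range(budget):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        l = loss(cfg)
        if l < best_loss:
            best_cfg, best_loss = cfg, l
    return best_cfg, best_loss

# Toy validation loss with a clear optimum away from the default.
loss = lambda c: (c["lr"] - 0.1) ** 2 + (c["reg"] - 1.0) ** 2
space = {"lr": (0.0, 1.0), "reg": (0.0, 2.0)}
default = {"lr": 0.5, "reg": 0.5}
rng = np.random.default_rng(0)
cfg, tuned_loss = random_search(loss, space, budget=200, default=default, rng=rng)
```

The meta-learning question posed in the abstract amounts to predicting, before spending the `budget`, whether `tuned_loss` will beat `loss(default)` by a worthwhile margin.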
3

Bifurcation Analysis and Qualitative Optimization of Models in Molecular Cell Biology with Applications to the Circadian Clock

Conrad, Emery David 10 May 2006 (has links)
Circadian rhythms are the endogenous, roughly 24-hour rhythms that coordinate an organism's interaction with its cycling environment. The molecular mechanism underlying this physiological process is a cell-autonomous oscillator comprising a complex regulatory network of interacting DNA, RNA and proteins that is surprisingly conserved across many different species. It is not a trivial task to understand how the positive and negative feedback loops interact to generate an oscillator capable of a) maintaining a 24-hour rhythm in constant conditions; b) entraining to external light and temperature signals; c) responding to pulses of light in a rather particular, predictable manner; and d) compensating itself so that the period is relatively constant over a large range of temperatures, even for mutations that affect the basal period of oscillation. Mathematical modeling is a useful tool for dealing with such complexity, because it gives us an object that can be quickly probed and tested in lieu of the experiment or actual biological system. If we do a good job designing the model, it will help us to understand the biology better by predicting the outcome of future experiments. The difficulty lies in properly designing a model, a task that is made even more difficult by an acute lack of quantitative data. Thankfully, our qualitative understanding of a particular phenomenon, i.e. the observed physiology of the cell, can often be directly related to certain mathematical structures. Bifurcation analysis gives us a glimpse of these structures, and we can use these glimpses to build our models with greater confidence. In this dissertation, I will discuss the particular problem of the circadian clock and describe a number of new methods and tools related to bifurcation analysis. These tools can effectively be applied during the modeling process to build detailed models of biological regulatory networks with greater ease. / Ph. D.
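A one-parameter bifurcation scan of the kind bifurcation analysis formalizes can be illustrated by brute-force simulation of a classic two-variable oscillator. The FitzHugh-Nagumo equations below are a stand-in for the circadian clock models in the dissertation, and the scan simply detects where a stable steady state gives way to sustained oscillation:

```python
import numpy as np

def fhn_amplitude(I, dt=0.05, steps=8000):
    """Integrate the FitzHugh-Nagumo model (forward Euler) and return
    the peak-to-peak amplitude of v over the second half of the run,
    so transients are discarded."""
    v, w = -1.0, -0.5
    trace = []
    for n in range(steps):
        dv = v - v ** 3 / 3 - w + I
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        if n > steps // 2:
            trace.append(v)
    trace = np.array(trace)
    return trace.max() - trace.min()

# Crude one-parameter bifurcation scan: the amplitude jumps from near
# zero where the steady state is stable to a large value past the
# onset of oscillation (a Hopf bifurcation in this model).
currents = np.linspace(0.0, 1.0, 11)
amps = np.array([fhn_amplitude(I) for I in currents])
```

Dedicated continuation tools track the underlying steady states and limit cycles directly rather than by simulation, but the qualitative picture — a parameter value where oscillations switch on — is the same.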
4

Polypropylene Production Optimization in Fluidized Bed Catalytic Reactor (FBCR): Statistical Modeling and Pilot Scale Experimental Validation

Khan, M.J.H., Hussain, M.A., Mujtaba, Iqbal M. 13 March 2014 (has links)
Polypropylene is one type of plastic that is widely used in our everyday life. This study focuses on the identification and justification of the optimum process parameters for polypropylene production in a novel pilot-plant-based fluidized bed reactor. This first-of-its-kind statistical modeling with experimental validation of the process parameters of polypropylene production was conducted by applying the ANOVA (analysis of variance) method within Response Surface Methodology (RSM). Three important process variables, namely reaction temperature, system pressure and hydrogen percentage, were considered as the important input factors for polypropylene production in the analysis performed. To examine the effect of the process parameters and their interactions, the ANOVA method was utilized alongside a range of other statistical diagnostic tools, such as the correlation between actual and predicted values, residuals versus predicted response, outlier t plots, and 3D response surface and contour analysis plots. The statistical analysis showed that the proposed quadratic model fit the experimental results well. At optimum conditions, with a temperature of 75 °C, system pressure of 25 bar and hydrogen percentage of 2%, the highest polypropylene production obtained was 5.82% per pass. Hence it is concluded that the developed experimental design and proposed model can be successfully employed, with over a 95% confidence level, for optimum polypropylene production in a fluidized bed catalytic reactor (FBCR).
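The core of an RSM analysis — fitting a quadratic response surface by least squares and locating its stationary point — can be sketched as follows. The synthetic data and optimum below are illustrative, not the pilot-plant measurements from the paper:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of the two-factor RSM model
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2,
                         x1 ** 2, x2 ** 2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic response with a known optimum at (x1, x2) = (1.0, -0.5).
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(50, 2))
y = (5.0 - (X[:, 0] - 1.0) ** 2 - 2.0 * (X[:, 1] + 0.5) ** 2
     + rng.normal(0, 0.01, 50))
coef = fit_quadratic_surface(X, y)

# Stationary point of the fitted surface: solve grad(y) = 0.
b0, b1, b2, b11, b22, b12 = coef
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))
```

An ANOVA step would then partition the regression sum of squares over these terms to judge which factors and interactions are significant; here only the surface fit and its optimum are shown.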
5

Optimal Design of Sensor Parameters in PLC-Based Control System Using Mixed Integer Programming

OKUMA, Shigeru, SUZUKI, Tatsuya, MUTOU, Takashi, KONAKA, Eiji 01 April 2005 (has links)
No description available.
6

Stereolithography Cure Process Modeling

Tang, Yanyan 20 July 2005 (has links)
Although stereolithography (SL) is a remarkable improvement over conventional prototype production, it is being pushed aggressively for improvements in both speed and resolution. However, it is currently not clear how these two features can be improved simultaneously, or what the limits of such optimization are. To address this issue, a quantitative SL cure process model is developed that takes into account all the sub-processes involved in SL: exposure, photoinitiation, photopolymerization, and mass and heat transfer. To parameterize the model, the thermal and physical properties of a model compound system, ethoxylated (4) pentaerythritol tetraacrylate (E4PETeA) with 2,2-dimethoxy-2-phenylacetophenone (DMPA) as initiator, are determined. The free-radical photopolymerization kinetics is also characterized by differential photocalorimetry (DPC), and a comprehensive kinetic model is parameterized for the model material. The SL process model is then solved using the finite element method in the software package FEMLAB and validated by its capability to predict fabricated part dimensions. The SL cure process model, also referred to as the degree of cure (DOC) threshold model, simulates cure behavior during the SL fabrication process and provides insight into the part-building mechanisms. It predicts the cured part dimension within 25% error, while the prediction error of the exposure threshold model currently utilized in the SL industry is up to 50%. The DOC threshold model has been used to investigate the effects of material and process parameters on SL performance properties such as resolution, speed, maximum temperature rise in the resin bath, and maximum DOC of the green part. The effective factors are identified and parameter optimization is performed, which also provides guidelines for SL material development as well as process and laser improvement.
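For contrast, the exposure threshold model that the DOC model improves on reduces to the classical working-curve relation: exposure decays exponentially with depth, and resin is assumed cured wherever it exceeds a critical exposure. The resin constants below are made-up illustrative values, not the E4PETeA/DMPA system characterized in the thesis:

```python
import numpy as np

def cure_depth(E_max, E_c, D_p):
    """Working-curve (exposure-threshold) estimate of cured depth:
    exposure follows E(z) = E_max * exp(-z / D_p), so the depth at
    which E(z) = E_c is  C_d = D_p * ln(E_max / E_c)."""
    return D_p * np.log(E_max / E_c)

# Illustrative (made-up) resin constants:
# penetration depth D_p = 0.15 mm, critical exposure E_c = 10 mJ/cm^2,
# peak surface exposure E_max = 80 mJ/cm^2.
depth = cure_depth(E_max=80.0, E_c=10.0, D_p=0.15)   # depth in mm
```

The DOC threshold model replaces this single-threshold shortcut with the coupled kinetics and transport sub-processes listed above, which is where its roughly twofold accuracy gain comes from.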
7

Evaluating and developing parameter optimization and uncertainty analysis methods for a computationally intensive distributed hydrological model

Zhang, Xuesong 15 May 2009 (has links)
This study focuses on developing and evaluating efficient and effective parameter calibration and uncertainty analysis methods for hydrologic modeling. Five single-objective and six multi-objective optimization algorithms were tested for automatic parameter calibration of the SWAT model. A new multi-objective optimization method (Multi-objective Particle Swarm Optimization and Genetic Algorithms) that combines the strengths of different optimization algorithms was proposed. Based on an evaluation of the algorithms' performance on three test cases, the new method consistently performed better than, or close to, the other algorithms. To reduce the effort of running the computationally intensive SWAT model, a support vector machine (SVM) was used as a surrogate to approximate the behavior of SWAT. It was shown that combining the SVM with Particle Swarm Optimization can reduce the effort of SWAT parameter calibration. Further, the SVM was used as a surrogate to implement parameter uncertainty analysis of SWAT; the results show that the SVM helped save more than 50% of the runs of the computationally intensive SWAT model. The effect of model structure on the uncertainty estimation of streamflow simulation was examined by applying SWAT and neural network models. The 95% uncertainty intervals estimated by SWAT included only 20% of the observed data, while those of the neural networks included more than 70%. This indicates that model structure is an important source of uncertainty in hydrologic modeling and needs to be evaluated carefully. The effect of different treatments of model-structure uncertainty on hydrologic modeling was explored further by applying four types of Bayesian neural networks; by considering the uncertainty associated with model structure, the Bayesian neural networks can provide a more reasonable quantification of the uncertainty of streamflow simulation.
This study stresses the need for better understanding and quantification of the different uncertainty sources for effective estimation of the uncertainty of hydrologic simulations.
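A minimal global-best particle swarm optimizer of the kind used for automatic calibration can be sketched as follows. The quadratic misfit function, bounds, and target parameters are stand-ins for an actual SWAT calibration objective:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO: each particle tracks its personal best
    and is pulled toward both it and the swarm-wide best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and pull coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)     # keep particles inside bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in objective for a calibration misfit (true optimum at [1, 2, 3]).
target = np.array([1.0, 2.0, 3.0])
misfit = lambda p: float(((p - target) ** 2).sum())
best, best_f = pso_minimize(misfit, (np.full(3, -5.0), np.full(3, 5.0)))
```

In the surrogate scheme described above, the expensive `misfit` call (a full SWAT run) is replaced by an SVM trained on a modest number of real runs, which is where the reported >50% saving comes from.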
8

A Study of Match Cost Functions and Colour Use In Global Stereopsis

Neilson, Daniel Unknown Date
No description available.
9

A Study of Match Cost Functions and Colour Use In Global Stereopsis

Neilson, Daniel 11 1900 (has links)
Stereopsis is the process of inferring the distance to objects from two or more images. It has applications in areas such as novel-view rendering, motion capture, autonomous navigation, and topographical mapping from remote-sensing data. Although it sounds simple, given the effortlessness with which we perform the task with our own eyes, a number of factors make it quite challenging once one begins delving into computational methods of solving it: for example, occlusions that block part of the scene from being seen in one of the images, and changes in the appearance of objects between the two images due to sensor noise, view-dependent effects, and/or differences in the lighting and camera conditions between the two images. Global stereopsis algorithms aim to solve this problem by making assumptions about the smoothness of the depth of surfaces in the scene and formulating stereopsis as an optimization problem. As part of their formulation, these algorithms include a function that measures the similarity between pixels in different images to detect possible correspondences. Which of these match cost functions work better, when, and why is not well understood. Furthermore, in areas of computer vision such as segmentation, face detection, edge detection, texture analysis and classification, and optical flow, it is not uncommon to use colour spaces other than the well-known RGB space to improve the accuracy of algorithms. However, the use of colour spaces other than RGB is quite rare in stereopsis research. In this dissertation we present results from two first-of-their-kind large-scale studies on global stereopsis algorithms. In the first, we compare the relative performance of a structured set of match cost functions in five different global stereopsis frameworks, in such a way that we are able to infer some general rules to guide the choice of match cost function in these algorithms.
In the second we investigate how much accuracy can be gained by simply changing the colour representation used in the input to global stereopsis algorithms.
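Three common patch-based match cost functions of the kind compared in such studies can be written in a few lines each. The patches below are synthetic, with a gain-and-offset change standing in for a lighting difference between the two views:

```python
import numpy as np

def sad(p, q):
    """Sum of absolute differences: cheap, but sensitive to gain/offset."""
    return np.abs(p - q).sum()

def ssd(p, q):
    """Sum of squared differences: penalizes outliers more heavily."""
    return ((p - q) ** 2).sum()

def ncc(p, q):
    """Normalized cross-correlation: invariant to affine intensity
    changes, so it stays high when lighting differs between images."""
    p, q = p - p.mean(), q - q.mean()
    return (p * q).sum() / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)

rng = np.random.default_rng(0)
patch = rng.random((7, 7))
brighter = 1.5 * patch + 0.2   # same surface under different lighting
```

SAD and SSD both report a large mismatch for `brighter` even though it depicts the same surface, while NCC still scores it as a near-perfect match; this kind of behavioral difference is exactly what a structured comparison across frameworks has to untangle.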
10

Parameter optimization in simplified models of cardiac myocytes

Mathavan, Neashan, Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW January 2009 (has links)
Atrial fibrillation (AF) is a complex, multifaceted arrhythmia. The pathogenesis of AF is associated with multiple aetiologies, and the mechanisms by which it is sustained and perpetuated are similarly diverse. In particular, regional heterogeneity in the electrophysiological properties of normal and pathological tissue plays a critical role in the occurrence of AF. Understanding AF in the context of electrophysiological heterogeneity requires cell-specific ionic models of electrical activity, which can then be incorporated into models on larger temporal and spatial scales. Biophysically based models have typically dominated the study of cellular excitability, providing detailed and precise descriptions in the form of complex mathematical formulations. However, such models have limited applicability in multidimensional simulations, as their computational expense is prohibitive. Simplified mathematical models of cardiac cell electrical activity are an alternative approach to these traditional biophysically detailed models: they capture a cell's excitation characteristics at minimal computational cost, so that simulations of arrhythmogenesis in atrial tissue become conceivable. In this thesis, a simplified, generic mathematical model is proposed that characterizes and reproduces the action potential waveforms of individual cardiac myocytes. It incorporates three time-dependent ionic currents and an additional time-independent leakage current. The formulation of the three time-dependent ionic currents is based on 4-state Markov schemes with state transition rates expressed as nonlinear sigmoidal functions of the membrane potential. Parameters of the generic model were optimized to fit the action potential waveforms of the Beeler-Reuter model and experimental recordings from atrial and sinoatrial cells of rabbits; a nonlinear least-squares optimization routine was employed for the parameter fits.
The model was successfully fitted to the Beeler-Reuter waveform (RMS error: 1.4999 mV) and action potentials recorded from atrial tissue (RMS error: 1.3398 mV) and cells of the peripheral (RMS error: 2.4821 mV) and central (RMS error: 2.3126 mV) sinoatrial node. Thus, the model presented here is a mathematical framework by which a wide variety of cell-specific AP morphologies can be reproduced. Such a model offers the potential for insights into possible mechanisms that contribute to heterogeneity and/or arrhythmia.
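A small Gauss-Newton routine illustrates the kind of nonlinear least-squares fitting used for such parameter fits. The single-exponential "repolarization" trace and its parameters below are a deliberately simplified stand-in for a full action potential waveform and the generic model's Markov-scheme parameters:

```python
import numpy as np

def gauss_newton_fit(t, v, theta, iters=50):
    """Gauss-Newton least squares for v(t) = a*exp(-t/tau) + c.
    theta = [a, tau, c]; each step solves the linearized problem
    J * step = -r for the residual r = model - data."""
    for _ in range(iters):
        a, tau, c = theta
        e = np.exp(-t / tau)
        r = a * e + c - v
        # Jacobian of the residual w.r.t. [a, tau, c].
        J = np.column_stack([e, a * e * t / tau ** 2, np.ones_like(t)])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + step
    return theta

# Synthetic "repolarization" trace with known parameters:
# amplitude a (mV), time constant tau (ms), resting potential c (mV).
t = np.linspace(0.0, 100.0, 200)
true = np.array([120.0, 25.0, -85.0])
v = true[0] * np.exp(-t / true[1]) + true[2]

fit = gauss_newton_fit(t, v, theta=np.array([100.0, 20.0, -80.0]))
rms = np.sqrt(np.mean((fit[0] * np.exp(-t / fit[1]) + fit[2] - v) ** 2))
```

A real fit to a full AP waveform involves many more parameters and sigmoidal rate functions, and typically uses a robust library optimizer, but the residual-Jacobian-step loop is the same idea, and the RMS error reported here plays the role of the RMS errors quoted in the abstract.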
