181

COMPRESSIVE PARAMETER ESTIMATION VIA APPROXIMATE MESSAGE PASSING

Hamzehei, Shermin 08 April 2020 (has links)
The literature on compressive parameter estimation has been mostly focused on the use of sparsity dictionaries that encode a discretized sampling of the parameter space; these dictionaries, however, suffer from coherence issues that must be controlled for successful estimation. To bypass such issues with discretization, we propose the use of statistical parameter estimation methods within the Approximate Message Passing (AMP) algorithm for signal recovery. Our method leverages the recently proposed use of custom denoisers in place of the usual thresholding steps (which act as denoisers for sparse signals) in AMP. We introduce the design of analog denoisers that are based on statistical parameter estimation algorithms, and we focus on two commonly used examples: frequency estimation and bearing estimation, coupled with the Root MUSIC estimation algorithm. We first analyze the performance of the proposed analog denoiser for signal recovery, and then link the performance in signal estimation to that of parameter estimation. Numerical experiments show significant improvements in estimation performance versus previously proposed approaches for compressive parameter estimation.
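To make the denoiser-in-AMP idea concrete, here is a minimal sketch of a denoiser-based AMP iteration in Python, in which the usual soft-thresholding step can be swapped for a custom (e.g. Root MUSIC based) denoiser. The Monte Carlo divergence estimate used for the Onsager correction, the noise-level estimate, and all constants are illustrative assumptions, not the exact formulation of the thesis.

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding denoiser (the standard sparse-signal case)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(y, A, denoiser=soft_threshold, n_iter=30, eps=1e-3, rng=None):
    """Generic AMP sketch: `denoiser` may be replaced by a custom analog
    denoiser.  The Onsager term uses a Monte Carlo estimate of the
    denoiser divergence, which keeps the denoiser a black box."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)   # effective noise level estimate
        r = x + A.T @ z                          # pseudo-data seen by the denoiser
        x_new = denoiser(r, sigma)
        # Monte Carlo divergence estimate for the Onsager correction
        probe = rng.standard_normal(n)
        div = probe @ (denoiser(r + eps * probe, sigma) - x_new) / eps
        z = y - A @ x_new + (div / m) * z
        x = x_new
    return x
```

A parametric denoiser would replace `soft_threshold` with a routine that estimates the frequencies or bearings from `r` (e.g. via Root MUSIC) and returns the corresponding synthesized signal.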
182

Estimation of gene network parameters from imaging cytometry data

Lux, Matthew W. 23 May 2013 (has links)
Synthetic biology endeavors to forward engineer genetic circuits with novel function. A major inspiration for the field has been the enormous success in the engineering of digital electronic circuits over the past half century. This dissertation approaches synthetic biology from the perspective of the engineering design cycle, a concept ubiquitous across many engineering disciplines. First, an analysis of the state of the engineering design cycle in synthetic biology is presented, pointing out the most limiting challenges currently facing the field. Second, a principle commonly used in electronics to weigh the tradeoffs between hardware and software implementations of a function, called co-design, is applied to synthetic biology. Designs to implement a specific logical function in three distinct domains are proposed and their pros and cons weighed. Third, automatic transitioning between an abstract design, its physical implementation, and accurate models of the corresponding system are critical for success in synthetic biology. We present a framework for accomplishing this task and demonstrate how it can be used to explore a design space. A major limitation of the aforementioned approach is that adequate parameter values for the performance of genetic components do not yet exist. Thus far, it has not been possible to uniquely attribute the function of a device to the function of the individual components in a way that enables accurate prediction of the function of new devices assembled from the same components. This lack presents a major challenge to rapid progression through the design cycle. We address this challenge by first collecting high time-resolution fluorescence trajectories of individual cells expressing a fluorescent protein, as well as snapshots of the number of corresponding mRNA molecules per cell. We then leverage the information embedded in the cell-cell variability of the population to extract parameter values for a stochastic model of gene expression more complex than typically used. Such analysis opens the door for models of genetic components that can more reliably predict the function of new combinations of these basic components. / Ph. D.
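As a rough illustration of the kind of stochastic gene-expression model referred to above, the following Gillespie simulation of a two-stage (mRNA/protein) system is a minimal sketch; the reaction set and rate constants are generic placeholders, not the more complex model fitted to the cytometry data in the dissertation.

```python
import numpy as np

def gillespie_gene_expression(k_m=2.0, g_m=0.2, k_p=5.0, g_p=0.05,
                              t_end=200.0, rng=None):
    """Gillespie SSA for a simple two-stage gene-expression model:
    transcription, mRNA degradation, translation, protein degradation.
    Rate constants are illustrative placeholders."""
    rng = np.random.default_rng(rng)
    t, m, p = 0.0, 0, 0
    times, mrna, prot = [0.0], [0], [0]
    while t < t_end:
        rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        reaction = rng.choice(4, p=rates / total)
        if reaction == 0:   m += 1     # transcription
        elif reaction == 1: m -= 1     # mRNA degradation
        elif reaction == 2: p += 1     # translation
        else:               p -= 1     # protein degradation
        times.append(t); mrna.append(m); prot.append(p)
    return np.array(times), np.array(mrna), np.array(prot)
```

Simulating many such cells yields population variability that, in the spirit of the dissertation, can be compared against measured cell-to-cell variability to constrain parameter values.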
183

Cancer Invasion in Time and Space

January 2020 (has links)
abstract: Cancer is a disease involving abnormal growth of cells. Its growth dynamics are perplexing. Mathematical modeling is a way to shed light on this process and its medical treatments. This dissertation studies cancer invasion in time and space using a mathematical approach. Chapter 1 presents a detailed review of literature on cancer modeling. Chapter 2 focuses solely on time, where the escape of a generic cancer out of immune control is described by stochastic delayed differential equations (SDDEs). Without time delay and noise, this system demonstrates bistability. The effects of response time of the immune system and stochasticity in the tumor proliferation rate are studied by including delay and noise in the model. Stability, persistence and extinction of the tumor are analyzed. The result shows that both time delay and noise can induce the transition from the low tumor burden equilibrium to the high tumor burden equilibrium. The aforementioned work has been published (Han et al., 2019b). In Chapter 3, Glioblastoma multiforme (GBM) is studied using a partial differential equation (PDE) model. GBM is an aggressive brain cancer with a grim prognosis. A mathematical model of GBM growth with explicit motility, birth, and death processes is proposed. A novel method is developed to approximate key characteristics of the wave profile, which can be compared with MRI data. Several test cases of MRI data of GBM patients are used to yield personalized parameterizations of the model. The aforementioned work has been published (Han et al., 2019a). Chapter 4 presents an innovative way of forecasting spatial cancer invasion. Most mathematical models, including the ones described in previous chapters, are formulated based on strong assumptions, which are hard, if not impossible, to verify due to the complexity of biological processes and the lack of quality data. Instead, a nonparametric forecasting method using Gaussian processes is proposed. By exploiting the local nature of the spatio-temporal process, sparse (in terms of time) data is sufficient for forecasting. Desirable properties of Gaussian processes facilitate selection of the size of the local neighborhood and computationally efficient propagation of uncertainty. The method is tested on synthetic data and demonstrates promising results. / Dissertation/Thesis / Doctoral Dissertation Applied Mathematics 2020
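The nonparametric forecasting idea of Chapter 4 can be illustrated with a minimal Gaussian-process regression sketch; the squared-exponential kernel, one-dimensional inputs, and fixed hyperparameters are simplifying assumptions here, whereas the dissertation applies the GP locally in space and time.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_forecast(x_train, y_train, x_test, noise=1e-2, length=1.0, var=1.0):
    """Minimal GP regression: posterior mean and standard deviation at x_test,
    illustrating how uncertainty is propagated along with the forecast."""
    K = rbf_kernel(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, length, var)
    Kss = rbf_kernel(x_test, x_test, length, var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = Kss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))
```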
184

System Identification of Postural Tremor in Wrist Flexion-Extension and Radial-Ulnar Deviation

Ward, Sydney Bryanna 25 November 2021 (has links)
Generic simulations of tremor propagation through the upper limb have been achieved using a previously developed postural tremor model, but this model had not yet been compared with experimental data or utilized for subject-specific studies. This work addressed these two issues, which are important for optimizing peripheral tremor suppression techniques. For tractability, we focused on a subsystem of the upper limb: the isolated wrist, including the four prime wrist muscles (extensor carpi ulnaris, flexor carpi ulnaris, extensor carpi radialis, and flexor carpi radialis) and the two degrees of freedom of the wrist (flexion-extension and radial-ulnar deviation). Muscle excitation and joint displacement signals were collected while subjects with Essential Tremor resisted gravity. System identification was implemented for three subjects who experienced significant tremor using two approaches: 1. Generic linear time-invariant (LTI) models, including autoregressive-exogenous (ARX) and state-space forms, were identified from the experimental data, and characteristics including model order and modal parameters were compared with the previously developed postural tremor model; 2. Subject-specific parameters for the previously developed postural tremor model were directly estimated from experimental data using nonlinear least-squares optimization combined with regularization. The identified LTI models fit the experimental data well, with coefficients of determination of 0.74 ± 0.18 and 0.83 ± 0.13 for ARX and state-space forms, respectively. The optimal model orders identified from the experimental data (4.8 ± 1.9 and 6.4 ± 1.9) were slightly lower than the orders of the ARX and state-space forms of the previously developed model (6 and 8). For each subject, at least one pair of identified complex poles aligned with the complex poles of the previously developed model, whereas the identified real poles were assumed to represent drift in the data rather than characteristics of the system. Subject-specific parameter estimates reduced the sum of squared-error (SSE) between the measured and predicted joint displacement signals to be between 10% and 50% of the SSE using generic literature parameters. The predicted joint displacements maintained high coherence at the tremor frequency for flexion-extension (0.90 ± 0.10), which experienced the most tremor. We successfully applied multiple system identification techniques to identify tremor propagation models using only tremorogenic muscle activity as the input. These techniques identified model order, poles, and subject-specific model parameters, and indicate that tremor propagation at the wrist is well approximated by an LTI model.
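For readers unfamiliar with ARX identification, the following least-squares sketch shows how such a model could be fit from an input record (muscle excitation) and an output record (joint displacement); the model orders and the plain least-squares solver are assumptions for illustration, not the exact identification pipeline of the thesis.

```python
import numpy as np

def fit_arx(u, y, na=4, nb=4):
    """Least-squares fit of an ARX(na, nb) model
        y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]
    with excitation u as input and joint displacement y as output.
    Model orders na and nb are illustrative; the thesis selects them from data."""
    u, y = np.asarray(u), np.asarray(y)
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        # regressor: past outputs (negated) followed by past inputs, most recent first
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    a, b = theta[:na], theta[na:]
    return a, b
```

The poles of the identified model follow from the roots of the polynomial 1 + a1*z^-1 + ... + a_na*z^-na, which is the quantity compared against the poles of the previously developed tremor model.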
185

Parameter Estimation of Microwave Filters

Sun, Shuo 12 1900 (has links)
The focus of this thesis is on developing theories and techniques to extract lossy microwave filter parameters from data. In the literature, Cauchy methods have been used to extract filters' characteristic polynomials from measured scattering parameters. These methods are described and some examples are constructed to test their performance. The results suggest that the Cauchy method does not work well when the Q factors representing the loss of the filters are uneven. Based on some prototype filters and the relationship between Q factors and loss, we conduct preliminary studies on alternative representations of the characteristic polynomials. The parameters in these new models are extracted using the Levenberg–Marquardt algorithm to accurately estimate characteristic polynomials and the loss information.
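A hedged sketch of the parameter-extraction step: fitting a generic rational model to measured S21 data with SciPy's Levenberg-Marquardt solver. The polynomial parameterization, starting point, and frequency normalization below are placeholders rather than the lossy-filter representation developed in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_filter_polynomials(freq, s21_meas, num_order=4, den_order=4):
    """Fit a rational model P(s)/Q(s) to measured complex S21 samples.
    Assumes freq is already normalized and supplies more samples than
    there are unknowns (required by the LM algorithm)."""
    s = 1j * freq

    def model(params):
        n1 = num_order + 1
        d1 = den_order + 1
        p = params[:n1] + 1j * params[n1:2 * n1]
        q = params[2 * n1:2 * n1 + d1] + 1j * params[2 * n1 + d1:]
        return np.polyval(p, s) / np.polyval(q, s)

    def residual(params):
        r = model(params) - s21_meas
        return np.concatenate([r.real, r.imag])  # LM needs real residuals

    n_par = 2 * (num_order + 1) + 2 * (den_order + 1)
    result = least_squares(residual, np.ones(n_par), method='lm')
    return result.x
```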
186

Online Parameter Learning for Structural Condition Monitoring System

Unknown Date (has links)
The purpose of online parameter learning and modeling is to validate and recover the properties of a structure from measured observations. Online parameter learning helps determine the unidentified characteristics of a structure by providing improved predictions of the system's vibration responses. Through modeling, predictions can be produced from a minimal number of measurements and compared to the true response of the system. In this simulation study, the Kalman filter technique is used to produce sets of predictions and to infer the stiffness parameter from noisy measurements. From this, the performance of online parameter identification is tested at different noise levels. This research is based on simulation work showing how effective Kalman filtering techniques are in dealing with analytical uncertainties in the data. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
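A minimal sketch of the joint state/parameter estimation idea: an extended Kalman filter with the unknown stiffness appended to the state of a single-degree-of-freedom oscillator. The system values, discretization, and noise levels are illustrative assumptions, not those of the simulation study.

```python
import numpy as np

def ekf_stiffness_estimation(y_meas, dt=0.01, m=1.0, c=0.5,
                             meas_var=1e-4, proc_var=1e-6, k0=5.0):
    """Extended Kalman filter over the augmented state [x, v, k]:
    displacement, velocity, and the unknown stiffness k (modeled as constant).
    Only the noisy displacement is measured."""
    z = np.array([0.0, 0.0, k0])          # initial state and stiffness guess
    P = np.diag([1e-2, 1e-2, 10.0])       # initial covariance
    Q = proc_var * np.eye(3)              # process noise
    R = np.array([[meas_var]])            # measurement noise
    H = np.array([[1.0, 0.0, 0.0]])       # displacement-only measurement
    k_hist = []
    for y in y_meas:
        x, v, k = z
        # Predict with Euler-discretized oscillator dynamics
        z = np.array([x + dt * v,
                      v + dt * (-(k / m) * x - (c / m) * v),
                      k])
        F = np.array([[1.0, dt, 0.0],
                      [-dt * k / m, 1.0 - dt * c / m, -dt * x / m],
                      [0.0, 0.0, 1.0]])
        P = F @ P @ F.T + Q
        # Update with the noisy displacement measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        z = z + K @ (np.array([y]) - H @ z)
        P = (np.eye(3) - K @ H) @ P
        k_hist.append(z[2])
    return z, np.array(k_hist)
```

Running the filter on responses simulated with increasing measurement noise is one way to reproduce the kind of noise-level study described above.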
187

Modeling and Uncertainty Analysis of CCHP systems

Smith, Joshua Aaron 15 December 2012 (has links)
Combined Cooling Heating and Power (CCHP) systems have been recognized as a viable alternative to conventional electrical and thermal energy generation in buildings because of their high efficiency, low environmental impact, and power grid independence. Many researchers have presented models for comparing CCHP systems to conventional systems and for optimizing CCHP systems. However, many of the errors and uncertainties that affect these modeling efforts have not been adequately addressed in the literature. This dissertation will focus on the following key issues related to errors and uncertainty in CCHP system modeling: (a) detailed uncertainty analysis of a CCHP system model with novel characterization of weather patterns, fuel prices and component efficiencies; (b) sensitivity analysis of a method for estimating the hourly energy demands of a building using Department of Energy (DOE) reference building models in combination with monthly utility bills; (c) development of a practical technique for selecting the optimal Power Generation Unit (PGU) for a given building that is robust with respect to fuel cost and weather uncertainty; (d) development of a systematic method for integrated calibration and parameter estimation of thermal system models. The results from the detailed uncertainty analysis show that CCHP operational strategies can effectively be assessed using steady state models with typical year weather data. The results of the sensitivity analysis reveal that the DOE reference buildings can be adjusted using monthly utility bills to represent the hourly energy demands of actual buildings. The optimal PGU sizing study illustrates that the PGU can be selected for a given building in consideration of weather and fuel cost uncertainty. The results of the integrated parameter estimation study reveal that using the integrated approach can reduce the effect of measurement error on the accuracy of predictive thermal system models.
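As a simple illustration of propagating input uncertainty through a steady-state CCHP cost calculation, the Monte Carlo sketch below samples fuel price, PGU efficiency, and electric demand; the cost expression and the distributions are placeholders, not the detailed models or data of the dissertation.

```python
import numpy as np

def cchp_cost_uncertainty(n_samples=10_000, rng=None):
    """Monte Carlo propagation of input uncertainty through a toy daily
    operating-cost model of a PGU-driven CCHP system.  All distributions
    are illustrative assumptions."""
    rng = np.random.default_rng(rng)
    fuel_price = rng.normal(8.0, 1.0, n_samples)       # $/MMBtu, assumed
    pgu_eff    = rng.uniform(0.28, 0.34, n_samples)    # PGU electric efficiency
    demand     = rng.normal(500.0, 75.0, n_samples)    # kWh/day electric demand
    fuel_energy = demand / pgu_eff * 3412 / 1e6        # MMBtu of fuel per day
    daily_cost  = fuel_energy * fuel_price
    return daily_cost.mean(), daily_cost.std(), np.percentile(daily_cost, [5, 95])
```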
188

Parameter identifiability of ARX models via discrete-time nonlinear system controllability

Özbay, Hitay. January 1987 (has links)
No description available.
189

STRUCTURAL UNCERTAINTY IN HYDROLOGICAL MODELS

Abhinav Gupta (11185086) 28 July 2021 (has links)
All hydrological models incur various uncertainties that can be broadly classified into three categories: measurement, structural, and parametric uncertainties. Measurement uncertainty exists due to errors in measurements of properties and variables (e.g., streamflows, which are typically an output, and rainfall, which serves as an input to hydrological models). Structural uncertainty exists due to errors in the mathematical representation of real-world hydrological processes. Parametric uncertainty arises from structural and measurement uncertainty and the limited amount of data available for calibration.

Several studies have addressed the problem of measurement and parametric uncertainties, but studies on structural uncertainty are lacking. Specifically, there does not exist any model that can be used to quantify structural uncertainties at an ungauged location. This was the first objective of the study: to develop a model of structural uncertainty that can be used to quantify total uncertainty (including structural uncertainty) in streamflow estimates at ungauged locations in a watershed. The proposed model is based on the idea that, since the effect of structural uncertainty is to introduce a bias into the parameter estimation, one way to accommodate structural uncertainty is to compensate for this bias. The developed model was applied to two watersheds: Upper Wabash Busseron Watershed (UWBW) and Lower Des Plaines Watershed (LDPW). For UWBW, mean daily streamflow data were used, while for LDPW mean hourly streamflow data were used. The proposed model worked well for mean daily data but failed to capture the total uncertainties for hourly data, likely due to higher measurement uncertainties in hourly streamflow data than what was assumed in the study.

Once a hydrological and error model is specified, the next step is to estimate model and error parameters. Parameter estimation in hydrological modeling may be carried out using either formal or informal Bayesian methodology. In formal Bayesian methodology, a likelihood function, motivated from probability theory, is specified over a space of models (or residuals), and a prior probability distribution is assigned over the space of models. There has been significant debate on whether the likelihood functions used in Bayesian theory are justified in hydrological modeling. However, relatively little attention has been given to the justification of prior probabilities. In most hydrological modeling studies, a uniform prior over hydrological model parameters is used to reflect a modeler's complete lack of knowledge about model parameters before calibration. Such a prior is also known as a non-informative prior. The second objective of this study was to scrutinize the assumption of the uniform prior as non-informative using the principle of maximum information gain. This principle was used to derive non-informative priors for several hydrological models, and it was found that the obtained prior was significantly different from a uniform prior. Further, the posterior distributions obtained by using this prior were significantly different from those obtained by using uniform priors.

The information about uncertainty in a modeling exercise is typically obtained from the residual time series (the difference between observed and simulated streamflows), which is an aggregate of structural and measurement uncertainties for a fixed model parameter set. Using this residual time series, an estimate of total uncertainty may be obtained, but it is impossible to separate structural and measurement uncertainties. The separation of these two uncertainties is, however, required to facilitate the rejection of deficient model structures, and to identify whether the model structure or the measurements need to be improved to reduce the total uncertainty. The only way to achieve this goal is to obtain an estimate of measurement uncertainty before model calibration. An estimate of measurement uncertainties in streamflow can be obtained by rating-curve analysis, but it is difficult to obtain an estimate of measurement uncertainty in rainfall. In this study, the classic idea of repeated sampling is used to get an estimate of measurement uncertainty in rainfall and streamflows. In the repeated sampling scheme, an experiment is performed several times under identical conditions to get an estimate of measurement uncertainty. This kind of repeated sampling, however, is not strictly possible for environmental observations; therefore, repeated sampling was applied in an approximate manner using a machine learning algorithm called random forest (RF). The main idea is to identify rainfall-runoff events across several different watersheds which are similar to each other, such that they can be thought of as different realizations of the same experiment performed under identical conditions. The uncertainty bounds obtained by RF were compared against the uncertainty bands obtained by rating-curve analysis and the runoff-coefficient method. Overall, the results of this study are encouraging for using RF as a pseudo repeated sampler.

In the fourth objective, the importance of uncertainty in estimated streamflows at ungauged locations and uncertainty in measured streamflows at gauged locations is illustrated for water quality modeling. The results of this study showed that it is not enough to obtain an uncertainty bound that envelops the true streamflows; rather, the individual realizations obtained by the model of uncertainty should be able to emulate the shape of the true streamflow time series for water quality modeling.
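One way the "pseudo repeated sampling" idea could look in code: a random forest groups rainfall-runoff events with similar descriptors, and the spread of runoff among a query event's leaf neighbors is read as an empirical uncertainty band. The feature set, forest settings, and band definition below are assumptions for illustration, not the exact procedure of the dissertation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_pseudo_repeated_sampling(event_features, event_runoff, query_features):
    """Treat training events that share leaves with a query event as approximate
    'replicates' of the same experiment and summarize their runoff spread."""
    event_runoff = np.asarray(event_runoff)
    rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
    rf.fit(event_features, event_runoff)
    train_leaves = rf.apply(event_features)    # (n_events, n_trees) leaf indices
    query_leaves = rf.apply(query_features)    # (n_queries, n_trees)
    bands = []
    for q in query_leaves:
        # events falling in the same leaf as the query in any tree
        mask = (train_leaves == q).any(axis=1)
        replicates = event_runoff[mask]
        bands.append(np.percentile(replicates, [5, 50, 95]))
    return np.array(bands)
```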
190

Parameter Estimation for the Beta Distribution

Owen, Claire Elayne Bangerter 20 November 2008 (has links) (PDF)
The beta distribution is useful in modeling continuous random variables that lie between 0 and 1, such as proportions and percentages. The beta distribution takes on many different shapes and may be described by two shape parameters, alpha and beta, that can be difficult to estimate. Maximum likelihood and method of moments estimation are possible, though method of moments is much more straightforward. We examine both of these methods here, and compare them to three more proposed methods of parameter estimation: 1) a method used in the Program Evaluation and Review Technique (PERT), 2) a modification of the two-sided power distribution (TSP), and 3) a quantile estimator based on the first and third quartiles of the beta distribution. We find the quantile estimator performs as well as maximum likelihood and method of moments estimators for most beta distributions. The PERT and TSP estimators do well for a smaller subset of beta distributions, though they never outperform the maximum likelihood, method of moments, or quantile estimators. We apply these estimation techniques to two data sets to see how well they approximate real data from Major League Baseball (batting averages) and the U.S. Department of Energy (radiation exposure). We find the maximum likelihood, method of moments, and quantile estimators perform well with batting averages (sample size 160), and the method of moments and quantile estimators perform well with radiation exposure proportions (sample size 20). Maximum likelihood estimators would likely do fine with such a small sample size were it not for the iterative method needed to solve for alpha and beta, which is quite sensitive to starting values. The PERT and TSP estimators do more poorly in both situations. We conclude that in addition to maximum likelihood and method of moments estimation, our method of quantile estimation is efficient and accurate in estimating parameters of the beta distribution.
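For reference, a minimal sketch of two of the estimators discussed above, method of moments and a quartile-based estimator, alongside SciPy's maximum likelihood fit; the least-squares quartile matching below is an assumed stand-in for the thesis's exact quantile estimator.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def beta_method_of_moments(x):
    """Method-of-moments estimates of the beta shape parameters for data in (0, 1)."""
    m, v = np.mean(x), np.var(x, ddof=1)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common           # alpha_hat, beta_hat

def beta_quartile_fit(x):
    """Quartile-matching estimate: choose (alpha, beta) whose theoretical first and
    third quartiles best match the sample quartiles (least-squares matching assumed)."""
    q1, q3 = np.percentile(x, [25, 75])

    def loss(p):
        a, b = np.exp(p)                          # keep parameters positive
        t1, t3 = stats.beta.ppf([0.25, 0.75], a, b)
        return (t1 - q1) ** 2 + (t3 - q3) ** 2

    res = minimize(loss, x0=np.log(beta_method_of_moments(x)), method='Nelder-Mead')
    return tuple(np.exp(res.x))

# Example with proportion-like data (e.g. batting averages lie in (0, 1)):
# x = np.random.beta(40, 120, size=160)
# print(beta_method_of_moments(x))
# print(beta_quartile_fit(x))
# print(stats.beta.fit(x, floc=0, fscale=1)[:2])  # maximum likelihood (alpha, beta)
```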
