261

Parameter estimation of biological pathways

Svensson, Emil January 2007 (has links)
To determine parameter values for models of reactions in the human body, such as glycolysis, good methods of parameter estimation are needed. These models are often non-linear, and estimating their parameters can be very time consuming, if it is possible at all. The goal of this work is to test different methods for improving the calculation speed of parameter estimation on an example system. If the estimation speed for the example system can be improved, the methods are likely to be useful for similar systems as well. One approach to improving the calculation speed is to construct a new cost function whose evaluation does not require any simulation of the system. Since the cost function is evaluated many times, simulation-free parameter estimation can be much quicker than evaluating the cost function through simulations. A modification of the simulated annealing optimization method has also been implemented and tested. It turns out that some of the methods significantly reduced the time needed for the parameter estimations; however, the quick methods pay for their speed with reduced robustness. The most successful method used a spline approximation together with a separation of the model into several submodels, and repeated use of the simulated annealing optimization algorithm to estimate the parameters.
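As a point of reference for the optimization method named above, a minimal sketch of simulated annealing in Python follows, applied to a hypothetical two-parameter exponential-decay model rather than the glycolysis system; the cooling schedule, step size, and toy cost function are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def simulated_annealing(cost, x0, n_iter=5000, T0=1.0, step=0.1, seed=0):
    """Minimize `cost` by simulated annealing with a geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), cost(x0)
    best_x, best_f = x.copy(), fx
    for k in range(n_iter):
        T = T0 * 0.999 ** k                       # cooling schedule
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy stand-in cost: sum of squared residuals of a 2-parameter decay model
data_t = np.linspace(0, 1, 20)
data_y = 2.0 * np.exp(-3.0 * data_t)              # "measurements" from k1=2, k2=3
cost = lambda p: np.sum((p[0] * np.exp(-p[1] * data_t) - data_y) ** 2)
print(simulated_annealing(cost, x0=[1.0, 1.0]))   # should recover roughly (2, 3)
```
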
262

A cross-country study on Okun's Law

Sögner, Leopold, Stiassny, Alfred January 2000 (has links) (PDF)
Okun's Law postulates an inverse relationship between movements of the unemployment rate and the real gross domestic product (GDP). In this article we investigate Okun's Law for 15 OECD countries and check its structural stability. Using data on employment and the labor force, we infer whether structural instability originates on the demand side or the supply side. (author's abstract) / Series: Working Papers Series "Growth and Employment in Europe: Sustainability and Competitiveness"
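A minimal illustration of the relationship being tested, assuming the common first-difference form of Okun's law, Δu = α + β·g, and using invented numbers in place of the OECD data:

```python
import numpy as np

# Difference form of Okun's law: change in unemployment vs. real GDP growth.
# The numbers below are illustrative stand-ins, not the paper's OECD data.
g = np.array([3.1, 2.4, 0.5, -1.2, 2.0, 3.5, 1.1])      # real GDP growth (%)
du = np.array([-0.4, -0.2, 0.6, 1.3, 0.0, -0.6, 0.3])   # change in unemployment (pp)

X = np.column_stack([np.ones_like(g), g])                # intercept + slope
(alpha, beta), *_ = np.linalg.lstsq(X, du, rcond=None)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")         # beta < 0 under Okun's law
```
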
263

Parameter Estimation and Uncertainty Analysis of Contaminant First Arrival Times at Household Drinking Water Wells

Kang, Mary January 2007 (has links)
Exposure assessment, which is an investigation of the extent of human exposure to a specific contaminant, must include estimates of the duration and frequency of exposure. For a groundwater system, the duration of exposure is controlled largely by the arrival time of the contaminant of concern at a drinking water well. This arrival time, which is normally estimated using groundwater flow and transport models, can have a range of possible values due to the uncertainties that are typically present in real problems. Earlier arrival times generally represent low-likelihood events, but they play a crucial role in a decision-making process that must be conservative and precautionary, especially when evaluating the potential for adverse health impacts. Therefore, an emphasis must be placed on the accuracy of the leading tail region of the likelihood distribution of possible arrival times. To demonstrate an approach to quantifying the uncertainty of arrival times, a real contaminant transport problem is used: TCE contamination due to releases from the Lockformer Company Facility in Lisle, Illinois. The approach used in this research consists of two major components: inverse modelling, or parameter estimation, and uncertainty analysis. The parameter estimation process for this case study was designed around insufficiencies in the model and the observational data due to errors, biases, and limitations. Because its purpose is to aid in characterising uncertainty, the process included many possible variations in an attempt to minimize assumptions. A preliminary investigation was conducted using a well-accepted parameter estimation method, PEST, and the corresponding findings were used to define the characteristics of the parameter estimation process applied to this case study. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and deadzones, were incorporated in the parameter estimation process to treat specific insufficiencies. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. For each objective function, three procedures were implemented as part of the parameter estimation approach: a multistart procedure, a stochastic search using the Dynamically-Dimensioned Search (DDS), and a test for acceptance based on the predefined physical criteria. The best performance, in terms of the ability of parameter sets to satisfy the physical criteria, was achieved using a Cauchy's M-estimator that was modified for this study and designated the LRS1 M-estimator. Due to uncertainties, multiple parameter sets obtained with the LRS1 M-estimator, the L1-estimator, and the L2-estimator are recommended for use in uncertainty analysis. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets; in contrast, deadzones proved to produce negligible benefits. The characteristics of the parameter sets were examined in terms of frequency histograms and plots of parameter value versus objective function value to infer the nature of the likelihood distributions of the parameters. The correlation structure was estimated using Pearson's product-moment correlation coefficient.
After the multistart procedure, the parameters are generally distributed uniformly, or appear essentially random, with few correlations in the parameter space. The execution of the search procedure introduces many correlations and produces parameter distributions that appear to follow lognormal, normal, or uniform distributions. The application of the physical criteria refines the parameter characteristics in the parameter space resulting from the search procedure by reducing anomalies. The combined effect of optimization and the application of the physical criteria performs the function of behavioural thresholds by removing parameter sets with high objective function values. Uncertainty analysis is performed with parameter sets obtained through two different sampling methodologies: the Monte Carlo sampling methodology, which randomly and independently samples from user-defined distributions, and the physically-based DDS-AU (P-DDS-AU) sampling methodology, which is developed based on the multiple parameter sets acquired during the parameter estimation process. Monte Carlo sampling is found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using the P-DDS-AU sampling methodology, which inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For the P-DDS-AU samples, uncertainty representation is performed using four definitions based on pseudo-likelihoods: two based on the Nash-Sutcliffe efficiency criterion, and two based on inverse error or residual variance. The definitions contain shaping factors that strongly affect the resulting likelihood distribution, and some are also affected by the objective function definition. Therefore, all variations are considered in the development of likelihood distribution envelopes, which are designed to maximize the amount of information available to decision-makers. The considerations that are important to the creation of an uncertainty envelope are outlined in this thesis. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria is recommended. The selection of likelihood and objective function definitions and their properties should be based on the needs of the problem; therefore, preliminary investigations should always be conducted to provide a basis for selecting appropriate methods and definitions. It is imperative to remember that the communication of the assumptions and definitions used in both parameter estimation and uncertainty analysis is crucial in decision-making scenarios.
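For readers unfamiliar with the estimators named above, a small sketch of the standard L2, L1, and Cauchy objective functions on a residual vector follows; the LRS1 M-estimator is the thesis's own modification of the Cauchy form, and its exact definition is not reproduced here.

```python
import numpy as np

def l2_loss(r):
    """Least-squares objective: strongly influenced by outliers."""
    return np.sum(r ** 2)

def l1_loss(r):
    """Least-absolute-deviations objective: more robust to outliers."""
    return np.sum(np.abs(r))

def cauchy_loss(r, c=2.385):
    """Standard Cauchy M-estimator: the influence of large residuals levels off."""
    return np.sum((c ** 2 / 2) * np.log1p((r / c) ** 2))

residuals = np.array([0.1, -0.3, 0.2, 5.0])   # one gross outlier
for f in (l2_loss, l1_loss, cauchy_loss):
    print(f.__name__, round(float(f(residuals)), 3))
```
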
264

The Role of Dominant Cause in Variation Reduction through Robust Parameter Design

Asilahijani, Hossein 24 April 2008 (has links)
Reducing variation in key product features is a very important goal in process improvement. Finding and trying to control the cause(s) of variation is one way to reduce variability, but it is not cost-effective, or even possible, in some situations. In such cases, Robust Parameter Design (RPD) is an alternative. The goal in RPD is to reduce variation by reducing the sensitivity of the process to the sources of variation, rather than controlling these sources directly. That is, the goal is to find levels of the control inputs that minimize the output variation imposed on the process via the noise variables (causes). In the literature, a variety of experimental plans have been proposed for RPD, including Robustness, Desensitization, and Taguchi's method. In this thesis, the efficiency of the alternative plans is compared in the situation where the most important source of variation, called the "Dominant Cause", is known. It is shown that desensitization is the most appropriate approach for applying the RPD method to an existing process.
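A toy numerical illustration of the desensitization idea, using a hypothetical transfer function in which the output's sensitivity to the dominant cause z depends on the control setting x:

```python
import numpy as np

# Toy RPD illustration: output y depends on a control input x and a noise
# variable z (the dominant cause). The model below is invented for this sketch.
rng = np.random.default_rng(1)

def y(x, z):
    return 10 + 2 * z * np.sin(x) + 0.5 * x       # sensitivity to z depends on x

z = rng.normal(0.0, 1.0, 10_000)                  # dominant cause we cannot control
candidates = np.linspace(0, np.pi, 50)            # candidate control settings
variances = [np.var(y(x, z)) for x in candidates]
best = candidates[int(np.argmin(variances))]
print(f"control setting minimizing transmitted variation: x = {best:.3f}")
# Near x = 0 or x = pi, sin(x) ~ 0, so y is desensitized to the noise z.
```
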
265

An experimental investigation of the flow around impulsively started cylinders

Tonui, Nelson Kiplanga't 10 September 2009 (has links)
A study of impulsively started flow over cylindrical objects is made using the particle image velocimetry (PIV) technique for Reynolds numbers of Re = 200, 500 and 1000 in an X-Y towing tank. The cylindrical objects studied were a circular cylinder of diameter D = 25.4 mm, and square and diamond cylinders, each with side length D = 25.4 mm. The aspect ratio, AR (= L/D), of the cylinders was 28, and they were therefore considered infinite. The development of the recirculation zone up to a dimensionless time of t* = 4 following the start of the motion was examined. The impulsive start was approximated using a dimensionless acceleration parameter, a*, and in this research the experiments were conducted for five acceleration parameters, a* = 0.5, 1, 3, 5 and 10. The study showed that conditions similar to impulsively started motion were attained once a* ≥ 3.

A recirculation zone was formed immediately after the start of motion as a result of flow separation at the surface of the cylinder. It contained a pair of primary eddies, which in the initial stages (as in this case) were symmetrical and rotated in opposite directions. The recirculation zone was quantified by the length of the zone, LR; the vortex development, in terms of both the streamwise location and the cross-stream spacing of the vortex centers, a and b, respectively; and the circulation (strength) of the primary vortices, Γ.

For all types of cylinders examined, the length of the recirculation zone, the streamwise location of the primary eddies, and the circulation of the primary eddies increase as time advances from the start of the impulsive motion. They also increase with the acceleration parameter a* until a* = 3, beyond which there is no further change, since conditions similar to an impulsive start have been achieved. The cross-stream spacing of the primary vortices is relatively independent of Re, a* and t*, but differs between cylinders.

Irrespective of the type of cylinder, the growth of the recirculation zone at Re = 500 and 1000 is smaller than at Re = 200. The recirculation zone of a diamond cylinder is much larger than for both square and circular cylinders. The square and diamond cylinders have sharp edges which act as fixed separation points; therefore, the cross-stream spacing of their primary vortex centers is independent of Re, unlike the circular cylinder, which shows some slight variation with Reynolds number.

The growth of the recirculation zone depends mostly on the distance moved following the start of the impulsive motion; that is why, for all types of cylinders, the LR/D, a/D and Γ/UD profiles collapse onto common curves when plotted against the distance moved from the start of the motion.
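The circulation values reported above are the kind of quantity that can be computed from a gridded PIV velocity field as an area integral of the vorticity ω = ∂v/∂x − ∂u/∂y. A sketch, using a synthetic Lamb-Oseen vortex in place of measured data:

```python
import numpy as np

# Circulation Gamma of a vortex from a gridded, PIV-like velocity field,
# computed as the area integral of the vorticity omega = dv/dx - du/dy.
# The Lamb-Oseen vortex below is a synthetic stand-in for measured data.
N, L, rc = 128, 0.1, 0.02                        # grid points, half-width (m), core radius (m)
Gamma_true = 0.01                                # circulation (m^2/s)
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x)                         # 'xy' indexing: X varies along axis=1
r = np.sqrt(X**2 + Y**2) + 1e-12
u_theta = Gamma_true / (2 * np.pi * r) * (1 - np.exp(-(r / rc) ** 2))
u, v = -u_theta * Y / r, u_theta * X / r         # Cartesian velocity components

dx = x[1] - x[0]
omega = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
Gamma = omega.sum() * dx * dx                    # area integral of vorticity
print(f"recovered circulation: {Gamma:.4f} m^2/s (true {Gamma_true})")
```
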
266

Condition Monitoring of Electrolytic Capacitors for Power Electronics Applications

Imam, Afroz M. 09 April 2007 (has links)
The objective of this research is to advance the field of condition monitoring of electrolytic capacitors used in power electronics circuits. The construction process of an electrolytic capacitor is presented, the various kinds of faults that can occur in an electrolytic capacitor are described, and the methods available to detect those faults are discussed. The effects of capacitor faults on the capacitor voltage and current waveforms are investigated through experiments, and it is demonstrated experimentally that faults in the capacitor can be detected by monitoring the capacitor voltage and current. Various ESR-estimation-based techniques available to detect capacitor failures in power electronics circuits are reviewed. Three algorithms are proposed to track and detect capacitor failures: an FFT-based algorithm, a system-modeling-based detection scheme, and a parameter-estimation-based algorithm. The parameter-estimation-based algorithm runs in real time and is inexpensive to implement. Finally, a detailed study is carried out to understand the failure mechanism of an electrolytic capacitor due to inrush current.
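One common ESR-estimation idea (a sketch under simplifying assumptions, not necessarily the thesis's exact algorithm) is to model the ripple voltage as v = ESR·i + q/C, with q the integrated current, and fit ESR and 1/C to sampled waveforms by least squares:

```python
import numpy as np

# Least-squares ESR/C estimation from sampled ripple waveforms (synthetic data).
rng = np.random.default_rng(0)
fs, f_ripple = 100_000, 120.0                    # sample rate (Hz), ripple frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)
i = 2.0 * np.sin(2 * np.pi * f_ripple * t)       # synthetic ripple current (A)

ESR_true, C_true = 0.15, 470e-6                  # ohms, farads (invented values)
q = np.cumsum(i) / fs                            # charge = integral of current
v = ESR_true * i + q / C_true + rng.normal(0, 0.01, t.size)   # noisy voltage

A = np.column_stack([i, q])                      # regressors for [ESR, 1/C]
(esr_hat, invC_hat), *_ = np.linalg.lstsq(A, v, rcond=None)
print(f"ESR ~ {esr_hat * 1e3:.1f} mOhm, C ~ {1 / invC_hat * 1e6:.0f} uF")
# A rising ESR estimate over the capacitor's life is the failure indicator.
```
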
267

Robust Parameter Design for Automatically Controlled Systems and Nanostructure Synthesis

Dasgupta, Tirthankar 25 June 2007 (has links)
This research develops comprehensive frameworks for robust parameter design of dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, so an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration, under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. The idea is also extended to cases with on-line noise factors, and the possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, in large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of the process variables around their set values, are derived from the fitted models using Monte Carlo simulations. The second part of the research develops an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis: a sequential space-filling design called Sequential Minimum Energy Design (SMED) for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach for generating sequential designs that are model-independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
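A much-simplified sketch of the SMED idea: treat existing design points as charged particles and place the next run where the total "potential energy" (sum of inverse distances to existing points) is smallest, pushing new runs into unexplored regions. The actual SMED criterion also weights the charges by observed responses, which is omitted here:

```python
import numpy as np

# Simplified sequential space-filling selection inspired by SMED (uniform
# charges only; the real method adapts charges to observed morphologies).
rng = np.random.default_rng(2)

def next_design_point(existing, n_candidates=2000, dim=2):
    cands = rng.uniform(0, 1, size=(n_candidates, dim))   # candidate process settings
    d = np.linalg.norm(cands[:, None, :] - existing[None, :, :], axis=2)
    energy = (1.0 / (d + 1e-9)).sum(axis=1)               # total inverse-distance "energy"
    return cands[np.argmin(energy)]                       # lowest energy = emptiest region

design = rng.uniform(0, 1, size=(3, 2))                   # initial runs
for _ in range(5):
    design = np.vstack([design, next_design_point(design)])
print(design.round(3))
```
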
268

Integration of Long Baseline Positioning System And Vehicle Dynamic Model

Chiou, Ji-Wen 04 August 2011 (has links)
Precise positioning is crucial for the successful navigation of underwater vehicles. At present, different instruments and methods are available for underwater positioning, but few of them are reliable for three-dimensional position sensing of underwater vehicles. Long baseline (LBL) positioning is the standard method for three-dimensional underwater navigation. However, the accuracy of LBL positioning suffers from its relatively low update rate. To improve the accuracy of positioning an underwater vehicle, the integration of additional sensing measurements into an LBL navigation system is necessary. In this study, numerical simulation and experiment are conducted to investigate the effect of the interrogation rate on the accuracy of LBL positioning. Numerical and experimental results show that the longer the interval between interrogations, the greater the LBL positioning error. In addition, missing replies from a transponder to transceiver interrogations are another major error source in LBL positioning. The experimental results also show that the accuracy of LBL positioning can be significantly improved by the integration of velocity sensing. Therefore, based on a Kalman filter, this study integrates an LBL system with a vehicle dynamic model to improve the accuracy of positioning an underwater vehicle. For the positioning experiments, a remotely operated vehicle (ROV) with a dedicated Graphic User Interface (GUI) is designed, constructed, and tested. To simulate the motion of the ROV precisely, a nonlinear dynamic model of the ROV with six degrees of freedom (DOF) is used and its hydrodynamic parameters are identified. Finally, the positioning experiment is run by maneuvering the ROV along an "S" trajectory, and a Kalman filter is adopted to propagate the error covariance, to update the measurement errors, and to correct the state equation whenever measurements of range, depth, and thruster command are available. The experimental results demonstrate the effectiveness of integrating the LBL system with the ROV dynamic model for improving the accuracy of positioning an underwater vehicle.
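A minimal sketch of the fusion idea: a Kalman filter predicts at a high rate from a vehicle model and corrects whenever a low-rate, noisy LBL position fix arrives. The thesis uses a six-DOF nonlinear model; the constant-velocity scalar-position model and all noise values below are illustrative assumptions:

```python
import numpy as np

# Constant-velocity Kalman filter fusing high-rate model prediction with
# low-rate LBL position fixes (one fix every 20 prediction steps).
dt = 0.1
F = np.array([[1, dt], [0, 1]])                  # state: [position, velocity]
H = np.array([[1.0, 0.0]])                       # LBL measures position only
Q = np.diag([1e-4, 1e-3])                        # process noise (model uncertainty)
R = np.array([[0.25]])                           # LBL fix variance (m^2)

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(3)
true_pos = lambda k: 0.5 * k * dt                # vehicle moving at 0.5 m/s

for k in range(1, 200):
    x, P = F @ x, F @ P @ F.T + Q                # predict every step (vehicle model)
    if k % 20 == 0:                              # LBL fix only every 2 s
        z = true_pos(k) + rng.normal(0, 0.5)     # noisy position fix
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
print(f"estimated position {x[0]:.2f} m, true {true_pos(199):.2f} m")
```
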
269

Implementations of Dynamic End-to-End Bit-rate Adjustments for Intelligent Video Surveillance Networks

Tsai, YueLin 17 January 2012 (has links)
In this thesis, we propose a mechanism to dynamically adjust video parameters in an intelligent video surveillance network. Whenever there is an alarm, or the network encounters congestion, we can adjust video parameters including Frames per Second (FPS), Quality, and Picture Size to adapt to the available network bandwidth. For example, we can adjust the FPS when an alarm exists in the surveillance system, and we can adjust the Quality or Picture Size, by counting the total number of video packets received per second, to obtain smooth video when the network is congested. To demonstrate the proposed schemes, we implement these three adjustable parameters (Quality, Picture Size, and FPS) on a Linux platform. To do this, we establish a new HTTP connection from a client to a camera and then develop the corresponding control messages issued by the client to change the video parameters. In addition, we implement a video recovery mechanism that measures the difference in arrival time between successive packets (referred to as diff). Finally, we observe whether, with our proposed scheme, the video quality is smoother under different background traffic loads. In the video recovery mechanism, we use diff to decide whether the higher-quality picture should be kept or downgraded to a lower-quality picture to avoid packet loss under network congestion.
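A sketch of the diff-based adaptation logic described above, with invented thresholds, window size, and quality levels; the thesis's implementation runs over the camera-client HTTP connection rather than in isolation like this:

```python
import statistics

# Track the inter-arrival time ("diff") of video packets and downgrade quality
# when arrivals stretch out (a congestion symptom), upgrading once they settle.
QUALITY_LEVELS = ["low", "medium", "high"]

class DiffController:
    def __init__(self, nominal_diff=0.01):
        self.nominal = nominal_diff              # expected gap at current bit-rate (s)
        self.level = 2                           # start at "high"
        self.recent = []

    def on_packet(self, arrival_time):
        self.recent.append(arrival_time)
        if len(self.recent) < 20:                # wait for a full window
            return QUALITY_LEVELS[self.level]
        diffs = [b - a for a, b in zip(self.recent, self.recent[1:])]
        self.recent = self.recent[-1:]
        mean_diff = statistics.fmean(diffs)
        if mean_diff > 1.5 * self.nominal and self.level > 0:
            self.level -= 1                      # congestion: drop quality
        elif mean_diff < 1.1 * self.nominal and self.level < 2:
            self.level += 1                      # stable: try higher quality
        return QUALITY_LEVELS[self.level]

ctrl = DiffController()
for t_arr in [k * 0.012 for k in range(40)]:     # slightly stretched arrivals
    level = ctrl.on_packet(t_arr)
print(level)
```
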
270

Constrained expectation-maximization (EM), dynamic analysis, linear quadratic tracking, and nonlinear constrained expectation-maximization (EM) for the analysis of genetic regulatory networks and signal transduction networks

Xiong, Hao 15 May 2009 (has links)
Despite the immense progress made by molecular biology in cataloging and characterizing molecular elements of life, and the success in genome sequencing, there have not been comparable advances in the functional study of complex phenotypes. This is because isolated study of one molecule, or one gene, at a time is not enough by itself to characterize the complex interactions in an organism and to explain the functions that arise out of these interactions. Mathematical modeling of biological systems is one way to meet the challenge. My research formulates the modeling of gene regulation as a control problem and applies systems and control theory to the identification, analysis, and optimal control of genetic regulatory networks. The major contributions of my work include biologically constrained estimation, dynamical analysis, and optimal control of genetic networks. In addition, parameter estimation of nonlinear models of biological networks is also studied, as a parameter estimation problem for a general nonlinear dynamical system. Results demonstrate the superior predictive power of biologically constrained state-space models, and that genetic networks can have different dynamic properties when subjected to different environmental perturbations. Application of optimal control demonstrates the feasibility of regulating gene expression levels. For the difficult problem of parameter estimation, a generalized EM algorithm is deployed, and a set of explicit formulas based on the extended Kalman filter is derived. Application of the method to synthetic and real-world data shows promising results.
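To illustrate the E-step/M-step structure behind this approach, here is a compact EM iteration for a scalar linear-Gaussian state-space model, estimating the transition parameter from noisy observations. The thesis's networks are multivariate and nonlinear (handled there with an extended Kalman filter), so this scalar linear case is only a structural sketch:

```python
import numpy as np

# EM for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t, estimating a with q, r known.
rng = np.random.default_rng(4)
T, a_true, q, r = 300, 0.9, 0.1, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t-1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)             # noisy observations

a = 0.5                                          # initial guess
for _ in range(30):
    # E-step, part 1: Kalman filter
    m, P = np.zeros(T), np.zeros(T)              # filtered mean / variance
    mp, Pp = np.zeros(T), np.zeros(T)            # predicted mean / variance
    m[0], P[0] = y[0], r
    for t in range(1, T):
        mp[t], Pp[t] = a * m[t-1], a**2 * P[t-1] + q
        K = Pp[t] / (Pp[t] + r)
        m[t] = mp[t] + K * (y[t] - mp[t])
        P[t] = (1 - K) * Pp[t]
    # E-step, part 2: RTS smoother with lag-one covariances
    ms, Ps = m.copy(), P.copy()
    C = np.zeros(T)                              # Cov(x_t, x_{t-1} | all data)
    for t in range(T - 2, -1, -1):
        J = P[t] * a / Pp[t+1]
        C[t+1] = J * Ps[t+1]
        ms[t] += J * (ms[t+1] - mp[t+1])
        Ps[t] += J**2 * (Ps[t+1] - Pp[t+1])
    # M-step: closed-form update of a from smoothed second moments
    a = np.sum(C[1:] + ms[1:] * ms[:-1]) / np.sum(Ps[:-1] + ms[:-1]**2)
print(f"estimated a = {a:.3f} (true {a_true})")
```
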
