531

State estimation, system identification and adaptive control for networked systems

Fang, Huazhen 14 April 2009
A networked control system (NCS) is a feedback control system whose control loop is physically closed over real-time communication networks. To meet the demands of 'teleautomation', modularity, integrated diagnostics, quick maintenance and decentralization of control, NCSs have received remarkable attention worldwide during the past decade. Yet despite their distinct advantages, NCSs suffer from network-induced constraints such as time delays and packet dropouts, which may degrade system performance. These network-induced constraints should therefore be incorporated into the control design and related studies.

For the problem of state estimation in a network environment, we present a simultaneous input and state estimation strategy to compensate for the effects of missing unknown inputs. A sub-optimal algorithm is proposed, and its stability properties are proven by analyzing the solution of a Riccati-like equation.

Despite its importance, system identification in a network environment has received little prior study. To identify the parameters of a system over a network, we modify the classical Kalman filter to obtain an algorithm capable of handling missing output data caused by the network medium. Convergence properties of the algorithm are established in a stochastic framework.

We further develop an adaptive control scheme for networked systems. By employing the proposed output estimator and parameter estimator, the designed adaptive controller can track the desired signal. Rigorous convergence analysis of the scheme is likewise performed in the stochastic framework.
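The abstract does not spell out the modified Kalman filter. As a rough, hedged sketch of the general idea (a standard Kalman filter that simply skips the measurement update whenever a networked output packet is dropped), the Python below may help; the model matrices, the `received` indicator sequence and the function name are hypothetical rather than taken from the thesis.

```python
import numpy as np

def kalman_filter_missing(A, C, Q, R, y_seq, received, x0, P0):
    """Kalman filter that skips the measurement update whenever the
    networked output packet is dropped (received[k] is False)."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for k, y in enumerate(y_seq):
        # Time update (prediction through the state equation).
        x = A @ x
        P = A @ P @ A.T + Q
        # Measurement update only when the output actually arrived.
        if received[k]:
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (y - C @ x)
            P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```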
532

An Objective Methodology to Assess Visual Acuity Using Visual Scanning Parameters

Cassel, Daniel 12 January 2010 (has links)
An objective methodology to assess visual acuity (VA) in infants was developed. The methodology is based on the analysis of visual scanning parameters when visual stimuli consisting of homogeneous targets and a target with gratings (TG) are presented. The percentage of time spent on the TG best predicted the ability of the subject to discriminate between the targets. Using this parameter, the likelihood ratio test was used to test the hypothesis that the TG was discriminated. VA is estimated as the highest spatial frequency for which the probability of a false positive is lower than the probability of a false negative for stimuli with lower spatial frequencies. VA estimates of 9 adults had an average error of 0.06 logMAR with a testing time of 3.5 minutes. These results suggest that, if the attention of infants can be consistently maintained, the new methodology will enable more accurate assessment of VA in infants.
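The decision rule is only summarized in the abstract. The sketch below illustrates, under invented Gaussian models of the fraction of scanning time spent on the grating target, how the false-positive and false-negative probabilities could be compared; the chance level, spreads and threshold are assumptions, not values from the study.

```python
from scipy.stats import norm

# Assumed models of the fraction of scanning time spent on the grating target (TG):
# H0 = grating not discriminated (chance-level looking), H1 = grating discriminated.
h0 = norm(loc=0.25, scale=0.06)   # chance level, e.g. one of four targets (assumption)
h1 = norm(loc=0.45, scale=0.08)   # discriminating observer (assumption)

def error_probabilities(threshold):
    """False positive: time-on-TG under H0 exceeds the criterion;
    false negative: time-on-TG under H1 falls below it."""
    p_fp = h0.sf(threshold)    # survival function, i.e. 1 - cdf
    p_fn = h1.cdf(threshold)
    return p_fp, p_fn

# Per the abstract's rule, a spatial frequency contributes to the acuity estimate
# when its false-positive probability is lower than the false-negative probability
# for stimuli of lower spatial frequency.
p_fp, p_fn = error_probabilities(0.35)
print(p_fp, p_fn, p_fp < p_fn)
```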
533

Discrete Event Simulation in the Preliminary Estimation Phase of Mega Projects: A Case Study of the Central Waterfront Revitalization Project

Nahrvar, Shayan 27 July 2010 (has links)
The methodology of discrete-event simulation provides a promising alternative for analyzing complicated construction systems. Given the level of uncertainty about cost and risk in the early estimation phase of mega-projects, project simulations have become a central part of decision-making and planning. In this paper, an attempt is made to compare the output generated by a model constructed under the Monte Carlo framework with that of discrete-event simulation to determine the similarities and differences between the two methods. To achieve this, the Simphony discrete-event simulation (DES) modeling environment is used. The result is then compared to a Monte Carlo simulation conducted by Golder Associates.
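Neither the Simphony model nor the Golder Associates model is reproduced in the abstract. As a minimal sketch of what the Monte Carlo side of such a comparison typically involves, the snippet below samples invented triangular cost distributions and reports percentile estimates; the cost items and figures are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost items with (low, mode, high) triangular estimates, in $M.
cost_items = {"excavation": (10, 14, 22), "structures": (30, 38, 55), "services": (8, 11, 18)}

def monte_carlo_total_cost(n_trials=100_000):
    """Sample each item's cost independently and sum them per trial, which is
    the basic scheme a Monte Carlo preliminary estimate relies on."""
    totals = np.zeros(n_trials)
    for low, mode, high in cost_items.values():
        totals += rng.triangular(low, mode, high, size=n_trials)
    return totals

totals = monte_carlo_total_cost()
print(np.percentile(totals, [10, 50, 90]))  # P10 / P50 / P90 cost estimates
```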
535

A Physical Estimation based Continuous Monitoring Scheme for Wireless Sensor Networks

Deshmukh, Wiwek 16 July 2007 (has links)
Data estimation is emerging as a powerful strategy for energy conservation in sensor networks. This thesis reports a technique, called Data Estimation using Physical Method (DEPM), that efficiently conserves battery power in environments that may take a variety of complex forms in real situations. The methodology can be adapted with minor changes to a multitude of tasks by altering the algorithm's parameters, and it can be ported to any platform. The technique conserves the limited energy supply that runs a sensor network by putting a large number of sensors to sleep and keeping a minimal set of active sensors that gather data and communicate them to a base station. DEPM rests on solving a set of inhomogeneous linear algebraic equations that are set up from well-established physical laws. The technique yields data estimates at an arbitrary number of point locations and allows easy experimental verification of the estimated data using only a few extra sensors.
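The physical laws behind DEPM are not stated in the abstract. The sketch below illustrates the general scheme under an assumed one-dimensional steady-state conduction model: readings from a few awake sensors fix the coefficients of a linear system, which then yields estimates at the sleeping sensors' positions. Positions and readings are invented.

```python
import numpy as np

# Assumed physical law: steady-state heat conduction along a rod gives a linear
# temperature profile T(x) = a + b*x, so a few awake sensors determine the field.
awake_pos  = np.array([0.0, 0.5, 1.0])        # positions of active sensors (assumed)
awake_temp = np.array([20.1, 24.9, 30.2])     # their readings (assumed)
asleep_pos = np.array([0.25, 0.75])           # sleeping sensors to be estimated

# Set up the inhomogeneous linear system A @ [a, b] = awake_temp and solve it.
A = np.column_stack([np.ones_like(awake_pos), awake_pos])
coeffs, *_ = np.linalg.lstsq(A, awake_temp, rcond=None)

# Estimate the data at the sleeping sensors from the fitted physical model.
estimates = coeffs[0] + coeffs[1] * asleep_pos
print(estimates)
```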
536

Sensor placement for microseismic event location

Errington, Angus Frank Charles 07 November 2006
Mining operations can produce highly localized, low-intensity earthquakes that are referred to as microseismic events. Monitoring of microseismic events is useful in predicting and comprehending hazards, and in evaluating the overall performance of a mine design.

A robust localization algorithm is used to estimate the source position of the microseismic event by selecting the hypothesized source location that maximizes an energy function generated from the sum of the time-aligned sensor signals. The accuracy of localization for the algorithm, characterized by the variance, depends in part upon the configuration of sensors. Two algorithms, MAXSRC and MINMAX, are presented that use the variance of localization error, in a particular direction, as a performance measure for a given sensor configuration.

The variance of localization error depends, in part, upon the energy spectral density of the microseismic event. The energy spectral density characterizations of sensor signals received in two potash mines are presented and compared using two spectral estimation techniques: multitaper estimation and combined time and lag weighting. It is shown that the difference between the two estimation techniques is negligible. However, the differences between the two mine characterizations, though not large, are significant. An example uses the characterized energy spectral densities to determine the variance of error for a single-step localization algorithm.

The MAXSRC and MINMAX algorithms are explained. The MAXSRC sensor placement algorithm places a sensor as close as possible to the source position with the maximum variance. The MINMAX sensor placement algorithm minimizes the variance of the source position with the maximum variance after the sensor has been placed. The MAXSRC algorithm is simple and can be solved using an exhaustive search, while the MINMAX algorithm uses a genetic algorithm to find a solution. These algorithms are then used in three examples, two of which are simple and synthetic; the other example is from the Lanigan Potash Mine. The results show that both sensor placement algorithms produce similar results, with the MINMAX algorithm consistently doing better. The MAXSRC algorithm places a single sensor approximately 100 times faster than the MINMAX algorithm. The example shows that the MAXSRC algorithm has the potential to be an efficient and intuitively simple sensor placement algorithm for mine microseismic event monitoring. The MINMAX algorithm provides, at an increase in computational time, a more robust placement criterion that can be solved adequately using a genetic algorithm.
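The variance model driving the placement algorithms is not given in the abstract. The sketch below captures only the MAXSRC selection rule as described: find the hypothesized source with the largest localization-error variance and place the next sensor at the nearest candidate site. The inputs, including the per-source variances, are assumed to come from such a model.

```python
import numpy as np

def maxsrc_placement(source_positions, source_variances, candidate_sites):
    """MAXSRC rule as described in the abstract: pick the hypothesized source
    location with the largest localization-error variance, then place the
    sensor at the candidate site closest to it. The variance values are
    assumed to come from the (unshown) localization-error model."""
    worst_source = source_positions[np.argmax(source_variances)]
    distances = np.linalg.norm(candidate_sites - worst_source, axis=1)
    return candidate_sites[np.argmin(distances)]

# Tiny invented example: three hypothesized sources, four admissible sensor sites.
sources = np.array([[0.0, 0.0], [50.0, 20.0], [10.0, 80.0]])
variances = np.array([4.0, 9.5, 6.1])
sites = np.array([[0.0, 10.0], [45.0, 25.0], [20.0, 60.0], [60.0, 0.0]])
print(maxsrc_placement(sources, variances, sites))   # site nearest [50, 20]
```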
537

Permeability Estimation from Fracture Calibration Test Analysis in Shale and Tight Gas

Xue, Han 1988- 14 March 2013 (has links)
Permeability estimation in tight and shale reservoirs is challenging because little or no flow will occur without hydraulic fracture stimulation. In the pressure falloff following a fracture calibration test (FCT), radial flow after fracture closure can be used to estimate the reservoir permeability. However, for very low permeability, the time to reach radial flow can exceed any practical duration. This study shows how to use the reservoir pressure to estimate the maximum reservoir permeability when radial flow is missing from the after-closure response. The approach is straightforward and can also be used for buildup tests. It applies whenever the well completion geometry permits radial flow before the pressure response encounters real well drainage limits. Recent developments have blurred the boundary between fracture calibration test analysis and classic pressure transient analysis. Adapting the log-log diagnostic plot representation to FCT analysis has made it possible to perform before- and after-closure analysis on the same diagnostic plot. This paper also proposes a method for diagnosing abnormal leakoff behavior using the log-log diagnostic plot as an alternative to the traditional G-function plot. The results show that the relationship between reservoir permeability and pressure can be used effectively both for estimating the permeability upper bound when there is no apparent radial flow and for confirming the permeability estimated from apparent late-time radial flow. Numerous field examples illustrate this simple and powerful insight.
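The thesis's before- and after-closure working equations are not reproduced in the abstract. As a generic, hedged illustration of the log-log diagnostic step it builds on, the sketch below computes a logarithmic pressure derivative and, where that derivative flattens (radial flow), applies the conventional constant-rate radial-flow relation in oilfield units; the FCT-specific after-closure form used in the thesis may differ, and the rate and fluid properties are placeholders.

```python
import numpy as np

def log_derivative(t, dp):
    """Bourdet-style logarithmic derivative d(dp)/d(ln t) used on the
    log-log diagnostic plot; radial flow shows up as a flat stretch."""
    return np.gradient(dp, np.log(t))

def radial_flow_permeability(dp_prime_plateau, q, B, mu, h):
    """Classical constant-rate radial-flow estimate in oilfield units
    (q in STB/d, B in RB/STB, mu in cp, h in ft; k returned in md).
    Offered only as a stand-in for the thesis's FCT-specific after-closure
    relations, which are not given in the abstract."""
    return 70.6 * q * B * mu / (h * dp_prime_plateau)
```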
538

The determinants of Canadian provincial health expenditures: evidence from dynamic panel

Bilgel, Firat 09 August 2004
This thesis aims to reveal the magnitude of the income elasticity of health expenditure and the impact of non-income determinants of health expenditures in the Canadian provinces. Health can be seen as a luxury good if the income elasticity exceeds unity and as a necessity if the income elasticity is below unity. The motivation behind the analysis of the determinants of health spending is to identify the forces that drive the persistent increase in health expenditures in Canada, to explain the disparities in provincial health expenditures, and thereby to prescribe sustainable macroeconomic policies regarding health spending. Panel data on real per capita GDP, the relative price of health care, the share of publicly funded health expenditure, the share of the senior population and life expectancy at birth have been used to investigate the determinants of Canadian real per capita provincial total, private and government health expenditures for the period 1975-2002. Dynamic models of health expenditure are analyzed via Generalized Instrumental Variables and Generalized Method of Moments techniques. The evidence confirms that health is far from being a luxury good in Canada and that government health expenditures are constrained by relative prices. The results also cast doubt upon the power of quantitative analysis in explaining the increase in health expenditures.
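The thesis's dynamic panel GIV/GMM estimators are not shown in the abstract. As a deliberately simplified, hedged illustration of the elasticity interpretation it relies on, the sketch below fits a static log-log regression to made-up figures; the coefficient on log income is read as the income elasticity, with values above one indicating a luxury good and below one a necessity.

```python
import numpy as np

# Made-up province-level series (real per-capita GDP and health spending).
log_gdp  = np.log(np.array([32_000, 35_500, 39_000, 43_000, 47_500]))
log_hexp = np.log(np.array([2_100, 2_230, 2_360, 2_510, 2_670]))

# Static log-log fit: the slope is the income elasticity of health expenditure.
X = np.column_stack([np.ones_like(log_gdp), log_gdp])
(intercept, elasticity), *_ = np.linalg.lstsq(X, log_hexp, rcond=None)

# Reading used in the abstract: elasticity > 1 -> luxury good, < 1 -> necessity.
print(f"income elasticity ~ {elasticity:.2f}")
```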
540

Exploration of a New Estimation Method in the Coalescent Process with Recombination

Massé, Hugues January 2008 (has links) (PDF)
Estimating genetic parameters is an important problem in mathematical and statistical genetics. Several methods address this problem, some of them based on maximum likelihood. The likelihood can be computed with the exact Griffiths-Tavaré equations, recurrence equations arising from the coalescent process. The idea is to consider several possible histories linking the data of the initial sample of DNA sequences to a common ancestor. Usually, some of the possible histories are simulated, in conjunction with Monte Carlo methods; Larribe et al. (2002) use this approach (see Chapter IV). We explore a new approach that uses the Griffiths-Tavaré equations differently to obtain a quasi-exact estimate of the likelihood without resorting to simulation. To keep the computation time required by the method reasonable, we must make two major compromises. The first is to limit the number of recombinations allowed in the histories. The second is to split the data into several parts called windows. This yields several marginal likelihoods, which we then combine by applying the composite-likelihood principle. Using a program written in C++, we apply our method to a fine genetic mapping problem in which we want to estimate the position of a mutation causing a simple genetic disease. Our method gives interesting results. For very small data sets, we show that it is possible to allow a fairly large number of recombinations while still obtaining convergence of the resulting likelihood curve. It is also possible to obtain curves whose shape and maximum-likelihood estimate are similar to those obtained with the method of Larribe et al. However, our method is not yet applicable in its current state because it is still too demanding in terms of computation time.

AUTHOR KEYWORDS: Exact Griffiths-Tavaré equations, Genetic parameters, Coalescent process, Composite likelihood.
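The Griffiths-Tavaré recursion itself is not reproduced in the abstract. As a minimal, hedged sketch of the windowing step described above, the code below combines per-window marginal likelihood curves into a composite likelihood by summing log-likelihoods over a common grid of candidate mutation positions; the per-window curves are placeholders, not output of the quasi-exact computation.

```python
import numpy as np

# Candidate positions of the disease mutation along the studied region (assumed grid).
positions = np.linspace(0.0, 1.0, 101)

# Placeholder marginal likelihood curves, one per data window; in the thesis these
# would come from the quasi-exact Griffiths-Tavaré computation on each window.
window_likelihoods = [np.exp(-(positions - c) ** 2 / 0.02) + 1e-6
                      for c in (0.42, 0.47, 0.45)]

# Composite likelihood: sum the log marginal likelihoods across windows and take
# the maximizing position as the estimate of the mutation's location.
composite_loglik = np.sum([np.log(L) for L in window_likelihoods], axis=0)
estimate = positions[np.argmax(composite_loglik)]
print(f"estimated mutation position ~ {estimate:.2f}")
```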
