531 |
An Objective Methodology to Assess Visual Acuity Using Visual Scanning Parameters. Cassel, Daniel, 12 January 2010
An objective methodology to assess visual acuity (VA) in infants was developed. The methodology is based on the analysis of visual scanning parameters when visual stimuli consisting of homogeneous targets and a target with gratings (TG) are presented. The percentage of viewing time spent on the TG best predicted the subject's ability to discriminate between the targets. Using this parameter, a likelihood ratio test was used to test the hypothesis that the TG was discriminated. VA is estimated as the highest spatial frequency for which the probability of a false positive is lower than the probability of a false negative for stimuli with lower spatial frequencies. VA estimates for 9 adults had an average error of 0.06 logMAR with a testing time of 3.5 minutes. These results suggest that, if the attention of infants can be consistently maintained, the new methodology will enable more accurate assessment of VA in infants.
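As an illustration of the decision step described above, the following sketch shows how the time-on-target fraction could feed a likelihood ratio test under a simple binomial model of gaze samples; the chance probability, discrimination probability, and threshold are assumptions for illustration, not the thesis's values.

# Hypothetical sketch: decide whether the grating target (TG) was discriminated
# from the fraction of gaze samples falling on it, via a binomial likelihood ratio.
from scipy.stats import binom

def tg_discriminated(n_on_tg, n_total, p_chance=0.25, p_discrim=0.6, threshold=1.0):
    # H0: random scanning over the targets (p = p_chance)
    # H1: the TG is discriminated and preferentially fixated (p = p_discrim)
    l0 = binom.pmf(n_on_tg, n_total, p_chance)
    l1 = binom.pmf(n_on_tg, n_total, p_discrim)
    return (l1 / l0) > threshold

# Example: 55 of 120 gaze samples landed on the TG
print(tg_discriminated(55, 120))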
|
532 |
Discrete Event Simulation in the Preliminary Estimation Phase of Mega Projects: A Case Study of the Central Waterfront Revitalization Project. Nahrvar, Shayan, 27 July 2010
Discrete-event simulation provides a promising alternative for analyzing complicated construction systems. Given the level of uncertainty about cost and risk in the early estimation phase of mega-projects, project simulations have become a central part of decision-making and planning. In this paper, an attempt is made to compare the output generated by a model constructed under the Monte Carlo framework with that of discrete-event simulation to determine the similarities and differences between the two methods. To achieve this, the Simphony discrete-event simulation (DES) modeling environment is used. The result is then compared to a Monte Carlo simulation conducted by Golder Associates.
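For context, a minimal Monte Carlo cost roll-up of the kind such a comparison involves is sketched below; the activity names and triangular cost distributions are invented for illustration and are not the project's or Golder Associates' data.

# Illustrative Monte Carlo cost roll-up with assumed triangular distributions.
import numpy as np

rng = np.random.default_rng(0)
activities = {                      # (low, mode, high) cost in $M, assumed values
    "dockwall": (40, 55, 80),
    "utilities": (20, 25, 40),
    "public_realm": (30, 35, 60),
}
n = 10_000
total = sum(rng.triangular(lo, mode, hi, n) for lo, mode, hi in activities.values())
print(f"P50 = {np.percentile(total, 50):.1f}  P90 = {np.percentile(total, 90):.1f}")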
|
534 |
A Physical Estimation based Continuous Monitoring Scheme for Wireless Sensor Networks. Deshmukh, Wiwek, 16 July 2007
Data estimation is emerging as a powerful strategy for energy conservation in sensor networks. This thesis reports a technique, called Data Estimation using Physical Method (DEPM), that efficiently conserves battery power in environments that may take a variety of complex forms in real situations. The methodology can be ported to any platform and adapted to a multitude of tasks with only minor changes to the algorithm's parameters. The technique conserves the network's limited energy supply by allowing a large number of sensors to sleep while a minimal set of active sensors gathers data and communicates it to a base station. DEPM rests on solving a set of linear inhomogeneous algebraic equations that are set up using well-established physical laws. The technique is powerful enough to yield data estimates at an arbitrary number of point locations, and it allows easy experimental verification of the estimated data using only a few extra sensors.
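The linear-algebra core of such a physics-based estimator can be sketched as follows; the physical model used here (a linear steady-state profile between two active sensors) is an assumption for illustration, not the DEPM equations themselves.

# Illustration only: estimate readings at sleeping-sensor locations by solving a
# small linear inhomogeneous system derived from an assumed physical law.
import numpy as np

x_active = np.array([0.0, 10.0])        # positions of the active sensors
T_active = np.array([20.0, 30.0])       # their measured values

# Fit T(x) = a*x + b from the active measurements
A = np.column_stack([x_active, np.ones_like(x_active)])
a, b = np.linalg.solve(A, T_active)

x_sleep = np.array([2.5, 5.0, 7.5])     # locations of sleeping sensors
print(a * x_sleep + b)                  # estimated readings: [22.5 25. 27.5]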
|
535 |
Sensor placement for microseismic event location. Errington, Angus Frank Charles, 07 November 2006
Mining operations can produce highly localized, low intensity earthquakes that are referred to as microseismic events. Monitoring of microseismic events is useful in predicting and comprehending hazards, and in evaluating the overall performance of a mine design.

A robust localization algorithm is used to estimate the source position of the microseismic event by selecting the hypothesized source location that maximizes an energy function generated from the sum of the time-aligned sensor signals. The accuracy of localization for the algorithm, characterized by the variance of the localization error, depends in part upon the configuration of sensors. Two algorithms, MAXSRC and MINMAX, are presented that use the variance of localization error, in a particular direction, as a performance measure for a given sensor configuration.

The variance of localization error depends, in part, upon the energy spectral density of the microseismic event. The energy spectral density characterizations of sensor signals received in two potash mines are presented and compared using two spectral estimation techniques: multitaper estimation and combined time and lag weighting. It is shown that the difference between the two estimation techniques is negligible. However, the differences between the two mine characterizations, though not large, are significant. An example uses the characterized energy spectral densities to determine the variance of error for a single-step localization algorithm.

The MAXSRC and MINMAX algorithms are explained. The MAXSRC sensor placement algorithm places a sensor as close as possible to the source position with the maximum variance. The MINMAX sensor placement algorithm minimizes the variance of the source position with the maximum variance after the sensor has been placed. The MAXSRC algorithm is simple and can be solved using an exhaustive search, while the MINMAX algorithm uses a genetic algorithm to find a solution. These algorithms are then used in three examples, two of which are simple and synthetic; the other is from the Lanigan Potash Mine. The results show that both sensor placement algorithms produce similar results, with the MINMAX algorithm consistently doing better. The MAXSRC algorithm places a single sensor approximately 100 times faster than the MINMAX algorithm. The example shows that the MAXSRC algorithm has the potential to be an efficient and intuitively simple sensor placement algorithm for mine microseismic event monitoring. The MINMAX algorithm provides, at an increase in computational time, a more robust placement criterion which can be solved adequately using a genetic algorithm.
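A schematic of the MAXSRC placement step is sketched below; the error-variance model used here (proportional to the mean distance from existing sensors) is a stand-in for illustration, not the thesis's variance formulation.

# Schematic MAXSRC step: place the next sensor as close as possible to the
# candidate source location with the largest (proxy) localization-error variance.
import numpy as np

def maxsrc_place(sources, sensors, candidate_sites):
    dists = np.linalg.norm(sources[:, None, :] - sensors[None, :, :], axis=2)
    proxy_var = dists.mean(axis=1)              # stand-in variance for each source
    worst = sources[np.argmax(proxy_var)]       # source with the maximum variance
    site_d = np.linalg.norm(candidate_sites - worst, axis=1)
    return candidate_sites[np.argmin(site_d)]   # nearest allowed sensor site

sources = np.array([[0.0, 0.0], [50.0, 20.0], [80.0, 90.0]])
sensors = np.array([[10.0, 10.0], [20.0, 60.0]])
sites = np.array([[70.0, 80.0], [30.0, 30.0], [90.0, 95.0]])
print(maxsrc_place(sources, sensors, sites))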
|
536 |
Permeability Estimation from Fracture Calibration Test Analysis in Shale and Tight Gas. Xue, Han, 14 March 2013
Permeability estimation in tight and shale reservoirs is challenging because little or no flow will occur without hydraulic fracture stimulation. In the pressure falloff following a fracture calibration test (FCT), radial flow after fracture closure can be used to estimate the reservoir permeability. However, for very low permeability, the time to reach radial flow can exceed any practical duration. This study shows how to use the reservoir pressure to estimate the maximum reservoir permeability when radial flow is missing in the after-closure response. The approach is straightforward and can also be used for buildup tests. It applies whenever the well completion geometry permits radial flow before the pressure response encounters the well's actual drainage limits.
Recent developments have blurred the boundary between fracture calibration test analysis and classic pressure transient analysis. Adapting the log-log diagnostic plot representation to FCT analysis has made it possible to perform before- and after-closure analysis on the same diagnostic plot. This study also proposes a method for diagnosing abnormal leakoff behavior using the log-log diagnostic plot as an alternative to the traditional G-function plot.
The results show the relationship between reservoir permeability and pressure can be used effectively for both estimation of the permeability upper bound when there is no apparent radial flow and for confirming the permeability estimated from apparent late time radial flow. Numerous field examples illustrate this simple and powerful insight.
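For readers unfamiliar with the log-log diagnostic plot mentioned above, the sketch below builds a generic one (pressure change and its logarithmic derivative versus time) from synthetic data; it is not the thesis's workflow or field data.

# Generic log-log diagnostic plot on synthetic data: during radial flow the
# logarithmic derivative flattens toward a constant level.
import numpy as np
import matplotlib.pyplot as plt

t = np.logspace(-2, 2, 200)              # elapsed time, hr (synthetic)
dp = 50.0 * np.log1p(t / 0.05)           # synthetic pressure change, psi
deriv = np.gradient(dp, np.log(t))       # derivative with respect to ln(t)

plt.loglog(t, dp, label="pressure change")
plt.loglog(t, deriv, label="log derivative")
plt.xlabel("time, hr"); plt.ylabel("delta-p, psi")
plt.legend(); plt.show()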
|
537 |
The determinants of Canadian provincial health expenditures: evidence from dynamic panel. Bilgel, Firat, 09 August 2004
This thesis aims to reveal the magnitude of the income elasticity of health expenditure and the impact of non-income determinants of health expenditure in the Canadian provinces. Health can be seen as a luxury good if the income elasticity exceeds unity and as a necessity if it is below unity. The motivation behind the analysis of the determinants of health spending is to identify the forces that drive the persistent increase in health expenditures in Canada and to explain the disparities in provincial health expenditures, and thereby to prescribe sustainable macroeconomic policies regarding health spending. Panel data on real per capita GDP, the relative price of health care, the share of publicly funded health expenditure, the share of senior population, and life expectancy at birth have been used to investigate the determinants of Canadian real per capita provincial total, private, and government health expenditures for the period 1975-2002. Dynamic models of health expenditure are analyzed via Generalized Instrumental Variables and Generalized Method of Moments techniques. Evidence confirms that health is far from being a luxury for Canada and that government health expenditures are constrained by relative prices. The results also cast doubt upon the power of quantitative analysis in explaining the increasing health expenditures.
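A common dynamic panel specification of the kind described above can be written as follows; the exact regressors and functional form are assumed here for illustration and may differ from the thesis's model.

\ln h_{it} = \alpha \ln h_{i,t-1} + \beta \ln y_{it} + \gamma' x_{it} + \mu_i + \varepsilon_{it}

where h_{it} is real per capita health expenditure in province i and year t, y_{it} is real per capita GDP, x_{it} collects the non-income determinants (relative price of health care, public funding share, share of seniors, life expectancy), and \mu_i is a province effect. In this log-log form, \beta is the income elasticity: \beta > 1 would indicate that health care behaves as a luxury good, \beta < 1 as a necessity.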
|
539 |
Exploration of a New Estimation Method in the Coalescent Process with Recombination (original title: Exploration d'une nouvelle méthode d'estimation dans le processus de coalescence avec recombinaison). Massé, Hugues, January 2008 (PDF)
Estimating genetic parameters is an important problem in mathematical and statistical genetics. Several methods address this problem, some of them based on maximum likelihood. The likelihood can be computed with the exact Griffiths-Tavaré equations, recurrence equations arising from the coalescent process, by considering the many possible histories that link the sampled DNA sequences to a common ancestor. Usually, some of the possible histories are simulated in conjunction with Monte Carlo methods; Larribe et al. (2002) use this approach (see Chapter IV). We explore a new approach that uses the Griffiths-Tavaré equations differently to obtain a quasi-exact estimate of the likelihood without resorting to simulation. To keep the computation time reasonable, we make two major compromises. The first is to limit the number of recombinations allowed in the histories. The second is to split the data into several parts called windows, yielding several marginal likelihoods that are then pooled using the composite-likelihood principle. Using a program written in C++, we apply our method to a fine gene-mapping problem in which we estimate the position of a mutation causing a simple genetic disease. Our method gives interesting results. For very small data sets, we show that enough recombinations can be allowed for the resulting likelihood curve to converge, and it is also possible to obtain curves whose shape and maximum-likelihood estimate are similar to those obtained with the method of Larribe et al. However, our method is not yet usable in its current state because it remains too demanding in computation time.

AUTHOR KEYWORDS: Exact Griffiths-Tavaré equations, Genetic parameters, Coalescent process, Composite likelihood.
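The windowing idea described above can be stated compactly; the notation here is assumed for illustration, not taken from the thesis.

\ell_C(\theta) = \sum_{w=1}^{W} \log L_w(\theta), \qquad \hat{\theta} = \arg\max_\theta \ell_C(\theta)

where L_w(\theta) is the marginal likelihood computed from window w of the data via the Griffiths-Tavaré recurrences with a bounded number of recombinations, and \ell_C is the composite log-likelihood obtained by pooling the windows.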
|
540 |
A Study of the Choice of a Tree Model in 4-nomial Logistic Regression According to the Effect of Parameter Values (original title: Étude du choix d'un modèle d'arborescence en régression logistique 4-nomiale selon l'effet de la valeur des paramètres). Stafford, Marie-Christine, January 2008 (PDF)
This thesis studies tree-structured models in 4-nomial logistic regression for the case where outcomes arise from sequences of consecutive or parallel multinomial experiments. The first chapter reviews the general multinomial logistic regression model and presents a method for estimating the parameters individually. The next chapter summarizes the work of Rousseau and Sankoff on tree-structured logistic regression models, which provides the framework for the present study. The third chapter presents results characterizing the parameter values for which certain tree structures are equivalent. Finally, the last chapter reports a Monte Carlo simulation study carried out to understand and highlight the factors that influence the order (by maximum likelihood) in which tree structures are selected. These simulations identify principles that this order obeys, depending on the shape of the parameter vector and the magnitude of its entries.

AUTHOR KEYWORDS: Logistic regression, Tree structures, Reduced models.
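For reference, the standard 4-nomial (four-category) logistic regression model underlying such tree structures can be written as follows, with category 4 taken as the reference; the choice of reference category is an assumption here.

P(Y = j \mid x) = \frac{\exp(x'\beta_j)}{1 + \sum_{k=1}^{3} \exp(x'\beta_k)}, \quad j = 1, 2, 3, \qquad P(Y = 4 \mid x) = \frac{1}{1 + \sum_{k=1}^{3} \exp(x'\beta_k)}

Tree-structured variants instead decompose the four-way choice into a sequence of smaller logistic stages, which is why different tree structures can become equivalent for particular parameter values.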
|