141

Some Aspects of Propensity Score-based Estimators for Causal Inference

Pingel, Ronnie January 2014 (has links)
This thesis consists of four papers related to commonly used propensity score-based estimators for average causal effects. The first paper starts with the observation that researchers often have access to data containing many covariates that are correlated. We therefore study the effect of correlation on the asymptotic variance of an inverse probability weighting and a matching estimator. Under the assumptions of normally distributed covariates, a constant causal effect, and potential outcomes and a logit that are linear in the parameters, we show that the correlation influences the asymptotic efficiency of the estimators differently, with regard to both direction and magnitude. Further, the strength of the confounding towards the outcome and the treatment plays an important role. The second paper extends the first in that the estimators are studied under the more realistic setting of using the estimated propensity score. We also relax several assumptions made in the first paper and include the doubly robust estimator. Again, the results show that the correlation may increase or decrease the variances of the estimators, but we also observe that several aspects influence how correlation affects the variance of the estimators, such as the choice of estimator, the strength of the confounding towards the outcome and the treatment, and whether a constant or non-constant causal effect is present. The third paper concerns estimation of the asymptotic variance of a propensity score matching estimator. Simulations show that large gains in mean squared error can be made by properly selecting the smoothing parameters of the variance estimator, and that a residual-based local linear estimator may be a more efficient estimator of the asymptotic variance. The specification of the variance estimator is shown to be crucial when evaluating the effect of right heart catheterisation, i.e., we show either a negative effect on survival or no significant effect depending on the choice of smoothing parameters. In the fourth paper, we provide an analytic expression for the covariance matrix of logistic regression with normally distributed regressors. This paper is related to the others in that logistic regression is commonly used to estimate the propensity score.
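To make the inverse probability weighting (IPW) estimator discussed above concrete, the following minimal Python sketch estimates an average causal effect with a propensity score fitted by logistic regression. The simulated data, variable names, and the use of scikit-learn are illustrative assumptions and not part of the thesis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)  # correlated covariates
    p_true = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] + 0.5 * X[:, 1])))            # true propensity score
    T = rng.binomial(1, p_true)                                                # treatment indicator
    Y = 2.0 * T + X[:, 0] + X[:, 1] + rng.normal(size=n)                       # constant causal effect = 2

    # Estimate the propensity score with a logistic model, the usual choice in practice.
    e_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

    # Inverse probability weighting estimate of the average causal effect.
    ate_ipw = np.mean(T * Y / e_hat) - np.mean((1 - T) * Y / (1 - e_hat))
    print(f"IPW estimate of the average causal effect: {ate_ipw:.3f}")

A Hajek-normalised variant, which divides each weighted sum by the sum of its weights, is often preferred in practice because it tends to be less variable.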
142

The Compensation model of working memory in healthy aging: structural and functional neural correlates of the N-back task over the lifespan

Bharadia, Vinay 21 January 2013 (has links)
The concept of age has undergone a shift from a non-specific measure of chronological age to an identification of underlying biological, psychological and functional factors leading to age-related changes over time. Loss of neurons (atrophy) and cognitive decline in healthy aging fit well into this age paradigm. The aging brain is thought to undergo functional shifts in information processing in response to atrophy, which is conceptualised as a “Compensation Hypothesis” of cognitive aging. Using behavioural (reaction time, variability measures, and accuracy on the n-back task of working memory), structural (stereological cortical volume estimates) and functional (functional Magnetic Resonance Imaging) approaches, this study documents decreased whole-brain, prefrontal and dorsolateral prefrontal cortex volumes in older individuals. Further, slower, less accurate, and more variable performance on the n-back task in older participants was accompanied by a posterior-to-anterior shift in processing, confirming the Compensation Hypothesis of cognitive aging. The behavioural data, combined with the structural and functional findings, suggest an aging brain that neuropsychologically compensates over time by paradoxically placing further processing demands on a structurally compromised dorsolateral prefrontal cortex. This produces adequate but slower, more variable, and less accurate performance compared to younger brains; compensation occurs with age, but is not complete. Decision-making research has pointed to the important role of emotion in judgement, and has implicated the orbitofrontal cortex as critical for this processing modality. The structural data in this study showed preferentially less volume in the dorsolateral prefrontal cortex, but maintained cortical volume in the orbitofrontal cortex with age. Younger individuals took longer and maintained their accuracy with increasing complexity during the n-back task, while older participants decreased their accuracy, though not to the level of chance, with increasing task complexity. As such, decision making on the n-back task may have shifted with age from the pure processing power of the structurally compromised dorsolateral prefrontal cortex to an increasing reliance on emotionally guided decision-making inputs mediated by the intact orbitofrontal cortex, resulting in adequate but not fully compensated performance in older people. These findings are discussed in relation to evolutionary pressures on the human working memory system, Hume’s concepts of reason and the passions, and the emerging field of neuroeconomics. / Graduate
143

An adaptive atmospheric prediction algorithm to improve density forecasting for aerocapture guidance processes

Wagner, John Joseph 12 January 2015 (has links)
Many modern entry guidance systems depend on predictions of atmospheric parameters, notably atmospheric density, in order to guide the entry vehicle to some desired final state. However, in highly dynamic atmospheric environments such as the Martian atmosphere, the density may vary by as much as 200% from predicted pre-entry trends. This high level of atmospheric density uncertainty can cause significant complications for entry guidance processes and may, in extreme scenarios, cause complete failure of the entry. In the face of this uncertainty, mission designers are compelled to apply large trajectory and design safety margins, which typically drive the system design towards less efficient solutions with smaller delivered payloads. The margins necessary to combat the high levels of atmospheric uncertainty may even preclude scientifically interesting destinations or architecturally useful mission modes such as aerocapture. Aerocapture is a method for inserting a spacecraft into an orbit about a planetary body with an atmosphere without the need for significant propulsive maneuvers. This can reduce the required propellant and propulsion hardware for a given mission, which lowers mission costs and increases the available payload fraction. However, large density dispersions have a particularly acute effect on aerocapture trajectories due to the interaction of the high required speeds and the relatively low densities encountered at aerocapture altitudes. Therefore, while the potential system-level benefits of aerocapture are great, so too are the risks associated with this mission mode in highly uncertain atmospheric environments such as Mars. Contemporary entry guidance systems utilize static atmospheric density models for trajectory prediction and control. These static models are unable to alter the fundamental nature of the underlying state equations used to predict atmospheric density. This limits both the fidelity and the adaptive freedom of these models and forces the guidance system to retroactively correct for density prediction errors after those errors have already impacted the trajectory. A new class of dynamic density estimator, called a Plastic Ensemble Neural System (PENS), is introduced, which is able to generate high-fidelity, adaptable density forecast models by altering the underlying atmospheric state equations to better agree with observed atmospheric trends. A new construct called an ensemble echo is also introduced, which creates an associative learning architecture permitting PENS to evolve with increasing atmospheric exposure. The PENS estimator is applied to a numerical guidance system and the performance of the composite system is investigated with over 144,000 guided trajectory simulations. The results demonstrate that the PENS algorithm achieves significant reductions in both the required post-aerocapture performance and the aerocapture failure rates relative to historical density estimators.
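The PENS algorithm itself is not reproduced here; purely as a generic illustration of correcting a static density model with in-flight observations, the sketch below scales a simple exponential atmosphere by a recursively updated factor. The exponential model, gain, and all numerical values are assumptions for illustration only and do not represent the thesis's estimator.

    import numpy as np

    def density_model(h_m, rho0=0.020, H=11100.0):
        """Static exponential Mars-like density model: rho0 [kg/m^3] at the reference altitude, scale height H [m]."""
        return rho0 * np.exp(-h_m / H)

    class AdaptiveDensityEstimator:
        """Multiplicative correction factor updated from sensed densities (e.g. accelerometer-derived)."""
        def __init__(self, gain=0.3):
            self.k = 1.0      # current correction factor
            self.gain = gain  # how aggressively to follow new observations

        def update(self, h_m, rho_sensed):
            ratio = rho_sensed / density_model(h_m)   # observed vs. modelled density
            self.k += self.gain * (ratio - self.k)    # low-pass filter on the ratio
            return self.k

        def predict(self, h_m):
            return self.k * density_model(h_m)        # corrected forecast for guidance

    est = AdaptiveDensityEstimator()
    for h, rho in [(60e3, 9.5e-5), (55e3, 1.6e-4), (50e3, 2.6e-4)]:   # synthetic sensed profile
        est.update(h, rho)
    print(f"corrected density forecast at 45 km: {est.predict(45e3):.2e} kg/m^3")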
144

Adaptive Neuro Fuzzy Inference System Applications In Chemical Processes

Guner, Evren 01 November 2003 (has links) (PDF)
Neuro-fuzzy systems are systems in which neural networks (NNs) are incorporated into fuzzy systems, allowing knowledge to be acquired automatically through the learning algorithms of NNs. They can be viewed as a mixture of local experts. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is one example of a neuro-fuzzy system, in which a fuzzy system is implemented in the framework of adaptive networks. ANFIS constructs an input-output mapping based both on human knowledge (in the form of fuzzy rules) and on generated input-output data pairs. Effective control of distillation systems, which are among the important unit operations in the chemical industry, can easily be designed when the composition values are known. Online measurement of the compositions can be done using direct composition analyzers. However, online composition measurement is often not feasible, since these analyzers, such as gas chromatographs, involve large measurement delays. As an alternative, compositions can be estimated from temperature measurements. Thus, an online estimator that utilizes temperature measurements can be used to infer the product compositions. In this study, ANFIS estimators are designed to infer the top and bottom product compositions in a continuous distillation column, and the reflux drum compositions in a batch distillation column, from the measurable tray temperatures. The performance of the designed estimators is further compared with that of other types of estimators, such as NN and Extended Kalman Filter (EKF) estimators. In this study, ANFIS performance is also investigated in the adaptive neuro-fuzzy control of a pH system, where ANFIS is used as a controller within a specialized learning algorithm. A simple ANFIS structure is designed and implemented in an adaptive closed-loop control scheme. The performance of the ANFIS controller is also compared with that of an NN controller for the case under study.
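As an illustration of the first-order Sugeno (TSK) inference that ANFIS implements, the sketch below evaluates the forward pass of a two-rule fuzzy estimator mapping two tray temperatures to a product composition. In an actual ANFIS the membership and consequent parameters would be fitted by the hybrid learning rule; all rule parameters and temperatures here are made-up placeholders.

    import numpy as np

    def gauss_mf(x, c, sigma):
        """Gaussian membership function used in layer 1 of ANFIS."""
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    def anfis_forward(T1, T2, rules):
        """First-order Sugeno inference: rule firing strengths weight linear consequents."""
        w, f = [], []
        for r in rules:
            w.append(gauss_mf(T1, *r["mf1"]) * gauss_mf(T2, *r["mf2"]))   # layer 2: product t-norm
            f.append(r["p"] * T1 + r["q"] * T2 + r["r"])                  # layer 4: linear consequent
        w = np.array(w)
        return float(np.dot(w / w.sum(), f))                              # layers 3 and 5: normalise and sum

    # Two hypothetical rules relating tray temperatures [deg C] to top-product mole fraction.
    rules = [
        {"mf1": (78.0, 2.0), "mf2": (92.0, 3.0), "p": -0.010, "q": -0.002, "r": 1.95},
        {"mf1": (85.0, 3.0), "mf2": (99.0, 3.0), "p": -0.004, "q": -0.006, "r": 1.55},
    ]
    print(f"estimated composition: {anfis_forward(79.5, 93.0, rules):.3f}")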
145

Tournaments in the public sector

Souza Junior, Celso Vila Nova de 31 March 2008 (has links)
Tournament theory shows that a firm may motivate employees by having them compete for rewards, under either group-based or individualistic schemes. The empirical literature on tournaments has been growing; however, many studies do not use appropriate data. This paper provides the first empirical evidence on three key assumptions of these models, using a special case concerning the incentives of workers in the public sector. The dataset contains information from the Coordenacao de Fiscalizacao (i.e., the Inspections Division) of the Secretaria da Receita Federal (SRF) on the bonus program created by the Brazilian government to compensate tax officials for their efforts in collecting taxes and uncovering tax violations. We constructed a large unbalanced panel of tax-collection data covering 110 tax agencies distributed across 10 regions over 45 monthly time periods, which allowed us to examine the predictions raised above. In order to examine the tournament predictions, we emphasize the dynamics of the process, taking into account unobserved heterogeneity and endogeneity problems using appropriate GMM techniques. This enables us to account for possible inertia in time adjustments within a tax agency, possibly in determining strategies to improve the agency's performance on the sources most valuable for collection, which supports the hypothesis of learning by doing. The results also provide evidence supporting the following tournament hypotheses: (1) prizes motivate agents to exert effort; (2) the number of participants increases as the size of the prize increases; (3) differentials in wages and bonuses directly affect workers' incentives.
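The dynamic panel estimation alluded to above can be illustrated in its simplest form by an Anderson-Hsiao-style instrumental variable estimator, a just-identified special case of the GMM approach: first-difference the model to remove agency fixed effects and instrument the lagged differenced outcome with a deeper lag in levels. The simulated data and variable names below are illustrative only and do not come from the SRF dataset.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T, rho = 110, 45, 0.6                       # agencies, monthly periods, true persistence
    alpha = rng.normal(size=N)                     # unobserved agency heterogeneity
    y = np.zeros((N, T))
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

    # First differences remove the fixed effect; y_{t-2} (in levels) instruments dy_{t-1}.
    dy   = y[:, 3:] - y[:, 2:-1]                   # dependent variable, t = 3..T-1
    dy_1 = y[:, 2:-1] - y[:, 1:-2]                 # endogenous regressor dy_{t-1}
    z    = y[:, 1:-2]                              # instrument y_{t-2}

    rho_hat = np.sum(z * dy) / np.sum(z * dy_1)    # just-identified IV estimate
    print(f"Anderson-Hsiao estimate of persistence: {rho_hat:.3f} (true {rho})")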
146

Efficient Semiparametric Estimators for Nonlinear Regressions and Models under Sample Selection Bias

Kim, Mi Jeong 2012 August 1900 (has links)
We study the consistency, robustness and efficiency of parameter estimation in different but related models via a semiparametric approach. First, we revisit the second-order least squares estimator proposed in Wang and Leblanc (2008) and show that the estimator attains semiparametric efficiency. We further extend the method to heteroscedastic error models and propose a semiparametric efficient estimator in this more general setting. Second, we study a class of semiparametric skewed distributions arising when the sample selection process causes sampling bias for the observations. We begin by assuming an anti-symmetric skewing function. Taking into account the symmetric nature of the population distribution, we propose consistent estimators for the center of the symmetric population. These estimators are robust to model misspecification and attain the minimum possible estimation variance. Next, we extend the model to permit a more flexible skewing structure. Without assuming a particular form of the skewing function, we propose both consistent and efficient estimators for the center of the symmetric population using a semiparametric method. We also analyze the asymptotic properties and derive the corresponding inference procedures. Numerical results are provided to support the theory and illustrate the finite-sample performance of the proposed estimators.
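The second-order least squares idea revisited in the first part can be sketched as follows: match both the first and second conditional moments of Y given X and minimise the resulting quadratic form. The linear mean function, identity weighting matrix, and simulated data below are simplifying assumptions for illustration, not the semiparametric efficient version studied in the thesis.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 1000
    x = rng.normal(size=n)
    beta0, beta1, sigma2 = 1.0, 2.0, 0.5
    y = beta0 + beta1 * x + rng.normal(scale=np.sqrt(sigma2), size=n)

    def sls_criterion(theta):
        """Second-order least squares with identity weights:
        rho_i = (y_i - g_i, y_i^2 - g_i^2 - sigma^2); minimise sum_i rho_i' rho_i."""
        b0, b1, s2 = theta
        g = b0 + b1 * x
        r1 = y - g
        r2 = y ** 2 - g ** 2 - s2
        return np.sum(r1 ** 2 + r2 ** 2)

    theta_hat = minimize(sls_criterion, x0=np.array([0.0, 1.0, 1.0]), method="Nelder-Mead").x
    print(f"SLS estimates: beta0={theta_hat[0]:.3f}, beta1={theta_hat[1]:.3f}, sigma2={theta_hat[2]:.3f}")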
147

Finite horizon robust state estimation for uncertain finite-alphabet hidden Markov models

Xie, Li, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW January 2004 (has links)
In this thesis, we consider a robust state estimation problem for discrete-time, homogeneous, first-order, finite-state finite-alphabet hidden Markov models (HMMs). Based on Kolmogorov's Theorem on the existence of a process, we first present the Kolmogorov model for the HMMs under consideration. A new change of measure is introduced. The statistical properties of the Kolmogorov representation of an HMM are discussed on the canonical probability space. A special Kolmogorov measure is constructed. Meanwhile, the ergodicity of two expanded Markov chains is investigated. In order to describe the uncertainty of HMMs, we study probability distance problems based on the Kolmogorov model of HMMs. Using a change of measure technique, the relative entropy and the relative entropy rate as probability distances between HMMs, are given in terms of the HMM parameters. Also, we obtain a new expression for a probability distance considered in the existing literature such that we can use an information state method to calculate it. Furthermore, we introduce regular conditional relative entropy as an a posteriori probability distance to measure the discrepancy between HMMs when a realized observation sequence is given. A representation of the regular conditional relative entropy is derived based on the Radon-Nikodym derivative. Then a recursion for the regular conditional relative entropy is obtained using an information state method. Meanwhile, the well-known duality relationship between free energy and relative entropy is extended to the case of regular conditional relative entropy given a sub-σ-algebra. Finally, regular conditional relative entropy constraints are defined based on the study of the probability distance problem. Using a Lagrange multiplier technique and the duality relationship for regular conditional relative entropy, a finite horizon robust state estimator for HMMs with regular conditional relative entropy constraints is derived. A complete characterization of the solution to the robust state estimation problem is also presented.
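One concrete way to compute a probability distance of the kind discussed above is a Monte Carlo approximation of the relative entropy rate between two finite-alphabet HMMs, using the forward algorithm to evaluate each model's log-likelihood of a long observation sequence simulated from the first model. The two-state, two-symbol parameters below are arbitrary illustrative values; the thesis derives such quantities analytically in terms of the HMM parameters rather than by simulation.

    import numpy as np

    def simulate_hmm(A, B, pi, n, rng):
        """Simulate n observations from an HMM with transition A, emission B, initial pi."""
        x = rng.choice(len(pi), p=pi)
        obs = np.empty(n, dtype=int)
        for t in range(n):
            obs[t] = rng.choice(B.shape[1], p=B[x])
            x = rng.choice(A.shape[0], p=A[x])
        return obs

    def log_likelihood(A, B, pi, obs):
        """Scaled forward algorithm returning log p(obs | A, B, pi)."""
        alpha = pi * B[:, obs[0]]
        ll = np.log(alpha.sum()); alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            ll += np.log(alpha.sum()); alpha /= alpha.sum()
        return ll

    A1 = np.array([[0.9, 0.1], [0.2, 0.8]]); B1 = np.array([[0.8, 0.2], [0.3, 0.7]])
    A2 = np.array([[0.7, 0.3], [0.4, 0.6]]); B2 = np.array([[0.6, 0.4], [0.5, 0.5]])
    pi = np.array([0.5, 0.5])

    rng = np.random.default_rng(3)
    obs = simulate_hmm(A1, B1, pi, n=20000, rng=rng)
    rate = (log_likelihood(A1, B1, pi, obs) - log_likelihood(A2, B2, pi, obs)) / len(obs)
    print(f"estimated relative entropy rate D(P1 || P2): {rate:.4f} nats per observation")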
149

Recovery based error estimation for the Method of Moments

Strydom, Willem Jacobus 03 1900 (has links)
Thesis (MEng)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: The Method of Moments (MoM) is routinely used for the numerical solution of electromagnetic surface integral equations. Solution errors are inherent to any numerical computational method, and error estimators can be effectively employed to reduce and control these errors. In this thesis, gradient recovery techniques of the Finite Element Method (FEM) are formulated within the MoM context, in order to recover a higher-order charge representation from a Rao-Wilton-Glisson (RWG) MoM solution. Furthermore, a new recovery procedure, based specifically on the properties of the RWG basis functions, is introduced by the author. These recovered charge distributions are used for a posteriori error estimation of the charge. It was found that the newly proposed charge recovery method has the highest accuracy of the considered recovery methods, and is the most suited for applications within recovery-based error estimation. In addition to charge recovery, the possibility of recovery procedures for the MoM solution current is also investigated. A technique is explored whereby a recovered charge is used to find a higher-order divergent current representation. Two newly developed methods for the subsequent recovery of the solenoidal current component, as contained in the RWG solution current, are also introduced by the author. A posteriori error estimation of the MoM current is accomplished through the use of the recovered current distributions. A mixed second-order recovered current, based on a vector recovery procedure, was found to produce the most accurate results. The error estimation techniques developed in this thesis could be incorporated into an adaptive solver scheme to optimise the solution accuracy relative to the computational cost. / AFRIKAANSE OPSOMMING: The Method of Moments (MoM) finds general application in the numerical solution of electromagnetic surface integral equations. Numerical errors are inherent to the procedure: error estimation techniques are therefore needed to analyse and reduce these errors. In this thesis, gradient recovery techniques of the Finite Element Method are formulated in the MoM context. These techniques are employed to take the surface charge of a Rao-Wilton-Glisson (RWG) MoM solution to an improved higher-order representation. Furthermore, a new charge recovery technique, based specifically on the properties of the RWG basis functions, is proposed by the author. The recovered charge distributions were implemented in a posteriori error estimation of the charge. The newly proposed technique delivered the most accurate results of the group of recovery techniques investigated. In addition to charge recovery, the possibility of MoM current recovery techniques is also investigated. A method for the recovery of a higher-order divergent current component, based on the recovered charge, is implemented. Furthermore, two new methods for the recovery of the solenoidal component of the RWG current are proposed by the author. A posteriori error estimation of the MoM current is realised with the aid of the recovered current distributions, and it was found that a mixed second-order recovered current, based on a vector method, delivers the best results. The error estimation techniques investigated in this thesis can be incorporated into an adaptive scheme to optimise the accuracy of a numerical solution relative to the computational cost.
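The recovery-based error estimation idea can be illustrated in one dimension: project a piecewise-constant solution quantity (analogous to the RWG charge, which is constant per triangle) onto a smoother node-based representation by patch averaging, and use the difference between the raw and recovered fields as an error indicator. The 1-D setting and simple averaging rule below are illustrative simplifications of the surface-based procedures developed in the thesis.

    import numpy as np

    # Piecewise-constant element values of some quantity on a 1-D mesh (stand-in for per-triangle charge).
    nodes = np.linspace(0.0, 1.0, 21)
    elem_centres = 0.5 * (nodes[:-1] + nodes[1:])
    q_elem = np.exp(-5 * elem_centres) + 0.03 * np.random.default_rng(4).normal(size=elem_centres.size)

    # Recovery: average the element values sharing each node to obtain a smoother nodal field.
    q_node = np.empty_like(nodes)
    q_node[0], q_node[-1] = q_elem[0], q_elem[-1]
    q_node[1:-1] = 0.5 * (q_elem[:-1] + q_elem[1:])

    # Error indicator per element: difference between the recovered (interpolated) and raw values.
    q_recovered_at_centres = 0.5 * (q_node[:-1] + q_node[1:])
    eta = np.abs(q_recovered_at_centres - q_elem)
    print("elements flagged for refinement:", np.argsort(eta)[-3:])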
150

Estimação de perdas técnicas e comerciais : métodos baseados em fluxo de carga e estimador de estados

Rossoni, Aquiles January 2014 (has links)
Technical and commercial losses are significant in electricity distribution companies, harming their technical and financial performance. The application and evaluation of loss-reduction techniques are directly related to correct loss estimation. This work aims to analyse the performance of methods based on load flow and state estimation for loss estimation, considering balanced distribution systems. The load-flow method used is Newton-Raphson. Based on this load flow, the methods of comparison with measurements and of correction factors are described and applied, both reviewed from the literature. Using the weighted least squares state estimator, the losses are estimated with the aid of gross error analysis. Of the methods used, the first applies residual analysis, as presented in the literature; in the other, the application of composed error analysis is proposed. The case study is performed on a balanced distribution system, considering load forecasts and a restricted number of measurements. In the proposed cases, different levels of commercial losses are inserted at different buses, and the load forecasts and measurements are assumed to be subject to errors. For each proposed case, the results present the errors of the methods in estimating technical and commercial losses. Additionally, an analysis of the relationship between these errors and the level of inserted commercial losses is carried out. The work also presents the performance of the methods in locating commercial losses and a study of the computational cost of the methodologies. The results show that the accuracy of the load forecast is decisive in the estimation of commercial losses. The state-estimator-based method with composed error analysis, proposed in this work, presented the best results in loss estimation and identification. Extending this method to larger and unbalanced distribution systems appears to be an interesting alternative for estimating and combating losses. / The technical and commercial energy losses are significant in distribution companies, hampering their technical and financial performance. The application and evaluation of techniques to reduce losses are directly related to proper loss estimation. This work aims to analyze the performance of loss estimation using methods based on load flow and state estimation, considering balanced distribution systems. The load-flow method used is Newton-Raphson. Considering this load flow, the methods of comparison with measurements and of correction factors are described, both taken from the literature. Using the weighted least squares state estimator, the losses are estimated using gross error analysis. One of the methods uses residual analysis, as presented in the literature; in the other, the use of composed error analysis is proposed. The case study is performed on a balanced distribution system, considering load forecasts and a limited number of measurements. In the proposed cases, different levels of commercial losses are inserted at different buses, and it is considered that load forecasts and measurements are subject to error. For each proposed case, the results show the methods' errors in technical and commercial loss estimation. Additionally, an analysis of the relationship between the estimation errors and the level of commercial losses is performed. This work also shows the methods' performance in locating commercial losses and presents a study of the computational cost of the methods. The results show that the load forecast accuracy is decisive in the estimation of commercial losses. The method based on state estimation with composed error analysis, proposed in this dissertation, showed the best results in loss estimation and identification. Extending this method to larger and unbalanced distribution systems appears to be an interesting alternative for estimating and combating losses.
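The comparison-with-measurements idea described above can be illustrated with a back-of-the-envelope balance: technical losses are the sum of I²R losses obtained from a load-flow solution, and commercial losses are what remains of the measured input energy after subtracting billed energy and technical losses. The feeder data below are made-up illustrative numbers, not results from the case study.

    import numpy as np

    # Branch currents [A] and resistances [ohm] from a (hypothetical) converged load-flow solution.
    I = np.array([120.0, 85.0, 60.0, 30.0])
    R = np.array([0.12, 0.20, 0.35, 0.50])
    hours = 730.0                                               # one month

    technical_loss_kwh = np.sum(3 * R * I**2) * hours / 1000.0  # three-phase I^2 R losses

    energy_in_kwh     = 1_250_000.0                             # measured at the substation
    energy_billed_kwh = 1_180_000.0                             # sum of customer meters
    commercial_loss_kwh = energy_in_kwh - energy_billed_kwh - technical_loss_kwh

    print(f"technical losses:  {technical_loss_kwh:,.0f} kWh")
    print(f"commercial losses: {commercial_loss_kwh:,.0f} kWh")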
