11

Essays on Treatment Effects Evaluation

Guo, Ronghua 06 September 2012
The first chapter uses the propensity score matching method to measure the average impact of insurance on health service utilization in terms of office-based physician visits, total number of reported visits to hospital outpatient departments, and emergency room visits. Four matching algorithms are employed to match propensity scores. The results show that insurance significantly increases office-based physician visits, and its impacts on reported visits to hospital outpatient departments and emergency room visits are positive but not significant. This implies that physician offices would see a substantial increase in demand if universal insurance were introduced; to accommodate this demand, the government would need to allocate more resources to physician offices relative to outpatient or emergency room services. The second chapter studies the sensitivity of propensity score matching methods to different estimation methods. Traditionally, parametric models such as logit and probit are used to estimate the propensity score. Current technology allows us to use computationally intensive methods, either semiparametric or nonparametric, to estimate it. We use Monte Carlo experiments to investigate the sensitivity of the treatment effect to different propensity score estimation models under the unconfoundedness assumption. The results show that estimates of the average treatment effect on the treated (ATT) are insensitive to the estimation method when the index function for treatment is linear, but the logit and probit models do a better job when the index function is nonlinear. The third chapter proposes a Cross-Sectionally Varying Coefficient (CVC) method to approximate, with nonexperimental data, individual treatment effects, the distribution of treatment effects, the average treatment effect on the treated, and the average treatment effect. The CVC method reparameterizes the outcome of no treatment and the treatment effect in terms of observable variables, and uses these observables together with a Bayesian estimator of their coefficients to approximate individual treatment effects. Monte Carlo simulations demonstrate the efficacy and applicability of the proposed estimator. The method is applied to two datasets: data from the U.S. Job Training Partnership Act (JTPA) program and a dataset of firms’ seasoned equity offerings and operating performances.
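As a rough illustration of the matching approach described above, the sketch below estimates the ATT by 1-to-1 nearest-neighbour matching on a logit propensity score — just one of several possible matching algorithms, and the data, function name, and settings are illustrative assumptions, not taken from the thesis.

```python
# Minimal nearest-neighbour propensity score matching sketch
# (illustrative only; names and data are not from the thesis).
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_psm(X, treated, y):
    """ATT via 1-to-1 nearest-neighbour matching on the propensity score."""
    # Step 1: estimate the propensity score P(T=1 | X) with a logit model.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]

    # Step 2: for each treated unit, find the control with the closest score.
    matched = c_idx[np.argmin(np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]), axis=1)]

    # Step 3: ATT = mean outcome gap between treated units and their matches.
    return np.mean(y[t_idx] - y[matched])

# Synthetic example: "insurance" as treatment, "visits" as outcome, true effect = 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2.0 * treated + X.sum(axis=1) + rng.normal(size=500)
print(att_psm(X, treated, y))
```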
12

Assessment of numerical differentiation methods for kinematic orbit solution of the GRACE mission

Krishnan, Sandeep Kalyanapuram 05 March 2013
The historical method of precise orbit determination is a dynamic approach. However, with the improvement of GPS tracking data and associated tracking networks, two newer methods have been developed: reduced-dynamic and kinematic. In addition to orbit determination, alternative methods of gravity field recovery have been developed using kinematic orbits, which do not rely on any force modeling. One significant drawback of kinematic orbits, however, is that they lack any velocity or acceleration information; these have to be derived numerically. Based on the results of this thesis, the Savitzky-Golay filter, without a remove-restore procedure, is recommended for deriving kinematic velocities for the GRACE mission. In addition, the numerical differentiation methods are tested to see how accurately they represent the satellite's acceleration for all three orbit types. Finally, with the kinematic orbits properly reconstructed, the results can also be compared to dynamic and reduced-dynamic orbits through K-Band Ranging residuals.
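For a sense of how a Savitzky-Golay filter turns discrete positions into velocities, here is a minimal sketch on a toy 1-D signal; the sample interval, window length, and polynomial order are illustrative assumptions, not the values tuned in the thesis.

```python
# Sketch: velocity from evenly sampled positions via a Savitzky-Golay filter
# (window, polynomial order, and sample interval are illustrative assumptions).
import numpy as np
from scipy.signal import savgol_filter

dt = 10.0                                        # sample interval in seconds (assumed)
t = np.arange(0.0, 3600.0, dt)
pos = 6.8e6 * np.sin(2 * np.pi * t / 5400.0)     # toy 1-D "orbit" coordinate (m)

# deriv=1 with delta=dt returns the first derivative, i.e. velocity (m/s).
vel = savgol_filter(pos, window_length=9, polyorder=4, deriv=1, delta=dt)

# Compare against the analytic derivative of the toy signal.
vel_true = 6.8e6 * (2 * np.pi / 5400.0) * np.cos(2 * np.pi * t / 5400.0)
print(np.max(np.abs(vel - vel_true)))
```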
13

Rapid Characterization of Cellular Pathways Using Time-Varying Signals

Thomson, Ty M.; Endy, Drew 21 October 2005
The use of traditional tools for the discovery and characterization of biological systems has resulted in a wealth of biological knowledge. Unfortunately, only a small portion of the biological world is well-understood to date, and the study of the rest remains a daunting task. This work involves using time-varying stimuli in order to more rapidly interrogate and characterize signaling pathways. The time-dependent stimulation of a signaling pathway can be used in conjunction with a model of the pathway to efficiently evaluate and test hypotheses. We are developing this technology using the yeast pheromone signal transduction pathway as a model system. The time-varying stimuli will be applied to the yeast cells via a novel microfluidic device, and the pathway output will be measured via various fluorescent reporters. The output of the pathway can then be compared to the output from a computational model of the pathway in order to test hypotheses and constrain our knowledge of the pathway. Initial work shows that a computational model can be used to identify stimuli time-courses that increase the parameter sensitivity, meaning that corresponding experiments could potentially be much more informative. / Poster presented at the 2005 ICSB meeting, held at Harvard Medical School in Boston, MA.
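The idea that some stimulus time-courses are more informative than others can be made concrete with a toy finite-difference sensitivity check. The sketch below is not the pheromone-pathway model; the one-state ODE, parameter values, and stimuli are illustrative assumptions.

```python
# Toy illustration (not the pheromone-pathway model): compare how sensitive a
# simple pathway output is to a rate parameter k under two stimulus time-courses.
import numpy as np
from scipy.integrate import solve_ivp

def simulate(k, stimulus):
    # dy/dt = stimulus(t) - k*y : a one-state stand-in for a signalling output.
    sol = solve_ivp(lambda t, y: stimulus(t) - k * y, (0, 60), [0.0],
                    t_eval=np.linspace(0, 60, 200))
    return sol.y[0]

step = lambda t: 1.0                               # constant stimulus
pulses = lambda t: 1.0 if (t % 20) < 10 else 0.0   # square-wave stimulus

for name, stim in [("step", step), ("pulses", pulses)]:
    k, dk = 0.2, 1e-4
    # Finite-difference sensitivity of the output trajectory to k.
    sens = (simulate(k + dk, stim) - simulate(k - dk, stim)) / (2 * dk)
    print(name, np.linalg.norm(sens))   # larger norm -> more informative about k
```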
14

Precision farming in South Africa

Rusch, Peter C 07 January 2004
Precision Farming is by far the most exciting new agricultural technology developed during the past decade, and although technology transfer is especially difficult in agriculture for a number of reasons, this technology has survived its initial stages of implementation. Historically, field boundaries often followed natural soil boundaries, leading to small fields that were treated homogeneously. As agricultural machinery was developed and grew ever larger, fields were often combined to allow for more efficient cultivation. As a result, fields with varying properties were created, leading to inefficiencies. Precision Farming was developed to overcome this problem. In this paper some results of initial research undertaken in South Africa under a variety of circumstances will be shown. / Dissertation (MEng)--University of Pretoria, 2005.
15

Temporal Closeness in Knowledge Mobilization Networks

Doan, William January 2016
In this thesis we study the impact of time in the analysis of social networks. To do so, we represent a knowledge mobilization network, Knowledge-Net, both as a standard static graph and as a time-varying graph, and study the two representations to see how they differ. For our study, we implemented several temporal metrics and added them to Gephi, an open-source software package for graph and network analysis that already contains a number of static metrics, and then used that software to obtain our results. Knowledge-Net is a network built using the knowledge mobilization concept. In social science, knowledge mobilization is defined as the use of knowledge towards the achievement of goals. Networks built using this concept make more visible the relations among heterogeneous human and non-human individuals, organizational actors, and non-human mobilization actors. A time-varying graph is a graph whose nodes and edges appear and disappear over time. A journey in a time-varying graph is the analogue of a path in a static graph, and the notion of shortest path in a static graph has three counterparts in a time-varying graph: the shortest journey has the fewest temporal hops, the fastest journey takes the least amount of time, and the foremost journey arrives the soonest. Of these three, we focus on the foremost journey in our analysis.
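To make the foremost-journey notion concrete, here is a small sketch that computes earliest arrival times by scanning time-stamped contacts in chronological order. The contact-sequence representation and the one-time-step traversal cost are assumptions for illustration, not the thesis's exact model or its Gephi implementation.

```python
# Sketch: foremost (earliest-arrival) journeys from a source node, assuming the
# time-varying graph is a list of time-stamped contacts (u, v, t) that each take
# one time step to traverse.
import math

def foremost_arrival(contacts, source):
    """Return the earliest arrival time at every node reachable from source."""
    arrival = {source: 0}
    # Process contacts in chronological order; a contact (u, v, t) can extend a
    # journey only if we have already reached u by time t.
    for u, v, t in sorted(contacts, key=lambda c: c[2]):
        if arrival.get(u, math.inf) <= t and t + 1 < arrival.get(v, math.inf):
            arrival[v] = t + 1
    return arrival

contacts = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 3)]
print(foremost_arrival(contacts, "a"))   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```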
16

Face Recognition Under Varying Illuminations

Faraji, Mohammadreza 01 August 2015
Face recognition under varying illumination is challenging. This dissertation proposes four effective methods to produce illumination-invariant features for images with various levels of illumination. The proposed methods are called logarithmic fractal dimension (LFD), eight local directional patterns (ELDP), adaptive homomorphic eight local directional patterns (AH-ELDP), and complete eight local directional patterns (CELDP), respectively. LFD, employing the log function and fractal analysis (FA), produces a logarithmic fractal dimension image that is illumination-invariant. The proposed FA feature-based method is an effective edge-enhancement technique for extracting and enhancing facial features such as the eyes, eyebrows, nose, and mouth. The proposed ELDP coding scheme uses Kirsch compass masks to compute the edge responses of a pixel's neighborhood and then uses all the directional numbers to produce an illumination-invariant image. AH-ELDP first uses adaptive homomorphic filtering to reduce the influence of illumination in an input face image, then applies an interpolative enhancement function to stretch the filtered image. Finally, it produces eight directional edge images using Kirsch compass masks and uses all the directional information to create an illumination-insensitive representation. CELDP seamlessly combines adaptive homomorphic filtering, simplified logarithmic fractal dimension, and complete eight local directional patterns to produce illumination-invariant representations. Our extensive experiments on the Yale B, extended Yale B, CMU-PIE, and AR face databases show that the proposed methods outperform several state-of-the-art methods when one image per subject is used for training. We also evaluate the ability of each method to verify and discriminate face images by plotting receiver operating characteristic (ROC) curves, which plot the true positive rate (TPR) against the false positive rate (FPR). In addition, we conduct an experiment on the Honda UCSD video face database to simulate a real face recognition system comprising face detection, landmark localization, face normalization, and face matching steps. This experiment also verifies that the proposed methods outperform other state-of-the-art methods.
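The Kirsch compass responses are the common building block of these descriptors. The sketch below computes the eight directional edge responses per pixel and a generic directional code; it is not the thesis's exact ELDP/AH-ELDP/CELDP definition, and the final encoding step is an assumption for illustration.

```python
# Sketch: eight Kirsch compass edge responses per pixel. The final encoding here
# is a generic directional code, not the thesis's exact ELDP-family definition.
import numpy as np
from scipy.ndimage import convolve

def rotate45(m):
    # Rotate the 8 border elements of a 3x3 mask one step clockwise.
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = m.copy()
    for k, (i, j) in enumerate(idx):
        out[idx[(k + 1) % 8]] = m[i, j]
    return out

def kirsch_masks():
    base = np.array([[5, 5, 5],
                     [-3, 0, -3],
                     [-3, -3, -3]], dtype=float)
    masks, m = [], base.copy()
    for _ in range(8):                  # eight compass directions
        masks.append(m.copy())
        m = rotate45(m)
    return masks

def directional_code(image):
    responses = np.stack([convolve(image.astype(float), k) for k in kirsch_masks()])
    # One simple illumination-robust feature: the index of the strongest response.
    return np.argmax(responses, axis=0).astype(np.uint8)

img = np.random.rand(64, 64)
print(directional_code(img).shape)      # (64, 64), values in 0..7
```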
17

Class Enumeration and Parameter Bias in Growth Mixture Models with Misspecified Time-Varying Covariates: A Monte Carlo Simulation Study

Palka, Jayme M.
Growth mixture modeling (GMM) is a useful tool for examining both between- and within-persons change over time and uncovering unobserved heterogeneity in growth trajectories. Importantly, the correct extraction of latent classes and parameter recovery can be dependent upon the type of covariates used. Time-varying covariates (TVCs) can influence class membership but are scarcely included in GMMs as predictors. Other times, TVCs are incorrectly modeled as time-invariant covariates (TICs). Additionally, problematic results can occur with the use of maximum likelihood (ML) estimation in GMMs, including convergence issues and sub-optimal maxima. In such cases, Bayesian estimation may prove to be a useful solution. The present Monte Carlo simulation study aimed to assess class enumeration accuracy and parameter recovery of GMMs with a TVC, particularly when a TVC has been incorrectly specified as a TIC. Both ML estimation and Bayesian estimation were examined. Results indicated that class enumeration indices perform less favorably in the case of TVC misspecification, particularly absolute class enumeration indices. Additionally, in the case of TVC misspecification, parameter bias was found to be greater than the generally accepted cutoff of 10%, particularly for variance estimates. It is recommended that researchers continue to use a variety of class enumeration indices during class enumeration, particularly relative indices. Additionally, researchers should take caution when interpreting variance parameter estimates when the GMM contains a misspecified TVC.
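For reference, the 10% cutoff mentioned above refers to relative parameter bias as usually summarized in Monte Carlo studies; a tiny sketch of that calculation follows, with made-up numbers rather than results from this study.

```python
# Sketch: relative parameter bias as typically summarised across Monte Carlo
# replications, judged against the 10% cutoff (numbers here are made up).
import numpy as np

true_value = 0.50                                       # generating parameter
estimates = np.array([0.61, 0.58, 0.55, 0.63, 0.57])    # estimates across replications

relative_bias = 100 * (estimates.mean() - true_value) / true_value
print(f"relative bias = {relative_bias:.1f}%  (flagged if |bias| > 10%)")
```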
18

Internal Variable and Temperature Modeling Behavior of Viscoelastic Structures -- A Control Analysis

Silva, Luciano Afonso 27 August 2003
Most methodologies dealing with viscoelastic damping have focused exclusively on the frequency-dependent behavior of the material. Only a few have looked into the temperature dependence of the model, and none has seriously investigated control design subject to temperature disturbances. The general purpose of this work is to develop and investigate structures with damping modeled by means of internal variables. Thermodynamic principles are used to develop models based on a generalized Maxwell element. Initially, studies are conducted to verify how the method of reduced variables can be applied to account for temperature dependence, as well as to evaluate the number of internal variables necessary for an accurate representation of the material properties. Lumped and finite element models are characterized and validated against other methods, and a constrained-layer damping model is experimentally validated over a range of temperatures. A control analysis is carried out on the models to identify the role played by the internal variables in the control design. The results show that moving the internal poles is very expensive in terms of control energy. It is also shown that it is not always possible to eliminate the internal coordinates in the reduced-order model if the system is highly damped. The problem of having to move the internal poles is solved by applying partial pole placement, which shows performance similar to that of the linear quadratic Gaussian regulator. The control designs are implemented, and it is shown that good regulation can be achieved for a fixed temperature; the controller, however, loses performance when the model is subjected to temperature changes. To investigate the behavior of the model under different temperatures, a linear temperature-dependent model is developed, which clearly shows how temperature affects the time response of the model. This model is used as a baseline to develop adaptive and time-varying controllers. With the aid of the shift factor, the eigenvalue variation with temperature is used as a time-varying function in the design. The results show that good tracking performance and regulation can be achieved with a control law that is capable of compensating for temperature variations. / Ph. D.
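As a sketch of the ingredients named here — a generalized Maxwell (Prony series) model with temperature folded in through the method of reduced variables — the following evaluates a relaxation modulus with a WLF-type shift factor. All coefficients are illustrative assumptions, not the material data or model used in the dissertation.

```python
# Sketch: generalized Maxwell (Prony series) relaxation modulus with temperature
# handled via the method of reduced variables (WLF-type shift factor).
# All coefficients below are illustrative, not the thesis's material data.
import numpy as np

G_inf = 1.0e6                               # equilibrium modulus (Pa)
G_i = np.array([5.0e6, 2.0e6, 1.0e6])       # Prony coefficients, one per internal variable (Pa)
tau_i = np.array([1e-3, 1e-1, 1e1])         # relaxation times at the reference temperature (s)

def shift_factor(T, T_ref=20.0, C1=17.4, C2=51.6):
    """WLF shift factor a_T; reduced time is t / a_T."""
    return 10 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))

def relaxation_modulus(t, T):
    # G(t, T) = G_inf + sum_i G_i * exp(-t / (a_T * tau_i))
    aT = shift_factor(T)
    return G_inf + np.sum(G_i * np.exp(-t[:, None] / (aT * tau_i)), axis=1)

t = np.logspace(-4, 2, 50)
print(relaxation_modulus(t, 20.0)[:3], relaxation_modulus(t, 40.0)[:3])
```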
19

The impact of CBOE options listing on the volatility of NYSE traded stock: a time varying risk approach

Mazouz, Khelifa January 2004
This paper employs the standard Generalized Autoregressive Conditional Heteroskedasticity (GARCH(1,1)) process to examine the impact of option listing on the volatility of the underlying stocks. It takes into consideration the time variation in the individual stock's variance and explicitly tests whether option listing causes any permanent volatility change. It also investigates the impact of option listing on the speed at which information is incorporated into the stock price. The study uses clean samples to avoid sample selection biases and control samples to account for changes in volatility and/or information flows that may be caused by factors other than option listing.
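The GARCH(1,1) recursion the paper relies on is compact enough to write out; the sketch below uses illustrative parameter values and synthetic residuals, not the paper's estimates. In a listing study of this kind, a post-listing dummy would typically be added to the variance equation to test for a permanent volatility shift.

```python
# Minimal GARCH(1,1) conditional-variance recursion (illustrative parameters):
#   sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
import numpy as np

def garch11_variance(eps, omega=0.05, alpha=0.08, beta=0.90):
    """Conditional variance series implied by GARCH(1,1) for residuals eps."""
    sigma2 = np.empty_like(eps)
    sigma2[0] = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

eps = np.random.default_rng(1).normal(size=1000)   # stand-in return residuals
print(garch11_variance(eps)[:5])
```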
20

A Varying Field Size Translational Bed Technique for Total Body Irradiation.

Wilder, Ben Richard January 2006
Total body irradiation (TBI) is the irradiation of the entire patient as conditioning for a bone marrow transplant (BMT). The conditioning process involves destroying the bone marrow to allow repopulation by donor bone marrow cells, suppressing the immune system to prevent graft rejection, and eliminating the cancer cell population within the patient. Studies have demonstrated the importance of TBI conditioning for BMT⁵. A range of TBI treatment techniques exists; this department uses a bilateral technique, which requires bolus packed around the patient to simplify the treatment geometry. This investigation focuses on a technique that uses a translating bed, which effectively scans a radiation beam over the patient as the bed moves through the beam. Other investigations of translating beds concentrated on varying the scan speed to achieve dose uniformity to within ±5%; the recommendations quote a dose uniformity of +5% and -10% as acceptable⁹. The dose uniformity in those investigations was assessed along the midline in the longitudinal direction only. This investigation varied the field size to achieve dose uniformity to within ±2.5% along the midline of an anthropomorphic phantom. The goal was to determine whether a dynamic multi-leaf collimator (DMLC) could be used to give a uniform dose in the transverse direction as well as the longitudinal direction. An advantage of using the DMLC for this treatment is the ability to shield organs at risk, i.e. the lungs and kidneys, without requiring resources to produce shielding blocks¹⁴. Gafchromic-EBT film¹⁸ was used as a dosimeter but gave unreliable results due to the lack of film scanning equipment with an appropriate sensitivity for reading the dose to the film. Scans were simulated using the Xio treatment planning software, and the results from the simulations gave a more reliable indication of the absorbed dose to the midline of the phantom. The disadvantage of this varying field size technique was the time and complexity involved in creating a treatment plan: the Xio software allows a maximum of 99 beams in a single plan, which is not enough for complete coverage of a patient. A way around this is to increase the field sizes and decrease the scan speed, but this option was not investigated. The advantage of this technique was the increased dose uniformity (±2.5%) in comparison to the varying scan speed techniques (±5%). It also leaves the patient unencumbered during the treatment, making the process more comfortable for them.
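The tradeoff between field size and scan speed mentioned above follows from a simple relation: under a constant dose rate and an idealized flat beam profile, a midline point stays in the beam for (field length along travel) / (bed speed). The toy sketch below only illustrates that relation; the function name, numbers, and simplifying assumptions are mine, not the thesis's plan data.

```python
# Toy illustration (numbers invented): with a constant dose rate and a flat beam
# profile, relative dose to a swept point goes as field_length / bed_speed, so a
# larger field or a slower bed delivers the same local boost.
def relative_midline_dose(field_length_cm, bed_speed_cm_s, dose_rate=1.0):
    """Relative dose to a point the beam sweeps over: dose rate * time in beam."""
    return dose_rate * field_length_cm / bed_speed_cm_s

# A 20% boost over a region can come from a larger field or a slower bed:
print(relative_midline_dose(12.0, 1.0))         # enlarge the field from 10 cm to 12 cm
print(relative_midline_dose(10.0, 1.0 / 1.2))   # or slow the bed by the same factor
```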
