  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Tools for Performance Optimizations and Tuning of Affine Loop Nests

Hartono, Albert January 2009 (has links)
No description available.
552

Flexible Pavement Condition Model Using Clusterwise Regression and Mechanistic-Empirical Procedure for Fatigue Cracking Modeling

Luo, Zairen January 2005 (has links)
No description available.
553

Statistical Inferences under a semiparametric finite mixture model

Zhang, Shiju January 2005 (has links)
No description available.
554

Decay Rates of Aerosolized Particles in Dental Practices

Momenieskandari, Homa 15 September 2022 (has links)
No description available.
555

Wind tunnel blockage corrections for wind turbine measurements

Inghels, Pieter January 2013 (has links)
Wind-tunnel measurements are an important step in the wind turbine design process. The goal of wind-tunnel tests is to estimate the operational performance of the wind turbine, for example by measuring the power and thrust coefficients. Depending on the sizes of both the wind turbine and the test section, the effect of blockage can be substantial. Correction schemes for the power and thrust coefficients have been proposed in the literature, but for high blockage and highly loaded rotors these correction schemes become less accurate.

A new method is proposed here to calculate the effect a cylindrical wind-tunnel test section has on the performance of the wind turbine. The wind turbine is modeled with a simplified vortex model: using vortices of constant circulation to model the wake vortices, the performance characteristics are estimated. The test section is modeled with a panel method adapted for this specific situation, which uses irrotational axisymmetric source panels to enforce the solid-wall boundary condition. Combining both models in an iterative scheme allows simulation of the effect of the test-section walls on wind turbine performance.

Based on the proposed wind-tunnel model, a more general empirical correction scheme is proposed to estimate the performance characteristics of a wind turbine operating under unconfined conditions by correcting the performance measured in the confined wind-tunnel configuration. The proposed correction scheme performs better than the existing correction schemes, including cases with high blockage and highly loaded rotors.
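The abstract centers on power and thrust coefficients of a rotor in confined versus unconfined flow. As background only, here is a minimal sketch of the standard textbook relations for an *unconfined* 1-D actuator disk (momentum theory), not the thesis's vortex/panel model: CT = 4a(1-a) and CP = 4a(1-a)², scanned numerically for the Betz optimum.

```python
import numpy as np

# Unconfined actuator-disk momentum theory (textbook relations,
# illustrative background; not the thesis's confined-flow model).
a = np.linspace(0.0, 0.5, 501)   # axial induction factor
ct = 4 * a * (1 - a)             # thrust coefficient CT
cp = 4 * a * (1 - a) ** 2        # power coefficient CP

i = np.argmax(cp)                # Betz optimum: a = 1/3, CP = 16/27
print(f"Betz optimum: a={a[i]:.3f}, CP={cp[i]:.4f}, CT={ct[i]:.4f}")
```

Blockage corrections matter precisely because, in a confined test section, the measured CT and CP deviate from these unconfined relations.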
556

EMPIRICAL COMPARISON OF THE STATISTICAL METHODS OF ANALYZING INTERVENTION EFFECTS AND CORRELATION ANALYSIS BETWEEN CLINICAL OUTCOMES AND SURROGATE COMPOSITE SCORES IN RANDOMIZED CONTROLLED TRIALS USING COMPETE III TRIAL DATA

Xu, Jian-Yi 10 1900 (has links)
Background: A better application of available evidence-based therapies and optimal patient care are suggested to have a positive association with outcomes for cardiovascular disease (CVD) patients. Electronic integration of care, tested in the Computerization of Medical Practices for the Enhancement of Therapeutic Effectiveness (COMPETE) II study, showed that a shared electronic decision-support system supporting the primary care of diabetes improved the process of care and some clinical markers of the quality of diabetes care. Building on the COMPETE II trial, the COMPETE III study showed that older adults at increased risk of cardiovascular events, when connected with their family physicians and other providers via an electronic network sharing an intensive, individualized cardiovascular tracking, advice and support program, improved their process of care (measured by a process composite score) and lowered their cardiovascular risk more than those in conventional care. However, the effects of the intervention on the process composite and the clinical outcomes were not similar: there was no significant effect on clinical outcomes.

Objectives: Our objectives were to investigate the robustness of the results from the commonly used statistical models using the COMPETE III dataset, and to explore the validity of the surrogate process composite score through a correlation analysis between the clinical outcomes and the process composite score.

Methods: Generalized estimating equations (GEE) were used as the primary statistical model in this study. Three patient-level statistical methods (simple linear regression, fixed-effects regression, and mixed-effects regression) and two center-level statistical approaches (center-level fixed-effects and center-level random-effects models) were compared with the reference GEE model in terms of the robustness of the results: the magnitude, direction, and statistical significance of the estimated effects on the change in the process composite score and the on-target clinical composite score. GEE was also used to investigate the correlation between the clinical outcomes and the surrogate process composite scores.

Results: All six statistical models used in this study produced robust estimates of the intervention effect. No significant association between cardiovascular events and the on-target clinical composite score, or its individual components, was found between the intervention and control groups. However, blood pressure, LDL cholesterol, and psychosocial index were significant predictors of cardiovascular events. The process composite score can predict both cardiovascular events and clinical improvement, but the results were not statistically significant, possibly due to the small number of events. However, the process composite score was significantly associated with the on-target clinical composite score.

Conclusions: We concluded that all five analytic models yielded robust estimates of the intervention effect similar to the reference GEE model. The relatively smaller estimated effects in the center-level fixed-effects model suggest that within-center variation should be considered in the analysis of multicenter RCTs. The process composite score may serve as a good predictor of CVD outcomes. / Master of Science (MSc)
557

FRAGILITY CURVES FOR RESIDENTIAL BUILDINGS IN DEVELOPING COUNTRIES: A CASE STUDY ON NON-ENGINEERED UNREINFORCED MASONRY HOMES IN BANTUL, INDONESIA

Khalfan, Miqdad 04 1900 (has links)
Developing countries typically suffer far more than developed countries as a result of earthquakes. Poor socioeconomic conditions often lead to poorly constructed homes that are vulnerable to damage during earthquakes. The literature review in this study highlights the lack of existing fragility curves for buildings in developing countries. Furthermore, fragility curves derived from empirical data are almost nonexistent because of the scarcity of post-earthquake damage data and insufficient ground motion recordings in developing countries. This research therefore proposes a methodology for developing empirical fragility curves using ground motion data in the form of USGS ShakeMaps.

The methodology has been applied to a case study consisting of damage data collected in Bantul Regency, Indonesia, in the aftermath of the May 2006 Yogyakarta earthquake. Fragility curves for non-engineered single-storey unreinforced masonry (URM) homes have been derived from the damage dataset for three ground motion parameters: peak ground acceleration (PGA), peak ground velocity (PGV), and pseudo-spectral acceleration (PSA). The fragility curves indicate the high seismic vulnerability of non-engineered URM homes in developing countries. There is an 80% probability that a seismic event with a PGA of only 0.1g will induce significant cracking of the walls and a reduction in the load-carrying capacity of a URM home, resulting in moderate damage or collapse. Fragility curves as a function of PGA and PSA were found to represent the damage data reasonably well; however, fits for several PGV fragility curves could not be obtained. The case study illustrates the extension of ShakeMaps to fragility curves, and the derived curves supplement the limited collection of empirical fragility curves for developing countries. Finally, a comparison with an existing fragility study highlights the significant influence of the derivation method on the resulting fragility curves.

The diversity in construction techniques and material quality in developing countries, particularly for non-engineered homes, cannot be sufficiently represented through simplified or idealized analytical models. The empirical method is therefore considered the most suitable for deriving fragility curves for structures in developing countries. / Master of Applied Science (MASc)
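Empirical fragility curves of the kind described above are commonly fit as a lognormal CDF in the ground motion parameter, with a median capacity θ and dispersion β estimated by maximum likelihood from binned damage counts. The sketch below uses invented PGA bins and counts purely for illustration; they are not the Bantul survey data, and the fitting form is a generic one, not necessarily the thesis's exact procedure:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Invented example data: homes surveyed per PGA bin and the number
# reaching at least moderate damage (NOT the Bantul dataset).
pga = np.array([0.05, 0.10, 0.15, 0.20, 0.30])   # PGA in g
n_total = np.array([50, 60, 55, 40, 30])
n_damaged = np.array([10, 28, 38, 33, 28])

def neg_log_like(params):
    """Binomial negative log-likelihood for a lognormal fragility curve."""
    log_theta, beta = params
    p = norm.cdf((np.log(pga) - log_theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)               # guard log(0)
    return -np.sum(n_damaged * np.log(p) +
                   (n_total - n_damaged) * np.log(1 - p))

res = minimize(neg_log_like, x0=[np.log(0.1), 0.5], method="Nelder-Mead")
theta, beta = np.exp(res.x[0]), res.x[1]
print(f"median capacity ~ {theta:.3f} g, dispersion ~ {beta:.3f}")
```

The fitted curve `norm.cdf((log(IM) - log(theta)) / beta)` then gives the probability of reaching the damage state at any intensity measure level.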
558

THREE ESSAYS IN EMPIRICAL CORPORATE FINANCE

Khokhar, Abdul Rahman 10 1900 (has links)
This thesis explores three important issues in the field of corporate finance: window dressing in corporate cash holdings, the market effects of SEC regulation of short-term borrowing disclosure, and the market response to dividend change announcements by unregulated versus regulated firms.

First, I find strong evidence of upward window dressing in cash holdings by U.S. industrial firms during the fourth fiscal quarter. This behavior is robust to several controls and a December year-end dummy. Further cross-sectional analysis reveals that the window dressing is sensitive to firm size and the level of information asymmetry. I also find that firms manipulate discretionary accruals to dress up fourth-quarter cash, perhaps to gain favourable credit terms when issuing short-term debt.

Second, I use portfolios of financial and non-financial SEC registrants to examine the market reaction to the proposed SEC short-term borrowing disclosure regulation. Using event study methodology, I find that the market reaction is positive and significant at the announcement date and negative and significant at the voting date. Overall, I observe a positive market reaction, indicating the usefulness of the disclosure from the vantage point of users. The results for various subsets confirm the expectations and suggest that a “one-size-fits-all” approach to regulation is undesirable.

Finally, I use large samples of dividend increase and decrease announcements for the period 1960 to 2010 to compare the stock price reactions of unregulated and regulated firms. I observe a stronger market reaction to the dividend increase announcements of unregulated firms than to those of regulated firms, after controlling for firm characteristics, market factors and contemporaneous earnings announcements, a result consistent with the dividend signaling hypothesis and the uniqueness argument for regulated firms. However, I find that the market reaction to dividend decrease announcements is similar for unregulated and regulated firms. Cross-sectional analysis further confirms that the stronger stock price reaction to dividend increase announcements of unregulated firms is associated with the level of information asymmetry. / Doctor of Philosophy (PhD)
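The event study machinery referred to above typically estimates a market model over a pre-event window and then cumulates abnormal returns over the event window. A minimal sketch on simulated daily returns (invented data and window lengths, not the thesis's sample or specification):

```python
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 260)            # daily market returns
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.008, 260)
stock[252:255] += 0.02                            # simulated announcement bump

# Estimation window: fit the market model r_i = alpha + beta * r_m.
est_m, est_s = market[:250], stock[:250]
beta, alpha = np.polyfit(est_m, est_s, 1)

# Event window: abnormal return = actual minus market-model prediction.
event_m, event_s = market[250:257], stock[250:257]
abnormal = event_s - (alpha + beta * event_m)
car = abnormal.sum()                              # cumulative abnormal return
print(f"beta ~ {beta:.3f}, CAR over event window ~ {car:.4f}")
```

Across many events, the mean CAR and its cross-sectional standard error give the significance tests at announcement and voting dates described in the abstract.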
559

EMPIRICAL APPLICATION OF DIFFERENT STATISTICAL METHODS FOR ANALYZING CONTINUOUS OUTCOMES IN RANDOMIZED CONTROLLED TRIALS

Zhang, Shiyuan 10 1900 (has links)
Background: Post-operative pain management in total joint replacement surgery remains ineffective in up to 50% of patients and continues to impose a heavy burden on patient well-being and healthcare resources. The MOBILE trial was designed to assess whether the addition of gabapentin to a multimodal perioperative analgesia regimen can reduce morphine consumption or improve the analgesia of patients following total joint arthroplasty. We present here an empirical application of various statistical methods to the MOBILE trial.

Methods: Part 1: Analysis of covariance (ANCOVA) was used to adjust for baseline measures and to provide an unbiased estimate of the mean group difference in one-year post-operative knee flexion scores in knee arthroplasty patients. Robustness tests were done by comparing ANCOVA with three alternative methods: i) post-treatment scores, ii) change in scores, iii) percentage change from baseline.

Part 2: Morphine consumption, measured at four time periods in both total hip and total knee arthroplasty patients, was analyzed using a linear mixed-effects model (LMEM) to provide a longitudinal estimate of the group difference. Repeated measures ANOVA and generalized estimating equations were used in a sensitivity analysis to compare the robustness of the methods. Additionally, the robustness of different covariance matrix structures in the LMEM was tested, namely first-order autoregressive compared with compound symmetry and unstructured.

Results: Part 1: All four methods showed a similar direction of effect; however, the ANCOVA (-3.9, 95% CI -9.5, 1.6, p=0.15) and post-treatment score (-4.3, 95% CI -9.8, 1.2, p=0.12) methods provided higher precision of estimate than the change score (-3.0, 95% CI -9.9, 3.8, p=0.38) and percent change (-0.019, 95% CI -0.087, 0.050, p=0.58) methods.

Part 2: There was no statistically significant difference in morphine consumption between the treatment group and the control group (1.0, 95% CI -4.7, 6.7, p=0.73). The results remained robust across different longitudinal methods and different covariance matrix structures.

Conclusion: ANCOVA, through both simulation and empirical studies, provides the best statistical estimation for analyzing continuous outcomes requiring covariate adjustment. Wider use of ANCOVA should be recommended not only among biostatisticians but also among clinicians and trialists. The re-analysis of morphine consumption aligns with the MOBILE trial's finding that gabapentin did not significantly reduce morphine consumption in patients undergoing major replacement surgeries. More work in the area of post-operative pain is required to provide adequate management for this patient population. / Master of Science (MSc)
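The Part 1 comparison above (ANCOVA versus change score versus post-treatment score) can be sketched on simulated pre/post data, not the MOBILE trial's actual data. ANCOVA regresses the post score on treatment while adjusting for baseline; the change-score method subtracts baseline; the post-only method ignores it. When baseline and outcome are moderately correlated, ANCOVA yields the smallest standard error:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
treat = rng.integers(0, 2, n)                     # randomized arm
baseline = rng.normal(100, 15, n)
# True treatment effect of 5 on the post score; invented parameters.
post = 0.6 * baseline + 5.0 * treat + rng.normal(0, 10, n)
df = pd.DataFrame({"treat": treat, "baseline": baseline,
                   "post": post, "change": post - baseline})

ancova = smf.ols("post ~ treat + baseline", data=df).fit()
change = smf.ols("change ~ treat", data=df).fit()
post_only = smf.ols("post ~ treat", data=df).fit()

for name, m in [("ANCOVA", ancova), ("change score", change),
                ("post only", post_only)]:
    print(f"{name:12s} effect={m.params['treat']:6.2f} "
          f"SE={m.bse['treat']:.2f}")
```

All three estimators are unbiased under randomization; the gain from ANCOVA is purely in precision, which mirrors the abstract's finding that ANCOVA and the post-treatment score gave the tightest confidence intervals.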
560

ROBUST ESTIMATION OF THE PARAMETERS OF g-and-h DISTRIBUTIONS, WITH APPLICATIONS TO OUTLIER DETECTION

Xu, Yihuan January 2014 (has links)
The g-and-h distributional family is generated from a relatively simple transformation of the standard normal. By changing the skewness and elongation parameters g and h, this family can approximate a broad spectrum of commonly used distributional shapes, such as the normal, lognormal, Weibull and exponential. Consequently, it is easy to use in simulation studies and has been applied in multiple areas, including risk management, stock return analysis and missing data imputation. The currently available methods to estimate the g-and-h distributional family include the letter-value-based method (LV), the numerical maximum likelihood method (NMLE), and moment methods. Although these methods work well when no outliers or contamination exist, they are not resistant to a moderate amount of contaminated observations or outliers. Moreover, NMLE is computationally expensive when the sample size is large. In this dissertation a quantile-based least squares (QLS) estimation method is proposed to fit the parameters of the g-and-h distributional family, and its basic properties are derived. The QLS method is then extended to a robust version (rQLS). Simulation studies are performed to compare the performance of the QLS and rQLS methods with the LV and NMLE methods in estimating the g-and-h parameters from random samples with or without outliers. In random samples without outliers, the QLS and rQLS estimates are comparable to LV and NMLE in terms of bias and standard error. On the other hand, rQLS performs better than the non-robust methods when a moderate amount of contaminated observations or outliers exists. The flexibility of the g-and-h distribution and the robustness of the rQLS method make it a useful tool in various fields.

The boxplot (BP) method has been used in multiple-outlier detection by controlling the some-outside rate, which is the probability of one or more observations in an outlier-free sample falling into the outlier region. The BP method is distribution dependent: usually the random sample is assumed to be normally distributed, but this assumption may not be valid in many applications. The robustly estimated g-and-h distribution provides an alternative approach without distributional assumptions. Simulation studies indicate that the BP method based on the robustly estimated g-and-h distribution identifies a reasonable number of true outliers while controlling the number of false outliers and the some-outside rate, compared with the normal assumption when that assumption is not valid. Another application of the robust g-and-h distribution is as an empirical null distribution in the false discovery rate method (denoted the BH method hereafter). The performance of the BH method depends on the accuracy of the null distribution. It has been found that theoretical null distributions are often not valid when performing many thousands, or even millions, of simultaneous hypothesis tests. Therefore, an empirical null distribution approach that estimates the null distribution from the data is introduced; it is recommended as a substitute for the currently used empirical null methods of fitting a normal distribution or another member of the exponential family. As with the BP outlier detection method, the robustly estimated g-and-h distribution can be used as an empirical null distribution without any distributional assumptions. Several real microarray data examples are used as illustrations. The QLS and rQLS methods are useful tools for estimating the g-and-h parameters, especially rQLS because it noticeably reduces the effect of outliers on the estimates. The robustly estimated g-and-h distributions have multiple applications where distributional assumptions are required, such as BP outlier detection or the BH method. / Statistics
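The transformation underlying the family above is Tukey's X = ((e^{gZ} - 1)/g) · e^{hZ²/2} for standard normal Z. A minimal sketch of quantile-based estimation (the letter-value idea the abstract mentions, not the dissertation's rQLS estimator): for p < 0.5, the ratio of the upper to the lower half-spread equals e^{-g·z_p} regardless of h, so g can be recovered from three sample quantiles.

```python
import numpy as np
from scipy.stats import norm

def g_and_h(z, g, h):
    """Tukey g-and-h transform of standard normal draws z."""
    core = z if g == 0 else (np.exp(g * z) - 1.0) / g
    return core * np.exp(h * z**2 / 2.0)

rng = np.random.default_rng(7)
g_true, h_true = 0.5, 0.1
x = g_and_h(rng.standard_normal(100_000), g_true, h_true)

# Quantile-based estimate of g: the exp(h * z_p^2 / 2) factor is the
# same at p and 1-p, so it cancels in the half-spread ratio.
p = 0.10
z_p = norm.ppf(p)                      # negative for p < 0.5
q_lo, med, q_hi = np.quantile(x, [p, 0.5, 1 - p])
g_hat = -np.log((q_hi - med) / (med - q_lo)) / z_p
print(f"g_true={g_true}, g_hat={g_hat:.3f}")
```

Because it uses only quantiles, an estimator of this type is already fairly insensitive to a small fraction of extreme observations, which is the intuition the robust rQLS method builds on.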
