  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Testing the Limits of Latent Class Analysis

January 2012 (has links)
abstract: The purpose of this study was to examine under which conditions "good" data characteristics can compensate for "poor" characteristics in Latent Class Analysis (LCA), as well as to set forth guidelines regarding the minimum sample size and the ideal number and quality of indicators. In particular, we studied the extent to which including a larger number of high-quality indicators can compensate for a small sample size in LCA. The results suggest that, in general, a larger sample size, more indicators, higher-quality indicators, and a larger covariate effect correspond to more converged and proper replications, as well as fewer boundary estimates and less parameter bias. Based on the results, LCA is not recommended with sample sizes below N = 100, and many high-quality indicators together with at least one strong covariate are recommended when the sample size is below N = 500. / Dissertation/Thesis / M.A. Psychology 2012
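To make the simulation design concrete, the sketch below generates data from the kind of two-class latent class model such a study might use; the function name, the mapping of "indicator quality" to conditional response probabilities, and the specific values crossed (N = 100 vs 500, 6 vs 12 indicators, quality 0.7 vs 0.9) are illustrative assumptions, not the thesis's actual code or design.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_lca_data(n, n_indicators, quality, class_prob=0.5):
    """Generate binary indicators from a two-class latent class model.

    `quality` is the probability of endorsing an item in class 1 (and
    1 - quality in class 2); values near 1.0 give well-separated ("high
    quality") indicators, values near 0.5 give weak ones.
    """
    classes = rng.binomial(1, class_prob, size=n)               # latent class membership
    p = np.where(classes[:, None] == 1, quality, 1 - quality)   # item response probabilities
    items = rng.binomial(1, np.broadcast_to(p, (n, n_indicators)))
    return classes, items

# Cross sample size with indicator number and quality, as in the study design
for n in (100, 500):
    for k, q in ((6, 0.7), (12, 0.9)):
        classes, items = simulate_lca_data(n, k, q)
        print(n, k, q, items.mean(axis=0).round(2))
```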
22

The Impact of Partial Measurement Invariance on Between-group Comparisons of Latent Means for a Second-Order Factor

January 2016 (has links)
abstract: A simulation study was conducted to explore the influence of partial loading invariance and partial intercept invariance on the latent mean comparison of the second-order factor within a higher-order confirmatory factor analysis (CFA) model. Noninvariant loadings or intercepts were generated at one of the two levels of the second-order CFA model, or at both levels. The numbers and directions of differences in noninvariant loadings or intercepts were also manipulated, along with total sample size and the effect size of the second-order factor mean difference. Data were analyzed using correct and incorrect specifications of the noninvariant loadings and intercepts. Results summarized across the 5,000 replications in each condition included Type I error rates and power for the chi-square difference test and the Wald test of the second-order factor mean difference, estimation bias and efficiency for this latent mean difference, and means of the standardized root mean square residual (SRMR) and the root mean square error of approximation (RMSEA). When the model was correctly specified, no obvious estimation bias was observed; when the model was misspecified by constraining noninvariant loadings or intercepts to be equal, the latent mean difference was overestimated if the direction of the difference in loadings or intercepts was consistent with the direction of the latent mean difference, and underestimated otherwise. Increasing the number of noninvariant loadings or intercepts resulted in larger estimation bias if these noninvariant loadings or intercepts were constrained to be equal. Power to detect the latent mean difference was influenced by estimation bias and the estimated variance of the difference in the second-order factor mean, in addition to sample size and effect size. Constraining more parameters to be equal between groups, even when they were unequal in the population, led to a decrease in the variance of the estimated latent mean difference, which somewhat increased power. Finally, RMSEA was very sensitive in detecting misspecification due to improper equality constraints in all conditions in the current scenario, including those with a nonzero latent mean difference, but SRMR did not increase as expected when noninvariant parameters were constrained. / Dissertation/Thesis / Masters Thesis Educational Psychology 2016
23

On the Topic of Unconstrained Black-Box Optimization with Application to Pre-Hospital Care in Sweden : Unconstrained Black-Box Optimization

Anthony, Tim January 2021 (has links)
In this thesis, the theory and application of black-box optimization methods are explored. More specifically, we looked at two families of algorithms, descent methods and response surface methods (closely related to trust region methods). We also looked at the possibility of using a dimension reduction technique called active subspace, which utilizes sampled gradients. This dimension reduction technique can make the descent methods more suitable for high-dimensional problems, and it turned out to be most effective when the data have a ridge-like structure. Finally, the optimization methods were used on a real-world problem in the context of pre-hospital care, where the objective is to minimize the ambulance response times in the municipality of Umeå by changing the positions of the ambulances. Before applying the methods to the real-world ambulance problem, a simulation study was performed on synthetic data, aiming at finding the strengths and weaknesses of the different models when applied to different test functions at different levels of noise. The results showed that we could improve the ambulance response times across several different performance metrics compared to the response times of the current ambulance positions. This indicates that there exist adjustments that can benefit the pre-hospital care in the municipality of Umeå. However, since the models in this thesis find local rather than global optima, there might still exist even better ambulance positions that can improve the response times further. / In this report, the theory and applications of various black-box optimization methods are examined. More specifically, we have looked at two families of algorithms, descent methods and response surface methods (closely related to trust region methods). We also look at the possibility of using a dimension reduction technique called active subspace, which uses sampled gradients to make the descent methods better suited to high-dimensional problems; this turned out to be most effective when the data have a structure in which changes in only one direction affect the response value. Finally, the optimization methods were applied to a real-world problem from the healthcare sector, where the goal was to minimize the response times for ambulance dispatches in the municipality of Umeå by changing the ambulance positions. Before the methods were applied to the real-world ambulance problem, a simulation study was also carried out on synthetic data in order to find the strengths and weaknesses of the different models by examining how they handle a number of test functions under different levels of noise. The results showed that we could improve the ambulance response times across several different performance metrics compared to the response times of the current ambulance positions. This indicates that there exist changes to the positioning of the ambulances that could benefit the pre-hospital care in the municipality of Umeå. However, since the models in this report find local rather than global optima, there may still exist even better ambulance positions that could improve the response times further.
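The active-subspace idea mentioned in the abstract can be illustrated with a few lines of linear algebra: stack sampled gradients into a matrix and take the leading eigendirections of their outer-product average. The sketch below is not the thesis's implementation; the toy ridge-shaped objective, the sample count, and the uniform sampling region are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy objective with a ridge structure: it varies only along w·x
w = np.array([0.8, 0.6, 0.0, 0.0, 0.0])
f = lambda x: np.sin(w @ x) + 0.5 * (w @ x) ** 2
grad_f = lambda x: (np.cos(w @ x) + (w @ x)) * w   # analytic gradient of f

# Estimate the active subspace from sampled gradients: eigenvectors of
# C = E[grad f grad f^T], approximated by a Monte Carlo average.
X = rng.uniform(-1.0, 1.0, size=(200, 5))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(G)
eigvals, eigvecs = np.linalg.eigh(C)

# The dominant eigenvector should align with the ridge direction w
u = eigvecs[:, -1]
print("eigenvalues (descending):", np.round(eigvals[::-1], 4))
print("alignment with true ridge direction:", abs(u @ w) / np.linalg.norm(w))
```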
24

Does the removal of correlated variables affect the classification accuracy of machine learning algorithms?

Johansson Lannge, Elsa January 2021 (has links)
The last decades have seen an increase in both the amount and the complexity of the data used in modern business and technology. A key element for managing these data sets is using machine learning algorithms to process structures and find patterns. Variable selection is applied to facilitate and improve these processes by finding and removing redundant variables. One way to achieve this is to eliminate variables based on how strongly they correlate, which is the premise of this thesis. This study examines how a reduction of correlated variables affects the predictive accuracy of six different machine learning algorithms. Two restrictions are imposed: first, the correlation between the explanatory variables is set to a high level; second, each variable's correlation with the dependent variable is set to a modest level. The hypothesis states that removing highly correlated explanatory variables should not negatively affect the accuracy. By conducting a Monte Carlo simulation with three models, each consisting of a different number of correlated variables, the change in accuracy could be compared and evaluated. The results suggest an adverse change in accuracy for all algorithms except one. The differences are relatively small, with the largest decrease in accuracy being 5.49 percentage points. The conclusion is that the hypothesis does not hold when the explanatory variables have only a modest level of correlation with the dependent variable.
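A minimal sketch of the kind of experiment described, not the thesis's code: simulate predictors that are highly correlated with one another but only modestly related to a binary outcome, drop variables above a correlation threshold, and compare classifier accuracy before and after. The classifier, the mixing coefficients, and the 0.6 threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Correlated predictors: each column shares a common latent factor `base`
n, p = 2000, 10
base = rng.normal(size=(n, 1))
X = 0.85 * base + np.sqrt(1 - 0.85**2) * rng.normal(size=(n, p))  # pairwise corr ~ 0.72
y = (base.ravel() + 2.0 * rng.normal(size=n) > 0).astype(int)     # modest corr with each predictor

def drop_correlated(X, threshold=0.6):
    """Greedily keep columns whose correlation with the already-kept set stays below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

keep = drop_correlated(X, threshold=0.6)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, cols in (("all variables", list(range(p))), ("filtered", keep)):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(Xtr[:, cols], ytr)
    acc = accuracy_score(yte, clf.predict(Xte[:, cols]))
    print(f"{name:14s} {len(cols):2d} variables, accuracy = {acc:.3f}")
```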
25

ADAS : A simulation study comparing two safety improving Advanced Driver Assistance Systems

Mattsson, David January 2012 (has links)
Driving is a high-risk adventure that many enjoy on a daily basis. The driving task is highly complex and ever-changing, and it requires continuous attention and rapid decision making. It is not without risk: the cost of reaching the desired destination can be too great, as your life could be at stake. Nor is driving without incidents. Rear-end collisions are a common problem in the Swedish traffic environment, with over 100 police-reported incidents per year. The number of rear-end collisions could potentially be reduced by introducing new technology in the driver's vehicle, technology that attempts to make the driver's driving safer. This technology is called Advanced Driver Assistance Systems (ADAS). In this study, two ADAS were evaluated in a driving simulator: an Adaptive Cruise Control (ACC), which operates on both hazardous and non-hazardous events, and a Collision Warning System (CWS), which operates solely on non-hazardous events. Both of these ADAS are intended to guard against risky driving and are based on the assumption that drivers will not act in a manner that willingly reduces the effectiveness of the system. A within-subjects simulation study was conducted in which participants drove under three conditions: 1) with the adaptive cruise control, 2) with the frontal rear-end collision warning system, and 3) unaided, in order to investigate differences between the three driving conditions. Particular focus was placed on whether the two ADAS improved driving safety. The results indicate that driving with the two ADAS led the participating drivers to drive less safely.
26

Robust Models for Accommodating Outliers in Random Effects Meta Analysis: A Simulation Study and Empirical Study

Stacey, Melanie January 2016 (has links)
In traditional meta-analysis, a random-effects model is used to deal with heterogeneity, and the random effect is assumed to be normally distributed. However, this can be problematic in the presence of outliers. One solution involves using a heavy-tailed distribution for the random effect to more adequately model the excess variation due to the outliers. Failure to consider an alternative to the standard approach in the presence of unusual or outlying points can lead to inaccurate inference. A heavy-tailed distribution is favoured because it can down-weight outlying studies appropriately, so the removal of a study does not need to be considered. In this thesis, the performance of the t-distribution and a finite mixture model are assessed as alternatives to the normal distribution through a comprehensive simulation study. The parameters varied are the average mean of the non-outlier studies, the number of studies, the proportion of outliers, the heterogeneity, and the outlier shift distance from the average mean. The performance of the distributions is measured using bias, mean squared error, coverage probability, coverage width, Type I error and power. The methods are also compared through an empirical study of meta-analyses from The Cochrane Library (2008). The simulation showed that the performance of the alternative distributions is better than that of the normal distribution in a number of scenarios, particularly for extreme outliers and high heterogeneity. Generally, the mixture model performed quite well. The empirical study reveals that both alternative distributions are able to reduce the influence of the outlying studies on the overall mean estimate and thus produce more conservative p-values than the normal distribution. It is recommended that practitioners consider an alternative random-effects distribution in the presence of outliers because it is more likely to provide robust results. / Thesis / Master of Science (MSc)
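As a rough illustration of why outlying studies distort a normal random-effects analysis, the sketch below simulates a meta-analysis, shifts two studies far from the rest, and recomputes the standard DerSimonian-Laird pooled estimate. This is not the thesis's code and does not fit the t-distribution or mixture alternatives it assesses; the number of studies, variances, and shift size are assumptions chosen to make the pull on the pooled mean visible.

```python
import numpy as np

rng = np.random.default_rng(7)

def dersimonian_laird(y, v):
    """Standard normal random-effects pooled estimate (DerSimonian-Laird)."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # between-study variance estimate
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * y) / np.sum(w_star)

# 10 studies with true effect 0.3, modest heterogeneity, known within-study variances
k, mu, tau = 10, 0.3, 0.1
v = rng.uniform(0.01, 0.05, size=k)
y = rng.normal(mu, np.sqrt(tau ** 2 + v))

y_outliers = y.copy()
y_outliers[:2] += 1.5                               # shift two studies far from the others

print("pooled estimate, no outliers:  ", round(dersimonian_laird(y, v), 3))
print("pooled estimate, with outliers:", round(dersimonian_laird(y_outliers, v), 3))
```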
27

Simulation Study of Sequential Probability Ratio Test (SPRT) in Monitoring an Event Rate

Yu, Xiaomin 16 July 2009 (has links)
No description available.
28

A Simulation Study of Enhancement mode Indium Arsenide Nanowire Field Effect Transistor

Narendar, Harish January 2009 (has links)
No description available.
29

Improved Methods for Interrupted Time Series Analysis Useful When Outcomes are Aggregated: Accounting for heterogeneity across patients and healthcare settings

Ewusie, Joycelyne E January 2019 (has links)
This is a sandwich thesis / In an interrupted time series (ITS) design, data are collected at multiple time points before and after the implementation of an intervention or program to investigate the effect of the intervention on an outcome of interest. ITS design is often implemented in healthcare settings and is considered the strongest quasi-experimental design in terms of internal and external validity as well as its ability to establish causal relationships. There are several statistical methods that can be used to analyze data from ITS studies. Nevertheless, limitations exist in practical applications, where researchers inappropriately apply the methods, and frequently ignore the assumptions and factors that may influence the optimality of the statistical analysis. Moreover, there is little to no guidance available regarding the application of the various methods, and a standardized framework for analysis of ITS studies does not exist. As such, there is a need to identify and compare existing ITS methods in terms of their strengths and limitations. Their methodological challenges also need to be investigated to inform and direct future research. In light of this, this PhD thesis addresses two main objectives: 1) to conduct a scoping review of the methods that have been employed in the analysis of ITS studies, and 2) to develop improved methods that address a major limitation of the statistical methods frequently used in ITS data analysis. These objectives are addressed in three projects. For the first project, a scoping review of the methods that have been used in analyzing ITS data was conducted, with the focus on ITS applications in health research. The review was based on the Arksey and O’Malley framework and the Joanna Briggs Handbook for scoping reviews. A total of 1389 studies were included in our scoping review. The articles were grouped into methods papers and applications papers based on the focus of the article. For the methods papers, we narratively described the identified methods and discussed their strengths and limitations. The application papers were summarized using frequencies and percentages. We identified some limitations of current methods and provided some recommendations useful in health research. In the second project, we developed and presented an improved method for ITS analysis when the data at each time point are aggregated across several participants, which is the most common case in ITS studies in healthcare settings. We considered the segmented linear regression approach, which our scoping review identified as the most frequently used method in ITS studies. When data are aggregated, heterogeneity is introduced due to variability in the patient population within sites (e.g. healthcare facilities) and this is ignored in the segmented linear regression method. Moreover, statistical uncertainty (imprecision) is introduced in the data because of the sample size (number of participants from whom data are aggregated). Ignoring this variability and uncertainty will likely lead to invalid estimates and loss of statistical power, which in turn leads to erroneous conclusions. Our proposed method incorporates patient variability and sample size as weights in a weighted segmented regression model. We performed extensive simulations and assessed the performance of our method using established performance criteria, such as bias, mean squared error, level and statistical power. We also compared our method with the segmented linear regression approach. 
The results indicated that the weighted segmented regression was uniformly more precise, less biased and more powerful than the segmented linear regression method. In the third project, we extended the weighted method to multisite ITS studies, where data are aggregated at two levels: across several participants within sites as well as across multiple sites. The extended method incorporates the two levels of heterogeneity using weights, where the weights are defined using patient variability, sample size, and the number of sites, as well as site-to-site variability. This extended weighted regression model, which follows the weighted least squares approach, is employed to estimate parameters and perform significance testing. We conducted extensive empirical evaluations using various scenarios generated from a multisite ITS study and compared the performance of our method with that of the segmented linear regression model as well as a pooled analysis method previously developed for multisite studies. We observed that for most scenarios considered, our method produced estimates with narrower 95% confidence intervals and smaller p-values, indicating that our method is more precise and is associated with more statistical power. In some scenarios, where we considered low levels of heterogeneity, our method and the previously proposed method showed comparable results. In conclusion, this PhD thesis facilitates future ITS research by laying the groundwork for developing standard guidelines for the design and analysis of ITS studies. The proposed improved method for ITS analysis, the weighted segmented regression, contributes to the advancement of ITS research and will enable researchers to optimize their analysis, leading to more precise and powerful results. / Thesis / Doctor of Philosophy (PhD)
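The contrast between ordinary and weighted segmented regression can be sketched with statsmodels. The model form (intercept, time trend, level change, and slope change at the interruption) is the standard segmented regression named in the abstract; the simulated data and the particular weight definition (patients per time point divided by the patient-level variance, i.e., inverse variance of each aggregated mean) are assumptions for illustration, not the thesis's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated ITS data: 24 monthly time points, intervention after month 12.
# Each point aggregates n_t patients, so its variance shrinks with n_t.
T, t0 = 24, 12
t = np.arange(1, T + 1)
post = (t > t0).astype(float)
n_t = rng.integers(30, 200, size=T)                              # patients aggregated per month
true_mean = 50 + 0.2 * t - 5 * post - 0.4 * post * (t - t0)      # level drop and slope change
sd_patient = 8.0
y = true_mean + rng.normal(0, sd_patient / np.sqrt(n_t))         # aggregated (mean) outcome

# Segmented regression design: intercept, time, level change, slope change
X = sm.add_constant(np.column_stack([t, post, post * (t - t0)]))

ols = sm.OLS(y, X).fit()                                   # standard segmented regression
wls = sm.WLS(y, X, weights=n_t / sd_patient ** 2).fit()    # weights proportional to inverse variance

print(pd.DataFrame({"OLS": ols.params, "WLS": wls.params,
                    "OLS s.e.": ols.bse, "WLS s.e.": wls.bse}).round(3))
```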
30

Ověřování předpokladů modelu proporcionálního rizika / Testing the Assumptions of the Proportional Hazards Model

Marčiny, Jakub January 2014 (has links)
The Cox proportional hazards model is a standard tool for modelling the effect of covariates on time to event in the presence of censoring. The appropriateness of this model depends on the validity of the proportional hazards assumption. The assumption is explained in the thesis and methods for testing it are described in detail. The tests are implemented in R, including a self-written implementation of the Lin-Zhang-Davidian test. Their application is illustrated on medical data. The ability of the tests to reveal violations of the proportional hazards assumption is investigated in a simulation study. The results suggest that the highest power is attained by the newly implemented Lin-Zhang-Davidian test in most cases. In contrast, the weighted version of the Lin-Wei-Ying test was found to have inadequate size for small sample sizes.
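For readers who want to try a proportional hazards check outside R, the sketch below uses Python's lifelines package, which tests the assumption covariate by covariate via scaled Schoenfeld residuals; it does not implement the Lin-Zhang-Davidian or Lin-Wei-Ying tests discussed in the thesis, and the bundled `load_rossi` recidivism data set is used purely as a stand-in example.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

rossi = load_rossi()                                  # example survival data bundled with lifelines
cph = CoxPHFitter()
cph.fit(rossi, duration_col="week", event_col="arrest")

# Covariate-wise test of the proportional hazards assumption
result = proportional_hazard_test(cph, rossi, time_transform="rank")
result.print_summary()
```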
