  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Facilitating students' application of the integral and the area under the curve concepts in physics problems

Nguyen, Dong-Hai January 1900 (has links)
Doctor of Philosophy / Department of Physics / N. Sanjay Rebello / This research project investigates the difficulties students encounter when solving physics problems involving the integral and the area under the curve concepts, and strategies to facilitate students' learning to solve those types of problems. The research contexts of this project are calculus-based physics courses covering mechanics and electromagnetism. In phase I of the project, individual teaching/learning interviews were conducted with 20 students in mechanics and 15 students from the same cohort in electromagnetism. The students were asked to solve problems on several topics of mechanics and electromagnetism. These problems involved calculating physical quantities (e.g. velocity, acceleration, work, electric field, electric resistance, electric current) by integrating or finding the area under the curve of functions of related quantities (e.g. position, velocity, force, charge density, resistivity, current density). Verbal hints were provided when students made an error or were unable to proceed. A total of 140 one-hour interviews were conducted in this phase, which provided insights into students' difficulties when solving problems involving the integral and the area under the curve concepts and into the hints that help students overcome those difficulties. In phase II of the project, tutorials were created to facilitate students' learning to solve physics problems involving the integral and the area under the curve concepts. Each tutorial consisted of a set of exercises and a protocol that incorporated the helpful hints to target the difficulties students exhibited in phase I of the project. Focus group learning interviews were conducted to test the effectiveness of the tutorials in comparison with standard learning materials (i.e. textbook problems and solutions). Overall results indicated that students learning with our tutorials outperformed students learning with standard materials in applying the integral and the area under the curve concepts to physics problems. The results of this project provide broader and deeper insights into students' problem solving with the integral and the area under the curve concepts and suggest strategies to facilitate students' learning to apply these concepts to physics problems. This study also has significant implications for further research, curriculum development and instruction.
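As a concrete illustration of the "area under the curve" idea this abstract refers to (not taken from the study's materials), the displacement over a time interval equals the area under the velocity-time curve, which can be approximated numerically with the trapezoidal rule. The velocity function below is a made-up example.

```python
# Illustrative sketch only: displacement as the area under a velocity-time curve,
# approximated with the trapezoidal rule. The velocity function is hypothetical.
import numpy as np

t = np.linspace(0.0, 5.0, 501)          # time grid, 0-5 s
v = 3.0 * t - 0.5 * t**2                # hypothetical velocity in m/s

# Trapezoidal rule: sum of trapezoid areas between consecutive time points.
displacement = np.sum((v[1:] + v[:-1]) * np.diff(t)) / 2.0

# Closed-form integral of 3t - 0.5 t^2 from 0 to 5, for comparison.
exact = 1.5 * 5.0**2 - (0.5 / 3.0) * 5.0**3
print(f"trapezoidal area: {displacement:.3f} m, exact integral: {exact:.3f} m")
```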
2

Statistical Geocomputing: Spatial Outlier Detection in Precision Agriculture

Chu Su, Peter 29 September 2011 (has links)
The collection of crop yield data has become much easier with the introduction of technologies such as the Global Positioning System (GPS), ground-based yield sensors, and Geographic Information Systems (GIS). This explosive growth and widespread use of spatial data has challenged the ability to derive useful spatial knowledge. In addition, outlier detection, an important pre-processing step, remains a challenge because the choice of technique and the definition of the spatial neighbourhood are non-trivial, and quantitative assessments of false positives, false negatives, and the concept of a region outlier remain unexplored. The overall aim of this study is to evaluate different spatial outlier detection techniques in terms of their accuracy and computational efficiency, and to examine the performance of these outlier removal techniques in a site-specific management context. In a simulation study, unconditional sequential Gaussian simulation is performed to generate crop yield as the response variable along with two explanatory variables. Point and region spatial outliers are added to the simulated datasets by randomly selecting observations and adding or subtracting a Gaussian error term. With simulated data that contains known spatial outliers, the assessment of spatial outlier techniques can be conducted as a binary classification exercise, treating each spatial outlier detection technique as a classifier. Algorithm performance is evaluated with the area and partial area under the ROC curve up to different true positive and false positive rates. Outlier effects in on-farm research are assessed in terms of the influence of each spatial outlier technique on coefficient estimates from a spatial regression model that accounts for autocorrelation. Results indicate that for point outliers, spatial outlier techniques that account for spatial autocorrelation tend to be better than standard spatial outlier techniques in terms of higher sensitivity, lower false positive detection rate, and consistency in performance. They are also more resistant to changes in the neighbourhood definition. For region outliers, standard techniques tend to be better than spatial autocorrelation techniques in all performance aspects because they are less affected by masking and swamping effects. In particular, one spatial autocorrelation technique, Averaged Difference, is superior to all other techniques in both the point and region outlier scenarios because of its ability to incorporate spatial autocorrelation while at the same time revealing the variation between nearest neighbours. In terms of decision-making, all algorithms led to slightly different coefficient estimates and, therefore, may result in distinct decisions for site-specific management. The results outlined here will allow an improved removal of crop yield data points that are potentially problematic, and support the recommendation of the Averaged Difference algorithm for cleaning spatial outliers in yield datasets. Identifying the optimal nearest neighbour parameter for the neighbourhood aggregation function is still non-trivial; the recommendation is to specify a large number of nearest neighbours, large enough to capture the region size. Lastly, the unbiased coefficient estimates obtained with Averaged Difference suggest it is the better method for pre-processing spatial outliers in crop yield data, which underlines its suitability for detecting spatial outliers in the context of on-farm research.
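The evaluation design described in this abstract (treat each detector as a binary classifier on data with known injected outliers, then score it with ROC AUC) can be sketched as follows. The neighbourhood statistic here is a generic averaged-difference score (each point's deviation from the mean of its k nearest neighbours); the thesis's exact Averaged Difference definition may differ, and the yield field below is a simple placeholder rather than the study's sequential Gaussian simulation.

```python
# Hedged sketch: score each observation by its difference from the average of its
# k nearest neighbours, then evaluate detection of known synthetic outliers with ROC AUC.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Placeholder "yield" field (not the thesis's sequential Gaussian simulation).
xy = rng.uniform(0, 100, size=(2000, 2))                  # field coordinates (m)
yield_t_ha = 8 + 0.02 * xy[:, 0] + rng.normal(0, 0.5, 2000)

# Inject known point outliers so detection can be scored as a classification task.
is_outlier = np.zeros(2000, dtype=bool)
idx = rng.choice(2000, size=60, replace=False)
is_outlier[idx] = True
yield_t_ha[idx] += rng.choice([-1, 1], size=60) * rng.normal(4, 1, 60)

# Averaged-difference style score: |value - mean of k nearest neighbours|.
k = 8
tree = cKDTree(xy)
_, nbr = tree.query(xy, k=k + 1)                          # first neighbour is the point itself
neigh_mean = yield_t_ha[nbr[:, 1:]].mean(axis=1)
score = np.abs(yield_t_ha - neigh_mean)

print("ROC AUC of the neighbourhood score:", round(roc_auc_score(is_outlier, score), 3))
```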
3

Sequencing Effects and Loss Aversion in a Delay Discounting Task

January 2018 (has links)
The attractiveness of a reward depends in part on the delay to its receipt, with more distant rewards generally being valued less than more proximate ones. The rate at which people discount the value of delayed rewards has been associated with a variety of clinically and socially relevant human behaviors. Thus, the accurate measurement of delay discounting rates is crucial to the study of mechanisms underlying behaviors such as risky sex, addiction, and gambling. In delay discounting tasks, participants make choices between two alternatives: a small amount of money delivered immediately versus a large amount of money delivered after a delay. After many choices, the experimental task converges on an indifference point: the value of the delayed reward that approximates the value of the immediate one. It has been shown that these indifference points are systematically biased by the direction in which one of the alternatives adjusts; this bias is termed a sequencing effect. The present research proposed a reference-dependent model of choice drawn from Prospect Theory to account for the presence of sequencing effects in a delay discounting task. Sensitivity to reference frames and sequencing effects were measured in two computer tasks. Bayesian and frequentist analyses indicated that the reference-dependent model of choice cannot account for sequencing effects. Thus, an alternative, perceptual account of sequencing effects that draws on a Bayesian framework of magnitude estimation is proposed and furnished with some preliminary evidence. Implications for future research in the measurement of delay discounting and sensitivity to reference frames are discussed. / Dissertation/Thesis / Masters Thesis Psychology 2018
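The adjusting procedure the abstract describes (one alternative moves up or down after each choice until the task converges on an indifference point) can be sketched as below. The hyperbolic simulated chooser and the step-halving rule are assumptions for illustration; the actual task parameters and adjustment algorithm are not specified in the abstract.

```python
# Hedged sketch of an adjusting-immediate-amount staircase converging on an
# indifference point. The hyperbolic chooser and step-halving rule are assumptions.
def discounted_value(amount, delay_days, k=0.02):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay_days)

def find_indifference(delayed_amount=100.0, delay_days=180, n_trials=10, ascending=True):
    immediate = 1.0 if ascending else delayed_amount - 1.0   # starting point sets the sequence direction
    step = delayed_amount / 4.0
    for _ in range(n_trials):
        prefers_immediate = immediate > discounted_value(delayed_amount, delay_days)
        # Move the immediate offer toward indifference and shrink the step.
        immediate += -step if prefers_immediate else step
        step /= 2.0
    return immediate

# A sequencing effect would appear as a gap between ascending and descending estimates;
# this noise-free simulated chooser converges to roughly the same value from both directions.
print("ascending start :", round(find_indifference(ascending=True), 2))
print("descending start:", round(find_indifference(ascending=False), 2))
print("true hyperbolic value:", round(discounted_value(100.0, 180), 2))
```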
4

Evaluation of a Trough-Only Extrapolated Area Under the Curve Vancomycin Dosing Method on Clinical Outcomes

Lines, Jacob, Burchette, Jessica, Kullab, Susan M., Lewis, Paul 01 February 2021 (has links)
Background Vancomycin dosing strategies targeting trough concentrations of 15–20 mg/L are no longer supported due to a lack of efficacy evidence and an increased risk of nephrotoxicity. Area-under-the-curve (AUC24) nomograms have demonstrated adequate attainment of AUC24 goals ≥ 400 mg·h/L with more conservative troughs (10–15 mg/L). Objective The purpose of this study is to clinically validate a vancomycin AUC24 dosing nomogram compared with conventional dosing methods with regard to therapeutic failure and rates of acute kidney injury. Setting This study was conducted at a tertiary, community, teaching hospital in the United States. Method This retrospective cohort study compared the rates of therapeutic failure between AUC24-extrapolated dosing and conventional dosing methods. Main outcome measure The primary outcome was treatment failure, defined as all-cause mortality within 30 days, persistently positive methicillin-resistant Staphylococcus aureus blood cultures, or clinical failure. The rate of acute kidney injury in non-dialysis patients was a secondary endpoint. Results There were 96 participants in the extrapolated-AUC24 cohort and 60 participants in the conventional cohort. Baseline characteristics were similar between cohorts. Failure rates were 11.5% (11/96) in the extrapolated-AUC24 group compared with 18.3% (11/60) in the conventional group (p = 0.245). Reasons for failure were 6 deaths and 5 clinical failures in the extrapolated-AUC24 cohort and 10 deaths and 1 clinical failure in the conventional group. Acute kidney injury rates were 2.7% (2/73) and 16.4% (9/55) in the extrapolated-AUC24 and conventional cohorts, respectively (p = 0.009). Conclusion Extrapolated-AUC24 dosing was associated with less nephrotoxicity without an increase in treatment failures for bloodstream infections compared with conventional dosing. Further investigation is warranted to determine the relationship between extrapolated-AUC24 dosing and clinical failures.
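The AUC24 target in this abstract rests on the standard steady-state relationship AUC24 = total daily dose / clearance, so a regimen can be checked against the ≥ 400 mg·h/L goal once clearance is estimated. The sketch below is not the trough-only nomogram evaluated in the study; the clearance value is a placeholder that would in practice come from a population pharmacokinetic estimate or measured levels.

```python
# Hedged sketch: checking a vancomycin regimen against the AUC24 >= 400 mg*h/L goal
# using the steady-state identity AUC24 = total daily dose / clearance.
# The clearance below is a placeholder, not an estimate from the study's nomogram.
def auc24(dose_mg: float, interval_h: float, clearance_l_per_h: float) -> float:
    """Steady-state 24-hour area under the concentration-time curve (mg*h/L)."""
    daily_dose_mg = dose_mg * (24.0 / interval_h)
    return daily_dose_mg / clearance_l_per_h

regimen = {"dose_mg": 1250.0, "interval_h": 12.0}
assumed_clearance = 4.5   # L/h -- placeholder value for illustration only

value = auc24(regimen["dose_mg"], regimen["interval_h"], assumed_clearance)
print(f"Estimated AUC24: {value:.0f} mg*h/L -> "
      f"{'meets' if value >= 400 else 'below'} the 400 mg*h/L target")
```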
5

Quantitative biomarkers for predicting kidney transplantation outcomes: The HCUP national inpatient sample

Lee, Taehoon 22 August 2022 (has links)
No description available.
6

Statistical model selection techniques for the Cox proportional hazards model: a comparative study

Njati, Jolando 01 July 2022 (has links)
The advancement in data-acquiring technology continues to produce survival data sets with many covariates. This has posed a new challenge for researchers in identifying important covariates for inference and prediction with a time-to-event response variable. In this dissertation, common Cox proportional hazards model selection techniques and a random survival forest technique were compared using five performance criteria, including the concordance index, the integrated area under the curve, and R2. To carry out this exercise, a multicentre clinical trial data set was used, and a simulation study was also implemented for the comparison. To develop a Cox proportional hazards model, a training dataset of 75% of the observations was used and the model selection techniques were implemented to select covariates. Full Cox PH models containing all covariates were also included in the analysis for both the clinical trial data set and the simulations. On the clinical trial data set, the full model and the forward selection technique performed better on the performance metrics employed, though they do not reduce the complexity of the model as much as the Lasso technique does. The simulation studies also showed that the full model performed better than the other techniques, with the Lasso technique over-penalising the model in the simulation with the smaller data set and many covariates. AIC and BIC were computationally less efficient than the other variable selection techniques, but reduced model complexity more effectively than their counterparts in the simulations. The integrated area under the curve was the performance metric of choice for selecting the final model for analysis on the real data set, as it gave more efficient outcomes than the other metrics across all selection techniques. This dissertation therefore showed that the behaviour of variable selection techniques depends on the study design as well as on the performance measure used; to obtain a good model, it is important not to use a model selection technique in isolation. There is therefore a need for further research to identify and publish techniques that work generally well across different study designs, to make the process shorter for most researchers.
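A minimal sketch of the comparison this abstract describes (fit a full Cox model and a Lasso-penalised one on a 75% training split, then compare held-out concordance), using the lifelines package. The example data set, column names, and penalty strength are placeholders rather than the dissertation's clinical trial variables.

```python
# Hedged sketch: full vs. Lasso-penalised Cox PH models compared by held-out
# concordance. Data and columns are placeholders, not the dissertation's data set.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.utils import concordance_index

df = load_rossi()                               # example survival data shipped with lifelines
train = df.sample(frac=0.75, random_state=1)    # 75% training split, as in the dissertation
test = df.drop(train.index)

models = {
    "full": CoxPHFitter(),
    "lasso": CoxPHFitter(penalizer=0.1, l1_ratio=1.0),   # penalty strength is arbitrary here
}

for name, cph in models.items():
    cph.fit(train, duration_col="week", event_col="arrest")
    # Higher partial hazard means shorter expected survival, hence the minus sign.
    c_index = concordance_index(test["week"],
                                -cph.predict_partial_hazard(test),
                                test["arrest"])
    print(f"{name:>5} model: held-out concordance = {c_index:.3f}")
```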
7

Using random forest and decision tree models for a new vehicle prediction approach in computational toxicology

Mistry, Pritesh, Neagu, Daniel, Trundle, Paul R., Vessey, J.D. 22 October 2015 (has links)
Drug vehicles are chemical carriers that provide beneficial aid to the drugs they bear. Taking advantage of their favourable properties can potentially allow the safer use of drugs that are considered highly toxic. A means for vehicle selection without experimental trial would therefore be of benefit in saving time and money for the industry. Although machine learning is increasingly used in predictive toxicology, to our knowledge there is no reported work on using machine learning techniques to model drug–vehicle relationships for vehicle selection to minimise toxicity. In this paper we demonstrate the use of data mining and machine learning techniques to process, extract and build models based on classifiers (decision trees and random forests) that allow us to predict which vehicle would be most suited to reduce a drug's toxicity. Using data acquired from the National Institutes of Health (NIH) Developmental Therapeutics Program (DTP), we propose a methodology using an area under the curve (AUC) approach that allows us to distinguish which vehicle provides the best toxicity profile for a drug and to build classification models based on this knowledge. Our results show that we can achieve prediction accuracies of 80% using random forest models, whilst the decision tree models produce accuracies in the 70% region. We consider our methodology widely applicable within the scientific domain and beyond for comprehensively building classification models for the comparison of functional relationships between two variables.
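A hedged sketch of the modelling step described above, using scikit-learn's decision tree and random forest classifiers with cross-validated accuracy; the feature matrix and vehicle labels are synthetic placeholders, not the NIH DTP data or the paper's descriptors.

```python
# Hedged sketch: decision tree vs. random forest classifiers evaluated by
# cross-validated accuracy, mirroring the comparison above. The features and
# "best vehicle" labels are synthetic placeholders, not the NIH DTP data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for drug descriptors and an AUC-derived "best vehicle" label.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.2f} (+/- {scores.std():.2f})")
```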
8

Multiple hypothesis testing and multiple outlier identification methods

Yin, Yaling 13 April 2010
Traditional multiple hypothesis testing procedures, such as that of Benjamini and Hochberg, fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this thesis it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses, as proposed by Black, gives a procedure with superior power.

Multiple hypothesis testing can also be applied to regression diagnostics. In this thesis, a Bayesian method is proposed to test multiple hypotheses, of which the i-th null and alternative hypotheses are that the i-th observation is not an outlier versus that it is, for i=1,...,m. In the proposed Bayesian model, it is assumed that outliers have a mean shift, where the proportion of outliers and the mean shift respectively follow a Beta prior distribution and a normal prior distribution. It is proved in the thesis that for the proposed model, when there exists more than one outlier, the marginal distributions of the deletion residual of the i-th observation under both the null and alternative hypotheses are doubly noncentral t distributions. The outlyingness of the i-th observation is measured by the marginal posterior probability that the i-th observation is an outlier given its deletion residual. An importance sampling method is proposed to calculate this probability. This method requires the computation of the density of the doubly noncentral F distribution, which is approximated using Patnaik's approximation. An algorithm is proposed in this thesis to examine the accuracy of Patnaik's approximation. The comparison of this algorithm's output with Patnaik's approximation shows that the latter can save massive computation time without losing much accuracy.

The proposed Bayesian multiple outlier identification procedure is applied to some simulated data sets. Various simulation and prior parameters are used to study the sensitivity of the posteriors to the priors. The area under the ROC curve (AUC) is calculated for each combination of parameters. A factorial design analysis on AUC is carried out by choosing various simulation and prior parameters as factors. The resulting AUC values are high for the various selected parameters, indicating that the proposed method can identify the majority of outliers within tolerable errors. The results of the factorial design show that the priors do not have much effect on the marginal posterior probability as long as the sample size is not too small.

In this thesis, the proposed Bayesian procedure is also applied to a real data set obtained by Kanduc et al. in 2008. The proteomes of thirty viruses examined by Kanduc et al. are found to share a high number of pentapeptide overlaps with the human proteome. In a linear regression analysis of the level of viral overlap with the human proteome against the length of the viral proteome, Kanduc et al. report that among the thirty viruses, human T-lymphotropic virus 1, Rubella virus, and hepatitis C virus present relatively higher levels of overlap with the human proteome than the predicted level. The results obtained using the proposed procedure indicate that the four viruses with extremely large sizes (Human herpesvirus 4, Human herpesvirus 6, Variola virus, and Human herpesvirus 5) are more likely to be the outliers than the three reported viruses. The results with the four extreme viruses deleted confirm the claim of Kanduc et al.
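For reference, the fixed-error-rate procedure of Benjamini and Hochberg discussed in this abstract can be written in a few lines: sort the p-values, find the largest index i with p_(i) ≤ (i/m)·q, and reject all hypotheses up to that index. The sketch below is a generic implementation, not the thesis's own code.

```python
# Generic Benjamini-Hochberg step-up procedure (not the thesis's code):
# reject H_(1),...,H_(k) where k is the largest i with p_(i) <= (i/m) * q.
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean array marking which hypotheses are rejected at FDR level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest index satisfying the step-up condition
        reject[order[: k + 1]] = True           # reject the k+1 smallest p-values
    return reject

# Small worked example: a few strong signals among mostly null p-values.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.5, 0.74, 0.9]
print(benjamini_hochberg(pvals, q=0.05))       # only the two smallest p-values are rejected
```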
