1. Facilitating students' application of the integral and the area under the curve concepts in physics problems (Nguyen, Dong-Hai)
Doctor of Philosophy / Department of Physics / Nobel S. Rebello
This research project investigates the difficulties students encounter when solving physics problems involving the integral and the area under the curve concepts, and strategies to facilitate students' learning to solve those types of problems. The research contexts of this project are calculus-based physics courses covering mechanics and electromagnetism.
In phase I of the project, individual teaching/learning interviews were conducted with 20 students in mechanics and 15 students from the same cohort in electromagnetism. The students were asked to solve problems on several topics in mechanics and electromagnetism. These problems involved calculating physical quantities (e.g. velocity, acceleration, work, electric field, electric resistance, electric current) by integrating or finding the area under the curve of functions of related quantities (e.g. position, velocity, force, charge density, resistivity, current density). Verbal hints were provided when students made an error or were unable to proceed. A total of 140 one-hour interviews were conducted in this phase, which provided insights into students' difficulties when solving problems involving the integral and the area under the curve concepts and into hints that help students overcome those difficulties.
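For instance, a typical task of this kind asks students to recover the work done by a force from a force-versus-position graph by estimating the area under the curve. The sketch below is a minimal illustration of that computation using NumPy's trapezoidal rule; the force-position data are hypothetical and are not taken from the interview problems.

```python
import numpy as np

# Hypothetical force-vs-position data (not from the study): F(x) = 3x + 2 N over 0..4 m
x = np.linspace(0.0, 4.0, 101)      # position samples in metres
force = 3.0 * x + 2.0               # force samples in newtons

# Work done is the area under the force-position curve: W = integral of F dx
work = np.trapz(force, x)           # trapezoidal estimate of the area
print(f"Work = {work:.1f} J")       # analytic value: 1.5*16 + 2*4 = 32 J
```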
In phase II of the project, tutorials were created to facilitate students’ learning to solve physics problems involving the integral and the area under the curve concepts. Each tutorial consisted of a set of exercises and a protocol that incorporated the helpful hints to target the difficulties that students expressed in phase I of the project. Focus group learning interviews were conducted to test the effectiveness of the tutorials in comparison with standard learning materials (i.e. textbook problems and solutions). Overall results indicated that students learning with our tutorials outperformed students learning with standard materials in applying the integral and the area under the curve concepts to physics problems.
The results of this project provide broader and deeper insights into students’ problem solving with the integral and the area under the curve concepts and suggest strategies to facilitate students’ learning to apply these concepts to physics problems. This study also has significant implications for further research, curriculum development and instruction.

2. Statistical Geocomputing: Spatial Outlier Detection in Precision Agriculture (Chu Su, Peter, 29 September 2011)
The collection of crop yield data has become much easier with the introduction of technologies such as the Global Positioning System (GPS), ground-based yield sensors, and Geographic Information Systems (GIS). This explosive growth and widespread use of spatial data have challenged the ability to derive useful spatial knowledge. In addition, outlier detection, as one important pre-processing step, remains a challenge because the choice of technique and the definition of the spatial neighbourhood are non-trivial, and quantitative assessments of false positives, false negatives, and the concept of a region outlier remain unexplored. The overall aim of this study is to evaluate different spatial outlier detection techniques in terms of their accuracy and computational efficiency, and to examine the performance of these outlier removal techniques in a site-specific management context.
In a simulation study, unconditional sequential Gaussian simulation is performed to generate crop yield as the response variable along with two explanatory variables. Point and region spatial outliers are added to the simulated datasets by randomly selecting observations and adding or subtracting a Gaussian error term. Because the simulated data contain known spatial outliers, the assessment of spatial outlier techniques can be conducted as a binary classification exercise, treating each spatial outlier detection technique as a classifier. Algorithm performance is evaluated with the area and partial area under the ROC curve up to different true positive and false positive rates. Outlier effects in on-farm research are assessed in terms of the influence of each spatial outlier technique on coefficient estimates from a spatial regression model that accounts for autocorrelation.
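As an illustration of this evaluation step, the sketch below computes the full and partial area under the ROC curve for a detector's outlier scores against known labels; the labels, scores, and the 0.3 false-positive-rate cutoff are hypothetical examples, not values from the simulation study.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical outlier scores and ground-truth labels (1 = injected outlier)
labels = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
scores = np.array([0.1, 0.3, 0.8, 0.2, 0.9, 0.15, 0.4, 0.6, 0.05, 0.25])

fpr, tpr, _ = roc_curve(labels, scores)
full_auc = auc(fpr, tpr)                 # area under the whole ROC curve

# Partial AUC up to a chosen false-positive rate (0.3 here, an arbitrary example)
max_fpr = 0.3
mask = fpr <= max_fpr
# Interpolate the TPR at the cutoff so the truncated curve ends exactly at max_fpr
tpr_cut = np.interp(max_fpr, fpr, tpr)
partial_auc = auc(np.append(fpr[mask], max_fpr), np.append(tpr[mask], tpr_cut))

print(f"AUC = {full_auc:.3f}, partial AUC (FPR <= {max_fpr}) = {partial_auc:.3f}")
```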
Results indicate that for point outliers, spatial outlier techniques that account for spatial autocorrelation tend to be better than standard spatial outlier techniques in terms of higher sensitivity, lower false positive detection rate, and consistency in performance. They are also more resistant to changes in the neighbourhood definition. In terms of region outliers, standard techniques tend to be better than spatial autocorrelation techniques in all performance aspects because they are less affected by masking and swamping effects. In particular, one spatial autocorrelation technique, Averaged Difference, is superior to all other techniques in both the point and region outlier scenarios because of its ability to incorporate spatial autocorrelation while at the same time revealing the variation between nearest neighbours.
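The algorithm itself is not reproduced here, but a common neighbourhood-difference formulation, which we assume resembles the Averaged Difference statistic, scores each observation by how far its value lies from the average of its k nearest neighbours. The sketch below implements that reading; the field data, the value of k, and the injected outlier are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def averaged_difference_scores(coords, values, k=8):
    """Score each point by its standardized deviation from the mean of its
    k nearest spatial neighbours (one plausible reading of Averaged Difference)."""
    tree = cKDTree(coords)
    # k+1 because the closest neighbour of a point is the point itself
    _, idx = tree.query(coords, k=k + 1)
    neighbour_means = values[idx[:, 1:]].mean(axis=1)  # drop self, average neighbours
    diff = values - neighbour_means
    return (diff - diff.mean()) / diff.std()           # z-score of the differences

# Hypothetical yield surface with one injected point outlier
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))                    # field coordinates (m)
values = 5.0 + 0.02 * coords[:, 0] + rng.normal(0, 0.3, 500)   # yield with smooth trend
values[42] += 4.0                                              # injected spatial outlier
scores = averaged_difference_scores(coords, values)
print("Most suspicious index:", int(np.argmax(np.abs(scores))))  # expect 42
```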
In terms of decision-making, all algorithms led to slightly different coefficient estimates, and therefore, may result in distinct decisions for site-specific management.
The results outlined here will allow an improved removal of crop yield data points that are potentially problematic. Based on these findings, the Averaged Difference algorithm is recommended for cleaning spatial outliers in yield datasets. Identifying the optimal nearest neighbour parameter for the neighbourhood aggregation function is still non-trivial; the recommendation is to specify a large number of nearest neighbours, large enough to capture the region size. Lastly, the unbiased coefficient estimates obtained with Averaged Difference suggest it is the better method for pre-processing spatial outliers in crop yield data, which underlines its suitability for detecting spatial outliers in the context of on-farm research.

3. Sequencing Effects and Loss Aversion in a Delay Discounting Task (January 2018)
The attractiveness of a reward depends in part on the delay to its receipt, with more distant rewards generally being valued less than more proximate ones. The rate at which people discount the value of delayed rewards has been associated with a variety of clinically and socially relevant human behaviors. Thus, the accurate measurement of delay discounting rates is crucial to the study of mechanisms underlying behaviors such as risky sex, addiction, and gambling. In delay discounting tasks, participants make choices between two alternatives: a smaller amount of money delivered immediately versus a larger amount delivered after a delay. After many choices, the task converges on an indifference point: the value of the delayed reward that approximates the value of the immediate one. It has been shown that these indifference points are systematically biased by the direction in which one of the alternatives adjusts. This bias is termed a sequencing effect.
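As a rough illustration of how such a task converges on an indifference point, the sketch below simulates an adjusting-immediate-amount staircase against a hypothetical hyperbolic discounter; the discount rate, step rule, and number of trials are illustrative assumptions and not the procedure used in this thesis.

```python
# Minimal sketch: adjusting-amount staircase converging on an indifference point.
# The participant is simulated as a hyperbolic discounter, V = A / (1 + k*D),
# with k, the delayed amount, and the delay chosen arbitrarily for illustration.
def hyperbolic_value(amount, delay, k=0.02):
    return amount / (1.0 + k * delay)

delayed_amount, delay = 100.0, 30.0   # $100 in 30 days (hypothetical)
immediate = 50.0                      # starting immediate offer
step = 25.0

for _ in range(12):                   # titrate the immediate amount
    prefers_immediate = immediate > hyperbolic_value(delayed_amount, delay)
    # Bisection-style adjustment: lower the offer if it was preferred, raise it otherwise
    immediate += -step if prefers_immediate else step
    step /= 2.0

print(f"Estimated indifference point: ${immediate:.2f}")
# Converges toward the simulated participant's value 100 / (1 + 0.02*30) = $62.50
```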
The present research proposed a reference-dependent model of choice drawn from Prospect Theory to account for the presence of sequencing effects in a delay discounting task. Sensitivity to reference frames and sequencing effects were measured in two computer tasks. Bayesian and frequentist analyses indicated that the reference-dependent model of choice cannot account for sequencing effects. Thus, an alternative, perceptual account of sequencing effects that draws on a Bayesian framework of magnitude estimation is proposed and furnished with some preliminary evidence. Implications for future research in the measurement of delay discounting and sensitivity to reference frames are discussed. (Master's Thesis, Psychology, 2018)

4. Evaluation of a Trough-Only Extrapolated Area Under the Curve Vancomycin Dosing Method on Clinical Outcomes (Lines, Jacob; Burchette, Jessica; Kullab, Susan M.; Lewis, Paul, 01 February 2021)
Background: Vancomycin dosing strategies targeting trough concentrations of 15–20 mg/L are no longer supported due to a lack of efficacy evidence and an increased risk of nephrotoxicity. Area-under-the-curve (AUC24) nomograms have demonstrated adequate attainment of AUC24 goals ≥ 400 mg·h/L with more conservative troughs (10–15 mg/L).
Objective: The purpose of this study is to clinically validate a vancomycin AUC24 dosing nomogram against conventional dosing methods with regard to therapeutic failure and rates of acute kidney injury.
Setting: This study was conducted at a tertiary, community, teaching hospital in the United States.
Method: This retrospective cohort study compared the rates of therapeutic failure between AUC24-extrapolated dosing and conventional dosing methods.
Main outcome measure: The primary outcome was treatment failure, defined as all-cause mortality within 30 days, a persistently positive methicillin-resistant Staphylococcus aureus blood culture, or clinical failure. The rate of acute kidney injury in non-dialysis patients was a secondary endpoint.
Results: There were 96 participants in the extrapolated-AUC24 cohort and 60 participants in the conventional cohort. Baseline characteristics were similar between cohorts. Failure rates were 11.5% (11/96) in the extrapolated-AUC24 group compared to 18.3% (11/60) in the conventional group (p = 0.245). Reasons for failure were 6 deaths and 5 clinical failures in the extrapolated-AUC24 cohort and 10 deaths and 1 clinical failure in the conventional group. Acute kidney injury rates were 2.7% (2/73) and 16.4% (9/55) in the extrapolated-AUC24 and conventional cohorts, respectively (p = 0.009).
Conclusion: Extrapolated-AUC24 dosing was associated with less nephrotoxicity and no increase in treatment failures for bloodstream infections compared with conventional dosing. Further investigation is warranted to determine the relationship between extrapolated-AUC24 dosing and clinical failures.
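For orientation only: at steady state the 24-hour AUC equals the total daily dose divided by drug clearance, so a rough AUC24 check can be scripted as below. The dose, the clearance estimate, and the assumed MIC of 1 mg/L are textbook simplifications; this is not the trough-only extrapolation nomogram evaluated in the study.

```python
# Minimal sketch, not the study's nomogram: at steady state, the AUC over 24 h
# equals the total daily dose divided by clearance (a standard linear-PK identity).
def auc24(daily_dose_mg, clearance_l_per_h):
    """Steady-state 24-hour area under the concentration-time curve (mg*h/L)."""
    return daily_dose_mg / clearance_l_per_h

# Hypothetical patient: 1000 mg every 12 h, clearance assumed to be 4.5 L/h
estimate = auc24(daily_dose_mg=2000, clearance_l_per_h=4.5)
mic = 1.0                                  # assumed MIC of 1 mg/L
print(f"AUC24 = {estimate:.0f} mg*h/L, AUC24/MIC = {estimate / mic:.0f}")
print("Meets AUC24/MIC >= 400 target:", estimate / mic >= 400)
```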

5. Quantitative biomarkers for predicting kidney transplantation outcomes: The HCUP national inpatient sample (Lee, Taehoon, 22 August 2022)
No description available.

6. Statistical model selection techniques for the Cox proportional hazards model: a comparative study (Njati, Jolando, 01 July 2022)
The advancement in data-acquiring technology continues to produce survival data sets with many covariates. This has posed a new challenge for researchers in identifying important covariates for inference and prediction for a time-to-event response variable. In this dissertation, common Cox proportional hazards (PH) model selection techniques and a random survival forest technique were compared using five performance measures, among them the concordance index, the integrated area under the curve, and R². The comparison was carried out on a multicentre clinical trial data set, and a simulation study was also implemented. To develop a Cox proportional hazards model, a training set of 75% of the observations was used and the model selection techniques were applied to select covariates. Full Cox PH models containing all covariates were also included in the analysis for both the clinical trial data set and the simulations. On the clinical trial data set, the full model and the forward selection technique performed better than the other techniques on the performance metrics employed, though they do not reduce model complexity as much as the Lasso technique does. The simulation studies also showed that the full model performed better than the other techniques, with the Lasso technique over-penalising the model in the simulation with the smaller data set and many covariates. AIC and BIC were less efficient computationally than the other variable selection techniques, but reduced model complexity more effectively than their counterparts in the simulations. The integrated area under the curve was the performance metric used to choose the final model for the real data set, as it gave more efficient outcomes across all selection techniques than the other metrics. This dissertation therefore showed that the behaviour of variable selection techniques depends on the study design as well as on the performance measure used. Hence, to obtain a good model, it is important not to use a model selection technique in isolation. There is therefore a need for further research into, and publication of, techniques that work well across different study designs, to shorten the process for most researchers.
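To make one of these performance measures concrete, the sketch below computes Harrell's concordance index from scratch for a hypothetical set of risk scores; the data and the simple all-pairs loop are purely illustrative and are not taken from the dissertation's analysis.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs, the fraction in which the
    subject with the earlier observed event has the higher risk score."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has an observed event before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5       # ties in risk count as half
    return concordant / comparable

# Hypothetical survival data: times, event indicators (1 = event, 0 = censored), risks
times = np.array([5.0, 8.0, 12.0, 3.0, 9.0])
events = np.array([1, 0, 1, 1, 1])
risk = np.array([0.9, 0.3, 0.2, 1.1, 0.5])
print(f"C-index = {concordance_index(times, events, risk):.3f}")
```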

7. Systematic Review and Meta-Analysis of the Diagnostic Performance of Stockholm3: A Methodological Evaluation (Heiter, Linus; Skagerlund, Hampus, January 2024)
This thesis investigates two questions: the methodological strengths and weaknesses of meta-analysis and the diagnostic performance of the Stockholm3 test for clinically significant prostate cancer. Through a systematic review and meta-analysis, we explore the robustness and limitations of meta-analysis, focusing on aspects such as bias assessment, heterogeneity, and the impact of the file-drawer problem. Applying these methods, we evaluate the Stockholm3 test’s performance, comparing it to the conventional Prostate-Specific Antigen (PSA) test. Our analysis synthesizes data from four studies consisting of 6 497 men, indicating that the Stockholm3 test offers improved diagnostic accuracy, with a higher pooled Area Under the Curve (AUC), in turn suggesting better identification of clinically significant prostate cancer. Nonetheless, the study also reveals challenges within the practice of meta-analysis, including variation among study methodologies and the presence of bias. These findings highlight the dual purpose of the research: demonstrating the utility and drawbacks of meta-analysis and validating the Stockholm3 test’s potential as a diagnostic tool. The conclusions drawn emphasize the need for continued research to enhance both meta-analytic methods and the clinical applicability of the Stockholm3 test in broader populations.
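The pooling step of such a meta-analysis can be illustrated with a DerSimonian-Laird random-effects model; the per-study AUC values and standard errors below are made-up placeholders for four studies and do not correspond to the studies actually synthesized in this thesis.

```python
import numpy as np

def dersimonian_laird(estimates, std_errors):
    """Pool study-level estimates (e.g. AUCs) with a random-effects model."""
    v = std_errors ** 2
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * estimates) / np.sum(w)
    q = np.sum(w * (estimates - fixed) ** 2)      # Cochran's Q heterogeneity statistic
    df = len(estimates) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * estimates) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-study AUCs and standard errors (placeholders, not the real data)
aucs = np.array([0.74, 0.81, 0.77, 0.79])
ses = np.array([0.03, 0.02, 0.04, 0.03])
pooled, lo, hi = dersimonian_laird(aucs, ses)
print(f"Pooled AUC = {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```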

8. Using random forest and decision tree models for a new vehicle prediction approach in computational toxicology (Mistry, Pritesh; Neagu, Daniel; Trundle, Paul R.; Vessey, J.D., 22 October 2015)
Drug vehicles are chemical carriers that provide beneficial aid to the drugs they bear. Taking advantage of their favourable properties can potentially allow the safer use of drugs that are considered highly toxic. A means for vehicle selection without experimental trial would therefore be of benefit in saving time and money for the industry. Although machine learning is increasingly used in predictive toxicology, to our knowledge there is no reported work in using machine learning techniques to model drug-vehicle relationships for vehicle selection to minimise toxicity. In this paper we demonstrate the use of data mining and machine learning techniques to process, extract and build models based on classifiers (decision trees and random forests) that allow us to predict which vehicle would be most suited to reduce a drug's toxicity. Using data acquired from the National Institutes of Health's (NIH) Developmental Therapeutics Program (DTP), we propose a methodology using an area under a curve (AUC) approach that allows us to distinguish which vehicle provides the best toxicity profile for a drug and to build classification models based on this knowledge. Our results show that we can achieve prediction accuracies of 80 % using random forest models whilst the decision tree models produce accuracies in the 70 % region. We consider our methodology widely applicable within the scientific domain and beyond for comprehensively building classification models for the comparison of functional relationships between two variables.
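A minimal sketch of this kind of classifier comparison is shown below using scikit-learn; the synthetic feature matrix, labels, and train/test split stand in for the chemical-descriptor and vehicle-toxicity data, which are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the descriptor data (not the DTP data set)
X, y = make_classification(n_samples=600, n_features=30, n_informative=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=1),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy = {acc:.2f}, ROC AUC = {auc:.2f}")
```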