1

Reducing Highway Crashes with Network-Level Continuous Friction Measurements

McCarthy, Ross James 16 December 2019 (has links)
When a vehicle changes speed or direction, the interaction between the contacting surfaces of the tire and the pavement forms frictional forces. The pavement's contribution to tire-pavement friction is referred to as skid resistance and is provided by pavement microtexture and macrotexture. The amount of skid resistance depreciates over time due to the polishing action of traffic, and for this reason, skid resistance should be monitored with friction testing equipment. The equipment uses one of four test methods to measure network-level friction: the ASTM E 274 locked-wheel technique, the ASTM E 2340 fixed-slip technique, the ASTM E 1859 variable-slip technique, and the sideways-force coefficient (SFC) technique. The fixed-slip, variable-slip, and SFC techniques are used in continuous friction measurement equipment (CFME). In the United States, skid resistance is traditionally measured with a locked-wheel skid trailer (LWST) equipped with either an ASTM E 501 ribbed or an ASTM E 524 smooth 'no tread' tire. Since the LWST fully locks the test wheel to measure friction, it is only capable of spot testing tangent sections of roadway. By contrast, the remaining three test methods never lock their test wheels and can therefore collect friction measurements continuously on all types of roadway, including curves and t-intersections. For this reason, highway agencies in the U.S. are interested in transitioning from the LWST to one of the three continuous methods. This dissertation explores the use of continuous friction measurements, collected with a Sideways-force Coefficient Routine Investigation Machine (SCRIM), in a systemic highway safety management approach to reduce crashes that result in fatalities, injuries, and property damage only. The dissertation presents four manuscripts. In the first manuscript, orthogonal regression is used to develop models for converting between friction measurements from a SCRIM and an LWST with both a ribbed and a smooth tire.
The results indicated that the LWST smooth tire measured friction with greater sensitivity to changes in macrotexture than the SCRIM and the LWST ribbed tire. The SCRIM also correlated more strongly with the LWST ribbed tire than with the LWST smooth tire. The second manuscript establishes the relationship between friction measured with a SCRIM and the risk of crashes on dry and wet pavement surfaces. The results showed that increasing friction decreases both dry- and wet-pavement crashes; however, friction was found to have a greater impact in wet conditions. Due to the negative relationship between friction and crashes, there is eventually a point where further losses in friction can result in a rapid increase in crash risk. This point can be identified with a friction threshold known as an investigatory level. When measured friction is at or below the investigatory level, an in- and out-of-field investigation is required to determine whether a countermeasure is necessary to improve safety. The third manuscript proposes a statistical regression approach for determining investigatory levels. Since this approach relies on statistical regression, the results are objective and should be the same for any analyst reviewing the same data. The investigatory levels can be used in a systemic approach that identifies locations where crashes can be reduced based on a benefit-cost analysis of surface treatments. Last, the fourth manuscript demonstrates a benefit-cost analysis that selects surface treatments based on crash reductions predicted with continuous friction measurements. / Doctor of Philosophy / When a vehicle changes speed or direction, the tires slide over the pavement surface, creating the friction that provides the traction necessary to change speed or direction. Friction can diminish when water, dust, and other contaminants are present, or gradually due to the polishing action of traffic. Over time, this loss in friction causes the risk of a crash to increase.
However, this relationship is non-linear; eventually there is a point where further losses in friction can cause a rapid increase in crash risk. For this reason, pavement friction is monitored with equipment that slides a rubber tire with known properties over the pavement surface. Since friction is lowest when the pavement is wet, the equipment applies a film of water to the surface directly in front of the sliding tire. There are different types of equipment used to measure friction, and their physical designs and test methods differ. For example, some devices measure friction by sliding a wheel that is angled away from the path of the vehicle, while others slide a wheel that is aligned with the vehicle but rotates more slowly than the vehicle travels. The factors that make the equipment different can affect the quantity of friction that is measured, as well as the timing between consecutive measurements. The advantages that some equipment offers can entice highway agencies to transition from an existing system to a new one. Before transitioning, the measurements from the two types of equipment should be compared directly to determine their correlation. Statistical regression can also be used to develop models for converting the measurements from the new equipment into the units of the current equipment, which helps engineers interpret the measurements and integrate them into an existing database. The presence of water on a pavement surface can result in a temporary loss of friction that increases the risk of a crash beyond the normal, dry-pavement state. This does not guarantee, however, that dry pavements have sufficient friction, as most of the literature suggests. In this dissertation, the relationship between friction and the risk of a crash is evaluated for dry and wet pavements together. The results show that increasing friction can decrease crash risk on both dry and wet pavement surfaces.
The amount of friction needed to maintain low crash risk is not the same for every section of road. Locations such as approaches to curves or intersections can increase the risk of a crash, and for that reason, some sections of roadway require more friction than others. Minimum friction levels, called investigatory levels, can be established; when the measured friction is at or below the investigatory level, an in- and out-of-field investigation is triggered to determine whether improving friction can improve safety. This dissertation proposes a methodology for determining investigatory levels of friction for different sections of roadway using a statistical regression approach. The investigatory levels are then used to identify locations where pavement surface treatments can reduce crashes based on a benefit-cost analysis. Last, the ability of a surface treatment to reduce crashes is evaluated using another statistical regression approach that predicts changes in crash risk from friction measurements. Since there are several treatment options, a treatment is selected based on estimated cost and benefit.
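The conversion models in the first manuscript are built with orthogonal regression, which, unlike ordinary least squares, allows measurement error in both devices. A minimal sketch of the technique (the data below are hypothetical, not from the dissertation, and equal error variances are assumed):

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit y = a + b*x by orthogonal (total least squares) regression,
    which treats both variables as measured with error, unlike ordinary
    least squares, which assumes x is error-free."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    # Slope of the first principal axis (assumes equal error variances)
    b = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = ym - b * xm
    return a, b

# Hypothetical paired friction numbers from the two devices
scrim = [35, 42, 50, 58, 63, 70]   # SCRIM measurements
lwst = [30, 38, 47, 55, 61, 69]    # LWST ribbed-tire measurements
a, b = orthogonal_regression(scrim, lwst)
lwst_equiv = a + b * 55            # convert a SCRIM reading of 55
```

With a fitted model like this, an agency transitioning to CFME could report SCRIM readings in the LWST units already used in its database.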
2

Contributions to Imputation Methods Based on Ranks and to Treatment Selection Methods in Personalized Medicine

Matsouaka, Roland Albert January 2012 (has links)
The chapters of this thesis focus on two different issues that arise in clinical trials and propose novel methods to address them. The first issue arises in the analysis of data with non-ignorable missing observations. The second concerns the development of methods that provide physicians with better tools to understand and treat diseases efficiently, using each patient's characteristics and personal biomedical profile. Inherent to most clinical trials is the issue of missing data, especially data that arise when patients drop out of the study without further measurements. Proper handling of missing data is crucial in all statistical analyses because disregarding missing observations can lead to biased results. In the first two chapters of this thesis, we deal with the "worst-rank score" missing data imputation technique in pretest-posttest clinical trials. Subjects are randomly assigned to two treatments, and the response is recorded at baseline prior to treatment (pretest response) and after a pre-specified follow-up period (posttest response). The treatment effect is then assessed on the change in response from baseline to the end of follow-up. Subjects with a missing response at the end of follow-up are assigned values that are worse than any observed response (worst-rank score). Data analysis is then conducted using the Wilcoxon-Mann-Whitney test. In the first chapter, we derive explicit closed-form formulas for power and sample size calculations under both tied and untied worst-rank score imputation, where the worst-rank scores are either a fixed value (tied score) or depend on the time of withdrawal (untied score). We use simulations to demonstrate the validity of these formulas. In addition, we examine and compare four different simplification approaches to estimating sample sizes. These approaches depend on whether data from the literature or a pilot study are available.
In the second chapter, we introduce the weighted Wilcoxon-Mann-Whitney test on the untied worst-rank score (composite) outcome. First, we demonstrate that the weighted test reduces exactly to the ordinary Wilcoxon-Mann-Whitney test when the weights are equal. Then, we derive optimal weights that maximize the power of the corresponding weighted Wilcoxon-Mann-Whitney test. We show, using simulations, that the weighted test is more powerful than the ordinary test. Furthermore, we propose two different step-wise procedures for analyzing data using the weighted test and assess their performance through simulation studies. Finally, we illustrate the new approach using data from a recent randomized clinical trial of normobaric oxygen therapy in patients with acute ischemic stroke. The third and last chapter of this thesis concerns the development of robust methods for treatment group identification in personalized medicine. Physicians often have to use a trial-and-error approach to find the most effective medication for their patients. Personalized medicine methods aim to tailor strategies for disease prevention, detection, or treatment by using each individual subject's personal characteristics and medical profile. This would result in (1) better diagnosis and earlier interventions, (2) maximum therapeutic benefits and reduced adverse events, (3) more effective therapy, and (4) more efficient drug development. Novel methods have been proposed to identify subgroups of patients who would benefit from a given treatment. In the last chapter of this thesis, we develop a robust method for treatment assignment for future patients based on the expected total outcome. In addition, we provide a method to assess the incremental value of new covariates in improving treatment assignment. We evaluate the accuracy of our methods through simulation studies and illustrate them with two examples using data from two HIV/AIDS clinical trials.
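The tied worst-rank scheme described above is straightforward to sketch: dropouts receive a common score below every observed response, and the completed samples are compared with the Mann-Whitney U statistic. An illustrative reconstruction (not the author's code; the untied time-of-withdrawal variant and the weighting are omitted, and the data are invented):

```python
def wmw_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y
    (ties between the samples count one half)."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def worst_rank_wmw_u(treat, control, n_treat_missing, n_ctrl_missing):
    """Tied worst-rank imputation: dropouts in either arm get a common
    score worse than any observed response, then U is computed on the
    completed samples."""
    worst = min(treat + control) - 1.0
    t = list(treat) + [worst] * n_treat_missing
    c = list(control) + [worst] * n_ctrl_missing
    return wmw_u(t, c)

# Hypothetical change-from-baseline responses (higher is better)
treat = [4.0, 5.5, 3.2, 6.1, 4.8]
control = [2.1, 3.0, 1.5, 2.8]
u = worst_rank_wmw_u(treat, control, n_treat_missing=1, n_ctrl_missing=3)
```

In practice the U statistic would then be referred to its null distribution to obtain a p-value, which is where the chapter's power and sample size formulas come in.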
3

Méthodologie de l’évaluation des biomarqueurs prédictifs quantitatifs et de la détermination d’un seuil pour leur utilisation en médecine personnalisée / Treatment selection markers in precision medicine : methodology of use and estimation of marker threshold

Blangero, Yoann 13 September 2019 (has links)
In France, cancer research is a major public health issue; the number of new cancer cases more than doubled between 1980 and 2012.
The heterogeneity of tumor characteristics, for a given cancer, presents a great challenge in the search for new effective treatments. In this context, much hope is placed in predictive (or treatment selection) biomarkers that reflect the characteristics of patients and their tumors in order to guide treatment choice. For example, in the metastatic colorectal cancer setting, it is now recognized that the addition of cetuximab (an anti-EGFR) to classical chemotherapy (here, FOLFOX4) improves the outcome only of patients with KRAS wild-type tumors. In that context, the KRAS gene is a binary treatment selection marker, but many biomarkers result from quantifications or dosage measurements. The first aim of this thesis is to quantify the global treatment selection ability of a biomarker. After a review of the existing literature, a method based on an extension of ROC curves is proposed and compared to existing methods. Its main advantage is that it is non-parametric and does not depend on the mean risk of event in each treatment arm. Second, when a quantitative treatment selection biomarker is assessed, there is a need to estimate a marker threshold value above which one treatment is preferred and below which the other treatment is recommended. An approach that relies on the definition of a utility function is proposed in order to take into account both the efficacy of treatments and their impact on patients' quality of life when estimating the optimal threshold. A Bayesian method for the estimation of this optimal threshold is proposed.
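The threshold idea can be illustrated with a simple grid search over candidate cutoffs that maximizes a per-patient utility. This is a hypothetical, non-Bayesian sketch of the concept, not the Bayesian estimator the thesis proposes, and all names and data are invented:

```python
def optimal_threshold(markers, utility_a, utility_b, candidates):
    """Choose the marker threshold that maximizes mean utility when
    treatment B is given to patients at or above the threshold and
    treatment A to those below it. utility_a[i] / utility_b[i] encode
    patient i's utility (e.g., efficacy net of quality-of-life cost)
    under each treatment."""
    best_t, best_total = None, float("-inf")
    for t in candidates:
        total = sum(utility_b[i] if m >= t else utility_a[i]
                    for i, m in enumerate(markers))
        if total > best_total:
            best_t, best_total = t, total
    return best_t, best_total / len(markers)

# Invented example: treatment B only helps patients with high marker values
markers = [1, 2, 3, 4]
utility_a = [1, 1, 0, 0]
utility_b = [0, 0, 1, 1]
threshold, mean_utility = optimal_threshold(markers, utility_a, utility_b,
                                            candidates=[1, 2, 3, 4, 5])
# threshold == 3: B above the cutoff, A below, and every patient gets utility 1
```

A Bayesian version would place priors on the outcome model and report a posterior distribution for the threshold rather than a single point estimate.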
4

The Art in Medicine - Treatment Decision-Making and Personalizing Care: A Grounded Theory of Physicians' Treatment-Decision Making Process with Their (Stage II, Stage IIIA and Stage IIIB) Non-Small Cell Lung Cancer Patients in Ontario

Akram, Saira 10 1900 (has links)
Introduction: In Ontario alone, an estimated 6,700 people (3,000 women; 3,700 men) will die of lung cancer in 2011 (Canadian Cancer Society, 2011). A diagnosis of cancer is associated with complex decisions; the array of cancer treatment choices brings hope, but also anxiety over which treatment is best suited to the individual patient (Blank, Graves, Sepucha et al., 2006). The overall cancer experience depends on the quality of this decision (Blank et al., 2006). Clinical practice guidelines are knowledge translation tools that facilitate treatment decision-making. In Ontario, guidelines have been developed and disseminated to inform clinical decisions, improve evidence-based practice, and reduce unwanted practice variation in the province. But has this been achieved? To study this issue, the purpose of the current study was to gain an in-depth understanding, and develop a theoretical framework, of how Ontario physicians make treatment decisions with their non-small cell lung cancer patients. The following research questions guided the study: (a) How do physicians make treatment decisions with their stage II, stage IIIA and stage IIIB non-small cell lung cancer patients in Ontario? (b) How do knowledge translation tools, such as Cancer Care Ontario guidelines, influence the decision-making process?

Methods: A qualitative grounded theory approach, following the social constructivist paradigm outlined by Kathy Charmaz (2006), was used in this study. Twenty-one semi-structured interviews were conducted: 16 with physicians and 5 with health care administrators. The method of analysis integrated grounded theory philosophy to identify the treatment decision-making process in non-small cell lung cancer from the physician perspective.

Findings: The theory depicts the treatment decision-making process as involving five key "guides" (or factors): the unique patient, the unique physician, the family, the clinical team, and the clinical evidence.

Conclusion: Decision-making roles in lung cancer are complex and nuanced. The use of evidence, such as clinical practice guidelines, is one of many considerations. The dynamic process of treatment decision-making is informed by a wide array of sources and factors: people, emotions, preferences, clinical expertise, experiences, and clinical evidence. This theory of the treatment decision-making process (from the physician perspective) has implications for treatment decision-making research, theory development, and guideline development for non-small cell lung cancer. / Master of Science (MSc)