91

Toward a Theory of Auto-modeling

Yiran Jiang (16632711) 25 July 2023
Statistical modeling aims at constructing a mathematical model for an existing data set. As a comprehensive concept, statistical modeling leads to a wide range of interesting problems. Modern parametric models, such as deep nets, have achieved remarkable success in quite a few application areas with massive data. Although powerful in practice, many fitted over-parameterized models may lose good statistical properties. For this reason, a new framework named Auto-modeling (AM) is proposed. Philosophically, the mindset is to fit models to future observations rather than to the observed sample. Technically, after choosing an imputation model for generating future observations, we fit models to future observations by optimizing an approximation to the desired expected loss function, based on its sample counterpart and what we call an adaptive "duality function".

The first part of the dissertation (Chapters 2 to 7) focuses on the new philosophical perspective of the method as well as the details of the main framework. Technical details, including essential theoretical properties of the method, are also investigated. We also demonstrate the superior performance of the proposed method via three applications: the many-normal-means problem, n < p linear regression, and image classification.

The second part of the dissertation (Chapter 8) focuses on the application of the AM framework to the construction of linear regression models. Our primary objective is to shed light on the stability issue associated with commonly used data-driven model selection methods such as cross-validation (CV). Furthermore, we highlight the philosophical distinctions between CV and AM. Theoretical properties and numerical examples presented in the study demonstrate the potential and promise of AM-based linear model selection. Additionally, we have devised a conformal prediction method specifically tailored for quantifying the uncertainty of AM predictions in the context of linear regression.
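Chapter 8 of the dissertation contrasts AM with cross-validation (CV) for linear model selection. Purely as a point of reference, and not as an implementation of AM, a minimal sketch of the CV baseline on a synthetic n < p regression problem might look as follows; all names and settings are illustrative.

# Cross-validation baseline for n < p linear model selection (illustrative only)
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p, k = 80, 200, 5                       # n < p; only k coefficients are nonzero
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 2.0
y = X @ beta + rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
cv_model = LassoCV(cv=5).fit(X_tr, y_tr)   # data-driven (CV) choice of the penalty
print("selected nonzero coefficients:", int(np.sum(cv_model.coef_ != 0)))
print("test MSE:", float(np.mean((cv_model.predict(X_te) - y_te) ** 2)))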
92

Supervised Machine Learning Modeling in Secondary Metallurgy: Predicting the Liquid Steel Temperature After the Vacuum Treatment Step of the Vacuum Tank Degasser

Vita, Roberto January 2022
In recent years the steelmaking industry has made continuous attempts to improve its production route. The main goals have been to increase competitiveness and to reduce the environmental impact. The development of predictive models has therefore been of crucial importance in order to achieve such optimization. Models are representations or idealizations of reality which can be used to investigate new process strategies without the need for intervention in the process itself. Together with the development of Industry 4.0, Machine Learning (ML) has emerged as a promising modeling approach for the steel industry. However, ML models are generally difficult to interpret, which makes it complicated to investigate whether the model accurately represents reality. The present work explores the practical usefulness of applied ML models in the context of the secondary metallurgy processes in steelmaking. In particular, the application of interest is the prediction of the liquid steel temperature after the vacuum treatment step in the Vacuum Tank Degasser (VTD). The choice of the VTD process step is related to its emerging importance in the SSAB Oxelösund steel plant due to the planned future investment in an Electric Arc Furnace (EAF) based production line. The temperature is an important process-control parameter after the vacuum treatment since it directly influences the castability of the steel. Furthermore, few models are available that predict the temperature after the vacuum treatment step. The thesis first gives a literature background on the statistical modeling approach, mainly addressing ML, and on the VTD process. It then reports the methodology behind the construction of the ML model for the application of interest and the results of the numerical experiments. All the statistical concepts used are explained in the literature section. By using the described methodologies, several findings emerged from the resulting ML models predicting the temperature of the liquid steel after the vacuum treatment in the VTD. A high model complexity is not necessary in order to achieve a high predictive performance on the test data. On the other hand, data quality is the most important factor to take into account when improving the predictive performance. It is fundamental to have expertise in both metallurgy and machine learning in order to create a machine learning model that is both relevant and interpretable to domain experts. This knowledge is essential for the selection of the input data and the machine learning model framework. The most important information for the predictions turned out to be the heat status of the ladle, the stirring process time, and the temperature measurements before and after the vacuum steps. However, to draw specific conclusions, a higher model predictive performance is needed. This can only be obtained upon a significant improvement in data quality. / Stålindustrin har under de senaste åren ständigt förbättrat sin produktionsförmåga som i huvudsak har bidragit till ökad konkurrenskraft och minskad miljöpåverkan. Utvecklingen av prediktiva modeller har under denna process varit av avgörande betydelse för att uppnå dessa bedrifter. Modeller är representationer eller idealiseringar av verkligheten som kan användas för att utvärdera nya processtrategier utan att åberopa ingrepp i själva processen. Detta sparar industrin både tid och pengar.
I takt med Industri 4.0 har maskininlärning blivit uppmärksammad som ett ytterligare modelleringsförfarande inom stålindustrin. Maskininlärningsmodeller är dock generellt svårtolkade, vilket gör det utmanande att undersöka om modellen representerar verkligheten. Detta arbete undersöker den praktiska användningen av maskininlärningsmodeller inom sekundärmetallurgin på ett svenskt stålverk. Tillämpningen är i synnerhet av intresse för att kunna förutspå temperaturen hos det flytande stålet efter vakuumbehandlingssteget i VTD-processen. Denna process valdes eftersom den är av stor betydelse för framtida ståltillverkning hos SSAB i Oxelösund. Detta är primärt på grund utav att SSAB kommer att investera i en ljusbågsugnsbaserad produktionslinje. Temperaturen är en viktig processparameter eftersom den direkt påverkar stålets gjutbarhet. Utöver detta har inga omfattande arbeten gjorts gällande att förutspå temperaturen efter vakuumbehandlingssteget med hjälp av maskininlärningsmodeller. Arbetet presenterar först en litteraturbakgrund inom statistisk modellering med fokus på maskininlärning och VTD-processen. Därefter redovisas metodiken som använts för att skapa maskininlärningsmodellerna som ligger till grund för de numeriska experimenten samt resultaten. Genom att använda de beskrivna metoderna härrörde flera fynd från de skapade maskininlärningsmodellerna. En hög grad av komplexitet är inte nödvändig för att uppnå en hög prediktiv förmåga på data som inte använts för att anpassa modellens parametrar. Å andra sidan är datakvalitén den viktigaste faktorn om man ämnar att förbättra den prediktiva förmågan hos modellen. Utöver detta är det av yttersta vikt att ha kompetens inom både metallurgi och maskininlärning för att skapa en modell som är både relevant och tolkbar för experter inom området processmetallurgi. Ideligen är kunskap inom processmetallurgi grundläggande för val av indata och val av maskininlärningsalgoritm. Under analysen av maskininlärningsmodellerna upptäcktes det att skänkens värmestatus, omrörningstiden i processen, samt temperaturriktmärkena före och efter vakuumstegen var de mest avgörande variablerna för modellens prediktiva förmåga. För att kunna dra specifika slutsatser behöver modellen ha en högre prediktiv förmåga. Detta kan endast erhållas efter en betydande förbättring av datakvalitén.
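As a rough, hypothetical illustration of the workflow described above (not the author's code or SSAB's data), the sketch below compares a simple linear model with a more complex ensemble for predicting the post-vacuum temperature; the file name and feature names are placeholders.

# Illustrative comparison of a simple and a complex regressor for VTD temperature
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("vtd_heats.csv")          # assumed: one row per heat, numeric columns
features = ["ladle_heat_status", "stirring_time_min",
            "temp_before_vacuum_C", "temp_after_vacuum_C"]
X, y = df[features], df["temp_after_treatment_C"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("gradient boosting", GradientBoostingRegressor(random_state=42))]:
    model.fit(X_tr, y_tr)
    print(name, "test MAE [deg C]:", mean_absolute_error(y_te, model.predict(X_te)))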
93

Customer Lifetime Value Prediction Using Statistical Modeling: Predicting Online Payments in an Industry Setting / Kundens livstidsvärde förutsägelse med statistisk modellering: predikera online betalningar i en industriell miljö

SUSOYKINA, Alina January 2018
Customer lifetime value (CLV) provides a measure of customer revenue contribution. It makes it possible to justify marketing campaigns, overall budgeting, and strategy planning. CLV is an estimate of the future cash flow attributed to the entire relationship with a customer. The ability to utilize the information gained from CLV analysis in the most efficient way provides a strong competitive advantage. The concept of CLV was studied and modeled with application to the online payments industry, which is relatively new and still in its growth phase. The ability to predict CLV accurately is of great value for guiding the industry (i.e., emerging companies) to maturity. CLV analysis in this case becomes complex because the databases of such companies are usually huge and include transactions from different industries: e-commerce, financial services, travel, gaming, etc. This paper aims to define an appropriate model for CLV prediction in the online payments setting. The proposed approach first segments customers in order to improve the performance of the predictive model. A Pareto/NBD model is then applied to predict CLV at the customer level for each customer segment separately. Although the results show that it is possible to predict CLV to some extent, the model needs to be further improved and possible pitfalls need to be scrutinized. A discussion of these issues is provided in the following sections. / Kundens livstidsvärde (Customer lifetime value) är ett mått på hur en kund bidrar till företagets omsättning. Det tillåter att åskådliggöra försäljningskampanjer, företagets budget och företagets strategi. Kundens livstidsvärde är en estimering av betalningsflöde som ett företag kan tjäna av kunder i framtiden. Möjligheten att nyttiggöra informationen från kundens livstidsvärde analys ger företag en starkt konkurrenskraftig fördel. Kundens livstidsvärde var studerat och modellerat med anknytning till online betalningstjänster industri, vilken har utvecklats kraftigt inom senaste åren. Möjligheten att predikera kundens livstidsvärde med hög noggrannhet medför ett starkt värde för företag som erbjuder tjänster inom online betalningar och kan driva dessa till mognad. Att predikera kundens livstidsvärde inom denna bransch anses vara en komplex process, då databaser hos sådana företag är stora och inkluderar information om transaktioner från olika industrier såsom: elektronisk handel, finansiella tjänster, rese- och spelbolag. I denna studie definieras en modell för att kunna predikera kundens livstidsvärde baserat på data från ett företag som tillhandahåller online betalningstjänster. För att uppnå bättre prestanda, segmenterar den föreslagna modellen kunder först. Därefter en Pareto/NBD modell används, för att predikera kundens livstidsvärde för varje kundsegment. Trots att resultat visar att kundens livstidsvärde kan modelleras till en viss nivå, modellen behöver förbättras och möjliga blindskär måste granskas.
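For readers unfamiliar with the Pareto/NBD step, a minimal sketch of segment-wise fitting is shown below. It assumes the third-party Python package lifetimes and hypothetical RFM column names; the thesis does not name a specific implementation, so treat this as one possible realization.

# Segment customers, then fit a Pareto/NBD model per segment (illustrative sketch)
import pandas as pd
from sklearn.cluster import KMeans
from lifetimes import ParetoNBDFitter

rfm = pd.read_csv("customer_rfm.csv")      # assumed columns: frequency, recency, T, monetary
rfm["segment"] = KMeans(n_clusters=3, random_state=0).fit_predict(
    rfm[["frequency", "monetary"]])

horizon_days = 180
for seg, grp in rfm.groupby("segment"):
    pnbd = ParetoNBDFitter(penalizer_coef=0.01)
    pnbd.fit(grp["frequency"], grp["recency"], grp["T"])
    expected_tx = pnbd.conditional_expected_number_of_purchases_up_to_time(
        horizon_days, grp["frequency"], grp["recency"], grp["T"])
    # Crude CLV proxy: expected transactions times average historical transaction value
    clv = expected_tx * grp["monetary"]
    print(f"segment {seg}: mean predicted {horizon_days}-day CLV = {clv.mean():.2f}")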
94

Exploration and development of crash modification factors and functions for single and multiple treatments

Park, Juneyoung 01 January 2015
Traffic safety is a major concern for the public, and it is an important component of roadway management strategy. In order to improve highway safety, extensive efforts have been made by researchers, transportation engineers, and federal, state, and local government officials. With these consistent efforts, both fatality and injury rates from road traffic crashes in the United States declined steadily over the six years from 2006 to 2011. However, according to the National Highway Traffic Safety Administration (NHTSA, 2013), 33,561 people died in motor vehicle traffic crashes in the United States in 2012, compared to 32,479 in 2011, the first increase in fatalities since 2005. Moreover, in 2012, an estimated 2.36 million people were injured in motor vehicle traffic crashes, compared to 2.22 million in 2011. In response to the demand for highway safety improvements through systematic analysis of specific roadway cross-section elements and treatments, the Highway Safety Manual (HSM) (AASHTO, 2010) was developed by the Transportation Research Board (TRB) to introduce a science-based technical approach to safety analysis. One of the main parts of the HSM, Part D, contains crash modification factors (CMFs) for various treatments on roadway segments and at intersections. A CMF is a factor that estimates the potential change in crash frequency as a result of implementing a specific treatment (or countermeasure). CMFs in Part D have been developed using high-quality observational before-after studies that account for the regression-to-the-mean threat. Observational before-after studies are the most common methods for evaluating safety effectiveness and calculating CMFs of specific roadway treatments. The cross-sectional method has also commonly been used to derive CMFs, since the required data are easier to collect than for before-after methods. Although various CMFs have been calculated and introduced in the HSM, critical limitations remain that need to be investigated. First, the HSM provides CMFs for single treatments, but not for multiple treatments applied to roadway segments. The HSM suggests that CMFs be multiplied to estimate the combined safety effects of single treatments. However, the HSM cautions that multiplying CMFs may over- or under-estimate the combined effects of multiple treatments. In this dissertation, several methodologies are proposed to estimate more reliable combined safety effects in both observational before-after studies and the cross-sectional method. Averaging the two best combining methods is suggested to account for the effects of over- or under-estimation. Moreover, it is recommended to develop adjustment factors and functions (i.e., weighting factors and functions) to estimate safety performance more accurately when assessing the safety effects of multiple treatments. Multivariate adaptive regression splines (MARS) modeling is proposed in this dissertation to avoid the over-estimation problem by accounting for interaction effects between variables. Second, the variation of CMFs with different roadway characteristics among treated sites over time is ignored, because a CMF is a fixed value that represents the overall safety effect of the treatment for all treated sites over a specific time period. Recently, a few studies have developed crash modification functions (CMFunctions) to overcome this limitation.
However, although previous studies assessed the effect of a single variable such as AADT on the CMFs, there is a lack of prior studies on the variation in the safety effects of treated sites with multiple different roadway characteristics over time. In this study, various multivariate linear and nonlinear modeling techniques are adopted to develop CMFunctions. Multiple linear regression modeling can be utilized to consider multiple roadway characteristics. To reflect the nonlinearity of predictors, a regression model with a nonlinearizing link function needs to be developed. The Bayesian approach can also be adopted due to its strength in avoiding the over-fitting that occurs when the number of observations is limited and the number of variables is large. Moreover, two data mining techniques (gradient boosting and MARS) are suggested 1) to achieve better performance of CMFunctions with consideration of variable importance, and 2) to reflect both the nonlinear trend of predictors and interaction effects between variables at the same time. Third, the nonlinearity of variables in the cross-sectional method is not discussed in the HSM. Generally, the cross-sectional method is also known as the safety performance function (SPF) approach, and a generalized linear model (GLM) is applied to estimate SPFs. However, CMFs estimated from a GLM cannot account for the nonlinear effect of a treatment, since the coefficients in the GLM are assumed to be fixed. In this dissertation, applications of the generalized nonlinear model (GNM) and MARS in the cross-sectional method are proposed. In GNMs, the nonlinear effects of independent variables on crash frequency can be captured through the development of a nonlinearizing link function. Moreover, MARS accommodates nonlinearity of independent variables and interaction effects for complex data structures. In this dissertation, CMFs and CMFunctions are estimated for various single treatments and combinations of treatments for different roadway types (e.g., rural two-lane and rural multi-lane roadways, urban arterials, freeways), as below:
1) Treatments for the roadway mainline: adding a thru lane; converting 4-lane undivided roadways to 3-lane with a two-way left-turn lane (TWLTL).
2) Treatments for the roadway shoulder: installing shoulder rumble strips; widening the shoulder; adding bike lanes; changing bike lane width; installing roadside barriers.
3) Treatments related to roadside features: decreasing driveway density; decreasing the density of roadside poles; increasing the distance to roadside poles; increasing the distance to trees.
The expected contributions of this study are to 1) suggest approaches to estimate more reliable safety effects of multiple treatments, 2) propose methodologies to develop CMFunctions that assess the variation of CMFs with different characteristics among treated sites, and 3) recommend applications of GNM and MARS to simultaneously consider interactions between variables and the nonlinearity of predictors. Finally, potential relevant applications beyond the scope of this research but worth investigating in the future are discussed in this dissertation.
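The cross-sectional (SPF) route mentioned above can be illustrated with a short sketch: fit a negative binomial GLM as a safety performance function and read a CMF off a treatment-related coefficient. The data file, column names, and the +2 ft scenario are assumptions for illustration, not results from the dissertation.

# Cross-sectional SPF with a negative binomial GLM and a derived CMF (illustrative)
import numpy as np
import pandas as pd
import statsmodels.api as sm

seg = pd.read_csv("segments.csv")          # assumed: crashes, aadt, shoulder_width_ft, length_mi
X = pd.DataFrame({
    "log_aadt": np.log(seg["aadt"]),
    "shoulder_width_ft": seg["shoulder_width_ft"],
})
X = sm.add_constant(X)
spf = sm.GLM(seg["crashes"], X,
             family=sm.families.NegativeBinomial(),
             offset=np.log(seg["length_mi"])).fit()

# CMF for widening the shoulder by 2 ft, assuming a fixed log-linear effect
beta = spf.params["shoulder_width_ft"]
cmf_2ft = np.exp(beta * 2.0)
print("estimated CMF for +2 ft shoulder:", round(cmf_2ft, 3))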
95

Evaluating Factors Contributing to Crash Severity Among Older Drivers: Statistical Modeling and Machine Learning Approaches

Alrumaidhi, Mubarak S. M. S. 23 February 2024
Road crashes pose a significant public health issue worldwide, often leading to severe injuries and fatalities. This dissertation embarks on a comprehensive examination of the factors affecting road crash severity, with a special focus on older drivers and the unique challenges introduced by the COVID-19 pandemic. Utilizing a dataset from Virginia, USA, the research integrates advanced statistical methods and machine learning techniques to dissect this critical issue from multiple angles. The initial study within the dissertation employs multilevel ordinal logistic regression to assess crash severity among older drivers, revealing the complex interplay of various factors such as crash type, road attributes, and driver behavior. It highlights the increased risk of severe crashes associated with head-on collisions, driver distraction or impairment, and the non-use of seat belts, specifically affecting older drivers. These findings are pivotal in understanding the unique vulnerabilities of this demographic on the road. Furthermore, the dissertation explores the efficacy of both parametric and non-parametric machine learning models in predicting crash severity. It emphasizes the innovative use of synthetic resampling techniques, particularly random over-sampling examples (ROSE) and synthetic minority over-sampling technique (SMOTE), to address class imbalances. This methodological advancement not only improves the accuracy of crash severity predictions for severe crashes but also offers a comprehensive understanding of diverse factors, including environmental and roadway characteristics. Additionally, the dissertation examines the influence of the COVID-19 pandemic on road safety, revealing a paradoxical decrease in overall traffic crashes accompanied by an increase in the rate of severe injuries. This finding underscores the pandemic's transformative effect on driving behaviors and patterns, heightening risks for vulnerable road users like pedestrians and cyclists. The study calls for adaptable road safety strategies responsive to global challenges and societal shifts. Collectively, the studies within this dissertation contribute substantially to transportation safety research. They demonstrate the complex nature of factors influencing crash severity and the efficacy of tailored approaches in addressing these challenges. The integration of advanced statistical methods with machine learning techniques offers a profound understanding of crash dynamics and sets a new benchmark for future research in transportation safety. This dissertation underscores the evolving challenges in road safety, especially amidst demographic shifts and global crises, and advocates for adaptive, evidence-based strategies to enhance road safety for all, particularly vulnerable groups like the older drivers. / Doctor of Philosophy / Road crashes are a major concern worldwide, often leading to serious injuries and loss of life. This dissertation delves into the critical issue of road crash severity, with a special focus on older drivers and the challenges brought about by the COVID-19 pandemic. Drawing on data from Virginia, USA, the research combines cutting-edge statistical methods and machine learning to shed light on this pressing matter. One important part of the research focuses on older drivers. It uses advanced analysis to find out why crashes involving this group might be more serious. 
The study discovered that situations like head-on collisions, driver distraction or impairment, and not wearing seat belts greatly increase the risk for older drivers. Understanding these risks is crucial in identifying the special needs of older drivers on the road. Then, the study explores the power of machine learning in predicting crash severity. Here, the research stands out by using innovative techniques to balance out the data, leading to more accurate predictions. This part of the study not only improves our understanding of what leads to severe crashes but also highlights how different environmental and road factors play a role. Following this, the research looks at how the COVID-19 pandemic has impacted road safety. Interestingly, while the overall number of crashes went down during the pandemic, the rate of severe injuries in the crashes that occurred increased. This suggests that the pandemic changed driving behaviors, posing increased risks especially to pedestrians and cyclists. In summary, this dissertation offers valuable insights into the complex factors affecting road crash severity. It underscores the importance of using advanced analysis techniques to understand these dynamics better, especially in the face of demographic changes and global challenges like the pandemic. The findings are not just academically significant; they provide practical guidance for policymakers and road safety experts to develop strategies that make roads safer for everyone, particularly older drivers.
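A minimal sketch of the resampling idea described above, oversampling the rare severe-crash class with SMOTE before fitting a classifier, is given below; the crash data file, feature names, and choice of classifier are hypothetical and stand in for the dissertation's actual models.

# SMOTE-based rebalancing before crash-severity classification (illustrative sketch)
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

crashes = pd.read_csv("older_driver_crashes.csv")
X = crashes[["head_on", "distracted", "seat_belt_used", "speed_limit", "dark"]]
y = crashes["severity"]                    # e.g., 0 = no/minor injury, 1 = severe/fatal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
X_res, y_res = SMOTE(random_state=1).fit_resample(X_tr, y_tr)   # balance the classes

clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))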
96

A Statistical Approach to Modeling Wheel-Rail Contact Dynamics

Hosseini, SayedMohammad 12 January 2021
The wheel-rail contact mechanics and dynamics that are of great importance to the railroad industry are evaluated by applying statistical methods to the large volume of data that is collected on the VT-FRA state-of-the-art roller rig. The intent is to use the statistical principles to highlight the relative importance of various factors that exist in practice to longitudinal and lateral tractions and to develop parametric models that can be used for predicting traction in conditions beyond those tested on the rig. The experiment-based models are intended to be an alternative to the classical traction-creepage models that have been available for decades. Various experiments are conducted in different settings on the VT-FRA Roller Rig at the Center for Vehicle Systems and Safety at Virginia Tech to study the relationship between the traction forces and the wheel-rail contact variables. The experimental data is used to entertain parametric and non-parametric statistical models that efficiently capture this relationship. The study starts with single regression models and investigates the main effects of wheel load, creepage, and the angle of attack on the longitudinal and lateral traction forces. The assumptions of the classical linear regression model are carefully assessed and, in the case of non-linearities, different transformations are applied to the explanatory variables to find the closest functional form that captures the relationship between the response and the explanatory variables. The analysis is then extended to multiple models in which interaction among the explanatory variables is evaluated using model selection approaches. The developed models are then compared with their non-parametric counterparts, such as support vector regression, in terms of "goodness of fit," out-of-sample performance, and the distribution of predictions. / Master of Science / The interaction between the wheel and rail plays an important role in the dynamic behavior of railway vehicles. The wheel-rail contact has been extensively studied through analytical models, and measuring the contact forces is among the most important outcomes of such models. However, these models typically fall short when it comes to addressing the practical problems at hand. With the development of a high-precision test rig—called the VT-FRA Roller Rig, at the Center for Vehicle Systems and Safety (CVeSS)—there is an increased opportunity to tackle the same problems from an entirely different perspective, i.e. through statistical modeling of experimental data. Various experiments are conducted in different settings that represent railroad operating conditions on the VT-FRA Roller Rig, in order to study the relationship between wheel-rail traction and the variables affecting such forces. The experimental data is used to develop parametric and non-parametric statistical models that efficiently capture this relationship. The study starts with single regression models and investigates the main effects of wheel load, creepage, and the angle of attack on the longitudinal and lateral traction forces. The analysis is then extended to multiple models, and the existence of interactions among the explanatory variables is examined using model selection approaches. The developed models are then compared with their non-parametric counterparts, such as support vector regression, in terms of "goodness of fit," out-of-sample performance, and the distribution of the predictions. 
The study develops regression models that are able to accurately explain the relationship between traction forces, wheel load, creepage, and the angle of attack.
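The parametric-versus-non-parametric comparison described above might look roughly like the sketch below; the data file, variable names, and the particular nonlinearizing transform are assumptions, not the thesis's choices.

# Transformed linear regression versus SVR for traction prediction (illustrative)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score

runs = pd.read_csv("roller_rig_runs.csv")  # assumed: wheel_load, creepage, angle_of_attack, traction
X = runs[["wheel_load", "angle_of_attack"]].copy()
X["tanh_creepage"] = np.tanh(runs["creepage"])   # one possible nonlinearizing transform
y = runs["longitudinal_traction"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
models = [("linear (transformed)", LinearRegression()),
          ("SVR", make_pipeline(StandardScaler(), SVR(C=10.0)))]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "out-of-sample R^2:", round(r2_score(y_te, model.predict(X_te)), 3))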
97

Problèmes de comportement à long terme chez les patients pédiatriques atteints de leucémie lymphoblastique aiguë [Long-term behavioral problems in pediatric patients with acute lymphoblastic leukemia]

Marcoux, Sophie 12 1900
Les améliorations dans les protocoles de traitement pour la majorité des cancers pédiatriques ont augmenté de façon marquée les taux de survie. Cependant, des risques élevés de multiples problèmes de santé chez les survivants sont bien documentés. En ce qui concerne spécifiquement les problèmes neuropsychologiques, les principaux facteurs de risque individuels connus à ce jour (l’âge au diagnostic, le genre du patient, l’exposition aux radiations) demeurent insuffisants pour cibler efficacement et prévenir les séquelles à long terme. Les objectifs généraux de cette thèse étaient : 1) la caractérisation des trajectoires individuelles de problèmes de comportement chez une population de patients pédiatriques atteints de leucémie lymphoblastique aiguë; 2) l’identification des principaux déterminants génétiques, médicaux et psychosociaux associés aux problèmes de comportements. Les hypothèses étaient : 1) Il existe une association entre les trajectoires individuelles de problèmes de comportement et a - des facteurs psychosociaux liés au fonctionnement familial, b - des polymorphismes dans les gènes modérateurs des effets thérapeutiques du méthotrexate et des glucocorticoïdes, c - des variables liées aux traitements oncologiques. 2) L'utilisation de modèles statistiques multi-niveaux peut permettre d’effectuer cette caractérisation des trajectoires individuelles et l’identification des facteurs de risque associés. 138 patients pédiatriques (0-18 ans) ayant reçu un diagnostic de leucémie lymphoblastique aiguë entre 1993 et 1999 au CHU Ste-Justine ont participé à une étude longitudinale d’une durée de 4 ans. Un instrument validé et standardisés, le Child Behavior Checklist, a été utilisé pour obtenir un indice de problèmes de comportement, tel que rapporté par la mère, au moment du diagnostic, puis 1, 2, 3 et 4 ans post-diagnostic. Des données génétiques, psychosociales et médicales ont aussi été collectées au cours de cette même étude longitudinale, puis ont été exploitées dans les modélisations statistiques effectuées. Les résultats obtenus suggèrent que les problèmes de comportement de type internalisés et externalisés possèdent des trajectoires et des facteurs de risque distincts. Les problèmes internalisés sont des manifestations de troubles affectifs chez le patient, tels que des symptômes dépressifs ou anxieux, par exemple. Ceux-ci sont très prévalents tôt après le diagnostic et se normalisent par la suite, indiquant des difficultés significatives, mais temporaires. Des facteurs médicaux exacerbant l'expérience de stress, soit le risque de rechute associé au diagnostic et les complications médicales affectant la durée de l'hospitalisation, ralentissent cette normalisation. Les problèmes externalisés se manifestent dans le contact avec autrui; des démonstrations d’agression ou de violence font partie des symptômes. Les problèmes externalisés sont plus stables dans le temps relativement aux problèmes internalisés. Des variables pharmacologiques et génétiques contribuent aux différences individuelles : l'administration d’un glucocorticoïde plus puissant du point de vue des effets pharmacologiques et toxicologiques, ainsi que l’homozygotie pour l’haplotype -786C844T du gène NOS3 sont liés à la modulation des scores de problèmes externalisés au fil du temps. 
Finalement, le niveau de stress familial perçu au diagnostic est positivement corrélé avec le niveau initial de problèmes externalisés chez le patient, tandis que peu après la fin de la période d’induction, le niveau de stress familial est en lien avec le niveau initial de problèmes internalisés. Ces résultats supportent l'idée qu'une approche holistique est essentielle pour espérer mettre en place des interventions préventives efficaces dans cette population. À long terme, ces connaissances pourraient contribuer significativement à l'amélioration de la qualité de vie des patients. Ces travaux enrichissent les connaissances actuelles en soulignant les bénéfices des suivis longitudinaux et multidisciplinaires pour comprendre la dynamique de changement opérant chez les patients. Le décloisonnement des savoirs semble devenir incontournable pour aspirer dépasser le cadre descriptif et atteindre un certain niveau de compréhension des phénomènes observés. Malgré des défis méthodologiques et logistiques évidents, ce type d’approche est non seulement souhaitable pour étudier des processus dynamiques, mais les travaux présentés dans cette thèse indiquent que cela est possible avec les moyens analytiques actuels. / Recent improvements in pediatric cancers treatment have led to marked increases in patient survival rate. However, it has been well documented that pediatric cancer survivors are at elevated risk for various other health problems. With respect specifically to neuropsychological side effects, known predictors (mainly: age at diagnosis, patient gender, exposure to radiation therapy) remain insufficient so far to target, and prevent efficiently, long term sequelae in this population. General objectives related to this thesis were: 1) characterization of individual trajectories of behavioral problems in pediatric patients with acute lymphoblastic leukemia; 2) the identification of genetic, medical and psychosocial determinants of behavioral problems in this population. This research program was based on the following hypotheses: 1) there is an association between the trajectories of individual behavioral problems and a – familial well-being-related psychosocial factors, b – gene polymorphisms involved in the therapeutic responses to methotrexate and glucocorticoids, c – anti-cancer treatments-related variables. 2) Multilevel statistical modeling can be used to characterize patient groups according to their individual behavioral problem trajectories, and can also identify predictive factors. 138 pediatric patients (0-18 years old) who received an acute lymphoblastic leukemia diagnosis between 1993 and 1999 at CHU Ste-Justine participated in this 4 years-long longitudinal study. A standardized and validated instrument, the Child Behavior Checklist, was used to measure behavior problems, as reported by the mother, at diagnosis, and then 1, 2, 3 and 4 years post-diagnosis. Genetic, psychosocial and medical data were also collected during this longitudinal study; these data were exploited in the context of the statistical modeling performed. Results obtained suggest that internalized and externalized behavioral problems have distinct trajectories and have different predictive factors. Internalized problems are affective issues presented by the patient, such as depressive or anxious symptoms. They are highly prevalent post-diagnosis and normalize over the following years, suggestive of temporary yet significant problems. 
Stress-enhancing medical variables such as a higher relapse risk at diagnosis and medical complications requiring a longer hospitalization slow down the normalization process. Externalized problems need interpersonal contact to occur; violence or aggressiveness manifestations are some examples. Compared to internalized problems, externalized problems are much more stable across time. However, pharmacological and genetic variables do contribute to individual differences in trajectories. In particular, administration of a more potent glucocorticoid (from pharmacological and toxicological perspectives) and being homozygous for NOS3 gene -786C844T haplotype are linked to modulation of externalized problems in time. Finally, the level of perceived family stress at time of diagnosis is positively correlated with initial externalized problems, while shortly after the induction period, the level of familial stress is linked with the initial internalized problems. Together, these results support the idea that a holistic care strategy is essential to develop efficient, preventive interventions in this population, due to the multifactorial nature of these behavioral problems. The knowledge generated in the present studies could contribute to better quality of life for these patients. This thesis also brings a more holistic contribution to our current knowledge of behavioral problems in this population, by highlighting the need for individual, multidisciplinary follow-ups, with particular emphasis on repeated measurements and appropriate statistical analyses. More than ever, knowledge de-compartmentalization appears essential in reaching a certain comprehension level of observed phenomena, rather than adhering to descriptive settings. It indicates that, despite obvious methodological and logistic challenges, this type of research is not only desirable in studying dynamic processes, but is certainly achievable with current analytical tools.
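A hedged sketch of the kind of multilevel (mixed-effects) trajectory model described above, with repeated behavior scores nested within patients and a random intercept and slope per patient, is shown below; the variable names are placeholders rather than the study's actual data.

# Mixed-effects model for behavior-problem trajectories (illustrative sketch)
import pandas as pd
import statsmodels.formula.api as smf

long = pd.read_csv("cbcl_long.csv")        # assumed: one row per patient per assessment year
# internalizing score over time, adjusted for relapse-risk group and family stress at diagnosis
model = smf.mixedlm(
    "internalizing ~ years_post_dx + relapse_risk + family_stress_dx",
    data=long,
    groups=long["patient_id"],
    re_formula="~years_post_dx",           # random intercept and slope for each patient
).fit()
print(model.summary())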
98

Modélisation de la pharmacocinétique et des mécanismes d’action intracellulaire du 5-fluorouracile : applications à l’étude de la variabilité de l’effet thérapeutique en population et à l’innovation thérapeutique / Modeling of pharmacokinetics and intracellular mechanisms of action of 5-fluorouracil : applications to the study of the therapeutic effect variability in population and therapeutic innovation

Bodin, Justine 24 September 2010
Les traitements existants des métastases hépatiques du cancer colorectal montrent une efficacité insuffisante. Le projet GR5FU visait à améliorer cette efficacité et consistait à délivrer le 5-fluorouracile (5FU) dans le foie via son encapsulation dans des globules rouges (GR). Dans ce contexte, la modélisation visait à prédire la quantité de 5FU à encapsuler dans les GR pour atteindre une efficacité équivalente à celle du 5FU standard. Dans cette thèse, nous avons construit et implémenté un modèle mathématique multi-échelle qui relie l’injection du 5FU à son efficacité sur la croissance tumorale en intégrant sa pharmacocinétique et son mécanisme d’action intracellulaire. Des simulations de population de ce modèle, s’appuyant sur des paramètres de la littérature, nous ont permis (i) de reproduire des résultats cliniques montrant le pouvoir prédictif de l’enzyme Thymidylate Synthase (TS) et (ii) d’identifier deux prédicteurs potentiels de la réponse au 5FU à l’échelle d’une population virtuelle, en complément du niveau de TS : la vitesse de croissance tumorale et le métabolisme intracellulaire des pyrimidines. Nous avons également analysé, à l’aide de modèles à effets mixtes, (i) la croissance in vivo de la tumeur intra-hépatique VX2 sans traitement, tenant lieu de modèle animal de métastase hépatique, et (ii) la distribution plasmatique et hépatique du 5FU chez l’animal. Cette modélisation statistique nous a permis d’identifier les modèles décrivant des données expérimentales, d’estimer les paramètres de ces modèles et leur variabilité, et de générer une meilleure connaissance de la croissance de la tumeur VX2 et de la pharmacocinétique animale du 5FU, en particulier hépatique. Dans cette thèse, nous avons illustré comment l’intégration du métabolisme d’un médicament et de son mécanisme d’action dans un modèle global et la simulation de ce modèle à l’échelle d’une population virtuelle, constituent une approche prometteuse pour optimiser le développement d’hypothèses thérapeutiques innovantes en collaboration avec des expérimentateurs. / Existing treatments for liver metastases of colorectal cancer show a lack of efficacy. In order to improve the prognosis of patients, the GR5FU project has been implemented. It consisted in delivering the drug 5-fluorouracil (5FU) in the liver via its encapsulation in red blood cells (RBC) to increase its efficacy / toxicity ratio. In this context, the modeling aimed at predicting the amount of 5FU to encapsulate in RBC to achieve an efficacy equivalent to standard 5FU. In this thesis, we have created and implemented a multiscale mathematical model that links the injection of 5FU to its efficacy on tumor growth by integrating its pharmacokinetics and mechanism of intracellular action. Population simulations of this model, using parameters from the literature, allowed us (i) to reproduce clinical results showing the predictive power of TS enzyme level and (ii) to identify two potential predictors of response to 5FU at the level of a population of virtual patients, in addition to TS level. We also analyzed, using mixed effects models, (i) the in vivo growth of intrahepatic VX2 tumor without treatment, serving as an animal model of liver metastasis, and (ii) the distribution of 5FU in the animal’s organism. This statistical modelization enabled us to identify the models describing experimental data, to estimate the parameters of these models and their variability, and generate a better knowledge of VX2 tumor growth and animal 5FU pharmacokinetics. 
In this thesis, we illustrated how the integration of drug metabolism and its mechanism of action in a global model, together with the simulation of this model at the scale of a virtual population, forms a promising approach for optimizing the development of innovative therapeutic hypotheses in collaboration with experimentalists.
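As a purely illustrative toy of the multiscale idea described above, the sketch below couples a one-compartment pharmacokinetic equation to a logistic tumor growth term with concentration-dependent kill; all parameter values are invented for the sketch and are not the thesis's estimates.

# Toy PK / tumor-growth coupling (illustrative only; parameters are made up)
import numpy as np
from scipy.integrate import odeint

def model(state, t, k_el, r, K, k_kill, dose_rate):
    c, v = state                           # plasma concentration, tumor volume
    dc = dose_rate(t) - k_el * c           # infusion input minus first-order elimination
    dv = r * v * (1.0 - v / K) - k_kill * c * v
    return [dc, dv]

def infusion(t):                           # constant-rate infusion for the first 48 h
    return 1.0 if t < 48.0 else 0.0

t = np.linspace(0.0, 240.0, 1000)          # hours
sol = odeint(model, y0=[0.0, 0.5], t=t,
             args=(0.35, 0.03, 10.0, 0.02, infusion))
print("tumor volume at day 10:", round(sol[-1, 1], 3))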
99

Theoretical and practical considerations for implementing diagnostic classification models

Kunina-Habenicht, Olga 25 August 2010
Kognitive Diagnosemodelle (DCMs) sind konfirmatorische probabilistische Modelle mit kategorialen latenten Variablen, die Mehrfachladungsstrukturen erlauben. Sie ermöglichen die Abbildung der Kompetenzen in mehrdimensionalen Profilen, die zur Erstellung informativer Rückmeldungen dienen können. Diese Dissertation untersucht in zwei Anwendungsstudien und einer Simulationsstudie wichtige methodische Aspekte bei der Schätzung der DCMs. In der Arbeit wurde ein neuer Mathematiktest entwickelt basierend auf theoriegeleiteten vorab definierten Q-Matrizen. In den Anwendungsstudien (a) illustrierten wir die Anwendung der DCMs für empirische Daten für den neu entwickelten Mathematiktest, (b) verglichen die DCMs mit konfirmatorischen Faktorenanalysemodellen (CFAs), (c) untersuchten die inkrementelle Validität der mehrdimensionalen Profile und (d) schlugen eine Methode zum Vergleich konkurrierender DCMs vor. Ergebnisse der Anwendungsstudien zeigten, dass die geschätzten DCMs meist einen nicht akzeptablen Modellfit aufwiesen. Zudem fanden wir nur eine vernachlässigbare inkrementelle Validität der mehrdimensionalen Profile nach der Kontrolle der Personenparameter bei der Vorhersage der Mathematiknote. Zusammengenommen sprechen diese Ergebnisse dafür, dass DCMs per se keine zusätzliche Information über die mehrdimensionalen CFA-Modelle hinaus bereitstellen. DCMs erlauben jedoch eine andere Aufbereitung der Information. In der Simulationsstudie wurde die Präzision der Parameterschätzungen in log-linearen DCMs sowie die Sensitivität ausgewählter Indizes der Modellpassung auf verschiedene Formen der Fehlspezifikation der Interaktionsterme oder der Q-Matrix untersucht. Die Ergebnisse der Simulationsstudie zeigen, dass die Parameterwerte für große Stichproben korrekt geschätzt werden, während die Akkuratheit der Parameterschätzungen bei kleineren Stichproben z. T. beeinträchtigt ist. Ein großer Teil der Personen wird in Modellen mit fehlspezifizierten Q-Matrizen falsch klassifiziert. / Cognitive diagnostic classification models (DCMs) have been developed to assess the cognitive processes underlying assessment responses. The current dissertation aims to provide theoretical and practical considerations for the estimation of DCMs in educational applications by investigating several important underexplored issues. To avoid problems related to retrofitting DCMs to already existing data, test construction of the newly developed mathematics assessment for primary school (DMA) was based on a priori defined Q-matrices. In this dissertation we compared DCMs with established psychometric models and investigated the incremental validity of DCM profiles over traditional IRT scores. Furthermore, we addressed the issue of verifying the Q-matrix definition. Moreover, we examined the impact of invalid Q-matrix specification on item and respondent parameter recovery and on the sensitivity of selected fit measures. In order to address these issues, one simulation study and two empirical studies illustrating applications of several DCMs were conducted. In the first study we applied DCMs in the general diagnostic modelling framework and compared those models to factor analysis models. In the second study we implemented a complex simulation study and investigated the implications of Q-matrix misspecification on parameter recovery and classification accuracy for DCMs in the log-linear framework. In the third study we applied the results of the simulation study to a practical application based on data from 2,032 students on the DMA.
Presenting arguments for additional gain of DCMs over traditional psychometric models remains challenging. Furthermore, we found only a negligible incremental validity of multivariate proficiency profiles compared to the one-dimensional IRT ability estimate. Findings from the simulation study revealed that invalid Q-matrix specifications led to decreased classification accuracy. Information-based fit indices were sensitive to strong model misspecifications.
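For orientation, the sketch below classifies a single examinee under the DINA model, one common DCM, given a small Q-matrix and known item parameters; it is a generic textbook-style example, not the dissertation's DMA analysis.

# Attribute-profile classification under the DINA model (illustrative example)
import numpy as np
from itertools import product

Q = np.array([[1, 0], [0, 1], [1, 1]])     # 3 items x 2 attributes
slip = np.array([0.1, 0.1, 0.2])
guess = np.array([0.2, 0.2, 0.1])
x = np.array([1, 0, 1])                    # one examinee's item responses

profiles = np.array(list(product([0, 1], repeat=Q.shape[1])))
posterior = []
for alpha in profiles:
    eta = np.all(alpha >= Q, axis=1).astype(float)   # 1 if all required attributes mastered
    p_correct = (1 - slip) ** eta * guess ** (1 - eta)
    lik = np.prod(p_correct ** x * (1 - p_correct) ** (1 - x))
    posterior.append(lik)                  # uniform prior over attribute profiles
posterior = np.array(posterior) / np.sum(posterior)
print("most likely profile:", profiles[np.argmax(posterior)],
      "posterior:", round(float(posterior.max()), 3))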
100

Modelo para a avaliação do risco de crédito de municípios brasileiros / Model for the evaluation of the credit risk of Brazilian cities

Vicente, Ernesto Fernando Rodrigues 22 January 2004
Tanto na área pública como na área privada, as necessidades de financiamento são diretamente proporcionais às decisões de investimento. Para cada unidade monetária a ser investida há a necessidade de se obter fundos para o financiamento desse investimento. Quando são levantadas questões sobre o assunto (necessidades de financiamento) e essas questões são associadas às finanças municipais, surge uma lacuna para a qual, até o momento, não há estudos e/ou pesquisas que forneçam uma resposta sobre como medir o risco de crédito dos municípios brasileiros. A busca dessa resposta é o objetivo deste trabalho. A pesquisa bibliográfica forneceu o aporte teórico, tanto em finanças e crédito, como no uso de modelos econométricos. A análise de modelos de insolvência, aplicados a empresas, contribuiu para orientar os modelos que poderiam ser testados e possivelmente orientados para a análise do risco de crédito dos municípios. A Lei de Responsabilidade Fiscal (LRF), como uma primeira medida para iniciar o processo de gestão responsável, e, provavelmente, em um futuro próximo, a obrigatoriedade de divulgação dos demonstrativos financeiros e auditorias independentes sejam também componentes obrigatórios na gestão municipal, como também a adoção de "ratings" municipais, contribuíram para a motivação do desenvolvimento de um modelo de risco de crédito de municípios. / In the public sector, as in the private sector, financing needs are directly proportional to investment decisions. For each monetary unit to be invested, funds must be obtained to finance that investment. When questions about this issue (financing needs) are raised in connection with municipal finances, a gap appears: to date there have been no studies or research able to answer how to measure the credit risk of Brazilian municipalities. The search for this answer is the objective of the present work. The bibliographic research provided the theoretical basis, both in finance and credit and in the use of econometric models. The analysis of insolvency models applied to companies helped guide the models that could be tested and adapted to municipal credit risk analysis. The Fiscal Responsibility Law (LRF), as a first step toward responsible management, the likely future requirement to disclose financial statements and undergo independent audits as mandatory components of municipal management, and the adoption of municipal ratings all motivated the development of a model of municipal credit risk.
After obtaining financial data on Brazilian municipalities from the official National Treasury site, demographic data (available on CD from the Brazilian Institute of Geography and Statistics' municipal information database), and the judgments of several experts on the credit risk presented by various municipalities, and after treating these data and building a database integrating all the selected information, discriminant function analysis was applied to the resulting database, yielding a statistical model with an accuracy of approximately 70%.
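A minimal sketch of the discriminant-analysis step described above is given below; the file name, indicator columns, and rating labels are assumptions, with the cross-validated hit rate playing the role of the roughly 70% accuracy reported in the thesis.

# Linear discriminant analysis for municipal credit-risk classes (illustrative sketch)
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

muni = pd.read_csv("municipios.csv")       # assumed: one row per municipality
X = muni[["debt_to_revenue", "current_liquidity", "own_revenue_share",
          "population", "gdp_per_capita"]]
y = muni["risk_class"]                     # expert rating, e.g. low / medium / high

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)  # 5-fold cross-validated accuracy
print("cross-validated hit rate:", round(float(scores.mean()), 2))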
