81 |
Density-Dependent Diffusion Models with Biological Applications
FOUAD, AHMED MOHAMED January 2015 (has links)
Diffusion is defined as the movement of a substance down a concentration gradient. The physics of diffusion is well described by Fick's law. A density-dependent diffusion process is one in which the diffusion coefficient is a function of the localized density of the diffusing substance. In my thesis I analyze the density-dependent diffusion behavior of two independent processes of biological interest. The first is tumor growth and invasion. The second is single-file diffusion, which also has considerable biological significance, since it has recently been used to model the motion of proteins along DNA molecules.
Tumor invasion of normal tissue is a complex process, involving cell migration and proliferation. It is useful to mathematically model tumor invasion in an attempt to find a common, underlying mechanism by which primary and metastatic cancers invade and destroy normal tissue. In our approach we make no assumptions about the process of carcinogenesis; that is, we are not modeling the genetic changes which result in transformation, nor do we seek to understand the causes of these changes. Similarly, we do not attempt to model large-scale morphological features of tumors such as central necrosis. Rather, we concentrate on the microscopic-scale population interactions occurring at the tumor-host interface, reasoning that these processes strongly influence the clinically significant manifestations of invasive cancer. We analyze a reaction-diffusion model, due to Gawlinski, Gatenby, and others, of the acid-mediated tumor invasion mechanism that incorporates the ion concentration as a reaction factor affecting tumor growth and invasion. It also adds density-dependent diffusion parameters to the reaction terms, yielding reaction-diffusion equations for the normal, tumor, and acid populations. For reasonable biological parameters we study the fixed points of the partial differential equations central to the model and their stability. The fixed points determine the balance reached by the normal, tumor, and acid populations.
As the second application of density-dependent diffusion, we consider a model for single-file diffusion that is relevant in a variety of biological processes, for example ion transport in channels. The model is of mathematical and physical as well as biological interest because it exhibits an anomalously slow tracer diffusion fundamentally different from diffusion without the single-file restriction. We carry out extensive computer simulations to study the role of particle adhesion and space availability (hard-core exclusion) in the model. Both tracer (tagged-particle) and bulk or collective diffusion are considered. Tracer diffusion focuses on the diffusion of individual particles relative to their starting points, whereas bulk diffusion focuses on the diffusion of the particle distribution as a whole. The nature of the diffusion depends strongly on the initial particle distribution, and both homogeneous and inhomogeneous (for example Gaussian) distributions are considered. For all these models a density-dependent diffusion behavior is confirmed by studying the time evolution of the moments and widths of these distributions. / Physics
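As an illustration of the single-file setup described in this abstract, the following is a minimal Python sketch (not the thesis code) of hard-core exclusion diffusion on a periodic lattice; the lattice size, density, and number of sweeps are arbitrary illustrative choices, and particle adhesion is omitted. Tracking the unwrapped displacement of each tagged particle gives the tracer mean-square displacement, which in single-file diffusion grows anomalously slowly compared with unrestricted diffusion.

```python
import numpy as np

rng = np.random.default_rng(0)

L, N, sweeps = 1000, 200, 20000                        # ring of L sites, N particles, MC sweeps
pos = np.sort(rng.choice(L, size=N, replace=False))    # homogeneous initial distribution
occ = np.zeros(L, dtype=bool)
occ[pos] = True
disp = np.zeros(N)                                     # unwrapped displacement of each tagged particle

for _ in range(sweeps):
    for _ in range(N):                                 # one sweep = N attempted hops
        i = rng.integers(N)
        step = 1 if rng.random() < 0.5 else -1
        target = (pos[i] + step) % L
        if not occ[target]:                            # hard-core exclusion: particles never pass each other
            occ[pos[i]] = False
            occ[target] = True
            pos[i] = target
            disp[i] += step

print("tracer mean-square displacement:", np.mean(disp**2))
```

A Gaussian (inhomogeneous) initial distribution for the bulk-diffusion case can be substituted by drawing the starting positions from a discretized Gaussian instead of the uniform choice above.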
|
82 |
Micromechanics-Based Strength and Lifetime Prediction of Polymer Composites
Bandorawalla, Tozer Jamshed 22 March 2002 (has links)
With the increasing use of composite materials for diverse applications ranging from civil infrastructure to offshore oil exploration, the durability of these materials is an important issue. Practical and accurate models for lifetime will enable engineers to push the boundaries of design and make the most efficient use of composite materials, while at the same time maintaining the utmost standards of safety. The work described in this dissertation is an effort to predict the strength and rupture lifetime of a unidirectional carbon fiber/polymer matrix composite using micromechanical techniques. Sources of material variability are incorporated into these models to predict probabilistic distributions for strength and lifetime. This approach is best suited to calculate material reliability for a desired lifetime under a given set of external conditions.
A systematic procedure, with experimental verification at each important step, is followed to develop the predictive models in this dissertation. The work begins with an experimental and theoretical understanding of micromechanical stress redistribution due to fiber fractures in unidirectional composite materials. In-situ measurements of fiber stress redistribution are made in macromodel composites where the fibers are large enough that strain gages can be mounted directly onto the fibers. The measurements are used to justify and develop a new form of load sharing where the load of the broken fiber is redistributed only onto the nearest adjacent neighbors. The experimentally verified quasi-static load sharing is incorporated into a Monte Carlo simulation for tensile strength modeling. Very good agreement is shown between the predicted and experimental strength distribution of a unidirectional composite.
For the stress-rupture models a time- and temperature-dependent load-sharing analysis is developed to compute stresses due to an arbitrary sequence of fiber fractures. The load sharing is incorporated into a simulation for stress-rupture lifetime. The model can be used to help understand and predict the role of temperature in accelerated measurement of stress-rupture lifetimes. It is suggested that damage in the gripped section of purely unidirectional specimens often leads to inaccurate measurements of rupture lifetime. Hence, rupture lifetimes are measured for [90/0_3]_s carbon fiber/polymer matrix specimens where surface 90 deg plies protect the 0 deg plies from damage. Encouraging comparisons are made between the experimental and predicted lifetimes of the [90/0_3]_s laminate. Finally, it is shown that the strength-life equal rank assumption is erroneous because of fundamental differences between quasi-static and stress-rupture failure behaviors in unidirectional polymer composites. / Ph. D.
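The quasi-static part of this approach can be illustrated with a deliberately simplified Monte Carlo sketch in Python: fibers in a one-dimensional bundle receive Weibull-distributed strengths, and when a fiber breaks its load is redistributed onto its nearest surviving neighbors (a crude stand-in for the local load sharing developed in the dissertation). The Weibull modulus, fiber count, stress increment, and the definition of bundle strength as the stress at which the final failure cascade completes are all illustrative assumptions, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fibers, sigma0, m = 30, 1.0, 5.0        # fiber count, Weibull scale and modulus (illustrative)

def bundle_strength():
    strength = sigma0 * rng.weibull(m, n_fibers)   # random fiber strengths
    load = np.ones(n_fibers)                       # per-fiber stress concentration factor
    alive = np.ones(n_fibers, dtype=bool)
    applied, d_sigma = 0.0, 0.005
    while alive.any():
        applied += d_sigma                         # quasi-static load increase
        while True:                                # failure cascade at this applied stress
            broken = np.where(alive & (applied * load >= strength))[0]
            if broken.size == 0:
                break
            for i in broken:
                alive[i] = False
                left = next((j for j in range(i - 1, -1, -1) if alive[j]), None)
                right = next((j for j in range(i + 1, n_fibers) if alive[j]), None)
                share = load[i]
                load[i] = 0.0
                if left is not None and right is not None:   # split onto nearest survivors
                    load[left] += share / 2
                    load[right] += share / 2
                elif left is not None:
                    load[left] += share
                elif right is not None:
                    load[right] += share
    return applied

strengths = np.array([bundle_strength() for _ in range(200)])
print(f"simulated strength distribution: mean {strengths.mean():.3f}, std {strengths.std():.3f}")
```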
|
83 |
Non-equilibrium Phase Transitions and Steady States in Biased Diffusion of Two Species
Korniss, György 21 April 1997 (links)
We investigate the dynamics of a three-state stochastic lattice gas, consisting of holes and two oppositely "charged" species of particles, under the influence of an "electric" field, at zero total charge. Interacting only through an excluded volume constraint, particles can hop to nearest-neighbor empty sites, but particle-particle exchange between oppositely charged particles is also allowed on a separate time scale. Controlled by this relative time scale, particle density and drive, the system orders into a charge-segregated state. Using a combination of Monte Carlo simulations and continuum field theory techniques, we study the order of these transitions and map out the steady-state phase diagram of the system. On a single sheet of transitions, a line of multicritical points is found, separating the first-order and continuous transitions. Furthermore, we study the steady-state structure factors in the disordered phase, where homogeneous configurations are stable against small harmonic perturbations. The average structure factors show a discontinuity singularity at the origin which in real space predicts an intricate crossover between power laws of different kinds. We also seek generic statistical properties of these quantities. The probability distributions of the structure factors are universal asymmetric exponential distributions.
This research was supported in part by grants from the National Science Foundation through the Division of Materials Research. / Ph. D.
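A rough kinetic Monte Carlo sketch of a lattice gas of this type is given below, assuming a two-dimensional periodic lattice, a uniform drive along one axis, Metropolis acceptance, and excluded-volume interactions only; the size, density, drive, and exchange rate are illustrative, and the final printout is only a crude indicator of charge segregation rather than the structure-factor analysis used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

L, rho, E, gamma, sweeps = 32, 0.3, 2.0, 0.1, 500    # lattice size, density per species, drive, exchange rate

lattice = np.zeros((L, L), dtype=int)                # 0 = hole, +1 / -1 = charged particles
n_each = int(rho * L * L)
sites = rng.permutation(L * L)
lattice.flat[sites[:n_each]] = 1
lattice.flat[sites[n_each:2 * n_each]] = -1

def accept(dE):
    return dE <= 0 or rng.random() < np.exp(-dE)     # Metropolis rule

for _ in range(sweeps * L * L):
    x, y = rng.integers(L, size=2)
    q = lattice[x, y]
    if q == 0:
        continue
    dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L
    dE = -q * E * dx                                 # work done by the field along the drive axis
    if lattice[nx, ny] == 0 and accept(dE):          # hop into an empty nearest-neighbor site
        lattice[nx, ny], lattice[x, y] = q, 0
    elif lattice[nx, ny] == -q and rng.random() < gamma and accept(2 * dE):
        lattice[nx, ny], lattice[x, y] = q, -q       # +/- exchange on a slower time scale

density = (lattice != 0).mean(axis=1)                # particle density per column along the drive
print("max column density (blocked-strip indicator):", density.max())
```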
|
84 |
Study of Electromagnetic Scattering from Randomly Rough Ocean-Like Surfaces Using Integral-Equation-Based Numerical Technique
Toporkov, Jakov V. 04 May 1998 (links)
A numerical study of electromagnetic scattering by one-dimensional perfectly conducting randomly rough surfaces with an ocean-like Pierson-Moskowitz spectrum is presented. Simulations are based on solving the Magnetic Field Integral Equation (MFIE) using the numerical technique called the Method of Ordered Multiple Interactions (MOMI). The study focuses on the application and validation of this integral equation-based technique to scattering at low grazing angles and considers other aspects of numerical simulations crucial to obtaining correct results in the demanding low grazing angle regime.
It was found that when the MFIE propagator matrix is used with zeros on its diagonal (as has often been the practice) the results appear to show an unexpected sensitivity to the sampling interval. This sensitivity is especially pronounced in the case of horizontal polarization and at low grazing angles. We show - both numerically and analytically - that the problem lies not with the particular numerical technique used (MOMI) but rather with how the MFIE is discretized. It is demonstrated that the inclusion of so-called "curvature terms" (terms that arise from a correct discretization procedure and are proportional to the second surface derivative) in the diagonal of the propagator matrix eliminates the problem completely. A criterion for the choice of the sampling interval used in discretizing the MFIE based on both electromagnetic wavelength and the surface spectral cutoff is established. The influence of the surface spectral cutoff value on the results of scattering simulations is investigated and a recommendation for the choice of this spectral cutoff for numerical simulation purposes is developed.
Also studied is the applicability of the tapered incident field at low grazing incidence angles. It is found that when a Gaussian-like taper with fixed beam waist is used, there is a characteristic pattern (an anomalous jump) in the calculated average backscattered cross section at incidence angles close to grazing that indicates a failure of this approximate (non-Maxwellian) taper. This effect is very pronounced for horizontal polarization but is not observed for vertical polarization; the differences are explained. Some distinctive features associated with the taper failure are visible in the surface current (the solution to the MFIE) as well. Based on these findings we are able to refine one of the previously proposed criteria that relate the taper waist to the angle of incidence, and we demonstrate its robustness. / Ph. D.
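To illustrate the kind of surface these simulations operate on, the sketch below generates one realization of a one-dimensional randomly rough surface by spectral filtering of Gaussian noise with a Pierson-Moskowitz-type spectrum truncated at a high-wavenumber cutoff. The spectrum constants, wind speed, cutoff, and especially the discrete normalization are approximate, illustrative assumptions rather than the parameters used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

L, N = 200.0, 4096                 # surface length (m) and samples; dx must resolve the EM wavelength
U, g = 5.0, 9.81                   # wind speed (m/s) and gravity
alpha, beta = 8.1e-3, 0.74         # Pierson-Moskowitz-type constants (illustrative)
k_cut = 2 * np.pi / 0.1            # high-wavenumber spectral cutoff (assumed value)

dx = L / N
k = 2 * np.pi * np.fft.rfftfreq(N, d=dx)
S = np.zeros_like(k)
ok = (k > 0) & (k <= k_cut)
S[ok] = alpha / (2 * k[ok] ** 3) * np.exp(-beta * g**2 / (k[ok] ** 2 * U**4))

# shape complex Gaussian noise by sqrt(S); the discrete normalization here is approximate
dk = k[1] - k[0]
noise = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
spec = np.sqrt(S * dk / 2) * noise
spec[0] = 0.0                      # zero-mean surface
h = np.fft.irfft(spec, n=N) * N    # surface height profile h(x)

print(f"rms height {h.std():.3f} m, rms slope {np.gradient(h, dx).std():.3f}")
```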
|
85 |
Characterization of the GATE Monte Carlo platform for non-isocentric treatments and patient specific treatment plan verification at MedAustron - Vienna - Austria / Caractérisation de la plate-forme GATE Monte Carlo pour les traitements non isocentriques et vérification du plan de traitement spécifique au patient chez MedAustron - Vienna - Austria
Elia, Alessio 08 January 2019 (links)
L'objectif de cette thèse est de développer et de valider une méthode de calcul de dose indépendante afin de soutenir le travail intense de mise en service d'une installation de traitement par faisceaux d'ions légers (LIBT) et de valider le calcul de dose du système de planification de traitement (TPS). Le travail porte sur les traitements de protonthérapie et est organisé en collaboration entre le laboratoire CREATIS (Lyon, France) et le centre de thérapie ionique MedAustron (Wiener Neustadt, Autriche). Chez MedAustron, afin d'exploiter une pénombre latérale aiguë du faisceau de protons et d'améliorer la précision des algorithmes de calcul de dose du TPS, l'intervalle d'air entre la fenêtre de la tête de traitement et le patient est réduit en déplaçant le patient vers la tête de traitement. Par conséquent, les traitements non isocentriques doivent être pris en compte avec précision lors de la modélisation ainsi que lors de la phase de validation, car l'éloignement de la cible de l'isocentre de la salle peut réduire la précision du traitement. Dans cette étude, la paramétrisation du faisceau crayon de protons suit les recommandations de Grevillot et al. (2011), mais comprend une description complète de la buse. Un soin particulier est apporté à la modélisation des propriétés du faisceau crayon dans des conditions non isocentriques, y compris l'utilisation d'un Range Shifter (RaShi). La caractérisation du faisceau crayon est basée uniquement sur les profils de fluence mesurés dans l'air et les profils de dose en profondeur acquis dans l'eau. De plus, le modèle présenté est calibré en dose absolue sur la base d'un nouveau formalisme en produit dose-surface présenté dans Palmans et Vatnitsky (2016). Finalement, une validation détaillée est effectuée dans l'eau, pour des distributions de dose tridimensionnelles de forme régulière. Plusieurs paramètres couramment exploités en dosimétrie des protons, tels que le parcours, la pénombre distale, la modulation, la taille des champs et la pénombre latérale, sont évalués à des fins de validation. Le modèle optique du faisceau crayon a atteint une précision conforme à l'exigence clinique de 1 mm / 10 % et il n'est pas affecté par la complexité des traitements non isocentriques ni par l'utilisation d'un RaShi. Les parcours sont reproduits à 0,2 et 0,35 mm près (déviation maximale), sans et avec Range Shifter respectivement. La différence de dose dans les conditions de référence est de 0,5 %. La validation de la délivrance de dose 3D dans l'eau est restée dans une limite de 1,2 %. La concordance des paramètres distaux et longitudinaux est généralement meilleure que 1 mm. Les résultats obtenus serviront de référence pour la future mise en œuvre clinique du système de calcul de dose indépendant de MedAustron. / The goal of this PhD is to develop and validate an independent dose calculation method in order to support the intense commissioning work of a Light Ion Beam Therapy (LIBT) facility, and to validate the Treatment Planning System (TPS) dose calculation. The work focuses on proton therapy treatments and is carried out as a collaboration between the CREATIS laboratory (Lyon, France) and the MedAustron Ion Therapy Center (Wiener Neustadt, Austria).
At MedAustron, in order to exploit a sharp lateral penumbra for the proton beam as well as to improve the accuracy of the TPS dose calculation algorithms, the air gap between the treatment head window and the patient is reduced by moving the patient towards the treatment head. Therefore, non-isocentric treatments have to be accurately taken into consideration during the modeling as well as the validation phase, as moving the target away from the room isocenter may lead to reduced treatment accuracy. In this study, the parametrization of the proton pencil beam follows the recommendations provided in Grevillot et al. (2011), but includes a full nozzle description. Special care is taken to model the pencil beam properties in non-isocentric conditions, including the use of a Range Shifter (RaShi). The characterization of the pencil beam is based solely on fluence profiles measured in air and depth-dose profiles acquired in water. In addition, the presented model is calibrated in absolute dose based on a new dose-area-product formalism presented in Palmans and Vatnitsky (2016). Finally, a detailed validation is performed in water for three-dimensional regular-shaped dose distributions. Several parameters commonly exploited in proton dosimetry, such as range, distal penumbra, modulation, field size and lateral penumbra, are evaluated for validation purposes. The pencil beam optics model reached an accuracy within the clinical requirement of 1 mm/10% and is not affected by the complexity of non-isocentric treatments or by the use of a RaShi. Ranges are reproduced within 0.2 mm and 0.35 mm (maximum deviation) without and with the range shifter, respectively. The dose difference in reference conditions is within 0.5%. The 3D dose delivery validation in water agreed to within 1.2%. The agreement of distal and longitudinal parameters is mostly better than 1 mm. The obtained results will be used as a reference for the future clinical implementation of the MedAustron independent dose calculation system. As an example of the potential clinical outcome of the presented work, the patient-specific quality assurance measurements performed in water have been successfully reproduced within the clinical requirement of 5% accuracy for a few patients.
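As a small illustration of two of the dosimetric parameters named above, the sketch below extracts a range (defined here as the depth of the distal 80% dose level, one common convention) and the 80%-20% distal falloff from a depth-dose curve by linear interpolation. The analytic Bragg-peak-like curve and the helper function are invented for illustration; this is not the MedAustron or GATE analysis code.

```python
import numpy as np

def distal_crossing(depth, dose, level):
    """Depth at which the distal falloff crosses `level` (fraction of the maximum dose)."""
    d = dose / dose.max()
    i0 = int(np.argmax(d))
    for i in range(i0, len(d) - 1):
        if d[i] >= level > d[i + 1]:
            f = (d[i] - level) / (d[i] - d[i + 1])
            return depth[i] + f * (depth[i + 1] - depth[i])
    return float("nan")

# synthetic depth-dose curve: smooth plateau plus a Bragg-peak-like Gaussian (not measured data)
depth = np.linspace(0.0, 200.0, 2001)                               # mm
dose = np.exp(-((depth - 150.0) / 6.0) ** 2) + 0.3 / (1 + np.exp((depth - 150.0) / 2.0))

r80 = distal_crossing(depth, dose, 0.80)                            # range at the distal 80% level
falloff = distal_crossing(depth, dose, 0.20) - r80                  # 80%-20% distal falloff
print(f"R80 = {r80:.1f} mm, distal 80-20% falloff = {falloff:.1f} mm")
```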
|
86 |
ZEPLIN-III direct dark matter search: final results and measurements in support of next generation instruments
Reichhart, Lea January 2013 (links)
Astrophysical observations give convincing evidence for a vast non-baryonic component, the so-called dark matter, accounting for over 20% of the overall content of our Universe. Direct dark matter search experiments explore the possibility of interactions of these dark matter particles with ordinary baryonic matter via elastic scattering resulting in single nuclear recoils. The ZEPLIN-III detector operated on the basis of a dual-phase (liquid/gas) xenon target, recording events in two separate response channels: scintillation and ionisation. These allow discrimination between electron recoils (from background radiation) and the signal expected from Weakly Interacting Massive Particle (WIMP) elastic scatters. Following a productive first exposure, the detector was upgraded with a new array of ultra-low-background photomultiplier tubes, reducing the electron recoil background by over an order of magnitude. A second major upgrade to the detector was the incorporation of a tonne-scale active veto detector system surrounding the WIMP target. Calibration and science data taken in coincidence with ZEPLIN-III showed rejection of up to 30% of the dominant electron recoil background and over 60% of neutron-induced nuclear recoils. Data taking for the second science run finished in May 2011 with a total accrued raw fiducial exposure of 1,344 kg days. With this extensive data set, from over 300 days of run time, a limit on the spin-independent WIMP-nucleon cross-section of 4.8 × 10^-8 pb near 50 GeV/c^2 WIMP mass was set at 90% confidence. This result, combined with the first science run of ZEPLIN-III, excludes the scalar cross-section above 3.9 × 10^-8 pb. Studying the background data taken by the veto detector allowed a calculation of the neutron yield induced by high-energy cosmic-ray muons in lead of (5.8 ± 0.2) × 10^-3 neutrons/muon/(g/cm^2) for a mean muon energy of 260 GeV. Measurements of this kind are of great importance for large-scale direct dark matter search experiments and future rare event searches in general. Finally, this work includes a comprehensive measurement of the energy-dependent quenching factor for low-energy nuclear recoils in a plastic scintillator, such as that of the ZEPLIN-III veto detector, increasing accuracy for future simulation packages featuring large-scale plastic scintillator detector systems.
|
87 |
Développement de codes de simulation Monte-Carlo de la radiolyse de l'eau par des électrons, ions lourds, photons et neutrons : applications à divers sujets d'intérêt expérimental
Plante, Ianik January 2008 (links)
Water is a major component of living organisms, which can make up 70-85% of the weight of cells. For this reason, water is a main target of ionizing radiation and plays a central role in radiobiology. Heavy ions, electrons and photons interact with water molecules mainly by ionization and excitation. Neutrons interact with water molecules by elastic collisions, which generate recoil ions that in turn create ionizations and excitations in water molecules. These fast events (~10^-12 s) lead to the formation of Reactive Oxygen Species (ROS). The ROS, in particular the hydroxyl radical (•OH), react chemically with neighbouring molecules such as proteins, lipids and nucleic acids. Microbeams can selectively irradiate either the external membrane, the cytoplasm or the cell nucleus. Such studies have shown that cell survival is greatly reduced when the nucleus is irradiated, but not when the cytoplasm or the cell membrane is irradiated. Thus, DNA is a site highly sensitive to ionizing radiation and ROS. For this reason, DNA has long been considered the most important molecule for explaining radiobiological effects such as cell death. However, this concept has recently been challenged by new experimental results showing that cells which have not been directly in contact with radiation are also affected. This is called the bystander effect. Further studies have shown that a group of cells and their environment react collectively to radiation. A hypothesis put forward to explain this radiobiological phenomenon is that an irradiated cell secretes signalling molecules that affect non-irradiated cells. The phenomena and molecules implicated are still poorly understood. The purpose of this work is to improve our comprehension of the phenomena in the microsecond that follows irradiation. To this end, a new Monte Carlo simulation program of water radiolysis by photons has been developed. Photons with energy below 2 MeV interact with water mainly through the Compton and photoelectric effects, which create energetic electrons in water. The created electrons are then followed by our existing programs to simulate the radiolysis of water by photons. Similarly, a new code has been built to simulate the interaction of neutrons with water. This code simulates the elastic collisions of a neutron with water molecules and calculates the number and energy of the recoil protons and oxygen ions. The main part of this Ph.D. work was the development of a Monte Carlo step-by-step (SBS) simulation code of non-homogeneous radiation chemistry. This new program has been used successfully to simulate the radiolysis of water by ions of various types (^1H^+, ^4He^2+, ^12C^6+) and LET, and at various pH and temperature. The program has also been used to simulate the dose-rate effect and the Fricke and ceric dosimeters. More complex systems (glycine, polymer gels and HCN) have also been simulated.
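A heavily simplified sketch in the spirit of step-by-step radiation chemistry simulations is shown below: hydroxyl radicals in a small spur take Brownian steps and recombine on contact to form H2O2. The diffusion coefficient, reaction radius, spur size, time step, and the single reaction considered are illustrative simplifications, not the reaction scheme or parameters of the codes developed in this thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

D = 2.8e-9          # OH diffusion coefficient, m^2/s (typical literature order of magnitude)
R = 0.5e-9          # contact reaction radius for OH + OH -> H2O2 (illustrative)
dt = 1e-12          # time step, s
n_steps = 5000      # ~5 ns of nonhomogeneous chemistry

n0 = 12                                         # radicals in a crude Gaussian "spur"
pos = rng.normal(scale=2e-9, size=(n0, 3))
alive = np.ones(n0, dtype=bool)
sigma = np.sqrt(2 * D * dt)                     # rms displacement per axis per step

for _ in range(n_steps):
    pos[alive] += rng.normal(scale=sigma, size=(int(alive.sum()), 3))   # Brownian step
    idx = np.where(alive)[0]
    for a in range(len(idx)):                   # pairwise contact test: OH + OH -> H2O2
        for b in range(a + 1, len(idx)):
            i, j = idx[a], idx[b]
            if alive[i] and alive[j] and np.linalg.norm(pos[i] - pos[j]) < R:
                alive[i] = alive[j] = False     # both radicals consumed

print(f"surviving OH radicals after ~{n_steps * dt * 1e9:.0f} ns: {alive.sum()} of {n0}")
```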
|
88 |
Troubles dépressifs majeurs : approche méthodologique pour la modélisation médico-économique des stratégies de prévention des récidives par modèles de simulation à événements discrets / Modelling the cost-effectiveness of prophylactic interventions in patients with recurrent major depressive disorders: a methodological approach with Discrete Event Simulation
Le Lay, Agathe 16 December 2009 (links)
Les troubles dépressifs représentent aujourd'hui l'une des causes de handicap et de mortalité précoce les plus fréquentes dans le monde. Les projections établies par l'Organisation Mondiale de la Santé à l'horizon 2020 prévoient qu'ils seront classés au second rang, juste après les maladies cardiovasculaires. De nombreuses études ont montré l'efficacité des antidépresseurs dans la prévention des rechutes ; cependant, la situation semble moins claire s'agissant de la prévention des récidives. Un certain nombre de travaux de recherche ont été menés visant à évaluer l'impact médico-économique des stratégies thérapeutiques préventives, en recourant à la construction de modèles de simulation, ceux-ci permettant une représentation schématique de l'évolution de la pathologie au cours du temps. Cependant, afin d'être en mesure d'évaluer l'impact économique des stratégies de prévention des troubles dépressifs, un certain nombre de facteurs doivent être pris en considération dans l'élaboration du modèle représentatif de la pathologie. Nous montrons que l'intégration de l'ensemble des facteurs déterminants des récidives, tout en considérant un horizon temporel suffisamment large afin de capter les bénéfices thérapeutiques et (éventuellement) économiques sur le long terme, n'est pas sans poser problème. Nous montrons que les modèles disponibles dans la littérature sont seulement en mesure de proposer une forme partielle d'abstraction de la pathologie dépressive, généralement réduite à un ou deux facteurs de risque principaux, parmi lesquels l'observance du traitement, l'histoire médicale ou encore les caractéristiques sociodémographiques du patient. Nous proposons alors d'envisager les modèles de simulation à événements discrets en tant que réponse possible pour la représentation des facteurs de risque des troubles dépressifs récurrents, et détaillons les principes de la méthode. Nous tentons ensuite de développer un modèle « princeps » à partir de données épidémiologiques. Nous montrons alors que la flexibilité associée à ce type de modélisation permet de proposer un cadre d'analyse au plus près de la réalité de la pathologie dépressive. / Depressive disorders today represent one of the most frequent causes of disability and premature death worldwide. Research on the natural history of depressive disorders has shown that it is indeed a chronic rather than an acute disease. Many studies have shown the effectiveness of antidepressants in preventing relapse; however, the situation seems less clear with regard to the prevention of recurrence. A number of research activities have been conducted to evaluate the pharmaco-economic impact of preventive strategies with the help of simulation models. These techniques represent a convenient tool enabling the schematic representation of disease progression over time. However, in order to be able to assess the economic impact of prevention strategies for depressive disorders, a number of factors must be taken into account when developing the model structure. We show that the integration of all determining factors, especially over a wide-enough time horizon to capture the therapeutic and possible economic benefits in the long term, can be somewhat problematic. We show that the models available in the literature only present a partial framework, aiming at depicting the disease's risk factors (medical history, treatment compliance or socio-demographic characteristics) and progression over time.
We therefore propose to consider the use of discrete event simulation models as a possible tool for modelling recurrent depressive disorders, and we provide a detailed description of the principles of this methodology. We then try to develop a core model based on epidemiological evidence. We show that the flexibility associated with this type of modelling method can provide an analytical framework that depicts the characteristics of the depressive pathology in a more realistic fashion.
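A minimal discrete event simulation sketch of the kind of model discussed here is given below: each simulated patient experiences recurrences at exponentially distributed intervals, prophylactic treatment is assumed to reduce the recurrence hazard, and costs and QALYs are accumulated over a fixed horizon. Every parameter (hazard ratio, costs, utilities, episode length, horizon) is an invented placeholder for illustration, not an estimate from the thesis.

```python
import heapq
import numpy as np

rng = np.random.default_rng(5)

HORIZON = 5.0                             # years simulated per patient
BASE_RATE = 0.6                           # recurrences per year without prophylaxis (illustrative)
HAZARD_RATIO = 0.6                        # assumed treatment effect on the recurrence hazard
EPISODE_COST, DRUG_COST = 2000.0, 400.0   # cost per episode / per treated year (illustrative)
UTIL_WELL, UTIL_EPISODE = 0.85, 0.60      # health-state utilities (illustrative)
EPISODE_LEN = 0.25                        # years spent in an episode

def simulate_patient(treated):
    rate = BASE_RATE * (HAZARD_RATIO if treated else 1.0)
    t = cost = qaly = 0.0
    events = [(rng.exponential(1.0 / rate), "recurrence")]
    while events:
        when, kind = heapq.heappop(events)
        if when > HORIZON:
            break
        qaly += (when - t) * UTIL_WELL                  # time spent well before this event
        t = when
        if kind == "recurrence":
            dur = min(EPISODE_LEN, HORIZON - t)
            cost += EPISODE_COST
            qaly += dur * UTIL_EPISODE
            t += dur
            heapq.heappush(events, (t + rng.exponential(1.0 / rate), "recurrence"))
    qaly += max(0.0, HORIZON - t) * UTIL_WELL           # remaining well time
    if treated:
        cost += DRUG_COST * HORIZON
    return cost, qaly

n = 5000
res = {arm: np.mean([simulate_patient(arm) for _ in range(n)], axis=0) for arm in (False, True)}
d_cost, d_qaly = res[True] - res[False]
print(f"incremental cost per QALY gained (illustrative): {d_cost / d_qaly:.0f}")
```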
|
89 |
Modélisation de la croissance de boîtes quantiques sous contrainte élastique / Modeling the growth of quantum dots under elastic strain
Gaillard, Philippe 14 February 2014 (links)
La formation et la morphologie des boîtes quantiques sont un sujet d'un grand intérêt, ces structures ayant de nombreuses applications potentielles, en particulier en microélectronique et optoélectronique. Cette thèse porte sur l'étude théorique et numérique de la croissance et de la morphologie d'ilots par épitaxie par jet moléculaire. Un premier modèle de croissance est une étude non-linéaire de l'instabilité de type Asaro-Tiller-Grinfeld ; il convient aux systèmes à faibles désaccords de maille et est plus spécifiquement appliqué au cas où le désaccord de maille est anisotrope (voir le cas du GaN sur AlGaN). Le calcul de l'instabilité que nous avons effectué prend en compte les effets élastiques causés par le désaccord de maille, les effets de mouillage et les effets d'évaporation. La résolution numérique de l'instabilité nous permet de constater une croissance plus rapide dans le cas anisotrope comparé au cas isotrope, ainsi que la croissance d'ilots fortement anisotropes. Le deuxième modèle est basé sur des simulations Monte Carlo cinétiques, qui permettent de décrire la nucléation d'ilots 3D. Ces simulations sont utilisées pour les systèmes à fort désaccord de maille, comme Ge sur Si. Nos simulations prennent en compte la diffusion des adatomes, les effets élastiques, et un terme simulant la présence de facettes (105). Des ilots pyramidaux se forment, conformément aux expériences, et subissent un mûrissement interrompu. Les résultats obtenus ont été comparés au cas de la nucléation 2D, et on retrouve en particulier une densité d'ilots en loi de puissance par rapport au rapport D/F du coefficient de diffusion et du flux de déposition. / The growth and morphology of quantum dots are currently a popular subject, as these structures have numerous potential uses, specifically in microelectronics and optoelectronics. Control of the size, shape and distribution of these dots is of critical importance for the uses that are being considered. This thesis presents a theoretical and numerical study of the growth of islands during molecular beam epitaxy. In order to study these dots, we used two models: a nonlinear study of an Asaro-Tiller-Grinfeld-like instability, and kinetic Monte Carlo simulations. The first model is appropriate for low-misfit systems, and is detailed in the case where the misfit is anisotropic (this is the case when depositing GaN on AlGaN). In this case we took into account elastic effects, wetting effects and evaporation. Numerical calculations show faster growth, compared to the isotropic misfit case, and the growth of strongly anisotropic islands. The second model is based on kinetic Monte Carlo simulations that can describe 3D island nucleation. We use these simulations to study systems with high misfits, specifically Ge on Si. Adatom diffusion on a surface is considered and takes into account elastic effects and surface energy anisotropy, which allows us to stabilize (105) facets. Simulation results show the growth of pyramid-shaped 3D islands, as observed in experiments, and their ripening is interrupted. The results of these simulations are then compared to the case of 2D nucleation, and we find that several of the known 2D properties also apply to 3D islands. Specifically, island density depends on a power law of D/F, the diffusion coefficient divided by the deposition flux.
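A stripped-down kinetic Monte Carlo sketch of island nucleation, reduced to a one-dimensional point-island model with no elasticity and no facets, is given below: atoms are deposited at rate F per site, monomers hop at rate D, and any encounter creates an immobile island. The lattice size, coverage, and D/F values are illustrative, but the sketch reproduces the qualitative behavior noted above, an island density that falls as a power of D/F.

```python
import numpy as np

rng = np.random.default_rng(6)

def island_density(D_over_F, L=5000, coverage=0.1):
    """1D point-island KMC: deposition at rate F per site, monomer hops at rate D,
    irreversible aggregation on contact (critical island size 1)."""
    F, D = 1.0, D_over_F
    state = np.zeros(L, dtype=np.int8)          # 0 empty, 1 monomer, 2 island
    monomers = []                               # positions of mobile monomers
    deposited, target = 0, int(coverage * L)
    while deposited < target:
        r_dep, r_diff = F * L, D * len(monomers)
        if rng.random() < r_dep / (r_dep + r_diff):       # deposition event
            x = int(rng.integers(L))
            deposited += 1
            if state[x] == 0:
                state[x] = 1
                monomers.append(x)
            else:                                         # lands on an occupied site: capture
                if state[x] == 1:
                    monomers.remove(x)
                state[x] = 2
        else:                                             # diffusion event
            i = int(rng.integers(len(monomers)))
            x = monomers[i]
            nx = (x + (1 if rng.random() < 0.5 else -1)) % L
            if state[nx] == 0:                            # free hop
                state[x], state[nx] = 0, 1
                monomers[i] = nx
            else:                                         # aggregation with a neighbor
                if state[nx] == 1:
                    monomers.remove(nx)
                state[nx] = 2
                state[x] = 0
                monomers.remove(x)
    return np.count_nonzero(state == 2) / L

for dof in (1e3, 1e4, 1e5):
    print(f"D/F = {dof:.0e}: island density = {island_density(dof):.4f}")
```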
|
90 |
The performance of inverse probability of treatment weighting and propensity score matching for estimating marginal hazard ratios
Nåtman, Jonatan January 2019 (links)
Propensity score methods are increasingly being used to reduce the effect of measured confounders in observational research. In medicine, censored time-to-event data is common. Using Monte Carlo simulations, this thesis evaluates the performance of nearest neighbour matching (NNM) and inverse probability of treatment weighting (IPTW) in combination with Cox proportional hazards models for estimating marginal hazard ratios. Focus is on the performance for different sample sizes and censoring rates, aspects which have not been fully investigated in this context before. The results show that, in the absence of censoring, both methods can reduce bias substantially. IPTW consistently had better performance in terms of bias and MSE compared to NNM. For the smallest examined sample size with 60 subjects, the use of IPTW led to estimates with bias below 15 %. Since the data were generated using a conditional parametrisation, the estimation of univariate models violates the proportional hazards assumption. As a result, censoring the data led to an increase in bias.
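A small Python sketch of the IPTW approach examined here, assuming the scikit-learn and lifelines packages are available: a logistic propensity model yields stabilized weights, and a weighted univariate Cox model then estimates a marginal hazard ratio. The simulated data and every coefficient are illustrative; note that, as in the thesis, the data are generated from a conditional model, so the coefficient used for generation need not coincide with the marginal hazard ratio being estimated.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)

# simulated observational data: a single confounder x drives both treatment and the hazard
n = 2000
x = rng.normal(size=n)
z = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))            # treatment assignment depends on x
rate = 0.1 * np.exp(0.8 * x - 0.5 * z)                     # conditional hazard (illustrative)
t_event = rng.exponential(1 / rate)
t_cens = rng.exponential(10.0, size=n)                     # independent censoring
df = pd.DataFrame({"x": x, "z": z,
                   "time": np.minimum(t_event, t_cens),
                   "event": (t_event <= t_cens).astype(int)})

# propensity scores and stabilized inverse probability of treatment weights
ps = LogisticRegression().fit(df[["x"]], df["z"]).predict_proba(df[["x"]])[:, 1]
p_z = df["z"].mean()
df["w"] = np.where(df["z"] == 1, p_z / ps, (1 - p_z) / (1 - ps))

# weighted univariate Cox model -> marginal hazard ratio for treatment
cph = CoxPHFitter()
cph.fit(df[["z", "time", "event", "w"]], duration_col="time",
        event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "exp(coef)"]])
```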
|