
A role for endothelial cells in regenerative and personalized medicine

Peacock, Matthew Richard, 22 January 2016
REGENERATIVE MEDICINE: VASCULARIZED SKELETAL MUSCLE Tissue engineering is a compelling strategy for creating replacement tissues, in this study skeletal muscle. One major hurdle in the field is how to vascularize large tissue-engineered constructs that exceed the nutrient-delivery capability of diffusion. Endothelial colony forming cells and mesenchymal progenitor cells form blood vessels de novo and were co-injected with satellite cells in Matrigel, an extracellular matrix, or PuraMatrix, a synthetic hydrogel. Our approach focused on the ability of bioengineered vascular networks to induce injected murine and human satellite cells to differentiate and form organized skeletal muscle. We found that perfused human blood vessels formed in both Matrigel and PuraMatrix and that murine satellite cells differentiated and formed organized myotubes with striations, indicative of adult skeletal muscle. Mesenchymal progenitor cells also induced differentiation of satellite cells in vitro. Human satellite cells, however, did not show signs of differentiation in either Matrigel or PuraMatrix. These data provide a proof of concept for engineering vascularized skeletal muscle using murine satellite cells. INDUCTION OF CARDIOMYOGENESIS The heart's regenerative capabilities are not robust enough to repair the tissue damaged by myocardial infarction. A novel approach to relieving the ischemia is to deliver cells with vasculogenic ability, endothelial colony forming cells and mesenchymal progenitor cells, to assemble de novo blood vessels and support recovery of cardiomyocytes. In our study, we used an in vitro transwell system that prevents cell contact but allows diffusion of soluble factors to investigate whether endothelial colony forming cells or mesenchymal progenitor cells secrete factors that induce cardiomyogenesis.
We found that neonatal rat cardiomyocyte proliferation is enhanced in the presence of endothelial colony forming cells and mesenchymal progenitor cells; however, the presence of these cells without fetal bovine serum is not sufficient to initiate cardiomyogenesis. PERSONALIZED THERAPY FOR RENAL CELL CARCINOMA TESTING IN AN ENDOTHELIAL CELL MODEL Sunitinib and pazopanib are both tyrosine kinase inhibitors with high specificity for vascular endothelial growth factor receptor 2 and are used in the treatment of renal cell carcinoma to inhibit angiogenesis. Recent clinical findings suggest that a subset of the population with a single nucleotide polymorphism in vascular endothelial growth factor receptor 2 responds better to pazopanib treatment. We used a standard in vitro angiogenesis assay, endothelial cell proliferation, to test the effects of the single nucleotide polymorphism on responsiveness to sunitinib and pazopanib. We found that cells carrying the polymorphism are more sensitive to pazopanib than to sunitinib, confirming the clinical finding. We also analyzed the inhibition of phosphorylated vascular endothelial growth factor receptor 2 and confirmed drug activity on the phosphorylated protein. These findings could have implications for personalized therapy in the 3% of the population carrying the polymorphism.

Machine Learning Methods for Personalized Medicine Using Electronic Health Records

Wu, Peng, January 2019
The theme of this dissertation is methods for estimating personalized treatment using machine learning algorithms that leverage information from electronic health records (EHRs). Current guidelines for medical decision making largely rely on data from randomized controlled trials (RCTs) studying average treatment effects. However, because RCTs are usually conducted under specific inclusion/exclusion criteria, they may be inadequate for making individualized treatment decisions in real-world settings. Large-scale EHR data provide an opportunity to pursue the goals of personalized medicine and learn individualized treatment rules (ITRs) that depend on patient-specific characteristics from real-world patient data. On the other hand, since EHRs document treatment prescriptions in the real world, transferring information from EHRs to RCTs, if done appropriately, could improve the performance of ITRs in terms of precision and generalizability. Furthermore, EHR data usually include text notes or similarly unstructured content, so topic modeling techniques can be adapted to engineer features. In the first part of this work, we address challenges with EHRs and propose a machine learning approach based on matching techniques (referred to as M-learning) to estimate optimal ITRs from EHRs. This new method uses matching instead of the inverse probability weighting common in existing methods for estimating ITRs, in order to more accurately assess individuals' responses to alternative treatments and to alleviate confounding. Matching-based value functions are proposed to compare matched pairs under a unified framework in which various types of outcomes for measuring treatment response (continuous, ordinal, and discrete) are easily accommodated. We establish the Fisher consistency and convergence rate of M-learning.
Through extensive simulation studies, we show that M-learning outperforms existing methods when propensity scores are misspecified or when unmeasured confounders are present in certain scenarios. At the end of this part, we apply M-learning to estimate optimal personalized second-line treatments for type 2 diabetes patients, aiming at better glycemic control or fewer major complications, using EHRs from New York Presbyterian Hospital (NYPH). In the second part, we propose a new domain adaptation method to learn ITRs by incorporating information from EHRs. Unless we assume no unmeasured confounding in EHRs, we cannot directly learn the optimal ITR from the combined EHR and RCT data. Instead, we first pre-train "super" features from EHRs that summarize physicians' treatment decisions and patients' observed benefits in the real world, which are likely to be informative of the optimal ITRs. We then augment the feature space of the RCT and learn the optimal ITRs stratified by these features using RCT patients only. We adopt Q-learning and a modified matched-learning algorithm for estimation. We present theoretical justifications and conduct simulation studies to demonstrate the performance of the proposed method. Finally, we apply our method to transfer information learned from EHRs of type 2 diabetes (T2D) patients to improve the learning of individualized insulin therapies from an RCT. In the last part of this work, we apply the M-learning method proposed in the first part to learn ITRs using interpretable features extracted from EHR documentation of medications and ICD diagnosis codes. We use a latent Dirichlet allocation (LDA) model to extract latent topics and weights as features for learning ITRs. Our method reduces confounding in observational studies by matching treated and untreated individuals, and improves treatment optimization by augmenting the feature space with clinically meaningful LDA-based features.
We apply the method to extract LDA-based features from EHR data collected in the NYPH clinical data warehouse to study optimal second-line treatment for T2D patients. Using cross-validation, we show that the estimated ITRs outperform uniform treatment strategies (i.e., assigning insulin or another class of oral medications to all individuals), and that including topic modeling features leads to a greater reduction in post-treatment complications.
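The LDA feature-engineering step described above can be sketched in a few lines. This is an illustrative reconstruction using scikit-learn on simulated count data, not the dissertation's actual NYPH pipeline; the corpus size and topic count are arbitrary.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy "documents": each row counts occurrences of ICD codes / medication
# mentions for one patient. (Simulated; the dissertation uses real EHR notes.)
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(100, 20))  # 100 patients x 20 code types

# Fit LDA; the per-patient topic weights become interpretable features
# that can augment the covariate space when learning an ITR.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_weights = lda.fit_transform(counts)  # shape (100, 5), rows sum to 1

print(topic_weights.shape)
```

Each row of `topic_weights` is a probability distribution over latent topics, so the features are directly interpretable by inspecting each topic's top-weighted codes via `lda.components_`.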

Personalized Medicine: Studies of Pharmacogenomics in Yeast and Cancer

Chen, Bo-Juen, January 2013
Advances in microarray and sequencing technology have ushered in the era of personalized medicine. With the increasing availability of genomic assays, clinicians have started to use patients' genetics and gene expression to guide clinical care. Signatures of gene expression and genetic variation have been associated with disease risk and response to treatment. It is therefore not difficult to envision a future in which each patient's clinical care is optimized based on his or her genetic background and genomic profiles. However, many challenges stand in the way of fully realizing the potential of personalized medicine, and we have yet to gain a thorough understanding of how to associate genomic data with phenotype. First, the human genome is very complex: more than 50 million sequence variants and more than 20,000 genes have been reported. Many efforts have been devoted to genome-wide association studies (GWAS) in the last decade, associating common genetic variants with common complex traits and diseases. While many associations have been identified, most phenotypic variation remains unexplained, both at the level of the variants involved and at the level of the underlying mechanism. Finally, interaction between genetics and environment adds a further layer of complexity governing phenotypic variation. Much current research develops computational methods to associate genomic features with phenotypic variation, and modeling techniques such as machine learning have been very useful in uncovering the intricate relationships between genomics and phenotype. Despite some early successes, however, the performance of most models is disappointing: many lack robustness, and their predictions do not replicate. In addition, many successful models work as black boxes, giving good predictions of phenotypic variation but failing to reveal the underlying mechanism.
In this thesis I propose two methods addressing this challenge. First, I describe an algorithm that focuses on identifying causal genomic features of phenotype. My approach assumes that genomic features predictive of phenotype are more likely to be causal. The algorithm builds models that not only accurately predict the traits but also uncover the molecular mechanisms responsible for them. It gains its power by combining regularized linear regression, causality testing, and Bayesian statistics. I demonstrate the algorithm on a yeast dataset, where genotype and gene expression are used to predict drug sensitivity and elucidate the underlying mechanisms. The accuracy and robustness of the algorithm are evaluated both statistically and through experimental validation. The second part of the thesis takes on a much more complicated system: cancer. Genomic and drug sensitivity data for cancer cell lines have recently become available. The challenge here is not only the greater complexity of the system (e.g., the size of the genome) but also the fundamental differences between cancers and tissues. Different cancers or tissues provide different contexts influencing regulatory networks and signaling pathways. To account for this, I propose a method to associate contextual genomic features with drug sensitivity. The algorithm is based on information theory, Bayesian statistics, and transfer learning, and it demonstrates the importance of context specificity in predictive modeling of cancer pharmacogenomics. The two complementary algorithms highlight the challenges faced in personalized medicine and potential solutions. This thesis details results and analyses demonstrating the importance of causality and context specificity in predictive modeling of drug response, both of which will be crucial for bringing personalized medicine into practice.
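The regularized-regression building block mentioned above can be illustrated with a lasso fit on simulated genotype data. The marker indices and effect sizes below are invented for illustration; this sketch covers only the sparse-selection step, not the thesis's causality-testing or Bayesian components.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulated genotypes: 200 samples x 50 markers coded as allele counts 0/1/2,
# with three markers (3, 17, 42) given nonzero effects on the trait.
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 50)).astype(float)
beta = np.zeros(50)
beta[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = X @ beta + rng.normal(0.0, 0.5, size=200)

# The L1 penalty drives most coefficients exactly to zero, leaving a
# sparse set of candidate causal features for downstream causality testing.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)
```

With a signal this strong the three planted markers survive the penalty; in practice the regularization strength would be tuned by cross-validation.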

Methods for Personalized and Evidence Based Medicine

Shahn, Zach, January 2016
There is broad agreement that medicine ought to be 'evidence based' and 'personalized' and that data should play a large role in achieving both goals. But the path from data to improved medical decision making is not clear. This thesis presents three methods that hopefully help, in small ways, to clear the path. Personalized medicine depends almost entirely on understanding variation in treatment effect. Chapter 1 describes latent class mixture models for treatment effect heterogeneity that distinguish between continuous and discrete heterogeneity, use hierarchical shrinkage priors to mitigate overfitting and multiple comparisons concerns, and employ flexible error distributions to improve robustness. We apply different versions of these models to reanalyze a clinical trial comparing HIV treatments and a natural experiment on the effect of Medicaid on emergency department utilization. Medical decisions often depend on observational studies performed on large longitudinal health insurance claims databases. These studies usually claim to identify a causal effect, but empirical evaluations have demonstrated that standard methods for causal discovery perform poorly in this context, most likely in large part due to the presence of unobserved confounding. Chapter 2 proposes an algorithm called Ensembles of Granger Graphs (EGG) that does not rely on the assumption that unobserved confounding is absent. In a simulation and in experiments on a real claims database, EGG is robust to confounding, has high positive predictive value, and has high power to detect strong causal effects. While decision making inherently involves causal inference, purely predictive models aid many medical decisions in practice. Prediction from health histories is challenging because the space of possible predictors is so vast: not only are there thousands of health events to consider, but also their temporal interactions.
In Chapter 3, we adapt a method originally developed for speech recognition that greedily constructs informative labeled graphs representing temporal relations among multiple health events at the nodes of randomized decision trees. We use this method to predict strokes in patients with atrial fibrillation using data from a Medicaid claims database. I hope the ideas illustrated in these three projects inspire work that someday genuinely improves healthcare. I also include a short 'bonus' chapter on an improved estimate of effective sample size in importance sampling. This chapter is not directly related to medicine, but finds a home in this thesis nonetheless.
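For context on the bonus chapter, the classical (Kish) effective sample size estimate that it aims to improve upon is easy to state in code; the thesis's improved estimator is not reproduced here.

```python
import numpy as np

def effective_sample_size(weights):
    """Kish's classical ESS estimate: (sum w)^2 / sum(w^2).

    It equals the nominal sample size when weights are uniform and
    collapses toward 1 when a few weights dominate.
    """
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

# Uniform weights recover the nominal sample size...
print(effective_sample_size(np.ones(100)))  # 100.0
# ...while one dominant weight collapses the ESS far below 100.
print(effective_sample_size([100.0] + [1.0] * 99))
```

This degeneracy under skewed weights is precisely why importance-sampling diagnostics matter, and why refinements of the classical estimate are of interest.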

Statistical Learning Methods for Personalized Medicine

Qiu, Xin, January 2018
The theme of this dissertation is the development of simple, interpretable individualized treatment rules (ITRs) using statistical learning methods to assist personalized decision making in clinical practice. Considerable heterogeneity in treatment response is observed among individuals with mental disorders. Administering an individualized treatment rule according to patient-specific characteristics offers an opportunity to tailor treatment strategies and improve response. Black-box machine learning methods for estimating ITRs may produce treatment rules that have optimal benefit but lack transparency and interpretability. Barriers to implementing personalized treatments in clinical psychiatry include a lack of evidence-based, clinically interpretable, individualized treatment rules; a lack of diagnostic measures to evaluate candidate ITRs; a lack of power to detect treatment modifiers from a single study; and a lack of reproducibility of treatment rules estimated from single studies. This dissertation tackles these barriers in three parts: (1) methods to estimate the best linear ITR with guaranteed performance among the class of linear rules; (2) a tree-based method to improve the performance of a linear ITR fitted from the overall sample and to identify subgroups with a large benefit; and (3) an integrative learning approach combining information across trials to provide an integrative ITR with improved efficiency and reproducibility. In the first part of the dissertation, we propose a machine learning method to estimate optimal linear individualized treatment rules from data collected in single-stage randomized controlled trials (RCTs). In clinical practice, an informative and practically useful treatment rule should be simple and transparent.
However, because simple rules are likely to be far from optimal, effective methods for constructing such rules must guarantee performance, in terms of yielding the best clinical outcome (highest reward), among the class of simple rules under consideration. Furthermore, it is important to evaluate the benefit of the derived rules on the whole sample and in pre-specified subgroups (e.g., vulnerable patients). To achieve both goals, we propose a robust machine learning algorithm that replaces the zero-one loss with a surrogate approximation loss (the ramp loss) for value maximization, referred to as asymptotically best linear O-learning (ABLO), which estimates a linear treatment rule guaranteed to achieve the optimal reward among the class of all linear rules. We then develop a diagnostic measure and inference procedure to evaluate the benefit of the obtained rule and compare it with rules estimated by other methods. We provide theoretical justification for the proposed method and its inference procedure, and we demonstrate via simulations its superior performance compared to existing methods. Lastly, we apply the proposed method to the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial on major depressive disorder (MDD) and show that the estimated optimal linear rule provides a large benefit for mildly and severely depressed patients but shows a lack of fit for moderately depressed patients. The second part of the dissertation is motivated by the results of the real data analysis in the first part, where the global linear rule estimated by ABLO from the overall sample performs inadequately on the subgroup of moderately depressed patients. We therefore aim to derive a simple, interpretable piece-wise linear ITR that maintains certain optimality and leads to improved benefit in subgroups of patients as well as in the overall sample.
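The ramp loss mentioned above as a surrogate for the zero-one loss is simply a hinge loss truncated at 1, which makes it bounded and therefore closer to the zero-one loss and less sensitive to outliers than the plain hinge. A minimal sketch of the loss function itself (not the full ABLO optimization):

```python
import numpy as np

def ramp_loss(margin):
    """Ramp loss: min(1, max(0, 1 - margin)).

    Behaves like the hinge loss for margins in [-0, 1] but is capped at 1,
    so a single badly misclassified point cannot dominate the objective.
    """
    return np.clip(1.0 - np.asarray(margin, dtype=float), 0.0, 1.0)

margins = np.array([-2.0, 0.0, 0.5, 1.0, 3.0])
print(ramp_loss(margins))  # [1.  1.  0.5 0.  0. ]
```

In value-maximization formulations, the margin is the product of the (signed) treatment assignment and the linear rule's score, and the ramp loss is minimized in weighted form over patients.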
In this work, we propose a tree-based robust learning method to estimate optimal piece-wise linear ITRs and identify subgroups of patients with a large benefit. We achieve these goals by simultaneously identifying qualitative and quantitative interactions through a tree model, referred to as the composite interaction tree (CITree). Through extensive simulation studies, we show that it outperforms existing methods on both the overall sample and subgroups. Lastly, we fit CITree to the Research Evaluating the Value of Augmenting Medication with Psychotherapy (REVAMP) trial for treating major depressive disorder, where we identified both qualitative and quantitative interactions as well as subgroups of patients with a large benefit. The third part addresses the low power to identify ITRs, and the difficulty of replicating them, that result from the small sample sizes of single randomized controlled trials. In this work, a novel integrative learning method is developed to synthesize evidence across trials and provide an integrative ITR that improves efficiency and reproducibility. Our method does not require all studies to collect a common set of variables, and thus allows information to be combined from ITRs identified in randomized controlled trials with heterogeneous sets of baseline covariates collected from different domains at different resolutions. Depending on the research goal, integrative learning can be used either to enhance a high-resolution ITR by borrowing information from coarsened ITRs or to improve a coarsened ITR using a high-resolution one. With a simple modification, the proposed integrative learning can also be applied to improve the estimation of ITRs in studies with blockwise missing feature variables. We conduct extensive simulation studies showing that our method outperforms existing approaches in which only single-trial ITRs are used to learn personalized treatment rules.
Lastly, we apply the proposed method to RCTs of major depressive disorder and other comorbid mental disorders. We find that by combining information from two studies, the integrated ITR achieves a greater benefit and improved efficiency compared with single-trial rules or a universal, non-personalized treatment rule.

Microfluidic Selection of Aptamers towards Applications in Precision Medicine

Olsen, Timothy Richard, January 2018
Precision medicine represents a shift in medicine in which large datasets are gathered for massive patient groups to draw correlations between disease cohorts. An individual patient can then be compared against these large datasets to determine the best treatment strategy. While electronic health records and next-generation sequencing techniques have enabled many early applications of precision medicine, the human genome represents only a fraction of the information present in, and important to, a person's health. A person's proteome (peptides and proteins) and glycome (glycans and glycosylation patterns) contain biomarkers that indicate health and disease; however, tools to detect and analyze such biomarkers remain scarce. Thus, precision medicine databases lack a major source of phenotypic data, owing to the absence of methods to explore these domains, despite the potential of such data to enable further stratification of patients and individualized therapeutic strategies. Available methods to detect non-nucleic-acid biomarkers are currently not well suited to the needs of precision medicine. Mass spectrometry techniques, while capable of generating high-throughput data, lack standardization, require extensive preparative steps, and have many sources of error. Immunoassays rely on antibodies, which are time-consuming and expensive to produce for newly discovered biomarkers. Aptamers, analogous to antibodies but composed of nucleotides and isolated through in vitro methods, have the potential to identify non-nucleic-acid biomarkers, but methods to isolate aptamers remain labor- and resource-intensive and time-consuming. Recently, microfluidic technology has been applied to the aptamer discovery process to reduce development time while consuming smaller amounts of reagents. Methods have been demonstrated that employ capillary electrophoresis, magnetic mixers, and integrated functional chambers to select aptamers.
However, these methods cannot yet integrate the entire aptamer discovery process on a single chip and must rely on off-chip processes to identify aptamers. In this thesis, new approaches for aptamer selection are developed that aim to integrate the entire aptamer discovery process on a single chip. These approaches perform efficient aptamer selection and polymerase chain reaction (PCR)-based amplification while exploiting highly efficient bead-based reactions. They use pressure-driven flow, electrokinetic flow, or a combination of both to transfer aptamer candidates through multiple rounds of affinity selection and PCR amplification within a single microfluidic device. As such, they can isolate aptamer candidates within a day while consuming <500 µg of a target molecule. The utility of the aptamer discovery approach is then demonstrated with examples in precision medicine across a broad spectrum of molecular targets, from small molecules to proteins. To demonstrate the potential of the device to generate probes capable of accessing the human glycome (an emerging source of precision medicine biomarkers), aptamers are isolated against the gangliosides GM1, GM3, and GD3 and against a glycosylated peptide. Finally, personalized, patient-specific aptamers are isolated against a multiple myeloma patient serum sample; these aptamers have high affinity only for the patient-derived antibody.

Race, Genes and Health: Public Conceptions about the Effectiveness of Race-Based Medicine and Personalized Genomic Medicine

Feldman, Naumi Mira, January 2014
OBJECTIVE: Personalized genomic medicine (PGM) has been lauded as the future of medicine, as new human genomic research findings are applied to the development of screenings, diagnostic tools, and treatments tailored to the genomic profiles of individuals. However, the development of PGM is still in its nascent stages; therefore, some have supported the development of clinical tools and treatments based on population-level characteristics, such as race or ethnicity. Race-based medicine (RBM) has been, and continues to be, promoted as an interim form of PGM, and although an academic debate has flourished over medical, social, and ethical concerns related to RBM, to date only a few small studies have examined lay beliefs and attitudes regarding it. The extent to which the greater American public would believe in the effectiveness of RBM and indicate an intention to use it is unclear. Furthermore, it is possible that racial and ethnic groups would differ in their beliefs and attitudes regarding RBM, considering that RBM implies the controversial and contested conceptualization of race as having some genetic basis. Therefore, the purpose of this dissertation study was to use, for the first time, a nationally representative sample of adult Americans to examine the importance of race with respect to the following: beliefs and attitudes regarding RBM; the extent to which these beliefs and attitudes can be influenced by mass media messages about the relationship between race and genetics; and how beliefs and attitudes regarding RBM compare with those regarding PGM. METHODS: To answer these questions, this dissertation study used a nationally representative sample of self-identified non-Hispanic white, non-Hispanic black, and Hispanic U.S.
residents who participated in an online survey examining beliefs and attitudes regarding RBM and PGM, and the effect on these beliefs and attitudes of a vignette experiment using mock news articles that varied in their messages about the relationship between race and genes. The survey assessed the following constructs using new measures designed for this dissertation study: RBM's effectiveness at the individual, clinical level; PGM's effectiveness at the individual, clinical level; preferences for using RBM; preferences for using PGM; and RBM's ability to address health inequalities in the U.S. Means, frequencies, mean-difference tests, and multiple regression were used to examine the effect of race and/or the vignette experiment on beliefs and attitudes regarding RBM and PGM. RESULTS: The results show that the majority of white, black, and Hispanic Americans equally agreed that RBM would not be clinically effective at the individual level, but the majority of all groups also equally agreed that they would prefer to use RBM if it were available. More than forty percent of all respondents who did not believe RBM would be effective at the individual level still preferred to use a race-specific treatment if it were available. The three racial/ethnic groups examined did diverge in their belief in RBM's ability to reduce health inequalities: greater proportions of black and Hispanic respondents than of white respondents believed RBM would be effective at reducing health inequalities. Racial differences were also seen in the effect of the vignette experiment on RBM beliefs and attitudes. While the vignette experiment had no effect on whites' beliefs and attitudes regarding RBM, vignettes that stated or implied a genetic basis for racial difference were associated with lower endorsement of RBM beliefs and attitudes among black respondents.
Finally, the results indicated that both white and black Americans endorsed PGM's effectiveness at the individual level more strongly than RBM's, and both groups indicated greater preferences for using PGM than RBM. However, while most white respondents believed PGM would be effective at the individual level and said they would prefer to use it if available, nearly half of the black respondents did not believe PGM would be clinically effective, and one in four black respondents did not prefer to use PGM. CONCLUSIONS: The results suggest that white, black, and Hispanic Americans do not significantly differ in their beliefs and attitudes regarding the effectiveness of, or preferences for using, RBM. This finding diverges from prior studies that showed racial differences in beliefs and attitudes regarding RBM. The lack of racial difference may be due to a lack of familiarity with the concept, for the results also suggested that once respondents were exposed to varying mock news article messages about the relationship between race and genes, racial differences began to emerge. The results also showed discordance between belief in RBM's effectiveness and preferences for using RBM. This finding suggests that there is still an incentive for the pharmaceutical and diagnostic testing industries to develop and market RBM even if public opinion regarding RBM's effectiveness is generally low. PGM has been promoted by the biomedical industry as a potential solution to racial and ethnic health disparities both in the U.S. and globally, and RBM has been promoted as an interim form of PGM until PGM is further developed. Despite noted clinical, social, and ethical concerns regarding RBM specifically, its proponents have focused on promoting the message of its potential to mitigate racial and ethnic health disparities.
The results from this study indicate that on the surface at least, this argument may in fact resonate with black and Hispanic Americans. In addition to being the first nationally representative study to examine potential racial differences in RBM beliefs and attitudes, this dissertation was also the first nationally representative study to examine potential racial differences in beliefs and attitudes regarding PGM. Although the results clearly showed that all Americans endorsed the effectiveness of and preferences for using PGM at greater levels than RBM, whites were significantly more likely than blacks to believe PGM would be clinically effective and to indicate a preference for using PGM. Thus, while the merits of PGM may seem apparent to the clinical and academic communities, the results of this study indicate that there is not universal support for PGM among the public. Cautious support for PGM from black respondents may reflect more general mistrust towards the medical community and new forms of health technologies. Even though racial and ethnic minority populations seem open to RBM and PGM as potential strategies to address health inequalities, support for both could change as the public becomes more familiar with both concepts, whether through exposure to mass media messages, mass marketing of treatments and genetic testing, or through their clinical providers. The findings from this dissertation study significantly advance our knowledge of the American public's beliefs and attitudes regarding RBM and PGM, particularly with respect to racial differences, and should be considered by stakeholders in current and future debates surrounding efforts to develop and promote both.
18

Risque hémorragique sous anticoagulants : Vers une prise en charge personnalisée / Bleeding risk under anticoagulant agents: Towards personalized medicine.

Moustafa, Farès 20 November 2017 (has links)
Introduction. Given the many factors that may influence the bleeding risk of patients on anticoagulant therapy, the concept of personalized medicine could have a favorable impact on the overall management of these patients. Hypothesis and objective. The hypothesis of this thesis is that the use and analysis of "real-life" registries could make it possible to define hemorrhagic patient "profiles" and thereby enable personalized patient management. Material and methods. This thesis work used two real-life registries: the RIETE registry (an international, multicenter, prospective registry) and the RATED registry (a single-center registry). Results. We showed the importance of biological data collection in the analysis of bleeding events under anticoagulants, with a greater loss of coagulation factors in gastrointestinal bleeding than in intracranial bleeding under vitamin K antagonists (VKAs). Conversely, this bleeding risk was halved in carriers of the factor V Leiden mutation. Using the RIETE registry, we then examined abnormal uterine bleeding under anticoagulant therapy (little described in the literature) and found that only 0.17% of women presented major uterine bleeding. We further showed that fragile patients (CrCl ≤50 mL/min, age ≥75 years, or body weight ≤50 kg) have a two-fold higher risk of major bleeding. Finally, to demonstrate the complementarity of real-life registries and randomized trial data, we examined patients normally excluded from randomized trials and found a four-fold higher bleeding risk in these excluded patients. Conclusion. This thesis work demonstrated the value of analyzing not all hemorrhages together but each type of hemorrhage separately, and the interest of building prospective real-life registries with associated biobanks to allow a more personalized analysis of patient management.
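The fragility criteria quoted in the abstract (CrCl ≤50 mL/min, age ≥75 years, or body weight ≤50 kg) are simple enough to express directly. A minimal sketch, assuming illustrative function and parameter names that are not taken from the RIETE or RATED registries:

```python
# Fragility screen per the criteria reported in the abstract: a patient is
# "fragile" if creatinine clearance <= 50 mL/min, age >= 75 years, or body
# weight <= 50 kg. The abstract reports a ~2-fold higher major-bleeding risk
# for such patients; this function only flags the criteria, nothing more.

def is_fragile(crcl_ml_min: float, age_years: int, weight_kg: float) -> bool:
    """Return True if the patient meets any of the three fragility criteria."""
    return crcl_ml_min <= 50 or age_years >= 75 or weight_kg <= 50

# Illustrative patients (made-up values)
print(is_fragile(crcl_ml_min=45, age_years=60, weight_kg=70))  # fragile: low CrCl
print(is_fragile(crcl_ml_min=80, age_years=68, weight_kg=72))  # not fragile
```

Any one criterion suffices, which matches the "or" wording of the abstract's definition.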
19

An Engineering Approach Towards Personalized Cancer Therapy

Vahedi, Golnaz 2009 August 1900 (has links)
Cells behave as complex systems, with regulatory processes built from elements such as threshold-based switches, memory, feedback, error-checking, and other components commonly encountered in electrical engineering. It is therefore not surprising that these systems are amenable to study by engineering methods. A great deal of effort has been spent on observing how cells store, modify, and use information; still, we lack an understanding of how to use this knowledge to exert control over cells within a living organism. Our prime objective is "Personalized Cancer Therapy," which is based on characterizing the treatment for each individual cancer patient. Knowing how to systematically alter the behavior of an abnormal cancerous cell will lead towards personalized cancer therapy. Towards this objective, one must construct a model of the cell's regulation and use this model to devise effective treatment strategies. The proposed treatments will have to be validated experimentally, but selecting good treatment candidates is a monumental task in itself, and one where an analytic approach to systems biology can provide significant breakthroughs. In this dissertation, theoretical frameworks for effective treatment strategies are developed in the context of probabilistic Boolean networks, a class of gene regulatory network models. The proposed analytical tools provide insight into the design of effective therapeutic interventions.
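The modeling framework named in the abstract, probabilistic Boolean networks, can be illustrated with a toy example: each gene has one or more Boolean update rules, each carrying a selection probability, and at every step one rule per gene is drawn at random and applied. The two-gene network, rules, and probabilities below are invented for illustration and do not come from the dissertation:

```python
import random

# Toy probabilistic Boolean network (PBN) with two genes. Each gene maps to a
# list of (Boolean update rule, selection probability) pairs; at every step a
# rule is drawn per gene and applied synchronously to the whole state.
RULES = {
    0: [(lambda s: s[1], 0.7),            # gene 0 usually copies gene 1...
        (lambda s: not s[0], 0.3)],       # ...but sometimes toggles itself
    1: [(lambda s: s[0] and s[1], 1.0)],  # gene 1 is the AND of both genes
}

def step(state, rng):
    """Apply one synchronous PBN transition to a tuple of gene states."""
    nxt = []
    for gene, rules in sorted(RULES.items()):
        funcs, probs = zip(*rules)
        f = rng.choices(funcs, weights=probs)[0]  # draw this gene's rule
        nxt.append(bool(f(state)))
    return tuple(nxt)

rng = random.Random(0)
state = (True, False)
for _ in range(5):
    state = step(state, rng)
print(state)
```

An intervention study in this framework would ask which gene to perturb (e.g., pinning a gene's value) so the long-run state distribution shifts away from undesirable, cancer-associated states; here, once gene 1 is False it stays False, a small example of an absorbing behavior such interventions try to exploit or avoid.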
20

The impacts on utilizing genetic testing to analyze the clinical treatment: An analysis of the effectiveness on drugs of diabetes

Liu, Wen-Sheng 13 June 2008 (has links)
Abstract. In current clinical practice, doctors prescribe medicines based on clinical experience and biochemical data. However, biochemical data only capture an initial physiological response. Although such data are sufficient for diagnosing disease, they do little to help doctors identify the most effective medicine. Doctors therefore typically begin with a first-line, inexpensive, or low-dose medicine chosen from prior clinical experience, which can prolong treatment, lower the quality of care, raise costs, and expose patients to avoidable side effects. Using SNPs (single nucleotide polymorphisms) can help doctors determine each patient's genotype and predict the response to a medicine, allowing disease to be controlled more efficiently at lower cost. Methods: This study examined how to accurately determine the genotypes of patients with diabetes mellitus and predict treatment outcomes pharmacogenetically. PCR (polymerase chain reaction) and RFLP (restriction fragment length polymorphism) analysis were used to distinguish patients' genotypes, and one-way ANOVA was used to test the relationship between the ABCC8 exon 16 (ABCC8-E16) variant and type 2 diabetes. In conclusion, the antidiabetic sulfonylurea derivatives are suitable for patients with the ABCC8-E16 genotype. This result can serve as a reference for doctors treating diabetic patients; it can save costs, shorten treatment time, and will have a deep impact on personalized medicine in the future. Keywords: type 2 diabetes, sulfonylurea, SNP, PCR, RFLP, pharmacogenetics, personalized medicine
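The statistical step the abstract describes, a one-way ANOVA relating genotype groups to treatment response, can be sketched in a few lines. The three groups and the response values below are made-up numbers, not the study's data; only the F statistic is computed (a p-value would additionally require the F distribution):

```python
# Sketch of a one-way ANOVA F statistic, the test the abstract uses to relate
# ABCC8-E16 genotype to antidiabetic treatment response. All numbers here are
# hypothetical and do not come from the study.

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of numeric samples."""
    k = len(groups)                      # number of genotype groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)    # between-group mean square
    ms_within = ss_within / (n - k)      # within-group mean square
    return ms_between / ms_within

# Hypothetical HbA1c reductions for three ABCC8-E16 genotype groups
f = one_way_anova_f([[1.2, 1.4, 1.1], [0.8, 0.9, 0.7], [0.4, 0.5, 0.6]])
print(round(f, 2))
```

A large F relative to the F distribution's critical value (for k-1 and n-k degrees of freedom) would indicate that mean treatment response differs across genotypes, which is the kind of evidence the abstract uses to recommend sulfonylureas for one genotype.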
