381

Management strategy of landfill leachate and landfill gas condensate

Zhao, Renzun 15 October 2012 (has links)
Studies were conducted to evaluate the impact of landfill leachate discharge on the operation of wastewater treatment plants (WWTPs). Two types of interference were found: one is UV-quenching substances, which are bio-refractory and able to pass through biological treatment processes, consequently interfering with UV disinfection at WWTPs; the other is organic nitrogen, which can pass through the nitrification-denitrification process and contribute to effluent total nitrogen (TN). A treatability study was also conducted for landfill gas (LFG) condensate. In a laboratory study, leachate samples were fractionated into humic acid (HA), fulvic acid (FA) and hydrophilic (Hpi) fractions; the specific UV254 absorbance (SUVA254) of the three fractions followed the order HA > FA > Hpi. However, the overall UV254 absorbance of the Hpi fraction was important because there was more hydrophilic organic matter than humic or fulvic acids. The size distribution of the three fractions was found to follow the same order: HA > FA > Hpi. This indicates that membrane separation following biological treatment is a promising technology for the removal of humic substances from landfill leachates; leachate samples treated in this manner could usually meet the UV transmittance requirement of POTWs. Nitrogen species in landfill leachates at various stabilization states were also investigated. Although the effect of landfill stabilization state on the characteristics of organic matter and ammonia is well documented, there are few investigations of landfill leachate organic nitrogen across stabilization stages. Ammonia was found to leach out more slowly than organic matter and can remain at a constant level during the first years of operation (< 10 years). The concentration and biodegradability of organic nitrogen were found to decrease with landfill age. A size distribution study showed that most of the organic nitrogen in landfill leachates is < 1 kDa. Protein concentration showed a strong correlation with organic nitrogen, and the different regression slopes for untreated and treated leachates indicate that protein is more biodegradable than the other organic nitrogen species in landfill leachates. XAD-8 resin was employed to isolate the hydrophilic fraction of leachate samples; hydrophilic organic nitrogen was found to be more biodegradable/bioavailable than the hydrophobic fractions. Furthermore, biological and physical-chemical treatment methods were applied to an LFG condensate to explore feasible treatment alternatives for organic contaminant and arsenic removal. A sequencing batch reactor (SBR) was effective for the degradation of organic matter, even in an environment containing high levels of arsenic, indicating a relatively low toxicity of organic arsenic compared to inorganic arsenic. For arsenic removal, however, oxidation-coagulation (biological, conventional, and advanced oxidation followed by ferric salt coagulation) and carbon adsorption were not effective against what is believed to be trimethyl arsenic; among these, advanced oxidation-coagulation showed the best treatment efficiency (15.1% removal). Only reverse osmosis (RO) could reduce the arsenic concentration to an acceptable level to meet discharge limits. These results imply high stability and low toxicity of organic arsenic. / Ph. D.
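To make the SUVA254 comparison above concrete, here is a minimal sketch (with invented sample values, not the study's data) of how specific UV absorbance is computed from UV254 and dissolved organic carbon:

```python
# A minimal sketch, with invented sample values: SUVA254 (L/mg-C/m) is
# the UV absorbance at 254 nm (1/cm) divided by DOC (mg-C/L), times 100.
def suva254(uv254_per_cm, doc_mg_per_l):
    """Specific UV absorbance: UV254 normalized by dissolved organic carbon."""
    return uv254_per_cm / doc_mg_per_l * 100.0

fractions = {            # (UV254 in 1/cm, DOC in mg/L), illustrative only
    "HA":  (0.45, 10.0),
    "FA":  (0.30, 9.0),
    "Hpi": (0.25, 15.0),
}
for name, (uv, doc) in fractions.items():
    print(f"{name}: SUVA254 = {suva254(uv, doc):.2f} L/mg-C/m")
```

With numbers like these, the Hpi fraction has the lowest SUVA254 yet can still dominate the total UV254 absorbance if its DOC pool is large, which is the point the abstract makes.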
382

New Theoretical Techniques For Analyzing And Mitigating Password Cracking Attacks

Peiyuan Liu (18431811) 26 April 2024 (has links)
Brute force guessing attacks continue to pose a significant threat to user passwords. To protect user passwords against brute force attacks, many organizations impose restrictions aimed at forcing users to select stronger passwords. Organizations may also adopt stronger hashing functions in an effort to deter offline brute force guessing attacks. However, these defenses induce trade-offs between security, usability, and the resources an organization is willing to invest to protect passwords. In order to make informed password policy decisions, it is crucial to understand the distribution over user passwords and how policy updates will impact this password distribution and/or the strategy of a brute force attacker.

The first part of this thesis focuses on developing rigorous statistical tools to analyze user password distributions and the behavior of brute force password attackers. In particular, we first develop several rigorous statistical techniques to upper and lower bound the guessing curve of an optimal attacker who knows the user password distribution and can order guesses accordingly. We apply these techniques to analyze eight password datasets and two PIN datasets. Our empirical analysis demonstrates that our statistical techniques can be used to evaluate password composition policies, compare the strength of different password distributions, quantify the impact of applying PIN blocklists, and help tune hash cost parameters. A real-world attacker may not have perfect knowledge of the password distribution. Prior work introduced an efficient Monte Carlo technique to estimate the guessing number of a password under a particular password cracking model, i.e., the number of guesses an attacker would check before this particular password. This tool can also be used to generate password guessing curves, but there is no absolute guarantee that the guessing number and the resulting guessing curves are accurate. Thus, we propose a tool called Confident Monte Carlo that uses rigorous statistical techniques to upper and lower bound the guessing number of a particular password as well as the attacker's entire guessing curve. Our empirical analysis also demonstrates that this tool can help inform password policy decisions, e.g., identifying and warning users with weaker passwords, or tuning hash cost parameters.

The second part of this thesis focuses on developing stronger password hashing algorithms to protect user passwords against offline brute force attacks. In particular, we establish that the memory-hard function Scrypt, which has been widely deployed as a password hash function, is maximally bandwidth hard. We also present new techniques to construct and analyze depth-robust graphs with improved concrete parameters; depth-robust graphs play an essential role in the design and analysis of memory-hard functions.
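As a rough illustration of the prior Monte Carlo guessing-number technique referenced above (in the style of Dell'Amico and Filippone's Monte Carlo strength estimation, not the thesis's Confident Monte Carlo tool, whose statistical bounds are not reproduced here), the sketch below estimates a password's guessing number by importance sampling from a toy model; `ToyModel` and its probabilities are invented for demonstration:

```python
import random

class ToyModel:
    """Invented stand-in for a password cracking model over a tiny universe."""
    def __init__(self, weights):
        total = sum(weights.values())
        self.dist = {pw: w / total for pw, w in weights.items()}
    def sample(self):
        pws, ps = zip(*self.dist.items())
        return random.choices(pws, weights=ps)[0]
    def prob(self, pw):
        return self.dist.get(pw, 0.0)

def guessing_number(model, target, n=20_000):
    """Estimate the rank of `target` for an attacker guessing passwords in
    decreasing-probability order: each sample s with prob(s) > q contributes
    1/(n * prob(s)), an unbiased estimate of the count of more-likely passwords."""
    q = model.prob(target)
    return sum(1.0 / (n * model.prob(s))
               for s in (model.sample() for _ in range(n))
               if model.prob(s) > q)

model = ToyModel({"123456": 50, "password": 30, "qwerty": 15, "S3cure!pw": 5})
print(guessing_number(model, "S3cure!pw"))  # ~3: three more-likely passwords
```

The estimate is accurate in expectation but carries sampling error, which is exactly the gap the thesis's rigorous upper and lower bounds are designed to close.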
383

Three Essays on Analysis of U.S. Infant Mortality Using Systems and Data Science Approaches

Ebrahimvandi, Alireza 02 January 2020 (has links)
High infant mortality (IM) rates in the U.S. have been a major public health concern for decades. Many studies have focused on understanding the causes, risk factors, and interventions that can reduce IM. However, the death of an infant is the result of the interplay between many risk factors, which in some cases can be traced to the infancy of the parents, and these complex interactions challenge the effectiveness of many interventions. The long-term goal of this study is to advance the common understanding of effective interventions for improving health outcomes, in particular infant mortality. To achieve this goal, I applied systems and data science methods in three essays that contribute to the understanding of IM causes and risk factors. In the first study, the goal was to identify patterns in the leading causes of infant mortality across states that successfully reduced their IM rates. I explore state-level trends between 2000 and 2015 to identify patterns in the leading causes of IM. This study shows that the main driver of IM-rate reduction is the preterm-related mortality rate. The second study builds on these findings and investigates the risk factors of preterm birth (PTB) in the largest obstetric population ever studied in this field. Applying the latest statistical and machine learning techniques, I study PTB risk factors that are both generalizable and identifiable during the early stages of pregnancy. A major finding of this study is that socioeconomic factors such as parent education are more important than commonly cited factors such as race in the prediction of PTB. This finding is significant evidence for frameworks like life-course theory, which postulate that the main determinants of a health trajectory are the social scaffolding addressing the upstream roots of health. These results point to the need for more comprehensive approaches that shift the focus from medical interventions during pregnancy to the time when mothers become vulnerable to the risk factors of PTB. Therefore, in the third study, I take an aggregate approach to study the dynamics of population health that result in undesirable outcomes in major indicators like infant mortality. Based on these new explanations, I offer a systematic approach that can help address adverse birth outcomes, including high infant mortality and preterm birth rates, which is the central contribution of this dissertation. In conclusion, this dissertation contributes to a better understanding of the complexities of infant mortality and health-related policies, both in its application of statistical and machine learning techniques and in advancing health-related theories. / Doctor of Philosophy / The U.S. infant mortality rate (IMR) is 71% higher than the average rate for comparable countries in the Organization for Economic Co-operation and Development (OECD). High infant mortality and preterm birth rates (PBR) are major public health concerns in the U.S. A wide range of studies have focused on understanding the causes and risk factors of infant mortality and interventions that can reduce it. However, infant mortality is a complex phenomenon that challenges the effectiveness of interventions, and the IMR and PBR in the U.S. remain higher than in any other advanced OECD nation.
I believe that systems and data science methods can enhance our understanding of infant mortality causes, risk factors, and effective interventions. There are more than 130 diagnosed causes of infant mortality, so tracking the causes of infant mortality trends across 50 states over a long period is very challenging. In the first essay, I focus on the medical aspects of infant mortality to find the causes that drove the reduction of infant mortality rates in certain states from 2000 to 2015. In addition, I investigate the relationship between different risk factors and infant mortality in a regression model to find significant correlations. This study provides critical recommendations to policymakers in states with high infant mortality rates and guides them in leveraging appropriate interventions. Preterm birth (PTB) is the most significant contributor to the IMR, and the first study showed that infant mortality fell in states that reduced their preterm births. A considerable body of literature identifies PTB risk factors in order to explain the consistently high rates of PTB and IMR in the U.S., but it has fallen short in two key areas: generalizability and the ability to detect PTB risk in early pregnancy. In the second essay, I investigate a wide range of risk factors in the largest obstetric population ever studied in PTB research. The predictors in this study range from environmental (e.g., air pollution) to medical (e.g., history of hypertension) factors. The objective is to increase the understanding of factors that are both generalizable and identifiable during the early stage of pregnancy. I implemented state-of-the-art statistical and machine learning techniques and improved the performance measures relative to previous studies. The results reveal the importance of socioeconomic factors such as parent education, which can be as important as biomedical indicators like the mother's body mass index in predicting preterm delivery. The second study thus showed an important relationship between socioeconomic factors such as education and major health outcomes such as preterm birth. Short-term interventions that aim to improve the socioeconomic status of a mother during pregnancy have limited to no effect on birth outcomes; we therefore need more comprehensive approaches that shift the focus from medical interventions during pregnancy to the time when mothers become vulnerable to the risk factors of PTB. Hence, in the third study I use a systems approach to explore the dynamics of health over time. This novel study enhances our understanding of the complex interactions between health and socioeconomic factors over time. I explore why some communities experience a downward spiral of health deterioration, how resources are generated and allocated, how the generation and allocation mechanisms are interconnected, and why otherwise similar states can show significantly different health outcomes. I use Ohio as the case study because it suffers from poor health outcomes despite having one of the best healthcare systems in the nation. The results identify the trap of health expenditure and show how an external financial shock can exacerbate health and socioeconomic conditions in such a community.
I demonstrate how overspending or underspending in healthcare can affect a society's health outcomes in the long term. Overall, this dissertation contributes to a better understanding of the complexities associated with major health issues in the U.S. I provide health professionals with theoretical and empirical foundations of risk assessment for reducing infant mortality and preterm birth. In addition, this study offers a systems perspective on the health deterioration that many communities in the U.S. are experiencing, in the hope that this perspective improves policymakers' decision-making.
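As an illustration of the kind of model used in the second essay, the sketch below fits a logistic-regression baseline on synthetic data; all feature names, coefficients, and data are invented and do not reproduce the dissertation's cohort or results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.integers(0, 2, n),       # parent_education (assumed binary)
    rng.normal(27.0, 6.0, n),    # mother_bmi
    rng.integers(0, 2, n),       # history_of_hypertension
    rng.normal(0.0, 1.0, n),     # air_pollution_index (standardized)
])
# Synthetic ground truth: education protective; BMI, hypertension, pollution risky.
logit = (-2.0 - 0.8 * X[:, 0] + 0.04 * (X[:, 1] - 27.0)
         + 0.7 * X[:, 2] + 0.2 * X[:, 3])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
print("coefficients:", clf.coef_.round(2))  # sign/magnitude ~ factor importance
```

Inspecting the fitted coefficients is one simple way to compare the relative weight of socioeconomic and biomedical predictors, which is the comparison the abstract emphasizes.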
384

Evaluation of the critical parameters and polymeric coat performance in compressed multiparticulate systems

Benhadia, Abrehem M.A. January 2019 (has links)
Compression of coated pellets is a practical alternative to capsule filling. Current practice is to add cushioning agents to minimize the stress on the coated pellets; cushioning agents, however, add bulk and reduce the overall drug-loading capacity. In this study, we investigated the performance of compressed coated pellets with no cushioning agent to evaluate the feasibility of predicting coat behaviour using thermo-mechanical and rheological analysis techniques. Different coating formulations were made with ethyl cellulose (EC) as the coating polymer, and two kinds of additives were incorporated into the polymeric coating solution: triethyl citrate (TEC) and polyethylene glycol 400 (PEG 400) were used as plasticisers at different levels (10%, 20%, 30%). Thermal, mechanical and rheological measurements of the coating film formulations were performed to investigate the effect of the plasticisers. Thermogravimetric analysis (TGA) showed higher residual moisture content in films plasticised with PEG 400 than in their TEC counterparts. Differential scanning calorimetry (DSC), dynamic mechanical analysis (DMA) and a parallel-plate shear rheometer (PPSR) were used to study how the level and type of plasticiser in the coating film formulation influence the performance of the film. Both DSC and DMA were used to measure the Tg of each film coating formulation in order to evaluate the effect of the additives; in general, the DMA values of Tg were 10-20% higher than those measured by DSC. Furthermore, the clamp size and the oscillation frequency influence the evaluation of Tg. Complex viscosity measurements of the different coating film formulations revealed that the shear-thinning gradient changes with temperature and with plasticiser type and concentration. The complex viscosity from both DMA and PPSR exhibits power-law behaviour, and the rheological moduli were indirectly affected by the level of plasticiser. There was a discrepancy between the complex viscosity results obtained from DMA and PPSR at similar temperatures, although they followed the same trend: the non-plasticised polymer showed roughly ten-fold higher complex viscosity when measured by DMA than by PPSR, and the difference was smaller, but not consistent, in plasticised films. A consistent coefficient to correlate the DMA and PPSR results therefore could not be accurately determined. Coated pellets were compressed and key process parameters were evaluated. The results revealed that coating thickness has a significant effect on the release profile of the final product: increasing the coating film thickness decreased the percentage released. The compression force had a smaller influence on the drug release profile, while the dwell time had very little effect on the percentage released from the final product. The optimum release profile was obtained at a coating level of 5.5% w/w and a compression force of 4700 N. In conclusion, the elasticity of the plasticised EC films in this study meant that internal stress was not dissipated during compression within the dwell-time range used in these experiments; increasing the thickness was therefore necessary to strengthen the film and avoid cracking.
The mechanical and rheological profiling was therefore helpful for understanding the behaviour of the coated pellets and predicting the film properties at the various steps of the coating and compression process (i.e., various shear-rate regimes). An experimental-design approach to studying the key process and formulation parameters helped identify the optimum values for the process.
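As a sketch of the power-law behaviour noted above, the following fits eta*(omega) = K * omega^(n - 1) to invented complex-viscosity data by linear regression in log-log space (the data points are not the study's measurements):

```python
import numpy as np

omega = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])            # rad/s
eta = np.array([5.2e4, 2.4e4, 1.1e4, 5.0e3, 2.3e3, 1.1e3])    # Pa.s, invented

slope, intercept = np.polyfit(np.log(omega), np.log(eta), 1)
n = slope + 1.0        # flow index; n < 1 indicates shear thinning
K = np.exp(intercept)  # consistency coefficient
print(f"n = {n:.2f}, K = {K:.3g} Pa.s^n")
```

Fitting both the DMA and PPSR data this way would expose the kind of systematic offset between the two instruments that the abstract describes.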
385

A multi-wavelength study of a sample of galaxy clusters / Susan Wilson

Wilson, Susan January 2012 (has links)
In this dissertation we aim to perform a multi-wavelength analysis of galaxy clusters. We discuss various clustering methods used to determine the physical parameters of galaxy clusters required for this type of study. A selection of galaxy clusters was drawn from four papers (Popesso et al. 2007b, Yoon et al. 2008, Loubser et al. 2008, Brownstein & Moffat 2006) and restricted by redshift and galactic latitude to yield a sample of 40 galaxy clusters with 0.0 < z < 0.15. Data mining using the Virtual Observatory (VO) and a literature survey provided background information on each galaxy cluster in our sample with respect to optical, radio and X-ray data. Using Kaye's Mixture Model (KMM) and the Gaussian Mixture Model (GMM), we determine the most likely cluster member candidates for each source in our sample and compare the results to SIMBAD's hierarchy method. We show that the GMM provides a very robust method for determining member candidates, but to ensure that the right candidates are chosen we apply a selection of outlier tests to our sources. We arrive at a method, based on a combination of the GMM, the Q-Q plot and the Rosner test, that provides a robust and consistent way of determining galaxy cluster members. Comparison between the calculated physical parameters (velocity dispersion, radius, mass and temperature) and values from the literature shows that the majority of our galaxy clusters agree within the 3σ range. Inconsistencies are thought to be due to dynamically active clusters that have substructure or are undergoing mergers, making member identification difficult. Six correlations between different physical parameters in the optical and X-ray wavelengths were consistent with published results. Comparing the velocity dispersion with the X-ray temperature, we found a relation of σ ∝ T^0.43, compared with the T^0.5 obtained by Bird et al. (1995). The X-ray luminosity-temperature and X-ray luminosity-velocity dispersion relations gave L_X ∝ T^2.44 and L_X ∝ σ^2.40, which lie within the uncertainty of the results of Rozgacheva & Kuvshinova (2010). These results all suggest that our method for determining galaxy cluster members is efficient, and its application to higher-redshift sources can be considered. Further studies of galaxy clusters with substructure must be performed to improve this method. In future work, the physical parameters obtained here will be compared further with X-ray and radio properties to determine a link between bent radio sources and the galaxy cluster environment. / MSc (Space Physics), North-West University, Potchefstroom Campus, 2013
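A minimal sketch of the GMM membership step described above, assuming scikit-learn and synthetic line-of-sight velocities (the thesis further screens candidates with Q-Q plot and Rosner outlier tests before accepting members):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
v = np.concatenate([
    rng.normal(15_000.0, 800.0, 60),     # cluster members (km/s), invented
    rng.uniform(5_000.0, 30_000.0, 15),  # foreground/background interlopers
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=1).fit(v)
labels = gmm.predict(v)
cluster = np.bincount(labels).argmax()   # dominant component = cluster
members = v[labels == cluster].ravel()
sigma = members.std(ddof=1)              # velocity dispersion estimate
print(f"{members.size} member candidates, sigma ~ {sigma:.0f} km/s")
```

The velocity dispersion of the accepted members then feeds the mass and temperature scaling relations quoted in the abstract.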
387

Characterizing and controlling program behavior using execution-time variance

Kumar, Tushar 27 May 2016 (has links)
Immersive applications, such as computer gaming, computer vision and video codecs, are an important emerging class of applications with QoS requirements that are difficult to characterize and control using traditional methods. This thesis proposes new techniques that rely on execution-time variance to both characterize and control program behavior. The proposed techniques are intended to be broadly applicable to a wide variety of immersive applications and easy for programmers to apply without specialized expertise. First, we create new QoS controllers that programmers can easily apply to their applications to achieve desired application-specific QoS objectives on any platform or application data set, provided they verify that their applications satisfy some simple domain requirements specific to immersive applications. The controllers adjust programmer-identified knobs every application frame to effect desired values for programmer-identified QoS metrics. The control techniques are novel in that they do not require the user to provide any kind of application behavior model, and they are effective for immersive applications that defy the traditional requirements for feedback-controller construction. Second, we create new profiling techniques that provide visibility into the behavior of a large, complex application, inferring behavior relationships across application components from the execution-time variance observed at all levels of granularity of the application's functionality. Additionally, for immersive applications some of the most important QoS requirements relate to managing the execution-time variance of key application components, for example the frame rate. The profiling techniques not only identify and summarize behavior directly relevant to timing-related QoS, but also indirectly reveal non-timing properties of behavior, such as components that are sensitive to data or whose behavior changes with calling context.
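A toy sketch of the control setting described above (not the thesis's controller design): each frame, a programmer-identified knob is nudged so that a measured QoS metric tracks its target, with the invented `frame_time` function standing in for a real application:

```python
def control_step(knob, measured_ms, target_ms, gain=0.05, lo=1.0, hi=10.0):
    """One frame of integral-style correction, clamped to the knob's range."""
    knob -= gain * (measured_ms - target_ms)   # too slow -> lower detail
    return max(lo, min(hi, knob))

def frame_time(knob):
    """Simulated plant (assumption): frame time grows with the detail knob."""
    return 8.0 + 3.0 * knob

knob, target = 5.0, 33.3                       # ~30 frames per second
for _ in range(50):
    knob = control_step(knob, frame_time(knob), target)
print(f"knob settled at {knob:.2f}, frame time {frame_time(knob):.1f} ms")
```

Note that the controller never sees the `frame_time` model itself, only its per-frame output, which mirrors the model-free property the abstract claims.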
388

Analyse des paramètres atmosphériques des étoiles naines blanches dans le voisinage solaire / Analysis of the atmospheric parameters of white dwarf stars in the solar neighborhood

Giammichele, Noemi 12 1900 (has links)
This thesis presents a homogeneous and rigorous analysis of the sample of white dwarf stars located within 20 pc of the Sun. The main objective of this study is to obtain a statistically sound model of the sample that is most representative of the white dwarf population. Starting from the sample defined by Holberg et al. (2008), we first gathered as much information as possible on all the local candidates, in the form of optical spectra and photometric data. Using the most recent white dwarf model atmospheres of Tremblay & Bergeron (2009), together with several analysis techniques, we determined in a homogeneous way the atmospheric parameters (Teff and log g) of the white dwarfs in this sample. The spectroscopic technique, i.e., measuring Teff and log g by fitting the spectral lines, was applied to every star in our sample for which an optical spectrum with sufficiently strong lines was available. For stars with photometric data, the energy distribution, combined with the trigonometric parallax when measured, yields the atmospheric parameters as well as the chemical composition of the star. A revised catalog of white dwarfs in the solar neighborhood is presented, including all the newly determined atmospheric parameters. The resulting global analysis is then presented, including a study of the chemical-composition distribution of the local white dwarfs, the mass distribution and the luminosity function. / We present improved atmospheric parameters of nearby white dwarfs lying within 20 pc of the Sun. The aim of the current study is to obtain the best statistical model of the least-biased sample of the white dwarf population. A homogeneous analysis of the local population is performed by combining detailed spectroscopic and photometric analyses based on improved model-atmosphere calculations for various spectral types, including DA, DB, DQ, and DZ stars. The spectroscopic technique is applied to all stars in our sample for which optical spectra are available. Photometric energy distributions, when available, are also combined with trigonometric parallax measurements to derive effective temperatures, stellar radii, and atmospheric compositions. A revised catalog of white dwarfs in the solar neighborhood is presented. We provide for the first time a comprehensive analysis of the mass distribution and the chemical distribution of white dwarf stars in a volume-limited sample.
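As a schematic of the spectroscopic technique, the sketch below selects the (Teff, log g) grid point minimizing chi-squared against a noisy line profile; the `toy_line` function is a fabricated stand-in for real model-atmosphere spectra such as those of Tremblay & Bergeron (2009):

```python
import numpy as np

wl = np.linspace(4300.0, 4400.0, 200)          # wavelengths near H-gamma (A)

def toy_line(teff, logg):
    """Fabricated Gaussian absorption profile, for illustration only."""
    depth = 0.6 * (10_000.0 / teff)            # hotter -> shallower (assumed)
    width = 8.0 + 2.0 * (logg - 8.0)           # higher gravity -> broader (assumed)
    return 1.0 - depth * np.exp(-0.5 * ((wl - 4350.0) / width) ** 2)

def chi2(obs, model, err=0.01):
    return float(np.sum(((obs - model) / err) ** 2))

obs = toy_line(12_000.0, 8.0) + np.random.default_rng(2).normal(0.0, 0.01, wl.size)
grid = [(t, g) for t in range(8_000, 16_001, 500)
               for g in (7.5, 7.75, 8.0, 8.25, 8.5)]
teff, logg = min(grid, key=lambda p: chi2(obs, toy_line(*p)))
print(f"best fit: Teff = {teff} K, log g = {logg:.2f}")
```

In the real analysis the grid points are full synthetic spectra and several Balmer lines are fit simultaneously, but the minimize-chi-squared-over-a-grid structure is the same.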
389

Mechanism and Prediction of Post-Operative Atrial Fibrillation Based on Atrial Electrograms

Xiong, Feng 03 1900 (has links)
Atrial fibrillation (AF) is an arrhythmia affecting the atria. In AF, atrial contraction is rapid and irregular, ventricular filling becomes incomplete, and cardiac output is reduced. AF can cause palpitations, fainting, chest pain or heart failure, and it also increases the risk of stroke. Coronary artery bypass grafting is a surgical procedure performed to restore blood flow in cases of severe coronary artery disease. Among patients with no history of AF, 10% to 65% develop it, most often on the second or third postoperative day. AF is particularly frequent after mitral valve surgery, occurring in about 64% of patients. The occurrence of postoperative AF is associated with increased morbidity and with longer, more costly hospital stays. The mechanisms responsible for postoperative AF are not well understood, and identifying patients at high risk of AF after bypass surgery would be useful for its prevention. The present project is based on the analysis of cardiac electrograms recorded in patients after coronary bypass surgery. The first objective of the research is to investigate whether the recordings display typical changes before the onset of AF; the second is to identify predictors that distinguish the patients who will develop AF. Recordings were made by the team of Dr. Pierre Pagé on 137 patients treated with coronary bypass surgery. Three unipolar electrodes were sutured onto the epicardium of the atria to record continuously during the first four postoperative days. The first task was to develop an algorithm to detect and distinguish atrial and ventricular activations on each channel, and to combine the activations from the three channels belonging to the same cardiac event. The algorithm was developed and optimized on a first set of markers, and its performance was evaluated on a second set. Validation software was developed to prepare these two sets and to correct the detections over all the recordings later used in the analyses; it was complemented by tools to detect, label and validate normal sinus beats, premature atrial and ventricular activations (PAA, PVA), and episodes of arrhythmia. Preoperative clinical data were then analyzed to establish the preoperative risk of AF. Age, serum creatinine level and a diagnosis of myocardial infarction proved to be the most important predictors. Although the level of preoperative risk could to some extent predict who would develop AF, it was not correlated with the time of onset of postoperative AF. For all patients who had at least one AF episode lasting 10 minutes or more, the two hours preceding the first prolonged AF were analyzed. This first prolonged AF was always triggered by a PAA, most often originating in the left atrium. However, over the two pre-AF hours, the distribution of PAAs, and of the fraction of PAAs originating in the left atrium, was wide and inhomogeneous among the patients.
The PAA rate, the duration of transient arrhythmias, the sinus heart rate, and the low-frequency portion of heart-rate variability (LF portion) showed significant changes in the last hour before the onset of AF. The final step was to compare patients with and without prolonged AF to find factors discriminating the two groups. Five types of logistic-regression models were compared; they had similar sensitivity, specificity and receiver-operating curves, and all predicted non-AF patients very poorly. A moving-average method was proposed to improve the discrimination, especially for non-AF patients. Two models were retained, selected on criteria of robustness, accuracy and applicability. About 70% of non-AF patients and 75% of AF patients were correctly identified in the last hour before AF. The PAA rate, the fraction of PAAs initiated in the left atrium, pNN50, the atrioventricular conduction time, and the correlation between the latter and the heart rate were the predictor variables common to these two models. / Atrial fibrillation (AF) is an abnormal heart rhythm (cardiac arrhythmia). In AF, the atrial contraction is rapid and irregular, and the filling of the ventricles becomes incomplete, leading to reduced cardiac output. Atrial fibrillation may result in symptoms of palpitations, fainting, chest pain, or even heart failure. AF is also an important risk factor for stroke. Coronary artery bypass graft surgery (CABG) is a surgical procedure to restore the perfusion of the cardiac tissue in case of severe coronary heart disease. 10% to 65% of patients who never had a history of AF develop AF on the second or third post-CABG day. The occurrence of postoperative AF is associated with worse morbidity and longer and more expensive intensive-care hospitalization. The fundamental mechanism responsible for AF, especially in post-surgery patients, is not well understood. Identification of patients at high risk of AF after CABG would be helpful in the prevention of postoperative AF. The present project is based on the analysis of cardiac electrograms recorded in patients after CABG surgery. The first aim of the research is to investigate whether the recordings display typical changes prior to the onset of AF. A second aim is to identify predictors that can discriminate the patients that will develop AF. Recordings were made by the team of Dr. Pierre Pagé on 137 patients treated with CABG surgery. Three unipolar electrodes were sutured on the epicardium of the atria to record continuously during the first 4 post-surgery days. As a first stage of the research, an automatic and unsupervised algorithm was developed to detect and distinguish atrial and ventricular activations on each channel, and to join together the activations of the different channels belonging to the same cardiac event. The algorithm was developed and optimized on a training set, and its performance assessed on a test set. Validation software was developed to prepare these two sets and to correct the detections over all recordings that were later used in the analyses. It was complemented with tools to detect, label and validate normal sinus beats, atrial and ventricular premature activations (PAA, PVC) as well as episodes of arrhythmia. Pre-CABG clinical data were then analyzed to establish the preoperative risk of AF.
Age, serum creatinine and prior myocardial infarct were found to be the most important predictors. While the preoperative risk score could to a certain extent predict who would develop AF, it was not correlated with the post-operative time of AF onset. Then the set of AF patients was analyzed, considering the last two hours before the onset of the first AF lasting for more than 10 minutes. This prolonged AF was found to be usually triggered by a premature atrial activation most often originating from the left atrium. However, along the two pre-AF hours, the distribution of PAA, and of the fraction of these coming from the left atrium, was wide and inhomogeneous among the patients. PAA rate, duration of transient atrial arrhythmia, sinus heart rate, and the low-frequency portion of heart rate variability (LF portion) showed significant changes in the last hour before the onset of AF. Compared with all other PAAs, the triggering PAAs were characterized by their prematurity, the small value of the maximum derivative of the electrogram nearest the site of origin, the presence of transient arrhythmia, and an increased LF portion of sinus heart-rate variability prior to the onset of the arrhythmia. The final step was to compare AF and non-AF patients to find predictors to discriminate the two groups. Five types of logistic regression models were compared, achieving similar sensitivity, specificity, and ROC curve area, but very low prediction accuracy for non-AF patients. A weighted moving-average method was proposed to improve the accuracy for non-AF patients. Two models were favoured, selected on the criteria of robustness, accuracy, and practicability. Around 70% of non-AF patients and around 75% of AF patients were correctly classified in the last hour before AF. The PAA rate, the fraction of PAA initiated in the left atrium, pNN50, the atrio-ventricular conduction time, and the correlation between the latter and the heart rhythm were common predictors of these two models.
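One of the predictors named above, pNN50, has a standard definition that a short sketch makes concrete (the interval series below is invented):

```python
# pNN50: the percentage of successive normal-to-normal (NN) interbeat
# intervals that differ by more than 50 ms.
import numpy as np

def pnn50(nn_ms):
    diffs = np.abs(np.diff(np.asarray(nn_ms, dtype=float)))
    return 100.0 * np.mean(diffs > 50.0)

nn = [812, 790, 845, 801, 770, 860, 855, 799]  # NN intervals in ms, illustrative
print(f"pNN50 = {pnn50(nn):.1f} %")            # 3 of 7 differences exceed 50 ms
```

In the study, such beat-to-beat variability measures are computed over the validated sinus beats extracted from the epicardial recordings and fed to the logistic-regression models.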
390

Nature, origine et réactivité de la matière organique fossile dans les sols et sédiments : développements et applications de la photoionisation - spectrométrie de masse haute résolution (APPI-QTOF) et couplage avec la chromatographie d'exclusion stérique (SEC) / Nature, origin and reactivity of fossil organic matter in soils and sediments: developments and applications of photoionization high-resolution mass spectrometry (APPI-QTOF) and coupling with size-exclusion chromatography (SEC)

Ghislain, Thierry 08 July 2011 (has links)
The development of analytical tools for the analysis of complex organic matter in organic geochemistry has advanced considerably in recent years, answering many questions about the composition of organic matter. However, many points remain to be elucidated, notably the characterization of high-molecular-weight fractions and the monitoring of organic-matter reactivity. The objectives of this thesis were (i) to adapt existing mass-spectrometry techniques to the analysis of fossil organic matter, notably by selecting the most suitable atmospheric ionization source, and (ii) to develop a new coupling of size-exclusion chromatography (SEC) with APPI-QTOF mass spectrometry for the analysis of weakly polar, high-molecular-weight fractions. The adaptation of APPI-QTOF first allowed a better understanding of the reactivity of polyaromatic organic contaminants in the presence of mineral phases. The SEC-APPI-QTOF coupling, in turn, improved our knowledge of asphaltene structure. However, despite the "simplification" made possible by SEC, the very large amount of information remains difficult and time-consuming to interpret. A mathematical model was therefore developed, based on numerical and statistical analyses of the mass spectra, allowing spectra to be compared with one another in order to distinguish the origin of samples and to follow the impact of physico-chemical processes (natural weathering, remediation treatments). / The development of analytical tools for organic geochemistry analysis has advanced rapidly in recent years. This development has allowed many questions about organic-matter composition to be answered. However, many issues remain to be clarified, including the characterization of high-molecular-weight fractions and the monitoring of the reactivity of organic matter. This thesis focused on (i) improving existing methods for fossil organic geochemistry analysis and (ii) developing a new type of coupling between size-exclusion chromatography (SEC) and APPI-QTOF mass spectrometry for weakly polar, high-molecular-weight fractions. Adjustments to APPI-QTOF mass spectrometry allowed a better understanding of polyaromatic organic contaminant reactivity in the presence of mineral matrices, and the success of the SEC coupling allowed a better understanding of the structure of asphaltenes. However, despite the "simplification" obtained by SEC, the large amount of information remains difficult and time-consuming to interpret. A mathematical model based on numerical and statistical analysis of mass spectra was therefore developed, allowing direct comparison of mass spectra and the extraction of several types of information, such as the origin of samples, the monitoring of physico-chemical processes, and the efficiency of soil-recovery treatments, as well as the identification of analytical protocols.
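As a hedged sketch of the spectra-comparison idea (the thesis's actual numerical and statistical model is not reproduced), the following scores similarity between mass spectra represented on a common m/z grid using cosine similarity, with synthetic intensities:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(3)
grid_size = 900                                   # shared m/z bins, assumed
sample_a = rng.random(grid_size)                  # reference extract
sample_b = np.clip(sample_a + rng.normal(0, 0.1, grid_size), 0, None)  # altered analog
sample_c = rng.random(grid_size)                  # unrelated source

print("A vs B:", round(cosine_similarity(sample_a, sample_b), 3))  # near 1
print("A vs C:", round(cosine_similarity(sample_a, sample_c), 3))  # lower
```

Scoring pairs of samples this way is one simple route to the grouping-by-origin and process-tracking applications the abstract describes.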
