  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
681

Placental ‘Omics’ Study to Understand the Pathogenesis of Preeclampsia

Kedia, Komal 01 May 2016 (has links)
Preeclampsia (PE) is a potentially fatal complication of pregnancy characterized by an increase in blood pressure (>140/90 mmHg) and proteinuria (>300 mg/24 hrs), often accompanied by edema. Symptoms of PE start after 20 weeks of gestation. If PE remains untreated, it can lead to eclampsia: grand mal seizures that are responsible for most fatalities. PE is believed to affect 2-10% of pregnancies worldwide and claims the lives of over 75,000 mothers and 500,000 newborns yearly. No therapeutic agents have been developed to prevent or cure PE, in part because of the absence of a complete understanding of the pathogenesis of this disease. PE has long been regarded as a “disease of theories”, and its pathophysiology continues to be the subject of debate. Nonetheless, several abnormalities have been observed to precede established, clinical PE and have in turn been proposed to be involved in the causation of this disease, all with involvement of the mother's placenta as a central feature. Removal of the placenta is the only cure for PE and results in a rapid resolution of the symptoms. Thus, the placenta remains an organ of substantial interest, and many research groups have attempted to identify abnormal placental features occurring in PE. None of these studies have focused on less abundant, low molecular weight (LMW) biomolecules, which play important roles in the pathophysiology of many diseases. A number of alterations are believed to affect the placenta and contribute to the pathogenesis of PE; the most widely accepted include hypoxia, oxidative stress, and an increase of pro-inflammatory mediators in the mother's placenta. The goal of my initial study was to identify which of these hypothesized causative pathways is significant in the etiology of this syndrome and to investigate which less abundant, low molecular weight biomolecules change in response to these abnormalities. 
For this purpose, we first adapted and optimized a previously developed methodology for studying LMW biomolecules in tissue specimens and applied it to placental biomolecules. This approach involved a tissue homogenization step followed by protein depletion using acetonitrile. We compared two regions of human placenta, the chorionic plate and the basal plate, to find differences in the LMW fraction. We discovered 16 species with statistically significant differences between the two sides and identified 12 of them using tandem mass spectrometry. In the second study we collected normal human term placentas from elective C-section deliveries and exposed explants to each of the above-mentioned provocative agents or stress conditions for 48 hrs. Other explants without any stressors were cultured in parallel for the same amount of time. The processing of explants was divided into five steps: 1) explant culture; 2) tissue homogenization; 3) acetonitrile precipitation to remove high abundance, high molecular weight proteins; 4) injection of the protein-depleted specimen into a capillary liquid chromatography-mass spectrometer; 5) analysis of MS data to identify quantitative differences between cases (stressed explants) and controls (normal explants). In total, we observed 146 molecules changed in abundance between the treated explants and the controls, with 75 of these molecules changed in response to hypoxic treatment, 23 changed due to hypoxia-reoxygenation, a process generating reactive oxygen species, and 48 changed due to tumor necrosis factor-alpha (TNFα), a pro-inflammatory cytokine. We identified 45% of all these molecules by tandem MS. Statistical modeling using LASSO analysis allowed the development of a model that used 16 of the 146 differentially expressed biomolecules to accurately classify and differentiate each of the 4 stress conditions. 
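The LASSO step mentioned above can be illustrated with a minimal sketch. This is not the authors' model; it is a generic coordinate-descent LASSO on invented synthetic data, showing how the L1 penalty drives the coefficient of an uninformative feature (a non-discriminative biomarker, in the abstract's terms) to exactly zero:

```python
def soft_threshold(rho, lam):
    """Shrinkage operator at the heart of the LASSO coordinate update."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            rho, z = 0.0, 0.0
            for i in range(n):
                # partial residual: leave out feature j's own contribution
                pred_others = sum(w[k] * X[i][k] for k in range(p) if k != j)
                rho += X[i][j] * (y[i] - pred_others)
                z += X[i][j] ** 2
            w[j] = soft_threshold(rho, lam) / z if z else 0.0
    return w

# toy data: the response depends only on the first feature
X = [[1.0, 1.0], [2.0, -1.0], [3.0, 1.0], [4.0, -1.0]]
y = [2.0, 4.0, 6.0, 8.0]          # y = 2 * x0 exactly
w = lasso_coordinate_descent(X, y, lam=0.1)
# w[0] lands close to 2; the irrelevant w[1] is shrunk to exactly 0
```

The same sparsity mechanism is what lets a model built on 146 candidate species retain only the 16 it needs.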
In my third study, I submitted actual preeclamptic and non-diseased placental tissue to our established homogenization and acetonitrile precipitation protocol to see whether any of the differences in LMW biomolecules produced under stress conditions in normal placenta were recapitulated in actual diseased placenta. In a preliminary statistical analysis, 8 of the original 146 differentially expressed species displayed significant or near-significant changes in the diseased placenta. After applying two stringent statistical tests that eliminated any potential influence of gestational age, four of the 146 biomarkers previously studied continued to be differentially expressed in both stringent analyses. Of the four, one biomarker (m/z 649.49 (+1)) showed an increased abundance in hypoxic placental explants as well as in PE placenta; two (m/z 461.06 (+1) and 476.24 (+1)) were increased in TNFα-exposed placental explants and in PE placentas; and one (m/z 426.35 (+1)), increased in response to hypoxia-reoxygenation treatment of placental explants, was also increased in PE placenta. We have chemically characterized two of the four biomarkers: one was a phospholipid (m/z 476.24) and the other an acyl-carnitine (m/z 426.35). This suggests that features of PE arise from the predicted early abnormalities that affect the placenta. In conclusion, I was successful in developing an ‘omics’ approach to study less abundant, low molecular weight biomolecules in human placenta, in investigating which biomarkers show differential expression in human placenta exposed to proposed abnormalities of PE, and in gathering data to suggest that these same responses are present in PE placenta.
682

Bi-Objective Optimization of Kidney Exchanges

Xu, Siyao 01 January 2018 (has links)
Matching people to their preferences is an algorithmic topic with real-world applications. One such application is the kidney exchange. The best "cure" for a patient whose kidneys are failing is to replace one with a healthy kidney. Unfortunately, biological factors (e.g., blood type) constrain the number of possible replacements. Kidney exchanges seek to alleviate some of this pressure by allowing a donor to give a kidney to a patient other than the one they most care about; in turn, that patient's donor gives her kidney to the patient this first donor most cares about. Roth et al. first discussed the classic kidney exchange problem. Freedman et al. expanded upon this work by optimizing an additional objective alongside maximal matching. In this work, I implement the traditional kidney exchange algorithm and extend more recent work by considering multi-objective optimization of the exchange. In addition, I compare the use of 2-cycles to 3-cycles. I offer two hypotheses regarding the results of my implementation and end with a summary and a discussion of potential future work.
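The 2-cycle case described above can be sketched as follows. This is a generic illustration, not the thesis code (the pair indices and compatibility relation are invented): it enumerates mutual-compatibility swaps and finds a maximum set of disjoint ones by exhaustive search, which is tractable for small pools:

```python
def two_cycles(compat):
    """All 2-cycles: pairs (i, j) whose donors are mutually compatible.

    compat[i] is the set of patient-donor pairs that donor i can give to.
    """
    n = len(compat)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if j in compat[i] and i in compat[j]]

def max_disjoint(cycles, used=frozenset()):
    """Largest set of vertex-disjoint 2-cycles (exact, exponential search)."""
    best = []
    for k, (i, j) in enumerate(cycles):
        if i in used or j in used:
            continue
        candidate = [(i, j)] + max_disjoint(cycles[k + 1:], used | {i, j})
        if len(candidate) > len(best):
            best = candidate
    return best

# four patient-donor pairs; compat[i] lists patients donor i can give to
compat = [{1}, {0, 2}, {1, 3}, {2}]
matching = max_disjoint(two_cycles(compat))
# two disjoint swaps, (0, 1) and (2, 3): all four patients receive a kidney
```

Allowing 3-cycles, as the abstract discusses, enlarges the search space and can match patients that no 2-cycle covers.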
683

Climate change and plant demography in the sagebrush steppe

Compagnoni, Aldo 01 August 2013 (has links)
We used demographic methods to address one of the main challenges facing ecological science: forecasting the effect of climate change on plant communities. Ecological forecasts will be crucial to inform long-term planning in wildland management, and demographic methods are ideal to quantify changes in plant abundance. We carried out our research in the sagebrush steppe, one of the most extensive plant ecosystems of Western North America. Our research was intended to inform ecological forecasts on an exotic invader, cheatgrass (Bromus tectorum). Moreover, we investigated a more general question: to what degree does competition among plants influence the outcome of ecological forecasts of climate change effects? We carried out two field experiments to test the hypothesis that warming will increase cheatgrass abundance in the sagebrush steppe. This hypothesis was strongly supported by both experiments. Warming increased cheatgrass abundance regardless of elevation, neighboring vegetation, or cheatgrass genotype. Moreover, we found cheatgrass was hindered by snow cover. Therefore, warming increases cheatgrass growth directly by increasing temperature, and indirectly by decreasing or removing snow cover. In our last experiment, we tested whether forecasts of climate change effects on rare species can ignore competition from neighbors. This should be possible because rare species should have little niche overlap with other species; the lower the niche overlap, the less competition with other species. To test this hypothesis, we used a long-term data set from an Idaho sagebrush steppe. We built population models that reproduced the dynamics of the system by simulating climate and competition. Model simulations supported our hypothesis: rare species have little niche overlap and weak competitive interactions with neighbor species.
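The role of niche overlap in such forecasts can be illustrated with a toy model (not the authors' sagebrush model; all parameter values are invented): a discrete-time Lotka-Volterra competition simulation in which a small competition coefficient, standing in for low niche overlap, lets both species persist near their single-species carrying capacities:

```python
def simulate(r=0.5, K=100.0, alpha=0.2, n1=10.0, n2=10.0, dt=0.1, steps=1000):
    """Euler integration of two-species Lotka-Volterra competition.

    alpha is the interspecific competition coefficient: 0 means no niche
    overlap, 1 means a neighbor competes as strongly as a conspecific.
    """
    for _ in range(steps):
        d1 = r * n1 * (1.0 - (n1 + alpha * n2) / K)
        d2 = r * n2 * (1.0 - (n2 + alpha * n1) / K)
        n1 += dt * d1
        n2 += dt * d2
    return n1, n2

n1, n2 = simulate(alpha=0.2)
# low overlap: both species settle near K / (1 + alpha) ≈ 83.3,
# close to the carrying capacity they would reach with no neighbor at all
```

With alpha near zero, each species' trajectory is almost unchanged by its neighbor, which is the intuition behind forecasting rare-species dynamics while ignoring competition.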
684

Stimulus-Free RT Level Power Model using Belief Propagation

Ponraj, Sathishkumar 25 October 2004 (has links)
Power consumption is one of the major bottlenecks in current and future VLSI design. Early microprocessors consumed a few tens of watts; today's designs contain millions of transistors, and with easy-to-use design tools enabling exploration at ever-smaller feature sizes, chip density is increasing at an alarming rate, which necessitates faster power estimation methods. Gate-level power estimation techniques are highly accurate, but when time is the main constraint, power has to be estimated much higher in the abstraction hierarchy. Estimating power at higher levels also saves the valuable time and cost involved in redesigning when design specifications are not met. We estimate power at every level of abstraction for a breadth-first design-space exploration. This work targets a stimulus-free, pattern-insensitive RT-level hierarchical probabilistic model, called the Behavioral Induced Directed Acyclic Graph (BIDAG), that can freely traverse between the RT and logic levels; we prove that such a model corresponds to a Bayesian network, which maps all the dependencies and can be used to model the joint probability distribution of a set of variables. Each node or variable in this structure represents a gate-level directed acyclic graph, called the Logic Induced Directed Acyclic Graph (LIDAG). We employ Bayesian networks for the exact representation of the underlying probabilistic framework at the RT level, capturing the dependencies exactly, and use the same probabilistic model at the logic level. Bayesian networks are graphical representations used to concisely capture uncertain knowledge about a system. To obtain the posterior belief of a query node or variable, with or without preset nodes or variables called evidence nodes, we use a stochastic inference algorithm based on importance sampling, called Evidence Pre-propagation Importance Sampling (EPIS), which is an anytime algorithm and scales well for RT and logic networks. 
Experimental results indicate that this estimation method yields high accuracy and is qualitatively superior to macro-models over a wider range of input patterns. The main highlights of this work are that, being a probabilistic model, it is input-pattern independent, and its nonsimulative nature implies less time for power modeling.
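The probabilistic flavor of this approach, estimating signal statistics by sampling over a distribution of inputs rather than simulating fixed input vectors, can be sketched on a toy gate network. This is a generic Monte Carlo illustration (the circuit and input probabilities are invented), not the EPIS algorithm or the BIDAG model itself:

```python
import random

def sample_circuit(rng):
    """One forward sample of a tiny gate network with equiprobable inputs."""
    a, b, c = (rng.random() < 0.5 for _ in range(3))
    g1 = a and b          # AND gate
    g2 = g1 or c          # OR gate
    return g1, g2

def signal_probabilities(n_samples=100_000, seed=42):
    """Estimate P(node = 1) for each gate output; such signal probabilities
    feed switching-activity (hence dynamic power) estimates."""
    rng = random.Random(seed)
    count1 = count2 = 0
    for _ in range(n_samples):
        g1, g2 = sample_circuit(rng)
        count1 += g1
        count2 += g2
    return count1 / n_samples, count2 / n_samples

p_g1, p_g2 = signal_probabilities()
# exact values: P(g1) = 0.25 and P(g2) = 1 - 0.75 * 0.5 = 0.625
```

A Bayesian-network formulation replaces this brute-force sampling with importance-weighted samples over the network's conditional probability tables, which is what lets EPIS handle evidence nodes efficiently.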
685

Identification of fault and top seal effectiveness through an integration of hydrodynamic and capillary analysis techniques

Underschultz, James Ross January 2009 (has links)
Fault and top seal effectiveness has proved to be a significant risk in exploration success, and creates a large uncertainty in predicting reservoir performance. This is particularly true in the Australian context, but equally applies to exploration provinces worldwide. Seals can be broadly classified into fault, intraformational, and top seals. For geological time-scale processes, intraformational and top seals are typically characterised by their membrane seal capacity and fracture threshold pressure. Fault seals are typically characterised by fault geometry, juxtaposition, membrane seal capacity, and reactivation potential. At the production time scale, subtle variations in the permeability distribution within a reservoir can lead to compartmentalization, typically characterised by dynamic reservoir models which assume hydrostatic conditions prior to the commencement of production. There are few references in the seals literature concerning the integration of hydrodynamic techniques with the various aspects of seal evaluation. The research for this PhD thesis by published papers includes: a methodology for characterising formation water flow systems in faulted strata at exploration and production time scales; a new theory of hydrodynamics and membrane (capillary) seal capacity; and case study evaluations demonstrating integrated multidisciplinary techniques for the evaluation of seal capacity (fault, intraformational and top seal) that demonstrate the new theory in practice. By incorporating hydrodynamic processes in the evaluation of total seal capacity, the evidence shows that existing shale gouge ratio - across fault pressure difference (SGR-AFPD) calibration plots need adjustment, with the calibration envelopes shifting to the centre of the plot. This adjustment sharpens the predictive capacity for membrane seal analysis in the pre-drill scenario. 
This PhD thesis presents the background and rationale for the thesis topic, presents each published paper to be included as part of the thesis and its contribution to the body of work addressing the thesis topic, and presents related published papers that are not included in the thesis but which support the body of published work on the thesis topic. The result of the thesis is a new theory and approach to characterising membrane seal capacity for the total seal thickness, and has implications for an adjusted SGR-AFPD calibration to be applied in pre-drill evaluations of seal capacity. A large portion of the resources and data required to conduct the research were made available by CSIRO and its associated project sponsors including the CO2CRC.
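For readers unfamiliar with the shale gouge ratio (SGR) used in those calibration plots, a minimal sketch of the standard calculation follows. The bed values are invented, and this is the generic definition (SGR as the net shale fraction of the interval slipped past a point on the fault), not the adjusted calibration proposed in the thesis:

```python
def shale_gouge_ratio(beds, throw):
    """SGR (%) = sum(Vshale_i * dz_i) / throw * 100, summed over the
    stratigraphic window (of thickness equal to the fault throw) that has
    slipped past the point of interest on the fault plane.

    beds: list of (thickness_m, vshale_fraction) tuples, top down.
    """
    moved = 0.0
    shale = 0.0
    for dz, vsh in beds:
        take = min(dz, throw - moved)   # clip the last bed at the throw window
        if take <= 0.0:
            break
        shale += vsh * take
        moved += take
    return 100.0 * shale / throw

# 20 m of throw slides a shaly bed (Vsh 0.8) and a clean sand (Vsh 0.2)
# past the point of interest
sgr = shale_gouge_ratio([(10.0, 0.8), (10.0, 0.2)], throw=20.0)
# (0.8*10 + 0.2*10) / 20 * 100 = 50 %
```

Calibration plots then cross-plot SGR values like this against the across-fault pressure difference (AFPD) the fault is observed to sustain; the thesis's contribution is to shift those envelopes once hydrodynamic effects are accounted for.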
686

Addressing the contradiction between discourse and practice in health promotion.

Laverack, Glenn January 1999 (has links)
The main theme of this thesis is the contradiction between discourse and practice in health promotion. Many health promoters continue to exert power over the community through top-down programming whilst at the same time using an emancipatory discourse. The thesis addresses this contradiction in three parts. The first part determines how the emancipatory discourse has evolved and explores the role of social movements in the development of contemporary health discourses and their influence on the legitimisation of empowerment. Central to this discourse is the empowerment of communities. To understand the role of this concept, the thesis provides an interpretation of the different meanings of power and community, and of the different levels of analysis of empowerment in the context of health promotion programming. The second part identifies the nature of health programming and the dominance of top-down, and to a much lesser extent, bottom-up approaches. The thesis argues that these two approaches are not, and do not have to be, mutually exclusive. To address this issue, the thesis presents a new methodology situated within a framework developed for the accommodation of empowerment goals within health promotion programmes. The study also identifies the organisational areas of influence on the process of community empowerment, and it is these that are used for the assessment of this concept. Both the framework and the methodology address the contradiction in health promotion by making community empowerment operational within a programme context. The third part of the thesis supports the rationale for the design of the methodology with field work in rural Fijian communities. The findings are presented as a composite case study to highlight the experiences of implementing the methodology and the main themes that emerged during the field work. 
The final chapter of the thesis brings together the central themes of the study and draws from these an 'emergent agenda' as a way forward for health promotion research and practice.
687

Measurement of the top quark pair production cross section in the lepton+tau+jets+MET channel at the D0 experiment, and interpretation in terms of a charged Higgs boson

Lacroix, F. 05 December 2008 (has links) (PDF)
The Standard Model of particle physics describes matter as composed of elementary particles that interact through the strong and electroweak interactions. The top quark is the heaviest quark described by this model; it was discovered in 1995 by the CDF and D0 collaborations in proton-antiproton collisions at the Tevatron. This thesis is devoted to the measurement of the cross section for top quark pair production via the strong interaction, in a final state containing a lepton, a hadronic tau, two b jets, and missing transverse energy. The analysis uses data collected at the beginning of Run IIb, between July 2006 and August 2007, corresponding to a luminosity of 1.2 fb-1, combined with the Run IIa data to reach a total luminosity of 2.2 fb-1. Part of the thesis work described here concerns the trigger system of the D0 detector, which is the first stage of every analysis, and in particular the identification of tau leptons at level 3 of the trigger system and the "jets+MET" triggers based on the presence of jets and missing transverse energy. The question of jet energy resolution is also addressed, from the standpoint of the eta intercalibration of the hadronic calorimeter and of the use of the central preshower detector (CPS) in the definition of jet energy. The measured top quark pair production cross section is:
sigma = 7.32 +1.34/-1.24 (stat) +1.20/-1.06 (syst) ± 0.45 (lumi) pb
This measurement agrees with the Standard Model prediction and constrains the presence of new physics, such as the existence of a charged Higgs boson lighter than the top quark. An exclusion limit was thus obtained in the (tan beta, mH±) plane and is presented in the last part of this manuscript.
688

Time-domain modeling of helix traveling-wave tubes

Aïssi, Anass 05 December 2008 (has links) (PDF)
Traveling-wave tubes (TWTs) are devices in which an electromagnetic wave interacts with an electron beam along a slow-wave structure. They are used today as amplifiers of microwave signals, and they also provide a particularly simple and robust tool for studying wave-particle interaction.
TWT operation is currently simulated mostly with a frequency-domain approach. That approach requires the frequencies involved in the TWT dynamics to be defined a priori, which limits its application to stationary regimes.
Motivated by the need to simulate non-stationary TWT regimes, this thesis proposes a time-domain approach. Two methods for modeling the slow-wave structure are presented:
1. the use of an equivalent circuit;
2. the use of a model-reduction method.
Each method is studied, then implemented and tested on physical examples.
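A taste of what equivalent-circuit, time-domain modeling looks like: the sketch below propagates a voltage pulse along a generic lossless 1-D LC ladder with a leapfrog update. This is not the thesis's slow-wave model, and all values are illustrative; it merely shows the kind of transient behavior a frequency-domain treatment cannot capture directly.

```python
import math

def propagate(cells=200, steps=40, L=1.0, C=1.0, dt=0.5):
    """Leapfrog update of a lossless LC-ladder transmission line.

    V[i] and I[i] are the node voltage and branch current of cell i.
    A right-moving Gaussian pulse is launched by matching I to V
    (characteristic impedance sqrt(L/C) = 1 here).
    """
    V = [math.exp(-((i - 30) / 5.0) ** 2) for i in range(cells)]
    I = V[:]                     # matched launch: the pulse travels right
    for _ in range(steps):
        for i in range(1, cells):
            V[i] -= dt / C * (I[i] - I[i - 1])
        for i in range(cells - 1):
            I[i] -= dt / L * (V[i + 1] - V[i])
    return V

V = propagate()
peak = max(range(len(V)), key=lambda i: abs(V[i]))
# after t = 20 time units (speed 1/sqrt(LC) = 1 cell per unit time),
# the pulse peak has moved from cell 30 toward roughly cell 50
```

Coupling such a line to an electron-beam model at each cell is, schematically, what a time-domain equivalent-circuit TWT simulation does.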
689

Search for stop quark pairs in the b b̄ eμ + missing E_T channel at the DØ experiment

Tissandier, Fabrice 09 October 2007 (has links) (PDF)
The Standard Model provides a satisfactory explanation of subatomic phenomena at low energy (<1 TeV). Beyond that scale, other models must be considered. Among them, Supersymmetry elegantly offers solutions to some of the shortcomings of the Standard Model.
The work presented in this document concerns the search for a supersymmetric signal characterized by the production of two stops decaying into two b jets, an electron, a muon, and missing energy. This study was carried out at the DØ experiment, located on the Tevatron ring at Fermilab (Chicago, USA), with a center-of-mass energy of 1.96 TeV. The analysis covers data collected during phase IIa of the DØ detector, from April 2003 to March 2006 (~1 fb^-1). The study of such a signal requires good control of the various sub-detectors: the calorimeter (for electrons, jets, and missing transverse energy) as well as the muon detectors and the trackers.
The Tevatron is a hadron collider, and few Standard Model processes share the signature of the signal sought, so the background of this analysis is dominated by QCD processes. After applying selection criteria designed mainly to reduce this contribution, no significant excess of data over the Standard Model predictions was observed. The sensitivity of the DØ experiment was improved, and the exclusion domain in the [m_sneutrino, m_stop1] plane was extended to stop masses up to 170 GeV/c2 and sneutrino masses up to 105 GeV/c2.
In addition, part of my preparatory work consisted of developing a discrimination tool for calorimetric objects at trigger level 3, as well as calibrating the two simulated readout chains at level 1.
690

Integration of an endcap of the CMS silicon tracker at the LHC, and study of the discovery potential for resonances decaying into top quark pairs

Chabert, Eric 17 October 2008 (has links) (PDF)
The first part of this thesis concerns the integration of an endcap of the CMS silicon tracker. The procedures put in place and the tests that led to the qualification of the detection system are presented in this document.
The second part is dedicated to the search for new physics in the top quark sector. One of the most promising avenues is to search for a resonance in the invariant-mass distribution of top quark pairs. An analysis performed in full simulation in the lepton+jets channel shows that, at the TeV scale, processes with cross sections from a few hundred fb to 1 pb could be observed during the first years of data taking.
