1

Recherche de biomarqueurs et études lipidomiques à travers diverses applications en santé / Biomarker research and lipidomics studies through various health applications

Lanzini, Justine 21 November 2016
La notion de biomarqueurs est définie comme « une caractéristique mesurée objectivement et évaluée comme indicateur de processus biologiques normaux ou pathologiques, ou de réponses pharmacologiques à une intervention thérapeutique ». L'intérêt scientifique pour les biomarqueurs est de plus en plus important. Ils permettent, entre autres,une meilleure compréhension des processus pathologiques et de diagnostiquer, voire pronostiquer ces pathologies. Les études « omiques » telles que la lipidomique jouent un rôle essentiel dans la découverte de nouveaux biomarqueurs. La lipidomique consiste à explorer le lipidome d'un échantillon biologique et à déceler l'impact de la pathologie sur ce dernier. Les lipides constituent une vaste et importante famille de métabolites retrouvés dans toutes les cellules vivantes, dont leur nombre est estimé à plus de 100 000 espèces chez les mammifères. Ils sont impliqués, notamment, dans le stockage d'énergie et la transduction de signal. Mon travail de thèse a reposé sur la réalisation d'approches lipidomiques en LC-MS sur diverses applications en santé telles que le syndrome de déficit immunitaire combiné sévère associé à une alopécie et une dystrophie des ongles, le syndrome du nystagmus infantile et le rejet de greffe rénale. A cette fin, des analyses statistiques multivariées et univariées ont été employées pour déceler des potentiels lipides biomarqueurs. / Biomarker was defined as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to therapeutic intervention". The scientific interest in biomarkers is more and more important. They allow, in particular, to better understand pathogenic processes and to diagnose, even to predict pathologies. "Omics" studies, such as lipidomics, play an essential role in the new biomarkers discovery. Lipidomics consist in exploring biological samples lipidome and in detecting pathogenic impact on this latter. Lipids are a large and important metabolite family found in all living cells. Their quantity is estimated to more than 100,000 species in mammals. They are involved, in particular, in the energy storage and the signal transduction. My PhD thesis involved carrying out lipidomics approaches with LC-MS through various health applications such as severe combined immunodeficiency associated with alopecia syndrome, infantile nystagmus syndrome and renal graft rejection. For this purpose, multivariate and univariate statistical analyses were carried out in order to detect potential lipid biomarkers.
2

Addressing intrinsic challenges for next generation sequencing of immunoglobulin repertoires.

Chrysostomou, Constantine 26 August 2015
Antibodies are essential molecules that help to provide immunity against a vast population of environmental pathogens. This antibody-conferred protection depends upon genetic diversification mechanisms that produce an impressive repertoire of lymphocytes expressing unique B-cell receptors. The advent of high-throughput sequencing has enabled researchers to sequence populations of B-cell receptors at an unprecedented depth. Such investigations can expand our understanding of the mechanistic processes governing adaptive immunity, support the characterization of immunity-related disorders, and enable the discovery of antibodies specific to antigens of interest. However, next-generation sequencing of immunological repertoires is not without its challenges. For example, it is especially difficult to identify biologically relevant features within large datasets. Additionally, within the immunology community there is a severe lack of standardized and easily accessible bioinformatics analysis pipelines. In this work, we present methods which address many of these concerns. First, we present robust statistical methods for the comparison of immunoglobulin repertoires. Specifically, we quantified the overlap between the antibody heavy-chain variable domain (VH) repertoires of antibody-secreting plasma cells isolated from the bone marrow, lymph node, and spleen lymphoid tissues of immunized mice. Statistical analysis showed significantly more overlap between the bone marrow and spleen VH repertoires than with the lymph node repertoires. Moreover, we identified and synthesized antigen-specific antibodies from the repertoire of a mouse that showed a convergence of highly frequent VH sequences in all three tissues. Second, we introduce a novel algorithm for the rapid and accurate alignment of VH sequences to their respective germline genes. Our tests show that gene assignments reported by this algorithm were more than 99% identical to assignments determined using the well-validated IMGT software, and yet the algorithm is five times faster than an IgBLAST-based analysis. Finally, in an effort to introduce methods for the standardization, transparency, and replication of future repertoire studies, we have built a cloud-based pipeline of bioinformatics tools specific to immunoglobulin repertoire studies. These tools provide solutions for data curation and long-term storage of immunological sequencing data in a database, annotation of sequences with biologically relevant features, and analysis of repertoire experiments.
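The thesis's own statistical comparison of repertoires is not reproduced here; as a rough illustration of quantifying overlap between tissue repertoires, the sketch below computes a simple Jaccard index on hypothetical VH sequence sets.

```python
from itertools import combinations

# Hypothetical VH/CDR-H3 sequence sets per lymphoid tissue (placeholders, not real data)
repertoires = {
    "bone_marrow": {"CARDYW", "CARGGYW", "CTRDLW", "CARSSYW"},
    "spleen":      {"CARDYW", "CARGGYW", "CARSSYW", "CVRDPW"},
    "lymph_node":  {"CARDYW", "CTKDFW"},
}

def jaccard(a: set, b: set) -> float:
    """Fraction of unique sequences shared between two repertoires."""
    return len(a & b) / len(a | b)

for (t1, r1), (t2, r2) in combinations(repertoires.items(), 2):
    print(f"{t1} vs {t2}: overlap = {jaccard(r1, r2):.2f}")
```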
3

Dealing with sparsity in genotype x environment analyses : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Statistics at Massey University, Palmerston North, New Zealand

Godfrey, A. Jonathan R. January 2004
Researchers are frequently faced with the problem of analyzing incomplete and often unbalanced genotype-by-environment (GxE) matrices which arise as a trials programme progresses over seasons. The principal data for this investigation, arising from a ten year programme of onion trials, has less than 2,300 of the 49,200 combinations from the 400 genotypes and 123 environments. This 'sparsity' renders standard GxE methodology inapplicable. Analysis of this data to identify onion varieties that suit the shorter, hotter days of tropical and subtropical locations therefore presented a unique challenge. Removal of some data to form a complete GxE matrix wastes information and is consequently undesirable. An incomplete GxE matrix can be analyzed using the additive main effects and multiplicative interaction (AMMI) model in conjunction with the EM algorithm but proved unsatisfactory in this instance. Cluster analysis has been commonly used in GxE analyses, but current methods are inadequate when the data matrix is incomplete. If clustering is to be applied to incomplete data sets, one of two routes needs to be taken: either the clustering procedure must be modified to handle the missing data, or the missing entries must be imputed so that standard cluster analysis can be performed. A new clustering method capable of handling incomplete data has been developed. 'Two-stage clustering', as it has been named, relies on a partitioning of squared Euclidean distance into two independent components, the GxE interaction and the genotype main effect. These components are used in the first and second stages of clustering respectively. Two-stage clustering forms the basis for imputing missing values in a GxE matrix, so that a more complete data array is available for other GxE analyses. 'Two-stage imputation' estimates unobserved GxE yields using inter-genotype similarities to adjust observed yield data in the environment in which the yield is missing. This new imputation method is transferrable to any two-way data situation where all observations are measured on the same scale and the two factors are expected to have significant interaction. This simple, but effective, imputation method is shown to improve on an existing method that confounds the GxE interaction and the genotype main effect. Future development of two-stage imputation will use a parameterization of two-stage clustering in a multiple imputation process. Varieties recommended for use in a certain environment would normally be chosen using results from similar environments. Differing cluster analysis approaches were applied, but led to inconsistent environment clusterings. A graphical summary tool, created to ease the difficulty in identifying the differences between pairs of clusterings, proved especially useful when the number of clusters and clustered observations were high. 'Cluster influence diagrams' were also used to investigate the effects the new imputation method had on the qualitative structure of the data. A consequence of the principal data's sparsity was that imputed values were found to be dependent on the existence of observable inter-genotype relationships, rather than the strength of these observable relationships. As a result of this investigation, practical recommendations are provided for limiting the detrimental effects of sparsity. Applying these recommendations will enhance the future ability of two-stage imputation to identify those onion varieties that suit tropical and subtropical locations.
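The abstract does not spell out the two-stage imputation algorithm; the sketch below only illustrates the general idea of estimating a missing genotype-by-environment yield from genotypes observed in that environment, adjusted by inter-genotype differences over shared environments. The yield matrix and the adjustment rule are hypothetical simplifications, not the thesis's method.

```python
import numpy as np

# Hypothetical GxE yield matrix: rows = genotypes, columns = environments, NaN = untested
Y = np.array([
    [5.1, 4.8, np.nan, 6.0],
    [5.3, np.nan, 5.5, 6.2],
    [4.0, 3.9, 4.2, np.nan],
])

def impute_cell(Y, g, e):
    """Estimate Y[g, e] from genotypes observed in environment e,
    shifting their yields by the mean difference to genotype g
    over the environments in which both were observed."""
    estimates = []
    for h in range(Y.shape[0]):
        if h == g or np.isnan(Y[h, e]):
            continue
        shared = ~np.isnan(Y[g]) & ~np.isnan(Y[h])
        if shared.any():
            estimates.append(Y[h, e] + np.mean(Y[g, shared] - Y[h, shared]))
    return np.nanmean(estimates) if estimates else np.nan

print("Imputed yield for genotype 0 in environment 2:", impute_cell(Y, 0, 2))
```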
4

[en] OPTIMIZATION OF FRAMED STRUCTURES CONSIDERING UNCERTAINTIES / [pt] OTIMIZAÇÃO DE ESTRUTURAS RETICULADAS CONSIDERANDO INCERTEZAS

ANDRE LUIS MULLER 23 May 2003
Mechanical parameters used in the analysis and design of structures, such as the elasticity modulus and the yield stress of the material, as well as the loads, are in fact random rather than deterministic variables, as they are usually assumed to be. Consequently, structural responses such as displacements and stresses are also random variables, so there is uncertainty in their determination. In this work, the uncertainties in the mechanical parameters of the material and, as a consequence, the uncertainties in the structural response are taken into account in the optimum design of plane framed structures such as trusses and frames. The structural response under uncertainty is determined with the linear statistical method, the sensitivity analyses are performed by the direct analytical method, and the interior-point algorithm is used for the optimization.
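As a hedged illustration of first-order ("linear") uncertainty propagation of the kind referred to above, the following sketch propagates the variance of the elastic modulus and of the load to the axial displacement of a single bar. The model and numbers are illustrative only and are not taken from the thesis.

```python
import numpy as np

# Hypothetical axial bar: displacement u = F * L / (E * A)
F, L, A = 10e3, 2.0, 1e-3          # load [N], length [m], cross-section [m^2]
E_mean, E_std = 210e9, 10e9        # elastic modulus mean and std [Pa]
F_std = 1e3                        # load std [N]

u_mean = F * L / (E_mean * A)

# First-order (linear) propagation: var(u) ~ (du/dE)^2 var(E) + (du/dF)^2 var(F)
du_dE = -F * L / (E_mean**2 * A)
du_dF = L / (E_mean * A)
u_std = np.sqrt((du_dE * E_std)**2 + (du_dF * F_std)**2)

print(f"u = {u_mean*1e3:.3f} mm +/- {u_std*1e3:.3f} mm (1 sigma)")
```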
5

Dimensional Sandstones: Weathering Phenomena, Technical Properties and Numerical Modeling of Water Migration

Stück, Heidrun Louise 08 March 2013
No description available.
6

Statistics and modelling of the influence of the volume, fall height and topography on volcanic debris avalanche deposits

Pouget, Solene January 2010
This research project on volcanic debris avalanches aims to provide a better understanding of the influence of volume, fall height and topography on deposit location and morphology. This will enable improvements in the delineation of areas at risk from volcanic debris avalanches, and in the management of a disaster should one occur. Undertaken to fulfil the requirements for a double degree (Geological Engineering and MSc in Hazard and Disaster Management), this work is the result of a collaboration between Polytechnic Institute LaSalle-Beauvais in France and the University of Canterbury in New Zealand. Following a brief introduction to the topic, statistical analyses of volcanic debris avalanche deposits are undertaken. Multivariate analyses (Principal Components Analyses and regressions) were carried out using a database of 298 volcanic debris avalanches derived from a modification of Dufresne's recent database. It was found that volume, rather than fall height, has the main influence on the deposits; the latter seems to have a greater effect on avalanches of small volume. The topography onto which the deposit is emplaced mainly determines its geometrical characteristics. These statistical results were compared with the results of laboratory-scale analogue modelling. A model similar to that used by Shea in 2005 provided data indicating similar trends in the influence of volume, fall height and topography on mass-movement deposits at all scales. The final aspect of this project was a numerical simulation of a large debris avalanche from the north flank of the Taranaki volcano in the direction of the city of New Plymouth. The numerical code VolcFlow, developed by Kelfoun in 2005, was used after being tested against the laboratory experiments to verify its accuracy. The simulations showed that the Pouaki range protects the city of New Plymouth from major impacts from Taranaki collapses, but also indicated some potential problems with the hazard zoning and evacuation zones presently in place.
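A minimal sketch of the kind of regression analysis mentioned above, fitted in log space to hypothetical (not real) avalanche records; the variables, values and model form are placeholders chosen only to make the idea concrete.

```python
import numpy as np

# Hypothetical debris-avalanche records: volume [m^3], fall height [m], runout [m]
volume      = np.array([1e7, 5e7, 1e8, 5e8, 1e9, 5e9])
fall_height = np.array([800, 1200, 1500, 2000, 2500, 3000])
runout      = np.array([4e3, 7e3, 9e3, 1.6e4, 2.2e4, 3.5e4])

# Log-linear regression: log(runout) ~ log(volume) + log(fall_height)
X = np.column_stack([np.ones_like(volume), np.log10(volume), np.log10(fall_height)])
coef, *_ = np.linalg.lstsq(X, np.log10(runout), rcond=None)
print("intercept, volume exponent, fall-height exponent:", coef)
```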
7

Analyse des images de tomographie par émission de positons pour la prédiction de récidive du cancer du col de l'utérus / Analysis of positron emission tomography images for recurrence prediction of cervical cancer

Roman Jimenez, Geoffrey 25 March 2016
This thesis deals with the prediction of recurrence in the context of cervical cancer radiotherapy. The objective was to analyze 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) images in order to extract quantitative parameters statistically correlated with recurrence events. Six studies were performed to address the issues raised by 18F-FDG PET image analysis, such as bladder-uptake artifacts, the impact of tumor segmentation, and the evaluation of the signal during treatment. The statistical analyses covered parameters reflecting the intensity, shape and texture of the tumor metabolism before and during treatment. The results show that the pre-treatment metabolic tumor volume and the per-treatment total lesion glycolysis are the most promising parameters for predicting cervical cancer recurrence. In addition, combining these parameters with shape descriptors and texture features, using machine-learning methods or more classical regression models, further improved the prediction of recurrence events.
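The metabolic tumor volume (MTV) and total lesion glycolysis (TLG) mentioned above can be computed from a segmented PET volume; the sketch below uses a synthetic SUV array and a common 40%-of-SUVmax threshold purely for illustration — the thesis's segmentation approach may differ.

```python
import numpy as np

# Hypothetical PET sub-volume: SUV values on a 3D grid with isotropic 4 mm voxels
rng = np.random.default_rng(1)
suv = rng.gamma(shape=2.0, scale=1.0, size=(20, 20, 20))
voxel_volume_ml = 0.4 ** 3  # 4 mm voxels -> 0.064 mL

# Common segmentation heuristic: keep voxels above 40% of SUVmax
threshold = 0.4 * suv.max()
tumor_mask = suv >= threshold

mtv = tumor_mask.sum() * voxel_volume_ml          # metabolic tumor volume [mL]
tlg = suv[tumor_mask].sum() * voxel_volume_ml     # total lesion glycolysis [mL * SUV]
print(f"MTV = {mtv:.1f} mL, TLG = {tlg:.1f}")
```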
8

Automatic non-functional testing and tuning of configurable generators / Une approche pour le test non-fonctionnel et la configuration automatique des générateurs

Boussaa, Mohamed 06 September 2017
Generative software development has paved the way for the creation of multiple generators (code generators and compilers) that serve as a basis for automatically producing code for a broad range of software and hardware platforms. With fully automatic code generation, users are able to rapidly synthesize software artifacts for various software platforms. In addition, they can easily customize the generated code for the target hardware platform, since modern generators (e.g., C compilers) have become highly configurable, offering numerous configuration options that the user can apply. Consequently, the quality of generated software becomes highly correlated with the configuration settings as well as with the generator itself. In this context, it is crucial to verify the correct behavior of generators. Numerous approaches have been proposed to verify the functional outcome of generated code, but few of them evaluate the non-functional properties of automatically generated code, namely performance and resource usage. This thesis addresses three problems: (1) Non-functional testing of generators: we benefit from the existence of multiple code generators with comparable functionality (i.e., code generator families) to automatically test the generated code. We leverage metamorphic testing to detect non-functional inconsistencies in code generator families by defining metamorphic relations as test oracles. We define the metamorphic relation as a comparison between the variations of performance and resource usage of code generated from the same code generator family. We evaluate our approach by analyzing the performance of Haxe, a popular code generator family. Experimental results show that our approach is able to automatically detect several inconsistencies that reveal real issues in this family of code generators. (2) Generator auto-tuning: we exploit recent advances in search-based software engineering to provide an effective approach to tune generators (i.e., through optimizations) according to the user's non-functional requirements (i.e., performance and resource usage). We also demonstrate that our approach can be used to automatically construct optimization levels that represent optimal trade-offs between multiple non-functional properties, such as execution time and resource usage. We evaluate our approach by verifying the optimizations performed by the GCC compiler. Our experimental results show that our approach is able to auto-tune compilers and construct optimizations that yield better performance than the standard optimization levels. (3) Handling the diversity of software and hardware platforms in software testing: running tests and evaluating resource usage in heterogeneous environments is tedious. To handle this problem, we benefit from recent advances in lightweight system virtualization, in particular container-based virtualization, to offer effective support for automatically deploying, executing and monitoring code in heterogeneous environments, and for collecting non-functional metrics (e.g., memory and CPU consumption).
This testing infrastructure serves as a basis for evaluating the experiments conducted in the first two contributions.
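The auto-tuning contribution explores the compiler's optimization space with Novelty Search; the sketch below substitutes a naive random search over a few well-known GCC flags and times a hypothetical benchmark (bench.c), just to make the search-and-measure loop concrete. It is not the thesis's implementation.

```python
import random
import subprocess
import time

# Hypothetical benchmark source file; any self-contained C program would do
SOURCE = "bench.c"
CANDIDATE_FLAGS = ["-funroll-loops", "-ftree-vectorize",
                   "-fomit-frame-pointer", "-finline-functions"]

def evaluate(flags):
    """Compile with GCC using the given flags and time one run of the binary."""
    subprocess.run(["gcc", "-O2", *flags, SOURCE, "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

best_flags, best_time = [], float("inf")
for _ in range(10):  # naive random search; the thesis uses Novelty Search instead
    flags = [f for f in CANDIDATE_FLAGS if random.random() < 0.5]
    runtime = evaluate(flags)
    if runtime < best_time:
        best_flags, best_time = flags, runtime
print("best flags:", best_flags, "runtime:", best_time)
```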
9

Analysis of Worldwide Pesticide Regulatory Models and Standards for Controlling Human Health Risk

Li, Zijian 13 September 2016
No description available.
10

Untersuchungen zur Eignung des Laktosegehalts der Milch für das Leistungs- und Gesundheitsmonitoring bei laktierenden Milchkühen / Investigations into the suitability of the lactose content of milk for performance and health monitoring in lactating dairy cows

Lindenbeck, Mario 22 February 2016
The aim of the present studies was to investigate whether the milk constituent lactose can be used as a practical management aid. The primary data come from three Israeli high-performance herds and were collected over several lactations. During data preparation, the parameter lactose content was examined to determine whether it is sufficient on its own for health and performance prediction, or which additional features might be important in a forecasting model. Oestrus, diarrhoea, endometritis, fever, infections, hoof diseases, mastitis, stress, metabolic disorders and injuries were assigned as performance- and health-related events. The usefulness of individual features for prediction was evaluated on the basis of recognition rates, and two- and three-level decision trees were developed to identify these events. Since a single feature is often insufficient, different combinations of variables were analysed. The most important finding of this work is that a drop in lactose concentration and lactose yield always represents a critical event. The main objective of health monitoring in the dairy herd should therefore be to make metabolic overload "visible" or "recognisable" at an early stage. Whichever disease begins to take shape, herd management must work on improving the glucose supply of the individual animal. From the analysis of the individual herds and lactations it can be concluded that the milk-recording data collected in the course of data-based herd monitoring can be used to assess and predict the performance and health status of cows over the course of lactation. Using information on the lactose content of the milk improved the recognition rates in every case.
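The abstract describes shallow decision trees built on milk-recording features; the sketch below fits a two-level tree to hypothetical lactose-change and yield-change values. The features, thresholds and labels are invented for illustration only and do not reflect the thesis's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical daily records: [change in lactose %, change in milk yield kg]
X = np.array([[-0.30, -4.0], [-0.25, -3.5], [-0.02, 0.5],
              [ 0.01,  0.2], [-0.28, -5.0], [ 0.00, -0.3]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = health/performance event, 0 = no event

# Shallow (two-level) tree, mirroring the kind of decision rules described
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["d_lactose", "d_yield"]))
print("Prediction for a lactose drop of 0.2 and a yield drop of 3 kg:",
      tree.predict([[-0.20, -3.0]])[0])
```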
