  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Theoretically Tested Remediation in Response to Insect Resistance to Bt Corn and Bt Cotton: A New Paradigm

Martinez, Jeannette C 09 May 2015 (has links)
Various models of density dependence predicted different evolutionary outcomes for Helicoverpa zea, Diabrotica virgifera, and Ostrinia nubilalis under simple and complex resistance-evolution models, different dose assumptions, and refuge proportions. Increasing the available refuge increased the durability of pyramided Plant-Incorporated Protectants (PIPs), especially between 1% and 5% refuge. For some models of density dependence and some pests, additional refuge resulted in faster adaptation rates. Significant consideration should therefore be given to a pest’s intra-specific competition in simple and complex theoretical models when designing insect resistance management (IRM) plans. Life-history, refuge, and dose characteristics of a PIP had different effects on the adaptation rate of a generic Bt pest, and unexpected outcomes occurred. The intrinsic growth rate R0 was the strongest evolutionary force, and large R0 values reduced the time to resistance for a high-dose PIP to levels similar to those projected for a low-dose PIP. This was caused by differential density-dependent effects in refuge and Bt fields that elevated generational increases in resistance beyond those from selection alone. Interactions between density dependence and R0 were always present and further affected the lifetime of the PIPs. Varying the average dispersal distance did not affect evolutionary outcomes; however, increasing the proportion of the population engaging in dispersal often increased the durability of high-dose PIPs. When resistance genes spread from a hypothetical hotspot, local resistance phenomena developed in the immediate surroundings. Higher growth rates caused resistance to spread through the landscape faster than lower rates. Increasing available refuges slowed adaptation rates to single PIPs and low-dose pyramids, although non-linear trends were possible. Integrated Pest Management (IPM) practices at the onset of PIP commercialization slowed pest adaptation rates. For corn rootworm, interspersing non-selective periods with IPM+IRM delayed resistance evolution, yet crop rotation was the best strategy to delay resistance. For bollworm, inclusion of isoline corn as an IPM tool did not increase the lifetime of the PIP. A local resistance phenomenon for rootworm was maintained immediately surrounding the hotspot; random selection of mitigation strategies in the landscape slowed adaptation rates, while mitigation in the hotspot alone did not. Mitigation extended the lifetime of the pyramid only minimally for both corn rootworm and bollworm.
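The thesis's simulation framework is not reproduced in the abstract. As a rough orientation only, the Python sketch below is a deliberately simplified, deterministic one-locus selection model with a refuge; the fitness values, refuge proportion, and the omission of density dependence, dispersal, and pyramided traits are all assumptions for illustration, not the thesis's parameterization.

```python
# A deliberately simplified, deterministic one-locus selection model with a
# refuge (NOT the thesis's simulation framework). Density dependence,
# dispersal, and pyramided traits are omitted; parameter values are assumed.
def next_allele_freq(q, refuge, w_bt=(1.0, 0.05, 0.0), w_ref=(1.0, 1.0, 1.0)):
    """q: resistance allele frequency; refuge: proportion of untreated habitat.
    w_bt / w_ref: fitnesses of (RR, RS, SS) genotypes on Bt and refuge plants."""
    p = 1.0 - q
    genotypes = (q * q, 2 * p * q, p * p)          # Hardy-Weinberg proportions
    resistant_alleles, survivors = 0.0, 0.0
    for area, w in ((1.0 - refuge, w_bt), (refuge, w_ref)):
        counts = [area * g * wi for g, wi in zip(genotypes, w)]   # RR, RS, SS
        resistant_alleles += counts[0] + 0.5 * counts[1]
        survivors += sum(counts)
    return resistant_alleles / survivors           # panmictic mating afterwards

q, generations = 0.001, 0
while q < 0.5 and generations < 500:               # "time to resistance" = q >= 0.5
    q = next_allele_freq(q, refuge=0.05)
    generations += 1
print(f"resistance allele frequency reached {q:.2f} after {generations} generations")
```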
2

Human protein-protein interaction prediction

McDowall, Mark January 2011 (has links)
Protein-protein interactions are essential for the survival of all living cells, allowing processes such as cell signalling, metabolism and cell division to occur. Yet in humans only >38k interactions are annotated, out of an interactome estimated to contain between 150k and 600k interactions and a potential ~300M protein pairs. Experimental methods to define the human interactome generate high-quality results, but are expensive and slow; computational methods play an important role in filling the gap. To further this goal, the prediction of human protein-protein interactions was investigated through the development of new predictive modules and the analysis of diverse datasets within the framework of the previously established PIPs protein-protein interaction predictor (Scott and Barton, 2007). New features considered include the semantic similarity of annotating Gene Ontology terms, clustering of interaction networks, primary sequences and gene co-expression. Integrating the new features in a naive Bayesian manner as part of the PIPs 2 predictor resulted in two sets of predictions. With a conservative threshold, the union of both sets comprises >300k predicted human interactions with an intersection of >94k interactions, of which a subset have been experimentally validated. The PIPs 2 predictor is also capable of making predictions in organisms that have no annotated interactions. This is achieved by training the PIPs 2 predictor on a set of evidence and annotated interactions in another organism, resulting in a ranking of protein pairs in the original organism of interest. Such an approach allows predictions to be made across the whole proteome of a poorly characterised organism, rather than being limited only to proteins with known orthologues. The work described here has increased the coverage of the human interactome and introduced a method to predict interactions in organisms that have previously had limited or no annotated interactions. The thesis aims to provide a stepping stone towards the completion of the human interactome and a way of predicting interactions in organisms that have been less well studied, but are often clinically relevant.
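The PIPs 2 code itself is not shown in the abstract; the sketch below is only a generic illustration of the naive Bayesian evidence combination it describes. The feature names, likelihood ratios and prior odds are hypothetical placeholders, not values from the predictor.

```python
# Generic naive-Bayes evidence combination for one candidate protein pair
# (NOT the PIPs 2 code). Feature names, likelihood ratios and the prior
# odds are hypothetical placeholders.
import math

def naive_bayes_odds(likelihood_ratios, prior_odds):
    """Posterior odds = prior odds * product of per-feature likelihood ratios
    LR_i = P(evidence_i | interacting) / P(evidence_i | non-interacting)."""
    log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)
    return math.exp(log_odds)

# Hypothetical evidence: GO semantic similarity, co-expression, network
# clustering, and a sequence-based feature (values are made up).
lrs = {"go_similarity": 4.2, "co_expression": 1.8,
       "network_cluster": 0.9, "sequence": 2.5}
prior = 1.0 / 600          # assumed: roughly 1 true interaction per 600 pairs
odds = naive_bayes_odds(lrs.values(), prior)
print(f"posterior probability of interaction: {odds / (1 + odds):.4f}")
```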
3

Polymerization Induced Phase Separation (PIPS) in Epoxy / Poly(ε-Caprolactone) Systems

Luo, Xiaofan January 2008 (has links)
No description available.
4

Parallélisation automatique et statique de tâches sous contraintes de ressources : une approche générique / Automatic Resource-Constrained Static Task Parallelization : A Generic Approach

Khaldi, Dounia 27 November 2013 (has links)
This thesis intends to show how to efficiently exploit the parallelism present in applications in order to enjoy the performance benefits that multiprocessors can provide, using a new automatic task-parallelization methodology for compilers. The key characteristics we focus on are resource constraints and static scheduling. This methodology includes the techniques required to decompose applications into tasks and generate equivalent parallel code, using a generic approach that targets different parallel languages and architectures. We apply this methodology in the existing tool PIPS, a comprehensive source-to-source compilation platform. The thesis mainly addresses three issues. First, since extracting task parallelism from sequential code is a scheduling problem, we design and implement an efficient, automatic scheduling algorithm called BDSC for parallelism detection; the result is a scheduled SDG, a new task-graph data structure. Second, we design a new generic parallel intermediate-representation extension called SPIRE, in which parallelized code may be expressed. Finally, we wrap up our goal of automatic parallelization in a new BDSC- and SPIRE-based parallel code generator, which is integrated within the PIPS compiler framework; it targets both shared- and distributed-memory systems using automatically generated OpenMP and MPI code.
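BDSC itself is not reproduced here. As a rough, hypothetical illustration of the kind of problem it solves, the Python sketch below implements a plain resource-constrained list scheduler over a task DAG, assuming known per-task execution times, a fixed processor count, and no communication costs; the task graph and costs at the bottom are made up.

```python
# A plain resource-constrained list scheduler over a task DAG (NOT BDSC).
# Execution times, processor count and the example graph are assumptions;
# communication costs and memory constraints are ignored.
def list_schedule(tasks, deps, cost, n_procs):
    """tasks: task ids; deps: task -> set of predecessor tasks;
    cost: task -> execution time; returns task -> (processor, start time)."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succs = {t: [] for t in tasks}
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    proc_free = [0.0] * n_procs                    # next free time per processor
    finish, schedule = {}, {}
    while ready:
        # simple priority: schedule the task whose predecessors finished earliest
        ready.sort(key=lambda t: max((finish[p] for p in deps.get(t, ())), default=0.0))
        t = ready.pop(0)
        earliest = max((finish[p] for p in deps.get(t, ())), default=0.0)
        proc = min(range(n_procs), key=lambda i: max(proc_free[i], earliest))
        start = max(proc_free[proc], earliest)
        proc_free[proc] = start + cost[t]
        finish[t] = proc_free[proc]
        schedule[t] = (proc, start)
        for s in succs[t]:                         # release successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return schedule

# Hypothetical task graph: A -> {B, C}, {B, C} -> D, on 2 processors.
tasks = ["A", "B", "C", "D"]
deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
cost = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 1.0}
print(list_schedule(tasks, deps, cost, n_procs=2))
```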
6

Verificação de Programas Embarcados ANSI-C baseada em Indução Matemática e Invariantes / Verification of Embedded ANSI-C Programs Based on Mathematical Induction and Invariants

Melo, Raimundo Williame Rocha de 10 August 2017 (has links)
The use of embedded systems, i.e., computer systems dedicated to performing specific functions within larger electronic or mechanical systems, has been growing due to the ever more intensive use of sensors, network interfaces and communication protocols, and ensuring the robustness of such systems has become increasingly important as they grow more complex and integrated. There are several techniques to ensure that a system is released without errors; in particular, formal verification of programs has proven effective in the search for failures. In this work, an induction-based proof algorithm is described, which combines k-induction and invariants to verify and refute safety properties in embedded ANSI-C software. The proposed k-induction-based approach infers program invariants to assist the verification tasks, using constraint refinement (i.e., polyhedra) to specify pre- and post-conditions. Two invariant generators, PIPS and PAGAI, analyse the code and produce the inductive invariants that feed and guide the k-induction algorithm, which is implemented in the Efficient SMT-Based Context-Bounded Model Checker (ESBMC) tool. The motivation for combining invariants with k-induction is to close a gap in the formal verification of programs with global variables and of programs whose loops contain conditional branches and an unknown number of iterations. To assess the effectiveness of the approach, public benchmarks from the International Competition on Software Verification were used in addition to embedded-systems applications, and the results were compared against state-of-the-art verification tools. Experimental results show that the proposed approach, with and without invariants, can verify a wide variety of safety properties in programs with loops and in embedded software from the telecommunications, control-systems, and medical domains.
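Neither ESBMC nor the PIPS/PAGAI pipeline is reproduced here. The toy sketch below only illustrates the underlying idea the abstract describes, that an externally supplied invariant can make a safety property inductive, using the z3-solver Python bindings; the example program, property, and invariant are hypothetical.

```python
# Toy illustration of k-induction strengthened by an auxiliary invariant,
# using the z3-solver Python bindings (NOT ESBMC, PIPS, or PAGAI).
# Hypothetical program:  x := 0; while (*) x := x + 2;   property: x != 5.
from z3 import Int, Solver, And, Not, unsat

x, x_next = Int("x"), Int("x_next")
init = x == 0                        # initial state
trans = x_next == x + 2              # transition relation
prop = lambda v: v != 5              # safety property
inv = lambda v: v % 2 == 0           # auxiliary invariant (as an invariant
                                     # generator such as PIPS/PAGAI might supply)

def valid(*premises_and_negated_goal):
    """True iff the conjunction is unsatisfiable, i.e. no counterexample exists."""
    s = Solver()
    s.add(*premises_and_negated_goal)
    return s.check() == unsat

# Base case: property and invariant hold in the initial state.
print("base case:", valid(init, Not(And(prop(x), inv(x)))))               # True
# Plain 1-induction fails: x = 3 satisfies the premise but x + 2 = 5.
print("step, no invariant:", valid(prop(x), trans, Not(prop(x_next))))    # False
# Strengthened with the invariant, the inductive step goes through.
print("step, with invariant:",
      valid(prop(x), inv(x), trans, Not(And(prop(x_next), inv(x_next)))))  # True
```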
7

METHOD DEVELOPMENT AND INVESTIGATION OF FLUORESCENT PHOSPHOINOSITIDE CELL SIGNALING PROPERTIES BY CAPILLARY ELECTROPHORESIS

Quainoo, Emmanuel Wobil 21 April 2010 (has links)
No description available.
8

Simulations And Experiments Of Plasma-Induced Effects In Silicon Detectors

Gomez L, Ana Maria January 2023 (has links)
When an atomic nucleus undergoes fission, two fragments with different masses and kinetic energies are emitted. The highly unstable fission fragments (FFs) evaporate prompt neutrons soon after the nucleus splits. A precise measurement of both the mass yield distribution of the FFs and the average prompt neutron emission, $\bar{\nu}$, is important not only for current nuclear technologies but also for the development of future technologies such as Generation IV nuclear power plants. Moreover, the experimental determination of the mass yield distributions, both pre- and post-neutron emission, is valuable for testing fission models. Additionally, a precise measurement of the average neutron multiplicity as a function of FF mass, $\bar{\nu}(A)$, is crucial for understanding how the excitation energy is shared between the nascent FFs. The VElocity foR DIrect particle identification spectrometer (VERDI) is designed to measure pre- and post-neutron-emission mass distributions with resolutions between 1 u and 2 u. VERDI is a double-energy double-velocity instrument that consists of two arms. Each arm carries one Microchannel Plate (MCP) detector for recording the start time of the FFs and up to 32 Passive Implanted Planar Silicon (PIPS) detectors for their stop time and energy. However, challenges in experimental measurements with VERDI arise from the high degree of ionization (plasma) created in the detector material by the interaction with the FFs. The plasma delays the migration of the charge carriers that start the signal, known as the plasma delay time (PDT) effect. Furthermore, recombination of charge carriers in the plasma reduces the signal height, known as the pulse height defect (PHD). These phenomena lead to inaccuracies in the measured FF mass distributions and increased systematic uncertainties. Previous studies of PDT and PHD have shown varying behaviors across different detector types, which motivated dedicated studies of the type of PIPS detectors used in VERDI. An experimental campaign to characterize the PDT and PHD of PIPS detectors was conducted at the LOHENGRIN recoil separator, part of the ILL nuclear facility in Grenoble, France. Measurements of FFs with masses between 80 u and 149 u and energies between 20 MeV and 110 MeV were taken to fully characterize six PIPS detectors. The resulting PDT and PHD values were 1 ns to 4 ns and 2 MeV to 10 MeV, respectively. The PDT and PHD exhibited consistent energy and mass dependencies across the detectors, which enables an event-by-event correction of VERDI data. This thesis presents the basis for discussing the results of the PDT and PHD studies.
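VERDI's actual analysis chain, including the PDT/PHD corrections, is not reproduced here. The sketch below only illustrates the double-energy double-velocity (2E-2v) kinematics the abstract refers to, assuming non-relativistic fragments, a fissioning nucleus at rest, and no neutron-emission or detector corrections; the event values and the A_CN = 252 compound mass are made up for the example.

```python
# Minimal 2E-2v kinematics (NOT VERDI's analysis chain): non-relativistic
# fragments, compound nucleus at rest, no PDT/PHD or neutron corrections.
# The event values and the A_CN = 252 compound mass are made up.
C_CM_PER_NS = 29.9792458          # speed of light in cm/ns
AMU_MEV = 931.494                 # atomic mass unit in MeV/c^2

def post_neutron_mass(energy_mev, velocity_cm_ns):
    """Post-neutron-emission mass from E = m v^2 / 2, returned in u."""
    beta = velocity_cm_ns / C_CM_PER_NS
    return 2.0 * energy_mev / (beta ** 2 * AMU_MEV)

def pre_neutron_masses(v1, v2, a_compound):
    """Pre-neutron-emission mass split from momentum conservation:
    A1* v1 = A2* v2  and  A1* + A2* = A_CN."""
    a1 = a_compound * v2 / (v1 + v2)
    return a1, a_compound - a1

# One hypothetical event (light fragment first): velocities in cm/ns, energies in MeV.
v_light, v_heavy = 1.40, 1.00
e_light, e_heavy = 103.0, 75.0
print("pre-neutron masses (u): ", pre_neutron_masses(v_light, v_heavy, 252))
print("post-neutron masses (u):", post_neutron_mass(e_light, v_light),
      post_neutron_mass(e_heavy, v_heavy))
```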
