141

Biodiversity and Species Extinctions in Model Food Webs

Borrvall, Charlotte January 2006 (has links)
Many of the earth’s ecosystems are experiencing large species losses due to human impacts such as habitat destruction and fragmentation, climate change, species invasions, pollution, and overfishing. Due to the complex interactions between species in food webs, the extinction of one species could lead to a cascade of further extinctions and hence cause dramatic changes in species composition and ecosystem processes. The complexity of ecological systems makes it difficult to study them empirically: the systems often contain large numbers of species with many interactions between them. Investigating ecological communities through a theoretical approach, using mathematical models and computer simulations, is an alternative or a complement to experimental studies. This thesis is a collection of theoretical studies. We use model food webs in order to explore how biodiversity (species number) affects the response of communities to species loss (Papers I-III) and to environmental variability (Paper IV). In Papers I and II we investigate the risk of secondary extinctions following the deletion of one species. We show that resistance against additional species extinctions increases with redundancy (number of species per functional group) in the absence of competition between basal species (Paper I) but decreases with redundancy in the presence of such competition (Paper II). It is further shown that food webs with low redundancy risk losing a greater proportion of species following a species deletion in a deterministic environment, but when demographic stochasticity is included the benefits of redundancy are largely lost (Paper II). This finding implies that in the design of nature reserves the advantages of redundancy for the conservation of communities may be lost if the reserves are small. Additionally, food webs show higher risks of further extinctions after the loss of basal species and herbivores than after the loss of top predators (Papers I and II). Secondary extinctions caused by a primary extinction and mediated through direct and indirect effects are likely to occur with a time delay, since indirect effects can take a long time to manifest. In Paper III we show that the loss of a top predator leads to a significantly earlier onset of secondary extinctions in model communities than does the loss of a species from other trophic levels. If local secondary extinctions occur early, they are less likely to be balanced by immigration from nearby local communities, implying that secondary extinctions caused by the loss of top predators are less likely to be offset by dispersal than those caused by the loss of other species. As top predators are already vulnerable to human-induced disturbances of ecosystems, our results suggest that conservation of top predators should be a priority. Moreover, in most cases time to secondary extinction increases with species richness, indicating that the decay of ecological communities is slower in species-rich than in species-poor communities. Apart from the human-induced disturbances that often force species towards extinction, the environment also varies naturally over time to a greater or lesser extent. Such environmental stochasticity influences the dynamics of populations. In Paper IV we compare the responses of food webs of different sizes to environmental stochasticity.
Species-rich webs are found to be more sensitive to environmental stochasticity. In particular, species-rich webs lose a greater proportion of species than species-poor webs, and they also begin losing species sooner. However, once one species is lost, time to final extinction is longer in species-rich webs than in species-poor webs. We also find that the results differ depending on whether species respond similarly to environmental fluctuations or are entirely uncorrelated in their responses. For a given species richness, communities with uncorrelated species responses run a considerably higher risk of losing a fixed proportion of species than communities with correlated species responses.
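As an illustration of the kind of deletion experiment described above, the sketch below simulates a small generalized Lotka-Volterra food web, removes one species, and counts the secondary extinctions that follow. The web structure, parameter values, and extinction threshold are assumptions chosen for illustration only; they are not the communities or parameterizations used in Papers I-IV.

```python
import numpy as np

def simulate(N0, r, A, t_end=500.0, dt=0.01, ext_threshold=1e-6):
    """Integrate a generalized Lotka-Volterra food web with forward Euler.
    Densities that fall below ext_threshold are treated as extinct."""
    N = N0.copy()
    for _ in range(int(t_end / dt)):
        N = np.maximum(N + dt * N * (r + A @ N), 0.0)
        N[N < ext_threshold] = 0.0
    return N

def secondary_extinctions(N0, r, A, deleted):
    """Delete one species, rerun the dynamics, and count how many of the
    remaining species are subsequently lost (secondary extinctions)."""
    N = N0.copy()
    N[deleted] = 0.0
    final = simulate(N, r, A)
    survivors_before = set(np.flatnonzero(N0)) - {deleted}
    survivors_after = set(np.flatnonzero(final))
    return len(survivors_before - survivors_after)

# Toy four-species web: two basal species, one herbivore, one top predator.
r  = np.array([ 1.0,  1.0, -0.2, -0.2])           # intrinsic growth/mortality
A  = np.array([[-1.0,  0.0, -0.5,  0.0],          # basal species 1
               [ 0.0, -1.0, -0.5,  0.0],          # basal species 2
               [ 0.3,  0.3, -0.1, -0.5],          # herbivore
               [ 0.0,  0.0,  0.4, -0.1]])         # top predator
N0 = np.full(4, 0.5)
print(secondary_extinctions(N0, r, A, deleted=3))  # remove the top predator
```

Repeating the deletion for each species in turn, and for webs of different sizes, gives the kind of per-trophic-level and per-richness comparisons discussed above.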
142

PELICAN : a PipELIne, including a novel redundancy-eliminating algorithm, to Create and maintain a topicAl family-specific Non-redundant protein database

Andersson, Christoffer January 2005 (has links)
The increasing number of biological databases today requires that users be able to search efficiently both across and within individual databases. One of the most widespread problems is redundancy, i.e. duplicated information within sets of data. This thesis aims at implementing an algorithm that is distinguished from other related attempts by using the genomic positions of sequences, instead of similarity-based sequence comparisons, when making a sequence data set non-redundant. Through an automatic updating procedure, the algorithm drastically improves the ability to update a non-redundant database and maintain its topicality. The procedure creates a biologically sound non-redundant data set with accuracy comparable to other algorithms that focus on making data sets non-redundant.
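A minimal sketch of the position-based idea: records that map to (nearly) the same genomic locus are treated as redundant, so no all-against-all sequence comparison is needed. The record fields, overlap rule, and 90% threshold below are assumptions for illustration, not the actual PELICAN pipeline.

```python
from dataclasses import dataclass

@dataclass
class SeqRecord:
    accession: str
    organism: str
    chromosome: str
    start: int        # genomic start coordinate
    end: int          # genomic end coordinate
    sequence: str

def make_nonredundant(records, min_overlap=0.9):
    """Keep one representative per genomic locus: two records are redundant if
    they lie on the same organism/chromosome and their coordinates overlap by
    at least min_overlap of the shorter record."""
    kept = []
    for rec in sorted(records, key=lambda r: (r.organism, r.chromosome, r.start)):
        duplicate = False
        for other in kept:
            if (rec.organism, rec.chromosome) != (other.organism, other.chromosome):
                continue
            overlap = min(rec.end, other.end) - max(rec.start, other.start)
            shorter = min(rec.end - rec.start, other.end - other.start)
            if shorter > 0 and overlap / shorter >= min_overlap:
                duplicate = True          # same locus -> redundant entry
                break
        if not duplicate:
            kept.append(rec)
    return kept

records = [
    SeqRecord("P1", "E. coli", "chr", 100, 400, "MKT..."),
    SeqRecord("P2", "E. coli", "chr", 105, 400, "MKT..."),  # same locus -> dropped
]
print([r.accession for r in make_nonredundant(records)])     # ['P1']
```

Because the test is a cheap coordinate comparison rather than an alignment, rerunning it on an updated source database is inexpensive, which is what makes automatic updating practical.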
143

Multi-State Reliability Analysis of Nuclear Power Plant Systems

Veeramany, Arun January 2012 (has links)
The probabilistic safety assessment of engineering systems involving high-consequence, low-probability events is stochastic in nature due to uncertainties inherent in the time to an event. The event could be a failure, repair, maintenance or degradation associated with system ageing. Accurate reliability prediction that accounts for these uncertainties is a prerequisite for a good risk assessment model. Stochastic Markov reliability models have been constructed to quantify basic events in a static fault tree analysis as part of the safety assessment process. These models assume that a system transits through various states and that the time spent in a state is statistically random. The system failure probability estimates of such models, assuming constant transition rates, are extensively used in industry to obtain failure frequencies of catastrophic events; an example is the core damage frequency in a nuclear power plant where the initiating event is loss of the cooling system. However, the assumption of constant state transition rates for the analysis of safety-critical systems is debatable, because these rates do not properly account for variability in the time to an event. A consequence of this assumption is an overly conservative reliability prediction, leading to the addition of unnecessary redundancy in modified versions of prototype designs, excess spare inventory, and an expensive maintenance policy with shorter maintenance intervals. The reason for this discrepancy is that a constant transition rate is always associated with an exponential distribution for the time spent in a state. The subject of this thesis is the development of mathematical models with improved predictive capabilities that accurately represent the reliability of an engineering system. The semi-Markov process, a generalization of the Markov process, is a well-known stochastic process, yet it has not been well explored in the reliability analysis of nuclear power plant systems. The continuous-time, discrete-state semi-Markov process model describes state transitions through a system of integral equations that can be solved using the trapezoidal rule; the primary objective is to determine the probability of being in each state. This process model allows the time spent in each state to be represented by a suitable non-exponential distribution, thus capturing the variability in the time to an event. When exponential distributions are assumed for all state transitions, the model reduces to the standard Markov model. This thesis illustrates the proposed concepts using basic examples and then develops advanced case studies for nuclear cooling systems, piping systems, digital instrumentation and control (I&C) systems, fire modelling and system maintenance. The first case study, on the nuclear component cooling water system (NCCW), shows that the proposed technique can be used to solve a fault tree involving redundant repairable components to yield the initiating-event probability quantifying loss of the cooling system. The time to failure of the pump train is assumed to follow a Weibull distribution, and the resulting system failure probability is validated using a Monte Carlo simulation of the corresponding reliability block diagram. Nuclear piping systems develop flaws, leaks and ruptures due to various underlying damage mechanisms. This thesis presents a general model for evaluating rupture frequencies of such repairable piping systems.
The proposed model is able to incorporate the effect of ageing-related degradation of piping systems. Time-dependent rupture frequencies are computed, and the influence of inspection intervals on the piping rupture probability is investigated. There is increasing interest worldwide in the installation of digital instrumentation and control systems in nuclear power plants. The main feedwater valve (MFV) controller system is used for regulating the water level in a steam generator. An existing Markov model in the literature is extended to a semi-Markov model to more accurately predict the controller system reliability. The proposed model considers variability in the time to output from the computer to the controller, together with intrinsic software and mechanical failures. State-of-the-art time-to-flashover fire models used in the nuclear industry are either based on conservative analytical equations or on computationally intensive simulation models. The proposed semi-Markov-based case study describes an innovative fire growth model that allows prediction of fire development and containment, including time to flashover. The model considers variability in the time taken to transition from one stage of the fire to another. The proposed model is a reusable framework that can be of importance to product design engineers and fire safety regulators. Operational unavailability risks being over-estimated when a constant degradation rate is assumed for a slowly ageing system. In the last case study, it is shown that variability in the time to degradation has a marked effect on the choice of an effective maintenance policy. The proposed model is able to accurately predict the optimal maintenance interval assuming a non-exponential time to degradation. Further, the model reduces to a binary-state Markov model, equivalent to a classic probabilistic risk assessment model, if the degradation and maintenance states are eliminated. In summary, variability in the time to an event is not properly captured in existing Markov-type reliability models, even though they are stochastic and account for uncertainties. The proposed semi-Markov process models are easy to implement, faster than intensive simulations, and accurately model the reliability of engineering systems.
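A small numerical sketch of the kind of model described above: a two-state (operating/failed) semi-Markov process with a Weibull time to failure and an exponential repair time, whose Markov renewal equations are marched forward in time with the trapezoidal rule. The distributions, parameter values, and grid are assumptions for illustration, not those of the NCCW or other case studies.

```python
import numpy as np

def trapezoid(y, dx):
    """Composite trapezoidal rule on a uniform grid."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Two-state semi-Markov model of a repairable component:
# state 0 = operating, state 1 = failed / under repair.
shape, scale = 2.0, 1000.0        # assumed Weibull time-to-failure parameters (hours)
mu = 1.0 / 24.0                   # assumed exponential repair rate (mean 24 h)

f_fail = lambda t: (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)
surv_fail = lambda t: np.exp(-(t / scale) ** shape)   # P(sojourn in state 0 > t)
f_rep = lambda t: mu * np.exp(-mu * t)

# Markov renewal equations, solved by forward marching:
#   P00(t) = S0(t) + int_0^t f_fail(u) * P10(t - u) du
#   P10(t) =         int_0^t f_rep(u)  * P00(t - u) du
dt, t_end = 1.0, 5000.0
t = np.arange(0.0, t_end + dt, dt)
P00 = np.zeros_like(t)            # P(in state 0 at t | entered state 0 at time 0)
P10 = np.zeros_like(t)            # P(in state 0 at t | entered state 1 at time 0)
P00[0] = 1.0
for n in range(1, len(t)):
    u = t[:n + 1]
    # f_fail(0) = 0 for shape > 1, so the still-unknown P10[n] does not contribute here.
    P00[n] = surv_fail(t[n]) + trapezoid(f_fail(u) * P10[n::-1], dt)
    P10[n] = trapezoid(f_rep(u) * P00[n::-1], dt)

print("unavailability at t_end:", 1.0 - P00[-1])
```

Replacing the Weibull sojourn density with an exponential one recovers the constant-rate Markov result, which is the comparison at the heart of the argument above.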
144

Computational Fluid Dynamics Modeling of Redundant Stent-graft Configurations in Endovascular Aneurysm Repair

Tse, Leonard 11 January 2011 (has links)
During endovascular aneurysm repair (EVAR), if the stent-graft device is too long for a given patient, the redundant (extra) length adopts a convex configuration in the aneurysm. Based on clinical experience, we hypothesize that redundant stent-graft configurations increase the downward force acting on the device, thereby increasing the risk of device dislodgement and failure. This work numerically studies both steady-state and physiologic pulsatile blood flow in redundant stent-graft configurations. Computational fluid dynamics simulations predicted a peak downward displacement force for the zero-, moderate- and severe-redundancy configurations of 7.36, 7.44 and 7.81 N, respectively, for steady-state flow, and 7.35, 7.41 and 7.85 N, respectively, for physiologic pulsatile flow. These results suggest that redundant stent-graft configurations in EVAR do increase the downward force acting on the device, but the clinical consequence depends significantly on device-specific resistance to dislodgement.
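For orientation, the downward (craniocaudal) displacement force on a stent-graft is often approximated with a control-volume momentum balance over the device: pressure and momentum-flux terms at the inlet push the graft downward, while the angled outlet limbs push back. The sketch below is such a back-of-the-envelope estimate with assumed pressure, geometry, and flow values; it is not the CFD model of this thesis and is not expected to reproduce the 7-8 N forces reported above.

```python
import numpy as np

# Rough control-volume estimate of the downward force on a bifurcated stent-graft.
# All values below are assumed for illustration only.
rho = 1060.0                  # blood density, kg/m^3
p = 13000.0                   # mean arterial pressure, Pa (~100 mmHg)

d_in, d_out = 0.024, 0.014    # assumed inlet / iliac-limb diameters, m
A_in = np.pi * d_in**2 / 4
A_out = np.pi * d_out**2 / 4
Q_in = 5e-5                   # assumed inlet flow rate, m^3/s (~3 L/min)
Q_out = Q_in / 2              # split equally between the two outlet limbs

v_in = Q_in / A_in
v_out = Q_out / A_out
theta = np.radians(30.0)      # assumed outlet limb angle from the downward axis

# Inlet pressure and momentum flux act downward; the angled outlets act back
# along their own axes, so only their cos(theta) component opposes the inlet.
F_inlet = p * A_in + rho * Q_in * v_in
F_outlets = 2 * (p * A_out + rho * Q_out * v_out) * np.cos(theta)
print(f"estimated downward force: {F_inlet - F_outlets:.2f} N")
```

A full CFD study, as in this thesis, additionally resolves the curved redundant geometry, wall shear, and pulsatility, which is why its forces differ from such a simple estimate.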
146

Multi-Objective Genetic Programming with Redundancy-Regulations for Automatic Construction of Image Feature Extractors

OHNISHI, Noboru, KUDO, Hiroaki, TAKEUCHI, Yoshinori, MATSUMOTO, Tetsuya, WATCHAREERUETAI, Ukrit 01 September 2010 (has links)
No description available.
147

The Consequences of stochastic gene expression in the nematode Caenorhabditis elegans

Burga Ramos, Alejandro Raúl, 1985- 20 July 2012 (has links)
Genetically identical cells and organisms growing in homogeneous environmental conditions can show significant phenotypic variation. Furthermore, mutations often have consequences that vary among individuals (incomplete penetrance). Biochemical processes such as those involved in gene expression are subject to fluctuations due to their inherently probabilistic nature. However, it is not clear how these fluctuations affect multicellular organisms carrying mutations, or whether stochastic variation in gene expression among individuals could confer any advantage on populations. We have investigated the consequences of stochastic gene expression using the nematode Caenorhabditis elegans as a model. Here we show that inter-individual stochastic variation in the induction of both specific and more general buffering systems combines to determine the outcome of inherited mutations in each individual. We also demonstrate that genetic and environmental robustness are coupled in C. elegans. Individuals with higher induction of the stress response are more robust to the effects of mutations, but they incur a fitness cost, suggesting that variation at the population level could be beneficial in unpredictable environments.
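A toy simulation of the incomplete-penetrance picture sketched above: each individual stochastically induces a buffering (stress-response) system, the mutant phenotype manifests only when induction falls below a threshold, and higher induction carries a fitness cost. All distributions, thresholds, and costs here are invented for illustration; they are not the measurements reported in the thesis.

```python
import random

def simulate_population(n=10_000, threshold=1.0, fitness_cost=0.2, seed=1):
    """Return (penetrance, mean fitness) for a population of mutant individuals
    whose buffering-system induction varies stochastically between individuals."""
    rng = random.Random(seed)
    penetrant, total_fitness = 0, 0.0
    for _ in range(n):
        induction = rng.lognormvariate(0.0, 0.5)      # inter-individual variation
        if induction < threshold:
            penetrant += 1                            # mutation shows its phenotype
            total_fitness += 0.5                      # assumed fitness of affected animals
        else:
            # buffered, but paying an assumed cost proportional to induction
            total_fitness += max(0.0, 1.0 - fitness_cost * induction)
    return penetrant / n, total_fitness / n

print(simulate_population())
```

In such a model, neither uniformly low nor uniformly high induction is best across all environments, which is one way a population could benefit from maintaining variation.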
148

Binary Redundancy Elimination

Fernández Gómez, Manuel 13 April 2005 (has links)
Two of the most important performance limiters in today's processor families come from the memory wall and from handling control dependencies. In order to address these issues, cache memories and branch predictors are well-known hardware proposals that exploit, among other things, temporal memory reuse and branch correlation. In other words, they try to exploit the dynamic redundancy existing in programs. This redundancy comes partly from the way programmers write source code, but also from limitations in the compilation model of traditional compilers, which introduces unnecessary memory and conditional branch instructions. We believe that today's optimizing compilers should be very aggressive in optimizing programs, and should therefore be expected to optimize a significant part of this redundancy away. On the other hand, optimizations performed at link time or applied directly to final program executables have received increased attention in recent years, due to limitations in the traditional compilation model. First, even when performing sophisticated interprocedural analyses and transformations, traditional compilers do not have the opportunity to optimize the program as a whole. A similar problem arises when applying profile-directed compilation techniques: large projects are forced to rebuild every source file to take advantage of profile information. By contrast, it would be more convenient to build the full application, instrument it to obtain profile data, and then re-optimize the final binary without recompiling a single source file. In this thesis we present new profile-guided compiler optimizations for eliminating the redundancy encountered in executable programs at the binary level (i.e., binary redundancy), even though these programs have been compiled with full optimizations using a state-of-the-art commercial compiler. In particular, our Binary Redundancy Elimination (BRE) techniques target both redundant memory operations and redundant conditional branches, which are the most important ones for addressing the performance issues mentioned above in today's microprocessors. These proposals are mainly based on Partial Redundancy Elimination (PRE) techniques for eliminating partial redundancies in a path-sensitive fashion. Our results show that, by applying our optimizations, we achieve a 14% execution-time reduction on our benchmark suite. In this work we also review the problem of alias analysis at the executable-program level, identifying why memory disambiguation is one of the weak points of object-code modification. We then propose several alias analyses to be applied in the context of link-time or executable-code optimizers. First, we present a must-alias analysis to recognize memory dependencies in a path-sensitive fashion, which is used in our optimization for eliminating redundant memory operations. Next, we propose two speculative may-alias data-flow algorithms to recognize memory independencies. These may-alias analyses are based on introducing unsafe speculation at analysis time, which increases alias precision on important portions of code while keeping the analysis reasonably cost-efficient. Our results show that our analyses are very useful for increasing the memory-disambiguation accuracy of binary code, which translates into opportunities for applying optimizations. All our algorithms, for both the analyses and the optimizations, have been implemented within a binary optimizer, which overcomes most of the existing limitations of traditional source-code compilers. Our work therefore also points out the most relevant issues of applying such algorithms at the executable-code level, since most of the high-level information available in traditional compilers is lost.
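As a toy illustration of the redundancy being targeted, the sketch below removes fully redundant loads from a straight-line sequence of pseudo-instructions: a second load from an address that has not been stored to since the first load is replaced by a register move. The instruction format is invented, and the analysis is neither path-sensitive nor alias-aware, unlike the binary-level PRE and alias analyses developed in the thesis.

```python
# Toy redundant-load elimination over a straight-line instruction list.
# Each instruction is a (opcode, destination, source) tuple with symbolic addresses.

def eliminate_redundant_loads(insns):
    available = {}            # address -> register currently holding its value
    out = []
    for op, dst, src in insns:
        if op == "load":                      # dst := MEM[src]
            if src in available:
                out.append(("move", dst, available[src]))   # reuse the earlier load
            else:
                out.append((op, dst, src))
                available[src] = dst
        elif op == "store":                   # MEM[dst] := src
            out.append((op, dst, src))
            available.pop(dst, None)          # kill only this exact address
            # NOTE: with unknown aliasing we would have to kill *all* entries;
            # better memory disambiguation keeps more loads available.
        else:
            out.append((op, dst, src))
    return out

code = [
    ("load",  "r1", "A"),
    ("add",   "r2", "r1"),
    ("store", "B",  "r2"),
    ("load",  "r3", "A"),     # redundant: A has not been written since the first load
]
for insn in eliminate_redundant_loads(code):
    print(insn)
```

The comment about killing entries is where alias analysis pays off: the more precisely independent memory references can be proven, the fewer available loads must be conservatively discarded, and the more redundancy can be eliminated.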
149

Design of Soft Error Robust High Speed 64-bit Logarithmic Adder

Shah, Jaspal Singh January 2008 (has links)
Continuous scaling of the transistor size and reduction of the operating voltage have led to a significant performance improvement of integrated circuits. However, the vulnerability of the scaled circuits to transient data upsets, or soft errors, caused by alpha particles and cosmic neutrons has emerged as a major reliability concern. In this thesis, we have investigated the effects of soft errors in combinational circuits and proposed soft error detection techniques for high-speed adders. In particular, we have proposed an area-efficient 64-bit soft error robust logarithmic adder (SRA). The adder employs the carry-merge Sklansky adder architecture in which carries are generated every 4 bits. Since the particle-induced transient, often referred to as a single event transient (SET), typically lasts for 100-200 ps, the adder uses time redundancy by sampling the sum outputs twice. The sampling instants are set 110 ps apart. In contrast to traditional time redundancy, which requires two clock cycles to generate a given output, the SRA generates an output in a single clock cycle. The sampled sum outputs are compared using a 64-bit XOR tree to detect any possible error. An energy-efficient 4-input transmission-gate-based XOR logic is implemented to reduce the delay and power. The pseudo-static logic (PSL), which has the ability to recover from a particle-induced transient, is used in the adder implementation. In comparison with the space-redundant approach, which requires hardware duplication for error detection, the SRA is 50% more area-efficient. The proposed SRA is simulated for different operands with errors inserted at different nodes at the inputs, the carry-merge tree, and the sum generation circuit. The simulation vectors are carefully chosen such that the SET is not masked by the error-masking mechanisms inherently present in combinational circuits. Simulation results show that the proposed SRA is capable of detecting 77% of the errors. The undetected errors primarily result when the SET causes an even number of errors and when errors occur outside the sampling window.
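A behavioural sketch of the time-redundancy scheme described above: the sum is sampled twice, 110 ps apart, and the two samples are XORed so that a single-event transient that corrupts only one sample is flagged within the same cycle. The transient model and timing below are simplified placeholders, not the transistor-level PSL design of the thesis.

```python
import random

SET_DURATION = 100e-12    # assumed SET width (thesis cites 100-200 ps); shorter than the gap here
SAMPLE_GAP   = 110e-12    # second sample taken 110 ps after the first

def adder_output(a, b, transient_start=None, t=0.0, flipped_bit=0):
    """64-bit sum, with one output bit flipped while a transient is active at time t."""
    s = (a + b) & (2**64 - 1)
    if transient_start is not None and transient_start <= t < transient_start + SET_DURATION:
        s ^= 1 << flipped_bit
    return s

def detect_soft_error(a, b, transient_start=None, flipped_bit=0):
    """Sample the sum twice, SAMPLE_GAP apart, and XOR the two samples.
    A transient shorter than the gap cannot corrupt both samples identically,
    so a nonzero XOR flags the error within a single clock cycle."""
    s1 = adder_output(a, b, transient_start, t=0.0,        flipped_bit=flipped_bit)
    s2 = adder_output(a, b, transient_start, t=SAMPLE_GAP, flipped_bit=flipped_bit)
    return (s1 ^ s2) != 0

a, b = random.getrandbits(64), random.getrandbits(64)
print(detect_soft_error(a, b))                          # no transient -> False
print(detect_soft_error(a, b, transient_start=0.0))     # SET hits only the first sample -> True
```

A transient long enough to span both sampling instants, or one that produces identical corruption in both samples, goes undetected in this scheme, which is consistent with the less-than-perfect detection coverage reported above.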
150

Construction of image feature extractors based on multi-objective genetic programming with redundancy regulations

Watchareeruetai, Ukrit, Matsumoto, Tetsuya, Takeuchi, Yoshinori, Kudo, Hiroaki, Ohnishi, Noboru 11 October 2009 (has links)
No description available.
