191

Role of network topology based methods in discovering novel gene-phenotype associations

Güney, Emre, 1983- 25 September 2012 (has links)
The cell is governed by the complex interactions among various types of biomolecules. Coupled with environmental factors, variations in DNA can cause alterations in normal gene function and lead to a disease condition. Often, such disease phenotypes involve coordinated dysregulation of multiple genes that implicate inter-connected pathways. Towards a better understanding and characterization of the mechanisms underlying human diseases, I present GUILD, a network-based disease-gene prioritization framework. GUILD associates genes with diseases using the global topology of the protein-protein interaction network and an initial set of genes known to be implicated in the disease. Furthermore, I investigate the mechanistic relationships between disease-genes and explain the robustness emerging from these relationships. I also introduce GUILDify, an online and user-friendly tool which prioritizes genes for their association with any user-provided phenotype. Finally, I describe current state-of-the-art systems-biology approaches where network modeling has helped extend our view on diseases such as cancer.
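The abstract does not spell out GUILD's scoring functions, but the general idea it describes, ranking candidate genes by their global network proximity to known disease genes, can be sketched with a personalized PageRank (random walk with restart) on a toy protein-protein interaction network. Gene names, edges and parameters below are illustrative assumptions, not data from the thesis.

```python
# Sketch of network-based gene prioritization from seed genes; GUILD's actual
# scoring functions differ, this only illustrates propagation over a toy network.
import networkx as nx

# Hypothetical interaction network and seed (known disease) genes
edges = [("TP53", "MDM2"), ("MDM2", "CDKN1A"), ("TP53", "BRCA1"),
         ("BRCA1", "RAD51"), ("RAD51", "XRCC3"), ("CDKN1A", "CCND1")]
G = nx.Graph(edges)
seeds = {"TP53": 1.0, "BRCA1": 1.0}

# Personalized PageRank = random walk with restart at the seed genes;
# genes topologically close to many seeds receive higher scores.
scores = nx.pagerank(G, alpha=0.85, personalization=seeds)

for gene, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{gene}\t{score:.3f}")
```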
192

Integration Techniques of Fault Detection and Isolation Using Interval Observers

Meseguer Amela, Jordi 30 June 2009 (has links)
An interval observer has been shown to be a suitable approach to detect and isolate faults affecting complex dynamical industrial systems. Concerning fault detection, interval observation is an appropriate passive robust strategy to generate an adaptive threshold for residual evaluation when model uncertainty is located in the parameters (interval model). In such an approach, the observer gain is a key parameter since it determines the time evolution of the residual sensitivity to a fault and the minimum detectable fault. This thesis shows that the whole fault detection process is governed by the dynamics of the fault residual sensitivity functions and by the time evolution of the adaptive threshold associated with the interval observer. Moreover, these two fault detection properties of the observer depend on the chosen observer gain. As a consequence, the observer gain becomes a tuning parameter that allows enhancing the observer's fault detection performance while avoiding some drawbacks of analytical models, such as the wrapping effect. In this thesis, the effect of the observer gain on fault detection, and how this parameter can avoid some observer drawbacks (e.g. the wrapping effect), are analyzed in depth. One result of this analysis is the determination of the minimum detectable fault function for a given fault type. This function allows introducing a fault classification according to the time evolution of fault detectability: permanently (strongly) detected, non-permanently (weakly) detected, or simply non-detected. In the fault detection part of the thesis, two examples are used to illustrate the derived results: a mineral grinding-classification process and an industrial servo actuator. Concerning the interface between fault detection and fault isolation, this thesis shows that the two modules cannot be considered separately, since the fault detection process has an important influence on the fault isolation result. This influence is due not only to the time evolution of the fault signals generated by the fault detection module but also to the fact that the fault residual sensitivity functions determine which faults affect a given fault signal and the dynamics of that fault signal for each fault. This thesis therefore suggests that the interface between fault detection and fault isolation must consider a set of fault signal properties: the binary property, sign property, fault residual sensitivity property, occurrence order property and occurrence time instant property. Moreover, as a result of the influence of the observer gain on the fault detection stage and on the fault residual sensitivity functions, this thesis demonstrates that the observer gain also plays a key role in the fault isolation module and, when tuned properly, may enhance its performance (i.e. fault distinguishability may be increased). As a last point, this thesis analyzes the timed discrete-event nature of the fault signals generated by the fault detection module and consequently suggests using timed discrete-event models for the fault isolation module. This thesis illustrates that this kind of model enhances the fault isolation result. Moreover, as the monitored system is modelled using an interval observer, this thesis shows how this qualitative fault isolation model can be built from the system's analytical model.
Finally, the proposed fault isolation method is applied to detect and isolate faults in the limnimeters of Barcelona's urban sewer system. Keywords: Fault Detection, Fault Diagnosis, Robustness, Observers, Intervals, Discrete-event Systems.
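A toy numerical sketch of the mechanism described above may help: an interval observer propagates an output envelope that plays the role of the adaptive threshold, the observer gain L shapes both the envelope and the residual dynamics, and a fault is indicated whenever the measurement leaves the envelope. The scalar system, parameter interval and fault below are invented for illustration and are not the thesis' formulation.

```python
# Passive robust fault detection with a scalar interval observer (illustrative only):
#   x[k+1] = a*x[k] + b*u[k],  with a uncertain in [a_min, a_max]  (interval model)
a_min, a_max, b = 0.78, 0.82, 1.0   # assumed parameter interval
L = 0.3                             # observer gain; keep a_min - L >= 0 for monotone bounds
a_true = 0.80                       # "true" plant parameter, inside the interval

x = 0.0                 # true plant state
x_lo = x_hi = 0.0       # interval observer bounds (output = state here)
for k in range(60):
    u = 1.0
    y = x                                        # measured output at step k
    # Adaptive threshold = predicted output envelope [x_lo, x_hi]
    if not (x_lo - 1e-9 <= y <= x_hi + 1e-9):
        print(f"k={k}: measurement outside envelope -> fault indicated")
    # Plant step with an additive actuator fault from k = 40 on
    fault = 0.5 if k >= 40 else 0.0
    x = a_true * x + b * (u + fault)
    # Luenberger-style interval observer update
    x_lo = (a_min - L) * x_lo + b * u + L * y
    x_hi = (a_max - L) * x_hi + b * u + L * y
```

In this run the fault keeps the measurement above the upper bound even in steady state, so it would fall into the permanently (strongly) detected class of the classification above; a smaller fault, or a different gain L, can turn it into a weakly detected or non-detected one, which is exactly the gain-tuning trade-off the abstract discusses.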
193

Crafting a Resilient and Robust Supply Chain: An Empirical Case Study of Volvo CE & Kapsch

Rasmussen, Kristoffer January 2012 (has links)
Globalized markets, resources and means of communication have reshaped the modern world. Today's global supply chains are more agile but also more vulnerable to crises, so appropriate measures and strategies are needed to mitigate the effects of both small and major disruptions. Supply chains are today both directly and indirectly under the influence of crises. The purpose is to study how companies assess and avoid risk in their supply chains and how they mitigate those risks. Two cases have been used to explore the subject. Primary face-to-face interviews were conducted with representatives from the two companies, and secondary documentation from the companies was analyzed. Managers are well aware of the risks embedded in their supply chains and have processes to forecast and handle disruptions. Redundant resources are used to varying extents to prepare for unforeseen events, and techniques such as dual sourcing are used to ensure uninterrupted supply. Even though they see themselves as prepared, the companies have problems mitigating the impact of uncertain events, and these disruptions can have a large negative impact on their supply chains.
194

Robust Automotive Positioning: Integration of GPS and Relative Motion Sensors / Robust fordonspositionering: Integration av GPS och sensorer för relativ rörelse

Kronander, Jon January 2004 (has links)
Automotive positioning systems relying exclusively on the input from a GPS receiver, which is a line-of-sight sensor, tend to be sensitive to situations with limited sky visibility. Such situations include urban environments with tall buildings, parking structures, tree cover, tunnels, and bridges. In these situations, the system has to rely on the integration of relative motion sensors to estimate vehicle position. However, these sensor measurements are generally affected by errors such as offsets and scale factors that cause the resulting position accuracy to deteriorate rapidly once GPS input is lost. The approach in this thesis is to use a GPS receiver in combination with low-cost sensor equipment to produce a robust positioning module. The module should be capable of handling situations where GPS input is corrupted or unavailable. The working principle is to calibrate the relative motion sensors when GPS is available, to improve the accuracy during GPS outages. To fuse the GPS information with the sensor outputs, different models have been proposed and evaluated on real data sets. These models tend to be nonlinear, and have therefore been processed in an Extended Kalman Filter structure. Experiments show that the proposed solutions can compensate for most of the errors associated with the relative motion sensors, and that the resulting positioning accuracy is improved accordingly.
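The following one-dimensional sketch (invented numbers, not the thesis' models or data) illustrates the stated working principle: while GPS fixes are available, an Extended Kalman Filter estimates both the vehicle position and an odometer scale-factor error, so that dead reckoning drifts more slowly during a subsequent GPS outage.

```python
# Minimal 1-D EKF sketch: fuse GPS position fixes with a scaled odometer,
# estimating the scale factor online so the outage drift stays small.
import numpy as np

rng = np.random.default_rng(0)
true_scale, gps_std, odo_std = 1.05, 2.0, 0.05   # invented sensor characteristics

x = np.array([0.0, 1.0])            # state estimate: [position, odometer scale]
P = np.diag([10.0, 0.1])            # initial covariance
Q = np.diag([0.01, 1e-6])           # process noise
R = np.array([[gps_std ** 2]])      # GPS measurement noise
H = np.array([[1.0, 0.0]])          # GPS observes position only

true_pos = 0.0
for k in range(200):
    d = 1.0                                           # true displacement this step
    odo = true_scale * d + rng.normal(0, odo_std)     # odometer reading (scaled)
    true_pos += d

    # Prediction: position advances by (scale * odometer reading); Jacobian F
    F = np.array([[1.0, odo], [0.0, 1.0]])
    x = np.array([x[0] + x[1] * odo, x[1]])
    P = F @ P @ F.T + Q

    # Correction with GPS, only while GPS is available (outage after k = 120)
    if k < 120:
        z = true_pos + rng.normal(0, gps_std)
        innov = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innov
        P = (np.eye(2) - K @ H) @ P

print(f"estimated scale {x[1]:.3f} (true {1/true_scale:.3f}); "
      f"position error after the 80-step outage: {abs(x[0] - true_pos):.2f} m")
```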
195

Regularization Using a Parameterized Trust Region Subproblem

Grodzevich, Oleg January 2004 (has links)
We present a new method for the regularization of ill-conditioned problems that extends the traditional trust-region approach. Such problems arise, for example, in image restoration or the mathematical processing of medical data, and involve very ill-conditioned matrices. The method makes use of the L-curve and its maximum curvature criterion, a recently proposed strategy for finding a good regularization parameter. We describe the method and show its application to an image restoration problem, and we provide MATLAB code for the algorithm. Finally, a comparison to the CGLS approach is given and analyzed, and future research directions are proposed.
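The sketch below is not the parameterized trust-region subproblem method itself, and it uses Python rather than the thesis' MATLAB; it only illustrates the L-curve maximum curvature criterion the abstract refers to, by sweeping the Tikhonov regularization parameter on a synthetic ill-conditioned problem and picking the corner of the curve.

```python
# L-curve corner selection for Tikhonov regularization (illustrative sketch only)
import numpy as np

rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)                      # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                       # synthetic ill-conditioned operator
x_true = V[:, 0] + 0.5 * V[:, 1]
b = A @ x_true + 1e-4 * rng.normal(size=n)     # noisy right-hand side

lams = np.logspace(-10, 0, 200)
rho, eta = [], []                              # log residual norm, log solution norm
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)   # Tikhonov solution
    rho.append(np.log(np.linalg.norm(A @ x - b)))
    eta.append(np.log(np.linalg.norm(x)))

# Curvature of the parametric L-curve (rho(t), eta(t)) with t = log(lambda)
t = np.log(lams)
rho, eta = np.array(rho), np.array(eta)
drho, deta = np.gradient(rho, t), np.gradient(eta, t)
d2rho, d2eta = np.gradient(drho, t), np.gradient(deta, t)
kappa = (drho * d2eta - d2rho * deta) / (drho ** 2 + deta ** 2) ** 1.5
print(f"L-curve corner (max |curvature|) at lambda ~ {lams[np.argmax(np.abs(kappa))]:.1e}")
```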
196

Computational Studies on the Evolution of Metabolism

Ullrich, Alexander 27 February 2012 (has links) (PDF)
Living organisms have, throughout evolution, developed desirable properties, such as the ability to maintain functionality despite changes in the environment or their inner structure, the formation of functional modules, from metabolic pathways to organs, and, most essentially, the capacity to adapt and evolve in a process called natural selection. In the metabolic networks of modern organisms, many key pathways such as the citric acid cycle, glycolysis, or the biosynthesis of most amino acids are common to all of them. Understanding the evolutionary mechanisms behind this development of complex biological systems is an intriguing and important task of current research in biology as well as artificial life. Several competing hypotheses for the formation of metabolic pathways and the mechanisms that shape metabolic networks have been discussed in the literature, each of which finds support from comparative analysis of extant genomes. However, while being powerful tools for the investigation of metabolic evolution, these traditional methods do not allow us to look back far enough in evolution, to the time when metabolism had to emerge and evolve to the form we observe today. To this end, simulation studies have been introduced to discover the principles of metabolic evolution and the sources of emergent metabolic properties. These approaches differ considerably in the realism and explicitness of the underlying models. A difficult trade-off between realism and computational feasibility has to be made, and further modeling decisions on many scales have to be taken into account, requiring the combination of knowledge from different fields such as chemistry, physics, biology and, last but not least, computer science. In this thesis, a novel computational model for the in silico evolution of early metabolism is introduced. It comprises all the components, on different scales, needed to resemble a situation of evolving metabolic protocells in an RNA world. The model therefore contains a minimal RNA-based genetics and an evolving metabolism of catalytic ribozymes that manipulate a rich underlying chemistry. To allow the metabolic organization to escape the confines of the chemical space set by the initial conditions of the simulation, and more generally to allow open-ended evolution, an evolvable sequence-to-function map is used. At the heart of the metabolic subsystem is a graph-based artificial chemistry equipped with built-in thermodynamics. The generation of the metabolic reaction network is realized as a rule-based stochastic simulation, and the necessary reaction rates are calculated on the fly from the chemical graphs of the reactants. The selection procedure among the population of protocells is based on their optimal metabolic yield, which is computed using flux balance analysis. The introduced computational model allows for profound investigations of the evolution of early metabolism and the underlying evolutionary mechanisms. One application in this thesis is the study of the formation of metabolic pathways. To this end, four established hypotheses, namely backward evolution, forward evolution, patchwork evolution and the shell hypothesis, are discussed within this in silico evolution study. The metabolic pathways of the networks evolved in various simulation runs are determined and analyzed in terms of their evolutionary direction.
The simulation results suggest that the seemingly mutually exclusive hypotheses may well be compatible when considering that different processes dominate different phases in the evolution of a metabolic system. Further, it is found that forward evolution shapes the metabolic network in the very early steps of evolution. In later and more complex stages, enzyme recruitment supersedes forward evolution, keeping a core set of pathways from the early phase. Backward evolution can only be observed under conditions of steady environmental change. Additionally, the evolutionary histories of enzymes and metabolites were studied on the network level as well as for single instances, showing a great variety of evolutionary mechanisms at work. The second major focus of the in silico evolutionary study is the emergence of complex system properties, such as robustness and modularity. To this end, several techniques were used to analyze the metabolic systems. The measures for complex properties stem from the fields of graph theory, steady-state analysis and neutral network theory; some are used in general network analysis and others were developed specifically for the purpose of this work. To discover potential sources for the emergence of system properties, three different evolutionary scenarios were tested and compared. The first two scenarios are the same as in the first part of the investigation: one scenario of evolution under static conditions and one incorporating a steady change in the set of "food" molecules. A third scenario was added that also simulates a static evolution but with an increased mutation rate and regular events of horizontal gene transfer between protocells of the population. The comparison of all three scenarios with real-world metabolic networks shows a significant similarity in structure and properties. Among the three scenarios, the two static evolutions yield the most robust metabolic networks; however, the networks evolved under environmental change exhibit their own strategy for robustness, one more suited to their conditions. As expected from theory, horizontal gene transfer and changes in the environment seem to produce higher degrees of modularity in metabolism. The two scenarios develop rather different kinds of modularity: while horizontal gene transfer provides for more isolated modules, the modules of the second scenario are far more interconnected.
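Since the abstract names flux balance analysis as the fitness computation for the protocells, here is a minimal, generic FBA sketch (an invented three-metabolite toy network, not the thesis' chemistry): maximize flux through a biomass reaction subject to the steady-state constraint S v = 0 and flux bounds, solved as a linear program.

```python
# Toy flux balance analysis: maximize biomass flux subject to S v = 0 and bounds.
import numpy as np
from scipy.optimize import linprog

# Metabolites A, B, C; reactions: uptake of A, A -> B, A -> C, B + C -> biomass
#                 uptake  r1   r2  biomass
S = np.array([[    1,    -1,  -1,    0],   # A
              [    0,     1,   0,   -1],   # B
              [    0,     0,   1,   -1]])  # C
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]   # uptake of A limited to 10
c = np.array([0, 0, 0, -1])        # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun)    # 5.0: the uptake is split between both branches
print("flux distribution:", res.x)
```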
197

Optimum Designs for Model Discrimination and Estimation in Binary Response Models

Hsieh, Wei-shan 29 June 2005 (has links)
This paper is concerned with the problem of finding an experimental design for binary response experiments that simultaneously discriminates between two rival models and achieves model robustness by minimizing the maximum bias. The criterion for model discrimination is based on the T-optimality criterion proposed in Atkinson and Fedorov (1975), which maximizes the sum of squares of deviations between the two rival models, while the criterion for model robustness is based on minimizing the maximum probability bias of the two rival models. In this paper we obtain the optimum designs satisfying the above two criteria for some commonly used rival models in binary response experiments, such as the probit and logit models.
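As a hedged numerical illustration of the T-optimality criterion cited above (Atkinson and Fedorov, 1975), the sketch below evaluates the criterion for one fixed candidate design when a probit model is taken as the "true" model and a logit model is the rival: the inner optimization finds the rival's best-fitting parameters, and the criterion is the remaining weighted sum of squared deviations. The design points, weights and parameter values are invented; a T-optimum design would maximize this value over all designs.

```python
# Evaluate the T-criterion of a candidate design for probit vs. logit rivals.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from scipy.special import expit

probit = lambda x: norm.cdf(0.5 + 1.2 * x)    # assumed "true" probit model

def T_criterion(xs, ws):
    """Weighted SSE between the true model and the best-fitting rival (logit)."""
    sse = lambda th: np.sum(ws * (probit(xs) - expit(th[0] + th[1] * xs)) ** 2)
    return minimize(sse, x0=np.array([0.5, 1.2]), method="Nelder-Mead").fun

xs = np.array([-2.0, 0.0, 2.0])        # candidate design points (invented)
ws = np.array([1 / 3, 1 / 3, 1 / 3])   # design weights
print(f"T-criterion value of this design: {T_criterion(xs, ws):.3e}")
```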
198

Adaptive Estimation And Hypothesis Testing Methods

Donmez, Ayca 01 March 2010 (has links) (PDF)
For statistical estimation of population parameters, Fisher's maximum likelihood estimators (MLEs) are commonly used. They are consistent, unbiased and efficient, at any rate for large n. In most situations, however, MLEs are elusive because of computational difficulties. To alleviate these difficulties, Tiku's modified maximum likelihood estimators (MMLEs) are used. They are explicit functions of sample observations and easy to compute. They are asymptotically equivalent to MLEs and, for small n, are equally efficient. Moreover, MLEs and MMLEs are numerically very close to one another. For calculating MLEs and MMLEs, the functional form of the underlying distribution has to be known. For machine data processing, however, such is not the case. Instead, what is reasonable to assume for machine data processing is that the underlying distribution is a member of a broad class of distributions. Huber assumed that the underlying distribution is long-tailed symmetric and developed the so-called M-estimators. It is very desirable for an estimator to be robust and to have a bounded influence function. M-estimators, however, implicitly censor certain sample observations, which most practitioners do not appreciate. Tiku and Surucu suggested a modification to Tiku's MMLEs. The new MMLEs are robust and have bounded influence functions. In fact, these new estimators are overall more efficient than M-estimators for long-tailed symmetric distributions. In this thesis, we have proposed a new modification to MMLEs. The resulting estimators are robust and have bounded influence functions. We have also shown that they can be used not only for long-tailed symmetric distributions but for skew distributions as well. We have used the proposed modification in the context of experimental design and linear regression. We have shown that the resulting estimators and the hypothesis testing procedures based on them are indeed superior to earlier such estimators and tests.
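The following small experiment (my own illustration, not the MMLE formulas from the thesis) makes the "bounded influence function" notion concrete: it computes an empirical sensitivity curve for the sample mean and for a crude trimmed mean as a single contaminating observation is moved further out. The mean's sensitivity grows without bound, while the trimmed estimator's stays flat, which is the qualitative property claimed for the robust estimators above.

```python
# Empirical sensitivity curves: unbounded (mean) vs. bounded (trimmed mean) influence.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(99)          # clean sample

def sensitivity(estimator, z):
    """Scaled change in the estimate when one contaminating observation z is added."""
    return (len(x) + 1) * (estimator(np.append(x, z)) - estimator(x))

trimmed_mean = lambda s: np.mean(np.sort(s)[5:-5])    # crude trimmed mean

for z in (2.0, 10.0, 100.0):
    print(f"outlier at {z:6.1f}:  mean {sensitivity(np.mean, z):8.2f}   "
          f"trimmed mean {sensitivity(trimmed_mean, z):6.2f}")
```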
199

Robust Estimation And Hypothesis Testing In Microarray Analysis

Ulgen, Burcin Emre 01 August 2010 (has links) (PDF)
Microarray technology allows the measurement of thousands of gene expressions simultaneously. As a result, many statistical methods have emerged for identifying differentially expressed genes. Kerr et al. (2001) proposed an analysis of variance (ANOVA) procedure for the analysis of gene expression data. Their estimators are based on the assumption of normality; however, as they noted, the parameter estimates and residuals from this analysis are notably heavier-tailed than normal. Since non-normality complicates the data analysis and results in inefficient estimators, it is very important to develop statistical procedures which are efficient and robust. For this reason, in this work we use the Modified Maximum Likelihood (MML) and Adaptive Maximum Likelihood estimation methods (Tiku and Suresh, 1992) and show that the MML and AMML estimators are more efficient and robust. In our study we compared the MML and AMML methods with widely used statistical analysis methods via simulations and real microarray data sets.
200

Quality-of-Service-Aware Data Stream Processing

Schmidt, Sven 21 March 2007 (has links) (PDF)
Data stream processing, in industry as well as in academia, has gained more and more importance in recent years. Consider the monitoring of industrial processes as an example: sensors are mounted to gather large amounts of data within a short time range. Storing and post-processing these data may occasionally be useless or even impossible. On the one hand, only a small part of the monitored data is relevant; to use the storage capacity efficiently, only a preselection of the data should be considered. On the other hand, the volume of incoming data may simply be too high to be stored in time, or, in other words, the technical effort for storing the data in time would be out of scale. Processing data streams in the context of this thesis means applying database operations to the stream in an on-the-fly manner (without explicitly storing the data). The challenges of this task lie in the limited amount of resources, while data streams are potentially infinite. Furthermore, data stream processing must be fast and the results have to be disseminated as soon as possible. This thesis focuses on the latter issue. The goal is to provide a so-called Quality-of-Service (QoS) for the data stream processing task. Therefore, adequate QoS metrics such as maximum output delay or minimum result data rate are defined. Thereafter, a cost model for obtaining the required processing resources from the specified QoS is presented. On that basis, the stream processing operations are scheduled. Depending on the required QoS and on the available resources, the weight can be shifted among the individual resources and QoS metrics, respectively. Calculating and scheduling resources requires a lot of expert knowledge regarding the characteristics of the stream operations and of the incoming data streams. Often, this knowledge is based on experience, and thus a revision of the resource calculation and reservation becomes necessary from time to time. This leads to occasional interruptions of the continuous data stream processing, of the delivery of the result, and thus of the negotiated Quality-of-Service. The proposed robustness concept supports the user and, by providing more resources, facilitates a decrease in the number of interruptions.
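As a deliberately simple stand-in for the kind of cost model described above (the queueing assumption is mine, not the thesis'), one can treat a stream operator as an M/M/1 server and derive the service rate to reserve from a negotiated maximum output delay, then check the minimum result data rate against the operator's selectivity.

```python
# Toy QoS-to-resource calculation for one stream operator (illustrative assumptions only).
def required_service_rate(input_rate, max_delay):
    # M/M/1 mean sojourn time W = 1 / (mu - lambda)  =>  mu >= lambda + 1 / W_max
    return input_rate + 1.0 / max_delay

def check_qos(input_rate, selectivity, max_delay, min_result_rate):
    mu = required_service_rate(input_rate, max_delay)       # capacity to reserve
    result_rate = selectivity * input_rate                   # output tuples per second
    return mu, result_rate >= min_result_rate

mu, rate_ok = check_qos(input_rate=800.0,        # arriving tuples per second
                        selectivity=0.4,         # fraction passing the operator
                        max_delay=0.01,          # negotiated max output delay (s)
                        min_result_rate=250.0)   # negotiated min result data rate
print(f"reserve capacity for ~{mu:.0f} tuples/s; "
      f"minimum result rate {'met' if rate_ok else 'violated'}")
```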
